\section{Introduction}\label{sec:intro}
Nearby resolved dwarf galaxies in the Local Group (LG) constitute a compelling
sample to address fundamental open questions about galaxy evolution. The
variety of properties in terms of mass, luminosity, surface brightness, gas
content, chemical evolution \citep[e.g.][]{mcconnachie12}, together with the
possibility to resolve them into individual stars, offer a large number of
observables to investigate how small systems evolved since their formation to
the present time. In particular, the ability to derive quantitative star
formation histories (SFH) based on deep photometry reaching below the oldest main
sequence Turn-Off \citep[TO, ][]{gallart05} allows us to put firm constraints
on the time of the onset and of the end of star formation, which opens the
possibility to constrain the physical mechanisms directly affecting the early
stages of dwarf galaxy evolution. On the one hand, it is expected that both
internal (supernova feedback, e.g., \citealt{maclow99}) and external mechanisms
(e.g., ionizing photons from the first sources; \citealt{ricotti05,susa04})
affect the star formation activity, terminating it at an
early epoch. On the other hand, the environment is also expected to play a
significant r\^ole on small systems orbiting massive primaries (tidal
stirring: \citealt{mayer01a}; ram pressure: \citealt{mayer06}; resonances:
\citealt{donghia09}), which may have a substantial effect in stripping mass
from small systems, again leading to an early cessation of the star formation.
In a series of papers, based on deep HST/ACS photometry within the framework of
the LCID collaboration \citep{monelli10a, monelli10b,hidalgo11,skillman14}, we
have shown that star formation generally continues well past $z \sim 6$ in the
mass regime $M_{\star} \ga 10^6 M_{\odot}$. However, during the last ten years,
our knowledge of the LG has been deeply influenced by photometric surveys that
have brought about unexpected discoveries and new questions. First, the number
of known LG galaxies has more than doubled in only a few years. Starting with
the discovery of the first faint dwarfs \citep[also called ``ultra-faint
dwarfs'',][]{willman06}, the number of known satellites of the Milky Way (MW) jumped from
11 (9 bright dSphs plus the Magellanic Clouds) to 37 today. These
faint dwarfs extend the spectrum of galactic properties to a regime of very low
mass, low luminosity, and typically low mean metallicity. They are thought to
have formed stars very early on and for a very short period of time
\citep{brown14}, possibly because cosmic reionization might have inhibited
further star formation in this low mass regime.
All currently known Local Group faint dwarfs fit well within this general trend, apart from
one exception. Leo T, discovered as a stellar over-density in the Sloan Digital
Sky Survey Data Release 5, immediately presented a peculiar combination of low mass ($\sim 10^5
M_{\odot}$, \citealt{ryanweber08}) and young stellar populations ($<$ 200
Myr, \citealt{irwin07}), together with a large fraction of HI gas
\citep{ryanweber08}. Deeper HST data confirmed the extended star formation
activity from the oldest epochs to the present day
(\citealt{clementini12,weisz12}, see also \citealt{dejong08}), which revives
the question of whether cosmic reionization is the actual cause of the star formation
quenching in the faintest dwarfs. Remarkably, two more galaxies recently
discovered have stellar masses smaller than that of Leo T, but their CMDs show
hints of extended star formation until intermediate epochs: Eridanus II
\citep{koposov15, bechtol15} and Hydra II \citep{martin15}, detected in the
Dark Energy Survey and in the Survey for the MAgellanic Stellar History
footprints.
Similarly to what occurred in the MW, the number of known satellites of M31 has
increased considerably in the last few years
\citep{martin09,richardson11,slater11,bell11, martin13a,martin13c}. This was
mainly thanks to the effort of the PAndAS project \citep{mcconnachie09}. The
discovery of And {\sc XVI} was reported in \citet{ibata07}, from MegaCam/CFHT
observations of the M31 surroundings that later would be folded in the PAndAS
survey \citep{mcconnachie09}. And {\sc XVI} is located $\sim 279$ kpc from M31
in the south-east direction. The initial estimate of its luminosity
\citep[$M_V$ = $-$9.2 mag,][]{ibata07} suggested a relatively bright object.
However, a more recent estimate (Martin et al. 2016, submitted) revised it
to a significantly fainter value, $M_V$ = $-$7.6 mag. First estimates based
on the tip of the red giant branch (RGB) indicated a distance
($m-M$)$_0$=23.60$\pm$0.2 mag, corresponding to 525$\pm$50 kpc, though smaller
values have been suggested \citep[23.39$^{+0.19}_{-0.14}$,][]{conn12}.
Spectroscopic follow-up supports a low mean metallicity, close to [Fe/H] = --2
\citep{letarte09,collins14, collins15}. However, the most distinctive
characteristic of And {\sc XVI} is its extended SFH, which
continued to $\sim$6 Gyr ago \citep{weisz14a}. The present work is part of
the ISLAND project (Initial
Star-formation and Lives of the ANDromeda Satellites), which obtained a total
of 111 HST orbits to study six satellites of M31 (GO 13028, 13739): And {\sc I},
And {\sc II}, And {\sc III}, And {\sc XV}, And {\sc XVI}, and And {\sc XXVIII}. In this paper we present a
detailed reanalysis of the data from \citet{weisz14a}, adding information on the
properties of the variable star population and on the spatial variation of the
stellar populations. In particular, \S \ref{sec:data} presents a brief summary
of the ACS data used in this work and a detailed presentation of the And {\sc
XVI} CMD. In \S \ref{sec:vars} we present the discovery and analysis of RR
Lyrae (RRL) stars, and we derive a new distance for And {\sc XVI}
in \S \ref{sec:distance}. \S \ref{sec:sfh} is devoted to
the derivation of the detailed SFH, while \S \ref{sec:radial} presents an
analysis of the variation of the properties of And {\sc XVI} as a function of
radius, both in terms of SFH and CMD morphology. The discussion of these
results (\S \ref{sec:discussion}) and a summary of the conclusions (\S
\ref{sec:conclusions}) close the paper.
\begin{table*}[ht!]
\begin{center}
\caption{Log of the observations}
\begin{tabular}{ccrcc}
\hline
\hline
\textit{Image Name} & \textit{Filter} & \textit{Exp. time} & \textit{Date} & \textit{MJD} \\
& & \textit{$s$} & \textit{(UT start)} & \textit{d-2,400,000} \\
\hline
jc1d09upq & $F475W$ & 1,280 & 2013 Nov 20 12:46:13 & 56616.545139 \\
jc1d09urq & $F814W$ & 987 & 2013 Nov 20 13:10:30 & 56616.560301 \\
jc1d09uuq & $F814W$ & 1,100 & 2013 Nov 20 14:13:37 & 56616.604792 \\
jc1d09uyq & $F475W$ & 1,359 & 2013 Nov 20 14:34:55 & 56616.621076 \\
jc1d10wdq & $F475W$ & 1,280 & 2013 Nov 20 23:55:40 & 56617.010037 \\
jc1d10wfq & $F814W$ & 987 & 2013 Nov 21 00:19:57 & 56617.025199 \\
jc1d10xaq & $F814W$ & 1,100 & 2013 Nov 21 01:23:05 & 56617.069701 \\
jc1d10xeq & $F475W$ & 1,359 & 2013 Nov 21 01:44:23 & 56617.085986 \\
jc1d11ywq & $F475W$ & 1,280 & 2013 Nov 21 09:29:27 & 56617.408499 \\
jc1d11yyq & $F814W$ & 987 & 2013 Nov 21 09:53:45 & 56617.423673 \\
jc1d11z1q & $F814W$ & 1,100 & 2013 Nov 21 10:56:55 & 56617.468199 \\
jc1d11z5q & $F475W$ & 1,359 & 2013 Nov 21 11:18:13 & 56617.484483 \\
jc1d12a2q & $F475W$ & 1,280 & 2013 Nov 21 15:52:01 & 56617.674172 \\
jc1d12a5q & $F814W$ & 987 & 2013 Nov 21 16:16:18 & 56617.689334 \\
jc1d12a9q & $F814W$ & 1,100 & 2013 Nov 21 17:23:30 & 56617.736660 \\
jc1d12zzq & $F475W$ & 1,359 & 2013 Nov 21 17:44:48 & 56617.752522 \\
jc1d13b9q & $F475W$ & 1,280 & 2013 Nov 21 23:50:12 & 56618.006245 \\
jc1d13bbq & $F814W$ & 987 & 2013 Nov 22 00:14:29 & 56618.021407 \\
jc1d13caq & $F814W$ & 1,100 & 2013 Nov 22 01:17:39 & 56618.065933 \\
jc1d13ceq & $F475W$ & 1,359 & 2013 Nov 22 01:38:57 & 56618.082218 \\
jc1d14f2q & $F475W$ & 1,280 & 2013 Nov 22 10:59:39 & 56618.471143 \\
jc1d14f4q & $F814W$ & 987 & 2013 Nov 22 11:23:56 & 56618.485872 \\
jc1d14f7q & $F814W$ & 1,100 & 2013 Nov 22 12:27:08 & 56618.530854 \\
jc1d14fbq & $F475W$ & 1,359 & 2013 Nov 22 12:48:26 & 56618.547139 \\
jc1d14feq & $F475W$ & 1,360 & 2013 Nov 22 14:02:46 & 56618.598771 \\
jc1d14fiq & $F814W$ & 1,100 & 2013 Nov 22 14:28:23 & 56618.615056 \\
\hline
\hline
\end{tabular}
\end{center}
\label{tab:tab01}
\end{table*}
\section{Data}\label{sec:data}
\begin{figure}
\includegraphics[width=9cm]{f01.eps}
\caption{Stacked image of the ACS field on And {\sc XVI} (North is up, East
is left). A large number of extended
sources are clearly visible, which prompted a careful selection of the
photometry list. The cross marks the center of And {\sc XVI}, while the solid
and dashed lines show the ellipses corresponding to r$_e$=1.38r$_h$=1.38$\arcmin$
and r$_e$=5r$_h$=5.00$\arcmin$, where r$_h$ is the half-light radius. }
\label{fig:image}
\end{figure}
\begin{figure}
\includegraphics[width=9cm]{f02.eps}
\caption{Spatial distribution of bona fide stellar sources in the ACS field around
And {\sc XVI}. The center of the galaxy is marked by the black cross. The two
ellipses are the same as in Figure \ref{fig:image}. The global SFH
has been derived selecting sources within the dashed line. The location of
the detected RRL variable stars is shown: red triangles mark the three RR$ab$
stars, while blue circles represent the five RR$c$ stars. The green plus marks
the position of the peculiar, faint RRL star V0.}
\label{fig:map}
\end{figure}
The data set used here is the same presented in \citet{weisz14a}, and
consists of 13 ACS images in each of the $F475W$ and $F814W$ passbands. Parallel
photometric reductions were conducted using both DOLPHOT and DAOPHOT/ALLFRAME
as was done for the LCID project \citep[e.g.,][]{monelli10b}. Here we have
chosen to use the DAOPHOT/ALLFRAME photometry as a matter of convenience.
The calibration to the standard VEGAMAG system was done adopting the updated
zero point from the instrument web page. Figure \ref{fig:image} shows a
stacked drizzled image, where a large number of background extended objects
is evident. In particular, note the edge-on galaxy apparently interacting
with the big elliptical to the West, and the group of late-type galaxies in
the North-East. The two ellipses correspond to elliptical radii $r_e$=1.38$\arcmin$,
and 5.00$\arcmin$, and will be used in \S \ref{sec:radial} to investigate the
radial properties. Figure \ref{fig:map} shows the spatial distribution of the
sources in the final catalog. Large colored symbols mark the positions of the nine
discovered RRL stars (see \S \ref{sec:rrl}).
\subsection{CMD analysis}\label{sec:cmd}
Figure \ref{fig:cmd} shows the ($F475W - F814W$, $F814W$) CMD of And {\sc
XVI}. In the construction of Figure \ref{fig:cmd} we adopted a reddening of
E($B$-$V$)=0.06 \citep{schlafly11} and a distance modulus of $(m-M)_0 =$ 23.72 mag.
The latter value has been derived from the RRL stars, as detailed in
\S \ref{sec:distance_rrl}. The RRLs discovered in And {\sc XVI} are plotted as
large symbols and will be discussed in \S \ref{sec:vars}.
\begin{figure*}[!t]
\includegraphics[width=17cm,height=15cm]{f03.eps}
\caption{{\em Left -} The CMD of And {\sc XVI}, spanning from the tip of the RGB to well
below the main sequence TO. Colored symbols show the RRL stars, with the same color code
as in Figure \ref{fig:map}. {\em Center - } The same CMD with selected isochrones
from the BaSTI database superimposed, for the labeled ages and metallicities. The two selected isochrones
completely bracket the TO region and the color spread of the RGB, suggesting a significant spread
both in age and in metallicity. The black line shows the ZAHB for Z=0.0003, which nicely
reproduces the lower envelope of the HB stars. {\em Right - }CMD of the outermost
region in the field of view, for r$_e >$ 5 \arcmin, where the majority of
the detected sources are unresolved background galaxies polluting the sample. }
\label{fig:cmd}
\end{figure*}
A photometric selection was applied according to the sharpness parameter provided
by DAOPHOT ($|sharp| <$ 0.3). Given the small number of And {\sc XVI} stars,
and the heavy contamination from background galaxies, we performed a further
check on the stacked image, removing a few hundred sources associated with
extended objects and spikes of heavily saturated field stars. In the end, we
retained 5,714 bona fide stellar sources within the 5.0\arcmin
ellipse. These are shown in the CMD of the left panel, where the typical
features of a predominantly old stellar population clearly appear. The red
giant branch (RGB) spans more than five magnitudes, from the tip at $F814W
\approx$ 20 mag down to $F814W \approx$ 25.5. The horizontal branch (HB) has
a predominantly red morphology with a well populated red part, concentrated
close to ($F475W - F814W$, $F814W$) $\sim$ (1.2, 23.5) mag, which is well
separated from the RGB, suggesting a limited metallicity spread. On the
other extreme, the HB extends to the blue reaching well beyond the RRL
instability strip to $F475W - F814W$ $\sim$ 0.2 mag. Overall, we
derive an HB morphology index $=-$0.64\footnote{The HB index was introduced
by \citet{lee90} and is defined as
HBR = ($B-R$)/($B+V+R$), where $B$ and $R$ are the numbers of HB
stars bluer and redder than the instability strip, and $V$ is the total
number of RR Lyrae stars.}.
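For illustration, the index can be evaluated with purely hypothetical star counts (the individual $B$ and $R$ counts are not quoted in the text; the numbers below are chosen only to reproduce the measured value, with $V=8$ matching the member RRL stars):
\begin{displaymath}
\mathrm{HBR} = \frac{B-R}{B+V+R} = \frac{5-37}{5+8+37} = -0.64,
\end{displaymath}
a negative value indicating, as expected, that red HB stars largely outnumber the blue ones.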
The central panel of the same figure shows the comparison with selected
isochrones from the BaSTI\footnote{ http://basti.oa-teramo.inaf.it/index.html}
stellar evolution library \citep{pietrinferni04, pietrinferni09}. In particular,
the red and green lines represent an old (13 Gyr, Z=0.0001) and an
intermediate-age (6 Gyr, Z=0.0003, \citealt{castellani95}) population. These two
isochrones bracket both the RGB and the main sequence Turn-Off region well.
Interestingly, this suggests that the stellar populations in And {\sc XVI} are
characterized by a considerable age spread, but a small range of
metallicities.
Finally, the right panel presents the sources detected in the outermost region
of the field of view, for r$_e >$ 5\arcmin. The same old isochrone as in the
central panel is shown. Roughly 400 sources are present in this diagram, but no
obvious features appear. Many of the detected objects present colors redder than
the MS stars of And {\sc XVI}, suggesting that they are unresolved background
galaxies. Nevertheless, we cannot rule out the possibility that some And {\sc
XVI} stars are still present in this region, which will in any case be excluded
from the SFH analysis.
\begin{table*}[ht!]
\begin{center}
\caption{Variable Stars Properties}
\begin{tabular}{ccccccccccccccc}
\hline
\hline
\textit{Name} & \textit{R.A.} & \textit{Dec.} & \textit{type} & \textit{P} & \textit{m$_{F475W}$} & \textit{A$_{F475W}$} & \textit{m$_{F814W}$} & \textit{A$_{F814W}$} & \textit{m$_{B}$} & \textit{A$_{B}$} & \textit{m$_{V}$} & \textit{A$_{V}$} & \textit{m$_{I}$} & \textit{A$_{I}$} \\
\textit{ } & \textit{hr min sec} & $\arcdeg$ $\prime$ $\arcsec$ & \textit{ }& \textit{d} & \textit{mag} & \textit{mag} & \textit{mag} & \textit{mag} & \textit{mag} & \textit{mag} & \textit{mag} & \textit{mag} & \textit{mag} & \textit{mag} \\
\hline
V0 & 00:59:24.38 & 32:22:33.14 & $ab$ & 0.622 & 25.582 & 0.889 & 24.682 & 0.431 & 25.727 & 0.990 & 25.244 & 0.767 & 24.670 & 0.446 \\
V1 & 00:59:25.33 & 32:22:16.09 & $c$ & 0.358 & 24.568 & 0.641 & 23.875 & 0.399 & 24.681 & 0.725 & 24.328 & 0.538 & 23.857 & 0.397 \\
V2 & 00:59:27.97 & 32:22:57.56 & $c$ & 0.391 & 24.560 & 0.557 & 23.831 & 0.284 & 24.668 & 0.616 & 24.308 & 0.454 & 23.812 & 0.290 \\
V3 & 00:59:29.43 & 32:22:25.88 & $c$ & 0.350 & 24.569 & 0.541 & 23.892 & 0.358 & 24.667 & 0.573 & 24.325 & 0.480 & 23.873 & 0.361 \\
V4 & 00:59:30.84 & 32:22:13.99 & $ab$ & 0.617 & 24.623 & 0.840 & 23.751 & 0.589 & 24.741 & 0.969 & 24.313 & 0.810 & 23.735 & 0.600 \\
V5 & 00:59:34.27 & 32:21:59.43 & $ab$ & 0.638 & 24.594 & 1.200 & 23.747 & 0.655 & 24.717 & 1.182 & 24.323 & 0.978 & 23.731 & 0.667 \\
V6 & 00:59:36.07 & 32:23:16.33 & $c$ & 0.399 & 24.586 & 0.444 & 23.870 & 0.216 & 24.694 & 0.478 & 24.342 & 0.390 & 23.851 & 0.217 \\
V7 & 00:59:37.51 & 32:22:10.07 & $c$ & 0.288 & 24.668 & 0.300 & 24.150 & 0.161 & 24.736 & 0.316 & 24.495 & 0.251 & 24.134 & 0.157 \\
V8 & 00:59:38.10 & 32:23:15.76 & $ab$ & 0.651 & 24.608 & 0.673 & 23.783 & 0.427 & 24.734 & 0.719 & 24.327 & 0.574 & 23.767 & 0.432 \\
\hline
\hline
\end{tabular}
\end{center}
\label{tab:tab02}
\end{table*}
\subsection{Blue Straggler stars}\label{sec:bss}
The CMD clearly shows a plume of objects bluer and brighter than the old MSTO,
between $F814W \sim 25.5$ mag and $F814W \sim 27.5$ mag. They are most likely
Blue Straggler stars (BSSs) formed by primordial binary stars, as commonly
found in many dSphs \citep{mapelli07, mapelli09, monelli12a, santana13}. On the
other hand, stars in that region of the CMD might be genuine young objects, with
ages in the range between $\sim$1 and $\sim$ 3 Gyr. The blue line in Figure
\ref{fig:cmd} represents a metal-poor isochrone of 2.5 Gyr, which provides a
fair agreement with the observed sequence. If And {\sc XVI} hosted such a young
population, one would expect to find it spatially concentrated in the innermost
region of the galaxy, as commonly observed in LG dwarfs. Figure \ref{fig:cumul}
shows the cumulative distribution of stars in the blue plume, the RGB and the
HB. Within the errors, they are identical as a function of elliptical radius. This
indirectly supports the inference that the stars in the blue plume are BSSs and
not a young population. The plume of blue objects causes a minor peak in
the SFH between 2 and 3 Gyr ago (\S \ref{sec:sfh_global}),
which contributes $\sim$3\% of the stellar mass. Both the age range and the
mass percentage are consistent with those estimated in Cetus and Tucana
\citep{monelli12a}. This again indirectly supports the BSS hypothesis.
\begin{figure}
\vspace{-3cm}
\includegraphics[width=9cm]{f04.eps}
\caption{Normalized cumulative radial distribution of stars in the RGB, HB,
and the candidate BSSs. Within the errors, no significant differences
are detected.}
\label{fig:cumul}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=17cm]{f05.eps}
\vspace{-8cm}
\caption{The light curves of the 9 discovered RRL stars. In each panel, black
and grey points refer to the $F475W$ and $F814W$ data. For the sake of
clarity, $F814W$ was shifted by $-$0.6 mag to avoid overlap. Open points are
excluded from the LC fit due to large photometric errors. Note that V0 is
significantly fainter than the other 8 RRL stars, compatible with an M31
field variable.}
\label{fig:lcs}
\end{figure*}
\section{Variable stars}\label{sec:vars}
Our observational strategy was designed for optimal time sampling of short
period ($\lesssim$1 d) variable stars such as
RRL and Anomalous Cepheids (ACs). All the observations were executed within
$\sim$2.1 days, and were organized in six visits, five of two and one of three
orbits. Moreover, each orbit was split into one $F475W$ and one $F814W$
exposure, and a sequence $F475W$-$F814W$-$F814W$-$F475W$ was observed in each
group of two visits. This allowed a larger time difference between the two
images at shorter wavelength, where the amplitude is larger. Table 1 reports
the observation log, listing the image name, filter, exposure time, starting
date of observation and modified Julian date at mid exposure.
\subsection{RR Lyrae stars}\label{sec:rrl}
Candidate variable stars were identified following the same approach adopted for
the galaxies of the LCID project \citep{bernard09,bernard10,bernard13}. In
particular, we used the variability index introduced by \citet{stetson96b}. The
light curves of selected candidates were individually visually inspected, and
nine variables were confirmed. Given their pulsational properties and the
location on the CMD, we classify all of them as RRL stars. The $F475W$ and
$F814W$ magnitudes were re-calibrated to the Johnson $BVI$ system using the same
relations adopted in \citet{bernard09}. Table 2 summarizes the properties of the
confirmed variables, which are named in order of increasing right ascension.
Their position in the CMD is shown in Figure \ref{fig:cmd}: red triangles and
blue circles represent RR$ab$ and RR$c$ type stars, respectively.
Interestingly, one variable (V0) is significantly fainter (green plus), by
$\sim$0.8 mag in the $F814W$ band. Figure \ref{fig:lcs} presents the light
curves of the nine variables. Despite the small number of phase points, the time
sampling chosen when preparing the observations provides a fairly homogeneous
coverage of the light curves. In particular, we do not find any obvious problem
with V0 that may account for the fainter magnitude, though the light curve is
admittedly noisy. We have checked whether a significantly higher metal content
may be responsible for such a lower luminosity. Adopting the
luminosity-metallicity relation by \citet{clementini03}, we find that an
approximately solar [Fe/H] would be required to explain such a large magnitude
difference. Such a large metallicity spread within the population of stars able
to form RRL in And {\sc XVI} looks unlikely, in particular if we take into
account the relatively small range estimated both spectroscopically
\citep{letarte09, tollerud12} and with the SFH (see \S \ref{sec:sfh_global}).
Alternatively, we can assume that V0 does not belong to And {\sc XVI}, and the
magnitude difference is due to a distance effect. Assuming a metal content
of [Fe/H]=$-$1.9 and the metallicity-luminosity relation from
\citet[][see \S \ref{sec:distance_rrl}]{clementini03}, we derive a distance
difference between V0 and the rest of the variables of the order of $\sim$290 kpc.
Given that And {\sc XVI} is located $\sim$200 kpc closer than M31, this means
that V0 is compatible with being located $\approx$100 kpc beyond M31, but
still well within its virial radius, thus being a possible candidate M31 halo
star \citep{ibata14a}.
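As a consistency check of this scenario, the distance offset follows directly from the magnitude difference. Using the mean $V$ magnitudes in Table 2 (a rough sketch; the quoted $\sim$290 kpc comes from the full luminosity-metallicity analysis):
\begin{displaymath}
\Delta m_V \simeq 25.24 - 24.35 \simeq 0.90~\mathrm{mag}, \qquad
d_{V0} \simeq 554~\mathrm{kpc}\times 10^{0.90/5} \simeq 840~\mathrm{kpc},
\end{displaymath}
that is, $\sim$285 kpc beyond And {\sc XVI}, consistent with the value quoted above.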
Figure \ref{fig:bailey} shows the period-amplitude (Bailey) diagram for the
detected variable stars. The dotted and dashed lines mark the loci of Oosterhoff
I and Oosterhoff II globular clusters, respectively, from \citet{cacciari05}.
The solid line is the locus defined for RR$c$ stars, from \citet{kunder13c}. The
three And {\sc XVI} RR$ab$ stars occupy the region intermediate to the two
curves. However, both the mean period of RR$ab$ stars ($<P_{ab}>$=0.636 d) and
of the RR$c$ type ($<P_c>$=0.357 d) are close to the typical values for the
Oosterhoff II type stellar systems. Note that the ratio between the number of
RR$c$ and RR$ab$ is unusually large \citep{catelan09}, and And {\sc XVI} is the
only dwarf known with more RR$c$ than RR$ab$. This finding is particularly
intriguing given the red morphology of the HB which would favor the sampling of
the red part of the instability strip, where the RR$ab$ are located. However, it
might simply be related to the small total number of RRL stars.
Alternatively, this effect could be related to the low metallicity of the
oldest stars, such as the RR Lyrae stars, which would be preferentially located
in the blue part of the HB. However, we note that in other M31 satellites with
similarly small numbers of RRL variables and bluer HB morphologies, such as And XI
and And XIII \citep{yang12}, the number of RR$ab$ stars is larger than that of
RR$c$ stars (10 vs 5 and 8 vs 1, respectively).
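For reference, these mean periods can be recomputed from Table 2, excluding the likely non-member V0; the last digit may differ slightly from the quoted values because the tabulated periods are rounded:
\begin{displaymath}
\langle P_{ab}\rangle = \frac{0.617+0.638+0.651}{3} \simeq 0.635~\mathrm{d}, \qquad
\langle P_{c}\rangle = \frac{0.358+0.391+0.350+0.399+0.288}{5} \simeq 0.357~\mathrm{d}.
\end{displaymath}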
\subsection{Anomalous Cepheids}\label{sec:ac}
We report that we did not discover any AC in the surveyed
area of And {\sc XVI}. These pulsating variables, present only in
metal-poor \citep[Z$<$0.0006,][]{fiorentino06} populations, are centrally
He-burning stars typically $\sim$1 mag brighter than RRL stars. They can form
through two different channels: {\em i)} single, evolved stars of mass 1.3
$\lesssim$ M $\lesssim$ 2.2 M$_{\odot}$, therefore younger than $\sim$1 Gyr;
{\em ii)} coalescent binary stars evolved after the BSS phase. Despite
the fact that ACs
have been observed in many dSph galaxies (Sculptor: \citealt{kaluzny95};
Fornax: \citealt{bersier02}; Carina: \citealt{dallora03,coppola13}, Draco:
\citealt{kinemuchi08}; Cetus and Tucana: \citealt{bernard09}), the
non-detection in And {\sc XVI} is not surprising, and we ascribe it to its low
mass. First, the lack of ACs agrees with the lack of recent star formation,
thus excluding the first formation channel.
Second, the small number of stars populating the blue plume of BSSs ($\sim$230)
implies that very few evolved stars of this population are expected. These,
due to their mass, tend to occupy the red part of the HB, at temperatures
lower than those of the instability strip, and a few such stars are indeed visible above
the red HB, at $F814W\sim$23 mag. Finally, adopting the relation between the
frequency of ACs and the luminosity of the host galaxy, discovered by
\citet{mateo95} and updated by \citet{fiorentino12b}, we estimate that
1$\pm$1 ACs are expected in the surveyed area, in agreement with the
current observations.
\begin{figure}
\includegraphics[width=9cm]{f06.eps}
\caption{Period-Amplitude (Bailey) diagram for the nine detected variables. The dotted
and dashed lines are the loci of RRL stars in Oosterhoff I and Oosterhoff II type
globular clusters, from \citet{cacciari05}. The solid line is the analogous curve for RR$c$
type stars, from \citet{kunder13c}. }
\label{fig:bailey}
\end{figure}
\section{Distance estimate}\label{sec:distance}
\subsection{RR Lyrae distance estimate}\label{sec:distance_rrl}
Pulsational properties of RRL stars can be used to derive a robust estimate
of the distance. In the following we will use three different methods. In the
analysis, we did not include the faint V0 RRL star.
{\em a)} First, we adopt the relation between the intrinsic luminosity, $M_V$, and
the metallicity. For [Fe/H] below $-$1.6 we assume two linear
relations\footnote{the $F475W$ and $F814W$ magnitudes were re-calibrated to the
Johnson $BVI$ system using the same relations adopted in \citet{bernard09}}:
\begin{equation}
M_V(RR) = 0.866 (\pm0.085) + 0.214(\pm0.047)\hbox{[Fe/H]}
\end{equation}
from \citet{clementini03} and
\begin{equation}
M_V(RR) = 0.72 (\pm0.07) + 0.18(\pm0.07)\hbox{\rm [Fe/H]}
\end{equation}
from \citet{bono03}. We assume [Fe/H] = $-$2, in agreement with
the available spectroscopic measurements \citep{letarte09,collins14, collins15}. For
this metal content, the \citet{clementini03} and the \citet{bono03} relations
provide absolute magnitudes of $M_V$ = 0.438 and 0.360 mag, respectively.
We derive absolute distance moduli for And {\sc XVI}, corrected for
extinction, of $(m-M)_0$ = 23.72$\pm$0.09 mag and 23.79$\pm$0.08 mag,
respectively, corresponding to 554 and 572 kpc. We note that a change in the
metal content by 0.2 dex affects the distance estimates by $\sim$0.04 mag.
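Explicitly, the first of these estimates can be sketched as follows (the mean apparent magnitude of the eight member RRL stars is taken from Table 2, and a standard $A_V = 3.1\,E(B-V)$ is assumed):
\begin{displaymath}
M_V(RR) = 0.866 + 0.214\times(-2.0) = 0.438~\mathrm{mag},
\end{displaymath}
\begin{displaymath}
(m-M)_0 \simeq \langle m_V \rangle - A_V - M_V(RR) \simeq 24.35 - 3.1\times0.06 - 0.438 \simeq 23.72~\mathrm{mag}.
\end{displaymath}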
{\em b)} It is well established that RRL stars obey a period-luminosity-metallicity
relation in the near-infrared, which can be expressed in the form
\begin{equation}
M = a + b\,\mathrm{[Fe/H]} + c\,\log P.
\end{equation}
We adopt here the most recent theoretical relations from Marconi et al. (2015),
both for the Wesenheit W($I$,$B-I$) and W($I$,$V-I$) magnitudes.
We used here the full sample of RRL stars after fundamentalizing the
RR$c$ type by adding 0.127 to the
logarithm of their period. We calculated the Wesenheit apparent magnitudes
of each star, and adopting these relations we derived the true distance modulus.
Assuming [Fe/H] $\sim$ -2.3 dex (Z $\sim$ 0.0001) the two relations provide
$(m-M)_0$ = 23.74$\pm$0.03 mag and $(m-M)_0$ = 23.77$\pm$0.06 mag,
respectively. A slightly larger metallicity, [Fe/H] $\sim$ $-$1.8 dex (Z
$\sim$ 0.0003), shortens the derived distance by a few hundredths of a magnitude:
$(m-M)_0$ = 23.68$\pm$0.03 mag and $(m-M)_0$ = 23.70$\pm$0.03
mag.
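As a concrete example of the fundamentalization step, for the shortest-period RR$c$ star V7 ($P=0.288$ d, Table 2):
\begin{displaymath}
\log P_F = \log(0.288) + 0.127 = -0.541 + 0.127 = -0.414, \qquad P_F \simeq 0.39~\mathrm{d}.
\end{displaymath}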
{\em c)} An independent method to derive the distance based on the RRL
properties was introduced by \citet{caputo00b} and takes advantage of the
period-luminosity-metallicity relation at the first overtone blue edge of the
instability strip. Ideally, this method works well if the blue side of the
instability strip is well sampled, which is not the case for the current data
set. However, the few RR$c$ type stars found can provide an upper limit to
the distance. Applying the relations from \citet{caputo00b} to the
shortest period star, we derive a distance modulus of 23.83$\pm$0.07 mag,
again assuming [Fe/H]=$-$2.
Overall, these different methods applied to the RRL stars sample of And {\sc
XVI} provide consistent results on its distance. For consistency with
previous analyses of isolated galaxies within the framework of the LCID
project, we will adopt the distance derived with the $M_V$-[Fe/H] relation by
\citet{clementini03}, $(m-M)_0$= 23.72$\pm$0.09 mag, to derive the SFH in
\S \ref{sec:sfh}.
\begin{figure}
\includegraphics[width=9cm]{f07.eps}
\caption{{\em Left - }Color-magnitude diagram of And {\sc XVI} showing the
RGB stars (asterisks) used to detect the RGB tip. Two isochrones are also
over-plotted: Z=0.0001, t=13 Gyr (red line), Z=0.001, t=6 Gyr (blue line). We
assumed $(m-M)_0 = 23.72$ mag and E(B-V) = 0.06 mag. {\em Right -
}Luminosity function of the RGB stars. The three arrows mark the expected positions
of the RGB bump for three isochrones of 10 Gyr: Z=0.0001, Z=0.0006, and Z=0.001.}
\label{fig:tip}
\end{figure}
\subsection{RGB tip distance estimate}\label{sec:distance_rgb}
\citet{ibata07} estimated the distance of And {\sc XVI} to be (m-M)$_0$ =
23.6$\pm$0.2 mag (525 kpc), based on the position of the tip of the RGB.
A more recent study by \citet{conn12}, based on a more sophisticated analysis
of the same feature,
suggested a slightly shorter distance, (m-M)$_0$ = 23.39$^{+0.19}_{-0.14}$ (476
kpc). We note that the distance estimates based on the RRL stars are systematically
larger than those based on the tip of the RGB. However, they are still in
agreement, within the error bars, with the value provided by \citet{ibata07},
and only in marginal agreement, at the 2$\sigma$ level, with the measurement
by \citet{conn12}.
Figure \ref{fig:tip} summarizes our attempt to derive a distance to And {\sc
XVI} based on the tip of the RGB as detected in the ACS data. The left panel shows
a zoom of the CMD in the RGB region, and the right one presents the luminosity
function of RGB stars in the $F814W$ band. These are highlighted by big
asterisks in the left panel. The plot clearly shows that the region of the RGB
tip is heavily under-sampled, with only 9 stars detected in the brightest half
magnitude. This is far from the at least 50 stars recommended by
\citet{madore95} to derive a distance modulus with 0.1 mag uncertainty. This is
also supported by the comparison with theoretical isochrones (red line:
Z=0.0001, t=13 Gyr; blue: Z=0.001, t=6 Gyr). Assuming (m-M)$_0$=23.72 mag (from
the RRL estimate, see \S \ref{sec:distance_rrl}), it is evident that the
brightest portion of the RGB is devoid of observed stars. Note that a shorter distance
modulus would move the isochrones to brighter apparent magnitudes, thus
worsening the problem. Given the small contamination from both And {\sc XVI}
AGB and foreground field stars, we can set an upper limit to the distance,
assuming that the brightest observed star is representative of the tip. This
has magnitude $F814W$ = 20.116 mag. The F814W absolute magnitude of the RGB tip
shows a mild dependence on the metallicity in the metal regime appropriate for
the stars in And {\sc XVI}. In more detail, theoretical predictions based on
BaSTI stellar models show that $M_{F814W}^{tip}$ is equal to -4.087 at
Z=0.0001 and to -4.166 for Z=0.001. When combining these model predictions with
an extinction estimate of $A_{F814W}=0.11$ mag, we obtain a distance modulus
upper limit ranging from 24.09 to 24.17 mag, i.e., in the range 657--682 kpc. A
visual inspection of the CMD from \citet{ibata07} reveals that at least one very
bright star is missing in our photometry, possibly because it is outside our
field of view. This is probably what causes the difference in the derived
distance using the same approach. In any case, it is evident that the poor
statistics of the RGB star counts strongly hamper the use of
the RGB tip method for a robust distance estimate.
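Explicitly, the upper limits quoted above follow from
\begin{displaymath}
(m-M)_0 \leq m_{F814W}^{\rm bright} - A_{F814W} - M_{F814W}^{\rm tip} = 20.116 - 0.11 + 4.087 = 24.09~\mathrm{mag}
\end{displaymath}
for Z=0.0001, and similarly $20.116-0.11+4.166=24.17$ mag for Z=0.001.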
In passing, we note that for the same reason no clear detection of the RGB bump
is possible: the luminosity function in the right panel of Figure \ref{fig:tip}
does not show any clear evidence of it. The three overplotted arrows
mark the position of the RGB bump derived from theoretical isochrones of 10 Gyr,
and metallicity ranging from Z=0.0001 to Z=0.001. We note that an observed peak
around $F814W\sim$22.5 mag agrees well with the predicted bump for Z=0.0006 and
an age of 10 Gyr. As the bump position depends both on age (fainter bump
for increasing age) and on metallicity (fainter bump for increasing metallicity),
there is some degree of degeneracy. However, it seems clear that the observed peak cannot
be reproduced with very metal-poor populations, as the predicted bump
for Z=0.0001 and an age of 10 Gyr is too bright ($F814W\sim$22.35 mag)
and gets brighter for decreasing age, while it becomes virtually undetectable
for older ages, as the magnitude extension of the loop drops. Similarly, in the case of
more metal-rich populations, the predicted bump is too faint, and an age of 6 Gyr is
required to fit the observed peak (though with a color that is too red).
\section{Star formation history}\label{sec:sfh}
\begin{figure}
\includegraphics[width=9cm]{f08.eps}
\caption{CMD of And {\sc XVI} with the five regions ({\it bundles}) used to
derive the SFH superimposed. See text for details.}
\label{fig:bundles}
\end{figure}
\begin{figure*}
\includegraphics[width=17cm]{f09.eps}
\caption{SFH of And {\sc XVI}. As a function of look-back time, from top to
bottom the three panels show the star formation rate, the age-metallicity relation,
and the cumulative SFH. Clearly, And {\sc XVI} was able to sustain star formation
for at least 6 Gyr.}
\label{fig:sfh}
\end{figure*}
\subsection{Star formation history derivation}\label{sec:sfh_parameters}
The SFH was derived using the IAC-star, MinnIAC and IAC-pop codes
\citep{iacstar,hidalgo11,iacpop}, in a similar fashion to that presented in previous
papers of the LCID project \citep{monelli10a,
monelli10b,hidalgo11,skillman14}. For the present data set, we used a model
CMD of 3$\times$10$^6$ stars with ages and metallicities uniformly distributed
in the ranges 0 $<$ t $<$ 13.5 Gyr and 0.0001 $<$ Z $<$ 0.0025. Observational
errors were simulated taking into account the results of 2$\times$10$^6$ artificial
stars.
IAC-star requires the selection of a number of parameters that are used in the
solution derivation. On the one hand, the parameters used to build the model CMD, such
as the fraction of binary stars and the initial mass function, were chosen to be the
same as in previous LCID papers. Namely, we used a 40\% binary fraction ($q>$0.5) and
the \citet{kroupa02} IMF (x=1.3 for $M < 0.5\,M_{\odot}$ and x=2.3 for $0.5 < M/M_{\odot} <
100$). To run IAC-pop and MinnIAC, decisions have to be taken concerning the
parametrization of both the age and metallicity bins (that define the
``simple stellar populations'') and that of the CMDs.
The adopted age and metallicity bins were: \\
age = [0, 1, 2.5:1:13.5] Gyr (i.e., bin edges at 0, 1, and every 1 Gyr from 2.5 to 13.5 Gyr) \\
metallicity = [0.0001, 0.0003, 0.0005, 0.0007, 0.0010, 0.0015, 0.0020, 0.0025] \\
The sampling of the CMD is based on macro-regions, called {\it
bundles} (see Figure \ref{fig:bundles}). In each bundle, stars are counted in a
regular grid of boxes, whose size is fixed within each bundle.
The main limit of the current data set is the relatively small number of stars in
the observed CMD, which can introduce noise in the solution if too fine a sampling
of the CMD is adopted. Therefore, we performed a number of tests to optimize the
bundle and box sizes. We found that the final solution is mostly affected by two
factors: {\em i)} the sizes of the boxes in {\it bundle 1}; {\em ii)} the
inclusion of the RGB in {\it bundle 4}.
Most of the information on the age of the stellar population comes from the main
sequence and TO region. For predominantly old populations such as those present in
And {\sc XVI}, most of the information will come from {\it bundle 1}. {\it
Bundle 3} and {\it bundle 5} are useful to set limits to the youngest
populations and the highest metallicity, respectively, while {\it bundle 2}
samples the blue plume.
In previous works, in order to give more weight
to the TO region, we adopted smaller boxes in the corresponding bundle. However,
we found that, in comparison with our previous LCID experience, we had to
significantly increase the size of individual boxes in this bundle in order to
avoid fluctuations and the appearance of spurious populations in the solution.
Namely, the box size chosen is (color, magnitude) = (0.04, 0.2) mag, compared
to typically (0.02, 0.1) in LCID. Given the small or negligible number of
stars there, larger boxes are used in bundles 2, 3 and 5. The HB is excluded from
the SFH analysis because the details of its morphology depend on poorly
constrained factors, such as the mass loss during the RGB phase, which are not
properly modeled in our synthetic CMDs.
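As a minimal illustration of this box-counting step (a sketch only, not the actual MinnIAC implementation: the star list and the rectangular bundle limits below are hypothetical, and the real bundles are polygons in the CMD), the counts per box can be computed with standard Python tools:
\begin{verbatim}
import numpy as np

# Hypothetical observed star list: (color, magnitude) pairs.
rng = np.random.default_rng(0)
color = rng.normal(0.8, 0.1, 4000)   # F475W - F814W
mag = rng.normal(27.0, 0.5, 4000)    # F814W

# Illustrative rectangular bundle limits, with the box sizes
# quoted in the text: 0.04 mag in color, 0.2 mag in magnitude.
color_edges = np.arange(0.50, 1.10 + 1e-6, 0.04)
mag_edges = np.arange(25.5, 28.5 + 1e-6, 0.2)

# Star counts per box; the SFH code compares such binned counts
# between the observed and the synthetic CMDs.
counts, _, _ = np.histogram2d(color, mag,
                              bins=[color_edges, mag_edges])
print(counts.shape, int(counts.sum()))
\end{verbatim}
The same binning is applied to the observed and to each model CMD, so that the solver only ever compares star counts box by box.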
The second major difference with the LCID strategy is that including the RGB
significantly improves the solution as well. With the LCID galaxies we had
demonstrated that, whenever the CMDs are well populated by at least tens of
thousands of stars, the inclusion of the RGB has little, if any, effect on the
final solution, and typically the $\chi^2$ increases \citep{bernard12}. This is
mostly due to the fact that the age is highly degenerate in the RGB, while a
bundle such as the current {\it bundle 5} is always useful to set a constraint
to the most metal-rich population. In the current analysis, where only a few
thousand stars are available, we found that the solution strongly benefits from
the inclusion of a bundle on the RGB. The main effect is that spurious
populations (such as simultaneously very old and very metal-rich ones) disappear
from the solution.
\begin{figure}
\includegraphics[width=9cm]{f10.eps}
\caption{SFH solutions obtained adopting different photometry sets and
different stellar evolution libraries. The top panel shows the SFR as a function
of time, while the bottom one presents the normalized cumulative SFH.
}
\label{fig:sfr_comparison}
\end{figure}
\subsection{Global star formation history of And {\sc XVI}}\label{sec:sfh_global}
The SFH of And {\sc XVI} was derived using only stars within 5 $r_h$ from
the center. The total numbers of stars used to derive the SFH are 3985, 202, and
491 in {\it bundles 1, 2, and 4}, respectively. At larger galactocentric distances,
the majority of sources are expected to be unresolved background galaxies.
Nevertheless, the comparison with theoretical isochrones in Figure \ref{fig:cmd}
suggests that a small fraction of And {\sc XVI} stars may be present at larger
radii. Scaling the number of objects found in the outer regions to the area of the same
bundles within 5$r_h$, we find that an upper limit of $\sim$4.5\% of
contaminating objects may be affecting the star counts, thus not strongly
affecting the derived SFH. In particular, since the distribution of the contaminating
galaxies in the CMD does not resemble that of a stellar population, we do not
expect them to originate any strong feature at a specific age in the SFH.
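One plausible reading of this scaling (a schematic sketch; the individual areas and counts entering it are not quoted in the text) is that the background surface density measured beyond 5$r_h$ is assumed to be uniform, so that
\begin{displaymath}
f_{\rm cont} \lesssim \frac{N^{\rm out}_{\rm bundles}\,\left(A_{<5r_h}/A_{>5r_h}\right)}{N^{\rm in}_{\rm bundles}} \sim 4.5\%,
\end{displaymath}
where $A$ denotes the area of each region and $N_{\rm bundles}$ the number of sources falling within the SFH bundles.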
The final solution is presented in Figure \ref{fig:sfh}. The three panels
represent, from top to bottom, the star formation rate (SFR), the
age-metallicity relation (AMR) and the cumulative SFH as a function of the
look-back time. And {\sc XVI} is populated by both old and intermediate age
stars. It started forming stars at the oldest possible epoch. Remarkably, in our
SFH solutions, there appears to be a significant very old peak 13.5 Gyr ago,
followed by a sudden drop of star formation. After a minimum $\sim$12 Gyr
ago, star formation increased again and reached its peak $\sim$10 Gyr ago. This
is an extremely interesting finding, as this feature is common neither among
the MW dSph satellites nor among the isolated ones, such as Cetus and Tucana. In fact,
these typically present a single dominant event of star formation occurring at
the oldest epochs \citep[see e.g.,][]{monelli10b, monelli10c, deboer12a,
deboer12b}. The second distinctive feature we recover, as already found by
\citet{weisz14a}, is that the star formation activity extends for many Gyr,
vanishing 6 Gyr ago. The blue plume of stars in {\it bundle 2} produces the
small peak at $\sim$3 Gyr, which we interpret as BSS stars (see \S
\ref{sec:bss}). We also recover a fundamentally constant AMR, with metallicity
not exceeding [M/H]=$-$1.5 (Z=0.0006), in agreement with the qualitative
comparison with theoretical isochrones.
The cumulative SFH reveals that 10\% of the And {\sc XVI} stellar mass was in place by
$z\sim$6 ($\sim$12.8 Gyr ago), that is, when the reionization epoch concluded,
and that And {\sc XVI} formed 50\% of its stellar mass by $z\approx$2, or
$\sim$10.1$\pm$0.2 Gyr ago (see Table 3 for the derived integrated and mean
quantities).
\begin{table}[ht!]
\begin{center}
\caption{Integrated and mean quantities}
\begin{tabular}{lc}
\hline
\hline
\textit{Quantity} & \textit{value} \\
& \\
\hline
$\int<\Psi(t)>dt$ (10$^6 M_{\odot}$) & 1.92$\pm$0.03 \\
$<\Psi(t)>$ (10$^{-8} M_{\odot}$ yr$^{-1}$ pc$^{-2}$) & 3.6$\pm$0.1 \\
$<$age$>$ (Gyr) & 9.9$\pm$0.1 \\
$<$Z$>$ (10$^{-4}$) & 4.2$\pm$0.1 \\
& \\
\hline
\hline
\end{tabular}
\end{center}
\label{tab:tab03}
\end{table}
Figure \ref{fig:sfr_comparison} presents a comparison between the SFHs recovered
using different photometry sets and stellar evolution libraries. In particular,
together with the previous DAOPHOT+BaSTI solution (black lines), we show the
SFH obtained with DAOPHOT+Girardi (red lines, \citealt{girardi00}) and DOLPHOT+BaSTI (grey lines).
The figure presents both the SFR as a function of time (top panel) and
the normalized cumulative SFH (bottom). The plots disclose a generally
very good agreement. In particular, the three solutions confirm the fundamental
results that the star formation in And {\sc XVI} did extend to $\sim$6 Gyr
ago, and that there is no dominant initial event as in other dSphs such as Cetus
and Tucana. We exclude the possibility that this is an artifact of photometric errors,
as these are too small, in either photometry, to affect the TO morphology that
causes the age spread. Moreover, the three solutions confirm an initial star
formation event followed by a less intense activity. In particular, the use of either
photometry set together with the BaSTI models provides a minimum at 12 Gyr, while
the subsequent maximum is 1 Gyr younger in the DOLPHOT+BaSTI solution than in
the DAOPHOT+BaSTI one. Interestingly, while the BaSTI solutions provide a
strong peak at such ages, the solution based on the Girardi library is
characterized by a flatter SFR, though the ages of the peaks agree very well with
the BaSTI solutions. The consistency between the three solutions is clear in the
bottom panel, where the cumulative SFHs agree at the 1-$\sigma$
level.
\begin{figure}
\includegraphics[width=9cm]{f11.eps}
\caption{CMDs of the inner (r$_e < $1.38\arcmin, left) and outer regions
(1.38$\arcmin \le$ r$_e <$ 5.00\arcmin, right) of And {\sc XVI}. Colored
symbols show the RRL stars in each region. The separation between the inner
and outer region is such that the two CMDs contain the same number of
sources within the {\it bundles}. Two not-too-old isochrones are
over-plotted (Z=0.0003, t=6, 8 Gyr). The number of stars in the TO region
comprised between the two curves is larger in the inner than in the outer region,
suggesting stronger star formation at these ages closer to the center
of the galaxy.
}
\label{fig:grad_cmd}
\end{figure}
\begin{figure}
\includegraphics[width=9cm]{f12.eps}
\caption{The SFR derived for the inner and outer region of And {\sc XVI}.
The star formation was slightly more prolonged in the inner than in the outer region,
but no strong gradient was found.}
\label{fig:grad_sfh}
\end{figure}
\section{Radial spatial gradient}\label{sec:radial}
In this section we investigate how the properties of And {\sc XVI} change as
a function of the distance from its center. First, we note that we do not
have a symmetric spatial sampling of the galaxy. In fact, due to a bright
field star next to the innermost regions of And {\sc XVI}, we were forced to
point the telescope such that the center of the galaxy is next to the edge of
the ACS camera, at (X,Y)$\approx$(566,1847) px (see the black cross in Figure
\ref{fig:map}). Second, we estimate that the current ACS data cover
$\approx$23\% of the galaxy area.
For the following analysis, we take advantage of a homogeneous derivation of
the structural parameters of all M31 dwarf spheroidal galaxies that fall in
the PAndAS footprint \citep{salomon15} and use the following,
updated values for the centroid (0:59:30.3$\pm$0.4, +32:22:34$\pm$0.4), ellipticity
(0.29$\pm$0.08), position angle (98$\pm$9\arcdeg), and half-density radius
(1.0$\pm$0.1\arcmin). We calculated the elliptical distance of each star from
the galaxy center, and we used it to select three regions. The two panels
of Fig. \ref{fig:grad_cmd} show the CMDs of the inner and outer regions, selected such that they have a
similar total number of sources {\it in the bundles used for the SFH
derivation} ($\sim$2,300). This occurs at r=1.38$r_h$. Interestingly, the
overall morphology of the CMD does not change strongly as a function of
radius. In the following we analyze in detail the differences in the SFH,
and how these are reflected in the variation of the CMD morphology. The CMD of the
outermost region, already presented in the right panel of Figure \ref{fig:cmd},
clearly demonstrates that there is only marginal evidence for the presence of And
{\sc XVI} stars beyond 5r$_h$ (r$_e$=5.0\arcmin).
\subsection{The spatially and temporally extended SFH of And {\sc XVI}}\label{sec:sfh_rad}
To guide the eye, we over-plotted on Figure \ref{fig:grad_cmd} two isochrones
from the BaSTI database, assuming Z=0.0003 and ages of 6 and 8 Gyr. Comparing the
two panels, we found that the region between the two curves is
slightly more populated in the inner (276 stars) than in the outer region
(191 stars), suggesting that the star formation rate $\sim$6 Gyr ago was
higher in the inner than in the outer region. It may also indicate that the
star formation was slightly more prolonged toward the
center of And {\sc XVI}, as commonly found in nearby dwarf galaxies, though
the effect looks small. It is remarkable that And {\sc XVI} was able to
sustain star formation for at least 6 Gyr over a vast fraction of its main
body.
To support this finding, we derived the SFH in the two elliptical regions in an
identical way as for the full galaxy. The results are shown in Figure
\ref{fig:grad_sfh}, where the calculated SFRs vs time are over-plotted. The
figure shows that the main features are consistent among the inner, outer, and
global solutions. The SFR in both the central and the external region presents an
initial peak followed by a decreased activity. The main peak is recovered at a
similar age ($\sim$10 Gyr ago), and star formation continues to 6 Gyr ago in both
regions. However, at the most recent epochs, the SFH presents stronger activity in
the central part compared to the outskirts, with a secondary peak occurring
$\sim$7 Gyr ago. It must be stressed that the uncertainties are large, mostly
due to the small number of stars used to derive both solutions, and therefore
such detailed comparison should be treated cautiously. However, the fact that
And {\sc XVI} was able to sustain star formation for at least 6 Gyr over its
entire body remains a solid result. This is significantly different from what
was found in other dwarfs. For example, the spatial variation of the SFH in LGS
3 and Phoenix \citep{hidalgo13} indicates the presence of a gradient in the age
of the youngest populations, which are confined to the central regions only.
Similar conclusions have been reached also in the case of the MW satellites
Fornax and Carina \citep{deboer13,deboer14b}, which are dominated by
intermediate-age populations in the center and by purely old populations in
the outskirts.
\section{Discussion}\label{sec:discussion}
Given its size and luminosity, And {\sc XVI} is somewhat at the boundary between
classical and faint dwarfs. Figure \ref{fig:relations} shows the absolute $M_V$
magnitude of Local Group dwarfs as a function of their size (half light radius,
$r_h$) and metallicity. The data are from the compilation paper by
\citet{mcconnachie12}, and the plots partially replicate his figures 6 and 12
(see also \citealt{clementini12}, their figure 1). Different symbols indicate
LG dwarf galaxies of different morphological types, as labeled. We updated here
the position of And {\sc XVI}, shown as a black diamond, using the luminosity
from Martin et al. 2016 (submitted). And {\sc XVI} occupies the faint tail of the
M31 satellites sequence, being $\sim$1 mag brighter than M31 dwarfs of similar
size, such as And XI and And XX. With respect to previous estimates
\citep{ibata07}, the absolute $M_V$ magnitude increased by $\sim$1.7 mag,
moving And {\sc XVI} significantly closer to the faint dwarfs region ($M_V$ =
$-$7.5 mag); nonetheless, it is still $\sim$2--3 mag brighter than Galactic
faint dwarfs of similar size, such as Leo V and Ursa Major II.
And {\sc XVI} is thus a small mass satellite of M31, located relatively far
from both its host ($\sim$279 kpc) and the MW ($\sim$575 kpc). The most
striking feature of its evolution is that it was able to sustain star formation
for $\sim$7 Gyr and, as proven in the previous section, over most of its body,
with only a small spatial gradient in the sense that the youngest star formation
(6-8 Gyr ago) was stronger in the inner regions. This occurrence is an
interesting and peculiar feature among LG dwarfs. In fact, broadly speaking, it
is something intermediate between the two typical observed behaviors.
Following the nomenclature introduced by \citet{gallart15}, we note that
the majority of dSph galaxies are {\itshape fast} systems, i.e., they formed
stars for a short amount of time at the oldest epochs (e.g., Draco, Ursa Minor,
Cetus, Tucana). At the other extreme, {\itshape slow} dwarf galaxies, which present
current or recent star formation, are characterized by continuous activity from
the oldest to youngest epochs (e.g., Leo A: \citealt{cole07}; Leo T:
\citealt{weisz12,clementini12}; DDO210: \citealt{cole14}; the Fornax dSph:
\citealt{deboer12b,delpino13}; the Magellanic Clouds: \citealt{smeckerhane02,noel09,
meschin14}). Within this scheme, the dominant old peak of star formation makes
And {\sc XVI} similar to a {\itshape fast} system, but nonetheless the extended activity
is typical of {\itshape slow} galaxies, though the quenching occurred $\sim$ 6 Gyr ago.
What mechanisms influenced the evolution of And {\sc XVI}? What favored the extended
star formation, and what caused its termination?
We derived that the mass formed in the
surveyed area during the first two Gyr is of the order of $\approx 3\times10^4
M_{\odot}$ (15\% of the total mass). Therefore, And {\sc XVI} would have
properties comparable to a typical faint dwarf, if star formation had been truncated
at a similar epoch. This suggests that, despite the similar stellar mass back then,
And {\sc XVI} was not strongly affected by reionization, which is thought to be
the strongest mechanism shutting down star formation in low mass Milky Way satellites
\citep{brown14}. On the other hand, the properties of the old
population in And {\sc XVI} are reminiscent of those of the old population in
the low-mass dIrr isolated galaxies, Leo A and Leo T, at least in terms of
integrated quantities. On the one hand, the mean star formation rate of Leo A
between 13.5 and 11.5 Gyr ago was $\sim2\times10^{-5}\,M_{\odot}\,\mathrm{yr}^{-1}$,
implying that this dIrr formed, in the first 2 Gyr, a stellar mass of the order of
4$\times$10$^4$ M$_{\odot}$. This is within a factor of 2 of what was produced
by And {\sc XVI}\footnote{Taking into account the area covered by the ACS data and
the size of Leo A \citep{vansevicius04}, we estimate that this number might be
underestimated by a factor of $\approx$2--3, thus not affecting the following
discussion.}. Moreover, the number of RRL stars is very similar in both systems
(8 vs. 10, \citealt{bernard13}). On the other hand, Figure
\ref{fig:relations} also shows that, in both planes, And {\sc XVI} is located
remarkably close to Leo T, the lowest mass star forming galaxy known in the LG.
In particular, And {\sc XVI} is $\sim$0.5 mag fainter than Leo T, which,
despite its low mass (total mass $< 10^7 M_{\odot}$, \citealt{simon07},
$M_{\star} \sim 1.2\times10^5 M_{\odot}$ \citealt{ryanweber08}, thus comparable
to that of And {\sc XVI}), was able to form stars over a Hubble time
\citep{weisz12,clementini12}.
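For reference, the early stellar mass of Leo A quoted above is simply the mean rate multiplied by the duration of the interval:
\begin{displaymath}
\sim 2\times10^{-5}~M_{\odot}\,\mathrm{yr}^{-1} \times 2\times10^{9}~\mathrm{yr} = 4\times10^{4}~M_{\odot}.
\end{displaymath}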
\begin{figure*}
\includegraphics[]{f13.eps}
\caption{M$_V$ magnitude vs the logarithm of the half-light radius (left) and
the metallicity (right) for LG dwarf galaxies of different morphological type.
The data are from \citet{mcconnachie12}, but with updated values for And {\sc XVI}.
Different symbols indicate galaxies of
different morphological types: red circles: MW dSphs (full: purely old systems;
open: systems with strong intermediate populations); blue circles: isolated dSph
(Cetus and Tucana); green squares: MW faint dwarfs satellites; open squares: M31 dSph
satellites; butterflies: dIrr systems (including transition types such as LGS3
and Phoenix).}
\label{fig:relations}
\end{figure*}
This suggests that the initial properties of And {\sc XVI}, Leo A, and Leo T
were similar to those of a faint dwarf progenitor. Nonetheless, if the initial
masses were similar, And {\sc XVI}, Leo T, and Leo A would have been equally
vulnerable as faint dwarfs to the quenching effect of reionization. This clearly
does not seem to be the case, since in the SFH there is no trace of a strong
damping effect during the early evolution, contrary to what occurs in faint
dwarfs. {\itshape Possibly, this indicates that the different evolution is
dictated by the environmental conditions}. At present, Leo T is located in
relative isolation quite similar to And {\sc XVI}, at $\sim$400 kpc from the MW
and more than $\sim$900 kpc from M31. Interestingly, the negative radial
velocities of both Leo T and And {\sc XVI} with respect to both spirals and the
LG barycenter are compatible with them approaching the LG for the first time.
Leo A is remarkably one of the most isolated systems at the
fringes of the LG. Together with DDO210 and VV124, it belongs to the restricted
group of dwarf galaxies that never strongly interacted with either the MW
or M31 throughout their history \citep{mcconnachie12}. {\itshape The similarity
of And {\sc XVI} and these dIrrs may also indirectly support the idea that And
{\sc XVI} was initially located in a lower density environment, far from both
the ionizing radiation and the gravitational effect of the growing MW and M31,
thus explaining the prolonged star formation despite the initial low mass. This
has been proposed to be generally the case for {\itshape slow} systems
\citep{gallart15}. }
Moreover, it has been suggested that And {\sc XVI} is among the least dark
matter dominated of the M31 satellites \citep{collins14}. This as well might be
an indication of a slower mass assembly history, maybe related to formation
in a low-density environment. Although such small systems are expected to be
strongly affected by reionization, the subsequent evolution may be driven by a
complex interplay of mass assembly history, effect of the reionization and
effect of stellar feedback. Theoretical models by \citet{benitezllambay14}
suggest that the stellar feedback acts as regulator of the evolution of small
galaxies after the reionization epoch: in those systems where star formation
started before reionization, stellar feedback contributes to sweeping out
the gas, causing a definitive termination of the star formation. In those
systems where no stars formed before the reionization epoch, reionization contributes to
heating up and dispersing the gas, but is not strong enough to permanently remove the
gas from these systems. This gas is later re-collected by the central halo and can
start producing stars mostly at intermediate to young ages. Leo A, Leo T and
And {\sc XVI} may fit in this scheme, and therefore they may be galaxies with
mass below the threshold for star formation before reionization.
\section{Conclusions}\label{sec:conclusions}
We have presented a detailed analysis of the And {\sc XVI} dSph galaxy,
satellite of M31, based on deep CMDs obtained from ACS data. The main
conclusions can be summarized as follows: \\
$\bullet$ We have derived three SFHs of And {\sc XVI}, using two different
photometric reductions (DAOPHOT and DOLPHOT) and two stellar evolution
libraries (BaSTI and Girardi), obtaining a very good agreement independently
of the assumptions; \\
$\bullet$ The SFH of And {\sc XVI} at the oldest epochs appears different from those of both the MW and the
isolated dSphs: the dominant peak occurred relatively late, around 10 Gyr ago,
and is preceded by an initial peak at the oldest
ages followed by a period of decreased activity; \\
$\bullet$ Despite the low stellar mass (M$\sim$10$^5$M$_{\odot}$), And {\sc XVI}
presents an extended star formation activity, which began at the oldest
epochs and was maintained until $\sim$6 Gyr ago; \\
$\bullet$ We detected 9 variable stars, all RRL stars. Eight of them
belong to And {\sc XVI}, while one is compatible with being a more distant,
M31 halo field star; \\
$\bullet$ We provided a new estimate of the distance of And {\sc XVI},
$(m-M)_0$= 23.72$\pm$0.09 mag, based on the properties of the RRL stars. We
found that different methods (luminosity--metallicity relation,
period--luminosity--metallicity relation) provide values slightly larger than
previous estimates based on the RGB tip; \\
$\bullet$ We discussed the properties of And {\sc XVI} in comparison
with other LG dwarfs. And {\sc XVI} occupies the faint end of the dSph
sequence. However, we found that, had its star formation been
truncated 12 Gyr ago, today it would closely resemble a faint dwarf galaxy
in stellar mass. \\
$\bullet$ The SFH of And {\sc XVI} is consistent with a formation and early
evolution in a low-density environment, which favored a slow mass assembly
and prolonged star formation. A late arrival in the inner region of the LG
may have been the cause of the termination of star formation $\sim$7 Gyr ago.
New data available for more M31 satellites, collected within the framework
of this project, will allow us to build a fundamental sample to compare
the MW, M31, and isolated dwarfs in the LG.
\section*{Acknowledgments}
We are grateful to the anonymous referee for the pertinent comments, which improved
the paper. The authors thank M. Marconi and V. Braga for providing the coefficients of
the period-luminosity relations. MM is grateful to G. Fiorentino and to G.
Bono for the discussion on the HB morphology and the RRL properties. Support
for this work has been provided by the Education and Science Ministry of
Spain (grants AYA2013-42781, AYA2014-56765-P). DRW is supported by NASA through
Hubble Fellowship grant HST-HF-51331.01 awarded by the Space Telescope Science
Institute. MBK is supported by the HST grants AR-12836 and AR-13888.
{\it Facility:} \facility{HST (ACS)}
\section{Introduction}
Recently, there has been a surge of interest in the statistical properties of different stochastic processes
under resetting, when a random process is interrupted by
a resetting event and restarts anew from prescribed initial conditions.
The interest in this kind of process is nurtured by its abundance in nature and by its importance for search problems, see \cite{review} for a review.
The situation is mostly exemplified by the time-dependent position of a particle which performs some kind of random motion
and returns to the origin upon a resetting event.
The random motion under stochastic resetting can thus be considered as the interplay of two distinct random processes:
the resetting process, a point process on the real line representing the time axis, and the particle's motion between the resetting events, the displacement process.
The waiting time distribution function between two resetting events can
be exponential \cite{EvansMajumdar}, deterministic (the most effective one for the search processes) \cite{shlomi2017}, power-law \cite{NagarGupta} or of
other type \cite{shlomi2017,palrt,res2016,shlomi2016}. Most studies treat the resetting as an instantaneous event \cite{review}, but also the situations when
some time is needed by the particle to come back to the initial position were considered \cite{me,me2,shlomi, shlomi1,shlomi2,campos}. The first study of resetting has been
devoted to Brownian motion \cite{EvansMajumdar} as a displacement process, later the discussion has been generalized for other types of motion,
such as L\'evy flights \cite{levy1,levy2}, L\'evy walks \cite{china}, scaled Brownian motion (SBM) \cite{AnnaNonrenewal, AnnaRenewal}, and
continuous time random walks (CTRW) \cite{MV2013,MC2016,Sh2017,ctrw,ctrwres}. This last situation is the topic of the present work.
The continuous time random walk (CTRW) is a process in which the time of the next step of a random walk is chosen according to a prescribed probability distribution \cite{sokbook, MontrollWeiss}. The
applications of CTRW range from charge carrier motion in disordered semiconductors \cite{disordered} to earthquake modeling \cite{earthquake1, earthquake2}, biology \cite{bio}, and economics
\cite{eco1,eco2}. The properties of a CTRW with an exponential waiting time density $\psi(t)=re^{-rt}$ correspond to normal diffusion \cite{sokbook}, with the mean squared displacement
(MSD) $\langle x^2 (t) \rangle$ growing linearly in time, while the properties of a
CTRW with a power-law waiting time probability density function (PDF) $\psi (t) \sim t^{-1-\alpha}$ (with $0< \alpha < 1$) are quite different, giving rise to a slower, subdiffusive behavior with
$\langle x^2 (t) \rangle \propto t^\alpha$. The properties of such a subdiffusive CTRW under Poissonian resetting were recently considered in Ref. \cite{ctrwres},
which provides a nice introduction to the problem of resetting in CTRW.
On the mean field level some properties of the CTRW (for example, its aging) resemble those of subdiffusive scaled Brownian motion (SBM), a diffusion process with the time-dependent diffusion coefficient $D(t)\sim
t^{\alpha-1}$ and the MSD $\left\langle x^2(t)\right\rangle\sim t^{\alpha}$ \cite{SokolovSBM}. SBM is a Markovian process, while CTRW is a non-Markovian (semi-Markovian) one.
Both random processes, the CTRW and the SBM, have non-stationary increments. However, in SBM this non-stationarity is modeled via the explicit time dependence of the diffusion coefficient,
while the CTRW, being of the renewal class, lacks explicit time dependence of its parameters. Therefore, some properties of the two processes (for example, their behavior under
confinement) differ \cite{MetzlerSBM}. SBM can be used to describe the dynamics of granular gases \cite{annapccp,annagg,hadiseh,ultraslow}.
The non-stationarity of the increments of the displacement process leads to two different situations under resetting, which would be indistinguishable if the increments of the displacement process were stationary.
The first one corresponds to the case when the memory of the course of the displacement process preceding the resetting event is fully erased, and the second one to the case when this
memory is partially retained: the dynamics of the underlying process can be either rejuvenated after resetting or unaffected by the resetting of the coordinate. We will
refer to the first case as the case of \textit{complete resetting}, while the second case will be referred to as the case of \textit{incomplete resetting}.
In SBM these two situations correspond to the cases when the time-dependent diffusion coefficient $D(t)$ also resets to its initial value $D(0)$ together with the coordinate
of the particle \cite{AnnaRenewal}, or remains unaffected by the resetting events \cite{AnnaNonrenewal}. In CTRW the first assumption corresponds to a situation
in which the resetting interrupts the waiting period between the jumps, and, after the resetting, a new waiting time is chosen independently of the prehistory of the process.
The second assumption implies that the waiting period started before the resetting event is not interrupted by the resetting.
These two cases have been investigated and compared for the CTRW with exponential resetting \cite{ctrwres}. In the current study we investigate the behavior of the subdiffusive CTRW
under a resetting process with a power-law distribution of the times between the resetting events, compare the results with those for SBM, and discuss the similarities and differences between
the behavior of the CTRW and of its mean field model. We show that the behavior of the MSD in both processes is similar. Considerable similarities are also found in the intermediate
asymptotic behavior of the probability density functions (PDFs) of both processes under complete resetting, while under incomplete resetting the CTRW shows additional fluctuation
effects leading to strong differences in the PDFs.
The further structure of the work is as follows: In Sec. \ref{sec:Gen} we define the models and introduce the notation. The behavior of the MSD is then discussed in Sec. \ref{sec:MSD}.
The properties of the intermediate asymptotics of the PDFs and the conditions under which these can be observed are discussed in Sec. \ref{sec:PDFs}. The conclusions follow in Sec. \ref{sec:Concl}.
\section{Model}
\label{sec:Gen}
\subsection{Continuous time random walks}
A standard (``wait-first'') CTRW starts at $x=0$ at the initial time (in a situation without resetting this is typically set to zero) with a waiting period
\cite{ScherMont}. Other variants of the CTRW include walks starting from a jump (similar to the corresponding correlated model of \cite{Magdziarz}), walks anticipating the
next jump after
the observation time $t$ (``oracle'' walks), and other clustered models \cite{Jurlewitz}. The CTRW by itself may be considered as an interplay (subordination) of two
distinct random processes: the \textit{parent process}, being a simple random walk with discrete steps, and the \textit{directing process} (subordinator, operational time) defining the
random number of steps the parent process has made up to the physical time $t$. In this work we consider resetting of the classical Scher-Montroll wait-first scheme,
although the jump-first variant will appear at intermediate steps of our discussion.
Although general expressions may be obtained in the Fourier-Laplace domain, as was done in Ref. \cite{ctrwres}, these
are difficult to analyze in the case when the resetting times follow a power-law distribution. Therefore, to obtain asymptotic expressions for the PDFs, we work in the real space / time domain, relying on the asymptotic form of the CTRW's PDFs.
The methods applied in the present work thus differ considerably from those used previously.
We study a power-law distribution of the waiting times between the CTRW steps:
\begin{equation}
\psi(t) = \frac{\alpha t_0^\alpha}{(t_0 + t)^{1+\alpha}}\,.
\label{eq:CTRWpar}
\end{equation}
Here $t_0$ is the characteristic time of the power-law decay, connected with the median
value $m_t$ of the waiting time via $m_t=(2^{1/\alpha}-1) t_0$.
The survival probability $\Psi(t)$ gives the probability that no step occurs between 0 and $t$:
\begin{equation}
\Psi (t) = 1 - \int\limits_0^t {\psi (t')dt'} = \int\limits_t^\infty {\psi (t')dt'}\,.
\end{equation}
For the power-law distribution of the waiting times it also scales according to a power law:
\begin{equation}
\Psi(t) = \frac{t_0^\alpha}{(t_0 + t)^\alpha}\,.
\end{equation}
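Waiting times with this survival probability are conveniently generated by inverse transform sampling: solving $\Psi(t)=u$ for $u$ uniformly distributed on $(0,1)$ gives $t = t_0 (u^{-1/\alpha}-1)$. A minimal Python sketch (the function name \texttt{waiting\_times} is ours and serves only for illustration):
\begin{verbatim}
import numpy as np

def waiting_times(alpha, t0, size, seed=0):
    # Inverse transform sampling: Psi(t) = (t0/(t0+t))^alpha = u
    # gives t = t0 * (u**(-1/alpha) - 1).
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    return t0 * (u**(-1.0 / alpha) - 1.0)

# The sample median should approach (2**(1/alpha) - 1) * t0:
t = waiting_times(alpha=0.5, t0=1.0, size=10**6)
print(np.median(t), (2**(1 / 0.5) - 1) * 1.0)
\end{verbatim}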
It is convenient to switch between the time and the Laplace domains. The Laplace transform of the waiting time PDF is
\begin{equation}
\tilde{\psi}(s)=\int_0^{\infty}\psi(t)\exp(-ts)dt
\end{equation}
and the Laplace transform of the survival probability can be expressed via $\tilde{\psi} (s)$ as
\begin{equation}
\tilde{\Psi} (s) = \frac{1-\tilde{\psi}(s)}{s}\,.
\end{equation}
For $\alpha < 1$ the asymptotics of the Laplace transform of $\Psi(t)$ is $\tilde{\Psi}(s) \simeq \Gamma(1-\alpha) s^{\alpha -1} t_0^\alpha$, and $\tilde{\psi}(s) \simeq 1 -
\Gamma(1-\alpha) s^\alpha t_0^\alpha$.
The probability density $\psi_n(t)$ that the $n$-th step happens at time $t$ satisfies the renewal equation \cite{sokbook}
\begin{equation}
\psi_n(t) = \int_0^{t}\psi_{n-1}(t^{\prime})\psi(t-t^{\prime})dt^{\prime}\,,
\end{equation}
and the sum of all $\psi_n(t)$ gives the rate of stepping events at time $t$:
\begin{equation}\label{rate}
\mu(t)=\sum_{n=1}^{\infty}\psi_n(t)\,.
\end{equation}
Its Laplace transform yields
\begin{equation}
\tilde \mu (s) = \sum\limits_{n = 1}^\infty {{{\tilde \psi }^n}(s)} = \frac{{\tilde \psi (s)}}{{1 - \tilde \psi (s)}}\,.
\end{equation}
The stepping rate for $\alpha < 1$ is given by
\begin{eqnarray}
\tilde{\mu}(s) &\simeq& \frac{1}{\Gamma(1-\alpha)} (s t_0)^{-\alpha} \nonumber \\
\mu(t) &\simeq& \frac{\sin \pi \alpha}{\pi} t_0^{-\alpha} t^{\alpha -1}
\label{eq:mu}
\end{eqnarray}
in the Laplace domain and in the time domain, respectively.
For $\alpha > 1$ (the case which would correspond to normal diffusion for the CTRW without resetting) the stepping rate stagnates at long times:
\begin{equation}
\mu(t) = \frac{1}{\langle t \rangle} = \frac{\alpha - 1}{t_0}\,,
\end{equation}
with $\langle t \rangle$ being the mean waiting time between two steps.
\subsection{Complete and incomplete resetting}
As we already mentioned, two situations are considered. In the first one, after a resetting event the CTRW process
starts anew, from a new waiting time which is independent of the prehistory of the process (complete resetting). This case corresponds to the first model of Ref. \cite{ctrwres}
and will be denoted as case (1) in the text and in the figures.
The case of incomplete resetting (case (2)) corresponds to the second model of Ref. \cite{ctrwres}. In this case the coordinate of the walker is set to zero under resetting, which, however, does not interrupt the running waiting period, so that the memory of its beginning is not erased.
The event diagrams, showing the temporal order of the CTRW jumps and of the resetting events for the two models, are displayed in Fig. \ref{fig:Events} and elucidate the
notation used. Thus, in the case (1) a wait-first (standard) CTRW starts anew at the time $t_r$ of the last resetting event.
The total duration of the observed part of the CTRW (which is the time interval between the last resetting event at $t_r$ and the time $t$ at which the
position of the walker is measured) is equal to $\Delta t = t-t_r$.
For the case (2) of incomplete resetting, the resetting time falls into a waiting period between two steps of the CTRW (or into the
very first waiting period between the preparation and the first step), which is not interrupted by the resetting event.
In this case we consider a jump-first CTRW starting after the forward recurrence time $t_f$ following the last resetting event.
The total duration of the observed part of this jump-first CTRW is then $\Delta t' = t - t_r - t_f$. Since the time $t_r$ of the last resetting event now corresponds to the aging time of the CTRW, the waiting time for the first step after resetting will typically be longer than in the previous case due to aging effects \cite{sokbook}, provided the second moment of the waiting time is large enough or diverges.
\begin{figure}[tbp]
\begin{center}
\scalebox{0.4}{\includegraphics{Events_6.eps} }
\caption{The event diagrams of CTRW under complete (1) and under incomplete (2) resetting. The renewal events of the CTRW are denoted by black and gray filled squares, the
renewals of the resetting process are denoted by empty circles. The difference between the two situations
is that the time of the last renewal of the resetting process before the observation is the time of the (new) beginning of the
wait-first CTRW in (1), and the aging time of the CTRW in (2). The new beginning of the CTRW in (1) is denoted by a larger empty square (there is no jump taken at this time),
and the time $\Delta t = t-t_r$ corresponds to the observed duration of the wait-first CTRW.
In (2) the time of the last resetting is the aging time for the CTRW, and the first jump of the CTRW takes place at time $t_1$, denoted by a
larger black square. The observed duration of the jump-first CTRW, $\Delta t'$, corresponds to $t -
t_1$. \label{fig:Events}}
\end{center}
\end{figure}
\subsection{Power-law resetting}
The waiting time PDF of the resetting process will be denoted by $\phi(t)$ and is given by the power-law form
\begin{equation}
\phi(t) = \frac{\beta \tau_0^\beta}{(\tau_0 + t)^{1+\beta}}\,.\label{eq:respar}
\end{equation}
Here $\tau_0$ is the characteristic time of the power-law decay connected with the median
value $m_{\tau}$ of the resetting time via $m_{\tau}=(2^{1/\beta}-1)\tau_0$.
The survival probability $\Phi(t)$ gives the probability that no resetting event occurs between 0 and $t$,
\begin{equation}
\Phi (t) = \int\limits_t^\infty {\phi (t')dt'}\,
\end{equation}
For the power-law distribution of the waiting times it also scales according to a power law:
\begin{equation}\label{survres}
\Phi(t) = \frac{\tau_0^\beta}{(\tau_0 + t)^\beta}
\end{equation}
For the power-law PDF it is again convenient to switch between the time and the Laplace domains.
The rate of resetting events at time $t$ may be obtained analogously to the stepping rate of the CTRW. For $\beta < 1$ the resetting rate is time-dependent and is given by
\begin{eqnarray}
\tilde{\kappa}(s) &\simeq& \frac{1}{\Gamma(1-\beta)} (s \tau_0)^{-\beta} \nonumber \\
\kappa(t) &\simeq& \frac{\sin \pi \beta}{\pi} \tau_0^{-\beta} t^{\beta -1}
\label{kappab0}
\end{eqnarray}
in the Laplace domain and in the time domain, respectively.
For $\beta > 1$ the rate of resetting events stagnates for long $t$ and is given by
\begin{equation} \label{kappab1}
\kappa(t) = \frac{1}{\langle t \rangle} = \frac{\beta - 1}{\tau_0}
\end{equation}
with $\langle t \rangle$ being the mean waiting time between two resetting events.
\section{Mean number of steps and the MSD}
\label{sec:MSD}
\subsection{MSD of free CTRW}
The MSD in a free CTRW is proportional to the mean number of steps \cite{sokbook}
\begin{equation}
\langle x^2(t) \rangle = a^2 \langle n(t) \rangle
\label{eq:MSDCTRW}
\end{equation}
where $a^2$ is the mean squared displacement in a single step.
The mean number of steps performed up to time $t$ can be obtained as the integral of the stepping rate (Eq.~\ref{rate}):
\begin{equation}
\langle n(t) \rangle = \int_0^t \mu(t') dt'
\label{eq:meann}
\end{equation}
For $\alpha < 1$ it is equal to
\begin{equation}\label{nal1}
\langle n(t) \rangle = \frac{\sin \pi \alpha}{\pi \alpha} \left(\frac{t}{t_0} \right)^\alpha
\end{equation}
and for $\alpha > 1$
\begin{equation}\label{nag1}
\langle n(t) \rangle \simeq (\alpha -1)\frac{t}{t_0}
\end{equation}
Both expressions hold for $ t \gg t_0$.
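These asymptotics are straightforward to check against a direct simulation of the renewal process; a possible sketch (with the same inverse-transform sampler for the waiting times of Eq.~(\ref{eq:CTRWpar}) as above, all names ours):
\begin{verbatim}
import numpy as np

def mean_steps(alpha, t0, t_max, walkers=2000, seed=1):
    # Average number of renewal events up to t_max over many realizations.
    rng = np.random.default_rng(seed)
    counts = np.empty(walkers)
    for i in range(walkers):
        t, n = 0.0, 0
        while True:
            t += t0 * (rng.uniform()**(-1.0 / alpha) - 1.0)
            if t > t_max:
                break
            n += 1
        counts[i] = n
    return counts.mean()

alpha, t0, t_max = 0.5, 1.0, 10**4
print(mean_steps(alpha, t0, t_max))
# prediction <n(t)> = sin(pi alpha)/(pi alpha) * (t/t0)^alpha for alpha < 1:
print(np.sin(np.pi * alpha) / (np.pi * alpha) * (t_max / t0)**alpha)
\end{verbatim}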
The coefficient of anomalous diffusion $K_\alpha$ is normally defined via
\begin{equation}
\langle x^2(t) \rangle = 2 K_\alpha t^\alpha,
\end{equation}
so that for $\alpha<1$
\begin{equation}
K_\alpha = \frac{1}{2} \frac{\sin \pi \alpha}{\pi \alpha} \frac{a^2 }{t_0^\alpha}. \label{eq:DiffKoeff}
\end{equation}
and for $\alpha>1$ the (normal) diffusion coefficient reads
\begin{equation}
K_\alpha = K_1 = \frac{\alpha -1}{2} \frac{a^2}{t_0}.
\end{equation}
\subsection{MSD of CTRW with resetting}
For a non-biased CTRW with resetting the mean squared displacement, both for the aged and for the non-aged situation, is proportional to the mean number of steps made during the observation time \cite{SoBluKla},
\begin{equation}\label{xan}
\langle x^2 (t) \rangle = a^2 \langle \langle n_{1,2}( \Delta t ) \rangle \rangle,
\end{equation}
where $a^2$ is the mean squared length of a step, and the indices $1$ and $2$ denote the complete and the incomplete resetting (see Fig.~\ref{fig:Events}). The double average on the right-hand side of Eq.~(\ref{xan}) is taken over the realizations of the directing process of the CTRW (i.e. over the CTRW waiting times), and
over the duration
$\Delta t$ of the period between the last resetting and the observation time.
For given $t_r$ (or $\Delta t$),
the mean number of steps $\langle n_1 (\Delta t) \rangle$ for the complete resetting (the average over all possible realizations of the waiting times of the directing process of the CTRW)
is given by
\begin{equation}
\langle n_1 (\Delta t) \rangle = \langle n(\Delta t) \rangle
\end{equation}
with $\langle n(\Delta t) \rangle$ given by Eq.(\ref{eq:meann}). For the incomplete resetting we have for the same single average (see Fig.~\ref{fig:Events}, panel (2))
\begin{equation}
\langle n_2 (\Delta t) \rangle = \langle n(t) \rangle - \langle n(t_r) \rangle.
\label{eq:MMDeltat}
\end{equation}
The double means we are interested in are obtained by averaging these means over the distribution of $\Delta t$ or $t_r$. For the first case of complete resetting we obtain
\begin{equation}\label{n1}
\langle \langle n_1( \Delta t ) \rangle \rangle = \int_0^t \langle n( \Delta t ) \rangle p_1(\Delta t |t) d \Delta t
\end{equation}
Here $\langle n_1( \Delta t ) \rangle$ is given by Eq.~(\ref{nal1}) for $\alpha<1$ and by Eq.~(\ref{nag1}) for $\alpha>1$.
For the second case of incomplete resetting we get
\begin{equation}\label{n2}
\langle \langle n_2( \Delta t ) \rangle \rangle = \langle n( t ) \rangle- \int_0^t \langle n(t_r) \rangle p_2(t_r|t) d t_r
\end{equation}
The PDF $p_2(t_r|t)$ of the time $t_r$ of the last resetting before the observation at time $t$ is given by
\begin{equation}\label{p2initial}
p_2(t_r|t) = \kappa(t_r) \Phi(t-t_r).
\end{equation}
The meaning of this equation is as follows: $\kappa(t_r)dt_r$ is the probability to have a resetting event between $t_r$ and $t_r + dt_r$, and $\Phi(t-t_r)$ is
the probability that no resetting event took place afterwards (Eq.~\ref{survres}).
The distribution of the duration $\Delta t = t-t_r$ of the part of the CTRW observed after the resetting follows by a change of variables:
\begin{equation}\label{p1initial}
p_1(\Delta t|t) = \kappa(t-\Delta t) \Phi(\Delta t).
\end{equation}
The information about the mean number of steps will also be important for the next section, Sec.~\ref{sec:PDFs}: the calculation of the probability density functions there is performed under the assumption that this number of steps is large. Only under this condition can the universal (independent of the microscopic parameters) intermediate asymptotics appear.
\subsection{Mean number of steps for $0 < \beta < 1$ }
The distribution of $\Delta t$ at given $t$ for the complete resetting (case 1) is obtained by inserting Eq.~(\ref{kappab0}) and Eq.~(\ref{survres}) into Eq.~(\ref{p1initial}) and for longer $\Delta t$ becomes independent of $\tau_0$:
\begin{equation}\label{p1b0}
p_1(\Delta t | t) \simeq \frac{\sin \pi \beta}{\pi} (t-\Delta t)^{\beta -1} \Delta t^{-\beta}.
\end{equation}
For the case 2 of the incomplete resetting the distribution of the aging time $t_r$ for given $t$ is given by a similar expression
\begin{equation}\label{p2b0}
p_2(t_r | t) \simeq \frac{\sin \pi \beta}{\pi} (t_r)^{\beta -1} (t-t_r)^{-\beta}.
\end{equation}
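Both densities are of the generalized arcsine form and are easy to probe by simulating the bare resetting process and recording the epoch of the last resetting before the observation time; a short sketch (all names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def last_reset_before(t_obs, beta, tau0):
    # Renewal process with power-law waiting times; returns the epoch t_r
    # of the last resetting before t_obs (0.0 if no resetting occurred).
    t_r, t = 0.0, 0.0
    while True:
        t += tau0 * (rng.uniform()**(-1.0 / beta) - 1.0)
        if t > t_obs:
            return t_r
        t_r = t

beta, tau0, t_obs = 0.5, 0.01, 100.0
tr = np.array([last_reset_before(t_obs, beta, tau0) for _ in range(10**5)])
# The histogram of tr/t_obs should follow the generalized arcsine law
# (sin(pi beta)/pi) * x^(beta - 1) * (1 - x)^(-beta), cf. Eq. (p2b0).
\end{verbatim}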
\subsubsection{Subdiffusion with $0 < \alpha <1$ }
For the subdiffusive CTRW with \textbf{complete resetting} we insert Eq.~(\ref{nal1}) and Eq.~(\ref{p1b0}) into Eq.~(\ref{n1}) and obtain after straightforward algebra
\begin{equation}
\langle \langle n_{1}( \Delta t ) \rangle \rangle = \left(\frac{t}{t_0} \right)^\alpha \frac{\sin \pi \alpha}{\pi \alpha} \frac{\sin \pi \beta}{\pi} \mathrm{B}(\beta, 1+\alpha - \beta).
\end{equation}
For the subdiffusive CTRW with \textbf{incomplete resetting} we introduce Eq.~(\ref{nal1}) and Eq.~(\ref{p2b0}) into Eq.~(\ref{n2}) and get
\begin{equation}
\langle \langle n_{2}(t) \rangle \rangle = \frac{\sin \pi \alpha}{\pi \alpha} \left( \frac{t}{t_0} \right)^\alpha \left[1 - \frac{\sin \pi \beta}{\pi} \mathrm{B}(\alpha+ \beta,1-\beta) \right].
\end{equation}
The fact that resetting with $0 < \beta < 1$ does not change the power-law behavior of the MSD is analogous to
the corresponding observation for SBM. \\
\subsubsection{Normal diffusion with $\alpha \geq 1$ }
Let us first consider the case of \textbf{complete resetting}. Inserting Eq.~(\ref{nag1}) and Eq.~(\ref{p1b0}) into Eq.~(\ref{n1}) we get
\begin{equation}\label{n1nice}
\langle \langle n_{1}( \Delta t ) \rangle \rangle = \frac{\alpha - 1}{t_0} \frac{\sin \pi \beta}{\pi}\int_0^t \Delta t^{1 - \beta} (t- \Delta t)^{\beta -1} d \Delta t.
\end{equation}
Changing the variable of integration to $\xi = \Delta t /t$ we obtain
\begin{equation}
\langle\langle n_{1}( \Delta t ) \rangle\rangle =\frac{\alpha - 1}{t_0} \frac{\sin \pi \beta}{\pi} t \mathrm{B}(\beta, 2 - \beta),
\end{equation}
where the beta function
\begin{equation}
\mathrm{B}(\beta, 2 - \beta) = \int_0^1 \xi^{\beta - 1}(1-\xi)^{1-\beta} d \xi
\end{equation}
is equal to
\begin{equation}
\mathrm{B}(\beta, 2 - \beta) = \frac{\pi (1-\beta)}{\sin \pi \beta}\,.
\end{equation}
In such a way we get
\begin{equation}
\langle \langle n_{1}( \Delta t ) \rangle \rangle = (1-\beta) (\alpha-1)\frac{t}{t_0}.
\end{equation}
For the \textbf{incomplete resetting} substitution of Eq.~(\ref{nag1}) and Eq.~(\ref{p2b0}) into Eq.~(\ref{n2}) leads, perhaps surprisingly, to the same result
\begin{equation}
\langle \langle n_{2}( \Delta t ) \rangle \rangle =\langle \langle n_{1}( \Delta t ) \rangle \rangle = (1-\beta) (\alpha-1)\frac{t}{t_0}.
\end{equation}
In both cases the mean number of steps grows with the observation time, so that the intermediate asymptotics in $x$ discussed in the next section indeed appears at long times.
\subsection{Mean number of steps for $\beta > 1$}
In this case the rate of resetting events is time-independent, so that
\begin{equation}\label{p1b1}
p_1(\Delta t|t) = \frac{\beta -1}{\tau_0^{1-\beta}} \frac{1}{(\tau_0 + \Delta t)^\beta}
\end{equation}
and
\begin{equation}\label{p2b1}
p_2(t_r | t) = \frac{\beta -1}{\tau_0^{1-\beta}} \frac{1}{[\tau_0 + t - t_r]^\beta}
\end{equation}
\subsubsection{Subdiffusion with $0 < \alpha <1$ }
In the case of \textbf{complete resetting} we introduce Eq.~(\ref{nal1}) and Eq.~(\ref{p1b1}) into Eq.~(\ref{n1}) and get
\begin{equation}
\langle \langle n_{1}( \Delta t ) \rangle \rangle = t_0^{-\alpha} \frac{\sin \pi \alpha}{\pi \alpha} \frac{\beta -1}{\tau_0^{1-\beta}} t^{\alpha - \beta + 1} I_1(\alpha,\beta; z),\end{equation}
where $\xi = \Delta t / t$, $z = \tau_0 / t$ and the integral
\begin{equation}\label{I1}
I_1(\alpha,\beta;z) = \int_0^1 \xi^\alpha (z + \xi)^{-\beta} d \xi
\end{equation}
will repeatedly appear in our calculations, and its asymptotic behavior
in different domains of parameters is discussed in Appendix A.
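The limiting behavior of $I_1$ can also be checked by direct numerical quadrature: for $\alpha > \beta - 1$ the $z \to 0$ limit is simply $\int_0^1 \xi^{\alpha-\beta} d\xi = 1/(1+\alpha-\beta)$. A minimal sketch:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I1(alpha, beta, z):
    # I1(alpha, beta; z) = int_0^1 xi^alpha (z + xi)^(-beta) dxi
    val, _ = quad(lambda xi: xi**alpha * (z + xi)**(-beta), 0.0, 1.0)
    return val

alpha, beta = 0.5, 1.2   # here alpha > beta - 1
for z in (1e-2, 1e-4, 1e-6):
    print(z, I1(alpha, beta, z), 1.0 / (1.0 + alpha - beta))
\end{verbatim}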
For $\alpha > \beta - 1$ the function $I_1(\alpha,\beta;z)$ tends to the constant $1/(1+\alpha-\beta)$ (see Eq.(\ref{eq:AB1}) in Appendix A) and at large
$t$ we have
\begin{equation}
\langle \langle n_{1}( \Delta t ) \rangle \rangle \simeq t_0^{-\alpha} \tau_0^{\beta-1} \frac{\beta - 1}{1 + \alpha - \beta} \frac{\sin \pi \alpha}{\pi \alpha} t^{\alpha - \beta
+1}.
\end{equation}
For $\alpha < \beta -1$ and large $t$ the behavior is different, see Eq.(\ref{eq:AB2}), and the MSD stagnates:
\begin{equation}
\langle \langle n_{1}( \Delta t ) \rangle \rangle \simeq C \left( \frac{\tau_0}{t_0} \right)^\alpha
\end{equation}
with
\begin{equation}
C= \frac{\sin \pi \alpha}{\pi \alpha} (\beta -1) \mathrm{B}(\alpha +1, \beta - \alpha -1).
\end{equation}
The stagnant number of steps is large only if $\tau_0 \gg t_0$; only in this case can any universal behavior of the PDF be anticipated. \\
On the other hand, \textbf{for the incomplete resetting} we substitute Eq.~(\ref{nal1}) and Eq.~(\ref{p2b1}) into Eq.~(\ref{n2}) and introduce new variables $z=\tau_0/t$ and $\zeta= 1-t_r/t$:
\begin{equation}\label{n2w}
\langle \langle n_2(t) \rangle \rangle = \frac{\sin \pi \alpha}{\pi \alpha} \left(\frac{t}{t_0} \right)^\alpha \left(1 - t^ {1 -\beta}\frac{\beta-1}{\tau_0^{1-\beta}}I_0 \right).
\end{equation}
The integral
\begin{equation}
I_0=\int_0^1 (1-\zeta)^\alpha (z + \zeta)^{-\beta} d\zeta
\end{equation}
diverges at its lower limit for $z \to 0$. Close to this limit the first multiplier in the integrand can be set to unity, and therefore
\begin{equation}
I_0\simeq \int_0^1 (z + \zeta)^{-\beta} d\zeta \simeq \frac{z^{1-\beta}}{\beta-1}
= \frac{\tau_0^{1 - \beta}}{t^{1-\beta} (\beta -1)}.
\end{equation}
In contrast to the case $\beta < 1$, the second term in the brackets in Eq.~(\ref{n2w}) converges to unity for $t \to \infty$, and the main asymptotics of the expression comes from subleading terms. The reason is that for $\beta > 1$ the PDF $p_2(t_r|t)$ is very strongly peaked at $t_r \approx t$, and the difference between $t_r$ and $t$ is typically small.
The way to circumvent the calculation of the subleading terms is as follows. Introducing $\Delta t = t-t_r$ we may now expand the expression
Eq.~(\ref{eq:MMDeltat}), with the substitution from Eq.~(\ref{nal1}), in $\Delta t$ and write
\begin{equation}
\langle n_2( \Delta t ) \rangle\simeq \frac{\sin \pi \alpha}{\pi} t_0^{-\alpha} t^{\alpha-1} \Delta t.
\end{equation}
Averaging this over the distribution of $\Delta t$, Eq.~(\ref{p1b1}), and introducing the variable $z=\tau_0/t$, we get
\begin{equation}
\langle \langle n_2(t) \rangle \rangle =
\frac{\sin \pi \alpha}{\pi} \frac{t^{\alpha -1}}{t_0^\alpha} \frac{\beta-1}{\tau_0^{1-\beta}} t^{2 - \beta} I_1(1,\beta;z).
\label{eq:steps2}
\end{equation}
where the integral $I_1$ is defined in Eq.~(\ref{I1}). According to Eq.(\ref{eq:AB1}) we thus get for $\beta < 2$
\begin{equation}
\langle \langle n_2(t) \rangle \rangle = \frac{\sin \pi \alpha}{\pi} t_0^{-\alpha} \tau_0^{\beta-1} \frac{\beta-1}{2-\beta} t^{1+\alpha - \beta}.
\end{equation}
Depending on the relation between $\alpha$ and $\beta$ this may be a decaying or a growing function of $t$. Thus, for $\alpha > \beta -1$, $\langle \langle n(t) \rangle \rangle$
grows monotonically at longer times, and the typical number of steps will be large. In the opposite case the number of steps decays at longer times, and can be large only in the intermediate time
domain
\begin{equation}
t \ll \left(\frac{\tau_0^{\beta-1}}{t_0^{\alpha}}\right)^\frac{1}{\beta - 1-\alpha} = \tau_0 \left(\frac{\tau_0}{t_0} \right)^{\frac{\alpha}{\beta - 1 -\alpha}}.
\end{equation}
Noting that our asymptotic discussion is only valid for $t_0,\tau_0 \ll t$, the necessary condition of the existence of large $\langle \langle n(t) \rangle \rangle$ is
\begin{equation}
t_0,\tau_0 \ll \tau_0 \left(\frac{\tau_0}{t_0} \right)^\frac{\alpha}{\beta - 1-\alpha}
\end{equation}
which would hold for $t_0 \ll \tau_0$. \\
For $\beta > 2$ we have
\begin{equation}
\langle \langle n_2(t) \rangle \rangle \simeq \frac{\sin \pi \alpha}{\pi} \frac{t^{\alpha -1}}{t_0^\alpha} \frac{\beta-1}{\tau_0^{1-\beta}} t^{2 - \beta} z^{2-\beta} \mathrm{B}(2,\beta - 2) = C_1 \frac{\tau_0\, t^{\alpha
-1}}{t_0^\alpha},
\end{equation}
which is a decaying function of $t$. To get an intermediate domain in which $\langle \langle n(t) \rangle \rangle \gg 1$ together with
$t_0, \tau_0 \ll t$ one again needs to choose $\tau_0 \gg t_0$. \\
\subsubsection{Normal diffusion with $\alpha \geq 1$ }
For the case $\alpha > 1$ we have for \textbf{complete resetting}
\begin{equation}
\langle \langle n_{1}(t) \rangle \rangle = \frac{\alpha -1}{t_0}\frac{\beta - 1}{\tau_0^{1-\beta}} \int_0^t \frac{\Delta t}{(\tau_0 + \Delta t)^\beta} d\Delta t.
\end{equation}
Changing the variable of integration to $\xi = \Delta t/t$ and taking $z= \tau_0/t$ leads to
\begin{equation}
\langle \langle n_{1}(t) \rangle \rangle = \frac{\alpha -1}{t_0}\frac{\beta - 1}{\tau_0^{1-\beta}} t^{2 - \beta} I_1(1, \beta; z)\,,
\end{equation}
where the integral $I_1$ is defined in terms of Eq.~(\ref{I1}). The result depends on whether $1 < \beta < 2$ or $\beta > 2$. For $\beta < 2$ Eq.(\ref{eq:AB1}) applies with
\begin{equation}
\langle \langle n_{1}(t) \rangle \rangle = \frac{\alpha -1}{t_0}\frac{\beta - 1}{(2-\beta)\tau_0^{1-\beta}} t^{2 - \beta} .
\end{equation}
For $\beta > 2$ we have for $z \to 0$
\begin{equation}
\langle \langle n_{1}(t) \rangle \rangle = \mathrm{B}(2,\beta - 2) (\alpha -1)(\beta - 1) \frac{\tau_0}{t_0},
\end{equation}
i.e. $\langle \langle n_{1}(t) \rangle \rangle $ tends to a constant which is large provided $\tau_0 \gg t_0$.\\
For the \textbf{incomplete resetting} we obtain the same result, $\langle \langle n_{2}(t) \rangle \rangle=\langle \langle n_{1}(t) \rangle \rangle$.
\begin{figure}[tbp]
\begin{center}
\scalebox{0.4}{\includegraphics{MSD_C.eps} } \\
\scalebox{0.4}{\includegraphics{MSD_I.eps} }
\caption{(Color online) The time dependence of the MSD in different domains of the parameters $\alpha$ and $\beta$ for
complete and for incomplete resetting. Note that these dependencies are the same as for the mean-field model, the scaled Brownian motion (SBM) \cite{AnnaNonrenewal,AnnaRenewal}.
The case of complete resetting corresponds to the renewal \cite{AnnaRenewal}, and the case of incomplete resetting to the non-renewal \cite{AnnaNonrenewal} case for the SBM.
\label{fig:MSD}}
\end{center}
\end{figure}
\subsection{Asymptotic of the mean number of steps}
The Table \ref{tab:3} represents the time domains in which the mean number of steps is much larger than unity. Here the notation is as follows:
If $\langle \langle n(t) \rangle \rangle \to \infty$ in the limit $t \to \infty$, the behavior is called asymptotic. In other cases $\langle \langle n(t) \rangle \rangle \gg 1$
only when $\tau_0 \gg t_0$. This may take place at any value of $t$ provided it is large enough, $t \gg t_0, \tau_0$ or only in some
domain of $t$ bounded from above. In the first case we will say that the behavior is independent of $t$, and in the second case that the behavior is transient.
These results will be of use in Section IV.
\begin{table}[h!]
\caption{Conditions for $\langle \langle n(t) \rangle \rangle \gg 1$. \label{tab:3}}
\begin{center}
\begin{tabular}{|c|c|c|c |} \hline
$\beta$ & $\alpha$ & complete & incomplete \\
& & resetting & resetting \\
\hline \hline
$0< \beta < 1$ & all $\alpha$ & $t \gg \tau_0, t_0$ & $t \gg \tau_0, t_0$ \\
& & (asymptotic) & (asymptotic) \\
\hline
$1 < \beta < 2$ & $\alpha > \beta - 1$ & $t \gg \tau_0, t_0$ & $t \gg \tau_0, t_0$ \\
& & (asymptotic) & (asymptotic) \\
\cline{2-4}
& $ \alpha < \beta - 1$ & $\tau_0 \gg t_0$ & $\tau_0 \gg t_0$ \\
& & (all $t$) & $\tau_0 \ll t \ll \tau_0 \left(\frac{\tau_0}{t_0} \right)^{\frac{\alpha}{\beta - 1 - \alpha}} $\\
& & & (transient) \\
\hline
$ 2 < \beta$ & $\alpha < 1$ & $\tau_0 \gg t_0$ & $\tau_0 \gg t_0$ \\
& & (all $t$) & $\tau_0 \ll t \ll \tau_0 \left(\frac{\tau_0}{t_0} \right)^{\frac{\alpha}{1 - \alpha}}$ \\
& & & (transient) \\
\cline{2-4}
& $\alpha > 1$ & $\tau_0 \gg t_0$ & $\tau_0 \gg t_0$ \\
& & (all $t$) & (all $t$) \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Mean squared displacement: numerical results}
After $ \langle \langle n_{1,2}(t) \rangle \rangle$ are found, the behavior of the MSD $\langle x^2 (t) \rangle$ follows from Eq.(\ref{eq:MSDCTRW}). The overview of all possible regimes
of the MSD is provided in Fig. \ref{fig:MSD}.
The analytical results for the MSD are confirmed by numerical simulations. For each realization of the process we generate random numbers $s_i$ distributed according to Eq.~(\ref{eq:CTRWpar}) for the CTRW
waiting times and random numbers $r_i$ distributed according to Eq.~(\ref{eq:respar}) for the resetting waiting times. We take $t_0=1$, corresponding to $K_\alpha = 0.318$ (according to Eq.~\ref{eq:DiffKoeff}). The values of $\tau_0$ differ between the simulations and are
given explicitly in the captions or in the legends. The times of the steps are then obtained as $t_1 = s_1$, $t_n = t_{n-1} + s_n$, and the procedure is stopped when $t_n$ exceeds the maximal simulation time $T$.
The resetting times $r_n$ are generated in a similar manner. In the simulation of the CTRW the time, starting from $t=0$, is increased in increments of $\Delta t$, and it is checked
whether a jump or a resetting event falls into the corresponding time interval. In the first case the walker performs a jump of
length $\Delta x = 1$ either to the right or to the left with equal probability. In the second case the coordinate of the walker is set to zero.
Fig.~\ref{Gxyzpowres} displays three trajectories of the CTRW with power-law waiting time density and power-law resetting in the case of incomplete resetting.
The simulations reported in the other figures are performed with $10^5$ walkers.
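A condensed Python sketch of this event-driven procedure (not the production code used for the figures; all names are ours) may look as follows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def power_law_time(c, scale):
    # waiting time with density c * scale^c / (scale + t)^(1 + c)
    return scale * (rng.uniform()**(-1.0 / c) - 1.0)

def final_position(T, alpha, beta, t0=1.0, tau0=1.0, complete=True):
    # Position at time T of a CTRW under power-law resetting.
    x = 0
    t_jump = power_law_time(alpha, t0)  # epoch of the next jump
    t_res = power_law_time(beta, tau0)  # epoch of the next resetting
    while min(t_jump, t_res) < T:
        if t_jump < t_res:              # jump of unit length
            x += rng.choice((-1, 1))
            t_jump += power_law_time(alpha, t0)
        else:                           # resetting: coordinate -> 0
            x = 0
            if complete:                # complete resetting draws a new
                t_jump = t_res + power_law_time(alpha, t0)  # waiting time
            t_res += power_law_time(beta, tau0)
    return x  # incomplete resetting keeps the running waiting period

msd = np.mean([final_position(1e3, 0.5, 0.5)**2 for _ in range(10**4)])
print(msd)
\end{verbatim}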
\begin{figure}
\begin{center}
\scalebox{0.3}{\includegraphics{Gxyzpowres.eps} }
\caption{Typical trajectories for CTRW with power-law waiting time density for jumps and the power-law distribution of
waiting times for resetting (incomplete resetting case). The parameters are $\beta=0.5$, $\alpha=0.5$, $\tau_0=1$. \label{Gxyzpowres} }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{0.3}{\includegraphics{Gpowpow5.eps}} \\
\scalebox{0.3}{\includegraphics{Gpowpow15.eps}}
\caption{Mean squared displacement $\left\langle
x^2(t)\right\rangle$ for the CTRW with the power-law waiting time density with $\alpha=0.5$ and power-law resetting with $\tau_0=100$ and a) $\beta=5$, b) $\beta=0.5$, $1.4$, $1.5$, and $1.6$.
Dashed lines show the corresponding theoretical asymptotics. \label{Gpowpow}}
\end{center}
\end{figure}
In Fig.~\ref{Gpowpow} we show the simulation results for the MSD for both complete and incomplete resetting in a broad domain of parameters.
These simulations confirm that the corresponding asymptotics are the same as for scaled Brownian motion, see Ref. \cite{AnnaRenewal} for
the renewal case, corresponding to complete resetting, and Ref. \cite{AnnaNonrenewal} for the non-renewal case (incomplete resetting).
\section{Probability density functions}
\label{sec:PDFs}
\subsection{Asymptotic forms of the CTRW Green's functions}
The standard variant of the CTRW is the Scher-Montroll ``wait first'' scheme, starting from the waiting time.
The PDF in the coordinate-time representation is
\begin{equation}
P_w(x,t) = \sum_{n=0}^\infty P_n(x)\chi_n(t)
\end{equation}
where $P_n(x)$ is the PDF of the position after $n$ steps of a simple random walk, and $\chi_n(t)$ is the probability that exactly $n$ steps are taken up to the time $t$.
The functions $\chi_n$ take a very simple form in the Laplace domain,
\begin{equation}
\chi_n(s) = \psi^n(s) \frac{1-\psi(s)}{s},
\end{equation}
and the functions $P_n(x)$ in Fourier domain read $P_n(k) = \lambda^n(k)$, where $\lambda(k)$ is the characteristic function of the displacement distribution in a single step.
The PDF of the walker's position in the Fourier-Laplace representation
for this scheme is given by
\begin{equation}
p_w(k,s) = \frac{1}{1 - \lambda(k) \psi(s)} \frac{1-\psi(s)}{s}.
\end{equation}
Another scheme, the ``jump first'' one, differs only in the fact that the walk starts not from a waiting time but from a jump at $t=0$, so that
\begin{equation}
P_j(x,t) = \sum_{n=0}^\infty P_{n+1}(x)\chi_n(t),
\end{equation}
and
\begin{equation}
p_j(k,s) = \frac{\lambda(k)}{1 - \lambda(k) \psi(s)} \frac{1-\psi(s)}{s}.
\end{equation}
Assuming the steps to be symmetric and to have a finite second moment ($\lambda(k) \simeq 1 - a^2 k^2/2$) and the waiting times to follow a power law,
$\psi(s) \simeq 1 - \Gamma(1-\alpha) t_0^\alpha s^\alpha$, we get in both cases in the lowest order in $k$ and $s$ (i.e. in the continuous limit of long times and large scales)
the same asymptotic expression
\begin{equation}
p(k,s) = \frac{s^{\alpha - 1}}{k^2 [a^2 / 2 \Gamma(1-\alpha)t_0^\alpha]+ s^\alpha}.
\label{eq:GFasy}
\end{equation}
The combination $\tilde{K}_\alpha = a^2 / 2 \Gamma(1-\alpha)t_0^\alpha$ of the specific parameters of the walk is related to the coefficient of the anomalous diffusion $K_\alpha$ defined in
Eq.(\ref{eq:DiffKoeff}), $\tilde{K}_\alpha = \Gamma(\alpha) K_\alpha$.
The limiting form of the Green's function of CTRW is given by the inverse Fourier-Laplace transform of Eq.(\ref{eq:GFasy}),
and reads
\begin{equation}
G_{1,2}(x,t)=P_{w,j}(x,t) = \frac{1}{2 \sqrt{\tilde{K}_\alpha t^\alpha}} M_{\alpha / 2} \left(\frac{ |x|}{ \sqrt{\tilde{K}_\alpha t^\alpha} }\right).
\label{eq:Green}
\end{equation}
with
\begin{equation}
M_{\alpha / 2}(y) = \sum_{n=0}^\infty \frac{(-y)^n}{n! \Gamma[-\alpha n/2 + (1-\alpha/2)]}
\end{equation}
being the Mainardi function, see Appendix B.
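The series provides a convenient way to evaluate the Green's function numerically at moderate arguments. A small sketch, using the reciprocal gamma function (which is entire, so the poles of $\Gamma$ cause no trouble) and checked against the known special case $M_{1/2}(y) = \exp(-y^2/4)/\sqrt{\pi}$:
\begin{verbatim}
import numpy as np
from scipy.special import rgamma, factorial

def mainardi(nu, y, nmax=200):
    # Truncated series M_nu(y) = sum_n (-y)^n / (n! Gamma(-nu n + 1 - nu)).
    # Reliable for moderate y; for very large y the asymptotic (squeezed
    # exponential) form should be used instead.
    n = np.arange(nmax)
    terms = (-y)**n / factorial(n) * rgamma(-nu * n + 1.0 - nu)
    return terms.sum()

y = 1.3
print(mainardi(0.5, y), np.exp(-y**2 / 4) / np.sqrt(np.pi))
\end{verbatim}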
For $|x|$ large compared to $\sqrt{\tilde{K}_\alpha} t^{\alpha / 2}$ the function $G(x,t)$ shows a squeezed exponential tail, Eq.(\ref{eq:Masympt}).
For $|x|$ small compared to $\sqrt{\tilde{K}_\alpha} t^{\alpha / 2}$ the function $G(x,t)$ shows a cusp at zero
which disappears only in the Gaussian case $\alpha = 1$. The asymptotic form, Eq.(\ref{eq:Green}), applies when the number of steps performed during the time $t$ is large.
Thus, for $|x| \ll \sqrt{\tilde{K}_\alpha} t^{\alpha/2}$ the Green's function tends to $G(0, t) = C_1 t^{-\alpha/2}$, while for $|x| \gg \sqrt{\tilde{K}_\alpha} t^{\alpha/2}$
the leading asymptotics of the Green's function is
\begin{equation}
G(x,t) \simeq C_2 t^{-\frac{\alpha}{2(2-\alpha)}} |x|^\frac{2 \alpha - 2}{2 (2-\alpha)} \exp \left(-C_3 \frac{|x|^{\frac{2}{2-\alpha}}}{t^{\frac{\alpha}{2-\alpha} }}\right).
\end{equation}
We will never need the exact form of the Green's function but only its similarity form, Eq.(\ref{eq:Green}), combined with the fact that the
Mainardi function decays rapidly at infinity.
According to our discussion accompanying Fig.~\ref{fig:Events}, the PDF of the CTRW under resetting, $P(x,t)$, is given by a mixture of the PDFs (Green's functions) $G_{1,2}(x,\Delta t)$
of the
wait-first or jump-first CTRW in the cases (1) and (2), respectively. These Green's functions are weighted with the
probability density $p_1(\Delta t |t)$ of the observed duration $\Delta t$ of the corresponding walk, conditioned on the observation time $t$, in the case (1), or with the probability
density $p_2(\Delta t' | t)$ of the time $\Delta t'$ elapsed between the first step of the walk after the resetting and the end of the observation (\textit{vide infra}).
Thus,
\begin{equation}
P_{1}(x,t) = \int_0^t G_{1}(x,\Delta t) p_1(\Delta t | t) d \Delta t
\label{eq:General1}
\end{equation}
and
\begin{equation}
P_{2}(x,t) = \int_0^t G_{2}(x,\Delta t') p_{2}(\Delta t' | t) d \Delta t'.
\label{eq:General_WF}
\end{equation}
Let us assume that the PDF $p(\Delta t | t) = p_{1,2}(\Delta t | t)$ of the CTRW duration $\Delta t$ (or $\Delta t'$) follows a power law
in some domain of $\Delta t$, i.e. possesses an intermediate asymptotics
\begin{equation}
p(\Delta t | t) = A(t) \Delta t^{-\gamma}
\label{eq:funct}
\end{equation}
with $\gamma > 0$ in the domain $t_{\min} \ll \Delta t \ll t$. Then the corresponding intermediate asymptotics of $P(x, t)$
will be
\begin{eqnarray}
P(x, t) &=& A(t) \int_0^t d \Delta t \; \Delta t^{-\gamma} \frac{1}{2 \sqrt{\tilde{K}_\alpha} \Delta t^{\alpha / 2}} \times \nonumber \\
&& M_{\alpha / 2} \left(\frac{|x|}{\sqrt{\tilde{K}_\alpha} \Delta t^{\alpha / 2} } \right) .
\end{eqnarray}
Introducing the scaling variable
\begin{equation}
\xi = \frac{|x|}{\sqrt{\tilde{K}_\alpha} \Delta t^{\alpha / 2} }
\end{equation}
we rewrite the last expression as
\begin{eqnarray}
P(x, t) &=& A(t) \tilde{K}_\alpha^{\frac{\gamma -1}{\alpha}} |x|^{-1 - \frac{2(\gamma -1)}{\alpha}} \times \nonumber \\
&& \int_{\frac{|x|}{\sqrt{\tilde{K}_\alpha} t^{\alpha/2}}}^\infty \xi^{\frac{2(\gamma - 1)}{\alpha}} M_{\alpha/2}(\xi) d\xi.
\label{eq:General2}
\end{eqnarray}
The existence of the intermediate power-law asymptotics in $x$ (i.e. of the universal behavior for $|x| \ll \sqrt{\tilde{K}_\alpha} t^{\alpha / 2}$) corresponds to situations
when the integral stays convergent when its lower limit tends to zero, i.e. for $2(\gamma - 1)/\alpha > -1$, or, in other words, for
\begin{equation}
\gamma > 1 - \frac{\alpha}{2}.
\label{eq:mainneq}
\end{equation}
In the opposite case the integral for small $|x|$ is dominated by its behavior on the lower limit of integration, where the Mainardi function tends to a
constant, so that $P(x,t) \propto const \cdot A(t) t^{1 -\gamma - \alpha/2} x^0$, i.e. develops a flat top.
Therefore the intermediate power-law asymptotics of the PDF exists for $\gamma > 1 - \alpha / 2$ and is given by
\begin{equation}
P(x,t) \propto A(t) |x|^{-1 - \frac{2(\gamma -1)}{\alpha}}.
\label{eq:PrAs}
\end{equation}
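This prediction can be probed by evaluating the mixture, Eq.~(\ref{eq:General1}), by quadrature, using the \texttt{mainardi} routine sketched above. The following sketch does so for complete resetting with $\beta < 1$ (where, as shown below, $\gamma = \beta$), with $\tilde{K}_\alpha$ set to unity and the Green's function cut off where the truncated series becomes unreliable but its value is negligible anyway; finite-$t$ corrections shift the measured slope somewhat:
\begin{verbatim}
import numpy as np

def green(x, dt, alpha):
    # G(x, dt) for K_alpha = 1; cutoff where the Mainardi series fails
    xi = abs(x) / dt**(alpha / 2)
    return 0.0 if xi > 10 else mainardi(alpha / 2, xi) / (2 * dt**(alpha / 2))

def P1(x, t, alpha, beta, npts=4000):
    # complete resetting, beta < 1: p1 ~ (t - dt)^(beta-1) dt^(-beta);
    # logarithmic grid in dt, since small dt dominate for small |x|;
    # the overall normalization of p1 cancels in the ratio below
    u = np.linspace(np.log(1e-6 * t), np.log(0.999 * t), npts)
    dt = np.exp(u)
    w = (t - dt)**(beta - 1) * dt**(-beta) * dt   # measure: d(dt) = dt du
    g = np.array([green(x, d, alpha) for d in dt])
    return (w * g).sum() / w.sum()

alpha, beta, t = 0.8, 0.9, 1e4
x1, x2 = 2.0, 20.0
slope = np.log(P1(x2, t, alpha, beta) / P1(x1, t, alpha, beta)) \
        / np.log(x2 / x1)
print(slope, -1 - 2 * (beta - 1) / alpha)   # expected exponent
\end{verbatim}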
The far asymptotics of large $|x|$ follows (up to power-law prefactors) the squeezed exponential wing of the Mainardi function.
\subsection{Equations for the distributions of observed walk duration}
The PDF of $\Delta t$ in the case of the complete resetting is given by Eq.(\ref{p1initial}) with its two special cases, Eqs.(\ref{p1b0}) and (\ref{p1b1}).
The final results follow from the observation that for $\Delta t \ll t$ in the case (1), for $\beta < 1$,
\begin{equation}
p_1(\Delta t | t) \simeq \frac{\sin \pi \beta}{\pi} t^{\beta -1} \Delta t^{-\beta},
\end{equation}
so that $A(t) \sim t^{\beta -1}$ and $\gamma = \beta$, while for $\beta > 1$
\begin{equation}
p_1(\Delta t | t) \simeq \frac{\beta - 1}{\tau_0^{1-\beta}} \Delta t^{-\beta},
\end{equation}
so that $A(t) = const \cdot \tau_0^{\beta -1}$ and $\gamma = \beta$. \\
The distribution of the duration $\Delta t'$ of the jump-first CTRW in the case (2) of incomplete resetting has not been considered yet. This
CTRW starts after the forward recurrence time $t_f= t_1 - t_r$ following the resetting event, so that its duration is $\Delta t' = t - t_1$.
Given $t_r$ (which is the aging time of the aged CTRW), the distribution of the forward recurrence time $t_f$ is given by \cite{sokbook}
\begin{equation}
\psi_1(t_f|t_r) = \frac{\sin \pi \alpha}{\pi} \left(\frac{t_r}{t_f} \right)^\alpha \frac{1}{t_r+t_f}.
\label{eq:psi1}
\end{equation}
The duration $\Delta t'$ of the following ``jump first'' CTRW is $\Delta t'= t -( t_r + t_f)$ if the sum $t_r + t_f$ does not exceed $t$, and is zero otherwise.
Let us first fix $t_r$ and calculate the conditional PDF $p(\Delta t' | t_r, t)$:
\begin{eqnarray}
p(\Delta t' | t_r,t) &=& \int_0^{t-t_r} dt_f \delta[\Delta t' -(t - t_r-t_f)] \psi_1(t_f|t_r) \nonumber \\
&& + \delta(\Delta t') \int_{t-t_r}^\infty \psi_1(t_f|t_r) dt_f \nonumber \\
&=& \frac{\sin \pi \alpha}{\pi} \left(\frac{t_r}{t - t_r - \Delta t'} \right)^\alpha \frac{1}{t - \Delta t'} \nonumber \\
&& + \delta(\Delta t') \int_{t-t_r}^\infty \psi_1(t_f|t_r) dt_f,
\label{eq:cond1}
\end{eqnarray}
where $\delta(x)$ is the Dirac delta function. The weight of this $\delta$-term, the integral $I(t-t_r) = \int_{t-t_r}^\infty \psi_1(t_f|t_r) dt_f$, is the probability that
no steps of the CTRW were made after the resetting. Now we average the expression Eq.(\ref{eq:cond1}) over $t_r$, which has to lie between 0 and $t-\Delta t'$ if $\Delta t'$ is nonzero:
\begin{eqnarray}
&& p_2(\Delta t' | t) = \int_0^{t-\Delta t'} p(\Delta t' | t_r,t) p_r(t_r | t) dt_r \nonumber \\
&& \qquad =\int_0^{t-\Delta t'} \frac{\sin \pi \alpha}{\pi} \left(\frac{t_r}{t - t_r - \Delta t'} \right)^\alpha \frac{1}{t - \Delta t'} p_r(t_r| t) dt_r \nonumber \\
&& \qquad + \delta(\Delta t') \int_0^t \left[\int_{t-t_r}^\infty \psi_1(t_f|t_r) dt_f \right] p_r(t_r|t) dt_r.
\label{eq:p2}
\end{eqnarray}
The term with zero steps contributes to the overall normalization and corresponds to a delta-peak at the origin in the total PDF. This term does not influence
the wings of the PDF.
We will denote the weight of the $\delta$-function in the last line by $R$.
The explicit form of $R$ for \boldmath $\beta < 1$ \unboldmath is
\begin{eqnarray}
R &=& \frac{\sin \pi \alpha}{\pi }\frac{\sin \pi \beta}{\pi } \times \label{eq:rest} \\
&& \int_0^t t_r^{\beta -1}(t-t_r)^{-\beta}\int_{t-t_r}^\infty t_r^\alpha t_f^{-\alpha} (t_r+t_f)^{-1} dt_f\, dt_r. \nonumber
\end{eqnarray}
We note that the conditional PDF $\psi_1(t_f|t_r)$, Eq.(\ref{eq:psi1}), is normalized for any $t_r$, and therefore $\int_0^t p_r(t_r|t) \left[ \int_0^\infty \psi_1(t_f |t_r) dt_f
\right] dt_r = 1$.
Note that the integrand of the second integral in Eq.(\ref{eq:rest}) is non-negative, so that
\begin{equation}
\int_{t-t_r}^\infty t_r^\alpha t_f^{-\alpha} (t_r+t_f)^{-1} dt_f \leq \int_{0}^\infty t_r^\alpha t_f^{-\alpha} (t_r+t_f)^{-1} dt_f
\end{equation}
and therefore $R \leq 1$, so that the whole double integral has to be convergent (except for the limiting cases
$\alpha =1$ or $\beta = 1$ when the trigonometric prefactors vanish). On the other hand, introducing the new variables $\xi = t_r/t$ and $\eta = t_f/t$ we see that
\begin{eqnarray}
R &=& t^0 \frac{\sin \pi \alpha}{\pi }\frac{\sin \pi \beta}{\pi } \times\\
&& \int_0^1 \xi^{\beta -1} (1-\xi)^{-\beta} \left[ \int_{1-\xi}^\infty \xi^\alpha \eta^{-\alpha} (\xi + \eta)^{-1} d\eta \right] d \xi. \nonumber
\end{eqnarray}
The integral in this expression converges, as we have seen above, is positive, and depends only on parameters $\alpha$ and $\beta$, but not on $t$.
Therefore the weight of the $\delta$-peak tends to a constant in the course of time.
For \boldmath $\beta > 1$ \unboldmath the qualitative result is the same, but the discussion is slightly different. Now
\begin{equation}
p_r(t_r | t) \simeq \frac{\beta - 1}{\tau_0^{1-\beta}} \frac{1}{(\tau_0 + t - t_r)^\beta},
\end{equation}
so that
\begin{eqnarray}
R &=& (\beta - 1) \frac{\sin \pi \alpha}{\pi } \tau_0^{\beta -1} \times \\
&& \int_0^t (\tau_0 + t-t_r)^{-\beta}\int_{t-t_r}^\infty t_r^\alpha t_f^{-\alpha} (t_r+t_f)^{-1} dt_f\, dt_r. \nonumber
\end{eqnarray}
Denoting $\zeta = \tau_0/t$ we write
\begin{eqnarray}
R &=& (\beta - 1) \frac{\sin \pi \alpha}{\pi} \zeta^{\beta -1} \times \\
&& \int_0^1 (1+\zeta - \xi)^{-\beta} \int_{1-\xi}^\infty \xi^\alpha \eta^{-\alpha}(\xi + \eta)^{-1} d\eta d \xi . \nonumber
\end{eqnarray}
Now we note that this expression is bounded from above (since $R \leq 1$) and would tend to zero only if the double integral
in the last expression converges or diverges slower than $\zeta^{1-\beta}$ for $\zeta \to 0$.
Now we introduce the new variable $z= \eta/\xi$ in the inner integral, and write
\begin{eqnarray}
R &=& (\beta - 1) \frac{\sin \pi \alpha}{\pi} \zeta^{\beta -1} \times \\
&& \int_0^1 (1+\zeta - \xi)^{-\beta} \left[\int_{\xi^{-1}-1}^\infty z^{-\alpha} (1 + z)^{-1} d z\right] d \xi. \nonumber
\end{eqnarray}
The $\zeta$-dependence of the whole integral is dominated by the behavior of the integrand for $\xi \to 1$ when the internal integral tends to a constant
\begin{equation}
\int_0^\infty z^{-\alpha} (1 + z)^{-1} d z = \frac{\pi}{\sin [(1-\alpha) \pi ] },
\end{equation}
and therefore
\begin{eqnarray}
&& \int_0^1 (1+\zeta - \xi)^{-\beta} \int_{1-\xi}^\infty \xi^\alpha \eta^{-\alpha}(\xi + \eta)^{-1} d\eta d \xi \nonumber \\
&& \qquad \simeq \frac{\pi}{(\beta - 1)\sin [(1-\alpha) \pi ] } \zeta^{1-\beta},
\end{eqnarray}
so that for $t \to \infty$
\begin{eqnarray}
R &\to& (\beta - 1) \frac{\sin \pi \alpha}{\pi} \frac{\pi}{(\beta - 1)\sin [(1-\alpha) \pi ] } \nonumber \\
&=& \frac{\sin \pi \alpha}{\sin \pi(1-\alpha)} = 1,
\end{eqnarray}
i.e. the weight of the $\delta$-peak tends to unity: at long times the walker typically performs no step after the last resetting event. The wings of the PDF are nevertheless governed by the realizations with at least one step.
The main integral (the second line in Eq.(\ref{eq:p2})) is awkward, but we can still distill the general time dependence (up to prefactors).
To do so we note that the intermediate asymptotics appears when for $t_0 \ll \Delta t' \ll t$ the function $p_2(\Delta t' | t)$ possesses a power-law asymptotics
$p_2 \sim A(t)\Delta t'^{-\gamma}$.
\paragraph{\boldmath $\beta < 1$ \unboldmath. } For this case we have
\begin{eqnarray}
&& p(\Delta t' | t) = \int_0^{t-\Delta t'} p(\Delta t' | t_r,t) p_r(t_r | t) dt_r \nonumber \\
&& = \int_0^{t-\Delta t'} \frac{\sin \pi \alpha}{\pi} \left(\frac{t_r}{t - t_r - \Delta t'} \right)^\alpha \times \nonumber \\
&& \frac{1}{t - \Delta t'} \frac{\sin \pi \beta}{\pi} t_r^{\beta -1}(t-t_r)^{-\beta} dt_r \nonumber \\
&& + R \delta(\Delta t'). \label{eq:NRGen}
\end{eqnarray}
Thus:
\begin{eqnarray}
p(\Delta t' | t) = && \frac{\sin \pi \alpha \sin \pi \beta}{\pi^2} \frac{1}{t - \Delta t'} \int_0^{t-\Delta t'} \times \nonumber \\
&& (t - \Delta t'- t_r)^{-\alpha} t_r^{\alpha+\beta -1}(t-t_r)^{-\beta} dt_r \nonumber \\
&& + R \delta(\Delta t').
\label{eq:NR2}
\end{eqnarray}
The intermediate asymptotic power-law behavior in the wing of the PDF may appear if for $t$ long the PDF $p(\Delta t' | t)$ shows a power-law behavior for
$t_0 \ll \Delta t' \ll t$, in which the $\delta$-peak does not play a role. To distill the power-law dependence on $\Delta t'$ we introduce in Eq.(\ref{eq:NR2}) new variables
$z=\Delta t'/t$ and $\xi=t_r/t$ and rewrite the integral
as
\begin{eqnarray}
&& p(\Delta t' | t) = \frac{\sin \pi \alpha \sin \pi \beta}{\pi^2} t^{-1} \frac{1}{1-z} \times \\
&& \qquad \qquad \int_0^{1-z} (1 -z- \xi)^{-\alpha} \xi^{\alpha+\beta -1} (1-\xi)^{-\beta} d\xi. \nonumber
\end{eqnarray}
Now we investigate the behavior of the integral for $z \to 0$. This behavior depends on whether $\alpha + \beta < 1$ or $\alpha + \beta > 1$.
In the first case the integral converges and tends to a constant value. This corresponds to $\gamma = 0$. In the second case it shows a divergence at its upper limit.
Since this limit is approximately unity, we can set the second multiplier in the integrand to unity and simplify the expression:
\begin{equation}
p(\Delta t' | t) \simeq C \times t^{-1} \int_0^{1-z} (1 -z- \xi)^{-\alpha} (1-\xi)^{-\beta} d\xi.
\end{equation}
Now we introduce the new variable of integration $\zeta = 1 - z - \xi$ and write:
\begin{eqnarray}
&& I=\int_0^{1-z} (1 -z- \xi)^{-\alpha} (1-\xi)^{-\beta} d\xi \\
&& \;\;\; = \int_0^{1-z} \zeta^{-\alpha}(z+\zeta)^{-\beta} d \zeta \nonumber \\
&& \;\;\; = \frac{1}{1-\alpha}(1-z)^{1-\alpha} z^{-\beta} \;_2F_1 \left(\beta, 1-\alpha, 2-\alpha;-\frac{1-z}{z} \right), \nonumber
\end{eqnarray}
see Eq.(1.2.4.3) of Ref. \cite{BrPr}. Now we apply the Pfaff transformation
\begin{eqnarray}
&& \;_2F_1 \left(\beta, 1-\alpha, 2-\alpha; x \right) \\
&& \qquad = (1-x)^{-1+\alpha}\;_2F_1 \left(1-\alpha,2-\alpha- \beta; 2-\alpha; \frac{x}{x-1} \right), \nonumber
\end{eqnarray}
so that (for $z \to 0$)
\begin{equation}
I \to \frac{1}{1-\alpha} z^{1-\alpha - \beta} \;_2F_1(1-\alpha, 2-\alpha - \beta; 2-\alpha;1) \sim z^{1-\alpha - \beta} .
\end{equation}
The value of the corresponding hypergeometric function is
\begin{equation}
\;_2F_1(1-\alpha, 2-\alpha - \beta; 2-\alpha;1) = \frac{\Gamma(2-\alpha)\Gamma(\alpha + \beta -1)}{\Gamma(1)\Gamma(\beta)}
\end{equation}
(note that $\alpha + \beta -1 >0$ is exactly the condition under which this asymptotic value is attained), so that
\begin{eqnarray}
&& p(\Delta t' | t) \simeq \frac{\sin \pi \alpha \sin \pi \beta}{\pi^2} \frac{\Gamma(2-\alpha)\Gamma(\alpha + \beta -1)}{\Gamma(1)\Gamma(\beta)}\times \nonumber \\
&& \qquad \qquad t^{-1} \left(\frac{\Delta t'}{t} \right)^{1 - \alpha - \beta},
\end{eqnarray}
which corresponds to our power law with $A(t) \propto t^{\alpha + \beta - 2}$ and $\gamma = \alpha + \beta -1$.
\paragraph{\boldmath $ \beta > 1$ \unboldmath.} For the case $\beta > 1$ we have
\begin{eqnarray}
p_2(\Delta t' |t) &\simeq& \int_0^{t-\Delta t'} \frac{\sin \pi \alpha}{\pi} \left(\frac{t_r}{t - t_r - \Delta t'} \right)^\alpha \times \nonumber \\
&& \frac{1}{t - \Delta t'} \frac{\beta - 1}{\tau_0^{1-\beta}} \frac{1}{(\tau_0 + t - t_r)^\beta} dt_r.
\end{eqnarray}
Now we again introduce $z= \Delta t'/t$, $\zeta = \tau_0/t$ and $\xi = t_r/t$ and obtain
\begin{eqnarray}
p_2(\Delta t' |t) &\simeq& t^{-1} \frac{(\beta - 1)\sin \pi \alpha}{\pi} \frac{\zeta^{\beta -1}}{1-z} \times \nonumber \\
&& \int_0^{1-z} \frac{\xi^\alpha}{(z+1-\xi)^{\alpha}(\zeta + 1 - \xi)^\beta} d \xi.
\end{eqnarray}
We are interested in the asymptotic $z$-dependence of this expression for $z \to 0$. We note that at $z=0$ the integral stays convergent;
however, the interesting condition is $z \gg \zeta$. For both $z$ and $\zeta$ small the integral is dominated by the behavior of the integrand at the upper bound,
where, due to the restriction $z \gg \zeta$, we can neglect $\zeta$ in the second multiplier in the denominator, take $\xi^\alpha \approx 1$ in the numerator, and change the
integration variable to $y=1-\xi$:
\begin{eqnarray}
I &=& \int_0^{1-z} \frac{\xi^\alpha}{(z+1-\xi)^{\alpha}(\zeta + 1 - \xi)^\beta} d \xi \nonumber \\
&\approx& \int_z^1 (z+y)^{-\alpha}(\zeta + y)^{-\beta} d y.
\end{eqnarray}
Now, close to the lower bound the first term is simply $z^{-\alpha}$ and in the second one the small regularizing term $\zeta$ can be neglected, so that
\begin{equation}
I \approx z^{-\alpha} \int_z^1 y^{-\beta} dy \simeq \frac{1}{\beta -1} z^{1-\alpha -\beta}.
\end{equation}
Putting this in the expression for $p_2$ we get
\begin{eqnarray}
p_2(\Delta t' |t) &\simeq& t^{-1} \frac{\sin \pi \alpha}{\pi} \zeta^{\beta -1} z^{1-\alpha -\beta} \nonumber \\
&=& \frac{\sin \pi \alpha}{\pi} \tau_0^{\beta - 1} t^{\alpha-1} \Delta t'^{1-\alpha -\beta},
\end{eqnarray}
and obtain our power-law expression with $A(t) \propto \tau_0^{\beta - 1} t^{\alpha-1}$ and $\gamma = \alpha + \beta -1$, as in the previous case.
The overall results for the intermediate asymptotics of the densities $p_{1,2}$ are summarized in Table \ref{tab:1}.
\\
\begin{table}[h!]
\caption{Intermediate asymptotics of random walk duration \label{tab:1}}
\begin{center}
\begin{tabular}{|c|c|c|l|} \hline
kind of resetting & $A(t)$ & $\gamma$ & restrictions \\
\hline \hline
complete & $t^{\beta -1}$ & $\beta$ & $\beta < 1$ \\
\cline{2-4}
&$\tau_0^{\beta -1}$ & $\beta$ & $\beta > 1$ \\
\hline
\hline
& $t^{-1}$ & 0 & $\beta < 1$, $\alpha + \beta < 1$ \\
\cline{2-4}
incomplete & $t^{-\alpha - \beta}$ & $\alpha + \beta -1$ & $\beta < 1$, $\alpha + \beta > 1$ \\
\cline{2-4}
& $\tau_0^{\beta -1} t^{\alpha -1}$ & $\alpha + \beta - 1$ & $\beta > 1$ \\
\hline
\end{tabular}
\end{center}
\end{table}
The inspection of this table allows us to tell under which conditions we can expect the power-law intermediate behavior of the PDF, when we remember that this only appears for
$\gamma > 1 - \alpha/2$, Eq.(\ref{eq:mainneq}). Thus, for $\beta < 1$ the intermediate asymptotics in the case of complete resetting is observed only for $\beta > 1-\alpha/2$; otherwise
the flat top of the PDF immediately merges with its squeezed exponential tail. For incomplete resetting with $\alpha + \beta < 1$ it does not exist at all (one has a delta-peak connected to
the wing), and for $\alpha + \beta >1$ the condition to observe the intermediate asymptotics is $\beta > 2 - \frac{3}{2} \alpha$ (under which condition the inequality $\alpha + \beta >1$
holds automatically for all $\alpha < 1$).
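For convenience, the case distinctions of Table \ref{tab:1}, together with this observability condition, can be encoded in a small helper function (illustrative, with our own naming conventions):
\begin{verbatim}
# Regime classifier (sketch) encoding Table 1 and the observability
# condition gamma > 1 - alpha/2 for the intermediate power law.
def regime(alpha, beta, resetting="complete"):
    assert 0.0 < alpha < 1.0
    if resetting == "complete":
        A = "t^(beta-1)" if beta < 1 else "tau0^(beta-1)"
        return dict(A=A, gamma=beta, observable=beta > 1 - alpha/2)
    if beta < 1 and alpha + beta < 1:
        return dict(A="t^-1", gamma=0.0, observable=False)
    A = "t^(-alpha-beta)" if beta < 1 else "tau0^(beta-1) t^(alpha-1)"
    return dict(A=A, gamma=alpha + beta - 1,
                observable=beta > 2 - 1.5*alpha)

print(regime(0.5, 1.5, "incomplete"))   # domain 4 of Fig. 5
\end{verbatim}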
\begin{figure}[h!]
\begin{center}
\scalebox{0.4}{\includegraphics{Domains.eps} }
\caption{(Color online) Classification of intermediate asymptotic behaviors of $P(x,t)$ as a function of $x$ for CTRW under resetting for $0< \alpha < 1$. The lower, white, region
corresponds to the domain of parameters $\alpha, \beta$ where the intermediate power-law behavior does not occur. The intermediate triangular domain (yellow online) corresponds
to the values of parameters when the power-law behavior in $|x|$ is observed at long times. The gray domains correspond to the cases when this behavior is observed only
for $\tau_0 \gg t_0$ either at all times (left) or only transiently (right). The hatched domains correspond to situations when the intermediate $|x|$ asymptotics
is observed for $\beta < 1$. The types of behavior are: domain 1: $p(x,t) \sim t^{\beta -1} |x|^{-1 - \frac{2(\beta -1)}{\alpha}}$, domain 2: $p(x,t) \sim |x|^{-1 - \frac{2(\beta
-1)}{\alpha}}$,
domain 3: $p(x,t) \sim t^{-\alpha - \beta} |x|^{\frac{2(1-\beta)}{\alpha} - 3}$, and domain 4: $p(x,t) \sim \tau_0^{\beta -1} t^{\alpha - 1} |x|^{\frac{2(1-\beta)}{\alpha} - 3}$.
\label{fig:Domains}}
\end{center}
\end{figure}
\subsection{Final results}
For a complete resetting the final results are as follows: The intermediate asymptotics exists for $\beta > 1 - \frac{\alpha}{2}$, and reads
\begin{equation}
P(x,t) \sim \left\{
\begin{array}{lll}
t^{\beta -1} |x|^{-1 - \frac{2(\beta -1)}{\alpha}} \qquad &\mbox{for } \beta < 1\\
|x|^{-1 - \frac{2(\beta -1)}{\alpha}} \qquad &\mbox{for } \beta > 1.
\end{array}
\right.
\label{eq:Fin_complete}
\end{equation}
This behavior is exactly the same as for SBM with the corresponding exponent of the anomalous diffusion $\alpha$, see Ref. \cite{AnnaRenewal}.
Note that for $\beta > 1$ and $\alpha < \beta -1$ the universal form of the PDF is only transient (i.e. visible only at intermediate times) and only exists for $\tau_0 \gg t_0$.
For incomplete resetting the intermediate asymptotics is visible only for $\beta > 2 - \frac{3}{2}\alpha$ and reads
\begin{equation}
P_2(x,t) \sim \left\{
\begin{array}{lll}
t^{-\alpha - \beta} |x|^{\frac{2(1-\beta)}{\alpha} - 3} & \mbox{for }& \beta < 1 \\
\tau_0^{\beta -1} t^{\alpha - 1} |x|^{\frac{2(1-\beta)}{\alpha} - 3} & \mbox{for }& \beta > 1 .
\end{array}
\right.
\end{equation}
We have to stress that the universal form of the Green's functions, based on taking only the lowest-order contribution in $k$, is only applicable if the corresponding PDF is broad
enough, i.e.
the typical value of $|x|$ is much larger than $a$. This implies that the number of steps of the CTRW made during the time $t$ must be large. The typical number of steps is of the order
of $\langle \langle n(t) \rangle \rangle$, whose behavior was already discussed in Sec. \ref{sec:MSD}.
Note that for incomplete resetting the corresponding behavior represents a decaying function of $|x|$, which is sandwiched between the
delta-peak at the origin and the squeezed exponential tail, and sets in late. This behavior differs from the one observed in SBM both
with respect to the existence of the $\delta$-peak and with respect to the presence of the $\alpha$-dependence in the corresponding power law. Both features
are connected with the fact that the first step of the CTRW after resetting may follow only very late after the resetting event. This is a true fluctuation
effect, which is not captured by the mean-field SBM description. Note that since the prefactor of the $|x|$-dependence explicitly depends on time,
the situation is always nonstationary. In this case, again, the universal asymptotic behavior for $\alpha < \beta -1$ is only observable for $t_0 \ll \tau_0$.
An overview of the intermediate power-law asymptotics of the PDFs is given in Fig. \ref{fig:Domains}.
Examples of such asymptotics, as seen in numerical simulations, are given in Figs. \ref{Gpdfc} and \ref{Gpdfi}.
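A bare-bones Monte Carlo scheme for such simulations might look as follows (our own minimal sketch with unit jump length $a=1$; the waiting-time and resetting-time densities are taken of Pareto type with exponents $\alpha$ and $\beta$ and scale parameters $t_0$ and $\tau_0$):
\begin{verbatim}
# Monte Carlo sketch of CTRW with power-law waiting times (exponent alpha,
# scale t0) under resetting with power-law reset times (exponent beta,
# scale tau0); complete=False leaves the pending waiting time untouched.
import numpy as np
rng = np.random.default_rng(0)

def power_law(expo, scale):
    # survival function P(tau > s) = (scale/s)^expo for s > scale
    return scale * rng.random()**(-1.0/expo)

def sample_x(t=1000., alpha=0.5, beta=1.5, t0=1., tau0=1000.,
             complete=True):
    x = 0.0
    t_jump = power_law(alpha, t0)
    t_reset = power_law(beta, tau0)
    while min(t_jump, t_reset) <= t:
        if t_reset <= t_jump:                 # resetting event first
            x = 0.0
            if complete:                      # waiting period starts anew
                t_jump = t_reset + power_law(alpha, t0)
            t_reset += power_law(beta, tau0)
        else:                                 # jump of the CTRW
            x += rng.choice((-1.0, 1.0))
            t_jump += power_law(alpha, t0)
    return x

xs = np.array([sample_x() for _ in range(20000)])
# a log-log histogram of |xs| shows the domain-2 wing
# ~ |x|^{-1-2(beta-1)/alpha}, cf. Fig. 6
\end{verbatim}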
\begin{figure}
\begin{center}
\scalebox{0.3}{\includegraphics{Gpx15c.eps}}
\caption{Probability density function for CTRW with power-law densities for waiting times and for resetting times under complete resetting.
The parameters are: $\alpha=0.5, \beta=1.5$, $t_0=1$ and $\tau_0=1000$, $t=1000$. The wing of the distribution scales according to $p(x)\sim x^{-1-2\beta/\alpha+2/\alpha}$, corresponding to the domain 2 in Fig. \ref{fig:Domains}.\label{Gpdfc}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{0.3}{\includegraphics{Gpxb15.eps}}
\caption{Probability density function for CTRW with power-law densities for waiting times and for resetting times under incomplete resetting at $t=1000$.
The parameters are the same as in Fig. \ref{Gpdfc}: $\alpha=0.5, \beta=1.5$, $t_0=1$, $\tau_0=1000$.
The wing of the distribution now scales according to $p(x)\sim x^{-3-2\beta/\alpha+2/\alpha}$ corresponding to the domain 4 in Fig. \ref{fig:Domains}.\label{Gpdfi} }
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:Concl}
We have studied subdiffusive continuous time random walks (CTRWs) with power-law resetting. We have considered incomplete resetting, when the waiting period of the CTRW is
unaffected by the resetting event, and complete resetting, when the waiting period starts anew at the resetting event. We have shown that the behavior of the MSD in CTRW under
resetting is the same as for subdiffusive SBM under the same conditions, which reflects the fact that SBM can serve as a mean-field approximation for the CTRW in both cases.
The PDF of displacements in CTRW under complete resetting is similar to that for the renewal SBM \cite{AnnaRenewal}. The fact that for both SBM and CTRW with complete resetting the forms of the MSD and PDF are similar is highly non-trivial (note that free SBM is a Gaussian process, while free CTRW possesses a PDF with a cusp and stretched-Gaussian tails). On the contrary, for the CTRW with incomplete resetting the behavior of the PDF of displacements differs considerably from the one for SBM \cite{AnnaNonrenewal}, due to fluctuation effects connected with the distribution of the waiting time for the first jump of the CTRW after resetting.
\section{Introduction}
Because of the large extinctions and distances typically involved, direct observations of the accretion of mass onto forming massive stars are difficult to obtain. However, the accretion and ejection processes in massive young stellar objects (MYSOs) are known to be linked, as shown from observational studies of molecular outflows at radio wavelengths \citep[e.g.][]{Beuther02,Maud15} and collimated jets at infrared wavelengths \citep[e.g.][]{Caratti15}, and further supported by numerical simulations by \citet{Kolligan18}.
In this Letter, we present new high-resolution near-infrared observations of the environment around the massive young stellar object G192.16-3.82 (G192). Radio observations by \citet{Hughes93} of the ultra-compact \ion{H}{II} region associated with this MYSO indicate that it harbors a forming early B-star, with a luminosity of $\sim 2400 L_\odot$ scaled to the distance of $1.52 \pm 0.08$~kpc found by \citet{Shiozaki11}. This region first attracted wide interest in the 1990s because it exhibits a strong and wide bipolar CO outflow \citep{Shepherd98}. Traces of (shocks in) the bipolar outflow can even be seen in the optical, where H$\alpha$ and [\ion{S}{II}] emission associated with the Herbig-Haro objects HH\,396/397 can be found over a total extent of $>$~5 pc in each direction from the driving source \citep{Devine99}.
\citet{Imai06} discovered several H$_2$O maser features in the G192 region. Two of the three maser clusters are associated with the infalling/rotating accretion disk of the northernmost YSO, and the third with the highly collimated jet of the southern YSO. The distance between them is about 1200~au. Using VLA observations with a spatial resolution of 40~mas, \citet{Shepherd01} established the presence of a solar-system-sized circumstellar dust disk around the southern YSO. Moreover, they suggested the presence of a close companion of the southern source, located at a distance of $\sim80$~au. These observations brought G192 to wide attention within the star-formation community and indicated that at least early B-stars may form in a general accretion-disk scenario like their lower-mass siblings, the T~Tauri and Herbig Ae stars. Clear observational support for this is still rare in case of even more massive O-stars.
At near-infrared wavelengths, the intricate morphology of the close circumstellar environment of G192 has proven difficult to discern given the resolution limits of existing observations.
\citet{Indebetouw03} and \citet{Varricatt10} presented near-infrared narrow-band imaging of the region. The latter work identified a few knots of H$_2$ emission away from the central source; however, these seeing-limited observations illustrate the limits of 3--4 meter class telescopes in recovering faint extended emission in such distant star-forming regions. High spatial resolution and high signal-to-noise data are needed to study the substructure and dynamics of the region. In this letter, we present a near-infrared investigation of the G192 region with 0.25\arcsec{} spatial resolution and $R\sim10000$ spectral resolution.
\section{Observations and data reduction}
\label{Sect:ObsReduct}
We observed G192
in the near-infrared with the LUCI imager and spectrograph \citep{Seifert03,Buschkamp12} at the Large Binocular Telescope (LBT) on Mount Graham International Observatory between 2015 and 2017, during several commissioning nights of the Advanced Rayleigh guided Ground layer adaptive Optics System \citep[ARGOS;][]{Rabien19}. These ground-layer-corrected AO observations cover the entire 4\arcmin{}$\times$4\arcmin{} of the LUCI field of view, including G192 and extended outflows in the region.
\subsection{Imaging}
Observations in the $K_s$, H$_2$ and Br$\gamma$ filters were obtained
on December 17th, 2015, with LUCI1. The observing strategy consisted of small dither offset exposures (to remove bad pixels) shortly followed by a large-offset set of exposures to build a sky frame. Each exposure was 3~s in order to minimize saturation and persistence effects. We corrected the latter on a pixel-by-pixel basis with persistence and linearity maps \cite[see Appendix~A in][]{Georgiev19}, prior to performing standard dark subtraction, flat fielding, sky subtraction and final image registration. Finally, the $K_s$ image is an average combined from 58 exposures, resulting in 174~s of total exposure time. The H$_2$ and Br$\gamma$ images were combined from 72 and 39 exposures, respectively, each of 6~s, for total on-source exposure times of 432~s (H$_2$) and 234~s (Br$\gamma$). The DIMM seeing at $\lambda\!=\!0.55$~$\mu$m was about 1.0\arcsec{}, corresponding to about 0.76\arcsec{} at $\lambda\!=\!2.2$~$\mu$m. The pixel scale of the images is 0.118\arcsec{}/pix, and the FWHM of the PSF after AO correction is 0.31\arcsec{} in both
narrow-band filters and 0.37\arcsec{} in $K_s$.
We obtained an astrometric solution with
Astrometry.net \citep{Lang10}, using
index files compiled from the Gaia DR2 catalog \citep{GaiaDR2}. The accuracy of the absolute astrometric solution, estimated by measuring the scatter in the positions of the approximately 100 Gaia stars in the
field, is 79~mas in the H$_2$ and Br$\gamma$ filters, and 98~mas in the $K_s$ filter.
\subsection{Spectroscopy}
Long-slit spectroscopic observations with LUCI1\&2 were performed using both 8-m telescopes in binocular mode
on the night of March 11th, 2017, with a spectral resolution of $R\approx 10\,000$ using the 0.25\arcsec{} slit. We took a total of eight 150~s exposures in sequence, including two off-target positions for sky subtraction, for a total on-source integration time of 15 minutes. The slit was oriented at a position angle of 92.5\degr{}, covering the position of the 1.3-cm continuum source and several H$_2$ knots visible in the narrow-band H$_2$ images. The wavelength coverage was 2.104--2.256~$\mu$m and 2.151--2.303~$\mu$m for LUCI1 and LUCI2, respectively.
The spectra were reduced in IDL using the Flame data reduction pipeline \citep{Belli18}, including flat fielding, wavelength calibration and field distortion correction using telluric OH lines, and corrections for non-linearity of the detector. The spectra from LUCI1 and LUCI2 were averaged together to increase the signal-to-noise ratio in the common wavelength range (2.151-2.256~$\mu$m). No absolute flux calibration was applied.
\section{Results}\label{Sect:Results}
\subsection{Morphology}
\begin{figure}
\centering
\includegraphics[width=\hsize]{Kimage_color.eps}
\caption{Color-composite image of the inner region of G192 showing emission in the $K_s$ (red), H$_2$ (green) and Br$\gamma$ (blue) filters. The blue circle marks the position and error ellipse of the $K^\prime$ source detected by \citet{Indebetouw03}, and the red circle marks the location of the 1.3-cm continuum source \citep{Shepherd04} with an assumed positional uncertainty $\le0.1$\arcsec{}. The white circle in the lower right shows the positional accuracy (98~mas) of our astrometric solution, and the thin white line shows the slit position of the spectroscopic observations.}
\label{fig_Kimage}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\hsize]{G192_H2-cont.pdf}
\caption{\emph{Above}: Color-composite image showing emission in the $K_s$ (red), H$_2$ (green) and Br$\gamma$ (blue) filters. \emph{Below}: Continuum-subtracted H$_2$ image. The slit position of the spectroscopic observations is overlaid in both panels. Stars observed by \citet{Jones04} are marked in the upper panel, while individual knots in HH\,396/397 (A, B, etc.) identified by \citet{Devine99} are marked in the lower panel, together with near-infrared knots (NIRS1, 2, etc.) identified in the present work.}
\label{fig_H2-cont}
\end{figure*}
Figure~\ref{fig_Kimage} shows a color-composite image of the $K_s$, H$_2$ and Br$\gamma$ emission of the inner region (12\arcsec{}$\times$14\arcsec{}) around G192, together with the locations and positional uncertainties of the 1.3-cm continuum source of \citet{Shepherd98} and the $K^\prime$ source reported by \citet{Indebetouw03}. In Fig.~\ref{fig_H2-cont}, we show the large-scale $K_s$, H$_2$ and Br$\gamma$ emission in the region, together with the continuum-subtracted H$_2$ emission. In the continuum-subtracted image the elongated structure of several H$_2$ knots, some of which are associated with optical emission from HH\,396/397 (HH\,396A, B; HH\,397A, B, C, D), becomes visible. The knots to the east of the 1.3-cm continuum source are associated with the blue-shifted lobe of the outflow, while those to the west are associated with the red lobe (see Sec.~\ref{sec_kinematics}). We have designated knots detected primarily in the near-infrared as NIRS1, NIRS2, etc. These knots are part of the large-scale molecular outflow moving along the east-west direction. The western knots lie along a relatively straight line, and are spaced at roughly equal intervals of 10--15\arcsec. To the east, several knots (HH\,397~NIRS3, C, D) show extended arc-like features, resembling bow shocks. The east-west orientation seen in the near infrared matches that of the CO and optical outflows \citep{Shepherd98,Devine99}. Additionally, a faint ``bubble'' of H$_2$ emission is seen around Star~5 from \citet{Jones04}.
\subsection{Kinematics}
\label{sec_kinematics}
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{G192_1dspect.pdf}
\caption{Continuum-normalized one-dimensional spectrum of HH\,397A, averaged over 9\arcsec{}, where the detected H$_2$ emission lines are labeled.}
\label{fig_1dspect}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{G192_pv.eps}
\caption{{\it Left two panels}: emission in the $K_s$ filter and H$_2$-$K_s$ (continuum subtracted) with the slit position overlaid (solid black lines). The red cross marks the position of the 1.3-cm continuum emission \citep{Shepherd04}, which is placed at the origin of the coordinates $\delta x$ (across the slit) and $\delta y$ (along the slit). {\it Right three panels}: PV diagrams of the three lines of H$_2$ emission along the slit. Contours have been drawn for the continuum-subtracted H$_2$ emission at 4.5, 8.2, 15, 27 and 50 times the background rms level. The horizontal dotted lines mark the location of the radio continuum source along the slit in all five panels, while the vertical dotted lines in the right three panels show the velocity of the peak of NH$_3$ emission, 5.7~km~s$^{-1}$ (LSR), reported by \citet{Shepherd04}.}
\label{fig_pv}
\end{figure*}
To obtain two-dimensional spectra, the spectrograph slits of both telescopes were oriented along the western episodic outflows, as shown in Fig.~\ref{fig_H2-cont}. The brightest emission is in the H$_2$ lines $1-0$~S(1) (2.12~$\mu$m), $1-0$~S(0) (2.22~$\mu$m), and $2-1$~S(1) (2.24~$\mu$m), with weak emission also detected from HH\,397A in the $2-1$~S(2) (2.154~$\mu$m) line (Fig.~\ref{fig_1dspect}). Notably, no Br$\gamma$ emission was detected in HH\,397A, which is at odds with the narrow-band observations presented by \citet{Indebetouw03}. HH\,397A has a bright, red continuum contribution (see Fig.~\ref{fig_pv}), which is not seen in any of the other knots. The continuum has an irregular spatial structure along the slit, with two bright emission peaks about 3\arcsec{} (4500~au) apart.
\begin{figure}
\centering
\includegraphics[width=\hsize]{vel.eps}
\caption{Velocity of H$_2$ emission along the slit obtained from fitting the line profiles. The red cross marks the location of the 1.3-cm continuum source and the peak velocity of NH$_3$ emission \citep{Shepherd04}.}
\label{fig_vel}
\end{figure}
The three right panels in Fig.\,\ref{fig_pv} show PV diagrams of the H$_2$ emission along the slit. The knots appear to be episodic in nature, and have an asymmetrical velocity structure. Each western knot is red-shifted relative to the peak of NH$_3$ emission reported by \citet{Shepherd04}, while the eastern knots are blue-shifted. We fit the line profiles with a Gaussian at each point along the slit to determine the radial velocity (Fig.\,\ref{fig_vel}), and show the mean radial velocity of each knot in Table\,\ref{tab_knots}. Finally, using an inclination angle of the outflow of 63\degr{} from the line of sight \citep{Shepherd98}, we estimate for each knot observed in the slit the corresponding proper motion, along with the back-projected launch time from the assumed starting location, the position of the 1.3-cm source (see Table\,\ref{tab_knots}).
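For transparency, the conversion from radial velocity to proper motion and launch time is sketched below (illustrative numbers; we use $\mu[\mathrm{mas\,yr}^{-1}] = v_\mathrm{tan}[\mathrm{km\,s}^{-1}]/(4.74\, d[\mathrm{kpc}])$ with the inclination and distance quoted above):
\begin{verbatim}
# Sketch (illustrative values) of the radial-velocity -> proper-motion ->
# launch-time conversion, assuming i = 63 deg and d = 1.52 kpc.
import numpy as np

def proper_motion_mas_yr(v_rad_kms, incl_deg=63.0, d_kpc=1.52):
    v_tan = abs(v_rad_kms) * np.tan(np.radians(incl_deg))  # plane of sky
    return v_tan / (4.74 * d_kpc)                          # mas/yr

def launch_time_yr(sep_arcsec, v_rad_kms):
    return 1000.0 * sep_arcsec / proper_motion_mas_yr(v_rad_kms)

# e.g. a knot 40 arcsec from the 1.3-cm source, with v_LSR = 133 km/s
# minus the 5.7 km/s systemic velocity:
print(launch_time_yr(40.0, 133.0 - 5.7))   # ~1.2e3 yr
\end{verbatim}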
\begin{table*}
\caption{Parameters of H$_2$ emission knots in G192}
\label{tab_knots}
\centering
\begin{tabular}{l l l c c c}
\hline\hline
Knot & RA & Dec &$\langle v_\mathrm{LSR} \rangle$ & Estimated proper motion & Launch time \\
& (J2000) & (J2000) & (km s$^{-1}$) & (mas yr$^{-1}$) & (yrs ago) \\
\hline
HH 396 NIRS4 & 5$^\mathrm{h}$58$^\mathrm{m}$8.8$^\mathrm{s}$ & +16\degr{}31\arcmin{}59.3\arcsec{} & & & \\
HH 396B & 5$^\mathrm{h}$58$^\mathrm{m}$9.6$^\mathrm{s}$ & +16\degr{}31\arcmin{}59.8\arcsec{} & $67 \pm 3$ & $\sim16$ & $\sim4000$ \\
HH 396 NIRS3 & 5$^\mathrm{h}$58$^\mathrm{m}$10.2$^\mathrm{s}$ & +16\degr{}31\arcmin{}59.2\arcsec{} & & & \\
HH 396A & 5$^\mathrm{h}$58$^\mathrm{m}$11.0$^\mathrm{s}$ & +16\degr{}31\arcmin{}59.7\arcsec{} & $133 \pm 6$ & $\sim32$ & $\sim1300$ \\
HH 396 NIRS2 & 5$^\mathrm{h}$58$^\mathrm{m}$11.1$^\mathrm{s}$ & +16\degr{}32\arcmin{}7.0\arcsec{} & & & \\
HH 396 NIRS1 & 5$^\mathrm{h}$58$^\mathrm{m}$12.1$^\mathrm{s}$ & +16\degr{}31\arcmin{}59.3\arcsec{} & $107 \pm 23$ & $\sim25$ & $\sim900$ \\
HH 397A & 5$^\mathrm{h}$58$^\mathrm{m}$13.8$^\mathrm{s}$ & +16\degr{}31\arcmin{}57.1\arcsec{} & $-24$ -- $0$ & $\sim$0 -- 6 & $\lesssim2100$ \\
HH 397 NIRS1 & 5$^\mathrm{h}$58$^\mathrm{m}$15.6$^\mathrm{s}$ & +16\degr{}31\arcmin{}55.7\arcsec{} & $-20 \pm 12$ & $\sim5$ & $\sim7000$ \\
HH 397 NIRS2 & 5$^\mathrm{h}$58$^\mathrm{m}$16.4$^\mathrm{s}$ & +16\degr{}32\arcmin{}16.3\arcsec{} & & & \\
HH 397 NIRS3 & 5$^\mathrm{h}$58$^\mathrm{m}$18.2$^\mathrm{s}$ & +16\degr{}32\arcmin{}5.7\arcsec{} & & & \\
HH 397B & 5$^\mathrm{h}$58$^\mathrm{m}$18.3$^\mathrm{s}$ & +16\degr{}31\arcmin{}21.0\arcsec{} & & & \\
HH 397C & 5$^\mathrm{h}$58$^\mathrm{m}$19.2$^\mathrm{s}$ & +16\degr{}32\arcmin{}3.1\arcsec{} & & & \\
HH 397D & 5$^\mathrm{h}$58$^\mathrm{m}$20.7$^\mathrm{s}$ & +16\degr{}31\arcmin{}55.9\arcsec{} & & & \\
\hline
\end{tabular}
\tablefoot{Mean radial velocities and corresponding proper motion estimates were determined only for knots lying along the slit.}
\end{table*}
\subsection{Excitation temperature of HH\,397A}
\begin{figure}
\centering
\includegraphics[width=\hsize]{boltz_temp.eps}
\caption{Boltzmann diagram for the $2-1$~S(1) (2.24~$\mu$m), $1-0$~S(0) (2.22~$\mu$m) and $1-0$~S(1) (2.12~$\mu$m) H$_2$ lines (left to right) in the HH\,397A region. The solid line shows the best fit to the level populations, with an excitation temperature of $2600$~K.}
\label{fig_boltz_temp}
\end{figure}
Under the assumption that conditions are close to local thermodynamic equilibrium, we used the Boltzmann distribution to estimate the excitation temperature of HH\,397A. In Fig.~\ref{fig_boltz_temp} we plot the logarithm of the H$_2$ column density $N_{J}$ (plus an arbitrary constant) against the corresponding energy level $E_{J}$ for each line. The column density may be expressed as $N_J = \cfrac{2 F_J \lambda_J}{A_J \hslash c}$, where $F_J, \lambda_J$ and $A_J$ are the flux, line wavelength and Einstein coefficient of each transition, respectively. Since our observations are not absolutely flux calibrated, the value of the column density cannot be calculated directly; however, the Boltzmann distribution requires only the ratios of fluxes to estimate the excitation temperature (i.e. $\ln{\cfrac{N_J}{g_J}} \propto -\cfrac{E_J}{T}$, where $g_J, E_J$ and $T$ are the statistical weight of the transition, the energy of the upper level and the excitation temperature, respectively). We estimate the relative flux calibration uncertainty to be on the order of 10\%, and ignore the effects of interstellar absorption on the line fluxes, which are small due to the closeness in wavelength and energy levels of the transitions. Under these assumptions, we find an average excitation temperature in the HH\,397A region of $2600\pm500$~K.
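A minimal version of this fit is sketched below; the line constants are taken from standard H$_2$ line lists (quoted from memory and to be double-checked), and the flux ratios are made-up values chosen only to illustrate the procedure:
\begin{verbatim}
# Boltzmann-diagram fit sketch; constants from standard H2 line lists
# (quoted from memory -- double-check), fluxes are illustrative only.
import numpy as np
#                1-0 S(1)  1-0 S(0)  2-1 S(1)
lam = np.array([2.1218,   2.2235,   2.2477])    # wavelength [um]
A   = np.array([3.47e-7,  2.53e-7,  4.98e-7])   # Einstein A [1/s]
g   = np.array([21.0,     5.0,      21.0])      # statistical weights
E   = np.array([6956.0,   6471.0,   12550.0])   # upper-level E/k [K]
F   = np.array([1.00,     0.20,     0.16])      # relative line fluxes

N = F * lam / A              # column densities up to a common constant
slope, _ = np.polyfit(E, np.log(N / g), 1)
print("T_ex ~ %.0f K" % (-1.0 / slope))   # ~2600 K for these ratios
\end{verbatim}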
\section{Discussion}
\label{Sect:Discussion}
\citet{Indebetouw03} identified a very red 2-$\mu$m source near the position of the millimeter source of \citet{Shepherd01}. Given their seeing-limited spatial resolution and positional accuracy of $\sim1$\arcsec{}, they concluded that the 2-$\mu$m source and millimeter sources are coincident, with the emission at 2~$\mu$m arising from a combination of photospheric emission and scattered light from the driving source(s) in G192. Using arguments based on line-of-sight extinction to the central source, they speculate that the circumstellar disk of G192 may have a cleared inner hole (roughly $10-15$~au), and that active accretion in the source has ended or temporarily stopped.
Our AO-corrected imaging and much higher positional accuracy show that the 2~$\mu$m and millimeter emission in G192 are distinct, with no significant near-infrared emission at the position of the millimeter source. Indeed, as \citet{Indebetouw03} show, given the estimates for the disk mass \citep[$3\!-\!15$\,M$_\odot$;][]{Shepherd01} and inclination angle of the outflow \citep[$\sim$63\degr;][]{Shepherd98}, there should be no detectable near-infrared emission if we are looking through the mid-plane of the disk. At the same time, the relatively bright continuum component in HH\,397A, which is absent in all the other near-infrared extended sources in the region, indicates that this region is indeed illuminated by one or more YSOs in the vicinity. The derived excitation temperature of $\sim\!2600$\,K~in HH\,397A is consistent with shock excitation of the gas, indicating that this region is not merely a quiescent reflection nebula. Taken together with the age estimates for the other knots in the region, this implies that accretion bursts in G192 have been taking place quasi-regularly over at least the past few $10^2\!-\!10^3$ years.
The velocity distribution of the red-shifted knots shows that the HH\,396 NIRS1 and NIRS2 knots have higher velocities than the HH\,396 NIRS3 knot, which may be an indication of interactions with the surrounding interstellar material. The same scenario may be occurring in the case of HH\,397 NIRS1 in the eastern (blue-shifted) part of the outflow, if the outflow material was ejected into a region with much higher local density, and hence almost instantly decelerated to the low velocities ($\sim20$~km~s$^{-1}$) observed. On the other hand, such a velocity distribution may instead be related to a decline in the accretion rate, where the slowest (and closest to the central source) knot might have been ejected when the accretion rate was lower.
An explanation of apparent episodic nature of the outbursts may be related to the fragmentary structure of the accretion disk. \citet{Meyer17} performed 3D radiation hydrodynamics simulations, showing the gravitational collapse of a 100~M$_\odot$ proto-stellar core, tracing the evolution of the accretion disk over 30~kyr. The accreting material around the protostellar core forms into spiral arms, and density inhomogeneities cause part of the material in the arms to fragment into clumps. These clumps continue to fall onto the star, causing periodic outbursts of matter. The high infall rates recently derived for G192 by \citet{Tang19} of $>10^{-3}$ M$_\odot$/yr support this picture. These SMA molecular line measurements trace the infall from the surrounding envelope to the central (potentially disk-like) part of G192, and such high infall rates can certainly induce instabilities within the disk, which could lead to subsequent fragmentation.
It is interesting to note that the blue-shifted velocities in the strongest H$_2$ emission structure, HH\,397A, only reach around $-30$~km~s$^{-1}$. In comparison, the SMA CO mapping by \citet{Liu13} revealed a high-velocity CO component at this location with velocities of $-70$ to $-90$~km~s$^{-1}$. This can be reconciled by assuming that the inner parts of the molecular outflow have a strong component with a large opening angle of $\sim 90^\circ$ \citep{Shepherd98, Shepherd99}. Hence, the CO observations include a component with smaller inclinations to the observer, while the H$_2$ flow mainly proceeds along the main jet axis, inclined $\sim 63^\circ$ to the line of sight.
Finally, we note that for the proper motions of the knots estimated in Table~\ref{tab_knots}, observations with comparable spatial resolution ($\sim300$~mas) and astrometric accuracy ($\sim100$~mas) should reveal positional changes of the outflow knots on the order of 100~mas after only five years.
\section{Summary and conclusion}
In this work, we presented new AO-corrected photometric and spectroscopic observations of a 4\arcmin{} field around the massive young stellar object G192.16-3.82 at near-infrared wavelengths ($2.0-2.3$~$\mu$m).
Using continuum-subtracted narrow-band images, we revealed several knots of emission in the H$_2$ line at 2.12~$\mu$m lying approximately along the east-west axis, consistent with the morphology of the Herbig-Haro flow seen at optical wavelengths. Five of the six knots detected to the west of the central MYSO lie along a straight line and are spaced roughly evenly at intervals of $10-15$\arcsec{}. The back-projected launch times of the western knots, which were presumably associated with major accretion events, were estimated from the observed radial velocities and an assumed inclination of 63\degr{}, and span up to $\sim10^3-10^4$~yr in the past. The H$_2$ emission to the east of the MYSO, on the other hand, is more irregular. The morphologies of several of the knots resemble bow shocks, with the brightest line and continuum emission arising in HH~397A near the central source. The excitation temperature derived from the H$_2$ line fluxes in this region is consistent with shock excitation.
The high spatial resolution ($0.3-0.4$\arcsec) and precision of the astrometric solution ($0.08-0.1$\arcsec) of the observations allowed us to determine that the driving source (or sources) of the large-scale outflow HH~396/397 remains completely obscured at near-infrared wavelengths, which suggests that the line of sight to the central source lies through the midplane of the disk. Taken together with the high infall rates, the observed asymmetries and episodic nature of the G192 outflow at near-infrared wavelengths support the picture that the central MYSO is still accreting material on timescales of hundreds to thousands of years, and that the accretion disk may have undergone several fragmentation episodes in the recent past.
\begin{acknowledgements}
We are grateful to Martin Kulas, Jose Borelli, Diethard Peter, Julian Ziegeleder, and Tommaso Mazzoni for technical support with ARGOS and for gathering data in the early stages of instrument commissioning. We thank Debra Shepherd for discussions and for providing her OVRO interferometric CO data in electronic form.
The work of P. Boley and N. Dmitrienko was supported by grant 18-72-10132 of the Russian Science Foundation.
This research made use of Astropy\footnote{http://www.astropy.org}, a community-developed core Python package for Astronomy \citep{astropy13, astropy18}, as well as Matplotlib \citep{Hunter07}.
The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia.
\end{acknowledgements}
\section{\textbf{Introduction}}
Approximation theory is one of the important branches of functional
analysis; it was originated by \u{C}eby\u{s}ev in the nineteenth
century. The convexity of \u{C}eby\u{s}ev sets is one of the basic
problems in this theory. In a finite-dimensional smooth normed linear
space a \u{C}eby\u{s}ev set is convex [2]. Also, every boundedly
compact \u{C}eby\u{s}ev set in a smooth Banach space is convex [3,7],
and in a Banach space which is uniformly smooth, each approximately
compact \u{C}eby\u{s}ev set is convex [4]. In addition, in a strongly
smooth space, every \u{C}eby\u{s}ev set with continuous metric
projection is convex [5,6]. Regarding the convexity of
\u{C}eby\u{s}ev sets, there are still several open problems. It is a
well-known problem whether a \u{C}eby\u{s}ev set in a Hilbert space
must be convex. Of course, in a finite-dimensional Hilbert space,
every \u{C}eby\u{s}ev set is convex. Conversely, we know that every
closed convex set in a strictly convex reflexive Banach space, and in
particular in a Hilbert space, is \u{C}eby\u{s}ev. However, the
problem of whether every \u{C}eby\u{s}ev set in a strictly convex
reflexive Banach space is convex is still open.
\section{\textbf{Basic definitions and Preliminaries}} In this section
we collect some elementary facts which will help us to establish
our main results.\\\textbf{Definition 2.1. } Let $(X,\|.\|)$ be a
real normed linear space, let $x \in X $, and let $X^{*}$ be its dual
space. For a nonempty subset $K$ of $X$, the distance of $x$ from
$K$ is defined as $d_{K}(x)= \inf \{\|x-v\| ; v\in K \}$. The set $K$
is said to be a \u{C}eby\u{s}ev set if each point in $X$ has a
unique best approximation in $K$. In other words, for every $x \in
X $, there exists a unique $v\in K$ such that $\|x-v\|= d_{K}(x)$.
(This concept was introduced by S. B. Stechkin in honour of the
founder of best approximation theory, \u{C}eby\u{s}ev.) The metric
projection is given by $P_{K}(x)= \{v\in K ; \|x-v\|= d_{K}(x) \}$,
which consists of the closest points in $K$ to $x$. The map $P_{K}$ is
said to be continuous if $P_{K}(x)$ is a singleton for each $x\in
X \setminus K$ and it is sequentially continuous.
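As a concrete illustration of these notions (an example of our own), consider the Hilbert space $\Bbb{R}^{2}$ and the closed unit ball $K$; the metric projection is single-valued at every point, so $K$ is a \u{C}eby\u{s}ev set:
\begin{verbatim}
# Illustration (ours): metric projection onto the closed unit ball K
# in the Hilbert space R^2 is P_K(x) = x / max(1, ||x||).
import numpy as np

def P_K(x):
    return x / max(1.0, np.linalg.norm(x))

x = np.array([3.0, 4.0])
v = P_K(x)
print(v, np.linalg.norm(x - v))   # [0.6 0.8], d_K(x) = 4.0
\end{verbatim}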
\\\textbf{Definition 2.2. } A norm $\|.\|$ on X is said to be Kadec if, each
weakly convergent sequence $(x_{n})_{n=1}^{\infty}$ in X with the
weak limit $x\in X$ converges in norm to $x$ whenever
$\|x_{n}\|\rightarrow\|x\|$ as $n \rightarrow
\infty$.\\\textbf{Definition 2.3. } The space $X$ is said to be
strictly convex if, $x=y$ whenever $x , y \in S(X)$ and
$\displaystyle{\frac{x+y}{2}\in S(X)}$, where $S(X)=\{x\in
X;\|x\|=1\}$.
\\\textbf{Remark 2.4. } Every Hilbert space is strictly convex. Hence the dual of each Hilbert space is strictly convex.\\Related to the notion of strict convexity, is the notion
of smoothness.\\ \textbf{Definition 2.5. }For each $x\in X$ the
element \ $x^{*}\in S(X^{*})$ satisfying
$\|x\|=\hspace{1mm}\langle x^{*},x \rangle $ is called the support
functional corresponding to $x$ and $X$ is smooth in a non-zero
$x\in X$ if, the support functional corresponding to $x$ is
unique.\\Of course, the Hahn-Banach extension theorem, ensures the
existence of at least one such support functional.\\Smoothness and
strict convexity are not quite dual properties. There are examples
of smooth spaces whose duals fail to be strictly
convex.\\\textbf{Theorem 2.6. }[1] Each Hilbert space is
smooth.\\\textbf{Example 2.7. }The space $\Bbb{R}^{2}$ with
Euclidian norm,
$$\|(x_{1},x_{2})\|=\sqrt{x_{1}^{2}+x_{2}^{2}}$$ is a smooth and
strictly convex space.
\\\\\textbf{Theorem 2.8. }[1]
If $X$ is a reflexive and smooth space, then the dual space $X^{*}$ is strictly convex.\\
\textbf{Definition 2.9.} The space $X$ is uniformly convex if, for
all sequences $( x_{n})_{n=1}^{\infty}$ and $(
y_{n})_{n=1}^{\infty}$ in $S(X)$, we have
$\displaystyle{\lim_{n\rightarrow\infty}\|x_{n}-y_{n}\|=0}$
whenever $\displaystyle{\lim_{n\rightarrow\infty}\|x_{n}+y_{n}\|=2}$.\\
\textbf{Theorem 2.10. }[1] Every uniformly convex space, is
strictly convex.\\\textbf{Remark 2.11. }[1] The converse of theorem
(2.10) is not necessarily true. For example, define a norm
$|\|.\||$ on $C[0,1]$ by
$|\|x\||^{2}=\|x\|_{\infty}^{2}+\|x\|_{2}^{2}$, where
$\|.\|_{\infty}$ and $\|.\|_{2}$, denote the norms of $C[0,1]$ and
$L^{2}[0,1]$, respectively. Then $|\|.\||$ is strictly convex but,
not uniformly convex on $C[0,1]$.\\\textbf{Theorem 2.12. }[1] Each
Hilbert space, is uniformly convex.\\\textbf{Theorem 2.13. }[1]
Every uniformly convex Banach space, is
reflexive.\\\textbf{Theorem 2.14. }[1] The norm
of every uniformly convex space, is Kadec.\\
\section{\textbf{Main Results}}In this section, we state conditions that ensure the
convexity of \u{C}eby\u{s}ev sets in Hilbert
spaces.\\\textbf{Theorem 3.1. } Let $K$ be a weakly closed set in a
reflexive space $X$ with Kadec norm. Then the metric projection
$P_{K}$ is continuous.
\\\textit{Proof. }Let $x \in X\backslash K$, $v \in P_{K}(x)$
and suppose $(x_{n})_{n=1}^{\infty} \subseteq X $,
$(v_{n})_{n=1}^{\infty} \subseteq P_{K}(x_{n})$ such that
$x_{n}\rightarrow x$ in norm. It is sufficient to show that
$P_{K}(x)$ is a singleton and $v_{n}\rightarrow v$ in norm. Since
$d_{K}$ is continuous, we have:
$$d_{K}(x)= \|x-v\|\leq \|v_{n}-x\|\leq\|x_{n}-v_{n}\|+\|x_{n}-x\|=d_{K}(x_{n})+\|x_{n}-x\|\rightarrow d_{K}(x)$$
So, $\displaystyle{\lim_{n\rightarrow\infty}\|v_{n}-x\|=\|x-v\|}$
and hence $(v_{n})_{n=1}^{\infty}$ is bounded. Thus
$(v_{n})_{n=1}^{\infty}$ is contained in a set $A$ which is
weakly closed and norm-bounded. Since $X$ is reflexive,
the set $A$ is weakly compact. Hence there exists a weakly
convergent subsequence $(v_{n_{k}})_{k=1}^{\infty}$ of
$(v_{n})_{n=1}^{\infty}$ whose weak limit $v_{0}$ lies in $A$;
such a $v_{0}$ must be in $K$. Note that the norm on a normed
space is lower semicontinuous for the weak topology. Then
$$\|x-v\|=d_{K}(x)\leq \|x-v_{0}\|\leq \displaystyle{\liminf_{k\rightarrow\infty}\|v_{n_{k}}-x\|=d_{K}(x)=\|x-v\|}$$
This implies $v_{0}=v$, and so $P_{K}(x)$ is a singleton. The
sequence $(x-v_{n_{k}})_{k=1}^{\infty}$ converges weakly to $x-v$ and
satisfies
$\displaystyle{\lim_{k\rightarrow\infty}\|v_{n_{k}}-x\|=\|x-v\|}$.
Since the norm on $X$ is Kadec, the sequence
$(x-v_{n_{k}})_{k=1}^{\infty}$ converges in norm to $x-v$.
Therefore, $( v_{n_{k}})_{k=1}^{\infty}$ converges to $v$ in norm,
and consequently $( v_{n})_{n=1}^{\infty}$ converges to $v$ in
norm. This proves that $P_{K}$ is continuous.\\Now by the
theorems (2.13), (2.14), (3.1), we have:\\\textbf{Corollary 3.2. }
Let $K$ be a weakly closed set in a uniformly convex Banach space
X. Then the metric projection $P_{K}$ is continuous.
\\\textbf{Theorem 3.3. }[5,6] Every \u{C}eby\u{s}ev set $K$ with
continuous metric projection $P_{K}$, in a Banach space $X$ with
strictly convex dual $X^{*}$, is convex.\\Now by remark (2.4) and
the previous theorem, we have:\\\textbf{Corollary 3.4. } Every
\u{C}eby\u{s}ev set with continuous metric projection, in a
Hilbert space is convex.\\Now by the theorems (3.3), (2.13), (2.8)
and corollary (3.2), we have:\\\textbf{Theorem 3.5. } Every weakly
closed \u{C}eby\u{s}ev set in a smooth uniformly convex Banach
space, is convex.\\Finally, by the theorems (2.6), (2.12) and the
previous theorem, we have:\\\textbf{Corollary 3.6.} Every weakly
closed \u{C}eby\u{s}ev set in a Hilbert space, is convex.\\
\section{\textbf{Acknowledgments}}
We wish to express our appreciation to our supervisor Dr. A.
Assadi for his advice and encouragement in the preparation of
this dissertation.\\
\section{Introduction}
Deep convolutional neural networks trained on a large number of labeled data boost the performance of image recognition on various tasks. However, the preparation of many labeled samples to train the network is time-consuming and expensive. The method for transferring knowledge from a label-rich domain (source domain) to a label-scarce domain (target domain) is termed as domain adaptation and enables us to reduce the cost for annotation. Specifically, the method for unsupervised domain adaptation (UDA) where we do not require any annotated target samples during training can solve the aforementioned difficulty.
The difficulty in the task involves the difference between each domain with respect to the texture, color, and appearance of objects. The classifier trained on the source domain typically does not work well on the other domain due to the domain-gap problem. Additionally, the target domain includes only unlabeled samples, and this implies that we do not know the classes that are present in the target domain.
A possible strategy involves collecting samples belonging to various classes subsuming those present in the target domain and adapting a model from the source to the target domain.
As a result, the target domain may not include some classes present in the source domain. This adaptation setting is termed \textit{partial domain adaptation} (PDA) and corresponds to an extremely practical setting.\par
However, several methods for UDA~\cite{GRL,DAN,MCD,domain_confusion} degrade in performance in the PDA setting because they assume that the source and target completely share the same classes.
Their aim involves matching the marginal feature distributions between different domains. If the target feature distribution is aligned with the source overall, the target samples can be assigned to classes absent from the target domain, as shown in the left of Fig.~\ref{fig:feature}. In summary, partially aligning marginal feature distributions is necessary in PDA.\par
Our method integrates three motivations to effectively deal with PDA. The first motivation involves precisely estimating the label distribution of the target to train a model on the classes present in the target domain.
Second, the extraction of discriminative features for the target domain is important for highly accurate classification of target samples as shown in the center and right of Fig.~\ref{fig:feature}. This figure indicates that considering the relationship between target samples and task-specific decision boundaries is important for extracting discriminative features.
The utilization of a classifier's output for the target domain is shown to be effective for extracting discriminative features~\cite{MCD}, although the method matches marginal feature distributions.
Third, to partially match feature distributions between domains, it is necessary to use a measurement that can partially evaluate the distance between domains. Many methods for UDA aim to measure the distance between the entire distributions of the source and target, which is not desirable in PDA.\par
In this paper, we propose a novel method called \textit{Two Weighted Inconsistency-reduced Networks} (TWINs). We utilize two classification networks that do not share their parameters. With respect to the first motivation, the label distribution of the target is precisely estimated by the outputs of the two networks for all target samples with which a classification loss is weighted. The estimation is more accurate than when using one network. With respect to the second and third motivations, we propose to minimize the domain divergence measured by the inconsistency of classifiers on target samples. Thus, the two networks are trained to agree on their predictions on the target samples to extract discriminative features. To partially measure the distance between domains, we propose not to use adversarial training, and this is different from \cite{MCD}.
Our method displays a connection with the theory of domain adaptation that measures the divergence between domains by the inconsistency of two classifiers.\par
We evaluate our method on digits, traffic signs, and object classification tasks on PDA setting. In most tasks, our method outperforms other existing methods by a large margin.
\section{Related Work}
\subsection{Unsupervised Domain Adaptation}
Several methods are proposed for unsupervised domain adaptation (UDA). In the present study, we mainly focus on methods that are based on deep learning, since they are proven as powerful learning systems.\par
\textbf{Feature distribution matching} to extract domain invariant features is the most popular method for UDA and includes Maximum Mean Discrepancy~\cite{MMD,DAN,JAN,RTN,WMMD}, Central Moment Discrepancy~\cite{CMD}, and CORAL~\cite{CORAL}.\par
\textbf{Domain classifier based method} through adversarial learning is also a representative method for UDA~\cite{GRL,CADA,LEL,CoGAN,MCD,domain_confusion,ADDA}. A domain classifier is trained to discriminate the domain from where the feature originates while a feature extractor is trained to deceive the domain classifier. Adversarial training aligns the feature distribution of the source and target with each other.
The methods are designed with the assumption that the label distributions of the source and target are approximately the same. When the assumption does not hold, such as in the case of partial domain adaptation, the target samples can be assigned to the class of the source that is absent in the target domain.\par
\textbf{Classifier's discrepancy based method} is recently proposed for UDA and significantly improves performance. Maximum Classifier Discrepancy~\cite{MCD} utilizes the outputs of task-specific classifiers to align features and also to extract more discriminative features for target samples. They construct two classifier networks with a shared feature extractor network. They train the two classifiers to output different predictions on the target samples while training the feature extractor to generate features that make the output of the two networks similar.
The method is useful since it considers the task-specific decision boundary and avoids generating ambiguous target features near class boundaries. Sampling two classifiers by using dropout achieves a similar effect~\cite{dropout}. However, these methods are also not effective for partial domain adaptation (PDA). They use adversarial training between the classifiers and the feature extractor, with which source and target features are likely to be strictly aligned. We discuss further details of this point in Sec.~\ref{sec:method}.\par
In this study, we introduce a method that can partially adapt features by weighting the classification loss with the estimated label distribution of the target domain and by training to reduce the inconsistency between two task-specific classifiers. Please note that our method does not rely on adversarial training. Deep mutual learning was proposed for large-scale supervised image classification~\cite{dml}. This method also has the objective of minimizing the inconsistency of two networks, but it was not shown to be useful for UDA. In our work, we present how to utilize the technique for PDA and why it is useful for this task.\par
\begin{figure*}
\centering
\includegraphics[clip,width=0.9\linewidth]{figure/training3.eps}
\caption{The overview of the proposed method where $X_s$ and $X_t$ denote source and target samples, respectively; and $F_1$ and $F_2$ denote two classifiers. The label distribution of the target domain ${\mathbf w}$ is estimated by two classifiers' outputs on target samples. The estimated label distribution ${\mathbf w}$ is used to weight classification loss on source samples ($L_s$). Inconsistency loss $L_t$ on target samples are also calculated by the difference between two classifiers' outputs.}
\label{fig:training}
\end{figure*}
\subsection{Partial Domain Adaptation}
In the PDA setting, the target domain contains the classes that are a subset of the source classes. To the best of our knowledge, all methods for PDA utilize a domain classifier to achieve adaptation~\cite{SAN,PADA,ImportanceWA}. The main idea of the aforementioned methods is to identify whether source samples belong to the classes present in the target domain and weight the task-specific classifier's loss or the domain classifier's loss to avoid the alignment of target samples with the source classes absent in the target domain.
For instance, Partial Adversarial Domain Adaptation (PADA)~\cite{PADA} estimates the label distribution of the target samples by averaging the outputs of the classifier for target samples and utilizes it to re-weight the task-specific classification loss and domain classification loss.
The major differences between our method and these methods are as follows. First, we estimate the label distribution of the target samples with two parameter-unshared networks, which enables accurate estimation. Second, we introduce the idea of task-specific-classifier-based feature distribution alignment for this task. Existing methods do not use task-specific classifiers to align features, which can generate ambiguous features near the decision boundary.
\section{Proposed Method}\label{sec:method}
This section presents the proposed method for partial domain adaptation in detail. The overview of our method is illustrated in Fig.~\ref{fig:training}. The following two key ideas are involved in our method: classification loss on the source weighted by the estimated label distribution in the target domain and feature distribution alignment by using task-specific classifiers. Additionally, we propose a training procedure to integrate the two ideas to achieve effective adaptation.
The label distribution of the target domain is estimated by using two networks that do not share parameters. When the estimated label distribution is obtained, it is used to make the networks focus on classifying classes present in the target domain as described in Sec.~\ref{sec:weighted_loss}. Specifically, the distribution is used to weight the classification loss on the source samples.
Furthermore, we conduct feature distribution alignment by minimizing the inconsistency of the two task-specific classifiers, and this leads to the extraction of discriminative features (and not ambiguous features) for target samples as described in Sec.~\ref{sec:inconsistency_loss}.
The estimation of the label distribution and feature distribution alignment are alternately performed. All the training procedures are described in Sec.~\ref{sec:procedure}.
We make a connection between our method and the theory of domain adaptation in Sec.~\ref{sec:theory}.\par
We now state our definitions and notation. The source domain data $X_s \in \mathbb{R}^{d\times n_s}$ are drawn from distribution $P_s\left(X_s\right)$, and target domain data $X_t \in \mathbb{R}^{d\times n_t}$ are drawn from distribution $P_t\left(X_t\right)$, where $d$ denotes the dimension of the data instance, and $n_s$ and $n_t$ denote the number of samples in the source and target domain, respectively. Due to the domain shift, $P_s\left(X_s\right) \neq P_t\left(X_t\right)$. We use labeled source samples $\mathcal{D}_s=\left\{\left({\mathbf{x}}_i^s,y_i^s\right)\right\}_{i=1}^{n_s}, {\mathbf{x}}_i^s \in \mathbb{R}^d$ and unlabeled target samples $\mathcal{D}_t=\left\{{\mathbf{x}}_i^t\right\}_{i=1}^{n_t}, {\mathbf{x}}_i^t \in \mathbb{R}^d$ during training. The source label space $\mathcal{Y}_s$ and the target one $\mathcal{Y}_t$ are different. The target domain label space is contained in the source domain label space ($\mathcal{Y}_t \subseteq \mathcal{Y}_s$). We use two networks, namely $F_1$ and $F_2$, with exactly the same architecture although they do not share their parameters $\theta_1,\theta_2$.
The probabilities that ${\mathbf x}$ is classified into class $k$ when it is inputted into $F_1$ and $F_2$ are denoted by $p_1(y=k|{\mathbf x})$ and $p_2(y=k|{\mathbf x})$, respectively. Furthermore, we use the notation $p_1({\mathbf y}|{\mathbf x})$ and $p_2({\mathbf y}|{\mathbf x})$ to denote the $|\mathcal{Y}_s|$-dimensional probabilistic output for ${\mathbf x}$ inputted into $F_1$ and $F_2$, respectively. We assume that the outputs are obtained after the softmax layer.
\par
\subsection{Weighted Loss with the Label Distribution}\label{sec:weighted_loss}
\noindent
\textbf{Target Label-Distribution Estimation.}\ \ We explain how to estimate the label distribution of the target domain.
As we mentioned, we assume that there are two networks that are trained with the following classification loss on the source domain:
\begin{align}
L_s(X_s,Y_s)\mathalpha{=}L_{s_1}\left(F_1\left(X_s\right),Y_s\right)
\mathalpha{+}L_{s_2}\left(F_2\left(X_s\right),Y_s\right),\label{eq:loss}
\end{align}
where $L_{s_1}$ and $L_{s_2}$ denote the classification losses with respect to $F_1$ and $F_2$, respectively.
$L_{s_j}(j=1,2)$ is defined as follows,
\begin{equation}
L_{s_j}=-\frac{1}{n_s}\sum_{i=1}^{n_s}\sum_{k=1}^{|\mathcal{Y}_s|}\mbox{1}\hspace{-0.25em}\mbox{l}_{[k=y_i^s]}\log p_j\left(y=k|{\mathbf x}_i^s\right),
\label{eq:entropy}
\end{equation}
where $\mbox{1}\hspace{-0.25em}\mbox{l}_{[k=y_i^s]}$ is $1$ when $k=y_i^s$, and $0$ otherwise.\par
We focus on the estimation of the label distribution by averaging the outputs of two networks for all target samples. Specifically, ${\mathbf w}$ is calculated as,
\begin{equation}
\mathbf{w}=\frac{|\mathcal{Y}_s|}{2n_t}\sum_{i=1}^{n_t}\left(p_1(\mathbf{y}|\mathbf{x}_i^t)+p_2(\mathbf{y}|\mathbf{x}_i^t)\right),\label{eq:weight}
\end{equation}
where ${\mathbf w}$ denotes a $|\mathcal{Y}_s|$-dimensional vector. To obtain the weight, we multiply the label distribution by the number of source classes to prevent the loss from becoming very small.\par
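A possible implementation of this estimation step (a sketch in PyTorch; the function and variable names are ours, and the target loader is assumed to yield batches of unlabeled images) is:
\begin{verbatim}
# Sketch (PyTorch): estimate the target label distribution w from the
# averaged softmax outputs of the two networks F1 and F2, as above.
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_weights(F1, F2, target_loader, num_classes, device):
    w = torch.zeros(num_classes, device=device)
    n = 0
    for x_t in target_loader:            # unlabeled target batches
        x_t = x_t.to(device)
        w += (F.softmax(F1(x_t), dim=1)
              + F.softmax(F2(x_t), dim=1)).sum(0)
        n += x_t.size(0)
    return num_classes * w / (2.0 * n)   # scaled by |Y_s| as above
\end{verbatim}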
\noindent
\textbf{Target Label-Distribution Weighted Loss.}\ \ The estimated label distribution is used to make the model focus on classifying classes present in the target domain.
Each dimension of the vector indicates the approximated ratio of the target samples of the corresponding class. While training the networks with source samples, we aim to focus on the present classes and suppress the effect of absent classes in the target.
The weighted classification loss $L_s'$ is summarized as follows,
\begin{align}
L_s^{\prime}&= L_{s_1}^{\prime}(F_1(X_s),Y_s)+L_{s_2}^{\prime}(F_2(X_s),Y_s),
\label{eq:loss_weighted}\\
L_{s_j}^{\prime}&=-\frac{1}{n_s}\sum_{i=1}^{n_s}\sum_{k=1}^{|\mathcal{Y}_s|}w_k\mbox{1}\hspace{-0.25em}\mbox{l}_{[k=y_i^s]}\log p_j\left(y=k|{\mathbf x}_i^s\right),
\label{eq:entropy_weighted}
\end{align}
where $w_k$ denotes the k-th element of $\mathbf{w}$.
With the loss function, the classifiers can increasingly focus on classes present in the target domain when trained on source samples.
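In code, the weighted loss can be written, e.g., as follows (a sketch continuing the snippet above; note that PyTorch's default \texttt{mean} reduction would divide by the sum of the sample weights rather than by $n_s$, so we use \texttt{sum} and divide explicitly):
\begin{verbatim}
# Sketch of the weighted source classification loss L_s' defined above.
def weighted_source_loss(logits1, logits2, y_s, w):
    n_s = y_s.size(0)
    l1 = F.cross_entropy(logits1, y_s, weight=w, reduction='sum') / n_s
    l2 = F.cross_entropy(logits2, y_s, weight=w, reduction='sum') / n_s
    return l1 + l2
\end{verbatim}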
\begin{figure}
\centering
\includegraphics[clip,width=0.7\linewidth]{figure/inconsistent.eps}
\caption{The inconsistent region marked with diagonal lines denotes the area where the outputs of the two classifiers are inconsistent and the features should be far from the source samples. Reducing the inconsistency between the two classifiers' outputs aligns target with source samples considering task-specific decision boundaries.}
\label{fig:inconsistent}
\end{figure}
\subsection{Inconsistency Loss}\label{sec:inconsistency_loss}
We explain how we align the target samples with the source. We train $F_1$ and $F_2$ to reduce the inconsistency of predictions for target samples.
As shown in the study of MCD \cite{MCD} and Fig.~\ref{fig:inconsistent}, we can measure how discriminative the target features are by examining the inconsistency of the task-specific classifiers. If the inconsistency is high, the features should be far from the source. Conversely, if the inconsistency is low, the features should be near the source with respect to the task-specific decision boundary.\par
Then, we propose using an objective to minimize the inconsistency between the predictions of two classifiers for target samples and call it \textit{Inconsistency loss}. We make a connection with the theory of domain adaptation in Sec.~\ref{sec:theory}. In the study, we utilize the $L_1$ distance as inconsistency loss following \cite{MCD} although other functions can be used here. The inconsistency loss is as follows:
\begin{equation}
L_t = \frac{1}{n_t} \sum_{i=1}^{n_t}\|p_1\left({\mathbf y}|{\mathbf x}_i^t\right)-p_2\left({\mathbf y}|{\mathbf x}_i^t\right)\|_1.
\label{eq:loss_t}
\end{equation}
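In code, this loss takes only a few lines (a sketch, continuing the notation above):
\begin{verbatim}
# Sketch of the inconsistency loss L_t: mean L1 distance between the
# two classifiers' probabilistic outputs on target samples.
def inconsistency_loss(logits1_t, logits2_t):
    p1 = F.softmax(logits1_t, dim=1)
    p2 = F.softmax(logits2_t, dim=1)
    return (p1 - p2).abs().sum(dim=1).mean()
\end{verbatim}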
Our method is different from MCD~\cite{MCD} since the MCD includes a training step that increases the discrepancy of the task-specific classifiers. This is intended to effectively measure the distance between domains, and this should lead to strictly matching the feature distribution. Hence, the aforementioned type of strict matching should not be effective for PDA. This fact is empirically demonstrated in the experiments. Then, we propose not to employ the step of increasing the discrepancy of the task-specific classifiers.
\begin{algorithm}[t]
\caption{Training of TWINs. $N_1$, $N_2$, $N_3$ denote maximum iterations of Phase 1, the number of interval iterations of Phase 2, and maximum iterations of Phase 3, respectively.}
\label{alg:twins}
\begin{algorithmic}
\REQUIRE
Data: $\mathcal D_s=\left\{\left({\mathbf{x}}_i^s,y_i^s\right)\right\}_{i=1}^{n_s},\mathcal{D}_t=\left\{{\mathbf x}_i^t\right\}_{i=1}^{n_t}$\\
Prediction Model: $F_1,F_2$\\
\STATE Initialize model parameters ${\mathbf \theta}_1,{\mathbf \theta}_2$
\FOR{$i = 1$ to $N_1$}
\STATE{Get random minibatch $\mathcal{D}_s^{\prime}$
from $\mathcal{D}_s$.}
\STATE Update model parameters ${\mathbf \theta}_j\ (j=1,2)$ by descending their stochastic gradients with respect to Eq.~\ref{eq:loss}:
\begin{center}
{$\nabla_{{\mathbf \theta}_j}L_{s_j}$}
\end{center}
\ENDFOR
\FOR {$i=N_1+1$ to $N_1+N_3$}
\IF {$i \% N_2 == 0$}
\STATE Update the label distribution $\mathbf{w}$ via Eq.~\ref{eq:weight}.
\ENDIF
\STATE{Get random minibatch $\mathcal{D}^{\prime}$
from $\mathcal{D}_s$ and $\mathcal{D}_t$.}
\STATE Update model parameters $\theta_j\ (j=1,2)$ by descending their stochastic gradients with respect to Eq.~\ref{eq:loss_all}:
\begin{center}
$\nabla_{{\mathbf \theta}_j}L_\mathrm{total}$
\end{center}
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Training Procedure}\label{sec:procedure}
We summarize how we integrated the two ideas to achieve partial domain adaptation. The complete training procedure is summarized in Alg.~\ref{alg:twins}. Phase~2 and Phase~3 are also visualized in Fig.~\ref{fig:training}.
\noindent{\textbf{Phase~1: Pre-train classifiers with only source samples.}}\ \ We pre-train the two networks using source samples only. In this phase, we do not use any adaptation method. The phase is not required when a pre-trained model is available.\par
\noindent{\textbf{Phase~2: Estimate the target label distribution.}}\ \ We estimate the label distribution of the target domain with the two classification networks (Eq.~\ref{eq:weight}), using all training target samples; a sketch of this step is given below.\par
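Since the two classifiers jointly estimate the label distribution, Phase~2 can be sketched as follows. For illustration we assume here that $\mathbf{w}$ is the mean of the two classifiers' softmax outputs over the target set and that the loader yields batches of target images as tensors; the exact estimator is the one given by Eq.~\ref{eq:weight}.
\begin{verbatim}
import torch
import torch.nn.functional as TF

@torch.no_grad()
def estimate_label_distribution(F1, F2, tgt_loader, num_classes):
    # Phase 2 (sketch): average both classifiers' softmax outputs
    # over all target samples -- our reading of Eq. (weight).
    w = torch.zeros(num_classes)
    n = 0
    for x in tgt_loader:
        p = 0.5 * (TF.softmax(F1(x), dim=1) + TF.softmax(F2(x), dim=1))
        w += p.sum(dim=0)
        n += x.size(0)
    return w / n
\end{verbatim}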
\noindent{\textbf{Phase~3: Optimize the parameters of classifiers.}}\ \ We train networks by using weighted classification loss on source samples (Eq.~\ref{eq:loss_weighted}) and inconsistency loss calculated on target samples (Eq.~\ref{eq:loss_t}). The overall loss function used in the phase is as follows:
\begin{equation}
L_{\mathrm{total}}=L_s^{\prime}+L_t.
\label{eq:loss_all}
\end{equation}\par
\noindent{\textbf{Repeat Phase~2 and Phase~3 alternately.}}\ \ We train the two networks by alternating Phase~2 and Phase~3. In Phase~3, the target features are aligned with the source, which makes the estimation of the label distribution more accurate; a better estimate in turn yields better feature alignment. The label distribution estimation and the feature alignment based on it therefore benefit from each other, and so we alternate between Phase~2 and Phase~3. The training objective is simpler than that of MCD~\cite{MCD} since we do not employ adversarial learning: although the phase of estimating the target label distribution is required, training the networks simply amounts to minimizing Eq.~\ref{eq:loss_all}.
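Putting the pieces together, the alternation amounts to the loop below, a schematic sketch of Alg.~\ref{alg:twins} that reuses the functions sketched earlier. The data iterators, the number of classes \texttt{K}, and the optimizer are assumed to be set up beforehand, and $\mathbf{w}$ is estimated once before the loop so that it is defined from the first iteration.
\begin{verbatim}
def train_twins(F1, F2, optimizer, src_iter, tgt_iter,
                tgt_loader, K, N1, N2, N3):
    # Schematic Phase 2 / Phase 3 alternation of Alg. (twins).
    w = estimate_label_distribution(F1, F2, tgt_loader, K)
    for i in range(N1 + 1, N1 + N3 + 1):
        if i % N2 == 0:  # Phase 2: refresh the label distribution
            w = estimate_label_distribution(F1, F2, tgt_loader, K)
        xs, ys = next(src_iter)  # Phase 3: minimize Eq. (loss_all)
        xt = next(tgt_iter)
        loss = (weighted_classification_loss(F1(xs), ys, w)
                + weighted_classification_loss(F2(xs), ys, w)
                + inconsistency_loss(F1(xt), F2(xt)))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
\end{verbatim}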
\subsection{Theoretical Insight}\label{sec:theory}
Given that MCD~\cite{MCD} is motivated by the theory of Ben-David \etal~\cite{ben2010theory} and that our method is related to it, we demonstrate the relationship between our method and the theory in this section. Ben-David \etal~\cite{ben2010theory} bound the expected error on the target samples $R_{\mathcal{T}}(h)$ by the following three terms: (i) the expected error on the source domain $R_{\mathcal{S}}(h)$; (ii) the $\mathcal{H} \Delta \mathcal{H}$-distance $d_{{\mathcal{H}\Delta\mathcal{H}}}(\mathcal{S},\mathcal{T})$, which is measured as the discrepancy between two classifiers; and (iii) the shared error $\lambda$ of the ideal joint hypothesis, which is treated as a constant. Here, $\mathcal{S}$ and $\mathcal{T}$ denote the source and target domains, respectively.
\begin{theorem1}\label{th:th_1}
Let $\mathcal{H}$ be the hypothesis class. Given two domains $\mathcal{S}$ and $\mathcal{T}$, we obtain the following:
\begin{eqnarray}
\forall h \in \mathcal{H}, R_{\mathcal{T}}(h)\leq R_{\mathcal{S}}(h) +\frac{1}{2}{d_{\mathcal{H} \Delta \mathcal{H}}(\mathcal{S},\mathcal{T})}+\lambda,
\label{eq:main}
\end{eqnarray}
where
\begin{align}
&\frac{1}{2}d_{{\mathcal{H}\Delta\mathcal{H}}}(\mathcal{S},\mathcal{T})\\\nonumber
=&\sup_{(h,h{'})\in \mathcal{H}^{2}}\left| \underset{\mathbf{x}\sim \mathcal{S}}{\mathbf{E}}{\rm I}\bigl[h(\mathbf{x})\!\neq\! h^{'}(\mathbf{x}) \bigr]\mathalpha{-} \underset{\mathbf{x}\sim \mathcal{T}}{\mathbf{E}} {\rm I}\bigl[h(\mathbf{x})\!\neq\! h^{'}(\mathbf{x}) \bigr]\right|, \\
&\lambda=\min_{h\in\mathcal{H}} \left[R_{\mathcal{S}}(h)+R_{\mathcal{T}}(h)\right].
\end{align}
Here, $R_{\mathcal{T}}(h)$ denotes the error of hypothesis $h$ on the target domain, and $R_{\mathcal{S}}(h)$ denotes the corresponding error on the source domain. Additionally, ${\rm I}[a]$ denotes the indicator function, which equals 1 if the predicate $a$ is true and 0 otherwise.
\label{th:thm1}
\end{theorem1}\par
Based on this theory, the divergence between two domains can be approximated by the discrepancy between two classifiers.
In MCD \cite{MCD}, the authors approximate $d_{{\mathcal{H}\Delta\mathcal{H}}}(\mathcal{S},\mathcal{T})$ by adversarial training between two classifiers and a feature extractor. They assume that the term $\scalebox{0.9}{$\displaystyle \underset{\mathbf{x}\sim \mathcal{S}}{\mathbf{E}} {\rm I}\bigl[h(\mathbf{x}) \neq h^{'}(\mathbf{x}) \bigr]$}$ is extremely small because the source samples are labeled. Therefore, $d_{{\mathcal{H}\Delta\mathcal{H}}}(\mathcal{S},\mathcal{T})$ is approximately calculated as $\scalebox{0.9}{$\displaystyle\sup_{(h,h{'})\in \mathcal{H}^{2}}\underset{\mathbf{x}\sim \mathcal{T}}{\mathbf{E}} {\rm I}\bigl[h(\mathbf{x}) \neq h^{'}(\mathbf{x}) \bigr],$}$ which is the supremum of the expected disagreement between the two classifiers' predictions on target samples. To realize this supremum, they train the two classification networks to disagree in their predictions on the target.\par
We instead minimize the left-hand side of the following inequality:
\scalebox{0.9}{
$\displaystyle
\underset{{\mathbf{x} \sim \mathcal{T}}}{\mathbf{E}}{\rm I}\bigl[h(\mathbf{x})\neq h^{'}(\mathbf{x}) \bigr]
\leq\sup_{(h,h{'})\in \mathcal{H}^{2}}\underset{\mathbf{x}\sim \mathcal{T}}{\mathbf{E}}{\rm I}\bigl[h(\mathbf{x})\neq h^{'}(\mathbf{x})\bigr].
$
}
Therefore, we approximate the divergence between domains by a quantity smaller than the divergence used in MCD~\cite{MCD}. If we could perfectly estimate the label distribution of the target domain in Phase~2, then strictly aligning the feature distributions by using $\scalebox{0.9}{$\displaystyle \sup_{(h,h{'})\in \mathcal{H}^{2}}\underset{\mathbf{x}\sim \mathcal{T}}{\mathbf{E}} {\rm I}\bigl[h({\bf x}) \neq h^{'}(\mathbf{x}) \bigr]$}$ would be effective. In practice, however, the estimate always contains some error, and thus minimizing the relaxed divergence $\scalebox{0.9}{$\displaystyle \underset{{\mathbf{x} \sim \mathcal{T}}}{\mathbf{E}} {\rm I}\bigl[h(\mathbf{x}) \neq h^{'}(\mathbf{x}) \bigr]$}$ is more appropriate.
\section{Experiments}
We evaluate our method on several datasets and compare it with state-of-the-art deep learning methods for domain adaptation. It should be noted that all experiments are performed in the unsupervised setting, where labels in the target domain are not given. The goal of the experiments is to demonstrate that our method is effective on both digit classification and general object classification datasets.
{\tabcolsep = 0.6mm
\begin{table}[t]
\centering
\scalebox{0.9}{
\begin{tabular}{l|cccc}\hline
\toprule[1.5pt]
Method
&
\begin{tabular}{c}
MNIST\\$\downarrow$\\USPS
\end{tabular}
&
\begin{tabular}{c}
USPS\\$\downarrow$\\MNIST
\end{tabular}
&
\begin{tabular}{c}
SVHN\\$\downarrow$\\MNIST
\end{tabular}
&
\begin{tabular}{c}
SYN SIGNS\\$\downarrow$\\GTSRB
\end{tabular}
\\ \hline
Source Only & 85.2 & 80.0 & 73.9 & 89.2 \\ \hline
\multicolumn{5}{c}{\textit{Methods for Unsupervised Domain Adaptation}} \\\hline
DAN~\cite{DAN} & 83.5 & 80.7& 70.9 & 90.2 \\
DANN~\cite{GRL} & 67.1 & 72.1 & 39.8 & 55.5 \\
MCD~\cite{MCD} & 66.4 & 59.4 & 71.2 & 93.1 \\
\hline
\multicolumn{5}{c}{\textit{Methods for Partial Domain Adaptation}} \\\hline
IWAN~\cite{ImportanceWA} & 90.6 & 85.7 &75.6 & 77.7\\
PADA~\cite{PADA} & 78.2 & 73.9 & 44.1 & 71.2\\ \hline
TWINs (Ours)& \textbf{96.3} & \textbf{90.2} & \textbf{99.6} & \textbf{95.5}\\
\bottomrule[1.5pt]
\end{tabular}
}
\caption{Accuracy on the digit and traffic sign datasets. In the tasks MNIST $\to$ USPS, USPS $\to$ MNIST, and SVHN $\to$ MNIST, the source domain includes 10 classes and the target domain includes 5 classes. In the task SYN SIGNS $\to$ GTSRB, the source domain includes 43 classes and the target domain includes 20 classes. TWINs achieves the strongest results on all four evaluated partial domain adaptation scenarios.}
\label{tab:digits}
\end{table}
}
\subsection{Experiments on Digit and Traffic Sign Datasets}\label{sec:digits}
In this experiment, we evaluate our proposed method on adaptation between digit datasets and between traffic sign datasets. The networks are trained from scratch in this setting.\par
\noindent{\textbf{Setup.}}\ \ We utilize digit datasets (including MNIST~\cite{mnist}, Street View House Numbers (SVHN)~\cite{SVHN}, and US Postal handwritten digit dataset (USPS)~\cite{USPS}) and traffic sign datasets (including Synthetic Traffic Signs (SYN SIGNS)~\cite{SYN_SIGNS} and German Traffic Signs Recognition Benchmark (GTSRB)~\cite{GTSRB}).
The digit datasets consist of 10 classes, and the traffic sign datasets consist of 43 classes.
We evaluate our method across four domain adaptation tasks (\ie, \textbf{MNIST $\to$ USPS}, \textbf{USPS $\to$ MNIST}, \textbf{SVHN $\to$ MNIST}, and \textbf{SYN SIGNS $\to$ GTSRB}). When a dataset is used as the target domain, we use the first five classes (in ascending order) for the digit datasets and the first twenty classes for the traffic sign datasets, and we use all images of those classes for training.\par
Since extant studies do not report results for the above PDA scenarios, we essentially employ the optimization procedure and CNN architecture used in~\cite{MCD}. Optimization proceeds via the Adam optimizer~\cite{Adam} for $30$ epochs with a learning rate of $2.0 \times 10^{-4}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and a batch size of $256$ images (128 per domain) in all experiments. We pre-train the classifiers on source samples only for the first 10 epochs (Phase~1), and subsequently estimate the label distribution of the target samples (Phase~2) and optimize the parameters of the classifiers on source and target samples (Phase~3), repeatedly.
We follow the protocol of unsupervised domain adaptation by using all labeled source data and all unlabeled target data and do not use validation samples to tune hyperparameters per each adaptation scenario. Further details are provided in our supplementary material due to space limitations.\par
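For reference, the optimizer configuration stated above corresponds to the following PyTorch call, where the placeholder model stands in for our two classification networks:
\begin{verbatim}
import torch
import torch.nn as nn

model = nn.Linear(10, 5)  # placeholder for the classification networks
optimizer = torch.optim.Adam(model.parameters(), lr=2.0e-4,
                             betas=(0.9, 0.999))
\end{verbatim}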
\noindent{\textbf{Results.}}\ \ Our method achieves the best accuracy on all four partial domain adaptation scenarios, as shown in Tab.~\ref{tab:digits}.
Our method outperforms existing methods for PDA. The results indicate the effectiveness of using the task-specific classifiers' inconsistency as the distance between domains. IWAN~\cite{ImportanceWA} and PADA~\cite{PADA} do not perform better even than the methods for UDA. This is potentially because these models are based on DANN, whose training process is unstable in some scenarios~\cite{ADDA}. Furthermore, they partially align the feature distributions by weighting source samples when training a domain classifier, which can make the training of the domain classifier even more unstable.\par
\begin{figure}[t]
\centering
\subfigure[Source only]{\includegraphics[trim=0cm 3.5cm 0cm 3.5cm,clip,width=0.45\linewidth]{figure/source_emb.eps}
\label{fig:source_emb}}
\subfigure[MCD]{\includegraphics[trim=0cm 3.5cm 0cm 3.5cm,clip,width=0.45\linewidth]{figure/mcd_emb.eps}
\label{fig:MCD_emb}}
\subfigure[PADA]{\includegraphics[trim=0cm 3.5cm 0cm 3.5cm,clip,width=0.45\linewidth]{figure/pada_emb.eps}\label{fig:PADA_emb}}
\subfigure[TWINs]{\includegraphics[trim=0cm 3.5cm 0cm 3.5cm,clip,width=0.45\linewidth]{figure/twins_emb.eps}
\label{fig:twins_emb}}
\caption{t-SNE visualization of features obtained from the second-to-last fully connected layer of (a) source only, (b) MCD, (c) PADA, and (d) TWINs. The transfer task is MNIST (10 classes) $\to$ USPS (5 classes). Blue/light blue dots correspond to source domain samples whose classes are present/absent in the target domain, while orange dots correspond to target domain samples. All samples are testing samples. The results indicate that our method enables target samples to be aligned with the classes present in the target domain. Furthermore, our method extracts discriminative features that respect the classification boundaries.}
\label{fig:t_SNE}
\end{figure}
\begin{table*}[t!]
\centering
\scalebox{0.9}{
\begin{tabular}{l|cccccc|c}
\toprule[1.5pt]
Method
& \textbf{A $\to$ W} & \textbf{D $\to$ W} & \textbf{W $\to$ D} & \textbf{A $\to$ D} & \textbf{D $\to$ A} & \textbf{W $\to$ A} & Avg.\\ \hline
ResNet~\cite{ResNet} & 54.5 & 94.6 & 94.2 & 65.6 & 73.2 & 71.7 & 75.6\\ \hline
\multicolumn{8}{c}{\textit{Methods for Unsupervised Domain Adaptation}} \\\hline
DAN~\cite{DAN} & 46.4 & 53.6 & 58.6 & 42.7 & 65.7 & 65.3 & 55.4\\
DANN~\cite{GRL} & 41.4 & 46.8 & 38.9 & 41.4 & 41.3 & 44.7 & 42.4\\
ADDA~\cite{ADDA} & 43.7 & 46.5 & 40.1 & 43.7 & 42.8 & 46.0 & 43.8\\
RTN~\cite{RTN} & 75.3 & 97.1 & 98.3 & 66.9 & 85.6 & 85.7 & 84.8\\
JAN~\cite{JAN} & 43.5 & 53.6 & 41.4 & 35.7 & 51.0 & 51.6 & 46.1\\
LEL~\cite{LEL} & 73.2 & 93.9 & 96.8 & 76.4 & 83.6 & 84.7 & 84.8\\
\hline
\multicolumn{8}{c}{\textit{Methods for Partial Domain Adaptation}} \\\hline
IWAN~\cite{ImportanceWA} & 77.7 & 98.4 & \bf{100} & 81.5 & 77.7 & 73.4 & 84.8\\
PADA~\cite{PADA} & \bf{86.5} & \bf{99.3} & \bf{100} & 82.2 & 92.7 & \bf{95.4}& 92.7\\ \hline
TWINs (Ours) & 86.0 & \textbf{99.3} & \textbf{100} & \textbf{86.8} & \textbf{94.7} & 94.5 & \textbf{93.6} \\
\bottomrule[1.5pt]
\end{tabular}
}
\caption{Accuracy on \textit{Office-31} dataset. The source domain includes 31 classes, and the target domain includes 10 classes. TWINs achieves results that either equal or surpass those of existing methods.}
\label{tab:office}
\vspace{-1ex}
\end{table*}
\noindent\textbf{Feature Visualization.}\ \ We visualize the feature distributions via t-SNE~\cite{t_SNE} to qualitatively compare our method with MCD~\cite{MCD} and PADA~\cite{PADA}. The features are extracted from the middle layer of the network.
The visualized feature distribution is shown in Fig.~\ref{fig:t_SNE}.
As shown in Fig.~\ref{fig:twins_emb}, our method aligns target samples with the source classes present in the target domain and acquires discriminative features by considering the task-specific decision boundary, thereby enabling high classification performance.
When MCD is applied (Fig.~\ref{fig:MCD_emb}), target samples are not correctly aligned with the source classes present in the target domain. Furthermore, MCD fails to extract discriminative features for target samples because it aims to match the overall feature distributions.
As shown in Fig.~\ref{fig:PADA_emb}, PADA extracts ambiguous features for target samples and fails to align them with the source classes present in the target domain. Comparing this result with ours shows the effectiveness of using the decision boundary's information for feature alignment.
\par
\subsection{Experiments on \textbf{\textit{Office-31}} Datasets}
We further evaluate our proposed method on an object classification task.\par
\noindent{\textbf{Setup.}}\ \ \textit{Office-31}~\cite{office} is a benchmark dataset for domain adaptation.
It contains a total of 4110 images across 31 classes in three domains: \textit{Amazon} (\textbf{A}), which contains images downloaded from online merchants (\url{www.amazon.com}), and \textit{Webcam} (\textbf{W}) and \textit{DSLR} (\textbf{D}), which are captured with a web camera and a digital SLR camera, respectively.
We evaluate our method across the following six scenarios: \textbf{A $\to$ W}, \textbf{D $\to$ W}, \textbf{W $\to$ D}, \textbf{A $\to$ D}, \textbf{D $\to$ A}, and \textbf{W $\to$ A}. When a domain is used as the target domain, we use the samples of ten classes shared by \textit{Office-31} and Caltech-256~\cite{caltech} by following~\cite{PADA,ImportanceWA}.\par
We follow standard evaluation protocols and use all labeled source data and all unlabeled target data for unsupervised domain adaptation. We use the PyTorch-provided ResNet-50~\cite{ResNet} pre-trained on ImageNet~\cite{ImageNet} as the two classifiers of our method. We essentially use the same hyperparameters and network architectures as~\cite{PADA}. The final fully connected layer is removed and replaced by a randomly initialized three-layer fully connected network. We fine-tune all pre-trained feature layers and train the newly initialized fully connected layers. We use mini-batch stochastic gradient descent (SGD) with a momentum of 0.9, and the learning rate is adjusted during SGD by $\eta_p = \frac{\eta_0}{(1+\alpha p)^\gamma}$, where $p$ denotes the training progress changing from 0 to 1, and $\eta_0=0.001$, $\alpha=0.001$, $\gamma=0.75$.
In this experiment, we have access to a pre-trained model, and thus we skip Phase~1 and begin training from Phase~3.\par
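For reference, the learning-rate schedule above can be written as the following small helper; applying it by updating the optimizer's parameter groups, as in the usage comment, is one possible choice:
\begin{verbatim}
def office31_lr(p, eta0=1e-3, alpha=1e-3, gamma=0.75):
    # eta_p = eta0 / (1 + alpha * p) ** gamma, progress p in [0, 1]
    return eta0 / (1.0 + alpha * p) ** gamma

# usage: before step t out of T total steps,
#   for g in optimizer.param_groups:
#       g["lr"] = office31_lr(t / T)
\end{verbatim}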
\noindent{\bf Results.}\ \ Our method achieves results that either equal or surpass those of existing methods, as shown in Tab.~\ref{tab:office}. As in the experiments on the digit and traffic sign datasets, methods for UDA tend to perform worse than a model trained only on source samples. Methods for PDA perform well when a pre-trained CNN feature extractor is used and fine-tuned. Nevertheless, the performance of our model exceeds that of extant methods on average because our method accounts for the relationship between target samples and the task-specific decision boundary.
\subsection{Empirical Analysis}\label{sec:empirical}
We conduct empirical analyses to clarify the characteristics of our method.
\par
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{figure/graph.eps}
\caption{Accuracy when the number of target classes varies. From left to right, the adaptation scenarios are MNIST $\to$ USPS, USPS $\to$ MNIST, SVHN $\to$ MNIST, and SYN SIGNS $\to$ GTSRB.
Our method performs better than other methods in all settings.}
\label{fig:class_number}
\end{center}
\vspace{-2ex}
\end{figure*}
\noindent{\bf Study on the Number of Target Classes.}\ \ We explore the prediction performance of our method when the number of target classes varies. We report results for the tasks MNIST $\to$ USPS, USPS $\to$ MNIST, SVHN $\to$ MNIST, and SYN SIGNS $\to$ GTSRB in Fig.~\ref{fig:class_number}. In all settings, the performance of our method is comparable to or exceeds that of the other methods. When the number of target classes equals the number of source classes, which is the standard unsupervised domain adaptation setting, our method also performs better than the other methods; it is therefore useful for the standard domain adaptation setting too.
When the number of target classes is small, the performance of our method occasionally drops, although it still performs better than the other existing methods.
\par
\begin{figure}[t]
\begin{center}
\subfigure[Ground Truth]{\includegraphics[width=0.48\linewidth]{figure/gt.eps}\label{fig:ground_truth}
}
\subfigure[Phase 1]{\includegraphics[width=0.48\linewidth]{figure/epoch10.eps}\label{fig:phase1}
}
\subfigure[11 epoch]{\includegraphics[width=0.48\linewidth]{figure/epoch11.eps}\label{fig:middle}
}
\subfigure[30 epoch]{\includegraphics[width=0.48\linewidth]{figure/epoch29.eps}\label{fig:final}
}
\caption{The estimated label distribution of target samples. (a) The ground truth shows the real label distribution of the target samples. The class distributions estimated (b) after Phase~1, (c) after 11 epochs, and (d) after 30 epochs are shown here.}
\label{fig:class_distribution}
\end{center}
\vspace{-2ex}
\end{figure}
\noindent{\bf Estimated Label Distribution of Target Samples.}\ \ Our method estimates the label distribution of target samples to align them with only the source classes present in the target domain. The estimated class distribution for the task SVHN (10 classes) $\to$ MNIST (5 classes) is shown in Fig.~\ref{fig:class_distribution}. Fig.~\ref{fig:ground_truth} shows the true class distribution of the training samples in the target domain. Fig.~\ref{fig:phase1} shows the estimated label distribution of the target domain after pre-training the models with only source samples (Phase~1). Since this is prior to adaptation, the estimated label distribution is far from the real one, and mass is assigned to classes absent in the target domain. Fig.~\ref{fig:middle} shows the estimate one epoch after starting Phase~3 (epoch 11). The estimated class distribution gets closer to the ground truth, although a few samples are still assigned to classes absent in the target domain. Fig.~\ref{fig:final} shows the estimate after the complete training procedure (epoch 30). All target samples are aligned with source classes present in the target domain, and the distribution is closest to the ground truth.
\par
\begin{table}[t]
\begin{center}
\scalebox{0.88}{
\begin{tabular}{l|llll}
\toprule[1.5pt]
Method & $N\mathalpha{=}3$ & $N\mathalpha{=}5$ & $N\mathalpha{=}7$ & $N\mathalpha{=}10$ \\ \hline
&\multicolumn{4}{c}{USPS $\rightarrow$ MNIST} \\\hline
Ours w/o incons &\textbf{93.2} & 89.4 & 91.7 & 82.1\\
Ours w/o label d & 79.3 & 95.4 & 95.1 & 92.9\\
Ours & 83.1 & \textbf{96.3} & \textbf{95.5} &\textbf{93.1}\\\hline
&\multicolumn{4}{c}{MNIST $\rightarrow$ USPS} \\\hline
Ours w/o incons & 75.4 & 83.4 & 83.8 & 75.2\\
Ours w/o label d & 80.5 & 90.2 & 94.3 & \textbf{97.4}\\
Ours & \textbf{83.2} & \textbf{90.3} & \textbf{94.6} & 97.1\\
\bottomrule[1.5pt]
\end{tabular}}
\caption{Ablation studies for weighting with the target label distribution and for the inconsistency loss. \textit{incons} and \textit{label d} denote the inconsistency loss and the label-distribution-based weighting, respectively. $N$ denotes the number of classes in the target domain.}
\label{tb:ablation}
\end{center}
\end{table}
\noindent{\bf Ablation Study.}\ \ We investigate the effectiveness of weighting with the target label distribution and of the inconsistency loss by ablation. The first ablated model removes the inconsistency loss and is trained only with the weighted loss on the source.
The second ablated model removes the target label distribution estimation and weighting; it is trained with the inconsistency loss and the unweighted loss on the source.
Tab.~\ref{tb:ablation} shows the results of adaptation between MNIST and USPS. Although the accuracy of the model without label-distribution weighting drops at $N=3$, it performs well in the other settings. The results indicate that the inconsistency loss by itself is effective for PDA, and that combining it with the label distribution estimation is useful when the number of target classes is small.
\section{Conclusion}
In this study, we presented a novel method called Two Weighted Inconsistency-reduced Networks (TWINs) for partial domain adaptation (PDA). To align target samples with the source classes present in the target domain, the two classifiers estimate the label distribution of the target domain and weight the classification loss accordingly. Furthermore, the method learns discriminative features by minimizing the inconsistency of the two classifiers' predictions on target samples.
Our method outperformed existing methods on several tasks in the PDA setting.
\section{Acknowledgement}
The work was partially supported by JST CREST Grant Number JPMJCR1403, Japan and was partially funded by the ImPACT Program of the Council for Science, Technology, and Innovation (Cabinet Office, Government of Japan). We
would like to thank Yusuke Mukuta, Yusuke Kurose, and Atsushi Kanehira for helpful
discussions.
{\small
\bibliographystyle{ieee}
This is a sequel to the earlier paper \cite{CaoLiu21} by the first and the second authors in which curvature estimates were obtained for $4$-dimensional complete gradient expanding Ricci solitons.
By scaling the metric $g$ if necessary, we shall assume throughout the paper that a gradient expanding Ricci soliton $(M^n, g, f)$ satisfies the equation
\begin{equation} \label{expandingeq}
Rc+\nabla^2 f=-\frac 1 2 g,
\end{equation}
where $Rc$ and $\nabla^2 f$ denote the Ricci tensor of $g$ and the Hessian of the potential function $f\in C^{\infty}(M)$, respectively.
For any $4$-dimensional complete gradient expanding Ricci soliton $(M^4, g, f)$ with nonnegative Ricci curvature $Rc\ge 0$, it was shown in \cite{CaoLiu21} that there exists a constant $C>0$ such that, for any
$0\leq a<1$, the following curvature estimate holds on $M^4$:
\begin{equation*}
|Rm| \le \frac {C} {1-a} R^a.
\end{equation*}
Moreover, if the scalar curvature $R$ has at most polynomial decay, then
\begin{equation}
{|Rm|} \le C R \quad on \ M^4.
\end{equation}
On the other hand, if the asymptotic scalar curvature ratio of $(M^4, g)$ is finite, i.e.,
\begin{equation*}
\limsup_{r\to \infty} R r^2< \infty,
\end{equation*}
then $(M^4, g)$ has finite asymptotic curvature ratio
\begin{equation}
A:= \limsup_{r\to \infty} |Rm| r^2< \infty.
\end{equation}
As an application, it follows from the above result and the work of Chen-Deruelle \cite{ChenDer} that any 4-dimensional complete noncompact non-flat gradient expanding Ricci soliton with nonnegative Ricci curvature and finite asymptotic scalar curvature ratio must have a $C^{1,\alpha}$ asymptotic cone structure at infinity, for any $\alpha \in (0, 1)$.
We remark that recent progress on curvature estimates for 4-dimensional gradient Ricci solitons has been led by the work of Munteanu-Wang \cite{MW15}, in which they proved that any complete gradient shrinking soliton with bounded scalar curvature $R$ must have bounded Riemann curvature tensor $Rm$. More significantly, they showed that the Riemann curvature tensor is controlled by the scalar curvature by $|Rm|\le C R$ so that if the scalar curvature $R$ decays at infinity so does the curvature tensor $Rm$. Moreover, by exploring the differential equation $\Delta_f R=R-2|Rc|^2$ satisfied by shrinking solitons and combining with the scalar curvature lower bound of Chow-Lu-Yang \cite{CLY}, they showed that the scalar curvature $R$ in fact must decay quadratically if $R$ goes to zero at infinity. It then follows that the curvature tensor $Rm$ must decay quadratically, hence the 4D shrinking soliton is asymptotically conical. Their curvature estimate, together with the uniqueness result of Kotschwar-Wang \cite{KW15}, has played a crucial role in the recent advance of classifying 4-dimensional complete gradient Ricci solitons, as well as in the classification of complex 2-dimensional complete gradient K\"ahler-Ricci solitons with scalar curvature going to zero at infinity by Conlon-Deruelle-Sun \cite{CDS19}. See \cite {Cao et al2} for an extension, and also \cite{CaoCui, Chan1} and \cite{Cao2021} for similar curvature estimates in the steady soliton case.
\medskip
In \cite{MW17}, via the Moser iteration and a tour de force of integral estimates, Munteanu and Wang also obtained the curvature estimate for higher dimensional gradient shrinking Ricci solitons. Precisely, they showed that if the Ricci curvature of an $n$-dimensional ($n\ge 5$) complete gradient {\em shrinking Ricci soliton} goes to zero at infinity, then its Riemann curvature tensor $Rm$ must also go to zero at infinity. Furthermore, based on $|Rm| \to 0$ at infinity and the fact that the curvature tensor of Ricci shrinkers satisfy the differential inequality $\Delta_f |Rm| \geq |Rm| -c |Rm|^2$, they were able to show that $Rm$ has to decay quadratically at infinity by using the maximum principle argument.
In this paper, inspired by the work of Munteanu-Wang \cite{MW17}, we investigate curvature estimates for higher dimensional gradient expanding Ricci solitons with nonnegative Ricci curvature and finite
asymptotic scalar curvature ratio. Our main result is the following
\begin{theorem} \label{maintheorem}
Let $(M^n, g, f)$, $n\ge 5$, be an $n$-dimensional complete gradient expanding Ricci soliton with nonnegative Ricci curvature $Rc\geq 0$ and finite asymptotic scalar curvature ratio
\begin{equation} \label{fas}
\limsup_{r\to \infty} R \ \!r^2< \infty,
\end{equation}
where $r=r(x)$ is the distance function to a fixed base point $x_0\in M$.
Then $(M^n, g, f)$ has finite $\alpha$-asymptotic curvature ratio for any $0<\alpha<2$,
\begin{equation} \label{maindecay}
A_{\alpha} := \limsup_{r\to \infty} |Rm| \ \! r^{\alpha}< \infty.
\end{equation}
Furthermore, there exist a constant $C>0$ depending on $n$ and the geometry of $(M^n, g, f)$, a sequence $\{r_j\} \to \infty$, and a sequence $\{\alpha_j\} \to 2$ such that
\begin{equation*}
|Rm|(x) \leq C (r(x)+1)^{-\alpha_j}
\end{equation*}
for any $x \in M\setminus B(x_0, r_j+1)$.
\end{theorem}
We point out that, compared to the shrinking case, there are several essential differences in the expanding case.
First of all, certain integrals in the key integral estimate, which were good terms in the shrinking case, turn into potential trouble terms (see Remark \ref{remark3.2}), and they prevent us from obtaining the pointwise $Rm$ decay by merely assuming that the Ricci curvature goes to zero at infinity. Secondly, in the expanding case, the assumption of $Rc\ge 0$ is essential to ensure a uniform lower bound for the Sobolev constant of unit geodesic balls $B_x(1)$ for all $x\in M$ (see Lemma \ref{sobolev} and Lemma \ref{avr}), or a uniform non-collapsing estimate for $B_x(1)$ (see Lemma \ref{non-collapsing}), which is crucial for the Moser iteration to work. Finally, the corresponding differential inequality for $|Rm|$ in the expanding case becomes
\[\Delta_f |Rm| \geq -|Rm| -c |Rm|^2,\]
from which the maximum principle argument does not seem to work for getting the quadratic decay as in the shrinking case, or any improved decay for $Rm$, even knowing $Rm$ goes to zero at infinity.
Nevertheless, by adapting the integral estimates in \cite{MW17} and using the Moser iteration, we are able to obtain the sub-quadratic decay
(\ref{maindecay}) for $|Rm|$ under the assumption of finite asymptotic scalar curvature ratio (\ref{fas}).
\begin{remark}
The same proof can be used to show that if
the scalar curvature $R$ decays at a rate of order $\alpha$, with $0<\alpha\leq 2$, then $Rm$ has sub-$\alpha$ decay (see Theorem \ref{subalphadecaythm}).
\end{remark}
\begin{remark}
Unlike the works of Munteanu and Wang \cite{MW15, MW17} for shrinking Ricci solitons, it seems that the best one can hope to prove in the expanding case is that $Rm$ has the same decay rate as assumed for the Ricci curvature (or the scalar curvature). It remains an interesting question whether one can improve the arbitrary sub-quadratic decay for $Rm$ in Theorem \ref{maintheorem} to quadratic decay.
\end{remark}
\medskip
\noindent {\bf Acknowledgements.} We would like to thank Ovidiu Munteanu and Jiaping Wang for their interests in this work and their helpful comments and suggestions. We are also grateful to the referee for the careful reading of our paper and for providing valuable suggestions which led to a simpler version of Lemma 3.1 and a more streamlined proof of Lemma 3.2 and Lemma 3.3 than in the previous version.
\section{Preliminaries}
In this section, for the reader's convenience, we fix the notations and collect several known results about gradient expanding Ricci solitons that we shall need later. Throughout the paper, we denote by $$Rm=\{R_{ijkl}\}, \quad Rc=\{R_{ik}\},\quad R $$ the Riemann curvature tensor, the Ricci tensor, and the scalar curvature of the metric $g=g_{ij}dx^idx^j$ in local coordinates $(x^1, \cdots, x^n)$, respectively.
\begin{lemma} {\bf (Hamilton \cite{Ha95F})} Let $(M^n, g, f)$
be a complete gradient expanding Ricci soliton satisfying Eq. (1.1).
Then
$$ R+\Delta f =-\frac n 2,$$
$$\nabla_iR=2R_{ij}\nabla_jf, $$
$$R+|\nabla f|^2=-f +C_0 $$ for some constant $C_0$.
\end{lemma}
Moreover, replacing $f$ by $f-C_0$, we can normalize the potential function $f$ so that
\[R+|\nabla f|^2=-f. \]
In the rest of the paper, we shall always assume this normalization.
Furthermore, by setting
$$F=-f+\frac n 2, \eqno (2.1)$$ the expanding soliton equation (1.1) becomes
$$ \nabla^2 F =Rc+\frac 1 2 g. \eqno (2.2)$$
From (2.2), Lemma 2.1 and the normalization of $f$, we have
$$ \nabla R=-2Rc \ \!(\nabla F, \cdot), \qquad |\nabla F|^2=F-R-\frac n 2, \eqno(2.3)$$
$$ \Delta F= R+ \frac n 2 \qquad \mbox{and} \qquad \Delta_f F =F \quad ({\mbox {i.e.}}, \ \Delta_f f=f-\frac n 2), \eqno (2.4)$$
where $\Delta_f :=\Delta -\nabla f\cdot \nabla$ is the weighted Laplace operator.
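As a quick consistency check, the identity $\Delta_f F=F$ in (2.4) follows directly from the preceding formulas: since $\nabla f=-\nabla F$, we have
$$\Delta_f F=\Delta F+|\nabla F|^2=\left(R+\frac n2\right)+\left(F-R-\frac n2\right)=F.$$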
Next, we have the following well-known fact about the asymptotic behavior of the potential function of a complete non-compact gradient expanding soliton with nonnegative Ricci curvature (see, e.g., Lemma 5.5 in \cite{Cao et al} or Lemma 2.2 in \cite{ChenDer}).
\begin{lemma} \label{potencialfunction} Let $(M^n, g , f)$ be a complete noncompact gradient expanding Ricci soliton satisfying Eq. (1.1) and with nonnegative Ricci curvature $Rc \ge 0$. Then there exists a constant $c_1 >0$ such that, outside some compact subset of $M^n$, the function $F=-f+n/2$ satisfies the estimates
\[\frac 1 4(r(x)-c_1)^2\le F(x)\le \frac 1 4 (r(x)+2\sqrt{F(x_0)})^2, \eqno(2.5)\]
where $r(x)$ is the distance function from a base point in $M^n$. In particular, $F$ is a strictly convex exhaustion function achieving its minimum at its unique interior point $x_0$, which we shall take as the base point, and the underlying manifold $M^n$ is diffeomorphic to ${\mathbb R}^n$.
\end{lemma}
Another useful fact is the boundedness of the scalar curvature of a gradient expanding soliton with nonnegative Ricci curvature (see, e.g., Ma-Chen \cite{MC}).
\begin{lemma}
Let $(M^n, g, f)$ be a complete noncompact gradient expanding Ricci soliton with nonnegative Ricci curvature $Rc \ge 0$. Then its scalar curvature $R$ is bounded from above, i.e., $R\le R_0$ for some positive constant $R_0$.
Moreover, $R>0$ everywhere unless $(M^n, g, f)$ is the Gaussian expanding soliton.
\end{lemma}
Note that, under the assumption of $Rc\ge 0$ (or even $Rc\ge (\epsilon-\frac 12)g$ for some constant $\epsilon>0$), the potential function $F(x)$ defined by (2.1) grows quadratically and hence is comparable to $r^2(x)$, the square of the distance function, from above and below at large distances. In the rest of the paper, we denote by
\begin{eqnarray*}
D(r)&=& \{x\in M : F(x) \leq r \}.
\end{eqnarray*}
By (2.5), $D(r)$ is contained in a geodesic ball of radius $2\sqrt{r}+c_1$ for $r$ large, so by the Bishop volume comparison the volume $V(r)$ of $D(r)$ satisfies
$$ V(r) \leq cr^{\frac{n}{2}}. \eqno (2.6) $$
We now collect several well-known differential identities on the curvatures $R, Rc$ and $Rm$ that we shall use later.
\begin{lemma} \label{diffequation}
Let $(M^n, g, f)$ be a complete gradient expanding Ricci soliton satisfying Eq. (1.1). Then, we have
\begin{eqnarray*}
\Delta_{f} R &=&-R-2|Rc|^2,\\
\Delta_{f} R_{ik} &=&-R_{ik} -2R_{ijkl}R_{jl},\\
\Delta_{f} {Rm} &=& -Rm+ Rm\ast Rm,\\
\nabla_lR_{ijkl} &=& \nabla_jR_{ik}-\nabla_i R_{jk}=-R_{ijkl}\nabla_lF,
\end{eqnarray*}
where, on the RHS of the third equation, $Rm\ast Rm$ denotes the sum of a finite number of terms involving quadratics in $Rm$.
\end{lemma}
Based on Lemma 2.4, one can easily derive the following differential inequalities (see also \cite{MW15, CaoCui} for the shrinking and steady ones):
\begin{lemma} \label{diffeqofnorm}
Let $(M^n, g, f)$ be a complete gradient expanding Ricci soliton satisfying Eq. (1.1). Then
\begin{eqnarray*}
\Delta_{f} |Rc|^2 & \ge & 2|\nabla Rc|^2-2|Rc|^2-4|Rm| |Rc|^2, \\
\Delta_{f}|Rm|^2 &\ge & 2|\nabla Rm|^2 - 2|Rm|^2-c|Rm|^3,\\
\Delta_{f} |Rm| &\ge & -|Rm|-c|Rm|^2.
\end{eqnarray*}
Here $c>0$ is some universal constant depending only on the dimension $n$.
\end{lemma}
Also, from \cite{CaoLiu21} we have the following differential inequalities on the covariant derivative $\nabla Rm$ of the curvature tensor (see \cite{MW15} for the shrinking case).
\begin{lemma} Let $(M^n, g, f)$ be a complete gradient expanding Ricci soliton satisfying Eq. (1.1). Then
\begin{eqnarray*}
\Delta_{f} |\nabla Rm|^2 &\ge & 2|\nabla^2 Rm|^2 -3|\nabla Rm|^2-c|Rm| |\nabla Rm|^2 \quad \mbox{and} \\
\Delta_{f} |\nabla Rm| &\ge & -\frac 3 2|\nabla Rm|-c|Rm||\nabla Rm|.
\end{eqnarray*}
\end{lemma}
In \cite{CarNi}, Carrillo and Ni proved the following non-collapsing result for gradient expanding soliton with nonnegative Ricci curvature.
\begin{lemma} {\bf (Carrillo-Ni \cite{CarNi})} \label{non-collapsing} Let $(M^n, g, f)$ be a complete gradient expanding Ricci soliton with nonnegative Ricci curvature. Then there exists a constant $\kappa>0$ such that if $|Rc| \leq 1$ on a unit geodesic ball $B(x_0, 1)$ centered at $x_0$, then
\[V (x_0, 1)\geq \kappa,\] where $V (x_0, 1)$ denotes the volume of $B(x_0, 1)$.
\end{lemma}
\begin{remark} In \cite{CarNi}, the authors only stated the non-collapsing result for shrinking Ricci solitons (see Corollary 4.2 in \cite{CarNi}). But Lemma 2.7 holds similarly because of their logarithmic Sobolev inequality for expanding solitons with nonnegative Ricci curvature (see Theorem 5.2 in \cite{CarNi}).
\end{remark}
Concerning the volume growth, Hamilton \cite {Ha05} obtained the following result (see also Proposition 9.46 in \cite{CLN}).
\begin{lemma} {\bf (Hamilton \cite{Ha05})} \label{avr} Let $(M^n, g, f)$ be any $n$-dimensional complete noncompact gradient expanding Ricci soliton with nonnegative Ricci curvature. Then it must have positive {\it asymptotic volume ratio}. Namely, for any base point $x_0\in M^n$,
$$ \nu_M:= \lim_{r\to \infty} \frac {V (x_0, r)}{r^n} >0, \eqno (2.7) $$
where $V(x_0, r)$ denotes the volume of the geodesic ball $B(x_0, r)$.
\end{lemma}
Finally, we shall need the following well-known result about Sobolev inequality on manifolds with nonnegative Ricci curvature and positive asymptotic volume ratio;
see Yau \cite{Yau82}, and the very recent work of Brendle \cite{Brendle} for a sharp version.
\begin{lemma} \label{sobolev}
Let $(M^n, g)$ be an $n$-dimensional complete manifold with nonnegative Ricci curvature $Rc\geq 0$ and positive asymptotic volume ratio $\nu_M>0$. Then there exists a constant $C_s>0$ such that, for any compact domain $\Omega \subset M$
and any positive smooth function $\varphi$ with compact support in $\Omega$,
\[ C_s \left(\int_{\Omega} \varphi^{\frac n {n-1}}\right)^{\frac {n-1}n} \leq \int_{\Omega} |\nabla \varphi|.\]
\end{lemma}
\medskip
\section{The Integral Estimate}
In this section, we prove a crucial integral curvature estimate needed in the proof of Theorem 1.1. First of all,
we note that the assumptions of $Rc \geq 0$ and the finite asymptotic scalar curvature ratio (1.4) imply that
\smallskip
\begin{enumerate}
\item [(i)] $0\leq R \leq R_0. $
\smallskip
\item [(ii)] $ \nabla_i\nabla_j F \geq \frac{1}{2}g_{ij}. $
\smallskip
\item [(iii)] $ |Rc| \leq R \leq \frac{C}{F}, $ for some constant $C>0$.
\smallskip
\end{enumerate}
\noindent Next, following \cite{MW17}, we define the cut-off function $\phi$ with support in $D (r)$ by
\begin{equation} \label{cut-off}
\phi \left( x\right) =\left\{
\begin{array}{ccc}
\frac{1}{r}\left( r-F\left( x\right) \right) & \text{if} & x\in D\left(
r\right) \\
0 & \text{if} & x\in M\backslash D\left( r\right)%
\end{array}%
\right.
\end{equation}
so that
\[ \nabla \phi=-\nabla F/r \quad \mbox{and} \quad \Delta \phi =-\Delta F/ r \quad \mbox{on} \ D(r). \]
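In particular, on $D(r)$ we record, for later use, that
\[ \nabla F\cdot \nabla \phi =-\frac{|\nabla F|^2}{r}\le 0, \qquad \Delta \phi=-\frac{1}{r}\left(R+\frac n2\right)\le 0, \qquad F|\nabla \phi|^2=\frac{F|\nabla F|^2}{r^2}\le \frac{F^2}{r^2}\le 1, \]
by (2.3), (2.4) and $F\le r$ on $D(r)$.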
Also, for any large number $p>0$ to be chosen later, we let $q=2p$ and pick $r_0 > 0$ sufficiently large so that
\begin{equation} \label{decayofRic}
F \geq p^5 \quad \text{and} \quad |Rc| \leq \frac{1}{p^5} \quad \text{on}\ M \backslash D(r_0).
\end{equation}
In the rest of the paper, we shall use the following conventions.
\begin{enumerate}
\item[$\bullet$] $C$: a positive constant that may depend on the geometry of $D(r_0)$.
\item[$\bullet$] $c$: a positive constant depending only on the dimension $n$ and $R_0$.
\item[$\bullet$] $c(p)$: a positive constant depending on $p$, $c$ and $C$.
\end{enumerate}
In addition, those constants may change from line to line.
\smallskip
Now we are ready to state our key integral curvature estimate.
\begin{proposition} \label{prop}
Let $(M^n, g, f)$ be an $n$-dimensional complete gradient expanding Ricci soliton with nonnegative Ricci curvature $Rc\geq 0$ and finite
{\it asymptotic scalar curvature ratio}
$$\limsup_{r\to \infty} R \ \!r^2< \infty. $$
Then, for any constant $a>0$, there exists a constant $c\ge 1$ such that if $p>a+R_0+\frac{n}{2}+c$
we have
\begin{equation}
[1-p^{-1}(a+R_0+\frac{n}{2}+c)]\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q} \leq c(p),
\end{equation}
where $c(p)$ is of the order of $p^p$.
\end{proposition}
\smallskip
\begin{remark} Proposition 3.1 actually holds for gradient expanding Ricci solitons with finite asymptotic Ricci curvature ratio, i.e., $\limsup_{r\to \infty} |Rc| \ \!r^2< \infty $, without assuming $Rc \ge 0$. Indeed, as we shall see below in the proof of Proposition 3.1, the condition of $Rc \geq 0$ is basically used only to guarantee that geodesic balls have at most polynomial (Euclidean) volume growth.
However, note that any complete Riemannian manifold with quadratic Ricci curvature decay from below has polynomial volume growth (see \cite{CheeGT})\footnote{See also Corollary 4.11 in the very recent work of Chan-Ma-Zhang \cite{CMZ2022}.}. Meanwhile, under the finite asymptotic Ricci curvature ratio assumption, all the relevant properties or differential inequalities concerning the potential function $F$ would still hold outside a compact set. Hence, the nonnegative Ricci assumption in Proposition 3.1 is not essential.
\end{remark}
We shall divide the proof of Proposition \ref{prop} into several lemmas and adapt the arguments in \cite{MW17}.
\begin{lemma} \label{lemma1}
Let $(M^n, g, f)$ be an $n$-dimensional complete gradient expanding Ricci soliton with nonnegative Ricci curvature $Rc\geq 0$.
Suppose $p>a+R_0+\frac{n}{2}+1$. Then
\begin{eqnarray*}
[1-p^{-1}(a+R_0+\frac{n}{2})]\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q}
&\leq & 4 \int_{M} |\nabla Rc|^2|Rm|^{p-1}F^{a+1}\phi^q \\
&& + \ cp^2\int_M |\nabla Rm|^2|Rm|^{p-3} F^{a-1}\phi^{q}\\
&& + \ c(p).
\end{eqnarray*}
\end{lemma}
\begin{remark} \label{remark3.2} In Lemma 3.1, the first two terms on the right hand side of the inequality are different from the shrinking case in \cite{MW17}. Also note that Lemma 3.1 does not require the {\it finite asymptotic scalar curvature ratio} assumption.
\end{remark}
\begin{proof}
Since $\Delta F \leq R_0 +n/2$ by (2.4) and Lemma 2.3, by integration by parts, we have
\begin{eqnarray*}
-(R_0+\frac{n}{2})\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q}
&\leq & -\int_{M} (\Delta F) \left\vert Rm \right\vert ^{p}F^{a}\phi ^{q} \\
&=& \int_M \nabla F \cdot \nabla(|Rm|^p)F^a\phi^q \\
&& + a \int_M |Rm|^p |\nabla F|^2 F^{a-1}\phi^q \\
&& + q \int_M |Rm|^p F^{a}\phi^{q-1} \nabla F\cdot\nabla \phi \\
&\leq & \int_M \nabla F \cdot \nabla(|Rm|^p)F^a\phi^q + a\int_M |Rm|^pF^a\phi^q,
\end{eqnarray*}
where in the last inequality we have used the fact $|\nabla F|^2 <F$ from (2.3) and $\nabla \phi=-\nabla F/r$.
It then follows from the second Bianchi identity, as in \cite{MW17}, that
\begin{eqnarray*}
-(a+R_0+\frac{n}{2})\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q}
&\leq& \int_M \nabla F \cdot \nabla(|Rm|^p)F^a\phi^q \\
&=& p\int_M (\nabla_hF \cdot \nabla_hR_{ijkl}) R_{ijkl}|Rm|^{p-2}F^a\phi^q \\
&=& 2p\int_M (\nabla_h F \cdot \nabla_lR_{ijkh}) R_{ijkl}|Rm|^{p-2}F^a\phi^q.
\end{eqnarray*}
Performing integration by parts again, we obtain
\begin{eqnarray*}
-(a+R_0+\frac{n}{2})\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q}
&\leq& -2p\int_M R_{ijkh}(\nabla_h\nabla_lF) R_{ijkl} \ \! |Rm|^{p-2}F^a\phi^q \\
&& -2p\int_M (R_{ijkh}\nabla_hF) (\nabla_lR_{ijkl}) \ \! |Rm|^{p-2}F^a\phi^q \\
&& -2p\int_M (R_{ijkh}\nabla_hF) R_{ijkl} \nabla_l(|Rm|^{p-2})F^a\phi^q \\
&& -2pa\int_M |R_{ijkl} \nabla_lF|^2 |Rm|^{p-2}F^{a-1}\phi^q \\
&& + \frac{4p^2}{r}\int_M |R_{ijkl} \nabla_lF|^2 |Rm|^{p-2}F^{a}\phi^{q-1}.
\end{eqnarray*}
Since $Rc\geq 0$ implies $\nabla_i\nabla_jF \geq \frac{1}{2}g_{ij}$, it follows that
\begin{eqnarray*}
-2p\int_M R_{ijkh}(\nabla_h\nabla_lF) R_{ijkl} |Rm|^{p-2}F^a\phi^q \leq -p \int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q}.
\end{eqnarray*}
Thus, by also using the last equality in Lemma 2.4, we get
\begin{eqnarray*}
&&[p-(a+R_0+\frac{n}{2})]\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q} \\
&\leq & -2p\int_M (R_{ijkh} \nabla_hF) R_{ijkl} \nabla_l (|Rm|^{p-2})F^a\phi^q \\
&&+ 2p\int_{M} |R_{ijkl}\nabla_lF|^2 |Rm|^{p-2} F^{a}\phi^{q} \\
&&+ \frac{4p^2}{r} \int_{M} |R_{ijkl}\nabla_lF|^2 |Rm|^{p-2} F^{a}\phi^{q-1} \\
&=&I +II +III.
\end{eqnarray*}
For the first term, by using Lemma 2.4 again, we have
\begin{eqnarray*}
I &=&-2p\int_M (R_{ijkh} \nabla_hF) R_{ijkl}(\nabla_l|Rm|^{p-2})F^a\phi^q \\
&=& 2p\int_M (\nabla_jR_{ik}-\nabla_iR_{jk})R_{ijkl}(\nabla_l|Rm|^{p-2})F^a\phi^q\\
&=& 2p(p-2)\int_M (\nabla_jR_{ik}-\nabla_iR_{jk}) R_{ijkl}(\nabla_l|Rm|)|Rm|^{p-3}F^a\phi^q \\
&\leq& 4p^2\int_{M} |\nabla Rc| |\nabla Rm| |Rm|^{p-2} F^{a}\phi^{q} \\
&\leq& p \int_{M} |\nabla Rc|^2|Rm|^{p-1}F^{a+1}\phi^q + 4p^3\int_M |\nabla Rm|^2|Rm|^{p-3} F^{a-1}\phi^{q}.
\end{eqnarray*}
On the other hand, by Lemma \ref{diffequation}, we have
\begin{eqnarray*}
II &=& 2p\int_M |R_{ijkl}\nabla_lF|^2 |Rm|^{p-2} F^a\phi^q \\
& = & 2p\int_M |\nabla_i R_{jk}-\nabla_jR_{ik}|^2 |Rm|^{p-2} F^a\phi^q \\
& \leq & 8p \int_M |\nabla Rc|^2 |Rm|^{p-2} F^a\phi^q \\
& \leq & cp \int_M |\nabla Rc||\nabla Rm| |Rm|^{p-2} F^a\phi^q \\
& \leq & p \int_M |\nabla Rc|^2 |Rm|^{p-1} F^{a+1}\phi^q + cp \int_M |\nabla Rm|^2 |Rm|^{p-3} F^{a-1}\phi^q.
\end{eqnarray*}
Finally, since $|\nabla F|^2\le F\le r$ on $D(r)$, by Lemma \ref{diffequation} and Young's inequality,
\begin{eqnarray*}
III &=& \frac{4p^2}{r} \int_{M} |R_{ijkl}\nabla_lF|^2 |Rm|^{p-2} F^{a}\phi^{q-1} \\
& \leq & 4p^2 \int_{M} |R_{ijkl}\nabla_lF|^{\frac{2p}{p+1}} |R_{ijkl}\nabla_lF|^{\frac{2}{p+1}} |Rm|^{p-2} F^{a-1}\phi^{q-1} \\
& \leq & 16p^2 \int_{M} |\nabla Rc|^{\frac{2p}{p+1}} |Rm|^{\frac{2}{p+1}}F^{\frac{1}{p+1}} |Rm|^{p-2} F^{a-1}\phi^{q-1} \\
& = & 16p^2 \int_{M} |\nabla Rc|^{\frac{2p}{p+1}} |Rm|^{\frac{p(p-1)}{p+1}}F^{a-1+\frac{1}{p+1}}\phi^{q-1} \\
& = & 16p^2 \int_{M} \left( |\nabla Rc|^{2} |Rm|^{p-1}F^{a+1}\phi^{q} \right)^{\frac{p}{p+1}}\cdot \left( F^{a-2p} \phi^{q-p-1} \right)^{\frac{1}{p+1}} \\
& \leq & 2p \int_M |\nabla Rc|^2 |Rm|^{p-1} F^{a+1}\phi^q + c(p) \int_M F^{a-2p}\phi^{p-1} \\
& \leq & 2p \int_M |\nabla Rc|^2 |Rm|^{p-1} F^{a+1}\phi^q + c(p).
\end{eqnarray*}
Here, in the last inequality, we have used the assumption $p>a+R_0+\frac{n}{2}+1$ and the fact that $(M, g)$ has at most Euclidean volume growth to deduce that $\int_M F^{a-2p} \leq c$.
Therefore,
\begin{eqnarray*}
&& [p-(a+R_0+\frac{n}{2})]\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q} \\
&\leq & 4p \int_{M} |\nabla Rc|^2|Rm|^{p-1}F^{a+1}\phi^q \\
&& + cp^3\int_M |\nabla Rm|^2|Rm|^{p-3} F^{a-1}\phi^{q} +c(p).
\end{eqnarray*}
This completes the proof of Lemma \ref{lemma1}.
\end{proof}
\begin{remark}
In the proof of Lemma \ref{lemma1}, as well as in the proofs of Lemma \ref{lemma2} and Lemma \ref{lemma3} below, the constant $c(p)$ could be of the order of $p^p$ after applying Young's inequality.
\end{remark}
\smallskip
\begin{lemma} \label{lemma2}
Let $(M^n, g, f)$ be an $n$-dimensional complete gradient expanding Ricci soliton with nonnegative Ricci curvature $Rc\geq 0$ and finite
{\it asymptotic scalar curvature ratio}
$$\limsup_{r\to \infty} R \ \!r^2< \infty. $$
Suppose $p>a+\frac{n}{2}+1$. Then
\begin{eqnarray*}
2 \int_M |\nabla Rc|^2 |Rm|^{p-1} F^{a+1}\phi^{q}
&\leq & cp^3\int_M |\nabla Rm|^2|Rm|^{p-3}F^{a-1}\phi^{q} \\
&&+ \frac{c}{p^2}\int_M |Rm|^pF^a\phi^q + c(p).
\end{eqnarray*}
\end{lemma}
\begin{proof}
First of all, by Lemma 2.5 and direct computations, we obtain
\begin{eqnarray*}
\Delta_f (|Rc|^2 |Rm|^{p-1}) &=& (\Delta_f |Rc|^2) |Rm|^{p-1} + |Rc|^2 \Delta_f (|Rm|^{p-1}) \\
&&+2 \nabla (|Rc|^2) \cdot \nabla (|Rm|^{p-1})\\
&\geq & 2 |\nabla Rc|^2|Rm|^{p-1} -2p |Rc|^2 |Rm|^{p-1} -cp |Rc|^2 |Rm|^{p} \\
&&-4p|\nabla Rc| |\nabla Rm| |Rc| |Rm|^{p-2}.
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
&&2 \int_M |\nabla Rc|^2 |Rm|^{p-1} F^{a+1}\phi^{q} \\
&\leq & \int_M \Delta_f(|Rc|^2|Rm|^{p-1})F^{a+1}\phi^{q} \\
&&+ 2p\int_M |Rc|^2|Rm|^{p-1}F^{a+1}\phi^{q} \\
&&+ cp\int_M |Rc|^2|Rm|^{p}F^{a+1}\phi^{q} \\
&&+ 4p\int_M |\nabla Rc||\nabla Rm||Rc||Rm|^{p-2}F^{a+1}\phi^{q} \\
&=& I+II+III+IV.
\end{eqnarray*}
On one hand, using the quadratic decay of $Rc$ and Young's inequality, we have
\begin{eqnarray*}
II &=& 2p \int_M |Rc|^2 |Rm|^{p-1} F^{a+1}\phi^{q} \\
&\leq& cp\int_M |Rm|^{p-1} F^{a-1}\phi^{q} \\
&\leq& \frac{1}{p^2} \int_M |Rm|^{p} F^{a}\phi^{q} + c(p)\int_M F^{a-p}\phi^{q} \\
&\leq& \frac{1}{p^2}\int_M |Rm|^{p} F^{a}\phi^{q} + c(p),
\end{eqnarray*}
where, in the last inequality, we have used $\int_M F^{a-p}\le c$, which follows from (2.6) and $p>a+\frac{n}{2}+1$.
Moreover, by the quadratic decay of $Rc$ and (\ref{decayofRic}), we get
\begin{eqnarray*}
III &=& cp \int_M |Rc|^2 |Rm|^{p} F^{a+1}\phi^{q} \\
&\leq& \frac{c}{p^2} \int_M |Rm|^{p} F^{a}\phi^{q} + C.
\end{eqnarray*}
On the other hand, since $\Delta_f u=\Delta u-\nabla f\cdot \nabla u=\Delta u+\nabla F\cdot \nabla u$, by integration by parts, we have
\begin{eqnarray*}
I &=& \int_M \Delta_f(|Rc|^2|Rm|^{p-1})F^{a+1}\phi^{q} \\
&=& \int_M \Delta (|Rc|^2 |Rm|^{p-1})F^{a+1}\phi^{q} \\
&&+ \int_M \nabla F \cdot \nabla(|Rc|^2 |Rm|^{p-1})F^{a+1}\phi^{q} \\
&=& \int_M |Rc|^2 |Rm|^{p-1} \Delta (F^{a+1}\phi^{q}) \\
&&+ \frac {q}{r} \int_M |Rc|^2 |Rm|^{p-1} |\nabla F|^2 F^{a+1}\phi^{q-1} \\
&&- \int_M |Rc|^2 |Rm|^{p-1} [\Delta F + (a+1)F^{-1} |\nabla F|^2]F^{a+1}\phi^{q}\\
&\leq & \int_M |Rc|^2 |Rm|^{p-1} \Delta (F^{a+1}\phi^{q}) \\
&&+ 2p \int_M |Rc|^2 |Rm|^{p-1} F^{a+1}\phi^{q-1} \\
& = & I_A + I_B.
\end{eqnarray*}
Here, we have used the facts that $|\nabla F|^2\le F\le r$ on $D(r)$ and $\Delta F=R+\frac{n}{2}\geq 0$.
Now, by direct computations, we have
\begin{eqnarray*}
&&\Delta (F^{a+1}\phi^{q})\\
&=& \Delta (F^{a+1})\phi^{q} + F^{a+1}\Delta (\phi^{q}) +2\nabla F^{a+1} \cdot \nabla \phi^{q}\\
&\leq & [(a+1)F^{a}\Delta F+a(a+1)F^{a-1}|\nabla F|^2]\phi^{q} \\
& & +F^{a+1} [q \phi^{q-1} \Delta\phi +q(q-1)\phi^{q-2} |\nabla \phi|^2] \\
&\leq & cp^2 F^{a}\phi^{q} + 4p^2 F^{a}\phi^{q-2}\\
&\leq & cp^2 F^{a}\phi^{q-2},
\end{eqnarray*}
where we have used the facts that $\nabla F\cdot \nabla\phi\leq 0, \ \Delta \phi\leq 0, \ \Delta F\leq R_0+n/2$, $|\nabla F|^2 \leq F$, and $F |\nabla \phi|^2\leq 1$.
Hence, by Young's inequality, the quadratic decay of $Rc$ and (\ref{decayofRic}), we obtain
\begin{eqnarray*}
I_A &=& \int_M |Rc|^2 |Rm|^{p-1} \Delta (F^{a+1}\phi^{q}) \\
&\leq& cp^2\int_M |Rc|^2 |Rm|^{p-1} F^{a}\phi^{q-2} \\
&\leq& \frac{c}{p^2}\int_M |Rm|^{p-1} F^{a-1}\phi^{q-2} + C \\
&\leq& \frac{1}{p^2} \int_M |Rm|^{p} F^{a}\phi^{q} + c(p)\int_M F^{a-p}\phi^{q-2p} + C \\
&\leq& \frac{1}{p^2} \int_M |Rm|^{p} F^{a}\phi^{q} + c(p).
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
I_B &=& 2p \int_M |Rc|^2 |Rm|^{p-1} F^{a+1}\phi^{q-1} \\
&\leq& cp\int_M |Rm|^{p-1} F^{a-1}\phi^{q-1} \\
&\leq& \frac{1}{p^2}\int_M |Rm|^{p} F^{a}\phi^q + c(p)\int_M F^{a-p}\phi^{q-p} \\
&\leq& \frac{1}{p^2}\int_M |Rm|^{p} F^{a}\phi^{q} + c(p).
\end{eqnarray*}
Finally,
\begin{eqnarray*}
IV &=& 4p\int_M |\nabla Rc||\nabla Rm||Rc||Rm|^{p-2}F^{a+1}\phi^{q} \\
&\leq& 4p^3\int_M |\nabla Rm|^2|Rm|^{p-3} F^{a-1}\phi^{q} \\
&&+ \frac{1}{p} \int_M |\nabla Rc|^2|Rc|^2|Rm|^{p-1}F^{a+3}\phi^{q} \\
&\leq& 4p^3\int_M |\nabla Rm|^2|Rm|^{p-3} F^{a-1}\phi^{q} \\
&&+ \frac{c}{p}\int_M |\nabla Rc|^2|Rm|^{p-1}F^{a+1}\phi^{q}.
\end{eqnarray*}
By combining the above estimates, we have completed the proof of Lemma \ref{lemma2}.
\end{proof}
\begin{lemma} \label{lemma3}
Let $(M^n, g, f)$ be an $n$-dimensional complete gradient expanding Ricci soliton with nonnegative Ricci curvature $Rc\geq 0$.
Suppose $p>a+\frac{n}{2}+1$. Then
\begin{eqnarray*}
2\int_M |\nabla Rm|^2 |Rm|^{p-3}F^{a-1}\phi^{q}
&\leq & \frac{c}{p^5}\int_M |Rm|^{p} F^{a}\phi^{q} + c(p).
\end{eqnarray*}
\end{lemma}
\begin{remark}
Note that, like Lemma 3.1, Lemma \ref{lemma3} does not require the {\it finite asymptotic scalar curvature ratio} assumption either.
\end{remark}
\begin{proof}
First of all, note that
\begin{eqnarray*}
2|\nabla Rm|^2 \leq \Delta|Rm|^2 + \nabla F\cdot \nabla |Rm|^2 + 2|Rm|^2 + c|Rm|^3.
\end{eqnarray*}
Therefore, by integration by parts, we have
\begin{eqnarray*}
&& 2\int_M |\nabla Rm|^2 |Rm|^{p-3}F^{a-1}\phi^{q} \\
& \leq & \int_M (\Delta|Rm|^2)|Rm|^{p-3}F^{a-1}\phi^{q} \\
&& + \int_M (\nabla F\cdot \nabla |Rm|^2) |Rm|^{p-3}F^{a-1}\phi^{q} \\
&& + 2\int_M |Rm|^{p-1}F^{a-1}\phi^{q} \\
&& + c\int_M |Rm|^{p}F^{a-1}\phi^{q} \\
& \leq & -(a-1)\int_M (\nabla F\cdot \nabla |Rm|^2) |Rm|^{p-3}F^{a-2}\phi^{q} \\
&& + \frac{q}{r} \int_M (\nabla F\cdot \nabla |Rm|^2) |Rm|^{p-3}F^{a-1}\phi^{q-1} \\
&& + \int_M (\nabla F\cdot \nabla |Rm|^2) |Rm|^{p-3}F^{a-1}\phi^{q} \\
&& + 2\int_M |Rm|^{p-1}F^{a-1}\phi^{q} \\
&& + c\int_M |Rm|^{p}F^{a-1}\phi^{q} \\
&=& I+II+III+IV+V.
\end{eqnarray*}
It follows from integration by parts, $\Delta F\leq R_0+n/2$, $|\nabla F|^2 \leq F$ and (\ref{decayofRic}) that
\begin{eqnarray*}
I &=& -(a-1)\int_M ( \nabla F\cdot \nabla |Rm|^2 ) |Rm|^{p-3}F^{a-2}\phi^{q} \\
&=& -\frac{2(a-1)}{p-1}\int_M ( \nabla F\cdot \nabla |Rm|^{p-1} ) F^{a-2}\phi^{q} \\
&=& \frac{2(a-1)}{p-1}\int_M |Rm|^{p-1}[\Delta F+(a-2)F^{-1}|\nabla F|^2] F^{a-2}\phi^{q} \\
&&- \frac{2(a-1)q}{(p-1)r}\int_M |Rm|^{p-1} |\nabla F|^2 F^{a-2}\phi^{q-1} \\
&\leq& c\int_M |Rm|^{p-1} F^{a-1}\phi^{q} +C \\
&\leq& \frac{1}{p^5} \int_M |Rm|^{p} F^{a}\phi^{q} + c(p)\int_M F^{a-p}\phi^{q} +C \\
&\leq& \frac{1}{p^5} \int_M |Rm|^{p} F^{a}\phi^{q} + c(p),
\end{eqnarray*}
where, in the last two inequalities, we have again used Young's inequality, (2.6), and $p>a+\frac{n}{2}+1$.
Similarly, for $r \geq 1$, as $\Delta F=R+\frac{n}{2}> 0$ and $|\nabla F|^2 \leq F \le r$ \ \! over $D(r)$,
\begin{eqnarray*}
II + III &=& \frac{q}{r}\int_M (\nabla F\cdot \nabla |Rm|^2) |Rm|^{p-3}F^{a-1}\phi^{q-1} \\
&& + \int_M (\nabla F\cdot \nabla |Rm|^2) |Rm|^{p-3}F^{a-1}\phi^{q}\\
&\leq& (2p+1)\int_M (\nabla F\cdot \nabla |Rm|^2) |Rm|^{p-3}F^{a-1}\phi^{q-1} \\
&=& \frac{2(2p+1)}{p-1}\int_M ( \nabla F\cdot \nabla |Rm|^{p-1} ) F^{a-1}\phi^{q-1} \\
&=& -\frac{2(2p+1)}{(p-1)}\int_M |Rm|^{p-1}[\Delta F+(a-1)F^{-1}|\nabla F|^2] F^{a-1}\phi^{q-1} \\
&&+ \frac{2(2p+1)(q-1)}{(p-1)r}\int_M |Rm|^{p-1} |\nabla F|^2 F^{a-1}\phi^{q-2} \\
&\leq& \frac{2(2p+1)(q-1)}{(p-1)}\int_M |Rm|^{p-1} F^{a-1}\phi^{q-2} \\
&\leq& \frac{1}{p^5} \int_M |Rm|^{p} F^{a}\phi^{q} + c(p)\int_M F^{a-p}\phi^{q-2p} \\
&\leq& \frac{1}{p^5} \int_M |Rm|^{p} F^{a}\phi^{q} + c(p).
\end{eqnarray*}
On the other hand, by Young's inequality, (2.6), and $p>a+\frac{n}{2}+1$, we get
\begin{eqnarray*}
IV &=& 2\int_M |Rm|^{p-1} F^{a-1}\phi^{q} \\
&\leq& \frac{1}{p^5} \int_M |Rm|^{p} F^{a}\phi^{q} + c(p),
\end{eqnarray*}
and
\begin{eqnarray*}
V &=& c \int_M |Rm|^{p} F^{a-1}\phi^{q} \\
&\leq& \frac{c}{p^5}\int_M |Rm|^{p} F^{a}\phi^{q} +C \\
&\leq& \frac{1}{p^5}\int_M |Rm|^{p} F^{a}\phi^{q} +c(p),
\end{eqnarray*}
where we have used (\ref{decayofRic}) in deriving the first inequality for $V$.
Combining all the estimates above, the proof of Lemma \ref{lemma3} is completed.
\end{proof}
Now we can conclude the proof of {\bf Proposition 3.1.}
\proof
For any constant $a>0$, let $p>a+R_0+\frac{n}{2}+c$ for some constant $c\geq 1$. Then Proposition \ref{prop} follows immediately by combining Lemma \ref{lemma1}, Lemma \ref{lemma2} and Lemma \ref{lemma3}.
\hfill $\Box$
\section{The proof of Main theorem}
In this section, we use the integral estimate in Section 3 and the De Giorgi-Nash-Moser iteration to prove our main result on the pointwise decay estimate of the curvature tensor $Rm$ as stated in the introduction (see also Theorem \ref{maintheorem}).
\begin{theorem} \label{mainthm}
Let $(M^n, g, f)$ be an $n$-dimensional complete gradient expanding Ricci soliton with nonnegative Ricci curvature $Rc\geq 0$ and finite
{\it asymptotic scalar curvature ratio}
\begin{equation*} \label{aympscalar}
\limsup_{r\to \infty} R \ \!r^2< \infty.
\end{equation*}
Then $(M^n, g, f)$ has finite $\alpha$-asymptotic curvature ratio for any $0<\alpha<2$,
\begin{equation}
A_{\alpha} := \limsup_{r\to \infty} |Rm| \ \! r^{\alpha}< \infty.
\end{equation}
Furthermore, there exist a constant $C>0$ depending on $n$ and the geometry of $(M^n, g, f)$, and sequences $\{r_j\} \to \infty$ and $\{\alpha_j\} \to 2$ such that
\begin{equation*}
|Rm|(x) \leq C (r(x)+1)^{-\alpha_j}
\end{equation*}
for any $x \in M\setminus B(x_0, r_j+1)$.
\end{theorem}
\begin{proof} As in Munteanu-Wang \cite{MW17}, we now combine Proposition 3.1 and the De Giorgi-Nash-Moser iteration to obtain the pointwise curvature tensor decay estimate.
First of all, for any $p>0$ large and $a>0$ such that $p>a+R_0+\frac{n}{2}+c>a+\frac{n}{2}+1$, by Proposition \ref{prop} we have
\begin{equation*}
\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q} \leq c(p).
\end{equation*}
Since the cut-off function $\phi \geq \frac{1}{2}$ on $D(r/2)$ by (\ref{cut-off}), it follows that
\begin{equation*}
\int_{D(r/2)}\left\vert Rm \right\vert ^{p}F^{a} \leq c(p)
\end{equation*}
for $r>r_0$ arbitrarily large.
Hence,
\begin{equation*}
\int_{M}\left\vert Rm \right\vert ^{p}F^{a} \leq c(p).
\end{equation*}
Note that if we define
\begin{equation*}
I(r):=\int_{D(r)}|Rm|^pF^a,
\end{equation*}
then clearly $I(r)$ is increasing in $r$ and
\begin{equation*}
\lim_{r\rightarrow \infty}I(r) = \int_{M}\left\vert Rm \right\vert ^{p}F^{a} \leq c(p).
\end{equation*}
Thus, for any fixed $p>0$ large there exists a constant $r_p>r_0$ such that
\begin{equation*}
\int_{M\backslash D(r_p)} |Rm|^pF^a \leq 1.
\end{equation*}
Therefore, by Lemma \ref{potencialfunction}, we have
\begin{equation} \label{boundfor|Rm|}
\int_{B_x(1)} |Rm|^p \leq c \left(r(x)+1\right)^{-2a}
\end{equation}
for any $x\in M\backslash D(r_p+1)$.
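Spelling out this step: Lemma \ref{potencialfunction} gives quadratic growth of the potential, so that $F \geq c\,(r(x)+1)^2$ on $B_x(1)$ and $B_x(1)\subset M\backslash D(r_p)$ whenever $x\in M\backslash D(r_p+1)$; hence
\begin{equation*}
\int_{B_x(1)} |Rm|^p \leq \Big(\sup_{B_x(1)}F^{-a}\Big)\int_{B_x(1)} |Rm|^p F^a
\leq c\left(r(x)+1\right)^{-2a}\int_{M\backslash D(r_p)} |Rm|^p F^a \leq c\left(r(x)+1\right)^{-2a}.
\end{equation*}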
Next, we apply the Moser iteration to get the pointwise decay estimate for $Rm$ from (\ref{boundfor|Rm|}). We start by deriving an inequality satisfied by $\Delta |Rm|^2$.
From Lemma \ref{diffeqofnorm}, we note that
$$ \Delta_f |Rm|^2 \geq 2|\nabla Rm|^2 - 2|Rm|^2 - c|Rm|^3. $$
Also, by using the Cauchy-Schwarz inequality and Kato's inequality, we have
\begin{eqnarray*}
\nabla F \cdot \nabla |Rm|^2 &=& 2|Rm|\nabla F\cdot \nabla |Rm| \\
&\leq& \frac{1}{2} |Rm|^2 |\nabla F|^2 + 2|\nabla Rm|^2.
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\Delta |Rm|^2 &\geq& 2|\nabla Rm|^2 - 2|Rm|^2 - c|Rm|^3 - \nabla F \cdot \nabla |Rm|^2 \\
&\geq& - 2|Rm|^2 - c|Rm|^3 - \frac{1}{2}(F-R-\frac{n}{2}) |Rm|^2 \\
&\geq& -c(F+|Rm|)|Rm|^2 \\
&=& -u|Rm|^2,
\end{eqnarray*}
where $u:=c(F+|Rm|)$.
\smallskip
By Lemma \ref{avr} and Lemma \ref{sobolev}, or by the Sobolev inequality in \cite{Saloff} together with the non-collapsing estimate of Carrillo and Ni in Lemma \ref{non-collapsing}, we know that the Sobolev inequality holds on the unit geodesic ball $B_x(1)$, with
the Sobolev constant
$C_s$
independent of $x\in M$. Therefore, by applying the Moser iteration (see \cite{PLi} or \cite{PLi93}), we have
\begin{equation} \label{Nash-Moser_iteration2}
|Rm|(x) \leq C_0\left( \int_{B_x(1)} u^n+1 \right)^{\frac{1}{p}}\left( \int_{B_x(1)} |Rm|^p \right)^{\frac{1}{p}},
\end{equation}
where $C_0>0$ depends only on $n$ and $C_s$.
Note that, by (\ref{boundfor|Rm|}) and the Bishop volume comparison, we have
\begin{eqnarray*}
\int_{B_x(1)} |Rm|^n & \leq & \left( \int_{B_x(1)} |Rm|^p \right)^{\frac{n}{p}}\text{Vol}(B_x(1))^{\frac{p-n}{p}} \\
& \leq & c\left( r(x)+1 \right)^{-2a\cdot \frac{n}{p}}
\end{eqnarray*}
for any $x\in M\backslash D(r_p+1)$.
Hence,
\begin{eqnarray} \label{boundforu^n2}
\int_{B_x(1)} u^n
&=& c\int_{B_x(1)} (F+|Rm|)^n \\
&\leq& c\int_{B_x(1)} F^n + c\int_{B_x(1)} |Rm|^n \notag \\
&\leq& c(r(x)+1)^{2n}. \notag
\end{eqnarray}
Now, for $p>0$ large, we take
\begin{equation} \label{arelativetop}
a=p-(\frac{n}{2}+R_0+c+1).
\end{equation}
Then, by (\ref{boundfor|Rm|})-(\ref{boundforu^n2}), we have
\begin{eqnarray} \label{pointwisefor|Rm|}
|Rm|(x) &\leq& C_0 \left( \int_{B_x(1)} u^n+1 \right)^\frac{1}{p} \left( \int_{B_x(1)} |Rm|^{p}\right)^\frac{1}{p} \notag \\
&\leq& C_0 (r(x)+1)^{-\frac{2(a-n)}{p}}
\end{eqnarray}
for $x \in M\backslash D(r_p+1)$.
On the other hand, for any $\alpha \in (0,2)$ and $p$ sufficiently large, we have
$$ \frac{a-n}{p} = 1-\frac{\frac{n}{2}+R_0+c+1+n}{p} \geq \frac{\alpha}{2}. $$
Now, for $\alpha ,\ p$ and $a$ as above, by (\ref{pointwisefor|Rm|}) we have
\begin{eqnarray*}
|Rm|(x) &\leq& C_0 (r(x)+1)^{-\frac{2(a-n)}{p}} \\
&=& C_0 (r(x)+1)^{-\alpha}
\end{eqnarray*}
for any $x \in M\backslash D(r_p+1)$.
Furthermore, note also that we have $r_p \rightarrow \infty$ as $p \rightarrow \infty$. Thus, if we take $p=j \in {\mathbb N}$ and set
\[ \alpha_j= \frac{2(a-n)}{p} = 2-\frac{3n+2R_0+2c+2}{j} \to 2,\]
then there exists a sequence $\{r_j\} \to \infty$ such that
\begin{equation*}
|Rm|(x) \leq C_0 (r(x)+1)^{-\alpha_j}
\end{equation*}
for any $x \in M\backslash D(r_j+1)$.
This completes the proof of Theorem \ref{mainthm}.
\end{proof}
\smallskip
In fact, as we mentioned in Remark 1.1, the same proof can be used to prove the following more general curvature decay estimate.
\begin{theorem} \label{subalphadecaythm}
Let $(M^n, g, f)$ be an $n$-dimensional complete gradient expanding Ricci soliton with nonnegative Ricci curvature $Rc\geq 0$ and finite $\alpha_0$-asymptotic scalar curvature ratio for some $0<\alpha_0\leq 2$,
\begin{equation*}
\limsup_{r\to \infty} R \ \!r^{\alpha_0}< \infty.
\end{equation*}
Then, $(M^n, g, f)$ has finite $\alpha$-asymptotic curvature ratio for any $0<\alpha<\alpha_0$,
\begin{equation*}
A_{\alpha} := \limsup_{r\to \infty} |Rm| \ \! r^{\alpha}< \infty.
\end{equation*}
Furthermore, there exist a constant $C>0$ depending on $n$ and the geometry of $(M^n, g, f)$, and sequences $\{r_j\} \to \infty$ and $\{\alpha_j\} \to \alpha_0$ such that
\begin{equation*}
|Rm|(x) \leq C (r(x)+1)^{-\alpha_j}
\end{equation*}
for any $x \in M\setminus B(x_0, r_j+1)$.
\end{theorem}
\begin{proof}
For any $\alpha_0 \in (0,2]$, let $\epsilon := \frac{\alpha_0}{2}$, then $\epsilon \in (0,1]$. For $p > a+R_0+\frac{n}{2}+1$, by following the same argument as in Lemma \ref{lemma1} and using Lemma \ref{diffequation}, we have
\begin{eqnarray*}
[1-p^{-1}(a+R_0+\frac{n}{2})]\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q}
&\leq & 4 \int_M |\nabla Rc|^2 |Rm|^{p-1} F^{a+\epsilon}\phi^q \\
&& + \ cp^2 \int_{M} |\nabla Rm|^2 |Rm|^{p-3} F^{a-\epsilon} \phi ^{q} \\
&& + \ c(p).
\end{eqnarray*}
Also, note that the same argument as in the proofs of Lemma \ref{lemma2} and Lemma \ref{lemma3} gives us the following: if $\epsilon p > a+\frac{n}{2}+1$, then we have
\begin{eqnarray*}
2 \int_M |\nabla Rc|^2 |Rm|^{p-1} F^{a+\epsilon}\phi^{q}
&\leq & cp^3\int_M |\nabla Rm|^2|Rm|^{p-3}F^{a-\epsilon}\phi^{q} \\
&&+ \frac{c}{p^2}\int_M |Rm|^pF^a\phi^q + c(p),
\end{eqnarray*}
and
\begin{eqnarray*}
2\int_M |\nabla Rm|^2 |Rm|^{p-3}F^{a-\epsilon}\phi^{q}
&\leq & \frac{c}{p^5}\int_M |Rm|^{p} F^{a}\phi^{q} + c(p).
\end{eqnarray*}
By combining the estimates above, we see that if $\epsilon p > a+\frac{n}{2}+1$ and $p>a+\frac{n}{2}+R_0+c$ then we have
\begin{equation*}
[1-p^{-1}(a+\frac{n}{2}+R_0+c)]\int_{M}\left\vert Rm \right\vert ^{p}F^{a}\phi ^{q} \leq c(p).
\end{equation*}
As in the proof of Theorem \ref{mainthm}, for any fixed $p>0$ large, there exists a constant $r_p>r_0$ such that
\begin{equation*}
\int_{M\backslash D(r_p)} |Rm|^pF^a \leq 1.
\end{equation*}
Therefore, by Lemma \ref{potencialfunction}, for any $x\in M\backslash D(r_p+1)$, we get
\begin{equation*}
\int_{B_x(1)} |Rm|^p \leq c\left(r(x)+1\right)^{-2a}.
\end{equation*}
For any $p>0$ large, we take
\begin{equation} \label{arelativetop3}
a=\epsilon p-(\frac{n}{2}+R_0+c+1).
\end{equation}
Then by following the same proof as in Theorem \ref{mainthm}, we have
\begin{eqnarray} \label{pointwisefor|Rm|3}
|Rm|(x) &\leq& C_0 \left( \int_{B_x(1)} u^n+1 \right)^\frac{1}{p} \left( \int_{B_x(1)} |Rm|^{p}\right)^\frac{1}{p} \notag \\
&\leq& C_0 (r(x)+1)^{-\frac{2(a-n)}{p}}
\end{eqnarray}
for $x \in M\backslash D(r_p+1)$.
We note that for any $\alpha \in (0,\alpha_0)$, when $p$ is sufficiently large, we have
$$ \frac{a-n}{p} = \epsilon-\frac{\frac{n}{2}+R_0+c+1+n}{p}
\geq \frac{\alpha}{2}. $$
Now, for $\alpha ,\ p$, $a$ as above and any $x \in M\backslash D(r_p+1)$,
by (\ref{pointwisefor|Rm|3}) we obtain
\begin{eqnarray*}
|Rm|(x) &\leq& C_0 (r(x)+1)^{-\frac{2(a-n)}{p}} \\
&=& C_0 (r(x)+1)^{-\alpha}.
\end{eqnarray*}
Moreover, as in the proof of Theorem \ref{mainthm}, if we take $p=j \in {\mathbb N}$ and set
\[ \alpha_j= \frac{2(a-n)}{p} = \alpha_0-\frac{3n+2R_0+2c+2}{j} \to \alpha_0,\]
then there exists a sequence $\{r_j\} \to \infty$ such that
\begin{equation*}
|Rm|(x) \leq C_0 (r(x)+1)^{-\alpha_j}
\end{equation*}
for any $x \in M\backslash D(r_j+1)$.
This completes the proof of Theorem \ref{subalphadecaythm}.
\end{proof}
\bigskip
\section{\label{sec:introduction}Introduction}
Experiments of ultrarelativistic heavy-ion collisions aim to
create deconfined nuclear matter, a Quark-Gluon Plasma (QGP),
and to study the QGP properties. Among the signatures of
creation of a QGP is the number-of-constituent-quark scaling
of elliptic flow of light and strange hadrons produced in the
collisions, indicating partonic collectivity~\cite{v2:ncq:scalling}.
Elliptic flow is defined as the second harmonic ($v_2$)
in the Fourier expansion of the particle azimuthal anisotropic
distribution with respect to the reaction plane, $\psiRP$~\cite{Voloshin:1994mz}:
\begin{equation}
\frac{{d}^2 N}{{d} p_T {d}\phi} \propto 1 + \sum_{n=1}^{\infty} 2v_n (p_T) \cos(n(\phi-\psiRP)) \, ,
\end{equation}
where $\phi$ and $p_T$ represent the azimuthal angle
and the transverse momentum of the particle, respectively.
The reaction plane contains the impact parameter and the beam momenta.
In practice, the estimated reaction plane is called the event plane.
Heavy quarks provide a unique probe of the QGP properties:
because their masses are large compared with the thermal energy
expected in heavy-ion collisions~\cite{Rapp:quarkoniumOverview:2008},
they are mainly produced in interactions with high momentum transfer,
very early in the heavy-ion collisions, and they are expected to interact
with the QGP differently than light and strange quarks~\cite{Dokshitzer:2001zm, Armesto:2003jh, Djordjevic:2005db}.
Moreover, the heavy quark production is sensitive to the dynamics of the nuclear medium
created in the collisions~\cite{HQ:thermalization}; measurements of their production and elliptic flow could be
used to determine the fundamental properties of the QGP, such as transport coefficients (see, for instance, Ref.~\cite{HQ:transport} and references therein).
Electrons from the semileptonic decays of heavy flavor mesons (also called non-photonic electrons, \rm NPE)
represent well the directions of the mother D (B) mesons, especially when electron $p_T > 1.5 (3)$ GeV/$c$. Thus \rm NPE\ $v_2$ serves as a proxy for heavy quark $v_2$.
In this paper, we present the STAR measurements of \rm NPE\ $v_2$ using the two- and four-particle correlations~\cite{Borghini:2000sa}
($\vn{2}$ and $\vn{4}$, respectively) and the event plane method ($\vn{EP}$)~\cite{Poskanzer:1998yz} in Au+Au collisions at $\sqrt{s_{\rm NN}} = $ 200 GeV
at the Relativistic Heavy Ion Collider (RHIC).
These approaches have different sensitivities to elliptic flow fluctuations and to particle correlations not related to $\psiRP$,
so called non-flow. Jets and resonance decays are considered to be the most important sources of these non-flow correlations.
In the case of $\vn{2}$ and $\vn{EP}$, there are positive contributions from both $v_2$ fluctuations and non-flow (the event plane and two-particle correlation methods are approximately equivalent~\cite{Trainor:2008jp}).
When $v_2$ is obtained with four-particle correlations ($\vn{4}$), the fluctuations give a negative contribution and the non-flow is suppressed. Therefore, $\vn{2}$ gives an upper limit, and $\vn{4}$ gives a lower limit, on elliptic flow~\cite{Voloshin:2007pc}.
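To make the cumulant machinery concrete, a minimal Q-vector implementation of the reference $\vn{2}$ and $\vn{4}$, following the formulas of Ref.~\cite{Bilandzic:2010jr}, could look as below; this is an illustrative sketch (unit weights, uniform acceptance), not the analysis code used for the results reported here.
\begin{verbatim}
import numpy as np

def reference_v2(events):
    # events: list of 1D arrays of azimuthal angles, one per event
    n2 = d2 = n4 = d4 = 0.0
    for phi in events:
        M = len(phi)
        if M < 4:
            continue
        Q2 = np.sum(np.exp(2j*phi))   # second-harmonic Q-vector
        Q4 = np.sum(np.exp(4j*phi))   # fourth-harmonic Q-vector
        two = (abs(Q2)**2 - M) / (M*(M-1))
        four = (abs(Q2)**4 + abs(Q4)**2 - 2*(Q4*np.conj(Q2)**2).real
                - 4*(M-2)*abs(Q2)**2 + 2*M*(M-3)) \
               / (M*(M-1)*(M-2)*(M-3))
        w2, w4 = M*(M-1), M*(M-1)*(M-2)*(M-3)
        n2 += w2*two; d2 += w2
        n4 += w4*four; d4 += w4
    avg2, avg4 = n2/d2, n4/d4
    c24 = avg4 - 2.0*avg2**2          # four-particle cumulant c_2{4}
    v2_2 = np.sqrt(avg2)              # raised by non-flow, fluctuations
    v2_4 = (-c24)**0.25 if c24 < 0 else float('nan')
    return v2_2, v2_4
\end{verbatim}
To leading order in the fluctuations, $\vn{2}^2 \approx \langle v_2\rangle^2+\sigma_{v_2}^2+\delta$ (with $\delta$ the non-flow contribution) while $\vn{4}^2 \approx \langle v_2\rangle^2-\sigma_{v_2}^2$, which is the sense in which the two estimates bracket the true elliptic flow.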
We also present \rm NPE\ $\vn{2}$ in Au+Au collisions at $\sqrt{s_{\rm NN}} = $ 62.4 and 39 GeV.
RHIC Beam Energy Scan results show that elliptic flow of inclusive charged hadrons is approximately independent of beam energy in this energy range (the difference is less than 10\% for $0.5<\pt<3$~GeV/c)~\cite{STAR:BES:iclusive:hadron:v2}.
Measurements of \rm NPE\ $\vn{2}$ at these energies could provide information about energy dependence
of the strength of heavy quarks interactions with a hot and dense nuclear medium.
\section{\label{sec:analysis}Data analysis}
Three main STAR subsystems are used in this analysis: the Time Projection Chamber (TPC)~\cite{tpc_det}, Barrel Electromagnetic Calorimeter (BEMC)~\cite{bemc_det} and Time-of-Flight (ToF)~\cite{tof_det} detectors. These detectors provide tracking and particle identification. We use events with minimum-bias and high $\pt$ (so called high tower~\cite{STAR:NPE:pp200GeV}) triggers with primary vertices located within $\pm30$~cm of the TPC's geometrical center along the beam direction. We select tracks with at least 20 points measured in the TPC and at least 52\% of the maximum number of possible TPC points. The distance-of-closest-approach (DCA) of a track to the collision vertex is required to be less than 1.5 cm. Collision centrality is determined using the number of reconstructed tracks in the TPC within
$|\eta|< 0.5$~\cite{STAR:pi:pTspectra:200GeV}. Events with 0-60\% centrality are selected for the $v_2$ measurement; however, we use minimum-bias events (0-80\% centrality) to increase statistics in the electron purity estimation. The data samples used in this study are summarized in Tab.~\ref{Tab:dataset}.
Electrons are identified using the ionization energy loss ($\mbox{$dE/dx$}$) in the TPC, the time-of-flight in the ToF detector and the energy deposited in BEMC towers. First, we select tracks with $|\eta|<0.7$ and $0<\nse<3$, where $\nse$ is the number of standard deviations from the expected mean $\mbox{$dE/dx$}$ for electrons in the TPC. The $\nse$ cut was chosen to optimize the purity (to reduce a potential systematic error due to hadron contamination) and the available statistics (which is crucial for the $\vn{4}$ measurement). For $\pt<1$~\mbox{$\mathrm{GeV/}c$}, the velocity $\beta$ measured in the ToF is used to reject kaons: we require $|1-1/\beta|<0.03$ at 200 GeV, $-0.03< 1-1/\beta<0.02$ at 62.4 GeV and $-0.03< 1-1/\beta<0.01$ at 39 GeV. Different cuts are used because of the slightly different ToF resolution at different energies. To further enhance electron identification at 39 and 62.4 GeV, we impose a more stringent requirement on $\nse$ ($0<\nse<2$) for these collision energies. In the $\pt$ range where the proton $\mbox{$dE/dx$}$ band overlaps with the electron band ($1<\pt<1.5$~\mbox{$\mathrm{GeV/}c$}), we apply an additional cut of $|1-1/\beta|<0.1$ in order to reduce proton contamination. Finally, at $\pt>1 \ \mbox{$\mathrm{GeV/}c$}$, we select tracks that have a momentum-to-energy ratio in the range of $0.3 < pc/E < 2$, where $E$ is the energy of a single BEMC tower associated with a TPC track. The BEMC has a Shower Maximum Detector (SMD), which is a proportional gas chamber with strip readout at a depth of five radiation lengths designed to measure shower shapes and positions in the $\eta - \phi$ plane, and used to discriminate between electrons and hadrons. In order to further improve the purity of the electron sample, we require tracks to occupy more than one strip in both $\phi$ and $\eta$ SMD planes.
\begin{table}[htdp]
\begin{tabular}{lc}
\hline \hline
Collision energy & Data sample (million events) \\
\hline
200 GeV (minimum bias) & 150 \\
200 GeV (high tower) & 41 \\
62.4 GeV (minimum bias) & 38 \\
39 GeV (minimum bias) & 81 \\
\hline \hline
\end{tabular}
\caption{\label{Tab:dataset} Data samples used for the analysis.}
\end{table}
\begin{figure*}[htdp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{Figures/fig1a.pdf} &
\includegraphics[width=0.45\textwidth]{Figures/fig1b.pdf} \\
\end{tabular}
\caption{\label{Fig:PurityFits} (Color online) Examples of $n\sigma_e$ distribution with fits for different hadronic components for minimum bias Au+Au collisions at \sNN{62.4}~GeV at low (a) and high $\pt$ (b).}
\end{center}
\end{figure*}
\begin{figure}[htdp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.45\textwidth]{Figures/fig2a.pdf} \\
\includegraphics[width=0.45\textwidth]{Figures/fig2b.pdf} \\
\includegraphics[width=0.45\textwidth]{Figures/fig2c.pdf} \\
\end{tabular}
\caption{\label{Fig:PurityPheRecoEff}(Color online) Electron purity (a), electron pair invariant mass distribution for electrons with $0.8 <\pt<8.5$ \mbox{$\mathrm{GeV/}c$}\ (b), and photonic electron reconstruction efficiency (c). The bands show combined systematic and statistical errors. Centrality classes are indicated in the plot.}
\end{center}
\end{figure}
Hadron contamination is estimated by first fitting a sum of Gaussian functions for charged hadrons and electrons to the $\nse$ distribution in narrow $\pt$ bins. Figure \ref{Fig:PurityFits} shows examples of such fits for the $0.7<\pt<0.8$~\mbox{$\mathrm{GeV/}c$}\ and $3<\pt<6$~\mbox{$\mathrm{GeV/}c$}\ bins for 62.4 GeV data. There is also a Gaussian for merged pions that arise from track merging due to the finite two-track resolution of the TPC; these have a \mbox{$dE/dx$}\ approximately two times larger than ``regular'' pions. Parameters of the Gaussian functions (mean and width) for each fit component are constrained using high purity electron and hadron samples. The parameters for electrons are fixed based on an electron sample from photon conversion in the detector material and the Dalitz decay of $\pi^0$ and $\eta$ mesons. For hadrons, we use the ToF at low and intermediate $\pt$ to select tracks with a mass close to the mass expected for that specific hadron. At high $\pt$ ($\pt> 3$~\mbox{$\mathrm{GeV/}c$}), pions and protons from $K_s^0$ and $\Lambda$ decays are selected, which are identified via secondary vertex reconstruction. At this high $\pt$ range a simplified fit model (three Gaussian functions: for electrons, pions and protons combined with kaons) describes the $\nse$ distribution well (see Fig. \ref{Fig:PurityFits}(b)). To improve fitting in the ranges where the kaon and the proton \mbox{$dE/dx$}\ bands overlap with the electron band, we impose constraints on the hadron amplitudes: the amplitude of a Gaussian for a hadron there is limited by the values determined outside of that cross-over range, where hadron-electron separation is feasible. These fits are then used to calculate the hadron yields within the $\nse$ range selected for the analysis. Purity is defined as a ratio of electrons to all tracks that passed the quality and electron identification cuts. Figure \ref{Fig:PurityPheRecoEff} (a) shows the purity as a function of $\pt$ -- the overall purity is 90\% or better. Hadron contamination is only significant at 200 GeV for $\pt \sim 0.5-0.6$~\mbox{$\mathrm{GeV/}c$}\ and $\pt \sim 0.8-1.1$~\mbox{$\mathrm{GeV/}c$}\ due to the overlap of the kaon and the proton \mbox{$dE/dx$}\ bands with the electron band, and the slightly more relaxed cuts used for that data set.
The primary source of physical background for this analysis is so-called photonic electrons. These electrons originate from real photon conversion in the detector material or from Dalitz decay of light mesons (mostly $\pi^0$ and $\eta$). We identify photonic electrons using a statistical approach, as a signal in the low mass region of the di-electron $m_{e+e-}$ mass spectrum (mass $m_{e+e-}<0.15$~$\rm{\mbox{$\mathrm{GeV/}c$}}^2$)~\cite{STAR:NPE:pp200GeV}. Each primary photonic electron candidate is paired with an opposite-sign electron (so-called partner) in an event. We estimate the combinatorial background in this procedure with like-sign pairs. Figure \ref{Fig:PurityPheRecoEff}(b) shows an example of an $m_{e+e-}$ distribution for minimum-bias Au+Au 62.4 GeV events. The photonic electron yield is calculated by $\nPho = (N^{\rm UL} - N^{\rm LS})/\effPho$, where $N^{\rm UL}$ and $N^{\rm LS}$ are the numbers of unlike-sign and like-sign electron pairs, respectively. $\effPho$ is the partner finding efficiency (also called the photonic electron reconstruction efficiency) which we determine from full GEANT simulations of the STAR detector. Figure \ref{Fig:PurityPheRecoEff} (c) shows $\effPho$ as a function of $\pt$; it varies from 15\% at 0.5~\mbox{$\mathrm{GeV/}c$}\ to 60\% at 7 \mbox{$\mathrm{GeV/}c$}.
The ``raw" non-photonic electron signal, $\nHFE$, is given by $\nHFE = pN_I - \nPho$, where $N_I$ is the inclusive electron sample and $p$ is the purity. Besides photonic electrons, other sources of background in this analysis are weak kaon decay ($K^{\pm} \rightarrow e^{\pm}\nu\pi^{0}$ and $K^{0}_{L} \rightarrow e^{\pm}\nu\pi^{\mp}$), called $\Ke$, quarkonia and other vector mesons~\cite{STAR:NPE:pp200GeV}. $\Ke$ is the largest source of that secondary background and we subtract it from our non-photonic electron sample, as described later in this section.
Figure \ref{Fig:SignalBackgroundRatio} shows the non-photonic electron signal (with $\Ke$ background subtracted) to photonic electron background ratio for Au+Au 200, 62.4 and 39 GeV. This ratio varies from 0.3 at low $\pt$ to 1.4 at $\pt$ above 5 \mbox{$\mathrm{GeV/}c$}; overall, it is lower at 62.4 and 39 GeV compared to 200 GeV because the cross-section for heavy quark production decreases faster with decreasing colliding energy than does the cross-section for the photonic electron background.
To determine the non-photonic electron elliptic flow, we first measure inclusive electron $v_2$, photonic electron $v_2$ and hadron azimuthal anisotropy and their yields. Then the $v_2^{\rm NPE}$ is given by
\begin{equation}
v_2^{\rm NPE} = \frac{N_I v_2^I - \nPho \, v_2^{\rm pho} - N_H v_2^H}{\nHFE},
\end{equation}
where $N_H = (1-p)N_I$ is the hadron contamination, $v_2^I$ the inclusive electron elliptic flow and $v_2^H$ the hadron azimuthal anisotropy. $v_2^H$ is calculated as the sum of $v_2$ for different particle species~\cite{STAR:pi:v2:200GeV,STAR:v2:BES,STAR:v2:BES:PRL} weighted by their yields in the inclusive electron sample. These yields are estimated based on the purity studies. Elliptic flow of these components (inclusive and photonic electrons and hadrons) can be measured using any method (for instance $\vn{2}$, $\vn{4}$ or $\vn{EP}$).
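In each $\pt$ and centrality bin, this decomposition amounts to simple algebra; schematically (an illustrative sketch, not the analysis code):
\begin{verbatim}
def v2_npe(N_I, v2_I, N_pho, v2_pho, purity, v2_H):
    # decomposition of the inclusive-electron v2, cf. the formula above
    N_H = (1.0 - purity) * N_I        # hadron contamination
    N_npe = purity * N_I - N_pho      # raw non-photonic electron yield
    return (N_I*v2_I - N_pho*v2_pho - N_H*v2_H) / N_npe
\end{verbatim}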
In the $\vn{2}$ and $\vn{4}$ analyses, we obtain $v_2^I$ and $v_2^H$ directly from the data. Inclusive electron $\vn{2}$ and $\vn{4}$ are calculated using the direct cumulant method \cite{Bilandzic:2010jr}: for $\vn{2}$ we correlate an electron with a single hadron, while one electron is correlated with three hadrons for $\vn{4}$. We calculate the reference flow using tracks with $0.2 < \pt <2$ \mbox{$\mathrm{GeV/}c$}\ within $|\eta| < 1$, excluding tracks with $|\nse| < 3$ to avoid self-correlations. The results are corrected for non-uniform azimuthal detector acceptance by applying the procedure described in Ref.~\cite{Bilandzic:2010jr}.
$v_2^{\rm pho}$ is given by GEANT simulations of electrons from $\gamma$ conversions and $\pi^0$ and $\eta$ Dalitz decays, where the measured parent $v_2(\pt)$ and $\pt$ spectra are required as an input. Direct photon $v_2$ and $\pt$ spectra at 200 GeV are taken from Refs. \cite{Phenix:LowPt:DirectPhoton,Phenix:HighPt:DirectPhoton,Phenix:DirectPhoton:v2}. For Au+Au 62.4 and 39 GeV, there are no published data available; therefore, we use results for $p+p$\ and assume binary scaling of the direct photon yield. We use NLO pQCD calculations for $p+p$\ at 62.4 GeV~\cite{Phenix:DirPhotonAuAu62GeV,Gordon:1993qc} and E706 data for 39 GeV~\cite{E706:HighPt:DirectPhoton}. We use the $v_2(\pt)$ ($\vn{2}$ and $\vn{EP}$) and $\pt$ spectra for neutral and charged pions measured by STAR and PHENIX as input for the simulation~\cite{STAR:pi:pTspectra:200GeV,STAR:pi:pTpspectra:62GeV,STAR:pi:v2:200GeV,PHENIX:pi:pTpspectra:200GeV,PHENIX:pi:pTpspectra:62:39GeV,PHENIX:pi:v2:200GeV}, and we assume $m_T$ scaling for $\eta$.
In the event-plane analysis, we reconstruct an event-plane using tracks with $0.15 < \pt <1.5$ \mbox{$\mathrm{GeV/}c$}\ and $|\eta| < 1$ in order to reduce the effect of jets on the event plane estimation. We exclude tracks with $|\nse| < 3$ to avoid possible self-correlations between the particle of interest (the electron) and tracks used in the event plane reconstruction. The results are corrected for non-uniform detector acceptance using $\phi$ weighting and event-by-event shifting of the planes, which is needed to make the final distribution of the event planes isotropic~\cite{Poskanzer:1998yz}. We obtain $v_2^{\rm NPE}\{\rm EP\}$ directly from the data: we measure the \rm NPE\ production differentially at all azimuthal angles with respect to the event plane and fit the distribution with ${d}N/{d}\Delta \phi = A \times [1 + 2 v_2^{\rm NPE}\{\rm EP\} \cos (2 \Delta \phi)]$, where $\Delta \phi \equiv \phi - \psiEP$ is the electron azimuthal angle $\phi$ with respect to the event plane $\psiEP$, reconstructed event by event. The observed $\vn{EP}$ coefficients are corrected for the finite event plane resolution, which is estimated from the correlation of the planes of independent sub-events~\cite{Poskanzer:1998yz}.
\begin{figure}[htdp]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figures/fig3.pdf}
\caption{\label{Fig:SignalBackgroundRatio} (Color online) Signal-to-background ratio for non-photonic electrons at $\sqrt{s_{\rm NN}} = $ 200, 62.4 and 39 GeV. The error bars represent the statistical uncertainty, and the brackets represent the systematic uncertainties. See text for details.
}
\end{center}
\end{figure}
\begin{figure*}[htdp]
\begin{center}
\includegraphics[width=0.95\textwidth]{Figures/fig4.pdf}
\caption{\label{Fig:IncPhoEleV2} (Color online) Inclusive and photonic electron $\vn{2}$ and $\vn{4}$ at $\sqrt{s_{\rm NN}} = $ 200, 62.4 and 39 GeV. The error bars represent the statistical uncertainty, and the brackets represent the systematic uncertainties. See text for details.}
\end{center}
\end{figure*}
The $\Ke$ contribution is estimated using a full GEANT simulation of the STAR detector for both $K^0_L$ and charged kaons. We use the $K_S^0$ $\pt$ spectra measured by STAR as an input in these simulations. The efficiency for $\Ke$ reconstruction is very low at low $\pt$ due to a DCA cut applied in the analysis: 2\% at $\pt=0.5$~\mbox{$\mathrm{GeV/}c$}\ and 5\% at $\pt=1$~\mbox{$\mathrm{GeV/}c$}. We compared the $\Ke$ background to the expected heavy flavor electron yield taking into account the single electron reconstruction efficiency and acceptance. In the case of Au+Au 200 GeV, we use the \rm NPE\ spectra measured by PHENIX \cite{Adare:2010de} as an input. For Au+Au 39 and 62.4 GeV, the \rm NPE\ $\pt$ spectrum is not available and we use a perturbative QCD prediction for \rm NPE\ production~\cite{FONLL:RamonaVogt} scaled by the number of binary collisions. The \rm NPE\ measurements in $p+p$\ at \sNN{200} GeV are consistent with the upper limit of the pQCD calculation; therefore, we use the upper limit on the predictions as an estimate of \rm NPE\ yield at lower energies. The $\Ke$ electron background is small at 200 GeV and it decreases with increasing $\pt$: we estimate it to be 8\% for $\pt<1$~\mbox{$\mathrm{GeV/}c$}\ and less than 2\% for $\pt>3$~\mbox{$\mathrm{GeV/}c$}. However, the heavy quark production cross-section decreases faster with decreasing energy than does the cross-section for strangeness production. Thus the relative $\Ke$ electron background is larger at 39 and 62.4~GeV than at the top RHIC energy: it amounts to $\approx 30\%$ for $\pt<0.5$~\mbox{$\mathrm{GeV/}c$}\ and $\approx 10\%$ for $0.5<\pt<3$~\mbox{$\mathrm{GeV/}c$}\ at 62.4 GeV. It is even higher at 39~GeV: $\approx 50\%$ for $\pt<0.5$~\mbox{$\mathrm{GeV/}c$}\ and $\approx 20\%$ for $0.5<\pt<3$~\mbox{$\mathrm{GeV/}c$}. We calculate the $\Ke$ $v_2$ using a GEANT simulation of the STAR detector taking as input the kaon $\pt$ spectrum and $v_2$ measured by STAR. The expected $\Ke$ $\pt$ spectrum and $v_2$ are then subtracted from the measured non-photonic electron yield and $v_2$.
There are three dominant sources of systematic uncertainties in this analysis: the photonic electron reconstruction efficiency, the purity and the input parameters to the photonic electron $v_2$ simulation. We estimate the systematic error on $\effPho$ by varying the contribution of direct photons to the photonic electron yield, and by comparing the partner finding efficiency in the simulations and the data. The overall systematic error on $\effPho$ is $\pm 7\%$ at 200 GeV, $\pm 8\%$ at 62.4 GeV and $\pm 10\%$ at 39 GeV. The systematic error on the purity is estimated by varying the constraints in a multi-Gaussian fit. These uncertainties vary strongly with $\pt$; Fig.~\ref{Fig:PurityPheRecoEff}(a) shows the purity with the combined systematic and statistical errors. The uncertainty on the photonic electron $v_2$ is evaluated by varying the input spectra within their statistical and systemic errors, and varying the relative contributions of the simulation components. We estimate the systematic error on the $\Ke/\rm NPE$ ratio by varying the input $\rm NPE$ distribution. At 200 GeV, we vary the input spectra within statistical and systematic errors; at 39 and 62.4 GeV, we use a central value of pQCD predictions as an estimate of the lower limit on the $\rm NPE$ production. The overall error on photonic electron $v_2$ is 6\% for $\pt<5$~\mbox{$\mathrm{GeV/}c$}; for 200 GeV at high $\pt$, it increases with $\pt$ to 20\% at $\pt = 7$~\mbox{$\mathrm{GeV/}c$}.
\section{\label{sec:results}Results}
\begin{figure}[htdp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.45\textwidth]{Figures/fig5a.pdf} \\
\includegraphics[width=0.45\textwidth]{Figures/fig5b.pdf}
\end{tabular}
\caption{\label{Fig:HFEV2} (Color online)(a) Non-photonic electron azimuthal anisotropy at $\sqrt{s_{\rm NN}} = $ 200 GeV compared to PHENIX measurements~\cite{Adare:2010de}. (b) NPE $\vn{2}$ at 200 and 62.4 and 39 GeV. The error bars represent the statistical uncertainty, and the brackets represent the systematic uncertainties. Non-flow in (a) was estimated based on \rm NPE-hadron correlations~\cite{Aggarwal:2010xp} for $\pt>2.5$~\mbox{$\mathrm{GeV/}c$}\ and PYTHIA for $\pt<2.5$~\mbox{$\mathrm{GeV/}c$}. The band includes the combined systematic and statistical uncertainties. }
\end{center}
\end{figure}
\begin{figure}[htdp]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figures/fig6.pdf}
\caption{\label{Fig:HFEV2WithModels} (Color online) Non-photonic electron azimuthal anisotropy $\vn{2}$ and $\vn{4}$ at $\sqrt{s_{\rm NN}} = $ 200 GeV (min-bias) from Fig. \ref{Fig:HFEV2}(a) compared to model calculations.}
\end{center}
\end{figure}
Figure \ref{Fig:IncPhoEleV2} shows the inclusive and photonic electron $\vn{2}$ and $\vn{4}$ for the 0-60\% most central Au+Au collisions at 200, 62.4 and 39 GeV. The photonic electron $v_2$ is larger than the inclusive electron $v_2$ at low and intermediate $\pt$ ($\pt< 4$~\mbox{$\mathrm{GeV/}c$}), which indicates that the NPE $v_2$ has to be smaller than $v_2^I$. Figure \ref{Fig:HFEV2} shows the non-photonic electron azimuthal anisotropy at $\sqrt{s_{\rm NN}} = $ 200 GeV (a), and 62.4 and 39 GeV (b). We observe finite $\vn{2}$ and $\vn{4}$ for $\pt>0.5$~\mbox{$\mathrm{GeV/}c$}\ at 200~GeV. At high $\pt$, the $\vn{2}$ and $\vn{EP}$ results are consistent with each other, as expected. There is an increase of $v_2$ with $\pt$ for $\pt>4$~\mbox{$\mathrm{GeV/}c$}, which is probably an effect of jet-like correlations. We estimate the strength of these correlations for $\pt>2.5$~\mbox{$\mathrm{GeV/}c$}\ using \rm NPE--hadron correlations in $p+p$\ at $\sqrt{s}=$~200 GeV~\cite{Aggarwal:2010xp}; the non-flow correlations in $p+p$\ are scaled by hadron multiplicity in Au+Au, similarly to Ref.~\cite{STAR:highpT:v2}. If we assume that non-flow correlations in $p+p$\ are similar to those in Au+Au, then the non-flow in Au+Au can be estimated by
\begin{equation}
v_2^{\rm non-flow} = \frac{ \langle\langle2'\rangle\rangle^{pp}}{\vn{2}^{\rm Ref}}\frac{\langle N_h^{pp} \rangle}{\langle N_h^{\rm AA} \rangle}\, ,
\end{equation}
where $\langle\langle2'\rangle\rangle^{pp}$ is the average two-particle correlation of \rm NPE\ and hadrons in $p+p$, $\langle N_h^{pp} \rangle$ and $\langle N_h^{\rm AA} \rangle$ are the average numbers of hadrons in $p+p$\ and Au+Au, respectively, and $\vn{2}^{\rm Ref}$ is the reference $v_2$ in Au+Au collisions. The jet-like correlation may be considerably modified in the QGP; therefore, this procedure likely gives a conservative estimate of the non-flow.
We found that PYTHIA simulations, with trigger and single track reconstruction efficiency included, reproduce well the $v_2^{\rm non-flow}$ obtained with $p+p$\ data at 200 GeV. Thus we use PYTHIA to estimate the $v_2^{\rm non-flow}$ for $\pt < 2.5$~\mbox{$\mathrm{GeV/}c$}. The black solid line in Fig. \ref{Fig:HFEV2} (a) shows the jet-like correlations expected in Au+Au collisions with the gray band representing combined statistical uncertainties and systematic uncertainties due to electron identification and photonic electron rejection~\cite{Aggarwal:2010xp}. Those correlations can explain the rise of $\vn{2}$ and $\vn{EP}$ with $\pt$; more than 60\% of the $v_2$ signal at high $\pt$ could be explained by the central value of non-flow (black solid line). This indicates that ``conventional'' jet correlations (i.e. correlations unrelated to the reaction plane) are likely to dominate $v_2$ at $\pt>4$ \mbox{$\mathrm{GeV/}c$}. We did not estimate the jet-like correlation at 39 and 62.4~GeV because the \rm NPE--hadron correlation data are not available at those energies.
STAR data are compared to the PHENIX measurements for $|\eta|<0.35$ in Fig.~\ref{Fig:HFEV2}(a). PHENIX used the beam-beam counters~(BBCs) with a pseudorapidity coverage of $3.0<|\eta|<3.9$ to measure the event plane. A large pseudorapidity gap between the BBCs and the detector used for electron identification is expected to reduce the effect of jet-like correlations and resonance decays on the $v_2$ measurement. PHENIX data are consistent with STAR results in the $\pt$ range where they overlap ($\pt\leq4$~\mbox{$\mathrm{GeV/}c$}).
At 39 and 62.4 GeV, $\vn{2}$ is consistent with zero up to $\pt=1.6$~\mbox{$\mathrm{GeV/}c$}\ (see Fig.~\ref{Fig:HFEV2}(b)). We further check if the $v_2$ values observed for the two lower energies deviate significantly from the trend seen at top RHIC energy.
We quantify the difference using $\chi^2$ and the z-test. In the case of the z-test, $z = (\mu_1 - \mu_2)/ \sqrt{\sigma^{2}_1 + \sigma^{2}_2}$, where $\mu$ is the mean and $\sigma$ is the standard deviation of a given sample, $\sigma = \sqrt{\sigma_{\rm stat.}^2 + \sigma_{\rm syst.}^2}$, and the two samples are assumed to be independent of one another and to have normal distributions. The difference between results at 200 and 62.4 GeV is 1.2$\sigma$ at $\pt=0.33$ \mbox{$\mathrm{GeV/}c$}, 3$\sigma$ at $\pt=0.58$ \mbox{$\mathrm{GeV/}c$}\ and 1.7$\sigma$ at $\pt=0.82$ \mbox{$\mathrm{GeV/}c$}, while the difference between 200 and 39 GeV is 1.2$\sigma$ at $\pt=0.33$ \mbox{$\mathrm{GeV/}c$}, 2.3$\sigma$ at $\pt=0.58$ \mbox{$\mathrm{GeV/}c$}\ and 1.8$\sigma$ at $\pt=0.82$ \mbox{$\mathrm{GeV/}c$}. Next, we use the $\chi^2$ test to verify the null hypothesis that $\vn{2}$ at 200 GeV is consistent with those at 62.4 and 39 GeV for $\pt<1$ \mbox{$\mathrm{GeV/}c$}. We set a significance level $\alpha$ at 0.01 and we define the test-statistic as
\begin{equation}
\chi^2 = \sum\limits_{\pt < 1 \, {\rm GeV/}c} \frac{\left(v_2^{\rm 200 \, GeV} - v_2^{\rm lower}\right)^2}{\sigma^{2}_{\rm 200\, GeV} + \sigma^{2}_{\rm lower}}
\end{equation}
where $v_2^{\rm lower}$ and $\sigma_{\rm lower}$ denote $v_2$ and $\sigma$ for lower energies, $\sigma$ is defined in the same way as for z-test and the number of degrees of freedom, NDF, is 2. The $\chi^2$/NDF value for a consistency between 200~GeV and 62.4~GeV is 13.2/2 which corresponds to a probability $p = 0.0014$ of observing a $\chi^2$ that exceeds the current measured $\chi^2$ by chance, even for a correct hypothesis. For comparison between 200 and 39~GeV, $\chi^2/{\rm NDF} = 10.5/2$ which corresponds to $p = 0.005$. Thus the null hypothesis is rejected at $\alpha=0.01$ and the difference between $\vn{2}$ at 200 and 62.4~GeV and 39~GeV for $\pt<1 $~\mbox{$\mathrm{GeV/}c$}\ is statistically significant.
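Both tests are elementary to reproduce from the tabulated $v_2$ values and their combined uncertainties; a minimal sketch using NumPy/SciPy (illustrative only, with the quoted $\chi^2$ values shown as a cross-check) is:
\begin{verbatim}
import numpy as np
from scipy import stats

def z_score(mu1, sig1, mu2, sig2):
    # independent samples; sigma combines stat. and syst. in quadrature
    return (mu1 - mu2) / np.hypot(sig1, sig2)

def chi2_consistency(v2_200, sig_200, v2_low, sig_low, ndf):
    # null hypothesis: v2{2} at 200 GeV equals v2{2} at lower energy;
    # ndf as defined in the text
    chi2 = np.sum((v2_200 - v2_low)**2 / (sig_200**2 + sig_low**2))
    return chi2, stats.chi2.sf(chi2, df=ndf)

# cross-check of the quoted p-values:
# stats.chi2.sf(13.2, 2) ~ 0.0014 and stats.chi2.sf(10.5, 2) ~ 0.005
\end{verbatim}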
The observed $v_2$ for \rm NPE\ is modified with respect to the parent quark $v_2$ due to the decay kinematics of the parent heavy hadron. This effect is shown in Fig.~\ref{Fig:HFEV2WithModels} by predictions for heavy quark elliptic flow and the resulting electron $v_2$ from partonic transport model BAMPS~\cite{Uphoff:2011ad,Uphoff:2012gb}. Also, the \rm NPE\ $\pt$ spectrum is shifted towards lower $\pt$ compared to the parent hadron spectra, which makes the interpretation of the \rm NPE\ data model-dependent. Figure \ref{Fig:HFEV2WithModels} shows \rm NPE\ $\vn{2}$ and $\vn{4}$ at 200 GeV compared to a few models of heavy quark interactions with the partonic medium, which are described below. Note, that all models here calculate the elliptic flow of NPE and heavy quarks with respect to the reaction plane. The flow fluctuations and non-flow are not included there, therefore the predicted $v_2$ values should be between $\vn{2}$ and $\vn{4}$. Unfortunately, limited statistics do not allow us to quantify this difference in the data -- the measured $\vn{4}$ is consistent with $\vn{2}$ within errors.
In a partonic transport model, BAMPS \cite{Uphoff:2011ad,Uphoff:2012gb} (blue dash-dotted line), heavy quarks lose energy by collisional energy loss with the rest of the medium. To account for radiative energy loss, which is not implemented in this model, the heavy quark scattering cross-section is scaled up by a phenomenological factor, K = 3.5. In BAMPS, the hadronization is implemented as fragmentation into $D$ and $B$ mesons using the Peterson function. Thus the observed finite $v_2$ of non-photonic electrons comes only from the elliptic flow of charm quarks. Indeed, heavy quarks have a large elliptic flow in this model (dotted line). Note that the Peterson fragmentation is not an appropriate description of hadronization at low $\pt$ and other, more sophisticated mechanisms (for instance, coalescence) should be implemented. Overall, BAMPS describes $\vn{2}$ data well, but it slightly underestimates the nuclear modification factor $R_{\rm AA}$ for heavy flavor electrons, reported by PHENIX, at intermediate $\pt$ ($1.5<\pt<4$~\mbox{$\mathrm{GeV/}c$})~\cite{Uphoff:2012gb}.
It has been shown in Ref.~\cite{Gossiaux:2008jv} that initial-state parton-$k_T$ broadening (also called the Cronin effect) increases the predicted $R_{\rm AA}$ in a $\pt$ range of 1 - 3 \mbox{$\mathrm{GeV/}c$}\ and improves the agreement with the data. However, it has almost no effect at high $\pt$ and thus it is not important for the energy loss studies.
The dash-dotted green line shows the implementation of radiative and collisional energy loss from Gossiaux et al. \cite{Gossiaux:2008jv,Gossiaux:2010yx,Aichelin:2012ww}. It is a QCD-inspired model with the pQCD description of heavy quark quenching and additional non-perturbative corrections, with the hadronization implemented as coalescence at low $\pt$ and pure fragmentation for high momentum quarks. In this model, there is little contribution from the light quark to the heavy meson $v_2$ and almost all the $D$ or $B$ meson elliptic flow comes from the charm and bottom $v_2$. This model describes the \rm NPE\ nuclear modification factor at RHIC well, but underpredicts $\vn{2}$ at intermediate $\pt$. Nevertheless, it predicts a finite \rm NPE\ $v_2$, which indicates a finite charm quark $v_2$.
The TMatrix interactions model \cite{vanHees:2007me, He:2011qa} is a non-perturbative approach to heavy quark energy loss. In this framework, heavy quark interaction with the medium is simulated with relativistic Fokker-Planck-Langevin dynamics for elastic scattering in a strongly coupled QGP (modeled by relativistic hydrodynamics). The model assumes strong coupling between heavy quarks and the bulk medium; hadronization is implemented by combining recombination and fragmentation. In this model, heavy quark resonances are formed in the medium at temperatures up to 1.5 times the critical temperature $T_c$, and scatter off the light quarks in the QGP. The resonant rescattering increases the relaxation rates for charm quarks compared to pQCD scattering of quarks and gluons. This approach also successfully describes the nuclear modification factor, although it misses our $\vn{2}$ data points at intermediate $\pt$~(solid black line). Note that $v_{2}$ should be sensitive to the heavy quark hadronization mechanism. M.~He et al.~\cite{He:2011qa} and P.B.~Gossiaux et al.~\cite{Gossiaux:2008jv,Gossiaux:2010yx,Aichelin:2012ww} use a coalescence approach in the shown $\pt$ range, while in the BAMPS model heavy quarks fragment into mesons. In general, coalescence is expected to give a larger $v_2$ of the mesons due to the contribution of the light quark flow. However, it is shown in~\cite{Greco:2003vf,Adare:2010de} that elliptic flow of light quarks alone cannot account for the observed NPE $v_2$. In that model, the data are approximately reproduced if charm quarks have an elliptic flow similar to that of light quarks.
The theoretical models discussed here, despite the different mechanisms employed, assume that charm quarks are strongly coupled with the medium and have a finite elliptic flow. All these models qualitatively follow the trend of the data. To further discriminate between models, a simultaneous comparison with other experimental observables (nuclear modification factor, azimuthal correlations) as a function of beam energy is required.
Our $\vn{2}$ measurements at 39 and 62.4 GeV provide such additional benchmarks for testing hypotheses of heavy quark energy loss. Moreover, precision measurements of these quantities for charmed and bottom hadrons separately are necessary to further constrain models and to advance our understanding of the partonic medium properties. Two new STAR detectors, the Heavy Flavor Tracker and the Muon Telescope Detector~\cite{QM2012:STAR:upgrade}, will deliver such data in the next few years.
\section{\label{sec:summary}Summary}
We report the first measurement of non-photonic electron azimuthal anisotropy using two- and four-particle correlations at $\sqrt{s_{\rm NN}} = $ 200 GeV, and $\vn{2}$ at 62.4 and 39 GeV. \rm NPE\ $\vn{2}$ and $\vn{4}$ are non-zero at low and intermediate $\pt$ at 200 GeV; more data are needed to quantify the effect of fluctuations and non-flow on the measured elliptic flow. At lower energies, the measured value of $\vn{2}$ is consistent with zero. The $\chi^2$ tests for consistency between $\vn{2}$ at $\sqrt{s_{\rm NN}} = $~200~GeV and lower energies for $\pt<1 $~\mbox{$\mathrm{GeV/}c$}\ give $\chi^2/{\rm NDF} = 13.2/2$ for $\sqrt{s_{\rm NN}} = $ 62.4 GeV, and $\chi^2/{\rm NDF} = 10.5/2$ for $\sqrt{s_{\rm NN}} = $ 39 GeV. These values correspond to probabilities of $p = 0.0014$ and $p = 0.005$, respectively. Thus the difference between $\vn{2}$ for $\pt<1 $~\mbox{$\mathrm{GeV/}c$}\ in Au+Au collisions at $\sqrt{s_{\rm NN}} = $ 200 GeV and $\vn{2}$ at the two lower beam energies is statistically significant.
\section*{Acknowledgements}
We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL, the KISTI Center in Korea and the Open Science Grid consortium for providing resources and support. This work was supported in part by the Offices of NP and HEP within the U.S. DOE Office of Science, the U.S. NSF, CNRS/IN2P3, FAPESP CNPq of Brazil, Ministry of Ed. and Sci. of the Russian Federation, NNSFC, CAS, MoST and MoE of China, the Korean Research Foundation, GA and MSMT of the Czech Republic, FIAS of Germany, DAE, DST, and CSIR of India, National Science Centre of Poland, National Research Foundation (NRF-2012004024), Ministry of Sci., Ed. and Sports of the Rep. of Croatia, and RosAtom of Russia.
\bibliographystyle{model1a-num-names}
\section{Introduction}
The field of poromechanics pertains to the study of coupled fluid flows and mechanical deformations in porous media. Applications include the prediction of land subsidence due to extraction of water and/or hydrocarbons from the subsurface \cite{faunt2016water}.
Mathematical models of the poroelastic two-phase flow problem can be found in \cite{lewis1998finite} and were
derived by Biot \cite{biot1941general,biot1956theory} using a phenomenological approach.
In the case of single phase flow, the poroelasticity equations have been extensively studied by applied mathematicians and engineers in the scientific literature \cite{muradloula,BarryMercer1998,WheelerGai,phillips2008coupling,yi2013coupling,ChaabaneRiviere2017b}. In contrast, there are very few works on the design of efficient numerical methods for multiphase flows in deformable porous media.
The main contribution of this work is the formulation of a numerical method that employs discontinuous piecewise polynomial approximations
for the wetting and non-wetting phase pressures and the displacement of the medium.
At each time step, the mass balance equations and the momentum equation are sequentially solved.
Stabilization terms are added to the discrete momentum equation, in the same spirit as what was done in \cite{ChaabaneRiviere2017} for
single phase flow in deformable porous media.
In this work, we focus on isothermal flows where inertial forces are neglected.
The resulting coupled
partial differential equations can be solved fully implicit, iteratively or sequentially \cite{dean2006comparison}.
Fully implicit finite element
methods are the most stable ones but also the most computationally expensive.
In \cite{schrefler1993fully}, finite element methods
in space are combined with the theta method in time and the resulting system is solved by Newton-Raphson's method at each time step.
The method is applied to one-dimensional and two-dimensional problems. In \cite{yang2014fully}, fully implicit mixed finite element methods combined with standard finite element methods are
applied to solve for pressure, saturation, displacement and their gradients in two-dimensional problems.
The iterative approach (fixed-stress split) is combined with finite volume methods in \cite{asadi2015comparison}
for different choices of primary unknowns and for one-dimensional problems.
Our approach is novel in the sense that no iterations are needed for stability. At each time step, each equation is solved separately and
the computational cost is smaller than the one for fully implicit methods. We apply the proposed method to three-dimensional problems
and we study the impact of heterogeneities (discontinuous capillary pressure) and loading on the propagation of the fluid phases in the medium.
Finally, we point out that fully implicit finite element method
has been applied to more complex dynamic and non-isothermal flows in \cite{lizienkiewicz1990,schrefler2001fully,gawinbaggio,khoei2020thermo}.
An outline of the paper follows. Section~\ref{sec:problem} introduces the mathematical model and the assumptions on the input
data. The numerical algorithm is described and analyzed in Section~\ref{sec:scheme}. Numerical results, including convergence rates and
validation of the method by benchmark problems, can be found in Section~\ref{sec:numer}. Conclusions follow.
\section{Model Problem}
\label{sec:problem}
\label{sec:model}
Mathematical models for compressible two-phase flow poroelasticity are described by two mass conservation equations
coupled by a momentum conservation equation \cite{lewis1998finite}. Let $p_w, s_w$ (resp. $p_o, s_o$) denote the wetting (resp. non-wetting) phase pressure and saturation respectively and let ${\boldsymbol n}$ denote the displacement of the porous medium $\Omega\subset\mathbb{R}^3$.
By definition, $s_o=1-s_w$, and we use this relation to eliminate the non-wetting phase saturation from the system of equations.
The difference between phase pressures is the capillary pressure, $p_c$, which is a given nonlinear function of $s_w$, according to the Brooks-Corey model \cite{brookscorey}:
\begin{equation}\label{eq:pc}
p_c = p_c(s_w) = p_o - p_w, \quad s_w = \left(\frac{p_d}{p_c}\right)^2,
\end{equation}
where $p_d>0$ is a constant entry pressure.
We choose for primary unknowns the phase pressures and the displacement.
The nonlinear model coupling flow and deformation can be described by the following equations:
\begin{align}
\mathcal{C}_1(p_o,p_w) \frac{\partial p_w}{\partial t} + \mathcal{C}_2(p_o,p_w) \frac{\partial p_o}{\partial t}
- \nabla \cdot (\lambda_w(s_w) K \nabla p_w)
+ \alpha s_w \frac{\partial (\nabla \cdot{\boldsymbol n}) }{\partial t} = f_w,\label{eq:pb1}\\
\mathcal{C}_3(p_o,p_w) \frac{\partial p_o}{\partial t} + \mathcal{C}_4(p_o,p_w) \frac{\partial p_w}{\partial t}
- \nabla \cdot (\lambda_o(s_w) K \nabla p_o)
+ \alpha (1-s_w) \frac{\partial (\nabla \cdot{\boldsymbol n}) }{\partial t} = f_o, \label{eq:pb2}\\
-\mu \Delta {\boldsymbol n} - (\lambda+\mu)\nabla (\nabla \cdot {\boldsymbol n}) + \nabla (s_w p_w + (1-s_w) p_o) = {\boldsymbol f}_{\boldsymbol n}. \label{eq:pb3}
\end{align}
The mass balance equations for the wetting and non-wetting phase are \eqref{eq:pb1} and \eqref{eq:pb2} respectively
whereas \eqref{eq:pb3} represents the momentum equation for quasi-static elastic deformation of the medium. The coefficients $C_i$ are nonlinear functions of the phase pressures (see \eqref{eq:pc}):
\begin{align}
\mathcal{C}_1(p_o,p_w) =& \frac{\alpha-\phi}{K_s} s_w^2 + \frac{\phi s_w}{K_w} + \left( \frac{\alpha-\phi}{K_s} s_w p_c - \phi\right) \frac{d s_w}{d p_c},\\
\mathcal{C}_2(p_o,p_w) =& \frac{\alpha-\phi}{K_s} s_w(1-s_w) - \left( \frac{\alpha-\phi}{K_s} s_w p_c -\phi\right) \frac{d s_w}{d p_c},\\
\mathcal{C}_3(p_o,p_w) =& \frac{\alpha-\phi}{K_s} (1-s_w)^2 + \frac{\phi (1-s_w)}{K_o} - \left( \frac{\alpha-\phi}{K_s} (1-s_w) p_c + \phi\right) \frac{d s_w}{d p_c},\\
\mathcal{C}_4(p_o,p_w) =& \frac{\alpha-\phi}{K_s} s_w(1-s_w) + \left( \frac{\alpha-\phi}{K_s} (1-s_w) p_c +\phi\right) \frac{d s_w}{d p_c}.
\end{align}
We describe briefly the different coefficients in the equations above. The absolute permeability field $K$ and the
porosity field $\phi$ are given positive scalar functions; $K$ may be discontinuous and vary in space over several orders of magnitude.
Other input data are known constants: the Biot-Willis constant $\alpha$; the bulk moduli for the solid structure
and the fluid phases, $K_s, K_w, K_o$; the Lam\'e parameters $\lambda, \mu$; and the phase viscosities $\mu_w$ and $\mu_o$.
The phase mobilities, $\lambda_w, \lambda_o$, are the ratios of
the phase relative permeability $k_{ri}$ to the phase viscosity $\mu_i$ and they are given functions of the saturation:
\begin{equation}\label{eq:relperm}
\lambda_i(s_w) = \frac{k_{ri}(s_w)}{\mu_i}, \quad i=w,o, \quad k_{rw}(s_w) = s_w^4, \quad k_{ro}(s_w) = (1-s_w)^2(1-s_w^2).
\end{equation}
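For concreteness, the constitutive relations \eqref{eq:pc} and \eqref{eq:relperm} can be evaluated as in the following sketch; the entry pressure and the viscosities are illustrative placeholder values, not the data used in Section~\ref{sec:numer}.
\begin{verbatim}
import numpy as np

P_D = 5.0e3                    # entry pressure p_d [Pa]; placeholder
MU_W, MU_O = 1.0e-3, 2.0e-3    # viscosities mu_w, mu_o [Pa s]; placeholder

def s_w_of_pc(pc):
    # invert the Brooks-Corey relation s_w = (p_d/p_c)^2;
    # values p_c <= p_d are clamped so that s_w <= 1
    return (P_D / np.maximum(pc, P_D))**2

def mobilities(s_w):
    # lambda_i = k_ri/mu_i with k_rw = s_w^4, k_ro = (1-s_w)^2(1-s_w^2)
    k_rw = s_w**4
    k_ro = (1.0 - s_w)**2 * (1.0 - s_w**2)
    return k_rw / MU_W, k_ro / MU_O
\end{verbatim}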
The Biot-Willis constant $\alpha$ is close to $1$.
For realistic porous media with porosity less than $0.5$, this implies that
the quantity $(\alpha - \phi)$ is non-negative.
The porous medium is such that the bulk modulus for
the solid is much larger than the capillary pressure, and thus we assume that
\[
\frac{p_c}{K_s} << 1.
\]
This implies that
\[
\frac{\alpha-\phi}{K_s} s_w p_c - \phi \leq 0.
\]
From \eqref{eq:pc}, we see that the derivative $s_w'(p_c)$ is negative.
Therefore, with the assumptions above, we can determine the sign of two of the scalar functions $\mathcal{C}_i(p_o,p_w)$.
\begin{equation}\label{eq:nonneg}
\mathcal{C}_1(p_o,p_w) \geq 0, \quad \mathcal{C}_3(p_o,p_w) \geq 0.
\end{equation}
This motivates the use of a sequential scheme where \eqref{eq:pb1} is solved for $p_w$ and \eqref{eq:pb2} is solved for $p_o$.
The equations \eqref{eq:pb1}-\eqref{eq:pb3} are completed by initial and boundary conditions.
\begin{eqnarray}
p_w & = & p_{w}^0,\quad \mbox{in} \quad \Omega\times \{0\},\\
p_o & = & p_{o}^0,\quad \mbox{in} \quad \Omega\times \{0\},\\
{\boldsymbol n} & = & {\boldsymbol n}^0,\quad \mbox{in} \quad \Omega\times \{0\}.
\end{eqnarray}
The boundary of the medium is decomposed into Dirichlet and Neumann parts for pressures and displacement:
\[
\partial\Omega = \Gamma_{p{\mathrm D}}\cup\Gamma_{p{\mathrm N}} = \Gamma_{{\boldsymbol n}{\mathrm D}}\cup\Gamma_{{\boldsymbol n}{\mathrm N}}.
\]
Boundary data are prescribed by the following conditions:
\begin{align}
p_w = & p_{w{\mathrm D}}, \quad p_o = p_{o{\mathrm D}}, \quad \mbox{on}\quad \Gamma_{p{\mathrm D}}\times (0,T),\\
\lambda_w(s_w) K \nabla p_w\cdot{\boldsymbol n} = & g_{w}, \quad \lambda_o(s_w) K \nabla p_o \cdot{\boldsymbol n} = g_o, \quad \mbox{on}\quad \Gamma_{p{\mathrm D}}\times (0,T),\\
{\boldsymbol n} = & {\boldsymbol n}_{\mathrm D}, \quad \mbox{on}\quad \Gamma_{{\boldsymbol n}{\mathrm D}}\times (0,T),\\
\mu \nabla {\boldsymbol n} \, {\boldsymbol n} + (\lambda+\mu) (\nabla \cdot{\boldsymbol n}){\boldsymbol n}
= & {\boldsymbol g}_{\boldsymbol n}, \quad \mbox{on}\quad \Gamma_{{\boldsymbol n}{\mathrm N}}\times (0,T).
\end{align}
\section{Discontinuous Galerkin Scheme}
\label{sec:scheme}
The equations are discretized by the interior penalty discontinuous Galerkin method. Let $\mathcal{E}_h$ be a partition of the domain
made of tetrahedral elements of maximum diameter $h$. Let $\Gamma_h$ denote the set of interior faces. For any interior face $e$, we fix
a unit normal vector ${\boldsymbol n}_e$ and we denote by $E_e^1$ and $E_e^2$ the two tetrahedra that share the face $e$ such that the vector
${\boldsymbol n}_e$ points from $E_e^1$ into $E_e^2$. The jump and average of a function $q$ across an interior face $e$ are denoted by
$[q]$ and $\{q\}$ respectively:
\[
[q] = q|_{E_e^1}-q|_{E_e^2}, \quad \{q\}=\frac12 \left( q|_{E_e^1}+q|_{E_e^2}\right), \quad \forall e=\partial E_e^1\cap\partial E_e^2.
\]
The jump and average of $q$ on a boundary face are, by convention, equal to the trace of $q$:
\[
[q] = q|_{e}, \quad \{q\}=q|_e, \quad \forall e\subset\partial\Omega.
\]
The DG spaces, denoted by $Q_h$ and ${\boldsymbol V}_h$, consist of discontinuous piecewise linears:
\[
Q_h=\{ q\in L^2(\Omega): \, q|_E \in \mathbb{P}_1(E), \, \forall E\in\mathcal{E}_h\}, \quad
{\boldsymbol V}_h = Q_h\times Q_h\times Q_h.
\]
We denote by $\Pi$ the cut-off operator that truncates any function $q$ to the interval $[\epsilon,1-\epsilon]$, which keeps the nonlinear coefficients away from their degenerate values at $q=0$ and $q=1$. The parameter $\epsilon$ is chosen equal
to $10^{-8}$ in our numerical results.
\[
\Pi(q)({\boldsymbol x}) = \left\{
\begin{array}{lr}
1-\epsilon & \mbox{if } q({\boldsymbol x}) > 1-\epsilon,\\
q({\boldsymbol x}) & \mbox{if } \epsilon \leq q({\boldsymbol x}) \leq 1-\epsilon,\\
\epsilon & \mbox{if } q({\boldsymbol x}) < \epsilon.
\end{array}
\right.
\]
Let $0=t_0<t_1<\dots<t_N=T$ be a partition of the time interval $(0,T)$. For reasons that will be apparent below, we choose two time step values $\tau_0$ and $\tau$ and we define
\[
t_1 = \tau_0, \quad t_n = t_1+(n-1)\tau, \quad \forall n\geq 2.
\]
Let $P_w^n, P_o^n$ and ${\boldsymbol U}^n$ denote the DG approximations of $p_w, p_o$ and ${\boldsymbol n}$ evaluated at time $t_n$. We define
\begin{equation}
S_w^n = \Pi(p_c^{-1}(P_o^n-P_w^n)), \quad \forall n\geq 1.
\label{eq:truncatedsat}
\end{equation}
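For illustration, the cut-off and the saturation update \eqref{eq:truncatedsat} may be realized as in the sketch below; the Brooks--Corey form $p_c(s_w)=p_d\, s_w^{-1/2}$ used here to invert the capillary pressure is an assumption made for this example, the actual function being the one defined in \eqref{eq:pc}.
\begin{verbatim}
import numpy as np

EPS = 1e-8   # cut-off parameter epsilon

def cutoff(q):
    """Cut-off operator Pi: restrict values to [EPS, 1 - EPS]."""
    return np.clip(q, EPS, 1.0 - EPS)

def saturation_from_pressures(P_o, P_w, p_d=5000.0):
    """S_w = Pi(pc^{-1}(P_o - P_w)), assuming pc(s) = p_d * s**(-1/2)."""
    pc = np.maximum(P_o - P_w, p_d)    # guard: capillary pressure >= p_d
    s  = (p_d / pc) ** 2               # inverse of the assumed pc law
    return cutoff(s)

# Example: with the initial McWhorter data used later in the paper
print(saturation_from_pressures(np.array([234000.0]),
                                np.array([184000.0])))   # -> [0.01]
\end{verbatim}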
The scheme consists of three sequential steps for $n\geq 1$:\\
Step 1: Given $P_w^n\in Q_h$, $P_o^n, P_o^{n-1}\in Q_h$ and ${\boldsymbol U}^n,{\boldsymbol U}^{n-1}\in {\boldsymbol V}_h$, find $P_w^{n+1}\in Q_h$ such that
\begin{align}
\left(\mathcal{C}_1(P_o^n,P_w^n) \frac{P_w^{n+1}-P_w^n}{\tau} + \mathcal{C}_2(P_o^n,P_w^n) \frac{P_o^n-P_o^{n-1}}{\tau}, q_h\right)_\Omega
+ a(\lambda_w^n K; P_w^{n+1},q_h) \nonumber\\
+ \alpha b_{\boldsymbol n}(S_w^n;\frac{{\boldsymbol U}^{n}-{\boldsymbol U}^{n-1}}{\tau},q_h) = \ell_w(t_{n+1};q_h),\quad \forall q_h\in Q_h.\label{eq:discpb1}
\end{align}
Step 2: Given $P_o^n\in Q_h$, $P_w^n, P_w^{n+1}\in Q_h$ and ${\boldsymbol U}^n, {\boldsymbol U}^{n-1}\in {\boldsymbol V}_h$, find $P_o^{n+1}\in Q_h$ such that
\begin{align}
\left(\mathcal{C}_3(P_o^n,P_w^n) \frac{P_o^{n+1}-P_o^n}{\tau} + \mathcal{C}_4(P_o^n,P_w^n) \frac{P_w^{n+1}-P_w^n}{\tau},q_h \right)_\Omega
+ a(\lambda_o^n K; P_o^{n+1},q_h) \nonumber\\
+ \alpha b_{\boldsymbol n}(1-S_w^n;\frac{{\boldsymbol U}^{n}-{\boldsymbol U}^{n-1}}{\tau},q_h) = \ell_o(t_{n+1};q_h), \quad \forall q_h\in Q_h.
\label{eq:discpb2}
\end{align}
Step 3: Given $P_o^{n+1}, P_w^{n+1} \in Q_h$ and ${\boldsymbol U}^{n}, {\boldsymbol U}^{n-1}\in{\boldsymbol V}_h$, find ${\boldsymbol U}^{n+1}\in{\boldsymbol V}_h$ such that
\begin{align}
c({\boldsymbol U}^{n+1},{\boldsymbol v}_h) + b_p(S_w^{n+1} P_w^{n+1}+(1-S_w^{n+1}) P_o^{n+1},{\boldsymbol v}_h)
+\gamma \left(\frac{{\boldsymbol U}^{n+1}-{\boldsymbol U}^n}{\tau },{\boldsymbol v}_h\right)_\Omega
\nonumber\\
-\gamma \left(\frac{{\boldsymbol U}^{n}-{\boldsymbol U}^{n-1}}{\tau },{\boldsymbol v}_h\right)_\Omega
= \ell_{\boldsymbol n}(t_{n+1};{\boldsymbol v}_h), \quad
\forall {\boldsymbol v}_h\in{\boldsymbol V}_h. \label{eq:discpb3}
\end{align}
In \eqref{eq:discpb1}, \eqref{eq:discpb2}, the coefficients $\lambda_w^n, \lambda_o^n$ are the functions $\lambda_w$
and $\lambda_o$ evaluated at $S_w^n$. In \eqref{eq:discpb3}, the parameter $\gamma$ is a positive constant that is user-specified and
that multiplies a stabilization term involving the discrete displacements. The numerical scheme \eqref{eq:discpb1}-\eqref{eq:discpb3} is sequential, as the flow and displacement equations are solved separately. However, each equation is solved implicitly with respect to its primary unknown ($P_w^{n+1}$ for \eqref{eq:discpb1}, $P_o^{n+1}$ for \eqref{eq:discpb2} and $\mathbf{U}^{n+1}$ for \eqref{eq:discpb3}). One novel contribution of this work is the use of the stabilization term that multiplies $\gamma$, which is needed for the stability of the sequential splitting. For single-phase flow in deformable porous media, stability and convergence of the scheme are obtained if $\gamma$ is sufficiently large \cite{ChaabaneRiviere2017}. The convergence proof for the case of two-phase flow in deformable porous media remains an open question.
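In outline, one time step of the sequential algorithm can be organized as in the following Python sketch; the solver callbacks \texttt{solve\_pw}, \texttt{solve\_po} and \texttt{solve\_disp} are hypothetical placeholders for the three linear solves \eqref{eq:discpb1}--\eqref{eq:discpb3}.
\begin{verbatim}
def advance_one_step(state, tau, solve_pw, solve_po, solve_disp):
    """One sequential step n -> n+1 of the scheme (sketch only).

    state holds (Pw_n, Po_n, Po_nm1, U_n, U_nm1); each solve_* callback
    assembles and solves one linear system, implicit in its unknown.
    """
    Pw_n, Po_n, Po_nm1, U_n, U_nm1 = state

    # Step 1: wetting-phase pressure; lagged Po and the discrete dU/dt
    # enter the coupling terms.
    Pw_np1 = solve_pw(Pw_n, Po_n, Po_nm1, U_n, U_nm1, tau)

    # Step 2: non-wetting-phase pressure; uses the fresh Pw_np1.
    Po_np1 = solve_po(Po_n, Pw_n, Pw_np1, U_n, U_nm1, tau)

    # Step 3: displacement; includes the gamma-stabilization acting on
    # (U_np1 - 2 U_n + U_nm1) / tau.
    U_np1 = solve_disp(Pw_np1, Po_np1, U_n, U_nm1, tau)

    return (Pw_np1, Po_np1, Po_n, U_np1, U_n)
\end{verbatim}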
The $L^2$ inner-product over $\Omega$ is denoted by $(\cdot,\cdot)_\Omega$. Similarly, we use the notation
$(\cdot,\cdot)_E$ and $(\cdot,\cdot)_e$ for the $L^2$ inner-product over an element $E$ and a face $e$. We now describe the forms $a(\cdot;\cdot,\cdot), b_{\boldsymbol n}(\cdot;\cdot,\cdot)$,
$c(\cdot,\cdot), b_p(\cdot,\cdot)$ that correspond to the discretizations of the differential operators in the mathematical model.
For the operator of the form $ \chi \nabla \cdot {\boldsymbol n}$ with $\chi$ being a scalar-valued function, we propose the following discretization:
\[
b_{\boldsymbol n}(\chi;{\boldsymbol n},q) = -\sum_{E\in\mathcal{E}_h} ({\boldsymbol n},\nabla (\chi q))_E
+ \sum_{e\in\Gamma_h\cup\partial\Omega} (\{{\boldsymbol n}\cdot{\boldsymbol n}_e\}, [\chi q])_e.
\]
For the operator of the form $\nabla q$, we apply the following discretization:
\[
b_p(q,{\boldsymbol v}) = \sum_{E\in\mathcal{E}_h} (\nabla q, {\boldsymbol v})_E
- \sum_{e\in\Gamma_h} ([q],\{{\boldsymbol v}\cdot{\boldsymbol n}_e\})_e.
\]
For the operator of the form $-\nabla\cdot(\chi\nabla p)$ with $\chi$ being a scalar-valued function, we utilize the standard interior penalty DG form:
\begin{eqnarray*}
a(\chi;p,q) &=& \sum_{E\in\mathcal{E}_h} (\chi\nabla p,\nabla q)_E
+ \sum_{e\in\Gamma_h\cup\Gamma_{p{\mathrm D}}} \sigma_p h_e^{-1} ([p],[q])_e \\
&& -\sum_{e\in\Gamma_h\cup\Gamma_{p{\mathrm D}}} (\{ \chi \nabla p\} \cdot{\boldsymbol n}_e, [q])_e
+\epsilon_p \sum_{e\in\Gamma_h\cup\Gamma_{p{\mathrm D}}} (\{ \chi \nabla q\}\cdot{\boldsymbol n}_e, [p])_e.
\end{eqnarray*}
The scalar $\epsilon_p$ is either equal to $-1$ or to $+1$ to yield a symmetric or non-symmetric bilinear form.
The penalty parameter $\sigma_p$ is a positive constant: it has to be sufficiently large if $\epsilon_p=-1$ \cite{Riviere2008}.
The discretization of the operator $-\mu\Delta {\boldsymbol n} - (\lambda+\mu) \nabla (\nabla \cdot{\boldsymbol n})$ is also recalled:
\begin{eqnarray*}
c({\boldsymbol n},{\boldsymbol v}) &=& \mu \sum_{E\in\mathcal{E}_h} (\nabla {\boldsymbol n},\nabla {\boldsymbol v})_E
+ \mu \sum_{e\in\Gamma_h\cup\Gamma_{{\boldsymbol n}{\mathrm D}}} \sigma_{\boldsymbol n} h_e^{-1} ([{\boldsymbol n}],[{\boldsymbol v}])_e \\
&& -\mu \sum_{e\in\Gamma_h\cup\Gamma_{{\boldsymbol n}{\mathrm D}}} (\{ \nabla {\boldsymbol n}\} {\boldsymbol n}_e, [{\boldsymbol v}])_e
+\epsilon_{\boldsymbol n} \mu \sum_{e\in\Gamma_h\cup\Gamma_{{\boldsymbol n}{\mathrm D}}} (\{ \nabla {\boldsymbol v}\}{\boldsymbol n}_e, [{\boldsymbol n}])_e
\\
&& + (\lambda+\mu) \sum_{E\in\mathcal{E}_h} (\nabla\cdot{\boldsymbol n},\nabla \cdot{\boldsymbol v})_E
- (\lambda+\mu) \sum_{e\in\Gamma_h\cup\Gamma_{{\boldsymbol n}{\mathrm D}}} (\{\nabla \cdot {\boldsymbol n}\}, [{\boldsymbol v}\cdot{\boldsymbol n}_e])_e.
\end{eqnarray*}
The forms $\ell_w, \ell_o$ and $\ell_{\boldsymbol n}$ handle the source/sink functions, external forces and boundary conditions.
\begin{eqnarray*}
\ell_w(t_{n+1};q_h) &=& (f_w(t_{n+1}),q_h)_\Omega
+\epsilon_p \sum_{e\in\Gamma_{p{\mathrm D}}} (\lambda_w^n K \nabla q_h\cdot{\boldsymbol n}_e,p_{w{\mathrm D}}(t_{n+1}))_e
\\
&&+ \sum_{e\in\Gamma_{p{\mathrm N}}} (g_w(t_{n+1}),q_h)_e
+\sum_{e\in\Gamma_{p{\mathrm D}}} \sigma_p h_e^{-1} (p_{w{\mathrm D}}(t_{n+1}), q_h)_e,
\end{eqnarray*}
\begin{eqnarray*}
\ell_o(t_{n+1};q_h) &=& (f_o(t_{n+1}),q_h)_\Omega
+\epsilon_p \sum_{e\in\Gamma_{p{\mathrm D}}} (\lambda_o^n K \nabla q_h\cdot{\boldsymbol n}_e,p_{o{\mathrm D}}(t_{n+1}))_e
\\
&&+ \sum_{e\in\Gamma_{p{\mathrm N}}} (g_o(t_{n+1}),q_h)_e
+\sum_{e\in\Gamma_{p{\mathrm D}}} \sigma_p h_e^{-1} (p_{o{\mathrm D}}(t_{n+1}), q_h)_e,
\end{eqnarray*}
\begin{eqnarray*}
\ell_{\boldsymbol n}(t_{n+1};{\boldsymbol v}_h) &=& ({\boldsymbol f}_{\boldsymbol n}(t_{n+1}),{\boldsymbol v}_h)_\Omega
+\epsilon_{\boldsymbol n} \mu \sum_{e\in\Gamma_{{\boldsymbol n}{\mathrm D}}} (\nabla {\boldsymbol v}_h \, {\boldsymbol n}_e, {\boldsymbol n}_{\mathrm D}(t_{n+1}))_e
\\
&&+ \sum_{e\in\Gamma_{{\boldsymbol n}{\mathrm N}}} ({\boldsymbol g}_{\boldsymbol n}(t_{n+1}),{\boldsymbol v}_h)_e
+\sum_{e\in\Gamma_{{\boldsymbol n}{\mathrm D}}} \sigma_{\boldsymbol n} h_e^{-1} ({\boldsymbol n}_{\mathrm D}(t_{n+1}),{\boldsymbol v}_h)_e.
\end{eqnarray*}
In order to start the algorithm, the solutions at times $t_0$ and $t_1$ are to be computed.
The initial values are chosen to be the $L^2$ projections of the initial data.
\[
(P_w^0,q_h)_\Omega = (p_w^0,q_h)_\Omega, \quad
(P_o^0,q_h)_\Omega = (p_o^0,q_h)_\Omega, \quad
({\boldsymbol U}^0,{\boldsymbol v}_h)_\Omega = ({\boldsymbol n}^0,{\boldsymbol v}_h)_\Omega, \quad \forall q_h\in Q_h, \, \forall {\boldsymbol v}_h\in{\boldsymbol V}_h.
\]
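The $L^2$ projection is local to each element; a minimal one-dimensional sketch, assuming discontinuous $\mathbb{P}_1$ elements and Gauss--Legendre quadrature, reads:
\begin{verbatim}
import numpy as np

def l2_project_p1(f, nodes, nquad=4):
    """Elementwise L2 projection of f onto discontinuous P1 in 1D."""
    xq, wq = np.polynomial.legendre.leggauss(nquad)   # rule on [-1, 1]
    coeffs = []
    for a, b in zip(nodes[:-1], nodes[1:]):
        x = 0.5 * (a + b) + 0.5 * (b - a) * xq        # map to [a, b]
        w = 0.5 * (b - a) * wq
        phi = np.stack([(b - x) / (b - a),            # nodal P1 basis
                        (x - a) / (b - a)])
        M   = (phi * w) @ phi.T                       # local mass matrix
        rhs = (phi * w) @ f(x)                        # local load vector
        coeffs.append(np.linalg.solve(M, rhs))        # no global coupling
    return np.array(coeffs)

print(l2_project_p1(np.sin, np.linspace(0.0, np.pi, 4)))
\end{verbatim}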
To obtain $P_w^1$ we solve a modified flow equation:
\begin{equation}
(\mathcal{C}_1(P_o^0,P_w^0) \frac{P_w^{1}-P_w^0}{\tau_0}, q_h)_\Omega
+ a(\lambda_w^0 K; P_w^{1},q_h)
= \ell_w(t_1;q_h),\quad \forall q_h\in Q_h.\label{eq:initdiscpb1}
\end{equation}
Once $P_w^1$ is computed, we can solve for $P_o^1$ satisfying
\begin{align}
(\mathcal{C}_3(P_o^0,P_w^0) \frac{P_o^{1}-P_o^0}{\tau_0},q_h)_\Omega
+ a(\lambda_o^0 K; P_o^{1},q_h)
= \ell_o(t_1;q_h)
- (\mathcal{C}_4(P_o^0,P_w^0) \frac{P_w^{1}-P_w^0}{\tau_0},q_h)_\Omega, \quad \forall q_h\in Q_h.
\label{eq:initdiscpb2}
\end{align}
Because $\tau_0$ is chosen to be much smaller than $\tau$, the consistency errors due to the modified equations \eqref{eq:initdiscpb1} and \eqref{eq:initdiscpb2} will be negligible
compared to the numerical errors for all time steps $n\geq 2$.
Finally, to compute the displacement ${\boldsymbol U}^1$, equation~\eqref{eq:discpb3} is used without the stabilization terms. This yields a consistent discretization for the displacement at time step $t_1$.
\begin{align}
c({\boldsymbol U}^{1},{\boldsymbol v}_h) = \ell_{\boldsymbol n}(t_{1};{\boldsymbol v}_h) -b_p(S_w^{1} P_w^{1}+(1-S_w^{1}) P_o^{1},{\boldsymbol v}_h)
, \quad
\forall {\boldsymbol v}_h\in{\boldsymbol V}_h. \label{eq:initdiscpb3}
\end{align}
Define the DG norm for discrete pressures:
\[
\Vert q_h \Vert_{\mathrm{DG}} = \left( \sum_{E\in\mathcal{E}_h} \Vert \nabla q_h\Vert_{L^2(E)}^2
+ \sum_{e\in\Gamma_h\cup\Gamma_{p{\mathrm D}}} h_e^{-1} \Vert [q_h]\Vert_{L^2(e)}^2 \right)^{1/2}, \quad \forall q_h\in Q_h.
\]
A similar norm is defined for vector-valued functions ${\boldsymbol v}_h\in{\boldsymbol V}_h$; it differs only in the set of boundary faces.
\[
\Vert {\boldsymbol v}_h \Vert_{\mathrm{DG}} = \left( \sum_{E\in\mathcal{E}_h} \Vert \nabla {\boldsymbol v}_h\Vert_{L^2(E)}^2
+ \sum_{e\in\Gamma_h\cup\Gamma_{{\boldsymbol n}{\mathrm D}}} h_e^{-1} \Vert [{\boldsymbol v}_h]\Vert_{L^2(e)}^2 \right)^{1/2}, \quad \forall {\boldsymbol v}_h\in {\boldsymbol V}_h.
\]
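A one-dimensional analog of this norm, assuming piecewise-linear data and Dirichlet faces at both ends, can be computed as follows (illustrative only):
\begin{verbatim}
import numpy as np

def dg_norm_1d(nodes, vals, qD_left=0.0, qD_right=0.0):
    """1D analog of the DG norm for piecewise-linear data.

    vals[i] = (left trace, right trace) on element i; Dirichlet faces at
    both ends contribute jumps against the boundary data.
    """
    h = np.diff(nodes)
    grads = (vals[:, 1] - vals[:, 0]) / h      # constant gradient per element
    total = np.sum(grads**2 * h)               # sum_E ||grad q||^2_{L2(E)}
    for i in range(len(h) - 1):                # interior faces: h_e^{-1}[q]^2
        he = 0.5 * (h[i] + h[i + 1])
        total += (vals[i, 1] - vals[i + 1, 0])**2 / he
    total += (vals[0, 0] - qD_left)**2 / h[0]  # Dirichlet boundary faces
    total += (vals[-1, 1] - qD_right)**2 / h[-1]
    return np.sqrt(total)

nodes = np.linspace(0.0, 1.0, 5)
vals = np.array([[0.0, 0.3], [0.35, 0.6], [0.55, 0.8], [0.8, 1.0]])
print(dg_norm_1d(nodes, vals, qD_right=1.0))
\end{verbatim}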
We now recall the coercivity properties for the bilinear forms $a$ and $c$.
\begin{lemma}\label{lem:coerc}
Let $\chi$ be a scalar-valued function bounded below and above by positive constants $C_{\underline{\chi}}$ and $C_{\overline{\chi}}$. If $\epsilon_p = -1$, assume that $\sigma_p$ is sufficiently large. The following holds:
\begin{equation}\label{eq:aPcoer}
\frac12 \Vert q_h \Vert_{\mathrm{DG}}^2 \leq a(\chi;q_h,q_h), \quad \forall q_h\in Q_h.
\end{equation}
In addition, assume the penalty parameter $\sigma_{{\boldsymbol n}}$ is sufficiently large. Then we have
\begin{equation}\label{eq:cUcoer}
\frac12 \Vert {\boldsymbol v}_h \Vert_{{\mathrm D \mathrm G}}^2 \leq c({\boldsymbol v}_h,{\boldsymbol v}_h), \quad \forall {\boldsymbol v}_h\in{\boldsymbol V}_h.
\end{equation}
\end{lemma}
The proof of Lemma~\ref{lem:coerc} is classical and is therefore skipped \cite{Riviere2008}. If $\epsilon_p = -1$, the constant $\sigma_p$ depends on trace constants and the constants $C_{\underline{\chi}}$ and $C_{\overline{\chi}}$. Similarly, the penalty parameter $\sigma_{{\boldsymbol n}}$ depends on trace constants and on the Lam\'e parameters.
Next we show that the discrete equations are solvable under some conditions on the phase mobilities.
\begin{proposition}
Assume that the functions $\lambda_w$ and $\lambda_o$ are bounded below by positive constants.
For any $n\geq 0$, the solutions $(P_w^n, P_o^n, {\boldsymbol U}^n)$ exist and are unique.
\end{proposition}
\begin{proof}
Existence and uniqueness of the initial solutions $(P_w^0, P_o^0, {\boldsymbol U}^0)$ is immediate because of the $L^2$ projection operator.
Regarding the solutions at time $t_1$, since \eqref{eq:initdiscpb1}, \eqref{eq:initdiscpb2}, \eqref{eq:initdiscpb3} are linear problems in finite dimension, it suffices to show uniqueness. The proof is an immediate consequence of the coercivity Lemma~\ref{lem:coerc} and the
non-negative signs of the coefficients $\mathcal{C}_1$ and $\mathcal{C}_3$ (see \eqref{eq:nonneg}). Next we prove existence of solutions
to \eqref{eq:discpb1}-\eqref{eq:discpb3} by also utilizing the fact that these equations are linear with respect to their unknowns. It is thus equivalent to show uniqueness. Fix $n\geq 1$ and assume that $\tilde{P}_w$ is the difference of two solutions to \eqref{eq:discpb1}. We have
\[
(\mathcal{C}_1(P_o^n,P_w^n) \frac{\tilde{P}_w}{\tau}, q_h)_\Omega
+ a(\lambda_w^n K; \tilde{P}_w,q_h) = 0, \quad \forall q_h\in Q_h.
\]
Choosing $q_h = \tilde{P}_w$ in the equation above and using \eqref{eq:aPcoer} and \eqref{eq:nonneg}, we have that $\tilde{P}_w=0$.
Next, we denote by $\tilde{P}_o$ the difference of two solutions to \eqref{eq:discpb2}; it satisfies
\[
(\mathcal{C}_3(P_o^n,P_w^n) \frac{\tilde{P}_o}{\tau},q_h )_\Omega
+ a(\lambda_o^n K; \tilde{P}_o,q_h) = 0, \quad \forall q_h \in Q_h.
\]
Again, by choosing $q_h = \tilde{P}_o$ and using \eqref{eq:aPcoer} and \eqref{eq:nonneg}, we have that $\tilde{P}_o=0$.
Finally, let $\tilde{{\boldsymbol U}}$ be the difference of two solutions to \eqref{eq:discpb3}. It satisfies
\[
c(\tilde{{\boldsymbol U}},{\boldsymbol v}_h)
+\gamma \left(\frac{\tilde{{\boldsymbol U}}}{\tau },{\boldsymbol v}_h\right)_\Omega
=0, \quad \forall {\boldsymbol v}_h\in{\boldsymbol V}_h.
\]
Choosing ${\boldsymbol v}_h = \tilde{{\boldsymbol U}}$ and using \eqref{eq:cUcoer} yields
\[
\frac12 \Vert \tilde{{\boldsymbol U}}\Vert_{{\mathrm D \mathrm G}}^2 + \frac{\gamma}{\tau} \Vert \tilde{{\boldsymbol U}}\Vert_{L^2(\Omega)}^2 \leq 0,
\]
which gives the desired result.
\end{proof}
\section{Numerical Results}
\label{sec:numer}
We first verify the optimal rate of convergence of our proposed numerical method for smooth solutions and then we apply our scheme
to various porous media problems: the McWhorter problem, a non-homogeneous medium with different capillary pressures,
a medium subjected to load, and a medium with highly varying permeability and porosity.
Unless explicitly stated in the text, all examples use the following physical parameters.
\begin{align*}
\mu_w=\mu_o = 0.001 \, \mbox{Pa s}, \, K_w=K_{o}= 10^{10} \, \mbox{Pa}, \\ \lambda = 7142857 \, \mbox{Pa}, \, \mu=1785714 \, \mbox{Pa}, \, K_s=8333333 \, \mbox{Pa}, \\
\phi=0.3, \,\alpha=0.8, \, \epsilon_p = \epsilon_{\boldsymbol n} = -1.
\end{align*}
The linear systems are solved by LU-preconditioned GMRES with an absolute stopping criterion of $10^{-12}$. Most of the problems converged to the desired accuracy in 1 or 2 iterations.
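A possible realization of this solver setup in SciPy is sketched below; the matrix is a stand-in for an assembled DG system, and the exact GMRES keyword names depend on the SciPy version.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in sparse system; in the actual code, A is an assembled DG matrix.
n = 200
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                                  # incomplete LU factors
M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # preconditioner action

# Absolute tolerance 1e-12 (SciPy >= 1.12 keywords; older versions use tol=).
x, info = spla.gmres(A, b, M=M, rtol=1e-12, atol=1e-12)
print(info, np.linalg.norm(b - A @ x))
\end{verbatim}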
\subsection{Convergence Rates}
We employ the method of manufactured solutions to test the convergence rates of our scheme. The exact solution is smooth and defined
by
\[
p_w(x,y,z) = \sin(y)+5, \quad p_o(x,y,z)=\cos(x)+25, \quad {\boldsymbol n}(x,y,z) = (\cos(x),\sin(y),\cos(z+x))^T.
\]
The following physical parameters are chosen: $\phi = 0.3, K=1, \lambda=1,\mu=0.6, K_w=K_o=K_s=10, \alpha=0.9, \lambda_w(s_w) = s_w, \lambda_o(s_w)=1-s_w$ and $p_d =10$.
The computational parameters are $\tau = 1, \tau_0=10^{-2}, \sigma_p = 20, \sigma_{\boldsymbol n}=14$ and $\gamma=10$.
The domain is the unit cube partitioned into tetrahedra. No cut-off operator is applied in this example.
We compute the numerical errors at the final time $T=5$ on a series of uniformly refined meshes.
\[
e_{w} = p_w(T)-P_w^N, \quad e_o = p_o(T)-P_o^N, \quad e_{\boldsymbol n} = {\boldsymbol n}(T)-{\boldsymbol U}^N.
\]
Table~\ref{tab:errorpressure} displays the errors for the phase pressures in the
broken gradient norm and the $L^2$ norm, and the errors for the displacement in the $L^2$ norm. The rates are optimal.
\begin{table}[H]
\centering
\vspace{-0.5em}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$h$ & $\Vert e_w\Vert$ & Rate & $\Vert \nabla_h e_w \Vert $& Rate
& $\Vert e_o\Vert$ & Rate & $\Vert \nabla_h e_o \Vert $& Rate & $\Vert e_{\boldsymbol n} \Vert$ & Rate\\
\hline
1/2 & 5.78e-03 & & 6.89e-02 & & 7.53e-03 & & 1.08e-01 & & 1.16e-02 & \\
1/4 & 1.56e-03 & 1.89 & 3.57e-02 & 0.95 & 2.01e-03 & 1.91 & 5.48e-02 & 0.98 & 3.03e-03 & 1.93\\
1/8 & 4.03e-04 & 1.95 & 1.80e-02 & 0.99 & 5.24e-04 & 1.94 & 2.75e-02 & 0.99 & 7.79e-04 & 1.94\\
\hline
\end{tabular}
\caption{Numerical errors and rates for the numerical approximations of smooth solutions.}
\label{tab:errorpressure}
\end{table}
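The reported rates follow from successive error ratios under uniform refinement; for instance, for the $L^2$ errors of the wetting pressure:
\begin{verbatim}
import numpy as np

h = np.array([1/2, 1/4, 1/8])
e = np.array([5.78e-03, 1.56e-03, 4.03e-04])   # ||e_w|| column of the table

# rate between consecutive meshes: log(e_coarse/e_fine) / log(h_coarse/h_fine)
rates = np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
print(np.round(rates, 2))    # -> [1.89 1.95], matching the table
\end{verbatim}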
\subsection{McWhorter Problem}
The original McWhorter problem simulates counter-current flow in a homogeneous one-dimensional domain.
Because of the quasi-analytical solution developed in
\cite{McwhortherSunada1990}, this benchmark problem is ideal for evaluating the accuracy of a numerical scheme.
The fluid phases are incompressible, which means that the inverse of the bulk modulus for each phase is set to zero.
The entry pressure (see \eqref{eq:pc}) is $p_d=5000$ Pa.
For this problem, the Biot-Willis constant is set equal to $1$ and the permeability is $K=10^{-10}$.
We solve this problem in a thin slab $[0,2.6]\times [0,0.065]\times [0,0.0325]$ partitioned into 160 cubes of side $h=0.0325$; each
cube is then divided into 6 tetrahedra.
The computational parameters are:
\[
\tau=1 \mbox{ s}, \quad \tau_0 = 0.01 \mbox{ s}, \quad \sigma_p = 400, \quad \sigma_{\boldsymbol n} = 1000, \quad \gamma = 10^5, \quad T=5000 \mbox{ s}.
\]
Initially, the pressures are $p_w^0 = 184000$ Pa and $p_o^0 = 234000$ Pa, which implies that the initial saturation in the domain is $s_w^0 = 0.01$. The Dirichlet boundary is the left vertical boundary $\{0\}\times [0,0.065]\times [0,0.0325]$. Dirichlet data are selected such that the wetting phase saturation is equal to $0.99$ on that boundary. This means that
$p_{w{\mathrm D}} = 194970$ Pa and $p_{o{\mathrm D}} = 200000$ Pa.
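These pressure data are consistent with the stated saturations under the illustrative Brooks--Corey law $p_c(s_w)=p_d\,s_w^{-1/2}$ assumed earlier (the small discrepancy for $s_w=0.99$ is due to rounding in the quoted Dirichlet values):
\begin{verbatim}
p_d = 5000.0                      # entry pressure (Pa)
pc = lambda s: p_d * s ** -0.5    # assumed Brooks-Corey capillary pressure

print(pc(0.01))   # 50000.0 Pa = 234000 - 184000 (initial pressure gap)
print(pc(0.99))   # ~5025 Pa, close to 200000 - 194970 = 5030 (Dirichlet gap)
\end{verbatim}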
No flow is imposed on the remainder of the boundary: $g_w = g_o = 0$.
Zero displacement is prescribed on both left and right vertical boundaries and no traction ($ {\boldsymbol g}_{\boldsymbol n} = {\bf 0}$) is prescribed on the remainder of the boundary.
\[
{\boldsymbol n}_{\mathrm D} = {\bf 0} \quad \mbox{on} \quad \{0\}\times [0,0.065]\times [0,0.0325] \cup \{2.6\} \times [0,0.065]\times [0,0.0325].
\]
The saturation profiles at different times are plotted in Fig.~\ref{fig:mcwhorter_sat_profile_smin001h00325}. We observe that the numerical solution coincides with the analytical solution.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\linewidth]{Figures/Mcwhorter_h00325_x26_sat.png}
\caption{McWhorter problem: wetting phase saturation profiles at five selected time steps.}
\label{fig:mcwhorter_sat_profile_smin001h00325}
\end{figure}
In Fig.~\ref{fig:mcwhorter_displacement}, we compare the numerical displacement obtained with our method with the numerical displacement
obtained by a finite volume discretization in \cite{asadi2015comparison} at $t=1000$s. Because there are no external forces, changes in the displacement are caused by changes in the pressures. We observe a good agreement between the two solutions.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{Figures/Mcwhorter_h00325_x26_disp_1.png}
\caption{McWhorter problem: displacement at $t=1000$s.}
\label{fig:mcwhorter_displacement}
\end{figure}
\subsection{Porous Medium with Heterogeneous Inclusions}
This example considers a porous medium with two rock types with different permeability and
entry pressure in each rock. The domain $\Omega = [0,100]\times [0,100]\times [0,2.5]$ (m$^3$) contains
two box-shape inclusions $[20,40]\times[50,70]\times[0,2.5]$ (m$^3$) and $[50,90]\times[20,50]\times[0,2.5]$ (m$^3$) (see Fig.~\ref{fig:2blocksetup}). The permeability and entry pressure for rock type 1 (resp. type 2) are denoted by $K_1$ and $p_{d1}$ (resp. $K_2$ and $p_{d2}$).
We consider two cases:
\begin{align*}
\mbox{Case 1:} & \quad K_1 = 4.2\times 10^{-11}, \, p_{d1} =\sqrt{2}p_{d2}, \, K_2 = 2 K_1, \, p_{d2} = 5000,\\
\mbox{Case 2:} & \quad K_1 = 8.4\times 10^{-11}, \, p_{d1} = 5000, \, K_2 = K_1/2, \, p_{d2} =\sqrt{2}p_{d1}.
\end{align*}
\vspace{-1em}
\begin{figure}[H]
\subfigure[top view \label{fig:2blocksetuptop}]{
\includegraphics[width=0.32\linewidth]{Figures/two_rock_types/blocks_domain.png}}
\subfigure[flow BCs \label{fig:2blockBCflow}]{
\includegraphics[width=0.32\linewidth]{Figures/two_rock_types/blocks_flow_bc.png}}
\subfigure[geomechanics BCs \label{fig:2blockBCdisp}]{
\includegraphics[width=0.32\linewidth]{Figures/two_rock_types/blocks_disp_bc.png}}
\caption{Domain with two inclusions: top view and set-up of boundary conditions for flow and geomechanics. }
\label{fig:2blocksetup}
\end{figure}
The initial non-wetting phase pressure is $p_o^0 = 200000$ Pa and the initial wetting phase pressure is chosen so that the initial wetting phase saturations in the areas of rock type 1 and rock type 2 are 0.1 and 0.05 respectively. Dirichlet data are selected such that the wetting phase saturation is equal to $1.0$ on the left side $\{0\}\times [0,100]\times [0,2.5]$;
this means that $p_{w{\mathrm D}} = 195000$ Pa and $p_{o{\mathrm D}} = 200000$ Pa on that side.
No flow is imposed on the remainder of the boundary: $g_w = g_o = 0$.
Zero displacement is prescribed on both left and right sides and no traction ($ {\boldsymbol g}_{\boldsymbol n} = {\bf 0}$) is prescribed on the remainder of the boundary.
The domain is partitioned into $9600$ tetrahedra. The computational parameters are:
\begin{equation}
\tau=5 \mbox{ days}, \quad \tau_0 = 0.05\mbox{ days}, \quad \sigma_p = 800, \quad \sigma_{\boldsymbol n} = 800, \quad \gamma = 10^5,\quad T=1000 \mbox{ days}.
\label{eq:comppar}
\end{equation}
First, we simulate flow for Case 1.
Fig.~\ref{fig:blocks_case1_sat} shows the wetting phase saturation contours at 50, 125, 250, 375, 500 and 1000 days.
The saturation front avoids the inclusions that have lower permeability, as expected. As the wetting phase floods the medium, deformations occur; for better visualization the displacement components are scaled by $1200$.
\begin{figure}[H]
\vspace{-0.5em}
\centering
\subfigure[$t=50$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/saturation/saturation_50days.png}}
\subfigure[$t=125$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/saturation/saturation_125days.png}}
\subfigure[$t=250$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/saturation/saturation_250days.png}}\\
\vspace{-0.5em}
\subfigure[$t=375$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/saturation/saturation_375days.png}}
\subfigure[$t=500$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/saturation/saturation_500days.png}}
\subfigure[$t=1000$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/saturation/saturation_1000days.png}}
\caption{Heterogeneous inclusions problem for Case 1: wetting phase saturation contours at $t=50, 125, 250, 375, 500$ and $1000$ days.}
\label{fig:blocks_case1_sat}
\end{figure}
Profiles of the saturation front are plotted along two horizontal lines $y=35$m and $y=60$m in the plane $z=2.5$m for different times in Fig.~\ref{fig:block_case1_sat_linechart}.
We observe that the saturation is discontinuous at the interface between the two types of rock. The discontinuity is due to the capillary pressure function switching to another curve, as shown in Figure \ref{fig:blocks_case1_cap_pres}. This is attributed to the fact that the entry pressures are discontinuous: the entry pressure in rock of type $2$ is smaller than the entry pressure in rock of type $1$. We note that the threshold saturation $S_w^\ast \approx 0.84$, defined by $p_{c1}(S_w^\ast)=p_{c2}(1)$, is larger than the saturation in rock 2, $S_{w2}$, and less than the saturation in rock 1, $S_{w1}$; therefore the phase pressure is continuous across the interface. Figure \ref{fig:blocks_case1_pres} shows the wetting phase pressure solutions at different times. The inclusions impact the pressure contours: even though the permeability in rock 2 is twice the permeability in rock 1, the wetting phase saturation is smaller in rock 2, which yields a smaller wetting phase relative permeability.
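The threshold saturation can be obtained by a scalar root solve; the two capillary curves in the sketch below are hypothetical stand-ins, tuned only so that $S_w^\ast\approx 0.84$, the actual curves being those of Fig.~\ref{fig:blocks_case1_cap_pres}.
\begin{verbatim}
from scipy.optimize import brentq

# Hypothetical stand-in capillary curves for the two rock types, tuned so
# that S* ~ 0.84; the actual curves are those plotted in the paper.
pc1 = lambda s: 4387.0 * s ** -0.75
pc2 = lambda s: 5000.0 * s ** -0.5

# Threshold saturation S*: pc1(S*) = pc2(1).
s_star = brentq(lambda s: pc1(s) - pc2(1.0), 1e-3, 1.0)
print(round(s_star, 2))    # -> 0.84
\end{verbatim}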
\begin{figure}[H]
\centering
\subfigure{\includegraphics[width=0.45\linewidth]{Figures/two_rock_types/case_b/line_chart/blocks_case_1_y35m_sat_paper.png}}
\subfigure{\includegraphics[width=0.45\linewidth]{Figures/two_rock_types/case_b/line_chart/blocks_case_1_y60m_sat_paper.png}}
\caption{Heterogeneous inclusions problem for Case 1: wetting phase saturation profiles along $y=35$ m (left) and $y=60$m (right) at selected times.}
\label{fig:block_case1_sat_linechart}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{Figures/two_rock_types/cap_pres_1.png}
\caption{Heterogeneous inclusions problem for Case 1: capillary pressure functions for the two rocks. }
\label{fig:blocks_case1_cap_pres}
\end{figure}
Since rock type 2 has a lower entry pressure, less non-wetting phase is displaced by the wetting phase and the wetting phase saturation value lags behind in the region of rock type 2. Overall, the magnitude of displacement in the area of rock type 2 is smaller than in surrounding areas. Figure \ref{fig:blocks_case1_displacement} shows the magnitude of the displacement at different times.
Before the wetting phase front reaches the right boundary, we first observe a significant displacement in the x-direction compared to the y- and z-directions. More wetting phase passes through the area of rock type 1, where the medium is stretched in the positive x-direction along with the flow. Meanwhile, the displacements in the y- and z-directions increase perpendicular to the flow direction. This can be seen in the contraction of the medium along the y-axis when the wetting phase enters the domain, and again when the region between the two blocks is stretched. The area close to the right boundary is squeezed in the x-direction, which increases the displacement in the y- and z-directions until it recedes due to the zero displacement boundary condition on the right side.
\begin{figure}[H]
\centering
\subfigure[$t=50$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/pressure/pressure_50days.png}}
\subfigure[$t=125$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/pressure/pressure_125days.png}}
\subfigure[$t=250$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/pressure/pressure_250days.png}}\\
\subfigure[$t=375$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/pressure/pressure_375days.png}}
\subfigure[$t=500$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/pressure/pressure_500days.png}}
\subfigure[$t=1000$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/pressure/pressure_1000days.png}}
\caption{Heterogeneous inclusions problem for Case 1: wetting phase pressure contours at $t=50, 125, 250, 375, 500$ and $1000$ days.}
\label{fig:blocks_case1_pres}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[$t=50$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/disp_magnitude/disp_mag_50days.png}}
\subfigure[$t=125$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/disp_magnitude/disp_mag_125days.png}}
\subfigure[$t=250$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/disp_magnitude/disp_mag_250days.png}}\\
\subfigure[$t=375$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/disp_magnitude/disp_mag_375days.png}}
\subfigure[$t=500$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/disp_magnitude/disp_mag_500days.png}}
\subfigure[$t=1000$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_b/disp_magnitude/disp_mag_1000days.png}}
\caption{Heterogeneous inclusions problem for Case 1: magnitude of displacement at $t=50, 125, 250, 375, 500$ and $1000$ days.}
\label{fig:blocks_case1_displacement}
\end{figure}
In the next experiments, we consider Case 2 where the rock properties are switched compared to Case 1.
Initially, the wetting and non-wetting phase pressures are constant ($p_o^0=200000$ Pa) and the initial wetting phase saturation
in the areas of rock type 1 and rock type 2 are 0.1 and 0.2 respectively.
The saturation contours and profiles are shown in
Fig.~\ref{fig:blocks_case2_sat}
and Fig.~\ref{fig:block_case2_sat_linechart} respectively.
Since the saturation in the area of rock type 1, $S_{w1}$, is less than the threshold saturation $S^*_{w}$ (see Fig.~\ref{fig:blocks_case2_cap_pres}), the phase pressure is continuous across the interface.
Wetting phase pressure and magnitude of displacement are
presented in Fig.~\ref{fig:blocks_case2_pres} and Fig.~\ref{fig:blocks_case2_displacement} respectively.
As seen in Figure \ref{fig:blocks_case2_pres}, the wetting phase pressure propagates faster in the area of rock type 2 than in the area of rock type 1 due to the higher initial wetting phase saturation. The higher wetting phase saturation indicates that more wetting phase enters the rock type 2 region (see Fig.~\ref{fig:blocks_case2_sat}). This leads to a significant displacement of rock type 2 in the x- and y-directions.
Finally we remark that in the z-direction, the regions of rock 2 contract for Case 1 whereas they expand for Case 2
(see Fig.~\ref{fig:3dzdirection}).
\begin{figure}[H]
\centering
\subfigure[$t=50$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/saturation/saturation_50days.png}}
\subfigure[$t=125$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/saturation/saturation_125days.png}}
\subfigure[$t=250$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/saturation/saturation_250days.png}}\\
\subfigure[$t=375$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/saturation/saturation_375days.png}}
\subfigure[$t=500$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/saturation/saturation_500days.png}}
\subfigure[$t=1000$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/saturation/saturation_1000days.png}}
\caption{Heterogeneous inclusions problem for Case 2: wetting phase saturation contours at $t=50, 125, 250, 375, 500$ and $1000$ days.}
\label{fig:blocks_case2_sat}
\end{figure}
\begin{figure}[H]
\centering
\subfigure{\includegraphics[width=0.45\linewidth]{Figures/two_rock_types/case_c/line_chart/blocks_case_2_y35m_sat_paper.png}}
\subfigure{\includegraphics[width=0.45\linewidth]{Figures/two_rock_types/case_c/line_chart/blocks_case_2_y60m_sat_paper.png}}
\caption{Heterogeneous inclusions problem for Case 2: wetting phase saturation profiles along $y=35$ m (left) and $y=60$m (right) at selected times.}
\label{fig:block_case2_sat_linechart}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{Figures/two_rock_types/cap_pres_2.png}
\caption{Heterogeneous inclusions problem for Case 2: capillary pressure functions for two rocks.}
\label{fig:blocks_case2_cap_pres}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[$t=50$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/pressure/pressure_50days.png}}
\subfigure[$t=125$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/pressure/pressure_125days.png}}
\subfigure[$t=250$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/pressure/pressure_250days.png}}\\
\subfigure[$t=375$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/pressure/pressure_375days.png}}
\subfigure[$t=500$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/pressure/pressure_500days.png}}
\subfigure[$t=1000$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/pressure/pressure_1000days.png}}
\caption{Heterogeneous inclusions problem for Case 2: wetting phase pressure contours at $t=50, 125, 250, 375, 500$ and $1000$ days.}
\label{fig:blocks_case2_pres}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[$t=50$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/disp_magnitude/disp_mag_50days.png}}
\subfigure[$t=125$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/disp_magnitude/disp_mag_125days.png}}
\subfigure[$t=250$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/disp_magnitude/disp_mag_250days.png}}\\
\subfigure[$t=375$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/disp_magnitude/disp_mag_375days.png}}
\subfigure[$t=500$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/disp_magnitude/disp_mag_500days.png}}
\subfigure[$t=1000$]{\includegraphics[width=0.29\linewidth]{Figures/two_rock_types/case_c/disp_magnitude/disp_mag_1000days.png}}
\caption{Heterogeneous inclusions problem for Case 2: magnitude of displacement at $t=50, 125, 250, 375, 500$ and $1000$ days.}
\label{fig:blocks_case2_displacement}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[Case 1]{\includegraphics[width=2.9in]{Figures/two_rock_types/case_b/disp_y35/disp_y35m_500days2.png}}
\subfigure[Case 2]{\includegraphics[width=2.9in]{Figures/two_rock_types/case_c/disp_y35/disp_y35m_500days2.png}}
\caption{Heterogeneous inclusions problem: 3D views of a cross-section of the domain along the line $y=35$m. Contours correspond to
the wetting phase saturation at $t=500$ days.}
\label{fig:3dzdirection}
\end{figure}
\subsection{Porous Medium Subjected to Load}
The numerical examples in this section show the impact of loading on the wetting phase propagation in the medium as it undergoes deformations.
The domain $\Omega = [0,100]\times [0,100]\times [0,5]$ m$^3$ is partitioned into $2400$ tetrahedra.
Boundary conditions for flow and displacement are described in Fig.~\ref{fig:loading_traction_y_setup}. Dirichlet data
is prescribed for the pressures ($p_{w{\mathrm D}} = 195000$ Pa and $p_{o{\mathrm D}} = 200000$ Pa) on the left side of the boundary and no flow
is imposed on the remainder of the boundary. Two different loading scenarios are considered: first, a non-zero traction boundary condition in the $y$-direction is imposed on the top side (${\boldsymbol g}_{\boldsymbol n} = (0,-r,0)$); this case is referred to as $y-$load. Second, a load is imposed in the $x$-direction on the left side of the domain (${\boldsymbol g}_{\boldsymbol n} = (r,0,0)$); this case is referred to as $x-$load. In both cases, the bottom side is fixed, with a zero Dirichlet boundary condition for the displacement. Zero traction is imposed on the remainder of the boundary. The load increases linearly in time:
\[
r(t) = 50000 \frac{t}{T}.
\]
\begin{figure}[H]
\subfigure[flow BCs \label{fig:loadingBCflow}]{
\includegraphics[width=0.3\linewidth]{Figures/loading/loading_flow_bc.png}}
\subfigure[$y$-load BCs \label{fig:loadingBCdisp-y}]{
\includegraphics[width=0.3\linewidth]{Figures/loading/loading_top_disp_bc.png}}
\subfigure[$x$-load BCs \label{fig:loadingBCdisp-x}]{
\includegraphics[width=0.3\linewidth]{Figures/loading/loading_left_disp_bc.png}}
\caption{Set-up of boundary conditions for flow and geomechanics.}
\label{fig:loading_traction_y_setup}
\end{figure}
The following physical parameters are used:
\[
K_w=K_{o}= 10^{4} \, \mbox{Pa}, \quad K=8.0\times 10^{-11} \mbox{ m}^2, \quad \lambda=\mu=4\times 10^5 \, \mbox{Pa},\quad K_s=666666 \, \mbox{Pa}.
\]
We choose smaller values for the bulk moduli to show the impact of the loading on fluid and solid phases.
The final time is $T=500$ days and the other computational parameters are
as in \eqref{eq:comppar}.
We first show the contours for wetting phase saturation and pressure at $250$, $375$ and $500$ days in Fig.~\ref{fig:yloadsatpres}
for the case of vertical load.
As the load increases, the domain is compressed in the $y-$direction as expected and slightly expanded in the $x-$direction. Even though the pressure gradient is mostly in the $x-$direction, the deformation of the medium creates a small pressure gradient
in the $y-$direction near the load boundary. The wetting phase floods the top part of the domain more slowly than the bottom part.
\begin{figure}[H]
\centering
\subfigure[$S_w, \, t=250$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_top_fix_bot/sat_250days.png}}
\subfigure[$S_w, \, t=375$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_top_fix_bot/sat_375days.png}}
\subfigure[$S_w, \, t=500$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_top_fix_bot/sat_500days.png}}\\
\subfigure[$P_w, \, t=250$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_top_fix_bot/pres_250days.png}}
\subfigure[$P_w, \, t=375$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_top_fix_bot/pres_375days.png}}
\subfigure[$P_w, \, t=500$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_top_fix_bot/pres_500days.png}}
\caption{Case of $y$-load: wetting phase saturation and pressure contours at different times.}
\label{fig:yloadsatpres}
\end{figure}
To better see this, we extract the saturation profiles at $250, 375$ and $500$ days along three horizontal lines (see Fig.~\ref{fig:yload-profiles3D}). The location of the front is also indicated in the figure. Near the top side of the domain, the saturation front is lagging behind by ten meters.
\begin{figure}[H]
\centering
\subfigure{\includegraphics[width=0.75\linewidth]{Figures/loading/traction_top_fix_bot/3d_yy_sat_comparision.png}}
\caption{Case of $y$-load: wetting phase saturation profiles along $y=0$, $50$ and $100$ m at $t=250$, $375$ and $500$ days.}
\label{fig:yload-profiles3D}
\end{figure}
Next, we show the saturation and pressure contours for the case of $x-$load in Fig.~\ref{fig:xloadsatpres}.
\begin{figure}[H]
\centering
\subfigure[$S_w, \, t=250$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_left_fix_bot/sat_250days.png}}
\subfigure[$S_w, \, t=375$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_left_fix_bot/sat_375days.png}}
\subfigure[$S_w, \, t=500$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_left_fix_bot/sat_500days.png}}\\
\subfigure[$P_w, \, t=250$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_left_fix_bot/pres_250days.png}}
\subfigure[$P_w, \, t=375$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_left_fix_bot/pres_375days.png}}
\subfigure[$P_w, \, t=500$]{\includegraphics[height=0.25\linewidth,keepaspectratio]{Figures/loading/traction_left_fix_bot/pres_500days.png}}
\caption{Case of $x$-load: wetting phase saturation and pressure contours at different times.}
\label{fig:xloadsatpres}
\end{figure}
In this loading scenario, the deformation of the medium is mostly in the $x-$direction, with the top part of the domain deforming the most because of the constraint of zero displacement at the bottom side. We also observe that the displacement of the domain is in the same direction as the propagation of the wetting phase saturation. This yields a faster saturation front in the top part of the domain. Fig.~\ref{fig:xload-profiles3D} shows the saturation profiles along three horizontal lines. After $500$ days, the saturation front at the top side reaches about 97 meters, which is 3 and 8 meters further than at the other two locations.
\begin{figure}[H]
\centering
\subfigure{\includegraphics[width=0.75\linewidth]{Figures/loading/traction_left_fix_bot/3d_xy_sat_comparision.png}}
\caption{Case of $x$-load: wetting phase saturation profiles along $y=0$, $50$ and $100$ m at $t=250$, $375$ and $500$ days.}
\label{fig:xload-profiles3D}
\end{figure}
For a better comparison between these two types of loading, we show the contours of the $x-$ and $y-$ components of the displacement
at the final time in Fig.~\ref{fig:xydispT}. Under the $y-$load, the medium is compressed vertically and stretched horizontally whereas
under the $x-$load, the medium deforms mostly along the direction of the flow except for the fixed bottom boundary.
\begin{figure}
\centering
\subfigure[$y$-load: $U_x$]{\includegraphics[height=0.22\textheight,keepaspectratio]{Figures/loading/traction_top_fix_bot/disp_x_500days.png}}
\subfigure[$y$-load: $U_y$]{\includegraphics[height=0.22\textheight,keepaspectratio]{Figures/loading/traction_top_fix_bot/disp_y_500days.png}}\\
\subfigure[$x$-load: $U_x$]{\includegraphics[height=0.22\textheight,keepaspectratio]{Figures/loading/traction_left_fix_bot/disp_x_500days.png}}
\subfigure[$x$-load: $U_y$]{\includegraphics[height=0.22\textheight,keepaspectratio]{Figures/loading/traction_left_fix_bot/disp_y_500days.png}}
\caption{Contours of $x$ and $y$ components of displacement at $500$ days.}
\label{fig:xydispT}
\end{figure}
Finally, we compare the effect of no loading versus loading for both $y-$ and $x-$loads.
To be precise, no loading means that a zero
traction boundary condition (${\boldsymbol g}_{\boldsymbol n} = {\bf 0}$) is prescribed on the boundary except for the bottom boundary, where zero displacement is imposed.
Fig.~\ref{fig:loadvnoload} shows the wetting phase saturation profiles extracted along the top and bottom sides at $250$, $375$ and $500$ days.
On the top boundary, we observe that the saturation front advances faster in the $x-$load than in the zero traction case and the $y-$load
yields the slowest saturation front. This is expected since the loading direction for the $x-$load is the same as the flow direction.
On the bottom boundary, overall there are fewer differences between the profiles for the three loading scenarios because of the zero displacement constraint. This figure shows the impact of the nonlinearities in the problem on the fluid propagation.
\begin{figure}
\subfigure[Along $y=100$ m]{\includegraphics[width=0.5 \linewidth]{Figures/loading/y100m_sat.png}}
\subfigure[Along $y=0$ m]{\includegraphics[width=0.5 \linewidth]{Figures/loading/y0m_sat.png}}
\caption{Wetting phase saturation profiles extracted along $y=100$m and $y=0$m at three different times: $250$, $375$ and $500$ days and
for different loading scenarios.}
\label{fig:loadvnoload}
\end{figure}
\subsection{Highly Heterogeneous Medium}
We apply the method to a porous medium where both porosity and permeability vary in space.
The medium exhibits regions of high permeability (channels) surrounded by regions of low permeability and lower porosity.
This example demonstrates the capability of the proposed method to handle large variations in permeability.
The domain $[0,80]\times[0,80] \times [0,7.5]$ consists of three stacked horizontal layers of height $2.5$ m. The mesh contains $18432$ tetrahedra.
The porosity field for the three layers is shown in Fig.~\ref{fig:speporosity} and the permeability field in logarithmic scale is shown in Fig.~\ref{fig:speperm}. The data are extracted from the SPE10 porosity and permeability fields; they correspond to a section of layers 43, 44 and 45 in the SPE10 model \cite{SPE10reference}.
Dirichlet data is prescribed for the pressures ($p_{w{\mathrm D}} = 1950000$ Pa and $p_{o{\mathrm D}} = 2000000$ Pa) on the left side of the boundary and no flow is imposed on the remainder of the boundary. The entry pressure is $p_d=50000$ Pa.
The computational parameters are:
\begin{equation}
\tau=20 \mbox{ days}, \quad \tau_0 = 0.2\mbox{ days}, \quad \sigma_p = 800, \quad \sigma_{\boldsymbol n} = 800, \quad \gamma = 10^5,\quad T=4000 \mbox{ days}.
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/porosity_z0_5m43_tstep_180_900days.png}
\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/porosity_z3_75m44_tstep_180_900days.png}
\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/porosity_z7m45_tstep_180_900days.png}
\caption{Heterogeneous medium: porosity field for bottom layer (left), middle layer (center) and top layer (right).}
\label{fig:speporosity}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/perm_z0_5m43_tstep_180_900days.png}
\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/perm_z3_75m44_tstep_180_900days.png}
\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/perm_z7m45_tstep_180_900days.png}
\caption{Heterogeneous medium: permeability field in log scale for bottom layer (left), middle layer (center) and top layer (right).}
\label{fig:speperm}
\end{figure}
Fig.~\ref{fig:3Dsat} shows the wetting phase saturation in the three-dimensional domain at time $t= 1000$ days; only values of the saturation above 0.21 are shown. We observe a non-uniform saturation front. The deformation of the domain is magnified by a scaling factor of $100$ for visualization.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{Figures/SPE10/figures/sat_tstep10_dt20days_scale100_sat021.png}
\caption{Two-phase flow in highly heterogeneous medium. Threshold plot of wetting phase saturation where the value is greater than 0.21 at $t=1000$ days, displacement scaled up by 100 for visualization.}
\label{fig:3Dsat}
\end{figure}
The wetting phase saturation and pressure at $4000$ days are shown in each of the three layers in Fig.~\ref{fig:spelayersat}.
For visualization purposes, each component of the numerical approximation of the displacement has been scaled by $100$.
Due to the heterogeneous permeability and porosity, we observe differences in the pressure and saturation contours at each layer.
This simulation shows the effect of three-dimensional heterogeneities in the propagation of the wetting phase through the medium.
\begin{figure}[H]
\centering
\subfigure[$S_w$, top layer]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/43_sat_tstep200_dt20days_scale100.png}}
\subfigure[$P_w$, top layer]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/43_pres_tstep200_dt20days_scale100.png}}\\
\subfigure[$S_w$, middle layer]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/44_sat_tstep200_dt20days_scale100.png}}
\subfigure[$P_w$, middle layer]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/44_pres_tstep200_dt20days_scale100.png}}\\
\subfigure[$S_w$, bottom layer]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_sat_tstep200_dt20days_scale100.png}}
\subfigure[$P_w$, bottom layer]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_pres_tstep200_dt20days_scale100.png}}
\caption{Two-phase flow in highly heterogeneous medium. Left column: wetting phase saturation in the three layers. Right column: wetting phase pressure, at t=4000 days.}
\label{fig:spelayersat}
\end{figure}
The contours for the x-, y-, and z-components of the displacement are shown in Fig.~\ref{fig:spe10disp}. The displacement is five
times larger in the flow direction, which is consistent with the choice of the boundary conditions. Because of the coupling between flow
and geomechanics, the displacement components vary in time as the medium is flooded by the wetting phase.
\begin{figure}[H]
\centering
\subfigure[$U_x, t=500$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_x_tstep25_dt20days_scale100.png}}
\subfigure[$U_y, t=500$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_y_tstep25_dt20days_scale100.png}}
\subfigure[$U_z, t=500$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_z_tstep25_dt20days_scale100.png}}\\
\subfigure[$U_x, t=2000$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_x_tstep100_dt20days_scale100.png}}
\subfigure[$U_y, t=2000$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_y_tstep100_dt20days_scale100.png}}
\subfigure[$U_z, t=2000$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_z_tstep100_dt20days_scale100.png}}\\
\subfigure[$U_x, t=4000$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_x_tstep200_dt20days_scale100.png}}
\subfigure[$U_y, t=4000$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_y_tstep200_dt20days_scale100.png}}
\subfigure[$U_z, t=4000$]{\includegraphics[width=0.32\linewidth]{Figures/SPE10/figures/45_disp_z_tstep200_dt20days_scale100.png}}
\caption{Two-phase flow in highly heterogeneous medium. Contours of displacement components in top layer at different times:
x-component (left column), y-component (center column) and z-component (right column).}
\label{fig:spe10disp}
\end{figure}
\section{Conclusions}
We have presented an accurate and robust numerical method for solving the coupled two-phase flow and geomechanics equations in porous media.
The method is sequentially implicit, therefore computationally less expensive than a fully implicit scheme. The sequential scheme is stable due to
stabilization terms added to the displacement equation. The method is validated on three-dimensional benchmark problems and the numerical results confirm
the stability, robustness and accuracy of the proposed scheme for various heterogeneous porous media.
\section{Introduction}
The development of femtosecond laser pumps has led to new probes of complex metals whereby systems are
driven out of equilibrium with the aim to study their relaxation dynamics~\cite{giannetti2016,bovensiepen2007,smallwood2016}.
Simultaneously, pump-probe
setups allow the fascinating possibility to study phenomena that have no analog in equilibrium physics,
such as the transient excitation of coherent optical phonons~\cite{silvestri1985,chen1990,cho1990,ishioka2009}.
A ``coherent'' phonon is excited when the
relevant atoms of the crystalline solid, which are macroscopic in number,
vibrate with \emph{identical} frequency and phase [see Fig.~\ref{fig1}(a)]. This is to be contrasted with
incoherent motion triggered by quantum and thermal fluctuations in equilibrium where, from atom to atom,
the frequencies and phases are uncorrelated.
More recently it has been recognized that the
physics of Floquet dynamics can be made experimentally accessible via coherent phonon
excitations~\cite{hubener2018,oka2018}.
Experimentally, a typical signature of a
coherent phonon excitation is an oscillatory signal on a decaying background
in time-resolved spectroscopic probes such as x-ray spectroscopy, photoemission and reflectivity
measurements.
Coherent phonons have been studied in a variety of materials that include
semiconductors~\cite{hunsche1995,riffe2007,bothschafter2013},
semimetals~\cite{hase1996,decamp2001,hase2002,misochko2004,fritz2007,johnson2008,papalazarou2012},
transition metals~\cite{hase2005},
Cu-based~\cite{chwalek1990,albrecht1992,bozovic2004,novelli2017}
and the Fe-based~\cite{mansart2009,mansart2010,torchinsky2011,kim2012,avigo2013,yang2014,rettig2015,gerber2015,gerber2017}
high temperature superconductors,
charge density wave systems~\cite{kenji1998,demsar1999,toda2004,schaefer2014,schmitt2011},
as well as Mott~\cite{perfetti2008,mansart-mott-2010,mankowsky2015,lee2017} and
topological~\cite{a-q-wu2008,kamaraju2010,misochko2015} insulators.
On the theory side, this phenomenon is usually described either as displacive excitation of coherent phonons
(DECP)~\cite{zeiger1992,kuznetsov1994} or as impulsive stimulated Raman scattering (ISRS)~\cite{garrett1996,merlin1997}.
In the former mechanism
photoexcitation leads to a shift in the equilibrium position of the phonon~\cite{zeiger1992,kuznetsov1994},
while in the latter the electromagnetic
radiation provides a short impulsive force to the atoms~\cite{garrett1996,merlin1997}.
Note that if the photoexcitation does not involve crossing phase boundaries,
then typically only the fully symmetric Raman $A_{1g}$ phonon is excited in DECP.
It has been argued that in an absorbing medium these two mechanisms
are not distinct~\cite{stevens2002}.
Using the above concepts, first-principles calculations have been successfully applied to understand coherent phonon
dynamics in a variety of systems~\cite{mazin1994,tangney1999,tangney2002,murray2005,shinohara2010}.
The purpose of this work is to develop, within the conceptual framework of DECP, a microscopic Hamiltonian-based
description of coherent phonons in an environment where the timescale for the photoexcited carriers to thermalize
is rather short, such as a metal with gapless charge excitations.
Here we focus on coherent phonon excitation driven by laser heating of carriers,
a phenomenon which is relevant experimentally, but which has received less attention theoretically.
As we show below, the
microscopic formulation provides a better treatment of electron-phonon interaction compared to the phenomenological
model that is currently used to analyze experimental data~\cite{zeiger1992}.
In particular, our theory captures how the coherent phonon excitation
modifies the electronic fluid, and how this modification feeds back on the coherent phonon dynamics.
\begin{figure}[!!b]
\begin{center}
\includegraphics[width=1.0\linewidth,trim=0 0 0 0]{Fig1.jpg}
\caption{
(color online) (a) Sketch of an $A_{1g}$ coherent phonon motion in a two-atom (red balls) unit cell. A macroscopic
number of atoms oscillate with \emph{identical} phase and frequency $\omega_0$. Green arrows indicate the
instantaneous velocities at two instants. The motion preserves the point group symmetry.
(b)
The effect of the laser pump is idealized as a temperature quench from a measured base temperature $T_L$ to a high
temperature $T_H$ over a short time set to zero, and the subsequent relaxation of temperature over a time-scale $\tau_e$.
In the theory, $(T_H, \tau_e)$ are phenomenological parameters (see text). The temperature and time scales are
representative.
}
\label{fig1}
\end{center}
\end{figure}
The main advances of our work compared to the phenomenological theory of Zeiger \emph{et al.}~\cite{zeiger1992}
are the following.
(i) Including the lattice feedback effect leads to a richer description of the dynamics. In particular, we show
that at short time scales this leads to
\emph{chirping} or temporal variations of the oscillation frequency, while staying within a harmonic description of
the coherent phonons. On the other hand, at long times the feedback leads to a finite phase in the oscillatory signal.
The origin of this phase is distinct from that in the phenomenological DECP theory~\cite{zeiger1992}, and it is likely
to be dominant quantitatively. Importantly,
the theory \emph{predicts} that the sign of the phase is determined by whether the chirping is red or blue shifted.
(ii) A Hamiltonian formulation opens the possibility of extracting microscopic equilibrium information from coherent phonon studies.
(iii) The microscopic formulation can be refined systematically using many-body methods to deal with
various interaction effects.
The paper is organized as follows. In Sec.~\ref{Model}, we introduce the microscopic model,
we discuss the rationale for treating the effect of the pump as a quench of the electronic temperature,
and we derive the equation of motion of the coherent phonon using Heisenberg equation of motion.
In Sec.~\ref{Sec_results}, we solve the above equation, and we discuss our main results,
emphasizing the new physics introduced by taking into account the feedback of the lattice.
In Sec.~\ref{Ba_model}, we apply the theory to BaFe$_2$As$_2$ and we show that the data from a recent time resolved
x-ray study can be successfully described by our theory, using a more constrained fit.
We conclude in Sec. \ref{conclusion}.
\section{Model \& Formalism}\label{Model}
We consider a multiorbital electronic system interacting with a zero wavevector uniform $A_{1g}$ phonon mode.
It is described by the Hamiltonian
\begin{align}
\label{eq:ham}
\mathcal{H} &= \sum_{{\bf k}, a, b, \sigma} \left[ \epsilon({\bf k})_{a b} - \mu \delta_{a b} \right]
c^{\dagger}_{{\bf k} a \sigma} c_{{\bf k} b \sigma} + \mathcal{N} \hbar \omega_0 (b^{\dagger} b + 1/2)
\nonumber \\
&+ \lambda \sum_{{\bf k}, a, b, \sigma} C({\bf k})_{a b}c^{\dagger}_{{\bf k} a \sigma} c_{{\bf k} b \sigma} (b^{\dagger} + b).
\end{align}
$\epsilon({\bf k})_{a b}$ describe the dispersion in an orbital basis, and $\mu$ is the chemical potential.
$c^{\dagger}_{{\bf k} a \sigma}$ and $c_{{\bf k} a \sigma}$ are electron creation
and annihilation operators, respectively, with lattice wavevector ${\bf k}$, orbital index $a$, and spin $\sigma$.
The operators ($b^{\dagger}$, $b$) describe
creation and annihilation operators for the $A_{1g}$ phonon with frequency $\omega_0$, and $\mathcal{N}$ is the total number
of sites.
Electron-phonon interaction is described by $ \lambda C({\bf k})_{a b}$, where $\lambda < 1$ is a
small dimensionless parameter and $C({\bf k})_{ab}$ is of the order of the Fermi energy. Thus, electron-phonon interaction
can be treated perturbatively in orders of $\lambda$. For clarity, we ignore the phonon modes that
are not coherently generated. We also ignore electron-electron and phonon-phonon
interaction. Later, we comment on their effects.
After the pump, the initial dynamics of the system is dominated by light-matter coupling and by electron-electron interactions.
However, as time- and angle-resolved photoemission (tr-ARPES) experiments have
shown~\cite{papalazarou2012,yang2014}, due to electron-electron scattering the
electronic subsystem equilibrates after a time $\tau_r$ of the order of a few tens of femtoseconds. At longer times an instantaneous
electronic temperature $T(t)$ can be defined.
In this work we focus on the regime $t \gg \tau_r$. Accordingly, we assume
$\tau_r \rightarrow 0$, such that the effect of the laser pump can be modeled as inducing a \emph{temperature quench} of the electrons.
We assume that the electronic temperature relaxation is characterized by a timescale $\tau_e$, and is described phenomenologically by
\begin{equation}
\label{eq:tau-T}
T(t) = T_L + (T_H - T_L) \operatorname{e}^{-t/\tau_e},
\end{equation}
where $T_L \equiv T(t=0^-) = T(t \rightarrow \infty)$, and $T_H \equiv T(t=0^+)$ [see Fig.~\ref{fig1}(b)].
The dimensionless mean atomic displacement $u \equiv \langle b + b^{\dagger} \rangle$ follows the
equation of motion $\left( \partial_t^2 + \omega_0^2 \right) u = F(t)$, where the out-of-equilibrium force is
\[
F(t) = - \frac{2 \omega_0}{\mathcal{N}} \lambda \sum_{{\bf k}, a, b, \sigma} C({\bf k})_{a b}
\langle c^{\dagger}_{{\bf k} a \sigma} (t) c_{{\bf k} b \sigma} (t) \rangle_{\mathcal{H}, T(t)}.
\]
Here $\langle X \rangle_{\mathcal{H}, T(t)} \equiv {\rm Tr} [\rho X]/{\rm Tr} [\rho]$ and $\rho \equiv \sum_n
\operatorname{e}^{-E_n/T(t)}\, |n \rangle \langle n|$, where $|n \rangle$ and $E_n$ are
the eigenfunctions and eigenvalues, respectively, of $\mathcal{H}$ in Eq.~(\ref{eq:ham}).
Our goal is to capture, at least qualitatively, the feedback of the coherent phonon on the electron fluid,
for which it is sufficient to evaluate the force to second order in $\lambda$. At this order
$u(t)$ can be treated as a classical variable fluctuating in time, and $F(t)$ can be evaluated
using linear response theory. We get
\begin{equation}
\label{eq:force}
F(t)/(2 \omega_0) = - \langle \hat{O} \rangle_{\mathcal{H}_0, T(t)}
- \int_{- \infty}^{\infty} d t^{\prime} \Pi_{T(t)}(t - t^{\prime}) u(t^{\prime}),
\end{equation}
where $\Pi_{T(t)}(t - t^{\prime}) \equiv i \theta(t - t^{\prime}) \langle \left[ \hat{O} (t^{\prime}), \hat{O} (t) \right]
\rangle_{\mathcal{H}_0, T(t)}$ is the response function associated with the weighted electron density operator
$\hat{O} \equiv (\lambda/\mathcal{N}) \sum_{{\bf k}, a, b, \sigma} C({\bf k})_{a b}c^{\dagger}_{{\bf k} a \sigma} c_{{\bf k} b \sigma}$,
and $\mathcal{H}_0 \equiv \mathcal{H}(\lambda=0)$.
Since from now on all averages involving electronic operators are defined with respect to $\mathcal{H}_0$,
we henceforth do not indicate it explicitly.
Note that, as discussed in Appendix~\ref{Appendix_structure},
$\Pi_{T(t)}(t - t^{\prime})$ is a function not only of $(t - t^{\prime})$ but also of $t$, via its dependence on the temperature $T(t)$.
Moreover, the Fourier transform of the response function $\Pi_{T(t)}(\Omega)$ coincides with the \emph{equilibrium}
retarded phonon self-energy $\Sigma_{\rm ph}(\Omega)$ evaluated to second order in $\lambda$ and at temperature $T$ (see Eq.~(\ref{Ftrsf})).
At this stage it is also evident that, if needed, effects of electron-electron interaction can be systematically introduced
in the evaluation of $F(t)$.
The fact that the coherent phonon is a well-defined excitation implies that the retardation in $\Pi_{T(t)}(t - t^{\prime})$
is weak, and it is sufficient to expand in frequency $\Pi_{T(t)}(\Omega) \approx \pi(T) + i \Omega \gamma(T)/\omega_0$.
Here $\pi(T) \equiv \Pi_R(\Omega=0, T)$ and $\gamma(T)/\omega_0 \equiv \left. \partial_{\Omega}\Pi_I(\Omega, T) \right|_{\Omega=0}$, where
$\Pi_{R/I}(\Omega, T)$ are the real and imaginary parts of $\Pi_{T(t)}(\Omega)$, respectively.
Note, in general, both $\pi(T)$ and $\gamma(T)$ are time dependent through their $T(t)$ dependencies. In the following
we simplify the discussion by assuming the decay rate $\gamma$ is constant, even though the
current formulation can handle time-dependent decay rates.
This gives
\begin{equation}
\label{eq:diff1}
\left( \partial_t^2 + 2 \gamma \partial_t + \omega_0^2 \right) u = f(t),
\end{equation}
and
\[
f(t) \equiv - 2 \omega_0 \left[ \langle \hat{O} \rangle_{T}
-\langle \hat{O} \rangle_{T_L} + \{ \pi(T) - \pi(T_L) \} u(t) \right]
\]
is the instantaneous out of equilibrium force.
In the above the second and the fourth terms are added by hand for the following reasons. The second term
involving $\langle \hat{O} \rangle_{T_L}$ is a constant, and adding it is equivalent to setting
the zero of
the displacement $u$ to be the atomic position at $T_L$.
The fourth term involving $\pi(T_L) u(t)$ renormalizes
the frequency $\omega_0$ and adding it is equivalent to identifying
$\omega_0$ with the equilibrium phonon frequency at $T_L$. Once these two terms are added,
we recover the physically expected behavior, namely $f(t=0^-) = f(t \rightarrow \infty) = 0$, see Eq.~(\ref{eq:tau-T}).
The functions $\langle \hat{O} \rangle_{T}$ and $\pi(T)$ are well-defined thermodynamic quantities which, in the
absence of a phase transition, are analytic in $T$. Thus, they can be expanded around $T_L$ and,
using Eq.~(\ref{eq:tau-T}), they can
be expressed as series in powers of $\operatorname{e}^{- t/\tau_e}$.
In practice, these series can be truncated after the first few terms:
\begin{align}
\begin{aligned}
\langle \hat{O} \rangle_{T} -\langle \hat{O} \rangle_{T_L} = \sum_{n}a_{n}e^{-nt/\tau_e}
&\approx - (X_1/2) \operatorname{e}^{- t/\tau_1},
\\
\pi(T) - \pi(T_L) = \sum_{n}{}b_{n}e^{-nt/\tau_e} & \approx - (X_2/2) \operatorname{e}^{- t/\tau_2},
\end{aligned}
\label{eq:approx-equality}
\end{align}
where $T_L$ is the base temperature of pump-probe experiments, $a_n = \frac{1}{n!}\left.\frac{d^{n}\langle \hat{O} \rangle_{T}}{dT^{n}}\right|_{T=T_L}(T_H-T_L)^{n}$,
$b_n=\frac{1}{n!}\left.\frac{d^{n}\pi}{dT^{n}}\right|_{T=T_L}(T_H-T_L)^n$,
$X_1 = -2 \left(\langle \hat{O} \rangle_{T_H} -\langle \hat{O} \rangle_{T_L} \right) \sim \mathcal{O}(\lambda)$,
$X_2 = -2 \left( \pi(T_H) - \pi(T_L) \right) \sim \mathcal{O}(\lambda^2)$.
In other words, we assume that each of the series
$\sum_{n}a_{n}e^{-nt/\tau_e}$
and
$\sum_{n}{}b_{n}e^{-nt/\tau_e}$
can be modeled as a single decaying exponential
with effective decay rates $\tau_{1,2} \sim \tau_e$, respectively. The temperature dependencies of
$\langle \hat{O} \rangle_T$ and $\pi(T)$ can be obtained from the microscopic theory.
Then, the parameters $[X_1, X_2, \tau_1, \tau_2]$ can be calculated using Eq.~(\ref{eq:approx-equality}), provided
we know $[T_H, \tau_e]$. Hence \textit{the theory has only two phenomenological parameters},
namely $[T_H,\tau_e]$. We get
\begin{equation}
\label{eq:force1}
f(t) = \omega_0 \left( X_1 \operatorname{e}^{- t/\tau_1} + u X_2 \operatorname{e}^{- t/\tau_2} \right),
\end{equation}
where the second term is the \emph{lattice feedback} which
can be interpreted as the effect of the change in the electron dispersion due to the coherent phonon excitation.
Eqs.~(\ref{eq:diff1}) and (\ref{eq:force1}), together with the initial conditions $u(0) =0$ and $\partial_t u(0) =0$, describe
the coherent phonon dynamics.
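Although Eqs.~(\ref{eq:diff1}) and (\ref{eq:force1}) admit the approximate closed-form solution discussed in the next section, they are also straightforward to integrate numerically. The following minimal Python sketch does so with a standard ODE solver; the parameter values are the representative ones of Fig.~\ref{fig2} (an illustration only, not part of any fit):
\begin{verbatim}
# Sketch: integrate the equation of motion (eq:diff1) with the
# out-of-equilibrium force (eq:force1). Units: ps and rad/ps.
import numpy as np
from scipy.integrate import solve_ivp

w0 = 2*np.pi*5.5          # phonon frequency
gam = 1/5.0               # decay rate gamma
X1, tau1 = 0.5*w0, 0.7    # quench amplitude and its decay time
tau2 = 0.6                # decay time of the feedback term

def rhs(t, y, X2):
    u, v = y
    f = w0*(X1*np.exp(-t/tau1) + u*X2*np.exp(-t/tau2))
    return [v, f - 2*gam*v - w0**2*u]

for X2 in (0.0, 0.1*w0):  # X2 = 0: no lattice feedback
    sol = solve_ivp(rhs, (0.0, 6.0), [0.0, 0.0], args=(X2,),
                    t_eval=np.linspace(0.0, 6.0, 3000), rtol=1e-9)
    u_t = sol.y[0]        # to be compared with Eq. (eq:full-soln)
\end{verbatim}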
\section{Results}\label{Sec_results}
(i) Evaluating the force $f(t)$ to linear order in $\lambda$ is equivalent to ignoring the lattice feedback by
setting $X_2 =0$ in Eq.~(\ref{eq:force1}). In this limit, we recover
the phenomenological result of Zeiger \emph{et al.}~\cite{zeiger1992}, namely
$u(t) = (X_1/\omega_0) [ \operatorname{e}^{- t/\tau_1} - \operatorname{e}^{- \gamma t} \cos (\omega_0 t - \phi_0)/ \cos \phi_0 ]$,
with the phase $\phi_0 \sim {\rm max}[\gamma/\omega_0, 1/(\omega_0 \tau_1)]$.
However, the detection of a coherent phonon necessarily implies that
in a typical experimental situation
\begin{equation}
\label{eq:inequality}
\omega_0 \gg \gamma, 1/\tau_{1/2},
\end{equation}
and so $\phi_0 \ll 1$, which means that the phase obtained within the phenomenological framework is negligible.
As we show below, keeping the lattice feedback term also leads to a finite phase of a different physical
origin, and the latter is quantitatively
more significant than $\phi_0$.
(ii) Finite $X_2$ leads to a richer dynamics and a modified solution.
In the limit $[\gamma/\omega_0, 1/(\omega_0 \tau_{1/2})] \rightarrow 0$, which is
experimentally relevant, we get (see Eq.~(\ref{u_sol}))
\begin{equation}
\label{eq:full-soln}
u(t) = \frac{X_1 \operatorname{e}^{- t/\tau_1}}{\omega_0 - X_2 \operatorname{e}^{- t/\tau_2}}
- \frac{X_1 \operatorname{e}^{- \gamma t} }{\omega_0 - X_2} \cos [\omega_0 t + \Phi(t)],
\end{equation}
where
\begin{equation}
\label{eq:phase}
\Phi(t) \equiv - \frac{X_2 \tau_2}{2} \left( 1 - \operatorname{e}^{- t/\tau_2} \right).
\end{equation}
Equations~(\ref{eq:full-soln}) and (\ref{eq:phase}) summarize the main results of this work.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=1.0\linewidth,trim=0 0 0 0]{Fig2.png}
\caption{
(color online)
Calculations for representative parameter values:
frequency $\omega_0/(2\pi) =5.5$ THz, $X_1/\omega_0 = 0.5$, $\tau_1 = 0.7$ ps,
$\tau_2 = 0.6$ ps, and $\gamma^{-1} =5$ ps, for
different strengths of the lattice feedback term $X_2$.
$X_2=0$ corresponds to the phenomenological theory~\cite{zeiger1992}.
(a) Coherent phonon displacement $u(t)$, see Eqs.~(\ref{eq:full-soln}) and (\ref{eq:phase}) and the associated text.
The inset, a blow-up of the
dashed rectangle, shows the signature of the finite phase $\phi$ for different values of $X_2$.
(b) The effects of the feedback at different time scales.
At short times ($t \lesssim \tau_2$) a finite
$X_2$ leads to \emph{chirping}. At long times ($t \gg \tau_2$) it leads to a finite phase $\phi$,
see also inset in (a).
$\tau_2$ is defined in Eq.~(\ref{eq:approx-equality}).
}
\label{fig2}
\end{center}
\end{figure}
At face value, the above is a five-parameter description of the coherent phonon. However, if the microscopic
prescription is followed, $(X_1, X_2, \tau_1, \tau_2)$ can be obtained from the phenomenological
parameters $T_H$ and $\tau_e$ defined in Eq.~(\ref{eq:tau-T}) by using the approximate relations
of Eq.~(\ref{eq:approx-equality}).
Furthermore, if the theory to $\mathcal{O}(\lambda^2)$ is quantitatively
sufficient, then $\gamma^{-1}$ is the equilibrium phonon lifetime measured by, say, Raman response.
(iii) For $t \lesssim \tau_2$ the feedback $\Phi(t)$ describes temporal variation of the oscillation
frequency, i.e., \emph{chirping}, with a frequency variation $\Delta \omega_0 \sim -X_2/2$, see Fig.~\ref{fig2}.
On the other hand, for $t \gg \tau_2$ we get a finite residual phase
$\phi \equiv \Phi(t \rightarrow \infty) = -X_2 \tau_2/2$, see Fig.~\ref{fig2}.
Note, even if $\left|\Delta \omega_0 \right|/ \omega_0 \ll 1$ and the chirping is not experimentally observable at low fluence, the
phase $\phi = (\Delta \omega_0/ \omega_0)(\omega_0 \tau_2)$ can be substantial since it involves the large parameter $\omega_0 \tau_2$,
cf.\ Eq.~(\ref{eq:inequality}). Note, the time-dependent phase $\Phi(t)$ is qualitatively different from the constant
phase that is usually discussed in the literature.
The chirping discussed here originates from the temperature dependence, and hence the time dependence, of the phonon frequency
due to electron-phonon interaction. This is to be contrasted with other mechanisms of
chirping discussed in the literature such as that due to phonon anharmonicity~\cite{hase2002} and carrier
diffusion~\cite{tangney1999,tangney2002,fritz2007}.
(iv) Equilibrium Raman spectroscopy of BaFe$_2$As$_2$ shows that the
$A_{1g}$ phonon frequency softens with increasing temperature~\cite{rahlenbeck2009}.
Simultaneously, the phonon lifetime~\cite{rahlenbeck2009} has an atypical temperature dependence
across the magnetic transition of BaFe$_{2}$As$_{2}$ which is very reminiscent of the $T$-dependence of
resistivity~\cite{rullier-albenque}, implying that the phonon temperature dependencies
are likely due to interaction with the electrons.
Thus, from these equilibrium trends, we conclude that
$X_2 > 0$, and we \emph{predict} that the coherent $A_{1g}$ phonon of BaFe$_2$As$_2$ will show red-shifted chirp
at sufficiently high fluence.
(v) Since in our theory the frequency shift $\Delta \omega_0$ and the residual phase $\phi$ both depend on $X_2$,
an important conclusion is that red-shifted (blue shifted) chirp is accompanied by negative (positive)
residual phase. Note, the above expectation is indeed correct for the $A_{1g}$ coherent phonon of
BaFe$_2$As$_2$, which softens with increasing temperature, and for which a negative phase $\phi = - 0.1\pi$ has been
reported~\cite{yang2014,rettig2015}, see also the discussion in Sec.~\ref{Ba_model}.
\section{Quantitative description of the $A_{1g}$ coherent phonon in BaFe$_{2}$As$_{2}$ }
\label{Ba_model}
In this section, we apply the theory quantitatively to the coherent $A_{1g}$ phonon of the strongly correlated
metal BaFe$_2$As$_2$, and we compare the theory results with a recent time-resolved x-ray study~\cite{rettig2015},
see Fig~\ref{fig3}.
BaFe$_2$As$_2$ is the parent compound of a class of high temperature superconductors that also have rather
interesting magnetic and nematic properties \cite{Johnston}. The $A_{1g}$ coherent phonon in this system, associated with the
motion of the As atoms, has also been widely studied using a variety of pump-probe techniques \cite{yang2014,rettig2015,kim2012,mansart2010}, including
time-resolved x-ray spectroscopy \cite{rettig2015}, which provides the most direct information about the As motion.
The electronic properties of BaFe$_2$As$_2$ are known \cite{Egami} to be very sensitive to the As height, which makes
the study of the coherent phonon motion all the more interesting.
Our overall goal in this section is to check to what extent a microscopic tight-binding model, which has been successfully
used to understand equilibrium properties, can be used to describe the transient temperature dependencies involved
in a pump-probe setting. Such an exercise is a step in the direction of extracting information about equilibrium
properties from a pump-probe setup.
As a first step, we define the various parameters that we use to describe
BaFe$_2$As$_2$ with the microscopic Hamiltonian of Eq.~(\ref{eq:ham}).
We take the electronic kinetic part $\epsilon_{ab}(\textbf{k})$ from Ref.~\cite{graser2010}, which itself
is obtained as a tight-binding fit of the LDA band structure onto a basis of five $d$ Fe orbitals~\cite{Unfold}.
Note, this particular set of tight-binding parameters has been
used widely in the literature. Relatively less detailed information is currently available concerning the orbitally
resolved electron-phonon matrix elements $C(\textbf{k})_{ab}$ of Eq.~(\ref{eq:ham}). However, it is well-accepted that
an increase of the dimensionless arsenic height $u=\langle b^{\dagger}+b \rangle$ is accompanied by a reduction of
the hopping-integrals and the bandwidths~\cite{kuroki2009} since the hopping of the electrons between Fe atoms can also
be mediated by the As atoms.
Taking into account this physical expectation, we found that a simple way to model the electron-phonon matrix elements is to assume
\begin{equation}
C({\bf k})_{a b}= -[t_{nn}]_{a b}(\textbf{k}),
\end{equation}
where $[t_{nn}]_{ab}(\textbf{k})$ denotes the diagonal nearest-neighbour entries of the tight-binding dispersion $\epsilon_{ab}(\textbf{k})$.
Thus, in our scheme the entire electron-phonon coupling is ultimately described by a single additional dimensionless parameter
$\lambda$ which can later be absorbed in an overall scaling factor between the calculated $u(t)$ and the experimental x-ray intensity
(see also the discussion following Eq.~(\ref{SFit}) below).
As a second step, we describe the calculation of the out-of-equilibrium force $F(t)$ (see also the discussion in the paragraph
following Eq.~(\ref{eq:tau-T})) to first order in $\lambda$.
This involves the calculation of the thermal average of the weighted electron
density operator. From Eq.~(\ref{eq:force}) we get
\begin{equation}\label{Sforcing}
\begin{split}
\langle \hat{O}\rangle_{\mathcal{H}_0,T} & \equiv \frac{\lambda}{\mathcal{N}}
\sum_{{\bf k}, a, b, \sigma} C({\bf k})_{a b}\langle c^{\dagger}_{{\bf k} a \sigma} c_{{\bf k} b \sigma} \rangle_{\mathcal{H}_0, T} \\
& =\frac{\lambda}{\mathcal{N}}
\sum_{{\bf k}, \nu, \sigma} \tilde{C}({\bf k})_{\nu \nu} n_F\big[\xi_{\nu}(\textbf{k})-\mu(T),T\big],
\end{split}
\end{equation}
where the last equality is written in the band basis.
Here $n_F$ is the Fermi function, $\xi_{\nu}(\textbf{k})$ is the energy of an electron in the
band $\nu$ with momentum $\textbf{k}$, $\tilde{C}({\bf k})_{\nu \nu}$ is the electron-phonon matrix
element in the band basis, and $\mu(T)$ is the chemical potential at the transient temperature $T(t)$.
We assume that there is no electronic diffusion~\cite{torchinsky2011}, and that the particle number
is conserved during the pump-probe cycle, which is consistent with the conclusions of a
recent time-resolved photoemission study \cite{yang2014}.
We divide the Brillouin zone into a ($10\times10\times10$) grid,
and diagonalize $\mathcal{H}_0$ at each point of the grid to obtain the electronic dispersion $\xi_{\nu}(\textbf{k})$.
The chemical potential is then calculated by solving the particle number conservation equation numerically.
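As an illustration of this step, the following Python sketch determines $\mu(T)$ by root finding; here \texttt{xi} is a placeholder for the array of band energies $\xi_{\nu}(\textbf{k})$ on the grid (to be supplied by the diagonalization of $\mathcal{H}_0$), and \texttt{n\_target} for the filling fixed at the base temperature:
\begin{verbatim}
# Sketch: chemical potential mu(T) from particle-number conservation.
import numpy as np
from scipy.optimize import brentq

kB = 8.617e-5  # Boltzmann constant, eV/K

def filling(mu, T, xi):
    # xi: band energies on the k-grid, shape (N_k, N_bands), in eV;
    # returns electrons per site, the factor 2 accounting for spin
    return 2.0*np.sum(1.0/(np.exp((xi - mu)/(kB*T)) + 1.0))/xi.shape[0]

def chemical_potential(T, xi, n_target):
    return brentq(lambda mu: filling(mu, T, xi) - n_target,
                  xi.min() - 1.0, xi.max() + 1.0)
\end{verbatim}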
In Fig.~\ref{fig3} (a) we show the result of our calculation of
$\langle \hat{O}\rangle_{\mathcal{H}_0,T}$
for temperatures ranging from 0 to 3500 K. This $T$-dependence can be
transformed into a time dependence using Eq.~(\ref{eq:tau-T}) provided we have an estimate
of the phenomenological parameters $(T_H,\tau_e)$ at each pump fluence.
Henceforth, the base temperature is taken as $T_L = 140$ K.
The solid (black) line of Fig.~\ref{fig3} (b) gives such a transformation
$\langle \hat{O}\rangle_{\mathcal{H}_0,T} \rightarrow \langle \hat{O}\rangle (t)$ for a representative
value of $(T_H,\tau_e)$.
The resulting time dependence can be modeled by a
single decaying exponential using Eq.~(\ref{eq:approx-equality}). This leads to an estimate of $(X_1, \tau_1)$
for each pump fluence, see dashed (red) line of Fig.~\ref{fig3} (b).
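A minimal sketch of this fitting step is given below (Python); the function \texttt{O\_of\_T} is a smooth increasing stand-in for the curve of panel (a), not the actual tight-binding result:
\begin{verbatim}
# Sketch: estimate (X1, tau1) by fitting a single decaying
# exponential, Eq. (eq:approx-equality), to <O>(t) - <O>(T_L).
import numpy as np
from scipy.optimize import curve_fit

def O_of_T(T):           # stand-in for the curve of panel (a)
    return 5e-8*T**2

TL, TH, tau_e = 140.0, 500.0, 0.5       # representative values (K, ps)
t = np.linspace(0.0, 3.0, 300)
T_t = TL + (TH - TL)*np.exp(-t/tau_e)   # Eq. (eq:tau-T)
dO = O_of_T(T_t) - O_of_T(TL)

model = lambda t, X1, tau1: -(X1/2.0)*np.exp(-t/tau1)
(X1, tau1), _ = curve_fit(model, t, dO, p0=(-2*dO[0], tau_e))
\end{verbatim}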
\begin{figure}[!t]
\begin{center}
\includegraphics[width=1.0\linewidth,trim=0 0 0 0]{Fig3.png}
\caption{
(color online) Quantitative description of the $A_{1g}$ coherent phonon (frequency $\omega_0/(2\pi) =5.5$ THz) of BaFe$_2$As$_2$,
and comparison with experiment~\cite{rettig2015}.
(a) Calculated equilibrium expectation value of the weighted electron density $\langle \hat{O} \rangle_{T}$
for $\lambda =$ 0.25.
(b) Solid (black) line: The $T$-dependence in (a) is transformed into a time dependence using Eq.~(\ref{eq:tau-T})
for representative
values of the phenomenological parameters $(T_H,\tau_e)$. Base temperature $T_L = 140$ K. Dashed (red) line:
Fit using Eq.~(\ref{eq:approx-equality}), and estimate of $(X_1, \tau_1)$.
(c) Solid lines: temporal variation of x-ray form factor calculated using Eq.~(\ref{eq:full-soln})
at different fluences (FL, in mJ/cm$^2$).
The table gives estimates of $(T_H,\tau_e)$ used in the calculation. The fit uses $\gamma^{-1} = 5$ ps, which
is the equilibrium lifetime~\cite{rahlenbeck2009}.
Symbols represent data points extracted from Ref.~\cite{rettig2015}.
}
\label{fig3}
\end{center}
\end{figure}
Note, the above step should not be construed as a mere replacement of two phenomenological parameters
$(X_1, \tau_1)$ by two other phenomenological parameters $(T_H,\tau_e)$. This is because in our scheme
the estimation of $(X_1, \tau_1)$ at each fluence is obtained via
the evaluation of $\langle \hat{O}\rangle_{\mathcal{H}_0,T}$ from
the microscopic Hamiltonian Eq.~(\ref{eq:ham}) whose parameters are themselves fluence independent.
Thus, the modeling is highly constrained, and it is not obvious that the $(X_1, \tau_1)$ needed for a
given fluence can be obtained in our scheme for reasonable values of $(T_H,\tau_e)$ once the Hamiltonian is
fixed.
One way to
appreciate the nontrivial step involved in our quantitative modeling is to note
that our scheme can provide meaningful $(T_H,\tau_e)$ only if $\langle \hat{O}\rangle_{\mathcal{H}_0,T}$
is a \emph{monotonically increasing} function of temperature. On the other hand, such a property is \emph{a priori}
not guaranteed. Likewise, if the slope of the function
$\langle \hat{O}\rangle_{\mathcal{H}_0,T}$ is too large/small it would lead to values of $T_H$ that are too small/large
compared to the estimates currently available from time-resolved photoemission studies \cite{yang2014}.
In the third step we discuss the relevance of the $\lambda^2$ contribution to the force $F(t)$ for the
experiments of Refs.~\cite{rettig2015,yang2014}. This contribution can be estimated from the following argument.
To $\lambda^2$ accuracy, $\pi(T)$ can also be identified as the equilibrium
phonon self-energy whose $T$-dependence can be inferred from equilibrium Raman measurement of $\omega_0(T)$~\cite{rahlenbeck2009}.
For $T_L = 140$ K and $T_H \sim 500$ K,
an extrapolation of $\omega_0(T)$ reported in Ref.~\cite{rahlenbeck2009} gives $\Delta \omega_0 = 0.4$ THz, and therefore
$\frac{X_2}{\omega_0} \approx 0.01$, see Eq.~(\ref{eq:phase}).
This small fraction implies that the $\lambda^2$ contribution to the force $F(t)$ is unimportant for the fluences used
in Ref.~\cite{rettig2015}. Nevertheless, for the fits we kept the phase $\Phi(t)$ generated by the feedback effect, and we used the
expression
\begin{equation}
\label{eq:fit-u}
u(t) = (X_1/\omega_0) (\operatorname{e}^{- t/\tau_1}
- \operatorname{e}^{- \gamma t} \cos [\omega_0 t + \Phi(t)]),
\end{equation}
obtained by setting $\frac{X_2}{\omega_0} \rightarrow 0$ in Eq.~(\ref{eq:full-soln}).
To model $\Phi(t)$ we assume that it is fluence independent and that the experimentally
reported phase $\phi = - 0.1 \pi$~\cite{rettig2015,yang2014} can be identified with $\Phi(t \rightarrow \infty) = -X_2 \tau_2/2$
(see Eq.~(\ref{eq:phase})), from which we get $\tau_2 \approx 800\,\rm{fs}$.
Note also that for times $t\lesssim \tau_e$ the quality of the fit is only marginally
affected by including the feedback term $\Phi(t)$.
Thus, following the above three steps we are able to compute $u(t)$ for a given
fluence provided we have an estimate of $(T_H,\tau_e)$.
Finally, we compare the calculated arsenic displacement $u(t)$ with that measured in time-resolved
x-ray scattering~\cite{rettig2015} for
a fluence range of $0.7$ to $3.5\,\rm{mJ/cm^2}$.
The intensity is convolved with a Gaussian pulse to account for the limited time resolution~\cite{rettig2015}.
In the kinematic approximation~\cite{rettig2015}
the variation of the intensity is proportional to the arsenic displacement and is given by
\begin{equation}\label{SFit}
\frac{\Delta I}{I_{0}}(t)= \frac{B}{ \tau_{\rm res}\sqrt{\pi} }\int^{\infty}_0 e^{-\left(\frac{t-\tau}{\tau_{\rm res}}\right)^2}
u(\tau)\, d\tau,
\end{equation}
where
$I_0$ is the equilibrium intensity, $\Delta I$ is the variation of intensity
out of equilibrium, $\tau_{\rm res}\approx 96 \,\rm{fs}$ is the experimental time resolution of the probe pulse (not to be confused with the electron thermalization time $\tau_r$),
and $u(t)$ is computed using Eq.~(\ref{eq:fit-u}) following the three steps mentioned above.
$B$ is a dimensionless proportionality constant, independent of fluence, that sets the overall scale of the
theoretically evaluated $\Delta I/I_0$ with respect to the experimentally measured ones. Physically, $B$ is
related to the change of the relevant x-ray form factor with the As atomic position.
Within our scheme the constant $B$ and the dimensionless electron-phonon coupling $\lambda$ cannot be
estimated separately. We find that best fits are obtained for $\lambda B =$ 4.9.
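For concreteness, the convolution of Eq.~(\ref{SFit}) can be sketched as follows (Python; $X_1$ and $\tau_1$ are set to illustrative values rather than the fitted ones, and the overall scale constant is set to unity):
\begin{verbatim}
# Sketch of Eq. (SFit): Gaussian convolution of u(t), Eq. (eq:fit-u),
# with the finite probe resolution. Illustrative parameters only.
import numpy as np

w0, gam = 2*np.pi*5.5, 1/5.0   # rad/ps, 1/ps
X1, tau1 = 0.5*w0, 0.7         # illustrative, not the fitted values
tau2, phi = 0.8, -0.1*np.pi    # feedback parameters (see text)
taures = 0.096                 # probe resolution, ps
B = 1.0                        # overall scale; lambda*B in the fits

def u(t):
    Phi = phi*(1.0 - np.exp(-t/tau2))
    return (X1/w0)*(np.exp(-t/tau1) - np.exp(-gam*t)*np.cos(w0*t + Phi))

tau = np.linspace(0.0, 8.0, 8001)
u_tau = u(tau)
dtau = tau[1] - tau[0]
t_probe = np.linspace(-0.3, 4.0, 600)
dI = np.array([np.sum(np.exp(-((ti - tau)/taures)**2)*u_tau)
               for ti in t_probe])*dtau*B/(taures*np.sqrt(np.pi))
\end{verbatim}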
In Fig.~\ref{fig3} (c) we compare the calculated
$\Delta I/I_0$ (lines) with the data of Ref.~\cite{rettig2015} (solid symbols).
From Fig.~\ref{fig3}(c)
we conclude that the two-parameter fit is quite reasonable, given the simplicity of the starting model.
Furthermore, our
estimation of $(T_H, \tau_e)$, given in the inset of Fig.~\ref{fig3} (c), compares well with the
experimental estimations given in Ref.~\cite{yang2014}.
The above attempt at a quantitative description is an important step towards
connecting the equilibrium microscopic description
of electrons with out-of-equilibrium pump-probe data. Note, the above calculation does not include
temperature dependencies of the single electron properties arising due to
electron-electron interaction.
While such interaction effects can be
incorporated in the current formalism, doing so is beyond the scope of the present work.
\section{Conclusions} \label{conclusion}
We developed a microscopic theory of displacive coherent phonons driven by laser heating of carriers.
Our theory captures physics beyond the standard phenomenological description, namely
the modification of the electronic energy levels due to the phonon excitation, and how this
change feeds back on the phonon dynamics. This effect of electron-phonon interaction leads to chirping at short time scales,
and at long times it appears as a finite phase in the oscillatory signal.
We successfully applied the theory to the $A_{1g}$ coherent phonon of BaFe$_2$As$_2$,
thereby demonstrating that pump-probe data can be related to microscopic quantities and eventually to
equilibrium physics.
We explained the origin of the phase in the oscillatory signal reported in recent experiments~\cite{yang2014,rettig2015}
on this system, and we predict
that it will exhibit red-shifted chirping at larger fluence.
\acknowledgments
We are thankful to M. Bauer, V. Brouet, I. Eremin,
Y. Gallais, L. Perfetti, M. Schiro, K. Sengupta, A. Subedi for insightful discussions.
We acknowledge financial support from ANR grant ``IRONIC'' (ANR-15-CE30-0025).
\section{Introduction}\label{sec:intro}
The logarithmic Mahler measure of a Laurent polynomial
\[
P(x_1,\dots,x_n) \in \mathbb C[x_1^{\pm 1},\dots,x_n^{\pm 1}]
\]
is defined as
\[
m(P) = \frac{1}{(2\pi i)^n}\int_{|x_1|=\dots=|x_n|=1} \log|P(x_1,\dots,x_n)| \;
\frac{dx_1}{x_1} \dots \frac{dx_n}{x_n}.
\]
One can show that this integral is always convergent. For a monic polynomial in
one variable $P \in \mathbb C[x]$ one can compute $m(P)$ by Jensen's formula
\begin{equation}\label{Jensen}
\frac{1}{2\pi i}\int_{|x|=1} \log|P(x)| \; \frac{dx}{x} \= \sum_{\alpha:
P(\alpha)=0} \max(0,\log|\alpha|)\,,
\end{equation}
but no explicit formula is known for polynomials in several variables. Let us
consider the simplest case of linear forms, namely $m(1+x_1+\dots+x_n)$. In
1981 C.~Smyth discovered~(\cite{Sm1}) that
\begin{equation}\label{2var}
m(1+x_1+x_2) \= \frac{3\sqrt{3}}{4\pi}L(\chi_{-3},2)
\end{equation}
where $\chi_{-3}(n)=\bigl(\frac{-3}{n}\bigr)$, $L(\chi_{-3},s) \=
\sum\limits_{n=1}^{\infty} \dfrac{\chi_{-3}(n)}{n^s} \= 1 - \frac1{2^s} +
\frac1{4^s} - \frac1{5^s} + \dots$ and
\begin{equation}\label{3var}
m(1+x_1+x_2+x_3) \= \frac{7}{2\pi^2} \zeta(3) \,.
\end{equation}
These formulas can be proved by explicit integration. Later we will see another
method due to F.~Rodriguez-Villegas~(\cite{MMM}) to obtain~\eqref{2var}
and~\eqref{3var} with the help of modular forms. Already in the next case no
explicit formula for $m(1+x_1+x_2+x_3+x_4)$ is known, and this is the subject of
the present paper. One can find in~\cite{VTV} the numerical value
\[
m(1+x_1+x_2+x_3+x_4) \= 0.544412561752185...
\]
and also there is the following conjectural formula.
\vskip0.5cm
{\bf Conjecture} (F. Rodriguez-Villegas,~\cite{BLVD}, see also \cite{Zud}):\emph{
\[
m(1+x_1+x_2+x_3+x_4) \; \overset{?}= \; 6 \, \Bigl(\frac{\sqrt{-15}}{2 \pi
i}\Bigr)^5 L(f_{15},4)
\]
where
\[
f_{15} \= \eta(3z)^3\eta(5z)^3 + \eta(z)^3\eta(15z)^3 \= q + q^2 - 3q^3 - 3q^4
+ \dots
\]
is a CM modular form of weight 3, level 15 and Nebentypus $\bigl(
\frac{-15}{\cdot}\bigr)$.}
\vskip0.5cm
This modular form arises in~\cite{PTV} in relation to the variety
\[
\begin{cases}
1+x_1+x_2+x_3+x_4 \= 0 \\
1+\frac1{x_1}+\frac1{x_2}+\frac1{x_3}+\frac1{x_4} \= 0
\end{cases}
\]
which can be compactified to a $K3$ surface of Picard rank 20. Namely, C.~Peters,
J.~Top, and M.~van der Vlugt show in~\cite{PTV} that if $X$ is the minimal resolution of
singularities of the above surface then the L-function of $H^2(X)$ has generic
Euler factor
\[
(1-pT)^{16} \, \Bigl(1 - \bigl( \frac{-3}{p}\bigr) pT\Bigr)^4 \, \Bigl(1 - A_p T
+ \bigl( \frac{-15}{p}\bigr) p^2 T^2 \Bigr)
\]
where $A_p$ is the $p$th coefficient in the $q$-expansion of $f_{15}$.
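Both the numerical value above and Smyth's formulas lend themselves to direct numerical checks. The following Python sketch (crude Monte Carlo for the five-term linear form and a midpoint rule for~\eqref{2var}) reproduces them to a few decimal digits:
\begin{verbatim}
# Sanity checks: m(1+x1+x2+x3+x4) by Monte Carlo, and Smyth's
# formula (2var) by a midpoint rule on the torus.
import numpy as np

rng = np.random.default_rng(0)
th = rng.uniform(0.0, 2*np.pi, size=(10**6, 4))
m5 = np.log(np.abs(1 + np.exp(1j*th).sum(axis=1))).mean()
# m5 = 0.544... (statistical error of order 1e-3)

n = 2000
t1 = (np.arange(n) + 0.5)*(2*np.pi/n)
m2 = np.mean([np.mean(np.log(np.abs(1 + np.exp(1j*a) + np.exp(1j*t1))))
              for a in t1])
chi = {1: 1, 2: -1}                     # chi_{-3}(k) for k mod 3
L = sum(chi.get(k % 3, 0)/k**2 for k in range(1, 10**5))
# m2 and 3*sqrt(3)/(4*pi)*L both give 0.3230...
\end{verbatim}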
In order to state our results, consider the modular function $t(z)$ and modular form $f(z)$ of
weight 2
\begin{equation}\label{L3mpar}\begin{aligned}
t(z) &\= -\Bigl(\frac{\eta(2z) \eta(6z)}{\eta(z) \eta(3z)}\Bigr)^6 \= -q - 6 q^2
- 21 q^3 + \dots\\
f(z) &\= \frac{(\eta(z) \eta(3z))^4}{(\eta(2z) \eta(6z))^2} \= 1 - 4q + 4q^2 -
4 q^3 +\dots
\end{aligned}\end{equation}
for the group
\[
\Gamma_0(6)+3 \= \Gamma_0(6) \; \cup \; \Bigl\{ \sqrt{3} \begin{pmatrix} a& b/3
\\ 2 c & d \end{pmatrix} \in {\rm SL}(2,\mathbb R) \,|\, a,b,c,d \in \mathbb Z \Bigr\} \,.
\]
Throughout the paper we use the differential operator $D=\frac1{2\pi
i}\frac{d}{dz} \= q \frac{d}{dq}$. We need the following modular forms of
weight~4
\begin{equation}\label{thm2gs}\begin{aligned}
g_1 &\= \frac{Dt}{t} f \= 1+2q-14q^2+38q^3-142q^4+252q^5-266q^6+\dots \\
g_2 & \= \frac{t}{1-t} \, g_1 \= -q-7q^2-6q^3+5 q^4+120 q^5 +498 q^6 + \dots \\
g_3 & \= \frac{t(212 t^2 + 251t - 13)}{(1-t)^3} \, g_1 \=
13q+316q^2+2328q^3+\dots \\
\end{aligned}\end{equation}
Here $g_1$ is indeed a modular form and one can write it as a linear combination
of Eisenstein series (see~\eqref{thm2g1}), while $g_2$ and $g_3$ have poles at
the discrete set of points where $t(z)=1$. Our main result is the following.
\vskip0.5cm
{\bf Theorem.} \emph{ Consider the Chowla-Selberg period for the field
$K=\mathbb Q(\sqrt{-15})$
\begin{equation}\label{ChS15}
\Omega_{15} \= \frac{1}{\sqrt{30\pi}} \bigl( \prod_{j=1}^{14}
\Gamma\bigl(\frac{j}{15}\bigr)^{(\frac{-15}{j})} \bigr)^{1/4} \,,
\end{equation}
and the two numbers
\begin{equation}\label{thm2DLs}
L(g_j,g_1,3,1) \= (2 \pi)^4 \, \int_{0}^{\infty} g_1(i s)
\int_{s}^{\infty}\int_{s_1}^{\infty}\int_{s_2}^{\infty} g_j(i s_3) \, ds_3 \, ds_2
\, ds_1 \, ds
\end{equation}
for $j=2,3$. One has
\[\begin{aligned}
m(1+x_1+x_2+&x_3+x_4) \- \frac45 \; m(1+x_1+x_2+x_3) \\
& \= \frac{3\sqrt{5}\Omega_{15}^2}{20 \pi} L(g_3,g_1,3,1) \-
\frac{3\sqrt{5}}{10 \pi^3 \Omega_{15}^2} L(g_2,g_1,3,1)\,.
\end{aligned}\]}
\vskip0.5cm
The reader will find this statement in a slightly different notation in Corollary~\ref{cor2}. First, let us explain why the integrals
in~\eqref{thm2DLs} converge. For $z \in i \mathbb R_+$ both $t(z)$ and $f(z)$ are
real-valued and one can easily check that $t(z)<0$. Therefore $g_2$ and $g_3$
have no poles along the imaginary half-axis. When $s \to \infty$ we have $g_1(i
s) = O(1), g_2(i s) = O(e^{-2 \pi s})$ and $g_3(i s) = O(e^{-2 \pi s})$ because
$q$-expansions of $g_2$ and $g_3$ start in degree~1, therefore the iterated
integrals above are convergent at $\infty$. Also one can show that all three
functions $g_j(i s)$ are $o(s)$ when $s \to 0$, hence they are globally bounded
and there is no problem with convergence at $s=0$. With the help of PARI/GP we
find that numerically
\[\begin{aligned}
& L(g_2,g_1,3,1) \= -0.44662442...\\
& L(g_3,g_1,3,1) \= 8.5383217...
\end{aligned}\]
which agrees with the statement of the theorem.
The geometric meaning of these two numbers is not clear at the moment. If for example $g_2$ were a holomorphic cusp form then the number defined in~\eqref{thm2DLs} would be indeed the value of the corresponding double L-function $L(g_2, g_1, s_2, s_1)$ at $s_2 = 3, s_1=1$, which is the motivation
for our notation. We discuss double L-values of holomorphic modular forms in
Section~\ref{sec:dL}. But as soon as forms under consideration have poles in the
upper half-plane the corresponding multiple integrals become path-dependent and
there is no general theory of multiple L-values. Also we would like to remark
that for the holomorphic modular form $g_1$ in our theorem one has
$m(1+x_1+x_2+x_3)=-\frac12L(g_1,1)$, the reader can find the proof of this
statement in Section~\ref{sec:mp}. Another observation is that the poles of
$g_2$ and $g_3$ are located at the points from the same field
$K=\mathbb Q(\sqrt{-15})$, namely at the images of $z=\frac18+\frac{\sqrt{-15}}{24}$
under the group $\Gamma_0(6)+3$.
\medskip
The structure of the paper is as follows.
Sections 2 and 3 follow the approach pioneered by Rodriguez-Villegas \cite{MMM}.
In Section 2 we relate the Mahler measure of $1+x_1+\dots+x_n$ to the
principal period of a pencil of Calabi-Yau varieties of dimension $n-1$ given by
\[
\Bigl(1 + x_1 + \dots + x_n \Bigr)\Bigl( 1 + \frac1{x_1} + \dots +
\frac1{x_n} \Bigr) \= \lambda
\]
and the corresponding Picard-Fuchs differential equation. One needs to do explicitly analytic continuation of its solutions from one singular point to another one in order to compute the Mahler measure. For $n=2,3$ the differential operators appear to have modular parametrization. This allows us to do necessary analytic continuation and derive~\eqref{2var} and~\eqref{3var} in Section~3.
When $n=4$ the Picard-Fuchs differential operator is not modular. However, one can apply Jensen's formula to reduce the number of variables:
in Section~4 we observe that in fact $m(1+x_1 + \dots + x_n)$ can be computed by analytic continuation of a solution of a non-homogeneous differential equation with the Picard-Fuchs differential operator corresponding to $m(1+x_1 + \dots + x_{n-1})$. A non-homogeneous differential equation arises if one considers the generating function for the moments of a solution
of a homogeneous differential equation along a path. Moreover, the differential operator depends only on the initial differential equation being independent of the particular solution and the path, while the right-hand side depends on this data (Proposition~\ref{de_transform}). In Section~6 we discuss a modular interpretation of solutions to a non-homogeneous equation in the case when the differential operator has modular parametrization and show that double L-values of modular forms appear naturally in this context.
Though our main interest is the case $n=4$, we keep applying our technique in parallel to the case $n=3$ throughout the paper (Theorem~\ref{Thm1} and Corollary~\ref{cor1}).
This leads to a linear relation (\ref{L_relation}) between a double $L$-value of two Eisenstein series of weight~3, and ordinary $L$-values
$L(\chi_3,2)$ and $\zeta(3)$.
We give a direct proof of this relation in Section~7 using a method due to Zudilin \cite{Z2}, \cite{Z3}.
\medskip
Our original interest in Mahler's measure came from the beautiful paper \cite{MMM}
which has been inspiring us the whole time we were working on this project.
We would like to thank
our friends and colleagues Sergey Galkin, Vasily Golyshev, Anton Mellit, Maxim Smirnov and Wadim Zudilin
for their interest in our work.
Both authors are grateful to the Max-Planck-Institut f\"ur Mathematik
in Bonn for providing wonderful working conditions, where a significant part of this work was done. We would like to
express our gratitude to the referee of the manuscript who read it
carefully and helped us to improve the exposition.
\section{Mahler Measures and Differential Equations}\label{sec:mmde}
For a Laurent polynomial $P(x_1,\dots,x_n)$ the function
\[
a(t) \= \frac{1}{(2\pi i)^n}\int_{|x_1|=\dots=|x_n|=1} \frac1{1-t
P(x_1,\dots,x_n)} \; \frac{dx_1}{x_1} \dots \frac{dx_n}{x_n}
\]
is well defined for small $t$ since $|P|$ is bounded on the torus. We call
$a(t)$ the principal period of $P$. It is the generating function for the
sequence
\begin{equation}\label{cterms}
a_m \= \text{ the constant term of } P(x_1,\dots,x_n)^m
\end{equation}
since
\[\begin{aligned}
a(t) &\= \sum_{m=0}^{\infty} t^m \; \frac{1}{(2\pi
i)^n}\int_{|x_1|=\dots=|x_n|=1} P(x_1,\dots,x_n)^m \; \frac{dx_1}{x_1} \dots
\frac{dx_n}{x_n} \\
& \= \sum_{m=0}^{\infty} a_m \, t^m\,.
\end{aligned}\]
Suppose that the polynomial $P$ takes only nonnegative real values on the torus $\{ |x_i|=1 \}$. Then the
Mahler measure $m(P)$ can be computed as follows. For small real $t < 0$ one has
\[\begin{aligned}
m(P-\frac1t) & \= \frac1{(2 \pi i)^n} \int_{|x_i|=1} \log\bigl(P(x_1,\dots,x_n) - \frac 1 t \bigr) \;
\frac{dx_1}{x_1} \dots \frac{dx_n}{x_n} \\
& \= \frac1{(2 \pi i)^n} \int_{|x_i|=1} \log\bigl(- \frac 1 t (1 - t P(x_1,\dots,x_n) ) \bigr) \;
\frac{dx_1}{x_1} \dots \frac{dx_n}{x_n} \\
&\= - \log(-t) - \sum_{m=1}^{\infty} \frac{t^m}{m} \frac1{(2 \pi i)^n}
\int_{|x_i|=1} P(x_1,\dots,x_n)^m \; \frac{dx_1}{x_1} \dots \frac{dx_n}{x_n} \\
&\= - \log(-t) - \sum_{m=1}^{\infty} \frac{t^m}{m} a_m \= - \bigl( t \frac{d}{dt}\bigr)^{-1} a(t)\,.
\end{aligned}\]
Though we did the computation only for small real $t<0$, the first integral here and the terminal expression are holomorphic in $t$ and defined in some neighbourhood of the real negative half-axis (apart from possibly finitely many punctures where $a(t)$ has singularities). Therefore
\begin{equation}\label{ancont}
m(P) \= - {\rm Re} \; \bigl( t \frac{d}{dt}\bigr)^{-1} a(t) \Big|_{t = \infty}\,,
\end{equation}
where the analytic continuation is done along $-\infty < t < 0$ and we added real part to be independent of the branch of $\log(t)$, i.e. we can now assume throughout the paper that
\[
\bigl( t \frac{d}{dt}\bigr)^{-1}\; \sum_{m=0}^{\infty} a_m t^m \= a_0 \log t \+ \sum_{m=1}^{\infty} \frac{a_m}{m} t^m \,.
\]
On the other hand, it is known (\cite{SB}) that the sequence~\eqref{cterms}
always satisfies a recursion, i.e. $a(t)$ is a solution to an ordinary
differential equation
\begin{equation}\label{de}
{\mathcal L}\Bigl(t, t\frac{d}{dt}\Bigr) a(t) \= 0
\end{equation}
where ${\mathcal L}$ is a certain polynomial in two non-commuting variables. Finally we
see that the Mahler measure $m(P)$ can be computed by doing analytic continuation of
a particular solution to an ordinary differential equation which one constructs
from the polynomial $P$.
Let us apply this strategy to the linear polynomials. Observe that
\[
m(1 + x_1 + \dots + x_n ) \= \frac12 m( P_n )
\]
where
\begin{equation}\label{Pn}
P_n \= \Bigl(1 + x_1 + \dots + x_n \Bigr)\Bigl( 1 + \frac1{x_1} + \dots +
\frac1{x_n} \Bigr)
\end{equation}
takes nonnegative real values on the torus. Consider the sequence of the constant terms of the powers of $P_n$:
\[\begin{aligned}
&n=2 \qquad a_m: 1\,,\;3\,,\;15\,,\; 93 \,,\; 639 \; \dots \\
&n=3 \qquad a_m: 1\,,\;4\,,\;28\,,\; 256 \,,\;2716 \; \dots \\
&n=4 \qquad a_m: 1\,,\;5\,,\;45\,,\; 545 \,,\;7885 \; \dots \\
\end{aligned}\]
The corresponding differential equations
\[
{\mathcal L}_n\bigl(t, t \frac{d}{dt}\bigr)\, a(t) \= 0
\]
are given by
\[\begin{aligned}
&{\mathcal L}_2(t,\theta) \= \theta^2 \- t(10 \theta^2 + 10\theta + 3) \+ 9t^2(\theta + 1)^2 \\
\\
&{\mathcal L}_3(t,\theta) \= \theta^3 \- 2t(2 \theta + 1)(5 \theta^2+5\theta+2) \+ 64t^2(\theta + 1)^3 \\
\\
&{\mathcal L}_4(t,\theta) \= \theta^4 \-
t (35 \theta^4 + 70 \theta^3 + 63 \theta^2 + 28\theta + 5) \\
&\qquad\qquad\+ t^2 (\theta+1)^2 (259\theta^2+518\theta+285) \- 225 t^3 (\theta+1)^2 (\theta+2)^2
\end{aligned}\]
(See \cite{Ver1} for the general form of the operator.)
We use the notation $\theta = t \frac{d}{dt}$ to distinguish it
from $D = q \frac{d}{dq}$.
In all three cases there is a unique analytic at $t=0$ solution satisfying
$a(t)=1+o(t)$.
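For instance, reading off the coefficient of $t^m$ in ${\mathcal L}_3\, a = 0$ gives the recursion $m^3 a_m = 2(2m-1)(5m^2-5m+2)\, a_{m-1} - 64\,(m-1)^3 a_{m-2}$, which the constant terms listed above must satisfy. This is easy to check in Python, computing the constant term of $P_3^m$ as a sum of squared multinomial coefficients (on the torus $P_3 = |1+x_1+x_2+x_3|^2$):
\begin{verbatim}
# Check: the constant terms of P_3^m satisfy the recursion
# m^3 a_m = 2(2m-1)(5m^2-5m+2) a_{m-1} - 64 (m-1)^3 a_{m-2}.
from math import factorial
from itertools import product

def a(m):  # constant term of P_3^m
    tot = 0
    for k1, k2, k3 in product(range(m+1), repeat=3):
        k0 = m - k1 - k2 - k3
        if k0 < 0:
            continue
        c = factorial(m)//(factorial(k0)*factorial(k1)
                           *factorial(k2)*factorial(k3))
        tot += c*c
    return tot

seq = [a(m) for m in range(7)]   # 1, 4, 28, 256, 2716, 31504, ...
for m in range(2, 7):
    assert m**3*seq[m] == (2*(2*m-1)*(5*m*m - 5*m + 2)*seq[m-1]
                           - 64*(m-1)**3*seq[m-2])
\end{verbatim}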
The equations $P_2(x_1,x_2)=\lambda$ and $P_3(x_1,x_2,x_3)=\lambda$ describe families of elliptic curves and of $K3$ surfaces
of Picard rank 19, respectively. It is therefore natural that the differential equations ${\mathcal L}_2$ and ${\mathcal L}_3$ have
modular parametrizations, in which case we can easily carry out the analytic continuation of their solutions and compute the corresponding Mahler measures by formula~\eqref{ancont}. We do this in the next section.
Unfortunately, this method is not applicable in the case $m(1+x_1+x_2+x_3+x_4)$. The equation $P_4(x_1,x_2,x_3,x_4)=\lambda$ describes a family of Calabi-Yau threefolds, and hence we do not expect ${\mathcal L}_4$ to have modular parametrization. Indeed, one can check that the differential
operator ${\mathcal L}_4$ is not a symmetric cube of any second order differential
operator and therefore does not admit a modular parametrization. Later
we show that one can still use the operator ${\mathcal L}_3$ to compute
$m(1+x_1+x_2+x_3+x_4)$, though the price paid is that we have to consider
non-homogeneous differential equations.
\section{Modular parametrizations of ${\mathcal L}_2$ and ${\mathcal L}_3$}\label{sec:mp}
Recall (\cite{Zag}) that for an arbitrary modular function $t(z)$ and a modular form $f(z)$
of weight $k$ on a congruence subgroup of ${\rm SL}(2,\mathbb Z)$ one can construct an ordinary differential operator of order $k+1$
with algebraic coefficients
\begin{equation}\label{mde}
\sum_{i=0}^{k+1} c_i(t) \Bigl(\frac{d}{dt}\Bigr)^i \,, \quad c_i(t)
\in \overline{\mathbb C(t)}
\end{equation}
such that the functions
\begin{equation}\label{locsys}
f(z), z f(z), \dots , z^{k} f(z)
\end{equation}
span the kernel of the pull-back of this operator to the upper half-plane
\[
\sum_{i=0}^{k+1} c_i\bigl(t(z)\bigr)
\Bigl(\frac1{t'(z)}\frac{d}{dz}\Bigr)^i \= \sum_{i=0}^{k+1} \widetilde c_i(z)
\Bigl(\frac{d}{dz}\Bigr)^i\,.
\]
It follows that an operator with these properties is unique up to multiplication
by algebraic functions of $t$ on the left. On the other hand, the operator
\[
\frac1{t'(z) \cdot f(z)} \Bigl( \frac{d}{dz}\Bigr)^{k+1} \frac1{f(z)}
\]
obviously annihilates the local system~\eqref{locsys} and it is routine to
check that if we rewrite it as
\[
\frac1{t'(z) \cdot f(z)} \Bigl( \frac{d}{dz}\Bigr)^{k+1} \frac1{f(z)} \=
\sum_{i=0}^{k+1} g_i(z) \Bigl(\frac{d}{dt}\Bigr)^i
\]
all coefficients $g_i(z)$ will be modular functions and hence can be written as
some algebraic functions $g_i(z)=c_i\bigl(t(z)\bigr)$. The reader could refer
to~\cite[Proposition 21]{Zag} for several constructions of the differential
equation satisfied by a modular form.
Recall that $D = \frac1{2\pi i}\frac{d}{dz} = q \frac{d}{dq}$. In
view of the above, we make a choice and define the operator
\begin{equation}\label{Ltf}
{\mathcal L}_{t,f} \= \frac1{Dt \cdot f} D^{k+1} \frac1{f}\,.
\end{equation}
This choice corresponds to the leading coefficient in~\eqref{mde} being
\[
c_{k+1}(t) \= \frac{D^kt}{f^2}\,.
\]
It is not hard to check that both ${\mathcal L}_2$ and ${\mathcal L}_3$ can be obtained from
certain pairs of a modular function and modular form, namely the following ones
(see~\cite{Verrill} for all the details in the case of ${\mathcal L}_3$).
\begin{proposition} With
\begin{equation}\label{L2mpar}\begin{aligned}
t &\= \frac{\eta(6z)^8 \eta(z)^4}{\eta(3z)^4 \eta(2z)^8} \= q - 4 q^2 + 10 q^3 +
\dots\\
f &\= \frac{\eta(2z)^6 \eta(3z)}{\eta(z)^3 \eta(6z)^2} \= 1 + 3 q + 3 q^2 + 3
q^3 + \dots\\
\end{aligned}\end{equation}
one has
\[
{\mathcal L}_{t,f} \= \frac1{t} {\mathcal L}_2\Bigl( t, t \frac{d}{dt} \Bigr)\,.
\]
\end{proposition}
\begin{proposition}\label{L3mparPr} With $t(z)$ and $f(z)$ as in~\eqref{L3mpar}
one has
\[
{\mathcal L}_{t,f} \= \frac1{t} {\mathcal L}_3 \Bigl( t, t \frac{d}{dt} \Bigr)\,.
\]
\end{proposition}
Let us use these modular parametrizations to compute $m(P_2)$ and $m(P_3)$ by
formula~\eqref{ancont}. With~\eqref{L2mpar} we see that at $q=0$ we have $t=0$
and $f=1$, hence $f$ coincides with $a(t)$ near $t=0$. Since $t$ runs over the negative real axis when $z \in \frac12+i\mathbb R_{+}$ we have by~\eqref{ancont}
\begin{equation}\label{mP2ev}\begin{aligned}
m(P_2) \= - {\rm Re}\; \bigl( t \frac{d}{dt}\bigr)^{-1} a(t) \Big|_{t=\infty} & \= -
{\rm Re}\; \Bigl( \frac{t}{Dt} D\Bigr)^{-1} f \Big|_{z=\frac12} \\
& \= - {\rm Re} \; D^{-1} \; \Bigl( \frac{Dt}{t} f \Bigr) \Big|_{z=\frac12}\,.
\end{aligned}\end{equation}
Here and throughout the paper we make a particular choice for $D^{-1}$ by
letting it act on $q$-series as
\[
D^{-1} \; \sum_{n=0}^{\infty} a_n q^n \= a_0 \log q \+ \sum_{n=1}^{\infty} \frac{a_n}{n}
q^n \,,
\]
which is in accordance with our choice for $\bigl( t \frac{d}{dt}\bigr)^{-1}$,
so that the above computation is correct. To evaluate the terminal expression in~\eqref{mP2ev} let us recall the definition of the L-function of a modular form.
For a modular form $g=\sum_{n=0}^{\infty} c_n q^n$ of weight $k$ on a congruence subgroup of ${\rm SL}(2,\mathbb Z)$ one
has $c_n = O(n^{k-1})$ and the $L$-function of $g$ is defined by
\[
L(g,s) \= \sum_{n=1}^{\infty} \frac{c_n}{n^s}
\]
when ${\rm Re} \, s > k$. This function can be continued as a meromorphic function to the whole complex plane. Moreover, if $g$ is a cusp form then $L(g,s)$ is holomorphic everywhere in $\mathbb C$.
\begin{proposition}\label{singL} Let $g$ be a modular form of weight $k$ with
$c_0=0$, and let $p < k$ be an arbitrary integer. If $L(g,s)$ has no poles with ${\rm Re}\, s \ge p$ then one has
\[
\underset{q \to 1}\lim \; {\Bigl(D^{-p} g \Bigr)}(q) \= L(g,p)\,.
\]
If $c_0 \ne 0$ the same holds when $p < 0$,
\[
\underset{q \to 1}\lim \; g(q) \= c_0 \+ L(g,0)
\]
and
\[
\underset{q \to 1}\lim \; {\Bigl(D^{-1} g \Bigr)}(q) \= \underset{q \to 1}\lim \;\Bigl(c_0 \log q + \sum_{n=1}^{\infty}\frac{c_n}{n} \; q^n \Bigr) \= L(g,1)
\]
assuming that the branch of $\log q$ is taken so that $\underset{q \to 1}\lim \log q = 0$.
\end{proposition}
The reason we do not consider $p>1$ in the latter case is that it is not clear how to define $D^{-p}$ when $c_0 \ne 0$. Note also that one always has $L(g,p)=0$ when $p < 0$. Indeed, the function $\Lambda(g,s)=\Gamma(s)\,(2\pi)^{-s} L(g,s)$ satisfies a functional equation relating $s$ to $k-s$, and it is obviously holomorphic when ${\rm Re} \, s > k$, hence also when ${\rm Re} \, s < 0$. Since $\Gamma(s)$ has poles at the nonpositive integers, $L(g,s)$ has zeros at all integers $s<0$. If $g$ is a cusp form then also $L(g,0)=0$ for the same reason, since $\Lambda(g,s)$ is holomorphic in the entire complex plane.
\begin{proof}[Proof of Proposition~\ref{singL}] First, let $c_0 = 0$, or $c_0 \ne 0$ but $p<0$. When ${\rm Re} \, w > k-p$ one has
\[\begin{aligned}
\frac{\Gamma(w)}{(2\pi)^w} L(g,p+w) &\= \frac{\Gamma(w)}{(2\pi)^w}\sum_{n=1}^{\infty}\frac{c_n}{n^{p+w}} \= \sum_{n=1}^{\infty}\frac{c_n}{n^{p}} \int_0^{\infty} t^{w-1} \, e^{-2 \pi n t}\, dt\\
& \= \int_0^{\infty} t^{w-1} \Bigl(D^{-p} g \Bigr) (it) dt \,.
\end{aligned}\]
Since $\Gamma(w)$ is small when the imaginary part of $w$ is large and $L(g,p+w)$ is uniformly bounded we can apply the inverse Mellin transform. Namely, with any real $c>k-p$ one has
\[\begin{aligned}
\Bigl(D^{-p} g \Bigr) (it) &\= \frac1{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{\Gamma(w)}{(2\pi t)^w} L(g,p+w) dw \\
&\= L(g,p) \+ \frac1{2\pi i} \int_{-\varepsilon-i\infty}^{-\varepsilon+i\infty} \frac{\Gamma(w)}{(2\pi t)^w} L(g,p+w) dw
\end{aligned}\]
where we moved the path of integration and $0<\varepsilon <1$ is chosen sufficiently small so that $L(g,s)$ still has no poles with ${\rm Re} \, s \ge p-\varepsilon$. The last integral is $O(t^{\varepsilon})$ and obviously vanishes when $t\to 0$.
If $c_0 \ne 0$ the calculation above remains correct if we replace $\bigl(D^{-p} g\bigr)(q)$ by $g(q)-c_0$ when $p=0$, and by $\bigl(D^{-1} g\bigr)(q)-c_0 \log q$ when $p=1$.
\end{proof}
Going back to~\eqref{mP2ev}, consider a modular form of weight~3 given by
\begin{equation}\label{thm1g1}\begin{aligned}
g(z) & \= \Bigl( \frac{Dt}{t} f \Bigr)(z+\frac12) \= 1+q-5q^2+q^3+11
q^4-24q^5+\dots \\
& \= E_{3,\chi_{-3}}(z) - 2 E_{3,\chi_{-3}}(2z) - 8 E_{3,\chi_{-3}}(4z) \\
\end{aligned}\end{equation}
where $E_{3,\chi_{-3}} \in M_3(\Gamma_0(3),\chi_{-3})$ is the Eisenstein series
\begin{equation}\label{Eis3}
E_{3,\chi_{-3}} \= -\frac19 + \sum_{n \ge 1} \sum_{d|n} \chi_{-3}(d) d^2 q^n \,.
\end{equation}
Then $L(E_{3,\chi_{-3}},s) \= \zeta(s) \, L(\chi_{-3},s-2)$ and
\[
L(g,s) \= \Bigl( 1 - \frac2{2^s} - \frac{8}{4^s} \Bigr) \zeta(s) \,
L(\chi_{-3},s-2)
\]
is holomorphic in the entire complex plane. Since the Fourier coefficients of the form $g$ are real, $L(g,s)$ takes real values
at real arguments $s$. Combining~\eqref{mP2ev} with Proposition~\ref{singL} we
get
\[
m(P_2) \= - L(g,1) \= 2 \, \underset{s \to 1}\lim \, \zeta(s) \,
L(\chi_{-3},s-2) \= \frac{3\sqrt{3}}{2 \pi}L(\chi_{-3},2)\,.
\]
Recall that $m(1+x_1+x_2)= \frac12m(P_2)$, hence we have just
reproved~\eqref{2var}.
Analogously, with~\eqref{L3mpar} we have that $t(z)$ assumes all negative real values along the imaginary half-axis and $t=\infty$ at $z=0$, hence
\[
m(P_3) \= - {\rm Re} \; D^{-1}\, \Bigl( \frac{Dt}{t} f \Bigr) \Big|_{q=1} \,.
\]
We consider
\begin{equation}\label{thm2g1}\begin{aligned}
g \= &\frac{Dt}{t} f \= 1+2q-14q^2+38q^3-142q^4+252q^5-266q^6+\dots \\
&\= 2 E_4(z) - 32 E_4(2z) - 18 E_4(3z) + 288 E_4(6z)
\end{aligned}\end{equation}
with the Eisenstein series
\begin{equation}\label{Eis4}
E_4 \= \frac1{240} \+ \sum_{n \ge 1} \sum_{d|n} d^3 q^n \,.
\end{equation}
The function $L(E_4,s)=\zeta(s)\zeta(s-3)$ has its only pole at $s=4$, and one can easily see that
\[
L(g,s) \= \Bigl( 2 - \frac{32}{2^s} - \frac{18}{3^s} + \frac{288}{6^s}
\Bigr) \, \zeta(s)\zeta(s-3)
\]
is holomorphic in the entire complex plane because the factor in the brackets vanishes at $s=4$. Finally,
\[
m(P_3) \= - L(g,1) \= - \Bigl( 2 - \frac{32}{2} - \frac{18}{3} + \frac{288}{6}
\Bigr) \underset{s \to 1}\lim \, \zeta(s)\zeta(s-3) \= \frac{7
\zeta(3)}{\pi^2}\,.
\]
This again reproves~\eqref{3var} because $m(1+x_1+x_2+x_3)= \frac12m(P_3)$.
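The limit in Proposition~\ref{singL} can also be observed numerically from the $q$-expansion. The following Python sketch evaluates $(D^{-1}g)(q)$ at $q=\operatorname{e}^{-2\pi s}$ for a small $s$ and compares with $L(g,1)=-7\zeta(3)/\pi^2=-0.85255\dots$; the convergence as $s \to 0$ is very fast here because $L(g,s)$ is entire:
\begin{verbatim}
# Check of Proposition (singL) for g as in (thm2g1):
# (D^{-1} g)(e^{-2 pi s}) -> L(g,1) = -7 zeta(3)/pi^2 as s -> 0.
import numpy as np

N = 4000
sigma3 = np.zeros(N+1)
for d in range(1, N+1):
    sigma3[d::d] += d**3            # sigma_3(n) = sum_{d|n} d^3

c = 2*sigma3                        # q-coefficients of 2 E_4(z)
c[2::2] -= 32*sigma3[1:N//2+1]      # -32 E_4(2z)
c[3::3] -= 18*sigma3[1:N//3+1]      # -18 E_4(3z)
c[6::6] += 288*sigma3[1:N//6+1]     # +288 E_4(6z)
c0 = (2 - 32 - 18 + 288)/240.0      # constant term, equal to 1

s = 0.02
q = np.exp(-2*np.pi*s)
val = c0*np.log(q) + sum(c[n]*q**n/n for n in range(1, N+1))
# val -> -7 zeta(3)/pi^2 = -0.85255... as s -> 0
\end{verbatim}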
\section{Computation of $m(P_n)$ via ${\mathcal L}_{n-1}$}\label{sec:Ln1}
Observe that in general the Mahler measure of a Laurent polynomial $P$ which takes nonnegative real
values on the torus $\{|x_1|=\dots=|x_n|=1\}$ can be written
as
\[
m(P) \= \int_{\lambda_{\min}}^{\lambda_{\max}} \log(\lambda) \, a^*(\lambda) d\lambda
\]
where
\[\begin{aligned}
\lambda_{\min} &\= \underset{|x_1| = \dots = |x_n|= 1}\min \,
P(x_1,\dots,x_n)\,,\\
\lambda_{\max} &\= \underset{|x_1| = \dots = |x_n|= 1}\max
P(x_1,\dots,x_n)\,,
\end{aligned}\]
and $a^*(\lambda)$ is equal to the integral over the variety
\[
\{ P(x_1,\dots,x_n) \= \lambda \} \cap \{ |x_1| = \dots = |x_n|= 1 \}
\]
of the $(n-1)$-form $\omega_{\lambda}$ defined as the residue
\[
\frac1{(2\pi i)^n} \frac{dx_1}{x_1} \wedge \dots \wedge \frac{dx_n}{x_n} \=
\omega_\lambda \wedge d\lambda\,,
\]
i.e.
\[
a^*(\lambda) \= \int_{\{ P=\lambda\} \cap \{|x_i|=1\}} \omega_{\lambda} \,.
\]
Along the same lines as we did in Section~\ref{sec:mmde}, one can recover $m(P)$
from the generating function of the ``moments'' of $a^*(\lambda)$
\[
a_m \= \int_{\lambda_{\min}}^{\lambda_{\max}} \lambda^m \; a^*(\lambda) d\lambda \,,\qquad a(t) \=
\sum_{m=0}^{\infty} a_m \, t^m
\]
by formula~\eqref{ancont}. Indeed, the moment $a_m$ is exactly the constant term
of $P^m$, so this $a(t)$ is identical with the one in the previous section. Also
one can see it directly by repeating the old trick in our new notation: for $-\infty < t < 0$
\[\begin{aligned}
&\int_{\lambda_{\min}}^{\lambda_{\max}} \log(\lambda - \frac 1 t ) \; a^*(\lambda) d\lambda \\
&\= - \log (-t) - \sum_{m=1}^{\infty} \frac{t^m}{m} \int_{\lambda_{\min}}^{\lambda_{\max}}
\lambda^m \; a^*(\lambda) d\lambda \\
&\= - \log(- t) - \sum_{m=1}^{\infty} \frac{t^m}{m} a_m \= - \bigl( t
\frac{d}{dt}\bigr)^{-1} a(t)\,,
\end{aligned}\]
hence
\[
m(P) \= \int_{\lambda_{\min}}^{\lambda_{\max}} \log(\lambda) a^*(\lambda) d\lambda \= - {\rm Re} \; \bigl(
t \frac{d}{dt}\bigr)^{-1} a(t) \Big|_{t = \infty}
\]
where we again assume that the analytic continuation of $a(t)$ is done along the real negative half-axis.
In Section~\ref{sec:mmde} we mentioned without a proof that $a(t)$ satisfies a
differential equation~\eqref{de}. Now this fact follows from the proposition
below because $a^*(\lambda)$ is a period for the 1-parametric family of varieties
$P(x_1,\dots,x_n)=\lambda$ and it satisfies a certain differential equation
\[
\widetilde{{\mathcal L}} \bigl(\lambda, \lambda\frac{d}{d\lambda} \bigr) a^*(\lambda) \= 0\,,
\]
namely the Picard-Fuchs differential equation for this family.
Proposition~\ref{de_transform} states that moments of a solution of a
differential equation satisfy another differential equation determined by the
initial one, though in general they are solutions of this equation with a
right-hand side. This right-hand side is a simple rational function which depends on the path of
integration and on the choice of the solution along this path. In the above
situation with $a^*(\lambda)$ along the real line from $\lambda_{\min}$ to $\lambda_{\max}$
the right-hand side exceptionally appears to vanish. But later we will need a
general case as well.
For a Laurent series $\sum_n a_n t^n$ we introduce the notations
\[\begin{aligned}
& \Bigl[ \sum_n a_n t^n \Bigr]_{+} \= \sum_{n \ge 0} a_n t^n \\
& \Bigl[ \sum_n a_n t^n \Bigr]_{-} \= \sum_{n < 0} a_n t^n \\
\end{aligned}\]
for its parts with nonnegative and negative powers, respectively. One then has
\[
\sum_n a_n t^n \= \Bigl[ \sum_n a_n t^n \Bigr]_{-} \+ \Bigl[ \sum_n a_n t^n
\Bigr]_{+}\,.
\]
\begin{proposition}\label{de_transform}
For a polynomial differential operator
\[
\widetilde{{\mathcal L}}(\lambda,\theta) \= \sum_{i=0}^{M} \sum_{j=0}^{N} c_{ij} \lambda^i \theta^j \,,\quad
\theta \= \lambda \frac{d}{d\lambda}
\]
consider a solution $F(\lambda)$ of $\widetilde{{\mathcal L}} F = 0$ along some path between
$\lambda=\alpha$ and $\lambda=\beta$. Then for the generating function of its moments
\[
b(t) \= \sum_{n=0}^{\infty} t^n \int_{\alpha}^{\beta} \lambda^n F(\lambda) d\lambda
\]
one has
\[
\widetilde{{\mathcal L}}\Bigl(\frac1t,-\theta_t-1\Bigr) b(t) \= h(t)
\]
where the right-hand side $h(t)$ is a rational function which can have poles at most
at $t=0,\frac1{\alpha},\frac1{\beta}$ and is defined as follows. Let
\[
\widetilde{{\mathcal L}}^{(k)}(\lambda,\theta) \= \sum_{i=0}^M\sum_{j=k}^N c_{ij} \lambda^i \theta^{j-k}
\]
and for given $\lambda$ consider a rational function of $t$
\[
H_{\lambda}(t) \= \lambda \, \sum_{j=0}^{N-1} \bigl(\theta^j F \bigr)(\lambda) \, \Bigl[
\widetilde{{\mathcal L}}^{(j+1)}\Bigl(\frac1t,-\theta_t-1\Bigr) \frac1{1-\lambda t } \Bigr]_{+}
\,.
\]
Then
\[
h(t) \= \Bigl[ \widetilde{{\mathcal L}}\Bigl(\frac1t,-\theta_t-1\Bigr) b(t) \Bigr]_{-} \-
H_{\beta}(t) \+ H_{\alpha}(t) \,.
\]
\end{proposition}
\begin{proof}
Integration by parts yields
\[
\int_{\alpha}^{\beta} \lambda^i \bigl(\theta^j F \bigr)(\lambda) d\lambda \= \sum_{s=0}^{j-1}
\lambda^{i+1} (-i-1)^s \theta^{j-1-s} F(\lambda) \Big|_{\lambda=\alpha}^{\lambda=\beta}
\+ (-i-1)^j \int_{\alpha}^{\beta} \lambda^i F(\lambda) d\lambda \,,
\]
and we apply this formula to every term below to get
\[\begin{aligned}
0 & \= \sum_{n=0}^{\infty} t^n \int_{\alpha}^{\beta} \lambda^n \Bigl(
\widetilde{{\mathcal L}}(\lambda,\theta_{\lambda})F \Bigr)(\lambda) d\lambda \= \sum_{ij} c_{ij}
\sum_{n=0}^{\infty} t^n \int_{\alpha}^{\beta} \lambda^{n+i} \bigl(\theta^j F
\bigr)(\lambda) d\lambda \\
& \= \sum_{n=0}^{\infty} t^n \sum_{i,j} c_{ij} \Bigl[ \sum_{s=0}^{j-1}
\lambda^{n+i+1} (-n-i-1)^s \theta^{j-1-s} F(\lambda) \Big|_{\lambda=\alpha}^{\lambda=\beta} \+
(-n-i-1)^j \int_{\alpha}^{\beta} \lambda^{n+i} F(\lambda) d\lambda \Bigr] \\
& \= \sum_{k=0}^{N-1} \theta^k F(\lambda) \sum_{i,\, j \ge k+1} c_{ij}
\frac{\lambda}{t^i}\sum_{n=0}^{\infty} \lambda^{n+i} t^{n+i} (-n-i-1)^{j-1-k}
\Big|_{\lambda=\alpha}^{\lambda=\beta} \\
& \quad \+ \sum_{i,j} c_{ij} \frac1{t^i} \sum_{n=0}^{\infty} (-n-i-1)^j t^{n+i}
\int_{\alpha}^{\beta} \lambda^{n+i} F(\lambda) d\lambda \\
& \= \sum_{k=0}^{N-1} \theta^k F(\lambda) \Bigl[
\widetilde{{\mathcal L}}^{(k+1)}\Bigl(\frac1{t},-\theta_t-1\Bigr)\frac1{1-\lambda t} \Bigr]_{+}
\Big|_{\lambda=\alpha}^{\lambda=\beta} \+ \Bigl[
\widetilde{{\mathcal L}}\Bigl(\frac1t,-\theta_t-1\Bigr) b(t) \Bigr]_{+} \\
& \= H_{\beta}(t) \- H_{\alpha}(t) \+ \Bigl[
\widetilde{{\mathcal L}}\Bigl(\frac1t,-\theta_t-1\Bigr) b(t) \Bigr]_{+} \,.
\end{aligned}\]
\end{proof}
Now we introduce another idea which will allow us to apply the above
proposition to the case of linear polynomials. We consider $P=P_n$ as defined
in~\eqref{Pn}, and let $\widetilde{{\mathcal L}}_n$ be the corresponding Picard-Fuchs
differential operator. It can be easily recovered from ${\mathcal L}_n$ since (up to a
simple multiplier) the operators ${\mathcal L}_n(t, \theta_t)$ and $\widetilde{{\mathcal L}}_n\Bigl(\frac1t,-\theta_t-1\Bigr)$ must be equal. For example, with
\[\begin{aligned}
\widetilde{{\mathcal L}}_2 &\= 9 \theta^2 - \lambda (10 \theta^2 + 10 \theta + 3) + \lambda^2(\theta+1)^2 \\
\widetilde{{\mathcal L}}_3 &\= 64 \theta^3 - 2\lambda (2 \theta+1)(5 \theta^2 + 5 \theta + 2) + \lambda^2
(\theta+1)^3 \\
\end{aligned}\]
one can easily check that
\begin{equation}\label{L21t}
\widetilde{{\mathcal L}}_2\Bigl(\frac1t,-\theta-1\Bigr) \= \frac1{9 t^2}
\widetilde{{\mathcal L}}_2(9 t, \theta) \= \frac1{t^2} {\mathcal L}_2(t,\theta)
\end{equation}
and
\begin{equation}\label{L31t}
\widetilde{{\mathcal L}}_3\Bigl(\frac1t,-\theta-1\Bigr) \= -\frac1{64 t^2}
\widetilde{{\mathcal L}}_3(64 t, \theta) \= -\frac1{t^2} {\mathcal L}_3(t,\theta)\,.
\end{equation}
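Indeed, the substitution $\theta \mapsto -\theta-1$ interchanges $\theta^2$ and
$(\theta+1)^2$ and leaves $10\theta^2+10\theta+3$ invariant, so that
\[
\widetilde{{\mathcal L}}_2\Bigl(\frac1t,-\theta-1\Bigr) \= 9(\theta+1)^2 \- \frac1t \bigl(10\theta^2+10\theta+3\bigr) \+ \frac1{t^2}\,\theta^2
\= \frac1{t^2}\Bigl(\theta^2 \- t\bigl(10\theta^2+10\theta+3\bigr) \+ 9t^2(\theta+1)^2\Bigr) \= \frac1{t^2}\,{\mathcal L}_2(t,\theta)\,,
\]
and~\eqref{L31t} is verified in the same way.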
Let $a^*(\lambda)$ be the solution of $\widetilde{{\mathcal L}}_n a^* = 0$ such that
\begin{equation}\label{mPn}
m(P_n) \= \int_0^{(n+1)^2} \log(\lambda) a^*(\lambda) d\lambda\,.
\end{equation}
Applying Jensen's formula~\eqref{Jensen} in the variable $x_{n+1}$ gives
\[\begin{aligned}
&\frac12 m(P_{n+1}) \= \frac1{(2 \pi i)^{n+1}} \int_{|x_i|=1}
\log|1+x_1+\dots+x_{n+1}| \; \frac{dx_1}{x_1} \dots \frac{dx_{n+1}}{x_{n+1}} \\
&\= \frac1{(2 \pi i)^{n}} \int_{|x_i|=1, |1+x_1+\dots+x_n|>1}
\log|1+x_1+\dots+x_n| \; \frac{dx_1}{x_1} \dots \frac{dx_n}{x_n}
\end{aligned}\]
or, equivalently, in terms of $a^*(\lambda)$,
\begin{equation}\label{mPn1}
m(P_{n+1}) \= \int_1^{(n+1)^2} \log(\lambda) a^*(\lambda) d\lambda\,.
\end{equation}
Observe that~\eqref{mPn} and~\eqref{mPn1} differ only by the lower limit of
integration. This approach allows us to state that for every $n$ there is a
simple rational function $h_n(t)$ and an analytic solution $b_n(t)$ of
\[
{\mathcal L}_n\Bigl(t, t \frac{d}{dt}\Bigr) b_n(t) \= h_n(t)
\]
such that
\[
m(P_{n+1}) \= - {\rm Re} \; \bigl( t \frac{d}{dt}\bigr)^{-1} b_n(t) \Big|_{t =
\infty}\,.
\]
Below we give exact statements for $n=2$ and $n=3$. We will formulate our results
using solutions with rational coefficients rather than their
transcendental linear combinations. This makes sense from the number-theoretical
point of view and will be used later.
\bigskip
\begin{theorem}\label{Thm1} Take ${\mathcal L}_2(t, \theta) \= \theta^2 \- t (10 \theta^2 + 10 \theta + 3)
\+ 9 t^2 (\theta+1)^2$ and consider the following solutions, analytic at $t=0$, of
\[\begin{aligned}
& {\mathcal L}_{2}\bigl(t, t \frac{d}{dt}\bigr)\, \phi(t) \= 0 \qquad \phi(t)\=1+3t+\dots
\\
& {\mathcal L}_{2}\bigl(t, t \frac{d}{dt}\bigr)\, \psi(t) \= \frac{t}{1-t} \qquad \psi(t)
\= t + \dots
\end{aligned}\]
Then
\[
m(P_3) \= - {\rm Re} \, \bigl( t \frac{d}{dt}\bigr)^{-1} \Bigl[\frac34 \phi(t) +
\frac{6}{\pi^2} \psi(t)\Bigr] \Big|_{t = \infty} \,.
\]
\end{theorem}
\bigskip
Although all Mahler measures in this theorem have been computed before, we will
use our result to relate them to double L-values of modular forms later. The next
theorem deals with the interesting case $n=4$.
\bigskip
\begin{theorem}\label{Thm2} Let $\Omega_{15}$ be the Chowla-Selberg period for
the field $K=\mathbb{Q}(\sqrt{-15})$ as in~\eqref{ChS15} and let $b(t)$ be the unique
solution, analytic at $t=0$, of the non-homogeneous differential equation
\[
{\mathcal L}_{3}\bigl(t, t \frac{d}{dt}\bigr)\, b(t) \= -\frac{3\sqrt{5}\Omega_{15}^2}{10
\pi}\frac{t(212 t^2 + 251t - 13)}{(1-t)^3} \+ \frac{3\sqrt{5}}{5 \pi^3
\Omega_{15}^2} \frac{t}{1-t}
\]
satisfying $b(t) \= \frac45 + O(t)$. Then
\[
m(P_4) \= - {\rm Re} \; \bigl( t \frac{d}{dt}\bigr)^{-1} \, b(t) \Big|_{t =
\infty}\,.
\]
\end{theorem}
\section{Proofs}
Recall from Section~\ref{sec:mmde}
that the function
\[
a(t) \= \frac{1}{(2\pi i)^n}\int_{|x_1|=\dots=|x_n|=1} \frac1{1-t
P_n(x_1,\dots,x_n)} \; \frac{dx_1}{x_1} \dots \frac{dx_n}{x_n}
\]
is the unique solution of ${\mathcal L}_n a = 0$, analytic at $t=0$, satisfying $a(0)=1$.
Let us write $a(t)$ as
\[
a(t) \= \int_0^{(n+1)^2} \frac{a^*(\lambda)}{1-t \lambda} d\lambda\,,
\]
where $a^*(\lambda)$ is a solution of $\widetilde{{\mathcal L}}_n a^* = 0$.
Let us also introduce
\[\begin{aligned}
b(t) &\= \int_1^{(n+1)^2} \frac{a^*(\lambda)}{1-t \lambda} d\lambda \= b_0 + b_1 t + \dots \\
c(t) & \= \int_0^{1} \frac{a^*(\lambda)}{1-t \lambda} d\lambda \= c_0 + c_1 t + \dots \\
\end{aligned}\]
In Section~\ref{sec:Ln1} we showed that
\[
m(P_{n+1}) \= - {\rm Re} \; \bigl( t \frac{d}{dt}\bigr)^{-1} b(t) \Big|_{t = \infty}.
\]
In this section we will compute the coefficient $b_0$ and the rational function
\[
h(t) := {\mathcal L}_n b(t) = -{\mathcal L}_n c(t)\,,
\]
for $n=2,3$. The proofs of Theorems~\ref{Thm1} and~\ref{Thm2} go along the same lines. First, we identify the solution $a^*(\lambda)$ for $\lambda \in [0,1]$ in terms of the Frobenius basis in the space of solutions near $\lambda=0$. In order to do this we use the asymptotics of $a(t)$ for large $t$, which we find from the modular parametrization of the differential equation. As soon as we know $a^*(\lambda)$ explicitly, we apply Proposition~\ref{de_transform} to finish the proof.
We note that the differential operator $\widetilde{{\mathcal L}}_2$ has singularities at $\lambda=0,1$ which are regular singular points
of maximal unipotent monodromy, whereas the operator $\widetilde{{\mathcal L}}_3$ has a regular singular point of
maximal unipotent monodromy at $\lambda=0$ and is nonsingular at $\lambda=1$. The period $\Omega_{15}$ appears in Theorem~\ref{Thm2} because the point $\lambda=1$ corresponds under the modular
parametrization of $\widetilde{{\mathcal L}}_3$ to a CM point of conductor $15$.
It hopefully will not confuse the reader that we use the same notation
\[
a(t), a^*(\lambda), b(t), c(t)
\]
for the integrals corresponding to the case $n=2$ in Lemmas~\ref{asymp_2}--\ref{rhs2_2} and to the case $n=3$ in Lemmas~\ref{asymp_3}--\ref{rhs2_3}.
\begin{proof}[Proof of Theorem~\ref{Thm1}]
Here $a(t) = \phi(t)$.
From Proposition~\ref{de_transform} and Lemmas~\ref{rhs1_2} and \ref{rhs2_2} below it follows that
\[
\widetilde{{\mathcal L}_2}\Bigl(\frac1t,-\theta-1 \Bigr) c(t) \= -\frac{6}{\pi^2} \Bigl( \frac1{1-t} +\frac{1}{t}\Bigr)
\= -\frac{6}{\pi^2} \frac1{t(1-t)}.
\]
According to~\eqref{L21t} we then have
\[
{\mathcal L}_2(t,\theta) c(t) \= t^2 \widetilde{{\mathcal L}_2}\Bigl(\frac1t,-\theta-1 \Bigr) c(t) \= -\frac{6}{\pi^2}\frac{t}{1-t} \,
\]
and then
\[
{\mathcal L}_2(t,\theta) b(t) \= -{\mathcal L}_2(t,\theta) c(t) = \frac{6}{\pi^2}\frac{t}{1-t}.
\]
From Lemma~\ref{rhs2_2} we have $b_0 \= 1 - c_0 \= \frac34$
and therefore $b(t) = \frac34 \phi(t) + \frac{6}{\pi^2} \psi(t)$.
To finish the proof it remains to verify the three lemmas that follow below.
\end{proof}
\begin{lemma} As $\lambda \to 0$, we have
\label{asymp_2}
\[
a^*(\lambda) \= \frac1{\sqrt{3} \pi} + O(\lambda)\,.
\]
As $\lambda \to 1^-$, we have
\[
a^*(\lambda) \= -\frac{3}{4 \pi^2} \log(1-\lambda) + O(1)\,.
\]
\end{lemma}
\begin{proof}
Using the modular parametrization~\eqref{L2mpar} we find that when $t \to -\infty$ along the negative real axis (this corresponds to $z$ going down the ray $\frac12 + i \mathbb R_{+}$)
\begin{equation}\label{asa1}
t \, a(t) \= \frac1{\sqrt{3} \pi} \log\Bigl( -\frac1{t} \Bigr) \+ O(1) \,.
\end{equation}
On the other hand
\begin{equation}\label{asa2}
t \, a(t) \= \int_0^{9} \frac{a^*(\lambda)}{1/t - \lambda} d\lambda \= - \int_0^{9} \frac{a^*(\lambda)}{s + \lambda} d\lambda \Big|_{s = -\frac1t}\,.
\end{equation}
Let us write $a^*(\lambda) \= \alpha_0 \phi_0(\lambda) \+ \alpha_1 \phi_1(\lambda)$ in terms of the Frobenius basis
\[\begin{aligned}
&\phi_0(\lambda) \= 1 + O(\lambda)\,, \\
&\phi_1(\lambda) \= \log(\lambda) \phi_0(\lambda) + O(\lambda) \\
\end{aligned}\]
of solutions near $\lambda=0$. One can easily check that for any $\varepsilon>0$
\[\begin{aligned}
\int_0^{\varepsilon}\frac{d\lambda}{s+\lambda} &\= - \log s + O(1) \,,\\
\int_0^{\varepsilon}\frac{\log{\lambda} d\lambda}{s+\lambda} &\= -\frac12 (\log s)^2 + O(1)\\
\end{aligned}\]
as $s \to 0$. Comparing~\eqref{asa1} and~\eqref{asa2}, we see that $\alpha_1=0$ and $\alpha_0=\frac1{\sqrt{3} \pi}$.
Now let
\[\begin{aligned}
&\kappa_0(\lambda) \= 1 + O(\lambda-1)\,, \\
&\kappa_1(\lambda) \= \log(\lambda-1) \kappa_0(\lambda) + O(\lambda-1) \\
\end{aligned}\] be the Frobenius basis at $\lambda=1$
and write $a^*(\lambda)= \alpha_0 \kappa_0(\lambda) + \alpha_1 \kappa_1(\lambda)$ near $\lambda = 1$, with new constants $\alpha_0,\alpha_1$.
Using our modular parametrization we find that
\[
\alpha_1 \= \underset{\lambda \to 1_{-}} \lim \frac{a^*(\lambda)}{\log(1-\lambda)} \= \underset{s \to 0_{+}}
\lim \frac1{\sqrt{3}\pi} \frac{f(z)}{\log(1-9 t(z))} \Big|_{z=i s} \= - \frac{3}{4 \pi^2} \,.
\]
\end{proof}
\begin{lemma}
\label{rhs1_2}
In the notation of Proposition~\ref{de_transform} applied to
$\widetilde{{\mathcal L}}=\widetilde{{\mathcal L}}_2,\; \alpha=0, \; \beta=1,\; F(\lambda)=a^*(\lambda)$
we have:
\[\begin{aligned}
H_0(t) & \= 0\,, \\
H_1(t) & \= \frac{6}{\pi^2 \, (1-t)}\,.
\end{aligned}\]
\end{lemma}
\begin{proof}
In the course of the proof we rely on the asymptotics given in Lemma~\ref{asymp_2}.
Since $\lambda=0,1$ are singular points we are going to compute $H_0(t)$ and $H_1(t)$
as the corresponding limits of $H_{\lambda}(t)$. We have
\[\begin{aligned}
\Bigl[ \widetilde{{\mathcal L}}_2^{(2)}\Bigl(\frac1t,-\theta-1\Bigr) \frac1{1- \lambda t}
\Bigr]_{+} &\= \Bigl[ \Bigl( 9 -\frac{10}t+\frac1{t^2} \Bigr) \frac1{1- \lambda t}
\Bigr]_{+} \\
&\= \frac{9 t^2-10t+1}{t^2(1- \lambda t)} - \frac1{t^2} - \frac{\lambda-10}{t} \=
\frac{(\lambda-1)(\lambda-9)}{1-\lambda t} \\
\end{aligned}\]
and
\[\begin{aligned}
\Bigl[ \widetilde{{\mathcal L}}_2^{(1)} & \Bigl(\frac1t,-\theta-1\Bigr) \frac1{1- \lambda t}
\Bigr]_{+} \= \Bigl[ \Bigl( \bigl(9 -\frac{10}t+\frac1{t^2} \bigr)(-\theta-1) +
\bigl(-\frac{10}t+\frac2{t^2}\bigr) \Bigr) \frac1{1- \lambda t} \Bigr]_{+} \\
&\= - \bigl(9 -\frac{10}t+\frac1{t^2} \bigr)\frac1{(1-\lambda t)^2} +
\bigl(-\frac{10}t+\frac2{t^2}\bigr) \frac1{1- \lambda t} - \frac1{t^2} \=
-\frac{(\lambda-1)(\lambda-9)}{(1-\lambda t)^2} \,.
\end{aligned}\]
These functions have finite limits when $\lambda \to 0$ and now we see that
$H_0(t)=0$ because $a^*(\lambda)$ is analytic at $\lambda=0$ and therefore
$\underset{\lambda\to 0}\lim \lambda \, \theta^j a^*(\lambda) = 0$ for any $j \ge 0$.
Since $\underset{\lambda\to 1_{-}}\lim (\lambda-1) \, a^*(\lambda) = 0$ and
$\underset{\lambda \to 1_{-}}\lim (\lambda-1) \,\theta a^*(\lambda) = -\frac{3}{4 \pi^2}$ we find that $H_1(t)=\frac{6}{\pi^2 \, (1-t)}$.
\end{proof}
\medskip
\begin{lemma}
\label{rhs2_2}
The first coefficient $c_0$ in the power series expansion of $c(t)$ is equal to $\frac14$ and we have
\[
\Bigl[\widetilde{{\mathcal L}}_2\Bigl(\frac1t,-\theta-1\Bigr) c(t) \Bigr]_{-} \= -\frac{6}{\pi^2 \,t}.
\]
\end{lemma}
\begin{proof}
It is easy to compute that
\[
\Bigl[\widetilde{{\mathcal L}}_2\Bigl(\frac1t,-\theta-1\Bigr) c(t) \Bigr]_{-} \= \frac{-3
c_0 + c_1}{t}\,.
\]
Using the modular parametrization~\eqref{L2mpar} (with $t$ and $f$
as in~\eqref{L2mpar} and $\lambda = 9 t$, one has ${\mathcal L}_{\lambda, f}=\frac1{9
\lambda}\widetilde{{\mathcal L}}_2(\lambda,\lambda \frac{d}{d\lambda})$) we compute that
\[\begin{aligned}
c_0 \= \int_{0}^{1} a^*(\lambda) d\lambda & \= \frac{9}{\sqrt{3} \pi} \int_{i \infty}^0
f(z) t'(z) dz \\
& \= \frac{18}{\sqrt{3}} \int_{0}^{\infty} f(z)^3 t(z)(1-9 t(z))(1-t(z))
\Big|_{z=is} \, ds \= \frac14\\
\end{aligned}\]
and
\[\begin{aligned}
c_1 - 3 c_0 & \= \int_{0}^{1} (\lambda-3) a^*(\lambda) d\lambda \\
& \= \frac{18}{\sqrt{3}} \int_{0}^{\infty} (9 t(z)-3) f(z)^3 t(z)(1-9
t(z))(1-t(z)) \Big|_{z=is} \, ds\\
& \= - \frac{6}{\pi^2}\,.
\end{aligned}\]
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm2}]
We let $\Omega = \Omega_{15}$.
According to Proposition~\ref{de_transform} and Lemmas~\ref{rhs1_3} and \ref{rhs2_3} below we
compute that
\[\begin{aligned}
\widetilde{{\mathcal L}_3}\Bigl(\frac1t,-\theta-1 \Bigr) c(t) & \=
\frac{3\sqrt{5}\Omega^2}{10\pi} \Bigl( \frac{13}{t} - \frac{-13 t^2 + 251t + 212}{(1-t)^3}\Bigr)
\+ \frac{3\sqrt{5}}{5 \pi^3 \Omega^2} \Bigl( \frac1{t} + \frac1{1-t} \Bigr) \\
& \= \frac{3\sqrt{5}\Omega^2}{10\pi}\frac{-212 t^2 - 251t + 13}{t(1-t)^3} \+ \frac{3\sqrt{5}}{5 \pi^3 \Omega^2} \frac1{t(1-t)}.
\end{aligned}\]
Now it follows from~\eqref{L31t} that we have
\[\begin{aligned}
-h(t) = {\mathcal L}_3(t,\theta) c(t) & \= -t^2 \widetilde{{\mathcal L}_3}\Bigl(\frac1t,-\theta-1 \Bigr) c(t) \\
& \= \frac{3\sqrt{5}\Omega^2}{10 \pi}\frac{t(212 t^2 + 251t - 13)}{(1-t)^3} \-
\frac{3\sqrt{5}}{5 \pi^3 \Omega^2} \frac{t}{1-t}.
\end{aligned}\]
Therefore the function
\[
b(t) \= \int_1^{16} \frac{a^*(\lambda)}{1-t \lambda} d\lambda
\]
satisfies ${\mathcal L}_3 b = h(t)$ and its power series expansion at $t=0$ starts with $b_0 \= a_0 -
c_0 \= \frac45$.
The proof will be finished after verifying the three lemmas below.
\end{proof}
\begin{lemma}
\label{asymp_3}
In terms of the Frobenius basis
\[\begin{aligned}
&\phi_0(\lambda) \= 1 + O(\lambda) \,, \\
&\phi_1(\lambda) \= \log(\lambda) \phi_0(\lambda) + O(\lambda) \= \log(\lambda) + o(1) \,,\\
&\phi_2(\lambda) \= \log(\lambda)^2 \phi_0(\lambda) + O(\lambda) \= \log(\lambda)^2 + o(1) \\
\end{aligned}\]
of solutions near $\lambda=0$ we have
\[
a^*(\lambda) \= -\frac3{8 \pi^2} \bigl( \phi_1(\lambda) \- 6 \log 2 \,\phi_0(\lambda) \bigr)
\,.
\]
At $\lambda = 1$ we have
\[\begin{aligned}
a^*(1) &\= 0.1649669005300320... \= \frac{3 \sqrt{5}}{2 \pi}\Omega^2 \,,\\
\theta a^*(1) &\= -0.032993380106006... \= -\frac{3\sqrt{5}}{10\pi}\Omega^2 \,,\\
\theta^2 a^*(1) &\= 0.00330836512971504... \= \frac{\sqrt{5}}{150} \Bigl( \frac{13
\Omega^2}{\pi} - \frac2{\pi^3 \Omega^2} \Bigr).\\
\end{aligned}\]
\end{lemma}
\begin{proof}
With the help of the modular parametrization~\eqref{L3mpar} we
find that when $t \to -\infty$ along the negative real axis (this corresponds to $z$
going down to $0$ along the imaginary axis)
\[
t a(t) \+ \frac{3}{16 \pi^2} \log\Bigl( -\frac1{64 t} \Bigr)^2 \to 0\,.
\]
On the other hand
\[
t \, a(t) \= \int_0^{16} \frac{a^*(\lambda)}{1/t - \lambda} d\lambda \= - \int_0^{16}
\frac{a^*(\lambda)}{s + \lambda} d\lambda \Big|_{s = -\frac1t}
\]
and since for any $\varepsilon > 0$ one has when $s \to 0$
\[\begin{aligned}
\int_0^{\varepsilon}\frac{d\lambda}{s+\lambda} &\= - \log s + O(1)\,, \\
\int_0^{\varepsilon}\frac{\log{\lambda} d\lambda}{s+\lambda} &\= -\frac12 (\log s)^2 + O(1) \,,\\
\int_0^{\varepsilon}\frac{(\log{\lambda})^2 d\lambda}{s+\lambda} &\= -\frac13 (\log s)^3 +
O\bigl(\log(s)^2\bigr) \,,\\
\end{aligned}\]
we find, writing $a^*(\lambda) \= \alpha_0 \phi_0(\lambda) \+ \alpha_1 \phi_1(\lambda) \+ \alpha_2 \phi_2(\lambda)$,
that $\alpha_2=0$, $\alpha_1=-\dfrac{3}{8 \pi^2}$ and $\alpha_0=\dfrac{9}{4 \pi^2} \log 2$.
\medskip
We indicate how to find the values $\theta^j a^*(1)$ for $j=0,1,2$.
With modular $t$ and $f$
from~\eqref{L3mpar} and $\lambda = 64 t$ one has ${\mathcal L}_{\lambda, f}=\frac1{64
\lambda}\widetilde{{\mathcal L}}_3(\lambda,\lambda \frac{d}{d\lambda})$. This $\lambda(z)$ takes real values
from the interval $(0,1]$ for $z \= \frac12 + i s$ as $s$ decreases from $+\infty$
to $\frac{\sqrt{15}}{6}$. In particular, $\lambda(\tau)=1$ for $\tau = \frac12+\frac{\sqrt{-15}}6$. Using asymptotics at
$\infty$ one can check that on the vertical half-line from $\tau$ to $\infty$
\[
a^*\Bigl(\lambda(z)\Bigr) \= -\frac3{8 \pi^2} \cdot 2 \pi i \bigl(z -
\frac12 \bigr) \, f(z) \,.
\]
Now the problem is reduced to computing the values of modular forms and their
derivatives at a CM-point of conductor $15$, leading to expressions involving
$\Omega$ and $\pi$, see \cite[Propositions~26,~27 and Corollary of Proposition~27]{Zag}.
\end{proof}
\begin{lemma}
\label{rhs1_3}
In the notation of Proposition~\ref{de_transform} applied to
$\widetilde{{\mathcal L}}=\widetilde{{\mathcal L}}_3,\; \alpha=0, \; \beta=1,\; F(\lambda)=a^*(\lambda)$
we have:
\[\begin{aligned}
H_0(t) & \= 0 \\
H_1(t) & \= \frac{3 \Omega^2\sqrt{5}}{10\pi} \frac{-13t^2+251t+212}{(1-t)^3}- \frac{3\sqrt{5}}{5\pi^3\Omega^2} \frac{1}{1-t}
\end{aligned}\]
\end{lemma}
\begin{proof}
One easily checks that $\underset{\lambda \to 0}\lim \lambda \, \theta^j a^*(\lambda) = 0$,
whence $H_0(t)=0$. In order to compute $H_1(t)$ we need
\[\begin{aligned}
&\Bigl[ \widetilde{{\mathcal L}}_3^{(3)}\Bigl(\frac1t,-\theta_t-1\Bigr) \frac1{1-t}
\Bigr]_{+} \= \frac{45}{1-t}\\
&\Bigl[ \widetilde{{\mathcal L}}_3^{(2)}\Bigl(\frac1t,-\theta_t-1\Bigr) \frac1{1-t}
\Bigr]_{+} \= \frac{9(t-6)}{(1-t)^2}\\
&\Bigl[ \widetilde{{\mathcal L}}_3^{(1)}\Bigl(\frac1t,-\theta_t-1\Bigr) \frac1{1-t}
\Bigr]_{+} \= \frac{29+68 t-7t^2}{(1-t)^3},\\
\end{aligned}\]
and then the formula for $H_1(t)$ follows after a simple computation with
values of $\theta^j a^*(1)$ provided by Lemma~\ref{asymp_3}.
\end{proof}
\begin{lemma}
\label{rhs2_3}
The first coefficient $c_0$ in the power series expansion of $c(t)$ is equal to $\frac15$ and we have
\[
\Bigl[\widetilde{{\mathcal L}}_3\Bigl(\frac1t,-\theta-1\Bigr) c(t) \Bigr]_{-} \=
\Bigl(\frac{39\sqrt{5}}{10\pi}\Omega^2 + \frac{3\sqrt{5}}{5\pi^3\Omega^2}\Bigr) \frac1{t}.
\]
\end{lemma}
\begin{proof}
It is easy to compute that
\[
\Bigl[\widetilde{{\mathcal L}}_3\Bigl(\frac1t,-\theta-1\Bigr) c(t) \Bigr]_{-} \= \frac{4
c_0 - c_1}{t}\,.
\]
We have
\[\begin{aligned}
c_0 &\= \int_0^1 a^*(\lambda) d\lambda \= -\frac3{8 \pi^2} \cdot 2 \pi i \int_{i
\infty}^{\tau} \bigl(z - \frac12 \bigr) \, f(z) \, \lambda'(z) dz \\
& \bigl(\text{ here we use that } \, \lambda = 64 t \, \text{ and } \bigl(q
\dfrac{dt}{dq}\bigr)^2 / f^2 \= t^2 (1-4 t)(1 - 16t ) \quad \bigr) \\
&\= -\frac3{8 \pi^2} \cdot (2 \pi i)^2 \cdot 64 \int_{i \infty}^{\tau} \bigl(z
- \frac12 \bigr) \, f(z)^2 t(z) \sqrt{(1 - 4 \, t(z))(1-16 \, t(z))} dz \\
&\= 96 \int_{\frac{\sqrt{15}}{6}}^{\infty} s \, f(z)^2 t(z) \sqrt{(1 - 4 \,
t(z))(1-16 \, t(z))} \Big|_{z = \frac12+is} \, ds \= \frac 15
\end{aligned}\]
and
\[\begin{aligned}
c_1 &- 4 c_0 \= \int_0^1 (\lambda - 4) a^*(\lambda) d\lambda \\
&\= 96 \int_{\frac{\sqrt{15}}{6}}^{\infty} s \, f(z)^2 (64 \, t(z) - 4) t(z)
\sqrt{(1 - 4 \, t(z))(1-16 \, t(z))} \Big|_{z = \frac12+is} \, ds \\
&\= -0.708951451918989714... \= - 7 \, a^*(1) - 9 \, \theta a^*(1) + 45 \, \theta^2a^*(1) \\
&\= -\frac{39\sqrt{5}}{10\pi}\Omega^2 - \frac{3\sqrt{5}}{5\pi^3\Omega^2}
\end{aligned}\]
\end{proof}
\section{Double L-values of modular forms}\label{sec:dL}
In Theorem~\ref{Thm1} we are led to the evaluation of
\[
\bigl( t \frac{d}{dt}\bigr)^{-1} \psi(t) \Big|_{t = \infty}
\]
where $\psi$ is the unique analytic at $t=0$ solution of
\[
{\mathcal L}_{2}\bigl(t, t \frac{d}{dt}\bigr)\, \psi(t) \= \frac{t}{1-t}
\]
which satisfies the condition $\psi(t)=t+o(t)$. The same happens in
Theorem~\ref{Thm2}. Putting this situation into a more general context, consider
a solution of a non-homogeneous differential equation of order $k+1$ which has a
modular parametrization, i.e.
\[
{\mathcal L}_{t,f} \; \psi \= h(t) \,,
\]
where $t$ is a modular function, $f$ is a modular form of weight $k$, ${\mathcal L}_{t,f}$
is defined by~\eqref{Ltf} and $h(t)$ is a function of $t$ which will be just
rational in our cases. We can consider $\psi=\psi(t(z))$ as a function in the
upper half-plane and we rewrite the above differential equation as
\[
\frac1{Dt \cdot f} D^{k+1} \frac{\psi}{f} \= h(t) \,,
\]
or
\[
D^{k+1} \frac{\psi}{f} \= h(t) \cdot Dt \cdot f \,.
\]
Therefore $\dfrac{\psi}f$ is an Eichler integral of the modular form $h(t) \cdot
Dt \cdot f$ of weight $k+2$. (This conclusion is precisely the statement of
Lemma~1 in~\cite{Yang}.) Let us assume in addition that the modular function $t$
takes values $0$ and $\infty$ at $q=0$ and $q=1$ correspondingly. Then
\begin{equation}\label{dL_first}
\bigl( t \frac{d}{dt}\bigr)^{-1} \psi(t) \Big|_{t = \infty} \= D^{-1} \Bigl(
\frac{Dt}{t} \psi \Bigr)\, \Big|_{q=1} \= D^{-1} \Bigl( g_2 \, D^{-k-1} g_1
\Bigr) \, \Big|_{q=1}
\end{equation}
where
\[\begin{aligned}
g_1 \= h(t) \cdot Dt \cdot f \,, \qquad g_2 = \frac{Dt}{t} f
\end{aligned}\]
are two modular forms of weight $k+2$. According to the proposition below, the
right-hand side of~\eqref{dL_first} appears to be a double L-value of these two forms. Let us
give the definition.
Let $g_1=\sum_{n \ge 0} a_n q^n$ and $g_2=\sum_{m \ge 0} b_m q^m$ be two modular forms of weight $k$ on a congruence subgroup of ${\rm SL}(2,\mathbb Z)$, and assume in addition that $a_0=0$. Their double L-function (denoted by $L^{\bullet}$ in \cite{Ramesh}) is defined for ${\rm Re}(s_1+s_2)>2k$, ${\rm Re} \, s_2 >k$
by
\[
L(g_1,g_2,s_1,s_2) \= \sum_{n=1}^{\infty} \sum_{m=0}^{\infty} \frac{a_n
b_m}{n^{s_1}(n+m)^{s_2}}\,.
\]
The question of the analytic continuation simultaneously in the two variables $s_1,s_2$ is
rather delicate and we do not consider it here. However, for
any fixed integer $s_1=p$ the function $L(g_1,g_2,p,s_2)$ is well-defined for
$s_2$ with sufficiently large real part, and we can easily prove its analytic
continuation in this variable when $p>0$. In order to do this one writes
(\cite{Ramesh})
\begin{equation}\label{2.3}
L(g_1,g_2,p,s_2) \= \frac{(2\pi)^{p+s_2}}{\Gamma(p)\Gamma(s_2)} \sum_{m=0}^{p-1}
\Lambda(g_1,g_2,p-m,s_2+m)
\end{equation}
where
\[
\Lambda(g_1,g_2,s_1,s_2) \= \int_0^{\infty} t^{s_2-1} g_2(i t) \int_{t}^{\infty}
v^{s_1-1} g_1(i v) dv \, dt \,.
\]
Now observe that these integrals are well defined for all $s_1,s_2$. Indeed, this
follows from the estimates
\[\begin{aligned}
\int_{t}^{\infty} v^{s_1-1} g_1(i v) dv &\= O(t^{s_1-1} e^{-2 \pi t})\,, \quad t
\to \infty \\
&\= O(t^{s_1-k} e^{- \frac{2 \pi}t })\,, \quad t \to 0\\
\end{aligned}\]
since $g_1(it) = O \Bigl( t^{-k} e^{- \frac{2 \pi}t } \Bigr)$ when $t \to 0$.
Therefore formula~\eqref{2.3} gives the analytic continuation of~$L(g_1,g_2,p,s_2)$
in the variable $s_2$ with integer $p>0$. Moreover, this function is holomorphic in the entire complex plane because $1/\Gamma(s_2)$ is holomorphic, and we can speak of ``double L-values'' $L(g_1,g_2,p_1,p_2)$ with integers $p_1,p_2$ whenever $p_1>0$. Notice also that $L(g_1,g_2,p_1,p_2)=0$ if $p_2 \le 0$ as one can see from~\eqref{2.3} since $\Gamma(s_2)$ has poles at nonpositive integers.
\begin{proposition}\label{doubL} Let $g_1, g_2$ be two modular forms of weight $k$ on a congruence subgroup, $g_1$ vanishing at $\infty$. Then for any integers $0< p_1 \le k$ and $p_2 < k$ one has
\begin{equation}\label{2.2}
\underset{q \to 1} \lim \; D^{-p_2}{\Bigl(g_2 \cdot D^{-p_1} g_1 \Bigr)}(q) \=
L(g_1,g_2,p_1,p_2)
\end{equation}
\end{proposition}
\begin{proof} Whenever ${\rm Re} \, (p_2 + w) > k$ one has
\[\begin{aligned}
\frac{\Gamma(w)}{(2\pi)^w} &L(g_1,g_2,p_1,p_2+w) \= \frac{\Gamma(w)}{(2\pi)^w} \sum_{n=1}^{\infty} \sum_{m=0}^{\infty} \frac{a_n
b_m}{n^{p_1}(n+m)^{p_2+w}} \\
&\= \sum_{n=1}^{\infty} \sum_{m=0}^{\infty} \frac{a_n
b_m}{n^{p_1}(n+m)^{p_2}} \int_0^{\infty} t^{w-1} e^{-2\pi t (n+m)} dt\\
&\= \int_0^{\infty} t^{w-1}
D^{-p_2}{\Bigl(g_2 \cdot D^{-p_1} g_1 \Bigr)} (it) dt \,.
\end{aligned}\]
By Mellin's inversion theorem with an arbitrary real $c > k-p_2$ one has
\[\begin{aligned}
&D^{-p_2}{\Bigl(g_2 \cdot D^{-p_1} g_1 \Bigr)} (it) \= \frac1{2\pi i} \int_{c-i\infty}^{c+i\infty}
\frac{\Gamma(w)}{(2\pi t)^w}
L(g_1,g_2,p_1,p_2+w) dw \\
&\;\= L(g_1,g_2,p_1,p_2) \+ \frac1{2\pi i} \int_{-\varepsilon-i\infty}^{-\varepsilon+i\infty} \frac{\Gamma(w)}{(2\pi t)^w} L(g_1,g_2,p_1,p_2+w) dw
\end{aligned}\]
for any $0 < \varepsilon < 1$, where we moved the path of integration using the fact that $L(g_1,g_2,p_1,p_2+w)$ is everywhere holomorphic in $w$. The last integral obviously vanishes when $t\to 0$, and~\eqref{2.2} follows.
\end{proof}
In the case of Theorem~\ref{Thm1} we use the modular
parametrization~\eqref{L2mpar} and $h(t)=\frac1{1-t}$. Since $t=\infty$ at
$z=\frac12$ we consider the shifted forms
\begin{equation}\label{thm1gs}\begin{aligned}
g_1(z) & \= \Bigl( \frac{Dt}{t} f \Bigr)(z+\frac12) \= 1+q-5q^2+q^3+11
q^4-24q^5+\dots \\
& \= E_{3,\chi_{-3}}(z) - 2 E_{3,\chi_{-3}}(2z) - 8 E_{3,\chi_{-3}}(4z) \\
g_2(z) & \= \Bigl( \frac{Dt}{1-t} f \Bigr)(z+\frac12) \= -q-4q^2-q^3+16
q^4+24q^5-4q^6+\dots \\
& \= -E_{3,\chi_{-3}}(z) - 7 E_{3,\chi_{-3}}(2z) + 8 E_{3,\chi_{-3}}(4z) \\
\end{aligned}\end{equation}
where $E_{3,\chi_{-3}}$ is the Eisenstein series defined in~\eqref{Eis3}. The form
$g_1$ already appeared in~\eqref{thm1g1} and we had that $m(P_2)=-L(g_1,1)$.
Using Proposition~\ref{doubL} we now rewrite the statement of Theorem~\ref{Thm1}
as follows.
\begin{corollary}\label{cor1} With the modular forms $g_1,g_2$ of weight 3
defined in~\eqref{thm1gs} one has
\[
m(P_3) \- \frac34 m(P_2) \= - \frac6{\pi^2} L(g_2,g_1,2,1) \,.
\]
\end{corollary}
Plugging the values of $m(P_2)$ and $m(P_3)$ computed from (\ref{2var}) and (\ref{3var})
into the formula given in Corollary \ref{cor1} we obtain the following relation
between double and ordinary L-values:
\begin{equation}\label{L_relation}
L(g_2,g_1,2,1) = \frac{3\sqrt{3}\pi}{2^4} L(\chi_{-3},2) - \frac{7}{6} \zeta(3).
\end{equation}
We give a straightforward proof of this relation in the next section.
\medskip
\medskip
For the Theorem~\ref{Thm2} we use the modular parametrization~\eqref{L3mpar} and
we have to consider two solutions with $h(t)=\frac1{1-t}$ and $h(t)=\frac{212
t^2 + 251t - 13}{(1-t)^3}$. Also we have $t=0$ at $z=i \infty$ and $t=\infty$ at
$z=0$. According to our strategy, we define the modular forms of weight~4
\[\begin{aligned}
g_1 & \= \frac{Dt}{t} f \= 1+2q-14q^2+38q^3-142q^4+252q^5-266q^6+\dots, \\
g_2 & \= \frac{Dt}{1-t} f \= -q-7q^2-6q^3+5 q^4+120 q^5 +498 q^6 + \dots, \\
g_3 & \= \frac{212 t^2 + 251t - 13}{(1-t)^3} Dt \cdot f \=
13q+316q^2+2328q^3+\dots \\
\end{aligned}\]
(observe that they are the same ones as in~\eqref{thm2gs}). Here $g_1$ is a
holomorphic modular form, which already appeared in~\eqref{thm2g1} where we found
$m(P_3)=-L(g_1,1)$. The forms $g_2$ and $g_3$ are meromorphic with poles at
the points where $t=1$. Using the fact that $t$ has no poles on the imaginary
half-axis we defined the corresponding double L-values $L(g_2,g_1,3,1)$,
$L(g_3,g_1,3,1)$ in~\eqref{thm2DLs} in Section~\ref{sec:intro}.
\begin{corollary}\label{cor2} With the double L-values defined
in~\eqref{thm2DLs} one has
\[
m(P_4) \- \frac45 m(P_3) \= \frac{3\sqrt{5}\Omega_{15}^2}{10 \pi} L(g_3,g_1,3,1)
\- \frac{3\sqrt{5}}{5 \pi^3 \Omega_{15}^2} L(g_2,g_1,3,1)\,.
\]
\end{corollary}
\begin{proof} Due to Theorem~\ref{Thm2} we have that $m(P_4) = - {\rm Re} \; \bigl( t
\frac{d}{dt}\bigr)^{-1} \, b(t) \Big|_{t = \infty}$. We know from
Proposition~\ref{L3mparPr} that this differential equation has modular
parametrization by $t(z)$ and $f(z)$. Therefore
\[
b(t(z)) \= \frac45 f(z) \+ f(z) \, D^{-3} \, \Bigl( -
\frac{3\sqrt{5}\Omega_{15}^2}{10 \pi} \, g_3(z) \+ \frac{3\sqrt{5}}{5 \pi^3
\Omega_{15}^2} \, g_2(z) \Bigr)\,.
\]
We use the path in the upper half-plane from $z=i \infty$ to $z=0$ along
the imaginary half-axis, exactly where one has $-\infty < t(z) < 0$. As was explained in Section~\ref{sec:intro}, all three
forms are holomorphic along this path. For $g_j(z)$ with both $j=2,3$ we then
have
\[
D^{-1} \Bigl( g_1 \cdot D^{-3} g_j \Bigr) (i v) \= (2 \pi)^4 \,
\int_{v}^{\infty} g_1(i s) \int_{s}^{\infty}\int_{s_1}^{\infty}\int_{s_2}^{\infty}
g_j(i s_3) \, ds_3 \, ds_2 \, ds_1 \, ds
\]
and therefore the numbers~\eqref{thm2DLs} are the limiting values at $v=0$.
\end{proof}
\section{Explicit computation of the double L-value
in formula \eqref{L_relation}}
In this section we show how to compute the iterated integral
\begin{equation}\label{int}\begin{aligned}
\int_0^{i \infty} g_1(z) \cdot D^{-2} g_2(z) dz &=
\frac1{2 \pi i} \int_1^{0} g_1(q) \cdot D^{-2} g_2(q) \frac{dq}{q} \\
&=- \frac1{2 \pi i} D^{-1} (g_1 \cdot D^{-2} g_2) \Big|_{q=1} \\
\end{aligned}\end{equation}
for the two modular forms $g_1$,$g_2$ of weight $3$ defined in (\ref{thm1gs})
which leads to an alternative proof of formula \eqref{L_relation}.
We use a powerful method due to Wadim Zudilin \cite{Z2}, \cite{Z3} of computing double
$L$-values of Eisenstein-like series.
We are grateful to Wadim for explaining to us his method and its applicability in this situation.
Unfortunately, the more complicated $L$-values from Corollary \ref{cor2}
do not seem to be computable in the same way due to the lack of an Eisenstein-like
representation for the forms $g_1$, $g_2$, $g_3$ referred to in
Corollary \ref{cor2}.
We briefly describe the method as follows: the Atkin-Lehner involution
$z \to -\frac1{12z}$ is applied to $g_1$, the resulting modular form being denoted by
$\hat{g}_1(z)$:
\[
g_1(z) = const \cdot \hat{g}_1(-\frac1{12z}) z^{-3},
\]
so that the integral \eqref{int} will take the following form:
\[
const \cdot \int_0^{i\infty} \hat{g}_1(-\frac1{12z}) D^{-2} g_2(z) z^{-3} dz.
\]
We then expand the integral as a quadruple sum, make a variable change and
collapse the sum back in order to get
\[
const \cdot \int_0^{i\infty} f_1(u) (const + f_2(-\frac1{12u})) u\,du\,,
\]
where $f_1$ and $f_2$ are Eisenstein series of weight $1$.
We apply the Atkin-Lehner involution again:
\[
f_2(-\frac1{12u}) = const \cdot \hat{f}_2(u) u\,,
\]
this time rewriting the integral as
\[
const \cdot \int_0^{i\infty} f_1(u) (const + \hat{f}_2(u) \, u) u\,du = const \cdot L(f_1,2) + const \cdot L(f_1 \hat{f}_2, 3).
\]
Here $f_1$ is an Eisenstein series of weight $1$ and character $\chi_{-3}$, and $L(f_1,2)$ will give the term with $\zeta(2) \cdot L(\chi_{-3},2)$ in the final formula. The form $f_1 \hat{f}_2$ is an Eisenstein series for ${\rm SL}(2,\mathbb Z)$ of weight $2$, and $L(f_1 \hat{f}_2, 3)$ will give us the term with $\zeta(2) \cdot \zeta(3)$.
\medskip
Note that no regularization is necessary in our integral, since $g_2$ vanishes at $z=\infty$ and $g_1$ vanishes at $z=0$.
\medskip
In the course of the computation we will need to apply an Atkin-Lehner involution to Eisenstein series.
For that we express Eisenstein series as linear combinations of eta-products.
Let $N \ge 1$. Consider an eta-product
\[
f(z) = \prod_j \eta(d_j \cdot z)^{k_j}
\]
where $k_j$ are integers and $d_j$ positive integers dividing $N$.
Let $d_j' = N / d_j$ and
\[
\hat{f}(z) = \prod_j \eta(d_j' \cdot z)^{k_j}.
\]
We have
\begin{equation}\label{eta}
f(-\frac1{Nz}) = (-i)^w z^w \prod_j d_j'^{k_j/2} \hat{f}(z)
\end{equation}
for $w = \frac12\sum_j k_j$ (the weight).
This follows from the basic transformation formula
\[
\eta\bigl(-\frac1{z}\bigr) \= \sqrt{-iz} \; \eta(z).
\]
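Indeed, since $d_j \cdot \bigl(-\frac1{Nz}\bigr) \= -\frac1{d_j' z}$, applying the basic formula to each factor gives
\[
f\Bigl(-\frac1{Nz}\Bigr) \= \prod_j \Bigl(\sqrt{-i d_j' z}\; \eta(d_j' z)\Bigr)^{k_j} \= (-iz)^{w} \prod_j d_j'^{\,k_j/2} \, \hat{f}(z)\,,
\]
which is~\eqref{eta}.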
We use the following two Eisenstein series of weight $3$:
\[\begin{aligned}
E_{3,\chi_{-3}}(z) &= -\frac19 \frac{\eta(z)^9}{\eta(3z)^3} = -\frac19 + \sum_{n,m \ge 1} \chi_{-3}(n)n^2 q^{nm} = -\frac19 + q - 3q^2 + q^3 + \dots\,, \\
\widetilde{E}_{3,\chi_{-3}}(z) &= \frac{\eta(3z)^9}{\eta(z)^3} = \sum_{n,m \ge 1} \chi_{-3}(m)n^2 q^{nm} = q + 3q^2 + 9q^3 + 13q^4 + \dots\,. \\
\end{aligned}\]
Then
\[\begin{aligned}
g_1(z) &= (1,-2,-8) \cdot (E_{3,\chi_{-3}}(z),E_{3,\chi_{-3}}(2z),E_{3,\chi_{-3}}(4z))^t\,, \\
g_2(z) &= (-1,-7,8) \cdot (E_{3,\chi_{-3}}(z),E_{3,\chi_{-3}}(2z),E_{3,\chi_{-3}}(4z))^t\,,
\end{aligned}\]
where $(a,b,c)\cdot(d,e,f)^t=ad+be+cf$. Application of~\eqref{eta} gives
\[\begin{aligned}
E_{3,\chi_{-3}}\Bigl(-\frac1{12z}\Bigr) &\= -2^6 3^{5/2} i z^3 \widetilde{E}_{3,\chi_{-3}}(4z)\,, \\
E_{3,\chi_{-3}}\Bigl(-\frac2{12z}\Bigr) &\= E_{3,\chi_{-3}}\Bigl(-\frac1{6z}\Bigr) = -2^3 3^{5/2} i z^3 \widetilde{E}_{3,\chi_{-3}}(2z)\,,\\
E_{3,\chi_{-3}}\Bigl(-\frac4{12z}\Bigr) &\= E_{3,\chi_{-3}}\Bigl(-\frac1{3z}\Bigr) = -3^{5/2} i z^3 \widetilde{E}_{3,\chi_{-3}}(z)\,, \\
\end{aligned}\]
and hence
\[
\Bigl[(a,b,c) \cdot (E_{3,\chi_{-3}}(z),E_{3,\chi_{-3}}(2z),E_{3,\chi_{-3}}(4z))^t\Bigr]\Bigl(-\frac1{12z}\Bigr) = -3^{5/2} i z^3 \,(c,2^3b,2^6a) \cdot (\widetilde{E}_{3,\chi_{-3}}(z),\widetilde{E}_{3,\chi_{-3}}(2z),\widetilde{E}_{3,\chi_{-3}}(4z))^t.
\]
In particular, we have
\[\begin{aligned}
g_1(-\frac1{12z}) &= 8 \cdot 3^{5/2} iz^3 \hat{g}_1(z)\,, \\
\hat{g}_1(z) &= (1,2,-8) \cdot (\widetilde{E}_{3,\chi_{-3}}(z),\widetilde{E}_{3,\chi_{-3}}(2z),\widetilde{E}_{3,\chi_{-3}}(4z))^t \\
&= q + 5 q^2 + 9 q^3 + 11 q^4 + 24 q^5 + \dots
\end{aligned}\]
or, equivalently,
\begin{equation}\label{g1}
g_1(z) = - \frac{i z^{-3} }{2^3 3^{1/2}} \hat{g}_1(-\frac1{12z})\,.
\end{equation}
Formula (\ref{g1}) allows us to rewrite the iterated integral (\ref{int}) as
\begin{equation}\label{int2}
-\frac{i}{2^3 3^{1/2}}\int_0^{i\infty} \hat{g}_1(-\frac1{12z}) D^{-2} g_2(z) z^{-3} dz \,.
\end{equation}
We now make use of quadruple sums. For that we write our form $\hat{g}_1$ and $g_2$ as
\[\begin{aligned}
\hat{g}_1(z) &= \sum_{m_1,n_1\ge 1} a_1(m_1) b_1(n_1) n_1^2 q^{m_1 n_1} = q + 5q^2 + 9q^3 + 11q^4 + 24q^5 + \dots\,,\\
g_2(z) &= \sum_{m_2,n_2\ge 1} a_2(m_2) b_2(n_2) n_2^2 q^{m_2 n_2} = -q - 4q^2 - q^3 + 16q^4 + 24q^5 + \dots \,,\\
\end{aligned}\]
where
\[\begin{aligned}
a_1(m) &= \chi_{-3}(m) \,,\\
b_1(n) &= 1 + \frac12 [n \; \text{even}] - \frac12 [n \; \text{divisible by 4}] \,,\\
a_2(m) &= -1 -7 [m \;\text{even}] + 8 [m \; \text{divisible by 4}] \,,\\
b_2(n) &= \chi_{-3}(n) \,,\\
\end{aligned}\]
and $[\dots]$ means $1$ when the respective condition is satisfied and $0$ otherwise. By using the expansions
\[\begin{aligned}
\hat{g}_1(-\frac1{12z}) &\= \sum_{m_1,n_1\ge 1} a_1(m_1) b_1(n_1) n_1^2 \exp\Bigl(-\frac{2 \pi i n_1 m_1}{12z}\Bigr)\,, \\
D^{-2} g_2(z) &\= \sum_{m_2,n_2\ge 1} a_2(m_2) b_2(n_2) \frac1{m_2^2} \exp\Bigl(2 \pi i m_2 n_2 z\Bigr) \\
\end{aligned}\]
in~\eqref{int2}, we obtain a quadruple sum:
\[\begin{aligned}
& -\frac{i}{2^3 3^{1/2}}\int_0^{i\infty} \hat{g}_1(-\frac1{12z}) D^{-2} g_2(z) z^{-3} dz \\
=& -\frac{i}{2^3 3^{1/2}}\sum_{m_1,n_1,m_2,n_2} a_1(m_1) b_1(n_1) a_2(m_2) b_2(n_2) \frac{n_1^2}{m_2^2} \int_0^{i\infty} \exp\Bigl(2\pi i (-\frac{m_1 n_1}{12z} + m_2 n_2 z) \Bigr)
z^{-3} dz. \\
\end{aligned}\]
Now we change variable in the integral. First, we let $w = -\frac1{12z}$ and obtain
\[
-\frac{12^2i}{2^3 3^{1/2}} \sum_{m_1,n_1,m_2,n_2} a_1(m_1) b_1(n_1) a_2(m_2) b_2(n_2) \frac{n_1^2}{m_2^2} \int_0^{i\infty}
\exp\Bigl(2\pi i (m_1 n_1 w - \frac{m_2 n_2}{12w}) \Bigr) w \, dw \;.
\]
With $u = \dfrac{n_1 w}{m_2}$ we get
\[\begin{aligned}
-2 \cdot 3^{3/2} i\sum_{m_1,n_1,m_2,n_2} a_1(m_1) b_1(n_1) a_2(m_2) b_2(n_2) \int_0^{i\infty} \exp\Bigl(2\pi i (m_1 m_2 u - \frac{n_1 n_2}{12u}) \Bigr)
u \, du \;,
\end{aligned}\]
which we rewrite as
\begin{equation}\label{int3}
-2 \cdot 3^{3/2} i \int_0^{i\infty} f_1(u) \Bigl(f_2\bigl(-\frac1{12u}\bigr) - \frac16\Bigr) u\,du \;,
\end{equation}
where
\[\begin{aligned}
f_1(z) &= \sum_{m_1,m_2\ge 1} a_1(m_1) a_2(m_2) q^{m_1 m_2} \;\= -q - 7q^2 - q^3 + 7q^4 - 7q^6 - 2q^7 + \dots \,,\\
f_2(z) &= \frac16 + \sum_{n_1,n_2\ge 1} b_1(n_1) b_2(n_2) q^{n_1 n_2} \= \frac16 + q + \frac12 q^2 + q^3 + \frac12 q^4 + \frac12 q^6 + 2q^7 + \dots \,. \\
\end{aligned}\]
(The term $\frac16$ turns $f_2(z)$ into a modular form.)
Consider the Eisenstein series of weight~1
\[\begin{aligned}
E_1(z) &= \frac16 \+ \sum_{m,n \ge 1} \chi_{-3}(m) q^{nm}\,. \\
\end{aligned}\]
Then for any positive integer $l$ we have
\[\begin{aligned}
\sum_{m,n \ge 1} \chi_{-3}(m)\, [n \; \text{divisible by} \; l] \, q^{nm} &\= E_1(l\,z) - \frac16,
\end{aligned}\]
and therefore
\[\begin{aligned}
f_1(z) &\= -E_1(z) \- 7 E_1(2z) \+ 8 E_1(4z)\,, \\
f_2(z) &\= E_1(z) \+ \frac12 E_1(2z) \- \frac12 E_1(4z) \,.\\
\end{aligned}\]
In order to evaluate $(\ref{int3})$ we apply an Atkin-Lehner involution to $f_2$.
We first express $f_2$ as a combination of eta-products:
\[
f_2(z) = \frac12 \frac{\eta(4z)^2\eta(12z)^2}{\eta(2z)\eta(6z)} + \frac16 \frac{\eta(2z)^6 \eta(3z)}{\eta(z)^3 \eta(6z)^2}\,,
\]
then we use (\ref{eta}):
\[\begin{aligned}
f_2\bigl(-\frac1{12z}\bigr) &= \frac12 \Bigl(\frac{3^2}{6\cdot 2}\Bigr)^{1/2} (-iz) \frac{\eta(3z)^2\eta(z)^2}{\eta(6z)\eta(2z)} +
\frac16 \Bigl(\frac{6^6 4}{12^3 2^2}\Bigr)^{1/2} (-iz) \frac{\eta(6z)^6 \eta(4z)}{\eta(12z)^3 \eta(2z)^2} \\
&= -\frac{3^{1/2}}{2} iz \hat{f}_2(z), \\
\hat{f}_2(z) &= \frac12 \frac{\eta(3z)^2\eta(z)^2}{\eta(6z)\eta(2z)} + \frac{\eta(6z)^6 \eta(4z)}{\eta(12z)^3 \eta(2z)^2}
= - E_1(z) + 2 E_1(2z) + 8 E_1(4z) \\
&= \frac32 - q + 2 q^2 - q^3 + 7q^4 + 2q^6 + \dots \,.
\end{aligned}\]
For a modular form $f$ we have
\[
\int_0^{i \infty} f(z) z^{k-1} dz \= \frac{(k-1)!}{(-2 \pi i)^k} \, L(f,k)\,.
\]
Using this fact, we continue rewriting the integral (\ref{int3}):
\[\begin{aligned}
\dots &\= 3^{1/2} i \int_0^{i \infty} f_1(u) \, u \, du - 3^2 \int_0^{i \infty} f_1(u) \hat{f}_2(u) \, u^2\, du \\
&\= \frac{3^{1/2} i}{(-2\pi i)^2} \, L(f_1,2) \- \frac{3^2 \cdot 2}{(-2\pi i)^3} \, L(f_1 \hat{f}_2, 3) \\
&\= -\frac{3^{1/2} i}{4 \pi^2} \, L(f_1,2) \+ \frac{3^2 i}{2^2 \pi^3} \, L(f_1 \hat{f}_2, 3) \,.\\
\end{aligned}\]
Now we evaluate the $L$-values that have appeared. Since $f_1(z) = - E_1(z) - 7 E_1(2z) + 8 E_1(4z)$, we have $L(f_1, s) = (-1 - 7 \cdot 2^{-s} + 8 \cdot 4^{-s}) \zeta(s) L(\chi_{-3},s)$ and
\[
L(f_1, 2) \= \bigl(-1 - \frac{7}{2^2} + \frac{8}{4^2}\bigr) \zeta(2) L(\chi_{-3},2) \= -\frac{3}{8} \pi^2 \, L(\chi_{-3},2)\,.
\]
Using the representation
\[
f_1(z) \hat{f}_2(z) \= -\frac32 G_2(z) - 5 G_2(2z) + \frac{19}{2} G_2(3z) + 24 G_2(4z) - 35 G_2(6z) + 8 G_2(12z)\,,
\]
we have
\[\begin{aligned}
L(f_1 \hat{f}_2\,,\, s) &\= \bigl(-\frac32 - 5 \cdot 2^{-s} + \frac{19}{2} \cdot 3^{-s} + 24 \cdot 4^{-s} - 35 \cdot 6^{-s} + 8 \cdot 12^{-s}\bigr) \, \zeta(s) \, \zeta(s-1) \\
L(f_1 \hat{f}_2\,,\, 3) &\= \bigl(-\frac32 - \frac{5}{2^3} + \frac{19}{2\cdot3^3} + \frac{24}{4^3} - \frac{35}{6^3} + \frac{8}{12^3}\bigr) \,\zeta(3) \, \zeta(2) \\
&\= -\frac{14}{9} \,\zeta(3)\, \zeta(2) \= -\frac{7 \, \pi^2}{27} \, \zeta(3)\,. \\
\end{aligned}\]
Finally, we complete the computation of the integral in (\ref{int}):
\[
\int_0^{i \infty} g_1(z) \cdot D^{-2} g_2(z) dz \= \frac{3^{3/2} \, i}{2^5} \, L(\chi_{-3},2) \- \frac{7 \, i}{12\pi} \,\zeta(3)\,.
\]
Multiplying by $-2\pi i$ we get
\[
D^{-1} \Bigl(g_1(q) \, D^{-2}g_2 \,(q) \Bigr) \Big|_{q=1} \= \frac{3\sqrt{3}\, \pi}{2^4} \, L(\chi_{-3},2) \- \frac{7}{6} \, \zeta(3) \,,
\]
which is the same as (\ref{L_relation}).
\section{Introduction}
The phase-field modeling technique involves finding and minimizing the free energy of one or more order parameters that describe a phase \cite{Hohenberg1977,Provatas2010}. The method was originally introduced by Fix in 1983 \cite{Fix1983} and has since been applied to modeling a wide range of problems including material microstructures \cite{Chen2002,Li2017}, crack propagation \cite{Spatschek_2011}, batteries \cite{Wang2020}, biological membranes \cite{Fan_2008}, cellular systems \cite{Nonomura_2012,Palmieri2015} and even immune response \cite{Najem2014}. Some recent advancements
include the phase-field crystal model \cite{Elder2002} and its applications, see e.g., Refs.~\cite{Achim_2006,Emmerich_2011,Faghihi2013,Kocher_2019,Alster2020}, and phase-field damage models \cite{Wu_2017}.
The same field-based approach can also be used for problems in which a free energy description is not readily available. In such a case, the equations of motion are, instead, derived phenomenologically. A well-known class of such problems are reaction-diffusion systems, including Turing~\cite{Turing1952} and Gray--Scott models~\cite{Gray_1985}. These exhibit complex morphologies which mimic nature~\cite{Lee_1994,Leppaenen2002,Maini_2012} with far-reaching applications to biological systems~\cite{Murray1989}.
With the phase-field approach gaining popularity, there is a growing need for
open source phase-field simulation software that generalizes some numerical strategies, as discussed by Hong and Viswanathan~\cite{Hong2020}. Phase-field simulation software can generally be divided between numerical solvers using finite element or finite difference methods.
On the finite element side, some recent ones include, e.g.,
PRISMS-PF \cite{DeWitt2020}, which uses a matrix-free approach, as well as
SfePy \cite{Cimrman_2019}, FEniCS \cite{Alnaes2015} and
MOOSE \cite{Permann2020}, the last of which offers symbolic algebra functionality and automatic differentiation, allowing, for instance, specification of free energy equations. The package FiPy \cite{Guyer2009} is equipped with an accessible Python interface and equation specification.
While there are several packages for the finite element method to solve partial differential equations,
much less is available for finite differences.
Current open software includes
the Mesoscale Microstructure Simulation Project (MMSP) \cite{Keller2019}, though development appears to have ceased shortly after its release, and
OpenPhase \cite{Tegeler2017}, which employs parallelization and sparse storage
to solve large-scale multi-phase problems.
To improve upon the existing open source tools available using finite difference methods, we develop \textit{SymPhas}{}, an API and software package
that aims at advancing the ability to implement numerical
solutions of general phase-field problems by maximizing accessibility, flexibility and performance. \textit{SymPhas}{} uses a discrete-grid-based approach for the implementation of numerical solvers to allow
simulations to scale well with the number of grid points and number of order parameters, regardless of dimension. This is supplemented with parallelization via the C++{} standard library and OpenMP~\cite{Dagum_1998}.
The \textit{SymPhas}{} API allows the user to define and solve any phase-field model that can be formulated field-theoretically, up to three dimensions and with arbitrary numbers of order parameters and equations. This extends to reaction-diffusion problems as well.
Phase-field problems are readily specified in the program via a simple C++{}-macro-based grammar, with the dynamical equations provided in a completely unconstrained form. We achieve this primarily in two ways: 1) Development of a symbolic algebra library to manipulate and transform mathematical constructs as expression trees and 2) a modular approach of Object Oriented Programming (OOP) that progressively layers more complexity as needed by a given application.
The symbolic algebra feature is implemented as compile-time constructs that directly formulate expression trees at the point of definition. This is a unique feature of \textit{SymPhas}{} that is, to the best of our knowledge, not present in any other phase-field software package.
A modular design is used to retain a simple interface for basic uses while simultaneously supporting complex tasks and implementations. This design applies template meta-programming to fully optimize program constructs and eliminate branching wherever possible. We also achieve considerable decoupling between elements, allowing individual functional elements of \textit{SymPhas}{} to remain distinct; this has the added benefit of supporting community development.
The modular approach also facilitates another key feature of \textit{SymPhas}{}: The ability to integrate a user-developed numerical solver into the workflow. The solver is seamlessly integrated via a class inheritance strategy
designed to eliminate almost all API-specific restrictions on the solver implementation. In writing the solver, the user can leverage the significant set of features available in the symbolic algebra library and in the \textit{SymPhas}{} API overall.
Through extensive documentation and adherence to best programming practices, we provide \textit{SymPhas}{} as a codebase to be expanded and driven by community development.
This is further facilitated by managing the build process with CMake~\cite{cmake}, which provides \textit{SymPhas}{} with multi-platform support and grants users the ability to customize the compilation and installation procedure in a straightforward way.
\section{Methods}
To generate solutions to phase-field problems, an implementation that defines the problem and establishes the program control flow is written in C++{} using the \textit{SymPhas}{} API. This consists of three components:
\begin{enumerate}
\item \textbf{Model definitions file:} The phase-field description with the equations of motion are specified. These are written using C++{} macros provided by \textit{SymPhas}{} and follow a simple grammar structure. Putting this in a separate file is optional.
\item \textbf{Solver file:} The implementation of a specific method which solves a phase-field problem using the equations of motion.
\item \textbf{Driver file:} Specifies the workflow, data inputs and outputs.
\end{enumerate}
We use OOP
and extensively apply the programming paradigm
known as \textit{template meta-programming}; the use of objects and functions defined with arbitrary parameters or data types~\cite{Meyers2005}.
These abstractions are either implicitly (by the compiler) or explicitly (by the user) specialized for concrete types. The benefit of this approach is that the type specialization gives the compiler full information about the call stack, allowing it to make optimizations not possible in different approaches (e.g. virtual inheritance). The other advantage is added extendability through type dependent implementations. The drawback is that since each specialization is unique, the library and executable will take longer to compile and result in a larger size when many specializations are used. For example, compiling five phase-field models of one or two order parameters with one solver defined with all available finite difference stencils -- see Section~\ref{methods:objects:stencils} -- results in a total size of approximately 3\,MB.
As part of template meta-programming, we also apply the \textit{expression template technique}~\cite{Veldhuizen1995,Vandevoorde2003}, commonly referred to as the \textit{curiously recurring template pattern} (CRTP) \cite{Coplien1996}, mainly used in the implementation of the symbolic algebra functionality.
A non-exhaustive list of familiar libraries using expression templates includes Armadillo~\cite{Sanderson2016,Sanderson2018}, Blitz++~\cite{Veldhuizen2000}, Boost $\mu$BLAS~\cite{Schaeling2011}, Dlib~\cite{King2009}, Eigen~\cite{GaeelGuennebaud2010}, Stan Math Library~\cite{Carpenter2015} and xtensor~\cite{xtensor}.
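As a minimal illustration of the pattern (a generic sketch with names of our own choosing, not the \textit{SymPhas}{} class hierarchy), a CRTP base class is parameterized on the derived type, so that calls forward statically and can be fully inlined without virtual dispatch:
\begin{lstlisting}
#include <cstddef>

// CRTP base: the concrete type D is known at compile time, so
// cast() is a static_cast and eval() resolves without a vtable.
template<typename D>
struct Expression
{
    const D& cast() const { return static_cast<const D&>(*this); }
    double eval(std::size_t i) const { return cast().eval(i); }
};

// A concrete expression wrapping raw data.
struct DataTerm : Expression<DataTerm>
{
    const double* values;
    double eval(std::size_t i) const { return values[i]; }
};

// Generic code is written against the base; when instantiated,
// e.eval(i) inlines to values[i] with no run-time branching.
template<typename D>
double accumulate(const Expression<D>& e, std::size_t n)
{
    double s = 0;
    for (std::size_t i = 0; i < n; ++i) s += e.eval(i);
    return s;
}
\end{lstlisting}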
The OOP approach applies a modular design to program structure; this means that we minimize coupling and maximize cohesion~\cite{Vanderfeesten_2008, Candela2016} of all classes in the API, as well as designing each element to support class inheritance or composition.
The former is primarily accomplished by following best programming principles such as applying the single-responsibility principle for objects \cite{Martin2002}. The latter implies that objects designed under this modular framework can be readily extended or modified without refactoring the existing code. Moreover, modularity is used to reflect the real world representation of a phase-field problem. The overall aim is to simplify and streamline the future development of \textit{SymPhas}{}.
Additionally, the build process is another aspect designed to be user friendly. Managed by CMake, the user has full control over program definitions and modules. The result of the build process is a shared library that can be linked in a g++ invocation or alternatively, imported into a separate user CMake \cite{cmake} project.
\subsection{Overview of Modules}
There are four required and two supplementary modules as part of \textit{SymPhas}{}.
The necessary modules constituting the \textit{SymPhas}{} library are the basic functionality (\modulename{lib}), datatypes (\modulename{datatypes}), solution (\modulename{sol}) and symbolic algebra (\modulename{sym}) modules. The module \modulename{lib} is a dependency of all other modules, since it introduces components such as objects and types used throughout the program.
There are two additional libraries which complete the feature set of \textit{SymPhas}{}: the configuration (\modulename{conf}) and the input/output (\modulename{io}) modules.
The \modulename{datatypes} module depends only on \modulename{lib}. It implements objects used in the discrete grid representations, the most important of which is the \lstinline{Grid} class, a managed array for storing data of 1-, 2- and 3-dimensional problems.
The symbolic algebra library (\modulename{sym}) provides the core functionality
to interpret mathematical expressions. The implementation of \modulename{sym} contains all the mathematical objects and relations that are required to specify a phase-field equation of motion. It also specifies rules between these objects.
The solution module (\modulename{sol}) provides the structural and functional framework used to describe and solve a phase-field problem. This is accomplished by defining two objects: one being the programmatic representation of a general phase-field problem, the other implementing a specific set of interface functions for time-evolving the phase-field data.
\subsection{Objects in \textit{SymPhas}{}} \label{methods:objects}
The following is a list and brief description of the relevant objects
in \textit{SymPhas}{}:
\begin{itemize}
\item \lstinline{Grid}: Basic array type for storing phase-field data of arbitrary type.
\item \lstinline{Boundary}: Logical element for defining the properties of a grid boundary.
\item \lstinline{Stencil}: Object which defines the finite difference stencils used to approximate derivatives in a uniform grid.
\item \lstinline{OpExpression}: Interface object representing the node in an expression tree, based on CRTP.
\item \lstinline{Solver}: Interface that is specialized for implementing the
solution procedure.
\item \lstinline{Model}: Encapsulation of the problem representation, including the equations of motion. Primary interface for managing the solution of a phase-field model.
\end{itemize}
\subsubsection{Uniform Grid}
The data of a grid is initialized in computer memory as a one-dimensional array, and the desired system dimension is logically imposed according to row-major order, where the first dimension is always the horizontal ($x$-axis). This ensures fast run times through memory locality.
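As a sketch of this layout (standalone and illustrative only; the actual \lstinline{Grid} interface is richer), the flat index of a three-dimensional point and a parallelized pointwise loop read:
\begin{lstlisting}
#include <cstddef>
#include <vector>

// Row-major flattening: x (the horizontal axis) varies fastest.
inline std::size_t flat_index(std::size_t x, std::size_t y,
                              std::size_t z, std::size_t nx,
                              std::size_t ny)
{
    return x + nx * (y + ny * z);
}

// Pointwise operations run as one flat loop over all points;
// contiguous access is cache friendly and trivially parallel.
void scale(std::vector<double>& u, double a)
{
    #pragma omp parallel for
    for (long long i = 0; i < (long long) u.size(); ++i)
        u[i] *= a;
}
\end{lstlisting}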
An extension of \lstinline{Grid} called \lstinline{BoundaryGrid} enables the use of finite difference stencils at points near the boundary and implements routines to update boundary grid elements (for example, to apply periodic boundaries). This is accomplished by managing a list of indices that correspond to boundary elements. The number of layers of the boundary is predefined based on the extent of the largest finite difference stencil.
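As a simplified example of such a boundary update (one dimension, periodic boundaries; \lstinline{BoundaryGrid} generalizes this to higher dimensions and other boundary types), ghost layers of width \lstinline{L} are refreshed by copying the opposite interior edges:
\begin{lstlisting}
#include <cstddef>

// Layout: [L ghost cells | n interior cells | L ghost cells].
void apply_periodic(double* u, std::size_t n, std::size_t L)
{
    for (std::size_t i = 0; i < L; ++i)
    {
        u[i] = u[n + i];         // left ghost <- right interior edge
        u[L + n + i] = u[L + i]; // right ghost <- left interior edge
    }
}
\end{lstlisting}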
The design of the \lstinline{Grid} class and any specialization thereof is based on template meta-programming, and allows the user to select the data type and the dimension.
\subsubsection{Finite Difference Stencils} \label{methods:objects:stencils}
Stencils are finite difference approximations of derivatives of a specified order. In \textit{SymPhas}{}, we implement second and fourth order accurate central-space stencils for various orders of derivatives. To this end, stencils are defined using three characteristics: 1) The order of derivative that is approximated, 2) the order of accuracy,
and 3) the dimension of the system. An additional characterization is the number of points used in the approximation. A stencil family is a group of stencils with the same dimension and order of accuracy. We apply this categorization to the design of stencils in \textit{SymPhas}{} by implementing specialized template classes for each family.
Stencils are CRTP-based template classes with member functions for each order of derivative up to fourth order, and a member function for the generalized implementation of higher orders. In particular, the Laplacian, bilaplacian, gradlaplacian and gradient derivatives are explicitly defined according to those derived in \cite{Patra2006}, including both anisotropic- and isotropic-type stencils that are second and fourth order accurate in two dimensions, and second order accurate in three dimensions. Using CRTP in the stencil implementation eliminates branching in the member function invocations; a significant optimization that improves performance since stencils are applied multiple times at each grid point for every solution iteration.
For second order accuracy approximations, the Laplacian is implemented by 5 and 9 point stencils for 2D, and 7, 15, 19, 21 and 27 point stencils in 3D; the gradlaplacian is implemented by 6, 8, 12 and 16 point stencils for 2D, and 10, 12, 28, 36 and 40 point stencils in 3D; and the bilaplacian is implemented by 13, 17 and 21 point stencils for 2D, and 21, 25, 41, 52 and 57 point stencils for 3D. For fourth order accuracy approximations, which are only in 2D, the Laplacian is implemented by 9, 17 and 21 point stencils; the gradlaplacian is implemented by 14, 18, 26 and 30 point stencils; and the bilaplacian is implemented by 21, 25, 33 and 37 point stencils. The wide selection ensures that appropriate approximations can be used for a problem as necessary.
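For example, the 5 point second order accurate Laplacian in 2D can be evaluated as follows (a free-standing sketch; in \textit{SymPhas}{} the stencils are member functions of the CRTP stencil classes):
\begin{lstlisting}
#include <cstddef>

// 5-point, second-order-accurate Laplacian at interior point (x, y)
// of a row-major grid of width nx with uniform spacing h.
double laplacian5(const double* u, std::size_t x, std::size_t y,
                  std::size_t nx, double h)
{
    const std::size_t c = x + nx * y;
    return (u[c + 1] + u[c - 1] + u[c + nx] + u[c - nx]
            - 4.0 * u[c]) / (h * h);
}
\end{lstlisting}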
\subsubsection{Expressions} \label{methods:objects:expression}
One of the primary features of \textit{SymPhas}{} is the symbolic algebra library, which represents mathematical expressions with expression trees formulated at compile time. An expression tree representation allows the equations of motion to be treated as a single object, i.e., they can be persisted as an object state and passed as a function parameter. Moreover, expression trees provide the ability to reorganize and manipulate mathematical expressions.
The symbolic algebra functionality is used to interpret equations of motion that are provided in a general form, and is exposed in a user-friendly way via the \textit{SymPhas}{} API.
Symbolic algebra is implemented in \textit{SymPhas}{} through an approach unique to phase-field simulation programs: CRTP is applied to generate compiled code for an expression tree evaluation so that it is as close as possible to writing the evaluation manually. In this regard, the symbolic algebra is considered ``compile-time constant'', since the expression representation is formulated at compile time. The type name of the CRTP base and expression tree node is \lstinline{OpExpression}.
One motivation of this design choice is minimizing application runtime, since a design that applies a deterministic control flow to expression tree traversal can significantly increase performance.
The most significant corresponding improvement in performance is the reduction in the time spent by a numerical solver in evaluating the equation of motion of the phase-field model, which takes place for all points in the grid and for each iteration of the solver.
An implication is that, in general, each expression is a unique type (a unique CRTP specialization of \lstinline{OpExpression}).
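A stripped-down sketch of the idea follows (generic code for illustration; it does not reproduce the actual \lstinline{OpExpression} interface). Each operation returns a new node type, so the sum of two terms below has the unique type \lstinline{OpAdd<Term, Term>}, and its evaluation can inline to a single pass over the data with no run-time tree traversal:
\begin{lstlisting}
#include <cstddef>

template<typename D>
struct Expr
{
    double eval(std::size_t i) const
    { return static_cast<const D&>(*this).eval(i); }
};

// Leaf node referring to external data.
struct Term : Expr<Term>
{
    const double* data;
    explicit Term(const double* d) : data(d) {}
    double eval(std::size_t i) const { return data[i]; }
};

// Interior node: the operand types are part of the node's type.
template<typename A, typename B>
struct OpAdd : Expr<OpAdd<A, B>>
{
    A a; B b;
    OpAdd(A a_, B b_) : a(a_), b(b_) {}
    double eval(std::size_t i) const { return a.eval(i) + b.eval(i); }
};

template<typename A, typename B>
OpAdd<A, B> operator+(const Expr<A>& a, const Expr<B>& b)
{
    return OpAdd<A, B>(static_cast<const A&>(a),
                       static_cast<const B&>(b));
}

// Usage: given Term u(p), v(q), the object (u + v) has type
// OpAdd<Term, Term> and (u + v).eval(i) reduces to p[i] + q[i].
\end{lstlisting}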
\subsubsection{Solution Interface}
The base \textit{SymPhas}{} library consisting of the six aforementioned modules does not contain a solver implementation, though two solver implementations that are detailed in Section~\ref{methods:implementation} are provided in the \textit{SymPhas}{} package obtained from GitHub (\lstinline{https://github.com/SoftSimu/SymPhas}). Instead, the \textit{solution interface}, \lstinline{Solver}, declares three functions
which a concrete numerical solver must implement.
The solver interface design uses CRTP and is based on applying a mediator design pattern via a special object that we refer to as the ``equation mediator object''. Such an object is generated by the solver for each dynamical equation in a phase-field model before the simulation begins. Its purpose is to recast the equation of motion into a form that can be interpreted by the numerical scheme of the solver, so that the next time index of the corresponding phase-field data can be computed. The equation mediator object is constructed by the solver member function \lstinline{form_expr_one()}.
By taking advantage of the modular framework,
we design the solver interface to allow the equation mediator object to remain entirely specific to the implemented solver, maximizing third party development potential.
A specialized solver implements the following three primary interface functions (additional functions, such as derivative routines, may also be written as needed); a schematic sketch is given after the list:
\begin{itemize}
\item \lstinline{form_expr_one()}: Given the set of equations of motion of a phase-field model, constructs the equation mediator object for a specified equation of motion.
This function is only called once. It performs as much computational work as possible to ensure maximum program performance.
\item \lstinline{equation()}: Using the phase-field data and the equation mediator objects, performs an initial time evolution step,
typically writing intermediate results to working memory.
\item \lstinline{step()}: Using the phase-field data and intermediate results computed by \lstinline{equation()}, obtains the next iteration in the solution.
\end{itemize}
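To make the control flow concrete, a deliberately simplified forward-Euler solver in this style is sketched below. The names, signatures and the trivial ``mediator'' are illustrative assumptions of ours and do not reproduce the actual \lstinline{Solver} interface:
\begin{lstlisting}
#include <cstddef>
#include <functional>
#include <vector>

// Toy "equation mediator": a callable evaluating the right-hand
// side of d(psi)/dt at a single grid point.
using Mediator =
    std::function<double(const std::vector<double>&, std::size_t)>;

struct ToyEulerSolver
{
    double dt;
    std::vector<double> work; // intermediate d(psi)/dt values

    // Called once before the simulation; here the equation is
    // already in a usable form, so it is passed through as is.
    Mediator form_expr_one(Mediator rhs) { return rhs; }

    // First stage: evaluate the right-hand side at every point.
    void equation(const Mediator& m, const std::vector<double>& psi)
    {
        work.resize(psi.size());
        for (std::size_t i = 0; i < psi.size(); ++i)
            work[i] = m(psi, i);
    }

    // Second stage: advance the field by one forward-Euler step.
    void step(std::vector<double>& psi) const
    {
        for (std::size_t i = 0; i < psi.size(); ++i)
            psi[i] += dt * work[i];
    }
};
\end{lstlisting}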
\subsubsection{Problem Encapsulation}
To represent the physical phase-field problem in the code domain, objects of basic functionality are successively encapsulated to build necessary functionality. For instance, the \lstinline{Grid} class is encapsulated by \lstinline{System} to add information such as the spatial intervals and discretization width. A further encapsulation will specialize \lstinline{System} into \lstinline{PhaseFieldSystem}, adding functionality such as data persistence and the ability to populate array values with initial conditions.
Using template meta-programming, the \lstinline{PhaseFieldSystem} object allows the user to select the data type and dimension used for the instantiated type. It also
allows encapsulating \lstinline{Grid} or any of its specializations to modify the basic implementation features as required by the problem or solver.
All of the aspects which constitute a phase-field model are encapsulated in the class \lstinline{Model}, of which the responsibility is to manage the phase-field data and numerical solver and initialize all phase-field data in a standardized way. It is also the primary interface to the phase-field data and interacting with the solver.
\lstinline{Model} is itself specialized in order to manage a specific set of equations of motion corresponding to a phase-field problem. The user is responsible for this final specialization through the procedure defined in Section~\ref{methods:capabilities:models}.
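The encapsulation chain can be pictured as follows; this is a simplified sketch using inheritance, and the actual \textit{SymPhas}{} templates carry additional parameters and members.
\begin{lstlisting}
#include <cstddef>

// Raw array data and dimensions only.
template <typename T, std::size_t D>
struct Grid {
    T* values;
    std::size_t dims[D];
};

// Adds the physical description: spatial intervals and discretization width.
template <typename T, std::size_t D>
struct System : Grid<T, D> {
    double intervals[D][2];
    double width;
};

// Adds data persistence and initial condition generation.
template <typename T, std::size_t D>
struct PhaseFieldSystem : System<T, D> {
    void populate_initial_conditions();
    void persist() const;
};
\end{lstlisting}
\lstinline{Model} then conceptually holds one such system per order parameter together with the solver.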
\subsection{Capabilities} \label{methods:capabilities}
To produce solutions to
phase-field problems, \textit{SymPhas}{} offers a number of capabilities.
These include convenient parameter specification alongside a rich feature set for specifying the equations of motion and managing the phase-field problem. In this section, we briefly list the capabilities relevant in typical use cases and outline the steps for generating a simple driver file in \textit{SymPhas}{}.
\subsubsection{Symbolic Algebra} \label{methods:capabilities:algebra}
Data are used in the symbolic algebra by linking them with a specialized expression type that represents a ``variable'' term (e.g., in the context of an equation of motion, each order parameter is a ``variable'' linked to the respective phase-field data).
The symbolic algebra also defines value literals and spatial derivative operators. Value literals include special constructs representing the positive and negative multiplicative identity (the numbers $1$ and $-1$) and the additive identity (the number $0$); these are mainly used to facilitate the symbolic algebra rules. Some common functions are defined as well, including $\sin$, $\cos$ and the exponential, and the convolution operator is defined for less common cases. Since the structure of expression trees is managed at compile time, type-based rules defined by specific expression tree structures are applied when expressions are formulated, such as when the addition, subtraction, multiplication and division operations are used. The rules include simplification, distribution, and factorization.
While the primary purpose of the symbolic algebra feature is to represent an equation of motion for a phase-field problem and to support the user in implementing a specialized solver, the same functionality can be applied more generally.
Features such as the ability to name variables and print formatted expressions to an output stream in either simple text or \LaTeX{} format are included. Moreover, the symbolic algebra can be used to perform high-performance pointwise operations on arrays, add new symbols to the algebra ruleset, and even define identities for new and existing symbols that are applied automatically.
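As an example of a type-based rule, multiplication by the multiplicative identity can be resolved entirely during overload selection; the sketch below is illustrative, and the identity types and ruleset in \textit{SymPhas}{} are more extensive.
\begin{lstlisting}
// The literal 1 represented as a type.
struct OpIdentity {};

// 1 * e = e and e * 1 = e, applied at compile time with no runtime cost.
template <typename E>
E const& operator*(OpIdentity, E const& e) { return e; }
template <typename E>
E const& operator*(E const& e, OpIdentity) { return e; }
inline OpIdentity operator*(OpIdentity, OpIdentity) { return {}; }
\end{lstlisting}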
\subsubsection{Defining New Phase-Field Models} \label{methods:capabilities:models}
\textit{SymPhas}{} provides a convenient C++{} macro-based grammar that allows a user to define a new phase-field model in a completely unconstrained way, a novel feature among phase-field simulation software. Each new definition generates a specialization of the \lstinline{Model} class. Upon recompiling the program, the new model is fully functional without the pitfalls of verbose implementation details. Moreover, the same definition can be interpreted by all solvers available in \textit{SymPhas}{}.
A model is defined in three parts:
1) the model name as it appears in the compilation unit, which must be unique across all models; 2) a list of the order parameters and their types; and 3) the equations of motion. An optional section between the order parameter type list and the equations of motion can be specified to define virtual variables. These can be used to measure desired system quantities, or used in the equations of motion to optimize the runtime by pre-computing values. An example of defining a two-phase model with a virtual variable is shown in Figure~\ref{fig:model_definition}. Specific details, including all available macros, are provided in the manual.
\begin{figure}
\centering
\footnotesize
\begin{lstlisting}
MODEL(MC,
(SCALAR, SCALAR),
PROVISIONAL_DEF(
(SCALAR),
var(1) = c5 * op(1) * op(2))
MODEL_PREAMBLE_DEF(
( auto op13 = c2 * op(1) * op(1) * op(1);
    auto op23 = c4 * op(2) * op(2) * op(2); ),
dop(1) = lap(op(1)) + c1 * op(1) - op13 + lit(2.) * var(1),
dop(2) = -bilap(op(2)) - lap(c3 * op(2) - op23 + c5 * op(1) * op(1)))
)
\end{lstlisting}
\caption{An example of specifying a phase-field model. Model~C~\cite{Hohenberg1977}, which represents eutectic growth with two order parameters~\cite{Elder1994}, is implemented. It is associated with the given name ``\lstinline{MC}'', which defines the type alias of the model in the code and therefore must be unique. The keywords \lstinline{op(N)} and \lstinline{dop(N)} refer to the \lstinline{N}th order parameter and its time derivative, respectively, and the keyword \lstinline{var(N)} refers to the \lstinline{N}th virtual variable. The keyword \lstinline{SCALAR} specifies that the field types are real-valued. The keyword \lstinline{lit(v)} is used to represent a numeric constant of value \lstinline{v} in the symbolic algebra expression. The keywords \lstinline{lap} and \lstinline{bilap} apply the 2nd and 4th spatial derivative to their arguments, respectively. The enumerated terms \lstinline{c1} to \lstinline{c5} are parameters passed to the model upon instantiation. Variables can be defined before the equations of motion using the macro \lstinline{MODEL_PREAMBLE_DEF}, demonstrated here with the cubic terms, \lstinline{op13} and \lstinline{op23}. This option exists chiefly for convenience and does not affect the structure of the expression tree formulated for the equation of motion. If this section is omitted and only the dynamical equations are provided, then the macro \lstinline{MODEL_DEF} is used instead.}
\label{fig:model_definition}
\end{figure}
\subsubsection{Creating Custom Solvers}
The user may develop their own solver using the \textit{SymPhas}{} API and seamlessly integrate it into an existing workflow. Implementation of a solver entails extending the provided \lstinline{Solver} interface.
A major benefit of the design is provided by the equation mediator object: since it remains completely internal to the implemented solver and does not interact with other parts of the API or program, the solver implementation is decoupled as much as possible from the surrounding implementation, and the user can leverage the capabilities of the API without requiring extensive knowledge of its internals.
Additionally, if the built-in \lstinline{SolverSystem} and its specializations are insufficient,
the user may develop a new specialization. This is subject to some constraints and requirements, including a specific naming style, inheritance requirements, and recompilation of the solution module \modulename{sol}.
\subsubsection{Standardized Problem Parameter Management}
The parameters of the problem are collected in a specialized object that manages the data fully describing a phase-field problem.
This information includes the initial conditions, the interval data, and the boundary conditions, the latter of which can be specified on an individual basis.
There are a number of phase-field initialization routines available to the user, each tuned by user-provided parameters. Initialization routines defined by the user can also be used.
This gives the user a unified approach to initializing, accessing, and passing problem information, simplifying the workflow and ensuring flexibility.
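For example, using the types that appear in the driver program of Figure~\ref{fig:driver_implementation}, a problem parameters object for a single field is assembled as follows.
\begin{lstlisting}
#include "symphas.h"

int main() {
    // One phase field; attach the data that describes the problem.
    symphas::problem_parameters_type pp{ 1 };
    symphas::init_data_type tdata{ Inside::UNIFORM, { -1, 1 } };
    pp.set_initial_data(&tdata);      // initial conditions
    pp.set_problem_time_step(0.5);    // time step
}
\end{lstlisting}
Boundary and interval data are attached in the same way through \lstinline{set_boundary_data()} and \lstinline{set_interval_data()}, as shown in the figure.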
\subsubsection{Input/Output}
The configuration module provides the user with the ability to write a configuration file with phase-field problem and data persistence parameters, which can be used to construct the problem parameters object.
The \textit{SymPhas}{} API includes data persistence capabilities through the \modulename{io} module, which introduces functions and objects for reading and writing phase-field data. The user is also provided with the ability to persist phase-field data at regular checkpoints throughout the simulation, which can later be used to recover simulation data from the last saved point if it is interrupted for any reason. This supports program reliability and convenience, particularly for extended simulations.
Currently, there are three output/input formats: 1) Plain text matrix (the matrix format is amenable to plotting utilities such as gnuplot), 2) plain text column (an ordered list of vectors and values), and 3) binary output in the xdrfile format, popularized by GROMACS \cite{Lindahl2021}. This functionality is available to the user when \modulename{io} is compiled with \textit{SymPhas}{} through CMake.
Input can also be given to \textit{SymPhas}{} through the command line; this method allows configuring some program-level parameters. Unlike the configuration file, this is a base component of \textit{SymPhas}{}. Command line parameters allow \textit{SymPhas}{} to change some of its behavior, largely with regard to initial condition generation. The basic set of program-level parameters is introduced in \modulename{lib}, and \modulename{io} introduces more parameters. The list of all configurable parameters and their details is provided in the manual.
\begin{figure}
\centering
\scriptsize
% Inkscape-generated text overlay omitted. The diagram contains three panels:
% ``Solve Phase-Field Problem'' (Initialize configuration, [Load Checkpoint?]
% with Yes: Read backup file), ``Driver'' (Define problem parameters,
% Initialize Model list, Select next Model, [All models simulated?]), and
% ``Model'' (Model parameters, Update system (update), Compute dynamics
% (equation), Time evolve (step), [Reached final index?], Persist solution
% data).
\includegraphics[width=\linewidth,page=1]{implementation_small_v2.pdf}%
\caption{Control flow diagram for the \textit{SymPhas}{} driver used to run simulations in this work. A list of models is generated using parameters from a configuration file; the number of models initialized depends on the configuration.
The \textit{SymPhas}{} library provides a function to perform the solution loop illustrated under \lstinline{Model}.
}
\label{fig:flow4}
\end{figure}%
\begin{figure}
\centering
\footnotesize
\begin{lstlisting}
#include "symphas.h"
#define psi op(1)
#define dpsi dop(1)
MODEL(EX, (SCALAR), MODEL_DEF(
dpsi = lap(psi) + (c1 - c2 * psi * psi) * psi))
int main(int argc, char* argv[]) {
double dt = 0.5;
symphas::problem_parameters_type pp{ 1 };
symphas::b_data_type bdata;
symphas::interval_data_type vdata;
symphas::init_data_type tdata{ Inside::UNIFORM, { -1, 1 } };
symphas::interval_element_type interval;
interval.set_interval_count(0, 80, 128);
bdata[Side::LEFT] = BoundaryType::PERIODIC;
bdata[Side::RIGHT] = BoundaryType::PERIODIC;
bdata[Side::TOP] = BoundaryType::PERIODIC;
bdata[Side::BOTTOM] = BoundaryType::PERIODIC;
vdata[Axis::X] = interval;
vdata[Axis::Y] = interval;
pp.set_boundary_data(&bdata);
pp.set_initial_data(&tdata);
pp.set_interval_data(&vdata);
pp.set_problem_time_step(dt);
model_EX_t<2, SolverSP<Stencil2d2h<5, 9, 6>>> model{ pp };
symphas::find_solution(model, dt, 100);
}
\end{lstlisting}
\caption{Example of a simple driver program which includes the model definition.
A phase-field problem with a single order parameter is defined under the model name \lstinline{EX}. System boundaries are defined to be periodic on each edge, and initial conditions are set by seeding values from the uniform distribution $\mathcal{U}(-1, 1)$. Identical $x$ and $y$ intervals are defined and provided, resulting in a square $128\times128$ grid. When the model is created, the solver (\lstinline{SolverSP}) is passed as a template type parameter and the \lstinline{find_solution} function is called to perform 100 solver iterations. This program can be compiled through CMake or by providing the directories of the installed headers and libraries to gcc.}
\label{fig:driver_implementation}
\end{figure}
\subsection{Implementation} \label{methods:implementation}
The entire \textit{SymPhas}{} API is available on GitHub (\texttt{https://github.com/SoftSimu/SymPhas}) alongside two solvers (forward Euler, Section~\ref{sec:feuler}, and semi-implicit Fourier spectral, Section~\ref{sec:spectral}), model definitions and driver file examples.
The program control flow of the driver file
used to generate the simulations in this paper is illustrated in Figure~\ref{fig:flow4}, and is found in the directory \lstinline{examples/simultaneous-configs} relative to the source code root.
This driver performs several steps, including data persistence, as part of data collection, but a fully functional driver file can be as simple as the one shown in Figure~\ref{fig:driver_implementation}, included with the source code in \lstinline{examples/simple-driver}. It is presented here to illustrate relevant \textit{SymPhas}{} API elements, but primarily to demonstrate its ease of use.
\subsubsection{Forward Euler Solver}
\label{sec:feuler}
The forward Euler method is a well-known explicit numerical method for partial differential equations. It is a first-order method and is known to suffer from instabilities, especially when solving stiff systems. It is provided as a baseline method.
Since it is well known and taught in virtually every course on numerical methods, we will not discuss it further but refer the reader to standard references such as Press \textit{et al.}~\cite{Press1992}.
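For reference, the update applied at every grid point is simply
\begin{equation*}
\psi^{n+1} = \psi^{n} + \Delta t \left.\frac{\partial \psi}{\partial t}\right|_{t = n\Delta t}\,,
\end{equation*}
where the right-hand side derivative is evaluated from the equation of motion at time index $n$.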
\subsubsection{Semi-Implicit Spectral Solver}
\label{sec:spectral}
Consider a phase-field problem for the order parameter $\psi = \psi(\vec{x},t)$; the equation of motion for this problem may be expressed in the form
\begin{equation}
\frac{\partial \psi}{\partial t} = \mathcal{L}(\nabla^{n})\left\{\psi\right\} + \sum_i{{\mathcal{N}}_i(\nabla^{m_i})\left\{ f_i(\psi)\right\}}\,,
\label{eq:spectralphasefield}
\end{equation}
where $\mathcal{L}$ is a linear combination of derivatives up to order $n$ applied to $\psi$, and each term in the sum over $i$ consists of a unique linear differential operator, $\mathcal{N}_i$, containing derivatives up to order $m_i$, applied to a nonlinear function $f_i$.
Under periodic boundary conditions, the semi-implicit Fourier spectral solver approximates the solution for $\psi$ by first applying the Fourier transform to Equation~(\ref{eq:spectralphasefield}):
\begin{align}
\frac{\partial \hat{\psi}_{\vec{k}}}{\partial t} &= {L(k^n)}\hat{\psi}_{\vec{k}} + \sum_i{{N_i}(k^{m_i}) \hat{f}_{i}(\psi)_{\vec{k}}}
\,,
\label{eq:phasefieldfourier}
\end{align}
where $\hat{\phantom{a}}$ indicates the Fourier transform of the respective term, $\hat{\psi}_{\vec{k}} = \hat{\psi}(\vec{k}, t)$, $\vec{k}$ is a vector in Fourier space and $k = |\vec{k}|$. Also, since $\vec{\nabla} \rightarrow i\vec{k}$ (and correspondingly $\nabla^2 \rightarrow -k^2$) under a Fourier transform in an infinite domain, the linear operator $\mathcal{L}$ becomes the function ${L}(k^n)$, a linear combination of the Fourier-transformed derivatives, and likewise each $\mathcal{N}_i$ becomes ${N}_i(k^{m_i})$.
A difference scheme \cite{Provatas2010} for Equation~(\ref{eq:phasefieldfourier}) is determined by solving it as a linear ordinary differential equation and approximating to first order, yielding
\begin{align}
\hat{\psi}_{\vec{k}}(t + \Delta t) \approx A\hat{\psi}_{\vec{k}}(t) + B\sum_i N_i(k^{m_i}) \hat{f}_i(\psi)_{\vec{k}}\,,
\label{eq:spectralscheme}
\end{align}
where
\begin{gather}
A = e^{{L}(k^n)\Delta t} \quad \text{and} \quad B = \frac{e^{{L}(k^n)\Delta t} - 1}{{L}(k^n)}\,.
\label{eq:spectraloperators}
\end{gather}
The spectral solver produces Equation~(\ref{eq:spectralscheme}) from any given equation of motion, a significant advantage that generalizes the spectral solver to a multitude of problems and
demonstrates the adaptability of the solver and of the program design in general.
The spectral solver also computes the values of $A$ and $B$ \textit{a priori} to minimize the runtime.
The procedure is as follows:
\begin{enumerate}
\item Split the equation into linear and nonlinear parts.
Call the linear part $\mathcal{L}$ and the nonlinear part $\mathcal{N}$, analogous to the notation in Equation~(\ref{eq:spectralphasefield}).
\item Further split the linear part by separating out terms which do not involve $\psi$, along with terms that cannot be expressed using a linear operator.
Call the expression formed by these terms $\mathcal{L}_*$ and the expression formed by all other terms $\mathcal{L}_\psi$. Thus, $\mathcal{L} = \mathcal{L}_\psi + \mathcal{L}_*$.
\item Obtain ${L}_\psi$ by removing $\psi$ from terms in $\mathcal{L}_{\psi}$ and interchanging the derivative terms with the Fourier space transformed derivatives. Generate values for $A$ by evaluating ${L}_\psi$.
\item Create the new expression ${{L}}_*$ by exchanging all order parameters in $\mathcal{L}_*$ with the Fourier transformed counterparts.
\item \label{step:make_D} Let ${{L}}_*$ be represented as the sum of its unique derivatives $d_n$ applied to expressions $e_n$, viz.: $$ {L}_* = \sum_n{d_n \cdot e_n}\,. $$
Form the set $\mathbf{D}_* = \{(d_n, e_n) \mid n\}$.
\item Apply Step~\ref{step:make_D} for the terms of $\mathcal{N}$, producing the list $\mathbf{D}_{\mathcal{N}}$. Form the set $\mathbf{D}_N = \{(d_n, \hat{e}_n) \mid (d_n, e_n) \in \mathbf{D}_{\mathcal{N}} \}$ where $\hat{\phantom{a}}$ denotes the Fourier transform of the respective term.
\item
Define the following sets:
\begin{align}
\mathbf{D}_1 &= \{(d_n, e_n, e_m) \mid (d_n, e_n) \in \mathbf{D}_{*}, (d_m, e_m) \in \mathbf{D}_{{N}}, d_n = d_m\} \,, \\
\mathbf{D}_2 &= \{(d_n, e_n, 0) \mid (d_n, e_n) \in \mathbf{D}_{*}, (d_m, e_m) \in \mathbf{D}_{{N}}, d_n \not\in \{d\}_m\} \,, \\
\mathbf{D}_3 &= \{(d_m, 0, e_m) \mid (d_m, e_m) \in \mathbf{D}_{{N}}, (d_n, e_n) \in \mathbf{D}_{*}, d_m \not\in \{d\}_n\} \,.
\end{align}
Define $\mathbf{D} = \mathbf{D}_1 \,\cup\,\mathbf{D}_2\,\cup\,\mathbf{D}_3$. In other words, generate elements of $\mathbf{D}$ by pairing together the expressions in elements from $\mathbf{D}_{*}$ and $\mathbf{D}_{N}$ that match based on the derivatives $d_i$, and if there is no matching derivative in the other set, use 0 in place of the associated expression.
\item Define two sequences ${B}_i = B\hat{d}_i$ where $B$ is as defined in Equation~(\ref{eq:spectraloperators}) and ${E}_i = (e_{n_i}, {e}_{m_i})$, using the elements: $$(d_i, e_{n_i}, e_{m_i}) \in \mathbf{D}.$$
With respect to Equation~(\ref{eq:spectralscheme}), ${B}_i = B {N}_i(k)$ and $e_{n_i} + {e}_{m_i} = \hat{f}_i(\psi)_{\vec{k}}$.
\item Return the set $\{A, \{{B}\}_i, \{{E}\}_i\}$.
\end{enumerate}
The scheme applied by the implemented spectral solver is then given by:
\begin{equation}
\hat{\psi}^{n+1} = A\hat{\psi}^{n} + \sum_i {B}_i \left( {E}^0_i + {E}^1_i \right)\,,
\end{equation}
where $\hat{\psi}^n$ is the approximate solution of $\hat{\psi}(\vec{k}, t)$ at $t = n\Delta t$ for time step $\Delta t$, and ${E}^0_i$ and ${E}^1_i$ are the first and second elements of ${E}_i$, respectively.
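As a worked illustration of the procedure (carried out by hand here; the solver performs the equivalent manipulation automatically), consider the Allen--Cahn dynamics of Equation~(\ref{eq:modela}), $\partial\psi/\partial t = \nabla^2\psi + c_1\psi - c_2\psi^3$. The linear part acting on $\psi$ is $\mathcal{L}_\psi = \nabla^2 + c_1$, there are no remaining linear terms ($\mathcal{L}_* = 0$), and the nonlinear part is the single term $-c_2\psi^3$ with the identity as its differential operator. Hence ${L}_\psi(k) = c_1 - k^2$, and the scheme reduces to
\begin{equation*}
\hat{\psi}^{n+1} = e^{(c_1 - k^2)\Delta t}\,\hat{\psi}^{n} - \frac{e^{(c_1 - k^2)\Delta t} - 1}{c_1 - k^2}\,c_2\,\widehat{\left(\psi^3\right)}^{\,n}\,.
\end{equation*}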
\section{Simulations and Verification} \label{sec:results}
Using the semi-implicit spectral solver, we performed simulations of the Allen--Cahn equation (Model A in the Hohenberg--Halperin classification \cite{Hohenberg1977}) describing a non-conserved order parameter $\psi = \psi(\vec{x}, t)$ \cite{Allen1975}, the Cahn--Hilliard equation (Model B in the Hohenberg--Halperin classification \cite{Hohenberg1977}) describing a conserved order parameter $m = m(\vec{x}, t)$, and a two order-parameter problem which couples a conserved order parameter $m = m(\vec{x}, t)$ with a non-conserved order parameter $\psi = \psi(\vec{x}, t)$ (Model C in the Hohenberg--Halperin classification \cite{Hohenberg1977}). The systems were simulated in both two and three dimensions. Since these systems are well known and described in the literature, and since the main goal here is to demonstrate the software, we will not describe the above models in more detail but refer the reader to standard references such as the classic article by Hohenberg and Halperin \cite{Hohenberg1977} and the book by Provatas and Elder \cite{Provatas2010}. We also include a simulation of the phase-field crystal model of Elder \textit{et al.}~\cite{Elder2002}.
The Allen--Cahn equation describes the dynamics of a non-conserved order parameter $\psi = \psi(\vec{x}, t)$ \cite{Hohenberg1977,Provatas2010, Allen1975} as
\begin{equation}
\frac{\partial \psi}{\partial t} = \nabla^2 \psi + c_1\psi - c_2\psi^3\,.
\label{eq:modela}
\end{equation}
The Cahn--Hilliard equation, on the other hand, describes the phase separation dynamics of a conserved order parameter $m = m(\vec{x}, t)$ \cite{Hohenberg1977,Provatas2010,Cahn1958} as
\begin{equation}
\frac{\partial m}{\partial t} = -\nabla^4 m - \nabla^2 \left(c_1m - c_2m^3 \right)\,.
\label{eq:modelb}
\end{equation}
In addition to these, we simulated a two-order-parameter problem which couples a conserved order parameter $m = m(\vec{x}, t)$ with a non-conserved order parameter $\psi = \psi(\vec{x}, t)$ (Model C~\cite{Hohenberg1977}) through a nonlinear term in the free energy functional:
\begin{align}
\dfrac{\partial \psi}{\partial t} &=
\nabla^2 \psi +c_1\psi - c_2\psi^3 + 2c_5 \psi m\\
\dfrac{\partial m}{\partial t} &= -\nabla^4 m - \nabla^2 \left(c_3 m - c_4 m^3 + c_5 \psi^2\right).
\label{eq:modelc}
\end{align}
This model has been used to describe eutectic growth~\cite{Elder1994}.
The phase-field crystal model was developed to add periodic structure to the standard phase-field free energy functional in order to represent elastic and plastic interactions in a crystal \cite{Elder2002, Elder_2004}.
For a conserved density field $n=n(\vec{x}, t)$, the dynamical equation of a phase-field crystal model is given by:
\begin{equation}
\frac{\partial n}{\partial t} = \nabla^2\left( n^2 + n^3 + \left((q_0 + \nabla^2)^2 - \varepsilon\right)n \right). \label{eq:pfc}
\end{equation}
Coarsening of the phase field at any point during the transition can be quantified by the radial average of the static structure factor, $S(k)$. The static structure factor, $S(\vec{k})$, characterizes the scattering of incident radiation from a solid \cite{Lovesey1984, Squires1978}. In the Born approximation for periodic systems, $S(\vec{k}) = | \hat{\rho}_{\vec{k}} |^2$ \cite{Lindgard1994}. The term $\hat{\rho}_{\vec{k}}$ is the Fourier transform of $\rho(\vec{r})$, the particle occupancy at position $\vec{r}$ in the lattice, which is 1 in the solid phase and 0 elsewhere. Correspondingly, for computing the structure factor of the continuous order parameter field $\psi(\vec{r})$, the positive phase is chosen to represent solidification, that is, $\rho = 1$ where $\psi > 0$ and $\rho = 0$ otherwise.
For both the conserved and non-conserved phase-field models, the radial average of the structure factor should correspond to Porod's law~\cite{Bray_2002}:
\begin{equation}
S(k) \sim (Lk^{d+1})^{-1}\,,
\label{eq:porodslaw}
\end{equation}
where $L$ is the size of the system and $d$ is the dimension.
We use this to establish that \textit{SymPhas}{} generates correct solutions by validating that the relationship holds for $S(k)$ measured from simulations of the Allen--Cahn and Cahn--Hilliard models, representing non-conserved and conserved dynamics, respectively~\cite{Puri1997}. We proceed by computing $S(k)$ from the average of 10 independent simulations of these models in both 2D and 3D and verifying that the scaling is consistent with Porod's law, Equation~(\ref{eq:porodslaw}), for the corresponding dimension.
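The radial average itself is straightforward post-processing (a sketch of our own, not a \textit{SymPhas}{} API): given $|\hat{\psi}_{\vec{k}}|^2$ on an $N \times N$ Fourier grid, values are binned by $k = |\vec{k}|$ and averaged within each bin.
\begin{lstlisting}
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// S2d holds |psi_hat|^2 in row-major order with the usual FFT frequency
// layout; returns the per-bin average of S over shells of constant |k|.
std::vector<double> radial_average(std::vector<double> const& S2d,
                                   std::size_t N, std::size_t nbins) {
    std::vector<double> sum(nbins, 0.0);
    std::vector<std::size_t> count(nbins, 0);
    double kmax = std::sqrt(2.0) * (N / 2.0);  // largest |k| on the grid
    for (std::size_t j = 0; j < N; ++j)
        for (std::size_t i = 0; i < N; ++i) {
            // signed frequencies kx, ky from array indices
            double kx = (i <= N / 2) ? double(i) : double(i) - double(N);
            double ky = (j <= N / 2) ? double(j) : double(j) - double(N);
            double k = std::sqrt(kx * kx + ky * ky);
            std::size_t b = std::min(nbins - 1,
                static_cast<std::size_t>(k / kmax * nbins));
            sum[b] += S2d[j * N + i];
            ++count[b];
        }
    for (std::size_t b = 0; b < nbins; ++b)
        if (count[b] > 0) sum[b] /= double(count[b]);
    return sum;
}
\end{lstlisting}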
The implementations of the simulated models with the \textit{SymPhas}{} model definition macros are provided in Figure~\ref{fig:models_abc_definitions}. As the figure shows, models are defined in a compact and intuitive way.
The initial conditions of models A, B and C are uniformly distributed noise, and the equation parameters $c_1$, $c_2$, $c_3$, $c_4$ and $c_5$ are set to unity. For the phase-field crystal model, 128 randomly arranged seeds containing large fluctuations are initially distributed throughout the system, and the parameters of the equation and simulation were taken from Elder \textit{et al.}~\cite{Elder2002}.
The simulation results for the non-conserved Allen--Cahn model (Model A, Equation~(\ref{eq:modela})) are displayed in Figure~\ref{fig:modelab:a} and results for the conserved Cahn--Hilliard model (Model B, Equation~(\ref{eq:modelb})) are displayed in Figure~\ref{fig:modelab:b}. The structure factor results of these two models are presented in Figure~\ref{fig:model-a_sf} and Figure~\ref{fig:model-b_sf}, respectively. The results for the eutectic model consisting of two coupled equations of motion~\cite{Elder1994} (Model C, Equation~(\ref{eq:modelc})) are shown in Figure~\ref{fig:modelc} and the phase-field crystal model~\cite{Elder2002} (Equation~(\ref{eq:pfc})) with a conserved field is displayed in Figure~\ref{fig:pfc}.
\begin{figure}
\centering
\begin{subfigure}{1.\textwidth}
\footnotesize
\begin{lstlisting}
MODEL(MA, (SCALAR),
MODEL_DEF(
dpsi = lap(psi) + (c1 - c2 * psi * psi) * psi))
\end{lstlisting}
\caption{}
\label{fig:modeldef:a}
\end{subfigure}\hfill%
\begin{subfigure}{1.0\textwidth}
\footnotesize
\begin{lstlisting}
MODEL(MB, (SCALAR),
MODEL_DEF(
    dm = -bilap(m) - lap((c1 - c2 * m * m) * m)))
\end{lstlisting}
\caption{}
\label{fig:modeldef:b}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\footnotesize
\begin{lstlisting}
MODEL(MC, (SCALAR, SCALAR),
MODEL_PREAMBLE_DEF(
( auto psi3 = c2 * psi * psi * psi;
auto m3 = c4 * m * m * m; ),
dpsi = lap(psi) + c1 * psi - psi3 + lit(2.) * c5 * psi * m,
dm = -bilap(m) - lap(c3 * m - m3 + c5 * psi * psi)))
\end{lstlisting}
\caption{}
\label{fig:modeldef:c}
\end{subfigure}\hfill%
\begin{subfigure}{1.0\textwidth}
\footnotesize
\begin{lstlisting}
PFC_TYPE(PC,
DEFAULTS(
DEFAULT_DYNAMIC(PFC_CONSERVED)
),
(SCALAR))
\end{lstlisting}
\caption{}
\label{fig:modeldef:pfc}
\end{subfigure}
\caption{Macro implementations of (\subref{fig:modeldef:a}) the Allen--Cahn model \cite{Allen1975} from Equation~(\ref{eq:modela}), (\subref{fig:modeldef:b}) the Cahn--Hilliard model \cite{Cahn1958} from Equation~(\ref{eq:modelb}), and (\subref{fig:modeldef:c}) Model C \cite{Elder1994} from Equation~(\ref{eq:modelc}).
The order parameter names in the macro specification are chosen to correspond to the variable names in the respective equations of motion, and the keys \lstinline{c1} to \lstinline{c5} correspond to the coefficients $c_1$ to $c_5$. (\subref{fig:modeldef:pfc}) The phase-field crystal model (Equation~(\ref{eq:pfc})) specification uses different macro keywords that allow selecting the phase-field crystal model as a type of problem, dispensing with equation specification and allowing selection of the dynamics.
}
\label{fig:models_abc_definitions}
\end{figure}%
\begin{figure}
\centering
\begin{minipage}{0.48\textwidth}
%
\begin{subfigure}{\textwidth}
\centering
% Gnuplot epslatex overlay omitted; panels labeled ``index $500$'',
% ``index $2,000$'' and ``index $20,000$'', with a color scale from $-1$ to $1$.
\includegraphics{model-a-2d_data0_20000.pdf}%
\caption{}
\label{fig:modelab:a:2d}
\end{subfigure}
%
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.33\textwidth]{modela50.png}\hfill%
\includegraphics[width=0.33\textwidth]{modela200.png}\hfill%
\includegraphics[width=0.33\textwidth]{modela2000.png}
\caption{}
\label{fig:modelab:a:cs}
\end{subfigure}\vspace{10pt}
%
\begin{subfigure}{\textwidth}
\centering
% Gnuplot epslatex overlay omitted; panels labeled ``index $50$'',
% ``index $200$'' and ``index $2,000$'', with a color scale from $-1$ to $1$.
\includegraphics{model-a-3d_data0_2000.pdf}%
\caption{}
\label{fig:modelab:a:3d}
\end{subfigure}
\caption{Snapshots from simulations of a 2D ($1024\times1024$) and 3D ($256 \!\times\! 256 \!\times\! 256$) Allen--Cahn model \cite{Allen1975}:
(\subref{fig:modelab:a:2d}) the 2D system is shown at solution indices 500, 2,000 and 20,000. (\subref{fig:modelab:a:cs}) the 3D system is visualized using VTK \cite{vtk} at solution indices 50, 200 and 2,000 with a cross section highlighted for visibility, where (\subref{fig:modelab:a:3d}) shows the cross sections.
The simulations use a time step of $\Delta t = 0.25$ and are initially seeded with random values between $-1$ and $1$.
}
\label{fig:modelab:a}
\end{minipage}\hfil
\begin{minipage}{0.48\textwidth}
\centering
\begin{subfigure}{\textwidth}
\centering
% Gnuplot epslatex overlay omitted; panels labeled ``index $5,000$'',
% ``index $20,000$'' and ``index $200,000$'', with a color scale from $-1$ to $1$.
\includegraphics{model-b-2d_data0_200000.pdf}%
\caption{}
\label{fig:modelab:b:2d}
\end{subfigure}
%
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.33\textwidth]{modelb500.png}\hfill%
\includegraphics[width=0.33\textwidth]{modelb2000.png}\hfill%
\includegraphics[width=0.33\textwidth]{modelb20000.png}
\caption{}
\label{fig:modelab:b:cs}
\end{subfigure}\vspace{10pt}
%
\begin{subfigure}{\textwidth}
\centering
% Gnuplot epslatex overlay omitted; panels labeled ``index $500$'',
% ``index $2,000$'' and ``index $20,000$'', with a color scale from $-1$ to $1$.
\includegraphics{model-b-3d_data0_20000.pdf}%
\caption{}
\label{fig:modelab:b:3d}
\end{subfigure}
\caption{Snapshots from simulations of a 2D ($1024\times1024$) and 3D ($128 \!\times\! 128 \!\times\! 128$) Cahn--Hilliard model \cite{Cahn1958}: (\subref{fig:modelab:b:2d}) the 2D system is shown at solution indices 5,000, 20,000 and 200,000; and (\subref{fig:modelab:b:cs}) the 3D system is visualized using VTK \cite{vtk} at solution indices 500, 2,000 and 20,000 with a cross section highlighted for visibility, where (\subref{fig:modelab:b:3d}) shows the cross sections.
The simulations use a time step of $\Delta t = 0.05$, and are initially seeded with random values between $-1$ and $1$.}
\label{fig:modelab:b}
%
\end{minipage}
\end{figure}%
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
% Gnuplot epslatex overlay omitted; panel ``Model A, 2D'': $S(k)$ vs.\ $k$
% (log--log) at indices 500, 2,000 and 20,000, with the Porod slope
% $S(k) \sim k^{-3}$.
\includegraphics{model-a-2d_sa0_20000.pdf}%
\caption{}
\label{fig:modela:sf2d}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
% Gnuplot epslatex overlay omitted; panel ``Model A, 3D'': $S(k)$ vs.\ $k$
% (log--log) at indices 50, 200 and 2,000, with the Porod slope
% $S(k) \sim k^{-4}$.
\includegraphics{model-a-3d_sa0_2000.pdf}%
\caption{}
\label{fig:modela:sf3d}
\end{subfigure}
\caption{The radially averaged structure factor, $S(\vec{k}) = | \hat{\psi}_{\vec{k}} |^2$, for the Allen--Cahn model \cite{Allen1975} in (\subref{fig:modela:sf2d}) 2D and (\subref{fig:modela:sf3d}) 3D, corresponding to the simulation parameters described in Figure~\ref{fig:modelab:a} and computed from the average of 10 simulations. The results demonstrate scaling consistent with Porod's law~(solid line, Equation~(\ref{eq:porodslaw})) for $d=2$ and $d=3$, respectively~\cite{Puri1997}.}
\label{fig:model-a_sf}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
% Gnuplot epslatex overlay omitted; panel ``Model B, 2D'': $S(k)$ vs.\ $k$
% (log--log) at indices 5,000, 20,000 and 200,000, with the Porod slope
% $S(k) \sim k^{-3}$.
\includegraphics{model-b-2d_sa0_200000.pdf}%
\caption{}
\label{fig:modelb:sf2d}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\begingroup
% [gnuplot epslatex boilerplate collapsed: panel ``Model B, 3D''; axes $S(k)$ vs.\ $k$;
%  legend: 500, 2,000, 20,000, and $S(k) \sim k^{-4}$]
\includegraphics{model-b-3d_sa0_20000.pdf}%
\endgroup
\caption{}
\label{fig:modelb:sf3d}
\end{subfigure}
\caption{The radially averaged structure factor, $S(\vec{k}) = | \hat{\psi}_{\vec{k}} |^2$, for the Cahn--Hilliard model \cite{Cahn1958} in (\subref{fig:modelb:sf2d}) 2D and (\subref{fig:modelb:sf3d}) 3D, corresponding to the simulation parameters described in Figure~\ref{fig:modelab:b} and computed using the average of 10 simulations. The results demonstrate scaling to Porod's law~(solid line, Equation~(\ref{eq:porodslaw})) for $d=2$ and $d=3$, respectively~\cite{Puri1997}.}
\label{fig:model-b_sf}
\end{figure}
\begin{figure}
\begin{minipage}{0.48\textwidth}
\centering
\begin{subfigure}{\textwidth}
\centering
\begingroup
% [gnuplot epslatex boilerplate collapsed: non-conserved field snapshots labelled
%  25,000; 100,000; 1,000,000, with a value scale from $-1$ to $1$]
\includegraphics{model-c-2d_data0_1000000.pdf}%
\endgroup
%
\vspace{15pt}
\begingroup
% [gnuplot epslatex boilerplate collapsed: conserved field snapshots labelled
%  25,000; 100,000; 1,000,000, with a value scale from $-1$ to $1$]
\includegraphics{model-c-2d_data1_1000000.pdf}%
\endgroup
\caption{}
\label{fig:modelc:2d}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.33\textwidth]{modelc02500.png}\hfill%
\includegraphics[width=0.33\textwidth]{modelc010000.png}\hfill%
\includegraphics[width=0.33\textwidth]{modelc0100000.png}
\includegraphics[width=0.33\textwidth]{modelc12500.png}\hfill%
\includegraphics[width=0.33\textwidth]{modelc110000.png}\hfill%
\includegraphics[width=0.33\textwidth]{modelc1100000.png}
\caption{}
\label{fig:modelc:3d}
\end{subfigure}
\caption{Snapshots from simulations of the 2D ($1024 \times 1024$) and 3D ($128 \times 128 \times 128$) Model C \cite{Elder1994}, Equation~(\ref{eq:modelc}):
(\subref{fig:modelc:2d}) the 2D system at solution indices 25,000, 100,000, and 1,000,000; (\subref{fig:modelc:3d}) the 3D system, visualized using VTK \cite{vtk} with a cross-section highlighted for visibility, at solution indices 2,500, 10,000, and 100,000.
The simulations use a time step of $\Delta t = 0.025$ and were initially seeded with random values between $-1$ and $1$.
The first row illustrates the evolution of the non-conserved field coalescing into droplet formations, while the conserved field in the second row represents the density of the material.
}
\label{fig:modelc}
\end{minipage}\hfill%
\begin{minipage}{0.48\textwidth}
\centering
\begin{subfigure}{\textwidth}
\centering
\begingroup
% [gnuplot epslatex boilerplate collapsed: panel with spatial axes from 0 to 800
%  and a value scale from $-1$ to $1$]
\includegraphics{pfc_finished.pdf}%
\endgroup
\caption{}
\label{fig:modelpfc:2d}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{pfc3d_side.png}
\caption{}
\label{fig:modelpfc:3d}
\end{subfigure}
\caption{Snapshots from simulations of the phase-field crystal model. Parameters were taken from Elder et al.~\cite{Elder2002}, $h= \pi/4$ and $\Delta t = 0.01$, and the constants in the dynamical equation (Equation~(\ref{eq:pfc})) were chosen to be $q_0=1$ and $\epsilon=0.1$. (\subref{fig:modelpfc:2d}) shows the 2D ($1024\times1024$) simulation at 700,000 iterations, representing approximately 700 diffusion time-lengths. (\subref{fig:modelpfc:3d}) shows the visualization of the 3D system after 70,000 iterations, where a view of the interior is provided by removing a portion of the system along a sloped plane.
}
\label{fig:pfc}
\end{minipage}
\end{figure}
\subsection{Performance}
Performance was measured on three different hardware and operating system platforms using Models A and C, with the solution taken after 10,000 iterations. Performance is reported as the runtime of the entire program execution,
in seconds. Since this includes all program activity between initialization and termination, the data covers both the time spent generating the spectral form and the time spent printing the final results to a file. All recorded measurements are listed in Table~\ref{table:performance}, and the details of the hardware platforms are listed in Table~\ref{table:hardware}.
Data was collected by repeating each simulation 10 times.
From the system sizes, the second test case is expected to take approximately 16 times longer than the first, and the third test case about as long as the second, for each model. Additionally, Model C runtimes are expected to be approximately twice those of Model A (although the derivatives are of a higher order and there is a coupling term). The data shows that the second test typically takes even longer relative to the first than this estimate suggests, which results from the processor's ability to cache the smaller system. Otherwise, the results correspond well to expectations.
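These expectations follow directly from the grid sizes:
\[
\frac{512 \times 512}{128 \times 128} = \frac{262{,}144}{16{,}384} = 16, \qquad 64 \times 64 \times 64 = 262{,}144 = 512 \times 512,
\]
so the second test case contains 16 times as many grid points as the first, and the third contains exactly as many as the second.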
\begin{table}
\caption{Runtime data was recorded for three different hardware and operating system environments, for two models and three different grid sizes. The entries of the table show the results of taking the minimum and maximum runtime values (in seconds) for each individual configuration across 10 individual simulations, in the format ``minimum value/maximum value''. The hardware and operating system specifications of the environments used are given in Table~\ref{table:hardware}.}
\footnotesize
\centering
\begin{tabular}{|c|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{Label} & \multicolumn{3}{c|}{Model A} & \multicolumn{3}{c|}{Model C} \\
& {$128 \times 128$} & {$512 \times 512$} & {$64 \times 64 \times 64$} & {$128 \times 128$} & {$512 \times 512$} & {$64 \times 64 \times 64$} \\ \hline
Win7 & 2.0/2.4 & 33.9/36.4 & 37.1/40.1 & 5.0/5.7 & 118.5/125.1 & 113.9/122.6 \\
Win10 & 1.7/1.8 & 32.6/33.4 & 31.1/31.6 & 4.1/4.2 & 102.8/106.3 & 96.7/97.3 \\
Arch & 2.0/2.0 & 49.8/50.1 & 45.8/46.2 & 5.5/5.7 & 175.0/177.8 & 158.7/160.3 \\
\hline
\end{tabular}
\label{table:performance}
\end{table}
\begin{table}
\caption{The hardware and operating system specifications of the environments used to generate runtime data.}
\footnotesize
\centering
\begin{tabular}{|c|l|l|}
\hline
Label & {Clock speed} & {OS (Compiler)} \\ \hline
Win7 & i7-6800K (3.40 GHz) & Windows 7 (msvc 14.28) \\
Win10 & i7-7700HQ (2.80 GHz) & Windows 10 (msvc 14.27) \\
Arch & i5-6500 (3.20 GHz) & Arch Linux (gcc 10.2) \\
\hline
\end{tabular}
\label{table:hardware}
\end{table}
\section{Requirements and Limitations}
\subsection{Hardware and Software Environment}
Table~\ref{table:compilers} lists compilers, operating systems and target architectures that were used in testing \textit{SymPhas}{}.
The minimum C++{} standard is C++{}17.
The minimum gcc version required to build \textit{SymPhas}{} is gcc 7.5. This version notably introduces constructor template deduction guides, which are a necessary part of the compile-time expression algebra.
The latest Microsoft Visual C++{} Compiler (abbreviated as MSVC++{} or MSVC) version is highly recommended. As a result of the heavy usage of meta-programming, earlier versions of MSVC++{} are not guaranteed to successfully build \textit{SymPhas}{}.
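To illustrate the language requirement mentioned above, the snippet below shows a constructor template deduction guide of the kind the compile-time algebra relies on; the \lstinline{Sum} type is a purely illustrative stand-in, not the actual \textit{SymPhas}{} expression type:
\begin{lstlisting}[language=C++]
#include <utility>

// Minimal expression node; 'Sum' is illustrative, not SymPhas API.
template <typename L, typename R>
struct Sum {
    L lhs;
    R rhs;
    Sum(L l, R r) : lhs(std::move(l)), rhs(std::move(r)) {}
};

// C++17 deduction guide: template arguments are deduced from the
// constructor call, so expression types never need to be spelled out.
template <typename L, typename R>
Sum(L, R) -> Sum<L, R>;

int main() {
    Sum s{1.0, 2};  // deduced as Sum<double, int>; requires C++17 (gcc >= 7.5)
    (void)s;
}
\end{lstlisting}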
When compiling \textit{SymPhas}{}, four of the modules are required for the minimum build, and there is only one required external dependency, FFTW. These are listed in Table~\ref{table:dependencies} alongside the minimum tested versions.
\begin{table}
\caption{The environments used for testing and compiling \textit{SymPhas}{}. The target architecture is specified in the third column. The minimum gcc version required to build \textit{SymPhas}{} is gcc 7.5. The latest MSVC++{} version is highly recommended.}
\centering
\begin{tabular}{|l|l|l|l|}
\hline
Compiler & Operating System & Target Arch. & Compiles? \\
\hline
MSVC++{} 14.28 & Windows 7 Professional (64-bit) & x64 & Yes \\
MSVC++{} 14.28 & Windows 10 Home (64-bit) & x64 & Yes \\
clang 11.0.1 & Arch Linux (64-bit) & x86-64 & Yes \\
g++ 10.2 & Arch Linux (64-bit) & x86-64 & Yes \\
g++ 7.5 & Arch Linux (64-bit) & x86-64 & Yes \\
g++ 5.5 & Arch Linux (64-bit) & x86-64 & No \\
\hline
\end{tabular}
\label{table:compilers}
\end{table}
\begin{table}
\caption{List of the dependencies of each module. Modules that are required in the base build of \textit{SymPhas}{} are emphasized using bold print. Optional modules are always an optional dependency. The version of the external dependency with which \textit{SymPhas}{} has been tested is indicated in parentheses. The dependency tbb enables parallelism when using the execution header.}
\centering
\begin{threeparttable}
\begin{tabular}{|l|l|l|}
\hline
Module & Internal Dependency & External Dependency \\
\hline
\textbf{\modulename{lib}} & None & FFTW (3.3.7) \cite{Frigo_2005}, tbb* \\
\textbf{\modulename{datatypes}} & \modulename{lib} & None \\
\textbf{\modulename{sym}} & \modulename{datatypes} & None \\
\textbf{\modulename{sol}} & \modulename{sym}, \modulename{io} & VTK \cite{vtk} (9.0)** \\
\modulename{io} & \modulename{datatypes} & libxdrfile (2.1.2)** \\
\modulename{conf} & \modulename{sol}, \modulename{io} & None \\
\hline
\end{tabular}
\begin{tablenotes}
\item
\small
*Library is only required for compiling in Linux.\\
**Library is optional.
\end{tablenotes}
\end{threeparttable}
\label{table:dependencies}
\end{table}
\subsection{Limitations}
When the equations of motion and the equations for virtual variables are written, corresponding expression trees are constructed by the symbolic algebra functionality. However, virtual variables used in the equations of motion are substituted directly as data variables rather than as expression trees, meaning that the expression tree associated with a virtual variable is not used to construct the expression tree of the equation of motion.
One implication of this design is that computing the derivative of a virtual variable that is itself defined in terms of a derivative will apply a stencil twice, resulting in a poor approximation.
This also means that when using the spectral solver with equations of motion involving virtual variables, the spectral operators may be malformed if a virtual variable is written using a term that is necessary to correctly construct the operator. Refer to Section~\ref{sec:spectral}, which explains the procedure for constructing the operators.
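The stencil-of-stencil effect can be reproduced outside of \textit{SymPhas}{} in a few lines of plain C++ (an illustrative sketch, not \textit{SymPhas}{} API): once the Laplacian of a field is stored in a buffer, as happens when a virtual variable is substituted as data, the derivative operator must be applied to already stencil-derived values instead of being combined symbolically into a single higher-order operator chosen by the solver.
\begin{lstlisting}[language=C++]
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int n = 64;
    const double pi = 3.14159265358979323846;
    const double h = 2.0 * pi / n;
    std::vector<double> psi(n), var(n), dvar(n);
    for (int i = 0; i < n; ++i) psi[i] = std::sin(i * h);
    auto at = [n](int i) { return (i % n + n) % n; };  // periodic index

    // Virtual variable: var = laplacian(psi), stored as plain data.
    for (int i = 0; i < n; ++i)
        var[i] = (psi[at(i - 1)] - 2 * psi[i] + psi[at(i + 1)]) / (h * h);

    // Derivative of the virtual variable: the same second-order stencil is
    // applied again to 'var', producing a composed (stencil-of-stencil)
    // fourth-derivative estimate fixed by the second-order stencil, rather
    // than a dedicated higher-order stencil selected by the solver.
    for (int i = 0; i < n; ++i)
        dvar[i] = (var[at(i - 1)] - 2 * var[i] + var[at(i + 1)]) / (h * h);

    // For psi = sin(x), the exact fourth derivative is sin(x).
    std::printf("composed: %f  exact: %f\n", dvar[1], std::sin(h));
}
\end{lstlisting}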
The Euler solver assumes that it can approximate derivatives of any order, but it is only able to compute derivatives for which stencils are implemented. See Section~\ref{methods:objects:stencils} for the exhaustive list of stencils.
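For concreteness, a single forward-Euler update of the 1D Allen--Cahn equation, written out directly with the three-point Laplacian stencil, looks as follows (a plain C++ sketch for illustration, not the \textit{SymPhas}{} solver itself); a derivative for which no stencil is implemented simply could not be expanded this way:
\begin{lstlisting}[language=C++]
#include <vector>

// One forward-Euler step of d(psi)/dt = lap(psi) + psi - psi^3 in 1D,
// using the three-point Laplacian stencil with periodic boundaries.
void euler_step(std::vector<double>& psi, double h, double dt) {
    const int n = static_cast<int>(psi.size());
    std::vector<double> next(n);
    for (int i = 0; i < n; ++i) {
        const double left  = psi[(i - 1 + n) % n];
        const double right = psi[(i + 1) % n];
        const double lap   = (left - 2.0 * psi[i] + right) / (h * h);
        next[i] = psi[i] + dt * (lap + psi[i] - psi[i] * psi[i] * psi[i]);
    }
    psi.swap(next);
}
\end{lstlisting}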
Only scalar values can be provided as model arguments to initialize \lstinline{c1}, \lstinline{c2}, $\ldots$; other types, such as matrices or complex values, are not supported. When the model equations require such types, an appropriate number of scalar arguments should be provided and the corresponding term constructed in the model preamble.
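For example, a complex coefficient can be passed as two scalar arguments and recombined before use; the helper below is a hypothetical illustration of the recombination that would otherwise be written in the model preamble:
\begin{lstlisting}[language=C++]
#include <complex>

// Hypothetical helper: two scalar model arguments c1 (real part) and
// c2 (imaginary part) recombined into a single complex coefficient.
std::complex<double> make_coefficient(double c1, double c2) {
    return {c1, c2};
}
\end{lstlisting}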
\section{Conclusions}
With \textit{SymPhas}{}, we have developed a high-performance, highly flexible API that allows a user to simulate phase-field problems in a straightforward way. This applies to any phase-field problem that may be formulated field-theoretically, including reaction-diffusion systems. Simulated models are written using the equations of motion in an unconstrained form, specified through a simple grammar
that can interpret mathematical constructs. Here, \textit{SymPhas}{} was tested in both two and three dimensions against the well-known Cahn--Hilliard~\cite{Cahn1958} and Allen--Cahn~\cite{Allen_1972} models, a model of two coupled equations of motion for eutectic systems~\cite{Elder1994}, and a phase-field crystal model~\cite{Elder2002}. The results demonstrate that \textit{SymPhas}{} produces correct solutions of phase-field problems.
Overall, \textit{SymPhas}{} successfully applies a modular design
and supports the user by providing individual modules for each functional component alongside highly detailed documentation.
With the growing interest in phase-field methods outside traditional materials and microstructure modeling, subjects such as wave propagation in cardiac activity and the properties of biological tissues are other potential application fields \cite{Courtemanche_1996, Raina_2015, Gueltekin2016}. \textit{SymPhas}{} offers
a very short definition-to-results workflow,
facilitating rapid implementation of new models.
In addition to being a tool for direct simulations, \textit{SymPhas}{} can generate large volumes of training data for new machine learning simulations of phase-field models. For instance, this type of approach has been applied in recent works that focused on formulating a free energy description from the evolving microstructure \cite{Teichert_2019, Zhang_2020} and in developing machine-guided microstructure evolution models for spinodal decomposition \cite{OcaZapiain2021}.
\textit{SymPhas}{} will continue to be developed and have features added over time, which will include items such as upgrades to performance, additional symbolic algebra functionality, stochastic options and more solvers. The software is available at \texttt{https://github.com/SoftSimu/SymPhas}.
\medskip
\textbf{Supporting Information} \par
Supporting Information is available
from the authors.
\medskip
\textbf{Conflict of Interest} \par The authors declare no conflict of interest.
\medskip
\textbf{Acknowledgments} \par
M.K. thanks the Natural Sciences and Engineering Research Council of Canada (NSERC) and Canada Research Chairs Program for financial support.
S.A.S. thanks NSERC for financial support through the Undergraduate Student Research Award (USRA), the Canada Graduate Scholarships - Master's (CGSM) programs and Mitacs for support through the Mitacs Globalink Research Award.
SharcNet and Compute Canada are acknowledged for computational resources.
\medskip
\printbibliography
\end{document}
\section{Introduction}
``Good sense is, of all things among men, the most equally distributed.'' With this phrase, Descartes begins his \textit{Discourse on the Method}, the book that changed science and human history. In it, the philosopher introduces the scientific method, a systematic way to derive knowledge based on experiments. Recently, a new tendency has strengthened this idea in software engineering: experimentation. This approach is a process of continuously validating product assumptions, transforming them into hypotheses, prioritizing them, and applying the scientific method to test these hypotheses, supporting or refuting them~\cite{Lindgren2016}. In this context, practitioners can employ several techniques, such as iterations with prototypes, gradual rollouts, and controlled experiments~\cite{Fabijan2018}, but also problem and solution interviews~\cite{Lindgren2016}.
In a recent position paper~\cite{Melegati2019}, we compared different models of experimentation and observed that, at the beginning of the process, they suggest that the team identify, specify, and prioritize hypotheses. Drawing a parallel to the Requirements Engineering activities employed in a requirement-driven approach, we argued for a set of practices, called Hypotheses Engineering (HE), to identify, specify, and prioritize hypotheses in experimentation.
Given the similarity between the terms assumption and hypothesis, it is fundamental to differentiate them. Throughout this paper, ``assumption'' refers to a personal or team-wise, generally implicit, understanding taken as truth without being questioned or proved. Meanwhile, ``hypothesis'' is an explicit statement that has not been proved yet but could be tested through an experiment. That is, assumptions are cognitive and abstract ideas, while hypotheses are concrete elements employed in experimentation.
The natural first step of HE is to elicit or define hypotheses. In this paper, we targeted this problem in the context of software startups. Software startups are organizations looking for a sustainable business model for an innovative product or service they develop where software is a core element~\cite{Unterkalmsteiner2016}. Although we can easily identify success stories like Airbnb or Uber, most of these companies fail~\cite{Herrmann2012}. Reasons for the lack of success are various: demanding market conditions, lack of team commitment, financial issues~\cite{Klotins2018}, and flawed business development~\cite{Cantamessa2018}. Since a defining characteristic of software startups is developing an innovative solution, experimentation is a key element in this context~\cite{Kerr2014}. This value is corroborated by the fact that the Lean Startup, the methodology best known among practitioners, strongly emphasizes experimentation~\cite{Frederiksen2017,Bortolini2018}.
Nevertheless, these companies still concentrate on developing their proposed solution rather than on the necessary learning process~\cite{Giardino2014}. This aspect is essential, especially in early-stage startups, for which developing the wrong features may lead to resource exhaustion and, consequently, the end of the company. One of the reasons for this limited adoption of experimentation is the lack of clearly defined practices~\cite{Lindgren2016}. Therefore, an essential step towards a better implementation of experimentation practices is a systematic way to specify and handle hypotheses~\cite{Melegati2019}. In this study, our goal is to develop a novel technique to identify the hypotheses on which early-stage software startups base their products. Based on hypotheses, these companies could perform experiments and progress with more precise information about user and market needs. Therefore, to guide this study, we formulated the following research question:
\begin{center}
\textbf{RQ: How can early-stage software startups define hypotheses to support experimentation?}
\end{center}
To achieve our goal, we followed a design science research (DSR) approach based on Hevner et al.'s guidelines~\cite{Hevner2004}, composed of three cycles. The goal of the first cycle was to understand how the assumptions on which startups base their products are formed. In the second and third cycles, we proposed, evaluated, and improved HyMap, a technique to elicit hypotheses based on a cognitive map systematically created through a set of questions. We evaluated the technique using a multiple-case study with three software startups. The results indicated that the technique is clear, easy to use, independent of the facilitator applying it, and useful, leading to hypotheses of three types: problem, value, and product. This paper extends a previous paper~\cite{Melegati2020} that presented the first two cycles of this study. Its main original contributions are the improvements to the graphical notation and process performed in the third cycle and the evaluation of the technique with three new software startups.
The remainder of this paper is organized according to Gregor and Hevner's guidelines for presenting a DSR study~\cite{Hevner2013}. Section~\ref{sec:literature_review} presents a literature review, including the justificatory knowledge that supports the artifact's effectiveness. Section~\ref{sec:research_method} presents the DSR method and Section~\ref{sec:development_process} the artifact development process. Section~\ref{sec:artifact_description} describes the final artifact and Section~\ref{sec:evalution} presents its evaluation. In Section~\ref{sec:discussion}, we discuss the results and, finally, Section~\ref{sec:conclusions} concludes the paper.
\section{Literature review}
\label{sec:literature_review}
Gregor and Hevner~\cite{Hevner2013} made a distinction between descriptive knowledge (\textOmega) and prescriptive knowledge (\textLambda). While the first one concerns the ``what'' about phenomena, including laws and theories to describe natural, artificial, or human phenomena, the second is focused on the ``how'' of human-built artifacts, including constructs, models, and methods. According to the authors, in DSR, it is important to review both areas to avoid the lack of novelty and, consequently, contribution to the knowledge. Besides that, this review should include the justificatory knowledge, that is, elements used to inform the artifact construction and explain its effectiveness. We organized this section according to this distinction: Section~\ref{ssec:available_solutions} describes the available solutions indicating the gap, and Section~\ref{ssec:justficatory_knowledge} displays the justificatory knowledge that supports our proposed technique.
\subsection{Available solutions}
\label{ssec:available_solutions}
In the software engineering literature, there are some models describing experimentation in general, such as RIGHT~\cite{Fagerholm2017}, HYPEX~\cite{Olsson2014}, and QCD~\cite{Olsson2015}. The authors of these models analyzed the existing literature and the current practices of companies applying experimentation. These models present the process as a cyclical approach consisting of steps executed continuously: identify, specify, and prioritize hypotheses; design an experiment; execute it; analyze the results; and update the hypotheses accordingly~\cite{Melegati2019}. Nevertheless, these models do not describe how hypotheses could be systematically identified.
Other valuable pieces of \textLambda~knowledge come from practitioner-oriented literature. Here, we could mention Customer Development~\cite{Blank2007} and the Lean Startup~\cite{Ries2011}. The latter had huge success among practitioners and consists of taking the founders' assumptions as hypotheses, building experiments to evaluate them, and, based on the results, persevering or pivoting to another idea. One criticism of the Lean Startup is precisely its lack of operationalization. For instance, Bosch et al.~\cite{Bosch2013} proposed the Early-Stage Software Startup Development Model (ESSSDM) to tackle this problem. It consists of three parts: idea generation, a prioritized backlog, and a funnel in which ideas are validated. To generate ideas, the authors suggested exploratory interviews, brainstorming, or following potential customers to understand their needs.
To the best of our knowledge, the only technique explicitly focused on hypothesis elicitation is Assumption Mapping, recently proposed by Bland et al.~\cite{Bland2019}, which consists of a series of canvases, including the Business Model Canvas (BMC)~\cite{Osterwalder2009}, used to create hypotheses. Although the BMC was initially based on a systematically developed ontology~\cite{Osterwalder2005}, Assumption Mapping has been neither derived nor evaluated scientifically. In summary, to date, there is no hypothesis elicitation technique that has been systematically derived and evaluated.
\subsection{Justificatory knowledge}
\label{ssec:justficatory_knowledge}
The definition of software startup is not a consensus among published papers, but the most common aspects are innovation and uncertainty~\cite{Berg2018}. Blank~\cite{Blank2007} proposed a definition generally adopted in practice: a startup is an organization formed to search for a repeatable and scalable business model. Therefore, searching a business model for a novel software-intensive product is a key defining aspect of a software startup contrasting these organizations to other development teams~\cite{Melegati2020b}.
The business model concept has a plethora of different used definitions in academic literature~\cite{Zott2011}. Furnari~\cite{Furnari2015} described two theoretical perspectives in business model research: an activity-based perspective that describes a business model as ``a system of activities that firms use to create and capture value'', and a cognitive perspective that considers it as a cognitive instrument to represent those activities.
Based on the cognitive perspective, Furnari~\cite{Furnari2015} proposed the use of cognitive maps to represent business models. Cognitive maps are visual representations of causal aspects of a person's belief system as a graph where nodes represent the concepts individuals use and arrows, causal links between them~\cite{Furnari2015}. The arrows are generally labeled according to the type of relationship: `+' for a positive one, `-' for a negative one, and `/o/' for a neutral one. Cognitive maps are supported by Kelly's Personal Construct Theory~\cite{Eden1988}. According to the theory, a person looks at the world through patterns or templates, that Kelly called constructs, that she creates and, in which, she tries to fit the reality~\cite{Kelly2002}. Kelly also described the person-as-a-scientist idea: ``as a scientist, man seeks to predict, and thus control, the course of events'' and these constructs ``are intended to aid him in his predictive efforts''~\cite{Kelly2002}. Brannback et al.~\cite{Brannback2009} have already discussed the relationship between cognitive mapping and the personal constructs theory with entrepreneurship. The authors argued that ``an entrepreneur needs to make sense of his/her reality to predict and control - to find and solve problems''~\cite{Brannback2009}.
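To make the construct concrete, a cognitive map can be encoded as a directed graph whose edges carry the polarity labels above; the sketch below is illustrative only and represents a one-link fragment such as ``lower prices'' leading, positively, to ``more customers'':
\begin{lstlisting}[language=C++]
#include <cstddef>
#include <string>
#include <vector>

// A cognitive map as a directed graph with polarity-labelled causal links.
enum class Polarity { Positive, Negative, Neutral };  // '+', '-', '/o/'

struct CausalLink {
    std::size_t from;   // index of the cause concept
    std::size_t to;     // index of the effect concept
    Polarity polarity;
};

struct CognitiveMap {
    std::vector<std::string> concepts;
    std::vector<CausalLink> links;
};

CognitiveMap example() {
    CognitiveMap map;
    map.concepts = {"lower prices", "more customers"};
    map.links.push_back({0, 1, Polarity::Positive});  // lower prices -(+)-> more customers
    return map;
}
\end{lstlisting}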
One essential aspect of software startups in this discussion is the founders' influence on the product definition. Seppanen et al.~\cite{Seppanen2016} investigated the competencies of initial teams in software startups and observed a strong influence of the founders on the actions and competencies related to business and product creation in these nascent companies. Based on what we have presented so far, software startups' business models are strongly influenced by how their founders perceive and mentally model the environment and how they use these models to predict how the market and the future product will behave. Research has shown that this influence is strong enough to prevent the use of experimentation. For instance, while investigating enablers and inhibitors of experimentation in software startups, Melegati et al.~\cite{Melegati2019b} identified as an inhibitor the fact that founders are often ``in love'' with the idea, deeming experiments to evaluate it unnecessary and focusing on developing the solution. Giardino et al.~\cite{Giardino2014} argued that this focus is one of the key challenges early-stage software startups face.
Cognitive mapping could be used to materialize these assumptions and put them in a position to be challenged. As Eden~\cite{Eden1988} points out: ``by seeing their own ideas in this form of visualization [people] are being encouraged to `change their mind.''' This technique has been used in Software Engineering, for instance, in problem structuring in requirements engineering~\cite{Rooksby2006} or in a decision model for distributed software development with agile~\cite{Almeida2011}.
The methods to elicit cognitive maps can be divided into two groups depending on how data is obtained~\cite{Hodgkinson2004}. Initially, researchers used documents or other sources of evidence and performed content analysis to develop these maps. Another way is through direct methods, where researchers develop the map \textit{in situ} by interacting with subjects. These direct methods can employ two divergent approaches: pairwise judgments of causal relationships or capture through visual forms. In the first, subjects answer questionnaires in which all combinations of concepts are evaluated and, based on the answers, a map is built. In the second, with the subject's help, a facilitator builds a visual representation using paper and pencil or software solutions. The pairwise approach has better coverage at the expense of being more difficult, less engaging, and less representative than the freehand technique~\cite{Hodgkinson2004}. Since the startup context is defined by time and resource constraints~\cite{Berg2018}, an effective approach targeted at these companies should not be time-consuming. Therefore, a visual approach is more suitable.
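As a rough illustration of the cost difference: exhaustively judging all pairs of $n$ concepts requires $n(n-1)/2$ judgments---45 questionnaire items for just ten concepts---whereas a freehand map records only the links the respondent actually perceives, which further supports the choice of a visual approach in the time-constrained startup setting.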
Regarding the population of startups, it is essential to mention that they may be in different development stages. Based on previous works in the literature, Klotins et al.~\cite{Klotins2019} proposed a life-cycle model to analyze startups' progress, composed of four stages: inception, stabilization, growth, and maturity. The first stage starts with the idea and ends with the first product release. In the next stage, the startup prepares to scale from technical and operational perspectives. In summary, during the early stages, teams focus on ``finding a relevant problem'' and ``a feasible solution.'' In the growth stage, the startup aims to reach a desired market participation and, finally, in the last stage, it progresses into an established company. That is, in the later stages, the focus is on marketing and efficiency. We decided to focus initially on developing a technique for early-stage startups, mainly for two reasons. First, not testing assumptions about the customer and market represents a higher risk to the startup's survival at this stage, when the company generally does not have many resources. Second, at a later stage, the hypotheses obtained by the technique might have already been validated or refuted through product usage in previous stages.
Since early-stage startups focus on finding the problem and evaluating the proposed solution, they are essentially testing their value proposition. On an ontological analysis of the value proposition concept, Sales et al.~\cite{Sales2017} defined ``a value proposition as a value assertion a company makes (as the value beholder) that a given market segment (the beneficiaries) will ascribe a particular value to the experiences enabled by an offering (the value object).'' Such a definition is compatible with the cognitive-based perspective of business models.
\section{Research method}
\label{sec:research_method}
Given our research goals and question, we aim to solve a real-world problem. Instead of trying to understand how a defined phenomenon unfolds, our goal is to develop an artifact to act on the world. In this regard, Design Science Research (DSR) is a suitable method. This approach is often used in Information Systems research as shown by the several methodological guidelines (e.g., Hevner et al.~\cite{Hevner2004}, Peffers et al.~\cite{Peffers2007}, and Wieringa~\cite{Wieringa2009}) and even a special issue in \textit{MIS Quarterly} (i.e.,~\cite{March2008}). Although its use is often not explicitly mentioned in Software Engineering research, in an analysis of awarded papers in the \textit{International Conference on Software Engineering}, Engstrom et al.~\cite{Engstrom2020} showed that most of these studies actually could be classified as DSR although not explicitly using the term. More recently, though, researchers have explicitly used the methodology to tackle problems in software engineering like gamification (e.g., ~\cite{Morschheuser2018}) and requirements (e.g., ~\cite{Benfell2020}).
In this research, we followed the guidelines proposed by Hevner and colleagues in~\cite{Hevner2004} and~\cite{Hevner2013}. According to the authors, DSR seeks to develop innovative artifacts relying on existing kernel theories ``that are applied, tested, modified, and extended through the experience, creativity, intuition, and problem solving capabilities of the researcher''~\cite{Hevner2004}. These artifacts could be constructs, models, methods, or instantiations. Constructs represent the language used to describe the world, and models use them to represent real-world situations. Methods define processes to guide how to solve problems and, finally, instantiations demonstrate how the previous elements could be used in a scenario. Based on this classification, we can categorize the expected artifact of this study as a method: driving how an early-stage software startup can identify the hypotheses that will guide its experiments.
Hevner et al.~\cite{Hevner2004} proposed seven guidelines for DSR:
\begin{enumerate}
\renewcommand{\labelenumi}{G\arabic{enumi}.}
\item \label{guide:1} Design as an artifact: DSR must produce a viable artifact;
\item \label{guide:2} Problem relevance: the goal should be to develop a solution to relevant problems;
\item \label{guide:3} Design evaluation: the ``utility, quality, and efficacy'' of the artifact should be rigorously demonstrated;
\item \label{guide:4} Research contributions: the DSR project should provide ``clear and verifiable contributions'' regarding the ``design artifact, design foundations, and/or design methodologies'';
\item \label{guide:5} Research rigor: DSR should apply rigorous methods both in the construction and in the evaluation of the artifact;
\item \label{guide:6} Design as a search process: the DSR process is inherently iterative and the search for the best, optimal solution is unfeasible. The goal should be feasible, good designs representing satisfactory solutions.
\item \label{guide:7} Communication of research: DSR solutions should be presented effectively.
\end{enumerate}
Since the target artifact is a method, we satisfy~\prettyref{guide:1} and~\prettyref{guide:4}. Our argument about the importance of experimentation to early-stage software startups fulfills~\prettyref{guide:2}. In the following sections, following Gregor and Hevner's guidelines, we describe the design artifact and its search (or development) process as a way to cope with~\prettyref{guide:5} and~\prettyref{guide:6}. To present them in a logical order, in Section~\ref{sec:development_process} we describe the development process and then, in Section~\ref{sec:artifact_description}, we present the final artifact. Section~\ref{sec:evalution} presents the evaluation (\prettyref{guide:3} and~\prettyref{guide:4}) and, in Section~\ref{sec:discussion}, we discuss the research contributions (\prettyref{guide:4}). This paper and the previous one~\cite{Melegati2020} communicate our results (\prettyref{guide:7}).
To increase the research rigor (\prettyref{guide:5}), we guided our development and evaluation processes according to defined criteria. First, to fulfill the utility criteria, the achievements that the artifact aims for should have ``value outside the development environment''~\cite{Hevner2013}. Therefore, using the artifact, we should be able to create hypotheses for real situations, that is, for startups other than those that participated in the study. This concept is associated with the perceived usefulness, which is generally used in the research on the adoption of software development methodologies (e.g., ~\cite{Riemenschneider2002,Hardgrave2003}) and technology in general, like in the Technology Acceptance Model~\cite{Davis1989}. This concept ``refers to the degree to which a developer expects that following a methodology will improve his or her individual job performance''~\cite{Hardgrave2003}. In the context of experimentation in software startups, we can operationalize this concept by obtaining hypotheses to build experiments.
Regarding the artifact's quality, we consider several aspects: ease of use, independence from the facilitator, and clearness. Since our ultimate goal is to impact real startups, we should consider the future adoption of this method. In this regard, taking the artifact as an innovation, complexity is one factor influencing adoption~\cite{Rogers2010}. Additionally, given that startups are generally time- and resource-constrained, we expect that they would not be keen to spend a large amount of time learning and applying a new method. The method should be independent of who is applying or facilitating it, providing all the details for proper use so that anyone can apply it without depending on its authors. Finally, the method description should be clear, making its comprehension straightforward.
Finally, the artifact should be effective; that is, it should produce the expected result. In our case, our goals are to reveal hidden assumptions that founders hold about the product's environment and about why it should have value for potential customers, and to systematically elicit hypotheses that can serve as the basis for experiments.
\section{Artifact design process}
\label{sec:development_process}
The design artifact development process consisted of an initial exploratory cycle and two design cycles, referred to below as cycles 0, 1, and 2. The first cycle (0) had the goal of understanding how teams form the assumptions on which they base their products. To achieve this goal, we performed a multiple-case study with two early-stage software startups. Our results indicated that requirements are based on the team's, especially the founders', assumptions about the market and customer behavior. In the first design cycle (1), we used cognitive mapping to make the founders' assumptions explicit. At this stage, the technique consisted of boxes and arrows, as described in cognitive mapping, and an open-ended conversation in which the founder described her understanding while a facilitator drew the map. We evaluated this initially proposed method in two other software startups. Our results indicated that this approach could form the basis of a comprehensive practice for hypothesis elicitation in software startups, but it still lacked operationalization. In the final design cycle (2), we improved the technique by defining specific notations for different concepts (customer, product, and features) and creating a list of questions to guide the creation of the cognitive map. Below, we describe the three cycles in detail, including the research method, data collection, analysis, and results obtained.
\subsection{Cycle 0}
\label{ssec:cycle_0}
This cycle's goal was to understand how teams form the assumptions on which they base their products. Given that such a phenomenon is contemporary and the boundaries between it and its context are not evident, a case study is a suitable research method~\cite{Yin2003}. According to Yin~\cite{Yin2003}, one rationale for this research approach is the representative or typical case. Therefore, we selected software startups where, as mentioned before, the founder is the one who had the initial idea. Besides that, we followed Klotins et al.'s life-cycle model~\cite{Klotins2019} and selected startups in the inception and stabilization phases. Through our contact network, we selected two startups, referred to from now on as A and B. Both companies were based in the same city in Italy and located in a technological park. Nevertheless, at the time of data collection, startup B was participating in the incubation process while startup A only used the space available.
Data collection consisted of semi-structured interviews that followed a previously defined guide. For both cases, we interviewed the founders and, for case B, also the software developer. The interview questions aimed to understand the interviewees' backgrounds, the idea, the motivation to build the product, and how these changed throughout the company's history. Since the goal was to understand where the assumptions used to create the product came from, data analysis consisted of explanation building, in which cause-effect relationships are sought~\cite{Runeson2012} to find an explanation for the cases~\cite{Yin2003}.
As the first step in the analysis, we developed case descriptions, as suggested by Yin~\cite{Yin2003}. Then, we performed a cross-case analysis.
\subsubsection{Case A}
At the time of data collection, the startup was developing a library to be added to software development projects. The company will provide a dashboard showing live software run-time issues, like exceptions, detected or inferred from data collected within the target system. The dashboard will also show solutions to similar problems found on websites focused on programming issues, like Stack Overflow, and a list of freelance developers who could solve the problem. In some cases, the system would be able to fix some issues automatically. The startup team was composed of five people working part-time on the project, spread across software development, business planning, and marketing strategy.
The founder had worked as a software development consultant for an extended period. While participating in third-party projects, he observed that such a tool could help him work more effectively. Besides that, he believed that the technical level of software developers was decreasing. Therefore, it would make sense to develop such a tool. In the founder's words: \textit{``the idea came to me during my work as a consultant because I observed this need... let's say the idea came from there, that is, seeing that my customers don't [collect data from bugs], but I saw that some of them were starting to do something in that direction, and I also observed that the developer job market is growing, but the average know-how is probably decreasing.''}
At the time of data collection, the startup had an initial prototype consisting of a dashboard with some dummy data and a website that displays the idea.
\subsubsection{Case B}
The startup was running a website to help hotel owners and managers find the best software solutions for their businesses. Its initial focus was on the Italian market, but it aimed to reach international markets. The team was composed of two founders/partners, one developer who helped found the company but left the partnership, and an intern who helped with administrative tasks. We performed one interview with one of the founders and another with the developer in the company office to allow further observations. The interviewed founder is the one who had the original idea.
The interviewed founder had a background in online marketing. He had worked in a company that handled web marketing and websites before spending twelve years in a big web agency. In his last job, he worked as the director of the company's technology business unit. Throughout his working life, he had extensive contact with the tourism sector, especially the hospitality industry.
He claimed that the idea came to him based on the needs he observed from hotel owners, who have many technological tools available in the market to run their business, and from software vendors, who have to reach these customers. He was inspired by American software review websites and the lack of a specific one for the hospitality sector. Therefore, the original idea was to list available software with users' reviews, bring hotel owners to the website, and receive a fee for each lead (an interested customer that visited the vendor website) generated. In his own words, \textit{``Let's say, I have worked for many years in the touristic sector first as a consultant and later as a software producer, specifically in the hotel sector. I made the match between these two competencies, the problem of the hotel owners who have many technologies in the hotel and the problem of the software vendor who needs a showcase, a marketplace where to sell their products, and then I created this hotel technology marketplace.''}
The founder said that after the original version went online, the team started observing the website usage data and realized it was not going as expected. The team observed that the hotel owners were not able to compare different software solutions because these products rarely have the same set of features and, sometimes, hotels needed more than one software system to fulfill their requirements. Then, the startup changed the website: now, the hotel owner fills in a form giving details about her business, and the system matches her, through a simple algorithm, with solutions adapted to the business needs.
In the founder's words: \textit{``Initially, it started as a project similar to [review or comparison websites] then we started to collect data that was telling us that the hotel owners did not have the analytical capability to compare the features between one software and another [then, the product became] a system that allowed the hotel owner to tell us what her needs were by filling in a form, and we, based on a simple algorithm, made the match between her needs and the database of software solutions that we had.''}
In the interview with the developer, it was clear that his influence on the idea was limited. He thought that it was a good idea but did not have experience in the market. He trusted the founders regarding the business and focused on developing the solution.
At the moment of data collection, the startup's customer base was growing, and it was close to break-even. It was looking to expand to other markets.
\subsubsection{Cross-case analysis}
From the startup product and the founders' background descriptions above, it is clear that the latter shaped a set of beliefs in the founders about their target customers and market. Through these sets of beliefs, the founders made sense of the specific business environment and its players, explaining their behavior and, in the last instance, trying to forecast it. Specifically, in startup B, the founder considered that hotel owners wanted to buy software solutions and were able to compare the different alternatives and select the best suited for their cases. Based on that, the founder foresaw that a website with a list of available software solutions would be useful to hotel owners: they would be able to see all the solutions and select the one that would fit their needs. Fig.~\ref{fig:idea_creation} summarizes this process and compares it with the idea of the founder being the innovation owner and her experience being the motivator for the startup product idea, as discussed in the literature~\cite{Seppanen2016}.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{idea_creation_new}
\caption{The process of idea creation. The dashed lines represent the previous understanding that the founder's background led her to have the idea. Adapted from~\cite{Melegati2020}.}
\label{fig:idea_creation}
\end{figure}
In summary, the assumptions a founder has about customers and the market guide requirements elicitation; that is, the founder's beliefs about the customers and market form the basis for the definition of software features.
In startup B, it was possible to see what could happen next. After the software was ready and the website went online, usage data showed that the results were not as predicted. Hence, the founder had to update his assumptions about the customers and change the product accordingly. This new ``implicit theory'' emerged from the experiment result and led the company to better results within the market. Nevertheless, to reach this stage, the startup spent resources developing the whole product, something it could have avoided if the team had analyzed the customers earlier.
Such a rearrangement exposes an implicit process model for development in software startups. In this process, the founder's assumptions guide the elicitation of requirements, and the data generated by the software usage may impose changes on this set of assumptions. Then, the founder uses the updated representation of the world to elicit new requirements. Fig.~\ref{fig:feedback} depicts this process through a causal chain~\cite{Miles2014}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{feedback}
\caption{The founder's assumptions being updated as shown in~\cite{Melegati2020}.}
\label{fig:feedback}
\end{figure}
\subsection{Cycle 1}
\label{sssec:cycle_1}
Based on Cycle 0 results, a hypothesis elicitation approach should make explicit the founders' underlying assumptions about the context in which their startups are embedded. Cognitive mapping is a valuable tool in this regard.
As a first version, we proposed to adapt the approach proposed by Furnari~\cite{Furnari2015}. Using a whiteboard to depict the current status of the map and with the founder's help, we aimed to create a cognitive map representing how and why the founder believes the startup's business model works. The detailed steps were:
\begin{enumerate}
\item ask the founder to describe the business model concerning the value proposition and customers;
\item extract concepts and causal relationships;
\item dig into each concept to check whether it was, in reality, based on underlying assumptions;
\item check with the founder if the map represented the way she thought about the problem at the moment.
\end{enumerate}
We evaluated this initial proposal in two other software startups, C and D. Both startups are located in the same Italian city as A and B. We performed interview sessions following the defined protocol below:
\begin{enumerate}
\item Present the concept of hypotheses and how Lean Startup is related to it.
\item Ask the interviewee to describe his business or product idea, especially regarding customer segments and value proposition.
\item Ask on which hypotheses the founder believed the idea was based.
\item Using a whiteboard and interacting with the founder, draw a cognitive map until she feels that the map represents her understanding of the market.
\item Create a list of hypotheses based on the cognitive map and compare it with the initially created list.
\item Ask the founder for feedback on the process.
\end{enumerate}
Below, we describe the results for each case.
\subsubsection{Case C}
Case C is an early-stage software startup that plans to develop a digital mentor for software developers to increase their happiness and satisfaction. The product would adapt itself to each developer's needs. Companies interested in improving their developers' productivity would pay a fee to make the solution available to their teams.
When asked about hypotheses, the founder mentioned those they had already worked with and those they were planning. The first one was that software development teams could not organize themselves. Through some interviews, it was invalidated, and they pivoted from the initial idea to the current one. The next hypothesis or, as the interviewee called it, \textit{``exploration''} was to understand if software developers care about soft skills. When asked about other hypotheses, the founder said that she was waiting for another round of tests.
Fig.~\ref{fig:case_c_cognitive_map} displays a representation of the cognitive map derived for this case. Through this process, the founder stated that the main element to increase developers' productivity would be making their work more fun through gamification.
\begin{figure}
\centering
\includegraphics[width=.8\columnwidth]{case_c_cognitive_map}
\caption{Cognitive map created during interview with the founder of startup C.}
\label{fig:case_c_cognitive_map}
\end{figure}
The arrows in the figure imply six hypotheses: 1) developers' productivity improves the company results; 2) developers' satisfaction improves developers' productivity; 3) making the development work more fun improves the developers' productivity and 4) the developers' satisfaction; 5) gamification could make developers' work more fun; 6) making the development work more fun would increase the company satisfaction.
Although some identified hypotheses are straightforward and may not demand a proper experiment to be considered validated, the founder acknowledged that \textit{``[they] have to see if the correlation between having fun and the productivity [exists], that is a major risk.''}
\subsubsection{Case D}
Case D is developing a software solution to improve network connectivity, especially for situations where the quality of the Internet connection is low. Through an innovative approach that is suppressed here as requested by the interviewee, the solution will make the network status transparent, enabling the user to adapt it to their needs and, consequently, improve the quality of service.
At the beginning of the interview, the founder answered that the main hypotheses they had regarded how large the area with poor connection quality is and whether providers are willing to fix it in the near future. He mentioned that he had talked to many potential customers regarding the solution, and most of them would like to have it. After that, the cognitive map was developed; it is depicted in Fig.~\ref{fig:case_d_cognitive_map}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{case_d_cognitive_map}
\caption{Cognitive map created during interview with the founder of startup D.}
\label{fig:case_d_cognitive_map}
\end{figure}
From the arrows, there are four implied hypotheses: 1) increasing the network efficiency will improve user satisfaction, 2) making the network more transparent will not decrease user satisfaction, 3) making the network more transparent will increase the user's willingness to react, and 4) the users' willingness and ability to react will increase user satisfaction.
When confronted with the hypotheses, the founder mentioned they had already thought about them before. Nevertheless, using his words, the process \textit{``made them explicit and more structured.''}
Although the results were promising, we observed that the process of cognitive map elicitation was not repeatable, being highly dependent on the interviewer instead. Besides that, the interviewer felt a lack of guidance on properly conducting this step, as we could observe at the beginning of the interviews. Another aspect that we observed was the lack of uniformity in the boxes' content: some were concepts or nouns, but others were actions or events. Based on that, we improved the technique in a new research cycle. Regarding the expected attributes defined in Section~\ref{sec:research_method}, the artifact was useful, easy to use, and effective, but we should improve its qualities: independence from the facilitator and clearness.
\subsection{Cycle 2}
\label{sssec:cycle_2}
Based on the previous cycle results, the need for a more systematic approach was evident. To achieve this goal, we focused on developing a proper visual language to build the maps and a systematic method to elicit them from founders. Since artifact development is a design search process~\cite{Hevner2013}, we performed a series of map elicitation sessions with potential founders. The subjects in this phase had not necessarily created a startup but had an idea that, in their opinion, could potentially be the basis of a new solution. After each session, the researchers evaluated the process and improved the artifacts to make the language and process more precise. The sessions occurred online, and the researcher shared his screen where he drew the cognitive map using the software Diagrams.net\footnote{https://app.diagrams.net/} with the interviewee's help. Once the results reached a satisfactory level, we considered the artifact design process completed.
Following this approach, we performed three sessions. While the first two subjects were Brazilian entrepreneurs located in two different cities, the third was based in Italy, in a different city from the cases analyzed in the previous cycles. Table~\ref{tab:cycle2} describes the improvements we applied to the technique in each session and the respective results.
\begin{table*}[!ht]
\renewcommand{\arraystretch}{1.5}
\caption{Sessions performed on artifact creation Cycle 2.}
\label{tab:cycle2}
\begin{tabular}{cm{.3\textwidth}m{.3\textwidth}m{.3\textwidth}}
\hline
Session & Visual language improvements & Process improvements & Results \\ \hline
1 & A circle to define the potential customers' problems to be tackled by the startup solution. & An initial set of questions to guide the map elicitation (product name and customers' problems the solution aimed to solve) and an iterative approach to connect these two elements. & Much clearer guidance for the interviewer with respect to the previous cycle. However, the process of asking about the customers and their problems was still not straightforward to explain to the interviewee. \\
2 & Different elements for customers and their problems: circles to represent customers or users, while their problems used boxes similar to the other elements. & Questions changed according to the changes in the language. & Improved the process but, based on the analysis of the elicited maps, using the same element to represent software features and value concepts was confusing. \\
3 & Dashed box to represent features, differentiating them from other elements. & None. & Satisfactory results. \\ \hline
\end{tabular}
\end{table*}
\section{HyMap: Hypotheses Elicitation Using Cognitive Maps for Early-Stage Software Startups}
\label{sec:artifact_description}
The final artifact consists of two elements: a visual language to depict the founder's cognitive map and a defined process to help draw the map and extract the hypotheses from it.
\subsection{A visual language to represent startup founder cognitive maps}
To depict the cognitive map, we developed a visual language consisting of the following elements:
\begin{itemize}
\item Circles represent the customer segments.
\item An ellipse is used to denote the proposed solution.
\item Dashed boxes portray the software features.
\item Boxes represent concepts, either physical or abstract elements, and are filled with nouns.
\item Arrows connect elements and represent relationships among them. There are three types of relationships: offering, influence, and perception, defined by the types of the elements connected. Offering arrows connect the solution to its features. Influence arrows are similar to those in cognitive maps, as mentioned before, and should be labeled with one sign, `+', `-', or `o', to denote the type of influence. Perception arrows are those that connect the customers with their problems.
\end{itemize}
There are no restrictions on the number of inbound or outbound arrows from boxes, but it is expected that the map forms an acyclic graph. Such a pattern leads to layers of elements in the map, as shown in Fig.~\ref{fig:hymap}. In the Product layer, we represent the product. In the Features layer, we represent the features the founders expect for the product. In the Problems layers, one or more layers of elements represent the aspects founders think the product features will solve. Finally, in the Customer layer, we represent the expected customers and users of the product.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{template}
\caption{A template for the HyMap map.}
\label{fig:hymap}
\end{figure}
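To make the language concrete, the following sketch shows one possible in-memory representation of a HyMap map in Python; the class and field names are our own illustrative choices and not part of the method definition:
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Literal

# Element kinds mirror the visual language: circles (customers),
# an ellipse (the solution), dashed boxes (features), plain boxes
# (concepts).
ElementKind = Literal["customer", "solution", "feature", "concept"]

@dataclass
class Element:
    name: str
    kind: ElementKind

@dataclass
class Arrow:
    source: Element
    target: Element
    sign: str = "+"  # '+', '-', or 'o' for influence arrows

    @property
    def relationship(self) -> str:
        # The relationship type is defined by the connected elements.
        if self.source.kind == "solution" and \
           self.target.kind == "feature":
            return "offering"
        if self.source.kind == "customer":
            return "perception"
        return "influence"

@dataclass
class HyMap:
    elements: List[Element] = field(default_factory=list)
    arrows: List[Arrow] = field(default_factory=list)
\end{verbatim}
A map is then simply a list of elements and the arrows connecting them, which would make checks such as acyclicity straightforward to automate.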
\subsection{A process to elicit startup founder cognitive maps and hypotheses}
The first step to reach hypotheses is to elicit the founder's cognitive map. To reach this goal, we propose an iterative approach where, based on the map's current status, the founder should analyze each of the relationships (arrows) and wonder if there are underlying concepts. At the beginning of this process, an initial map should be created based on the following questions:
\begin{itemize}
\item What is the product/solution name?
\item What are the customers targeted by the solution?
\item For each customer, what are the aspects the customer expects to improve by using the solution?
\item What are the envisioned solution features, and which aspects identified in the previous step do they help fulfill?
\end{itemize}
The answers to these questions and the corresponding relationships lead to an initial version of the map. Then, for each arrow, the founder should judge if there are concepts implicitly used to explain that relationship. Useful questions in this step are \textit{how?} and \textit{why?}. If a new concept is added along with new relationships (arrows), this process is repeated iteratively until the founder is comfortable that no new concepts should be added. A useful question to evaluate whether this process is saturated is whether it is possible to create a simple experiment to evaluate that relationship. Additionally, the founder must evaluate if the new concepts added are related to other concepts already present on the map. Throughout the process, the founder should constantly assess whether the forming map is coherent with her understanding of the customer and market. To refine the map, the founder can add, remove, or substitute elements.
Once the cognitive map is finished, each relationship represents an assumption the founder has about the targeted customer, value proposition, and product. Based on these assumptions, she can formulate hypotheses from which she can create experiments. These experiments can be pieces of software but also interviews, questionnaires, or other techniques.
Arrows originating in different layers represent diverse types of hypotheses that demand different templates while crafting the hypotheses. Although the definition of systematic templates for hypotheses is beyond this paper's scope, we defined a simple template for each type. Of course, the templates are only guidelines, and a final inspection of the wording is necessary to create well-formed phrases. Below, we describe each type and the corresponding template; a minimal sketch of how the templates could be applied follows the list.
\begin{enumerate}
\item Arrows from the product to the features layer generate hypotheses regarding the team's capability to develop that feature, which we called product hypotheses. A simple template for this type is: ``the team developing $<$product name$>$ is capable of implementing $<$functionality$>$''.
\item Arrows starting from the features layer to the problem layers, or those restricted to the problem layers, represent value hypotheses. In this case, a suitable template is ``$<$Functionality or problem$>$ $<$increases, decreases or does not affect$>$ $<$problem$>$''.
\item Arrows connecting customers to the problem layers lead to problem hypotheses, that is, whether that problem is a real ``pain'' for the customers, one that will make them pay for the solution. An initial template for this type of hypothesis is ``$<$Customer segment$>$ $<$has/would like to$>$ $<$problem$>$''.
\end{enumerate}
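As a minimal sketch of how these templates could be instantiated mechanically from the arrows of a map (the arrow encoding and example names are illustrative, and a final wording pass is still needed, as noted above):
\begin{verbatim}
# Each arrow is (source_kind, source_name, target_kind, target_name).
def hypothesis_for(arrow, product="the product"):
    src_kind, src, dst_kind, dst = arrow
    if src_kind == "solution" and dst_kind == "feature":
        # Product hypothesis.
        return (f"the team developing {product} "
                f"is capable of implementing {dst}")
    if src_kind == "customer":
        # Problem hypothesis.
        return f"{src} has/would like to {dst}"
    # Value hypothesis (feature-to-problem or problem-to-problem).
    return f"{src} increases, decreases or does not affect {dst}"

arrows = [
    ("solution", "the app", "feature",
     "the search of nearby people"),
    ("feature", "the search of nearby people", "problem",
     "the difficulty of finding people with similar interests"),
    ("customer", "board game players", "problem",
     "form game tables"),
]
for arrow in arrows:
    print(hypothesis_for(arrow))
\end{verbatim}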
\section{Evaluation}
\label{sec:evalution}
In this section, we describe the HyMap evaluation. Rigorous design evaluation is an essential element of DSR~\cite{Hevner2013}.
\subsection{Method}
Since a startup is a complex phenomenon with many variables, like the founders' background, the product, and the market in which it operates, and the boundary between the phenomenon and the context is blurred, a case study is a suitable choice to evaluate a technique for these companies. To do so, we executed a protocol similar to that of Cycle 1, but online as in Cycle 2. To evaluate the artifact's ease of use and independence from the facilitator, a researcher different from the one who performed the sessions in the construction phase was responsible for facilitating the sessions. The other researcher acted as an observer during the elicitation sessions. Another difference was that, instead of doing everything in one session, we divided the protocol into two steps: first, the facilitator, with the help of the founder, created the map; then, we created the hypotheses list offline and sent it to the founder; finally, in a second session, we performed an interview to get her feedback on the hypotheses and the process. If the founder was not available for a second interview, we sent a questionnaire. We also used this instrument as a guide when we performed the second interview. In both situations, we started by asking, for each hypothesis, if the founder believed the hypothesis had been validated and, if so, how, and how she perceived the risk to the business if it was not valid. Then, we asked for feedback about the process's usefulness, ease of use, and clearness, and whether the process led the founder to think about something she had not thought of before but would consider in the following startup steps.
To sample the cases, we employed a theoretical approach based on the different startup stages, as described in Section~\ref{sec:literature_review}. Since the focus of HyMap is on early-stage startups, we aimed at companies in the inception and stabilization stages. Since, by the end of the inception stage, we expect that a startup has already partially developed the product, we aimed to compare startups at the beginning and at the end of this stage. To reach the startups, we followed a convenience sampling approach, using our contact network to recruit interested startups.
\subsection{Results}
We performed the planned case study in three startups that we refer to as E, F, and G, ordered by the development stage they were at at the moment of data collection: beginning of the inception stage, end of the inception stage, and stabilization stage, respectively. Below, we describe the companies and the results we obtained for each in detail. To preserve the startups' privacy, a request that founders often make, we do not explicitly include the product or startup names in the descriptions or the cognitive maps.
\subsubsection{Case E}
Case E is a Brazilian startup planning an app to connect board game enthusiasts so they can meet and form groups for playing sessions. The startup also plans to provide services to board game shops that want people to come and play on their premises and to publishers that want to promote their games. At the time of data collection, the startup had already created the brand and started building an online presence. Because of the 2020 coronavirus pandemic, development halted at the beginning, and the startup lost all team members but the founder. Therefore, we classify the startup at the beginning of the inception stage. The interview performed with the founder resulted in the cognitive map depicted in Fig.~\ref{fig:case_e_cognitive_map}.
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{case_e_cognitive_map}
\caption{Cognitive map created for startup E.}
\label{fig:case_e_cognitive_map}
\end{figure*}
Based on the cognitive map, we identified 22 hypotheses of which four were related to problems (e.g., ``board game players have difficulty forming game tables''), 12 to value (e.g., ``the search of nearby people with similar interests decreases the difficulty of finding people with similar interests''), and six to the product (e.g., ``the development team is capable of implementing the search of nearby people with similar interests'').
Regarding the problem hypotheses, the founder said that one (``board game players have difficulty forming game tables'') has a high risk for the business, and she validated it through her own experience within the field and through offline and online surveys. The other three problem hypotheses have medium risk, and she validated them by talking to shops and publishers. Out of the 12 value hypotheses, the founder considered eight as having high risk to the business and four medium risk. She believes that all value hypotheses are validated except one: ``creating game tables through the app facilitates bringing people to play at the shop.'' For the 11 value hypotheses the founder considered validated, we grouped similar validation strategies. Since she mentioned more than one strategy per hypothesis, the sum of occurrences is larger than the total number of hypotheses. She mentioned that validation came from offline and online surveys for six of them, from the comparison with similar tools for four, from resembling business models for three, and from her own experience with the market for three. Finally, regarding product hypotheses, the founder considered all of them high risk except the ones regarding the news feed (low risk) and suggestions (medium risk). However, they had not been validated so far because of the lack of a development team.
\subsubsection{Case F}
Case F is a Brazilian startup developing an app to connect patients to health professionals generally not found through insurance companies, like psychologists, nutritionists, and chiropractors. The startup is located in the same city as case E. By the time of data collection, the company had developed most of the software solution and was planning to launch shortly. Therefore, the startup is at the end of the inception stage. The founding team consisted of two people: one concentrated on software development and the other on the product conception and other issues. We performed interviews with the latter. The result of the first interview is the cognitive map depicted in Fig.~\ref{fig:case_f_cognitive_map}.
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{case_f_cognitive_map}
\caption{Cognitive map created for startup F.}
\label{fig:case_f_cognitive_map}
\end{figure*}
Based on the cognitive map, we identified 23 hypotheses of which eight were related to problems (e.g., ``the patient has difficulty finding professionals''), ten to value (e.g., ``searching professionals by type and place decreases the difficulty of finding professionals''), and five to the product (e.g., ``the team developing the product is capable of implementing the search for professionals by type and place'').
When we asked the interviewee to rate the problem hypotheses, he answered that they had validated seven out of the eight hypotheses based on his own experience using those services or talking to professionals they know. The founder regarded five of the validated hypotheses as having a high risk to the business and two as having low risk. The hypothesis not validated, regarded as medium risk, was about the referral program. The founder attributed this classification to the fact that this feature is not essential to product viability. Regarding the value hypotheses, he acknowledged that they had not evaluated them yet, but it would be possible to do so as soon as they launched the product. Regarding the risk, five were considered high, one was classified as medium, and four as low risk. Finally, since the product was almost ready, the product hypotheses had already been validated; three of them had high risk, while two had low risk.
Nevertheless, the founder observed that the process did not identify a potential hypothesis: patients have difficulty booking appointments with professionals. He mentioned that this aspect became clear to him while discussing with the facilitator after the diagram elicitation process ended.
\subsubsection{Case G}
Case G is another Brazilian startup, developing an online marketplace for second-hand sports gear. The startup is located in a different city from the previous cases. By the time of data collection, the startup had existed for a year, and the service was already online. Therefore, we classify this startup in the stabilization stage. We ran a session with the startup founder that led to the cognitive map depicted in Fig.~\ref{fig:case_g_cognitive_map}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{case_g_cognitive_map}
\caption{Cognitive map created for startup G.}
\label{fig:case_g_cognitive_map}
\end{figure}
Based on this map, we generated 16 hypotheses of which two were related to problems (e.g., ``sports enthusiasts have difficulty accessing sports gear''), ten to value (e.g., ``the lack of trust in the products increases the difficulty of accessing sports gear''), and four to the product (e.g., ``the team implementing the solution is capable of implementing the seller's reputation feature'').
The founder answered a questionnaire about his evaluation of the generated hypotheses. Regarding the two problem hypotheses, the founder answered that they represent a high risk to the business, but they were validated based on ``field and online surveys.'' The founder had a similar view of the product hypotheses: all represented high risk but were validated by the actual implementation. Concerning the value hypotheses, out of the ten hypotheses, the founder considered six validated and four not. The validation came either from ``field and online surveys'' or from the service's current users. For the validated ones, the founder evaluated the risk as high for one, medium for four, and low for one. For those not validated, one was high risk and three were medium risk.
\subsubsection{Cross-case analysis}
Table~\ref{tab:summary} summarizes how the founders perceived the hypotheses identified.
\begin{table*}[!ht]
\renewcommand{\arraystretch}{1.2}
\caption{Summary of hypotheses obtained by case. The letters L, M, and H stand for the risk level perceived by the founders: low, medium, and high.}
\label{tab:summary}
\begin{tabular}{ccccccccccccccc}
\hline
\multirow{2}{*}{Case} & \multirow{2}{*}{Stage} & \multirow{2}{*}{Hypotheses} & \multicolumn{4}{c}{Problem} & \multicolumn{4}{c}{Value} & \multicolumn{4}{c}{Product} \\
& & & L & M & H & Total & L & M & H & Total & L & M & H & Total \\ \hline
\multirow{2}{*}{E} & \multirow{2}{*}{Inception (begin)} & Validated & - & 3 & 1 & 4 & - & 3 & 8 & 11 & - & - & - & - \\
& & Not validated & - & - & - & - & - & 1 & - & 1 & 1 & 1 & 4 & 6 \\ \hline
\multirow{2}{*}{F} & \multirow{2}{*}{Inception (end)} & Validated & 2 & - & 5 & 7 & - & - & - & - & 2 & - & 3 & 5 \\
& & Not validated & - & 1 & - & 1 & 4 & 1 & 5 & 10 & - & - & - & - \\ \hline
\multirow{2}{*}{G} & \multirow{2}{*}{Stabilization} & Validated & - & - & 2 & 2 & 1 & 4 & 1 & 6 & - & - & 4 & 4 \\
& & Not validated & - & - & - & - & - & 3 & 1 & 4 & - & - & - & - \\ \hline
\end{tabular}
\end{table*}
Comparing the different cases, we observed similar results. Regarding problem hypotheses, founders claimed that, although these hypotheses carried high risk, they had validated them. The exception was a problem hypothesis for case F that was regarded as not validated but with a medium risk since it was related to a feature not essential to the product. Founders claimed to have validated these hypotheses mainly based on their own experiences or interactions with customers from the targeted market.
For value hypotheses, we obtained different results. For startup G, the one in the stabilization stage, the founder claimed to have validated the hypotheses through product usage. For cases E and F, while the founder of the latter said that he expects to validate these hypotheses with product usage, the founder of E claimed that most of the hypotheses were validated based on her experience and surveys with potential customers.
An evident aspect regarding these two types of hypotheses is the prevalence of the founders' previous experience as supporting evidence. Despite the interviewees' claims of validity, there was no systematic approach to evaluate these hypotheses and, consequently, there is a risk that they are not valid, leading to the development of unneeded solutions.
Regarding product hypotheses, the founders considered them validated for the two cases (F and G) where the initial product was ready. For case E, since product development had not started, the founder believes that the hypotheses had not been validated. In all cases, founders generally considered these hypotheses as high risk to the product viability.
Regarding the feedback about the technique, all founders answered that it allowed them to see their business idea better. Although it did not highlight unnoticed elements, founders claimed that the practice gave a structured form to the product idea.
\section{Discussion}
\label{sec:discussion}
With the proposed artifact, we aimed to answer our research question: ``How can early-stage software startups define hypotheses to support experimentation?'' To verify if the artifact reached this goal, we proposed analyzing three criteria: utility, quality, and effectiveness.
Regarding utility, the use of the technique in the startups described in Section~\ref{sec:evalution} demonstrated its capability of eliciting hypotheses, even though, the more mature the startup becomes, the higher the probability that the team has already confirmed the hypotheses. Nevertheless, the technique identified hypotheses not validated, even for the startup in the stabilization stage. Besides that, all founders mentioned the value of having a graphical overview of their business. For instance, such a visualization may help communicate product aspects to other stakeholders, like a marketing agency. We can also expect that teams could use the map as a living document, updated as the startup progresses and validates or updates hypotheses about its product, customers, and market.
To evaluate quality, we considered three aspects: ease of use, independence from the facilitator, and clearness. The small number of visual language elements and the simplicity of the process are good indicators of ease of use. The amount of time spent creating the map (around one hour in each case) demonstrates that the technique demands few resources, which is essential in the resource- and time-constrained startup context. This aspect is correlated with the independence from the facilitator, which we observed through the ease with which the evaluation sessions ran and the results they reached. Finally, the diagram and the process are clear, as shown by the maps displayed and the process description. An aspect that we did not explicitly evaluate was whether the technique could reach a complete set of hypotheses, that is, whether it could identify all of them. Based on the example of case F, it is clear that this was not achieved. This fact is probably related to our choice of a freehand approach rather than a pairwise one, which is linked to better coverage~\cite{Hodgkinson2004}, as we discussed earlier. Besides that, our goal was to reach an initial set of hypotheses that could be extended and refined throughout the startup's existence. Finally, this changing behavior is similar to that of requirements and was one of the reasons behind agile software development~\cite{Williams2003}.
Effectiveness is the most challenging aspect to evaluate. Given that founders were not used to thinking about hypotheses, asking them at the beginning of the session what their hypotheses were was not practical, and we abandoned this step in the evaluation. To support the claim of an effective technique, we thus rely on the founders' feedback during the whole process of artifact construction and on the lack of validation of hypotheses observed even in later stages. Regarding the latter, our results showed that even startups with an initial or live version of the product still held hypotheses without proper backing.
Our evaluation supports several attributes expected of the technique, like ease of use, usefulness, and clearness, but other studies could better evaluate other aspects, like effectiveness. Nevertheless, we agree with Hevner~\cite{Hevner2013}, who writes: ``When a researcher has expended significant effort in developing an artifact in a project, often with much formative testing, the summative (final) testing should not necessarily be expected to be as full or as in-depth as evaluation in a behavioral research project where the artifact was developed by someone else.'' As future work, we suggest experiments, possibly comparing with a different approach like Assumption Mapping~\cite{Bland2019}.
An interesting consequence of the diagram developed is the relationship between hypotheses, or at least some types of them, and requirements. This result is in line with the concept of dual-track development proposed by Sedano et al.~\cite{Sedano2020}. Through a comprehensive field study in a development company, the authors proposed a conceptual framework to reconcile human-centered design and agile methods. According to them, ``a software project comprises two continuous, ongoing, parallel tracks of work'' where one generates feature ideas and the other uses these ideas to build the product. Our results provide evidence for a possible opposite flow: potential requirements leading to guidance to better understand the product.
Still regarding the hypothesis types, we can map them to the steps in the idea creation process depicted in Fig.~\ref{fig:idea_creation}. Based on previous personal experience, the founder builds an understanding of the target market and customers, which leads to problem hypotheses through the HyMap technique. Based on this understanding, the founder forecasts how the customers and market will behave. The elicitation process extracts these assumptions as value hypotheses. Finally, the product envisioned by the founder to carry out her expectations leads to the product hypotheses, related to the solution feasibility and the team's capability of building it. Fig.~\ref{fig:hypotheses_types} depicts this comparison.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{hypotheses_types_new}
\caption{The relationship between the idea creation process and the hypotheses types identified by the HyMap process.}
\label{fig:hypotheses_types}
\end{figure}
Besides that, the frequency of founders' own experience as an answer to how the hypotheses were validated is another piece of evidence supporting the idea creation process. It was also evident in these cases that founders based their product ideas on their previous experiences or observations of the target market, not necessarily with proper backing.
Another interesting aspect of the hypothesis types is that they can act as elements to help in prioritization, an essential aspect of hypotheses engineering~\cite{Melegati2019}. For instance, regarding the types we identified in HyMap, if customers do not feel the problems, it is hard for the product to succeed, and without them, the whole map would not exist. Therefore, we could expect the problem hypotheses to be the first to be evaluated.
Our results are also related to the three approaches to software development identified by Bosch et al.~\cite{Bosch2018}. According to the authors, these approaches are the conventional requirement-driven approach; the outcome- or data-driven approach, which is essentially experimentation; and a rising AI-driven software development approach, where the software would be automatically updated by machine learning algorithms trained with user data. The authors argue that these approaches co-exist and should be used according to the needs. The relationship between potential features and hypotheses identified in HyMap diagrams suggests an intertwined process where requirements could also lead to hypotheses. This idea is also related to the concept of hybrid development~\cite{Kuhrmann2017}.
Regarding the artifact construction process, it is important to summarize the cycles and the knowledge flow in the DSR. As inputs to the design process from the descriptive knowledge base, there are the elements already described in Section~\ref{sec:literature_review}, like the Personal Constructs Theory~\cite{Kelly2002} and the life-cycle of software startups~\cite{Klotins2019}. Besides that, we used other elements in the later cycles, like the value proposition ontology analysis~\cite{Sales2017}. From the prescriptive knowledge base, we used cognitive mapping techniques and their use to depict business models (e.g.,~\cite{Furnari2015}). The description of the cycles also displayed the iterative nature of the DSR; this aspect is clear from the improvement of the depicted maps. The next section describes this study's contributions to the descriptive and prescriptive knowledge bases.
\subsection{Contributions}
To summarize the contributions of our study, we use the concepts of descriptive ($\Omega$) and prescriptive ($\Lambda$) knowledge~\cite{Hevner2013}. Regarding the first, Cycle 0 results showed that founders develop an understanding of the customers and markets based on their previous experience and use this perception to develop the idea and forecast how it will fare with customers. The artifact evaluation also corroborated this result. Based on the final artifact, it was possible to observe at least three different types of hypotheses: product, value, and problem. Such a set is probably not complete, but it suggests that there are different types of hypotheses, to be handled and validated in diverse ways. Finally, the relationship between requirements and hypotheses is a novel insight.
Concerning prescriptive knowledge, our contributions are the visual language used to depict the cognitive map and a systematic process of doing it. Early-stage software startups could use this technique to guide their initial steps more systematically. We envision that this practice could be useful for other software development teams when creating new features to consolidated products, but investigating this suggestion is beyond this paper's scope.
\subsection{Threats to validity}
Given that we employed multiple-case studies in two cycles of the design phase (as already described in our previous paper~\cite{Melegati2020}) and in the evaluation of the final artifact, we deemed it essential to discuss threats to the validity of these investigations. We followed the definitions given by Runeson and H\"ost~\cite{Runeson2012}. The authors describe four aspects of validity for a case study: construct validity, internal validity, external validity, and reliability.
Construct validity concerns to what extent the studied case elements represent what the researchers have in mind. A common threat when using interviews is whether the interviewee has the same understanding of the terms and concepts used in the questions as the interviewer. Since the interview guide for Cycle 0 focused on the business model description and evolution, such a threat is minimal. Besides that, the triangulation of data with an interview with a different team member decreased the threat even more.
Triangulation was also important to mitigate threats to internal validity. This aspect relates to causal inferences, when the researchers attribute the cause of an effect to a phenomenon but, in reality, it is caused by a third one not considered in the analysis. In addition to triangulation, we employed peer debriefing; that is, all authors discussed the results.
External validity reflects the extent to which the results can be generalized or are interesting to other people outside the studied case~\cite{Runeson2012}. As mentioned by Runeson et al.~\cite{Runeson2012}, in case studies, it is not possible to draw statistical significance; instead, the goal should be an analytical generalization of the results to cases with similar characteristics. As we argued before, the studied cases are typical software startups where the founder is the main innovation owner and, consequently, dictates the requirements. Besides that, these companies generally focus on developing a solution instead of understanding the customer~\cite{Giardino2014,Gutbrod2017}. Therefore, we expect that our results are valuable to describe a large portion of early-stage software startups.
Finally, reliability concerns to what extent the results depend on the researchers who performed the study, that is, whether another researcher conducting the same study would reach similar conclusions. To improve this aspect, we described all the steps for data collection and analysis in all artifact construction cycles and in the evaluation.
\section{Conclusions}
\label{sec:conclusions}
Experimentation is a useful approach to guide software development in startups. However, the lack of defined practices to guide these teams is one reason for its reduced use. Given that identifying hypotheses is the first step to creating experiments, this study focused on developing a practice to perform this task in early-stage software startups. Following a Design Science Research approach, we performed three cycles to build a visual language to depict the cognitive maps of startup founders and a systematic process to extract them. We evaluated these artifacts in three startups at different development stages.
As mentioned earlier, an extensive evaluation of HyMap would be valuable future work, probably using controlled experiments or longitudinal case studies. Another interesting work would be to assess the technique outside the startup context, for instance, when adding new features to market-driven products, where development teams create software for a market of users rather than specific customers. Besides that, other studies could improve the completeness of the hypotheses set generated by the technique, probably extending the language and the process. Such enhanced processes could also tackle other types of hypotheses.
\section{Introduction}
\label{intro}
The next generation of galaxy surveys, such as the Large Synoptic Survey Telescope (LSST) \citep{lsst2008summary} or Euclid \citep{euclid2011report,euclid2016cosmology,euclid2016missiondesign}, will not be limited by noise but by systematic effects. In particular, deep photometric observations will be subject to several foreground and target contamination effects, such as dust extinction, stars, and seeing \citep[e.g.][]{scranton2002analysis, ross2011ameliorating, ho2012clustering, huterer2013calibration, ho2015sloan}.
In the past, such effects have been addressed by generating templates for such contaminations and accounting for their overall template coefficients within a Bayesian framework. \cite{leistedt2014exploiting}, for example, compiled a total set of $220$ foreground contaminations for the inference of the clustering signal of quasars in the Sloan Digital Sky Survey (SDSS-III) Baryon Oscillation Spectroscopic Survey (BOSS) \citep{bovy2012photometric}. Foreground contaminations are also dealt with in observations of the cosmic microwave background, where they are assumed to be an additive contribution to observed temperature fluctuations \citep[e.g.][]{tegmark1996method, tegmark1998measuring, hinshaw2007threeyear, eriksen2008joint, ho2015sloan, vansyngel2016semiblind, sudevan2017improved, elsner2017unbiased}. In the context of large-scale structure analyses, \cite{jasche2017foreground} presented a foreground sampling approach to account for multiplicative foreground effects which can affect the target and the number of observed objects across the sky.
All these methods rely on a sufficiently precise estimate of the map of expected foreground contaminants to be able to account for them in the statistical analysis. These approaches exploit the fact that the spatial and spectral dependences of the phenomena generating these foregrounds are well known. But what if we are facing unknown foreground contaminations? Can we make progress in robustly recovering cosmological information from surveys subject to yet-unknown contaminations? In this work, we describe an attempt to address these questions and develop an optimal and robust likelihood to deal with such effects. The capability to account for `unknown unknowns' is also the primary motivation behind the blind method for visibility mask reconstruction recently proposed by \cite{monaco2018blind}.
The paper is organised as follows. We outline the underlying principles of our novel likelihood in Section \ref{likelihood_formalism}, followed by a description of the numerical implementation in Section \ref{numerical_implementation}. We illustrate a specific problem in Section 4 and subsequently assess the performance of our proposed likelihood via a comparison with a standard Poissonian likelihood in Section~\ref{results}. The key aspects of our findings are finally summarised in Section \ref{conclusion}.
\section{Robust likelihood}
\label{likelihood_formalism}
\begin{figure}
\centering
{\includegraphics[width=\hsize,clip=true]{map_colour_index.pdf}}
\caption{Schematic to illustrate the colour indexing of the survey elements. Colours are assigned to voxels according to patches of a given angular scale. Voxels of the same colour belong to the same patch, and this colour indexing is subsequently employed in the computation of the robust likelihood.}
\label{fig:colouring_schematic}
\end{figure}
\begin{figure}
\centering
{\includegraphics[width=\hsize,clip=true]{color_threshold.pdf}}
\caption{Slice through the 3D coloured box. The extrusion of the colour indexing scheme (cf. Fig.~\ref{fig:colouring_schematic}) onto a 3D grid yields a collection of patches, denoted by a given colour, with a group of voxels belonging to a particular patch, to be employed in the computation of the robust likelihood. The axes indicate the comoving distances to the observer, who is located at the origin (0,0,0).}
\label{fig:coloured_box_slice}
\end{figure}
\begin{figure}
\centering
{\includegraphics[width=\hsize,clip=true]{radial_selection_plot.pdf}}
\caption{Radial selection function for the CMASS sample (north galactic cap), which is used to generate the mock data emulating features of the actual SDSS-III BOSS data.}
\label{fig:radial_selection}
\end{figure}
\begin{figure*}
\centering
{\includegraphics[width=\hsize,clip=true]{maps.pdf}}
\caption{Observed sky completeness ({\it left panel}) of the CMASS component of the SDSS-III survey for the north galactic cap and dust extinction map ({\it right panel}) used to generate the large-scale contamination. This reddening map has been generated from the SFD maps \citep{schlegel1998maps}.}
\label{fig:foreground_completeness_maps}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=\hsize,clip=true]{maps_modified.pdf}}
\caption{Contaminated completeness mask ({\it left panel}) and percentage difference compared to the original completeness mask ({\it right panel}). The contamination is introduced by multiplying the original mask by a factor of $(1 - \alpha F)$, where $F$ is a foreground template, in this case the dust extinction map downgraded to the angular resolution of the colour indexing map depicted in Fig.~\ref{fig:colouring_schematic}. The factor $\alpha=5$ is chosen such that the mean contamination is 15\%, an arbitrary choice to ensure that the contamination is significant in the completeness mask. The difference between the original and contaminated masks shows that the effect is stronger at the edges of the survey.}
\label{fig:modified_foreground_completeness_maps}
\end{figure*}
We now describe the conceptual framework for the development of the robust likelihood, which constitutes the crux of this work. The standard analysis of galaxy surveys assumes that the distribution of galaxies can be described as an inhomogeneous Poisson process \citep{Layzer56,PeeblesBook,MartinezSaar03} given by
\begin{equation}
\mathcal{P}(N|\lambda) = \prod_i \frac{e^{-\lambda_i}\,\lambda_i^{N_i}}{N_i!} ,
\label{eq:standard_poisson}
\end{equation}
where $N_i$ is the observed number of galaxies at a given position in the sky $i$ and $\lambda_i$ is the expected number of galaxies at that position. The expected number of galaxies is related to the underlying dark-matter density field $\rho$ via
\begin{equation}
\lambda = S\bar{N}\rho^b \exp(-\rho_g \rho^{-\epsilon}) ,
\end{equation}
where $S$ encodes the selection function and geometry of the survey, $\bar{N}$ is the mean number of galaxies in the volume, and $\{ b, \rho_g, \epsilon \}$ are the parameters of the non-linear bias model proposed by \cite{neyrinck2014halo}.
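As an illustrative numerical sketch of this data model (not the actual inference code; the parameter values below are placeholders), the expected counts can be evaluated voxel-wise as:
\begin{verbatim}
import numpy as np

def expected_counts(rho, S, N_mean, b, rho_g, eps):
    # lambda = S * N_mean * rho^b * exp(-rho_g * rho^(-eps)),
    # the non-linear bias model of Neyrinck et al. (2014).
    return S * N_mean * rho**b * np.exp(-rho_g * rho**(-eps))

# Toy example: a 32^3 log-normal density field and trivial selection.
rng = np.random.default_rng(42)
rho = np.exp(rng.normal(0.0, 0.5, size=(32, 32, 32)))
S = np.ones_like(rho)
lam = expected_counts(rho, S, N_mean=3.0, b=1.5, rho_g=0.1, eps=0.5)
\end{verbatim}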
The key contribution of this work is to develop a more robust likelihood than the standard Poissonian likelihood by marginalizing over the unknown large-scale foreground contamination amplitudes. We start with the assumption that there is a large-scale foreground modulation that can be considered to have a constant amplitude over a particular group of voxels. Assuming that $A$ is the amplitude of this large-scale perturbation, we can write $\lambda_\alpha = A \bar{\lambda}_\alpha$, where the index $\alpha$ labels the voxels over which the perturbation is assumed to have constant amplitude. The likelihood consequently has the following form:
\begin{align}
\mathcal{P}(N|\bar{\lambda},A) &= \prod_\alpha \frac{e^{-A \bar{\lambda}_\alpha}A^{N_\alpha} (\bar{\lambda}_\alpha)^{N_\alpha}}{N_\alpha!} \\
&= e^{- A \sum_\alpha \bar{\lambda}_\alpha} A^{\sum_\alpha N_\alpha} \prod_\alpha \frac{(\bar{\lambda}_\alpha)^{N_\alpha}}{N_\alpha!}.
\end{align}
We can marginalize over the unknown foreground amplitude $A$ as follows:
\begin{align}
\mathcal{P}(N|\bar{\lambda}) &= \int \mathrm{d} A \; \mathcal{P} (N, A | \bar{\lambda}) \\
&= \int \mathrm{d} A \; \mathcal{P} (A | \bar{\lambda}) \; \mathcal{P} (N | A, \bar{\lambda}) \\
&= \int \mathrm{d} A \; \mathcal{P} (A) \; \mathcal{P} (N | A, \bar{\lambda}) ,
\end{align}
where, in the last step, we assumed conditional independence, $\mathcal{P} (A | \bar{\lambda}) = \mathcal{P} (A)$. This assumption is justified since the processes which generate the foregrounds are expected to be independent of the mechanisms involved in galaxy formation. As a result of this marginalization over the amplitude $A$, and using a power-law prior for $A$, $\mathcal{P} (A) = \kappa A^{-\gamma}$ where $\gamma$ is the power-law exponent and $\kappa$ is an arbitrary constant, the likelihood simplifies to:
\begin{align}
\mathcal{P}(N|\bar{\lambda}) &= \kappa \frac{\Gamma\Big(\sum_{\alpha} N_\alpha + 1 - \gamma\Big)}{\Big(\sum_\beta \bar{\lambda}_\beta\Big)^{\sum_\alpha N_\alpha + 1 - \gamma}}\prod_\alpha \frac{(\bar{\lambda}_\alpha)^{N_\alpha}}{N_\alpha!} \\
&\propto \frac{1}{\Big(\sum_\beta \bar{\lambda}_\beta\Big)^{1 - \gamma}} \prod_\alpha \Bigg(\frac{\bar{\lambda}_\alpha}{\sum_\beta \bar{\lambda}_\beta}\Bigg)^{N_\alpha}.
\label{eq:likelihood_power_law}
\end{align}
We employ a Jeffreys prior for the foreground amplitude $A$, which implies setting $\gamma = 1$. The Jeffreys prior is invariant under scale transformations \citep{jeffreys1946invariant} and is therefore a scale-independent prior: all scales have the same probability and there is no preferred scale. This scale-invariant prior is optimal for inference problems involving scale measurements, as it does not introduce any bias on a logarithmic scale. Moreover, it is especially interesting here because it allows for a total cancellation of the unknown amplitudes in Eq. (\ref{eq:likelihood_power_law}), resulting in the following simplified form of our augmented likelihood:
\begin{equation}
\mathcal{P}(N|\bar{\lambda}) \propto \prod_\alpha \Bigg(\frac{\bar{\lambda}_\alpha}{\sum_\beta \bar{\lambda}_\beta}\Bigg)^{N_\alpha}.
\label{eq:robust_likelihood}
\end{equation}
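The amplitude cancellation in Eq. (\ref{eq:robust_likelihood}) is easy to verify numerically; a toy Python check for a single group of voxels sharing one foreground amplitude (our own naming conventions) reads:
\begin{verbatim}
import numpy as np

def robust_log_like(N, lam):
    # log of the robust likelihood (up to a constant), for one group
    # of voxels sharing a single foreground amplitude:
    # sum_a N_a * log(lam_a / sum_b lam_b)
    return np.sum(N * np.log(lam / lam.sum()))

rng = np.random.default_rng(0)
lam = rng.uniform(1.0, 5.0, size=100)   # mean counts in one patch
N = rng.poisson(lam)                    # mock observed counts
# rescaling lam by any unknown amplitude A leaves the value unchanged
assert np.isclose(robust_log_like(N, lam),
                  robust_log_like(N, 7.3 * lam))
\end{verbatim}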
\section{Numerical implementation}
\label{numerical_implementation}
\begin{figure*}
\centering
{\includegraphics[width=\hsize,clip=true]{panels_density.png}}
\caption{Mean and standard deviation of the inferred non-linearly evolved density fields, computed from the MCMC realizations, with the same slice through the 3D fields depicted for both the Poissonian (upper panels) and augmented (lower panels) likelihoods. The filamentary nature of the non-linearly evolved density field can be observed in the regions constrained by the data, with the unobserved or masked regions displaying larger uncertainty, as expected. Unlike our robust data model, the standard Poissonian analysis yields artefacts in the reconstructed density field, particularly near the edges of the survey, where the foreground contamination is stronger.}
\label{fig:density_correlation}
\end{figure*}
\begin{figure*}
\centering
\subfloat[Robust likelihood]{{\includegraphics[width=0.45\hsize,clip=true]{Pk_robust.png} }}%
\qquad
\subfloat[Standard Poissonian likelihood]{{\includegraphics[width=0.45\hsize,clip=true]{Pk_poisson.png} }}%
\caption{Reconstructed power spectra from the initial conditions inferred in a \textsc{borg} analysis with unknown foreground contamination, for the robust likelihood (left panel) and the Poissonian likelihood (right panel), over the full range of Fourier modes considered in this work. The $\sigma$ limit corresponds to the cosmic variance $\sigma=\sqrt{1/k}$. The colour scale shows the evolution of the power spectrum with the sample number. After the initial burn-in phase, the power spectra of the individual realizations from the robust likelihood analysis possess the correct power across all scales considered, demonstrating that the foregrounds have been properly accounted for. In contrast, the standard Poissonian analysis exhibits spurious artefacts due to the unknown foreground contamination, yielding excessive power on the largest scales.}
\label{fig:pk}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\hsize,clip=true]{Pk_corr_robust.png}
\caption{Correlation matrix of power spectrum amplitudes with respect to the mean value for the robust likelihood, normalized using the variance of amplitudes of the power spectrum modes. The correlation matrix shows that our augmented data model does not introduce any spurious correlation artefacts, thereby implying that it has properly accounted for the selection and foreground effects.}
\label{fig:pk_correlation}
\end{figure}
We implement the robust likelihood in \textsc{borg} \citep[Bayesian Origin Reconstruction from Galaxies,][]{jasche2013bayesian}, a hierarchical Bayesian inference framework for the non-linear inference of large-scale structures. It encodes a physical description for non-linear dynamics via Lagrangian Perturbation Theory (LPT), resulting in a highly non-trivial Bayesian inverse problem. At the core, it employs a Hamiltonian Monte Carlo (HMC) method for the efficient sampling of a high-dimensional and non-linear parameter space of possible initial conditions at an earlier epoch, with typically $\mathcal{O}(10^7)$ free parameters, corresponding to the discretized volume elements of the observed domain. The HMC implementation is detailed in \cite{jasche2010fast} and \cite{jasche2013bayesian}. The essence of \textsc{borg} is that it incorporates the joint inference of initial conditions, and consequently the corresponding non-linearly evolved density fields and associated velocity fields, from incomplete observations. An augmented variant, \textsc{borg-pm}, employing a particle mesh model for gravitational structure formation, has recently been presented \citep{jasche2018physical}. An extension to \textsc{borg} has also been developed to constrain cosmological parameters via a novel application of the Alcock-Paczy\'nski test \citep{DKR2018altair}.
For the implementation of the robust likelihood, the HMC method that constitutes the basis of the joint sampling framework requires the negative log-likelihood and its adjoint gradient, which are given by
\begin{align*}
\Psi &\equiv -\log \mathcal{P}(N|\bar{\lambda}) \\ &= \sum_\alpha N_\alpha \log \Big( \sum_\beta \bar{\lambda}_\beta \Big) - \sum_\alpha N_\alpha \log \bar{\lambda}_\alpha, \numberthis
\label{eq:robust_loglikelihood}
\end{align*}
and
\begin{equation}
\frac{\partial \Psi}{\partial \bar{\lambda}_\gamma}\frac{\partial \bar{\lambda}_\gamma}{\partial \rho} = \frac{\bar{\lambda}_\gamma}{\rho} \Big(b + \epsilon \rho_g \rho^{-\epsilon} \Big) \Bigg[\frac{\sum_\alpha N_\alpha}{\sum_\beta \bar{\lambda}_\beta} - \frac{N_\gamma}{\bar{\lambda}_\gamma}\Bigg].
\label{eq:robust_adjoint_gradient}
\end{equation}
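In vectorized form, the negative log-likelihood and its gradient with respect to $\bar{\lambda}$ for one voxel group may be sketched as follows (a minimal illustration, not the actual \textsc{borg} implementation; the chain-rule factor through the bias model is omitted):
\begin{verbatim}
import numpy as np

def psi(N, lam):
    # Psi = (sum_a N_a) * log(sum_b lam_b) - sum_a N_a * log(lam_a)
    return N.sum() * np.log(lam.sum()) - np.sum(N * np.log(lam))

def dpsi_dlam(N, lam):
    # dPsi/dlam_g = (sum_a N_a) / (sum_b lam_b) - N_g / lam_g
    return N.sum() / lam.sum() - N / lam
\end{verbatim}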
The labelling of voxels with the same foreground modulation is encoded via a colour indexing scheme that groups the voxels into a collection of angular patches. This requires the construction of a sky map which is divided into regions of a given angular scale, where each region is identified by a specific colour and is stored in \texttt{HEALPix} format \citep{gorski2005healpix}, as illustrated in Fig. \ref{fig:colouring_schematic}. An extrusion of the sky map onto a 3D grid subsequently yields a 3D distribution of patches, with a particular slice of this 3D coloured grid displayed in Fig. \ref{fig:coloured_box_slice}. The collection of voxels belonging to a particular patch is employed in the computation of the robust likelihood given by Eq.~\eqref{eq:robust_loglikelihood}, where $\alpha$ corresponds to the colour index.
This is a maximally ignorant approach to dealing with unknown systematic errors, in which we assume that any modulation above a given angular scale is unknown. Since the colouring scheme does not depend on any foreground information, the numerical implementation of the likelihood is generic. Moreover, another advantage of our approach is that the other components of our forward modelling scheme do not require any adjustments to encode this data model. However, we have not considered additive contaminations, typically emanating from stars. We defer the extension of our data model to account for such additive contaminants to a future investigation.
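For illustration, the colour indexing could be realized with \texttt{healpy} roughly as follows (a sketch under our own conventions, with the observer at the grid origin):
\begin{verbatim}
import numpy as np
import healpy as hp

def colour_indices(xyz, nside=8):
    # Assign each voxel centre (rows of xyz) the HEALPix pixel index
    # of its sky direction; voxels sharing an index share one
    # foreground amplitude A.
    r = np.linalg.norm(xyz, axis=1)
    theta = np.arccos(xyz[:, 2] / r)          # polar angle
    phi = np.arctan2(xyz[:, 1], xyz[:, 0])    # azimuth
    return hp.ang2pix(nside, theta, phi)
\end{verbatim}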
\section{Mock generation}
\label{mock_generation}
We provide a brief description of the generation of the mock data set used to test the effectiveness of our novel likelihood, essentially based on the procedure adopted in \cite{jasche2010fast} and \cite{jasche2013bayesian}. We first generate a realization for the initial density contrast $\delta_k^{\mathrm{i}}$ from a zero-mean normal distribution with covariance corresponding to the cosmological power spectrum, such that we have a 3D Gaussian initial density field in a cubic equidistant grid with $N_{\mathrm{side}} = 256$, consisting of $256^3$ voxels, where each voxel corresponds to a discretized volume element, and comoving box length of $2000${$h^{-1}$~Mpc}. This 3D distribution of initial conditions must then be scaled to a cosmological scale factor of $a_{\mathrm{init}} = 0.001$ using a cosmological growth factor $D^+ (a_{\mathrm{init}})$.
The underlying cosmological power spectrum, including baryonic acoustic oscillations, for the matter distribution is computed using the prescription described in \cite{eisenstein1998baryonic, eisenstein1999power}. We assume a standard $\Lambda$ cold dark matter ($\Lambda$CDM) cosmology with the set of cosmological parameters ($\Omega_{\mathrm{m}} = 0.3089$, $\Omega_\Lambda = 0.6911$, $\Omega_{\mathrm{b}} = 0.0486$, $h = 0.6774$, $\sigma_8 = 0.8159$, $n_{\mathrm{s}} = 0.9667$) from \cite{13planck2015}. We then employ LPT to transform the initial conditions into a non-linearly evolved field $\delta_k^{\mathrm{f}}$ at redshift $z=0$, which is subsequently constructed from the resulting particle distribution via the cloud-in-cell (CIC) method \citep[e.g.][]{hockney1988computer}.
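A minimal recipe for drawing such a Gaussian initial field on a grid is sketched below (our own normalization conventions, which may differ from the actual implementation):
\begin{verbatim}
import numpy as np

def gaussian_field(n, box, pk, seed=0):
    # Gaussian density contrast on an n^3 grid of comoving side `box`,
    # with isotropic target power spectrum pk(k), using the Fourier
    # convention <|delta_k|^2> = V P(k).
    rng = np.random.default_rng(seed)
    kax = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(kax, kax, kax, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kz**2)
    var = np.zeros_like(kk)
    var[kk > 0] = n**3 * pk(kk[kk > 0]) / box**3
    # The FFT of unit white noise has variance n^3 per mode and the
    # required Hermitian symmetry; scaling by sqrt(var) imprints P(k).
    white_k = np.fft.fftn(rng.standard_normal((n, n, n)))
    return np.real(np.fft.ifftn(white_k * np.sqrt(var)))
\end{verbatim}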
Given the final density field $\delta_k^{\mathrm{f}}$, we generate a mock galaxy redshift catalogue subject to foreground contamination. For the test case considered in this work, we generate a data set that emulates the characteristics of the SDSS-III survey, in particular the highly structured survey geometry and selection effects. We use a numerical estimate of the radial selection function of the CMASS component of the SDSS-III survey, shown in Fig.~\ref{fig:radial_selection}, obtained by binning the corresponding distribution of tracers $N(d)$ in the CMASS sample \citep[e.g.][]{ross2017clustering}, where $d$ is the comoving distance from the observer. The CMASS radial selection function is therefore estimated from a histogram of galaxy distribution over redshift. The procedure to
construct the CMASS sky completeness is less trivial, however. We derive this CMASS mask, depicted in the left panel of Fig. \ref{fig:foreground_completeness_maps}, from the SDSS-III BOSS Data Release 12 \citep{sdss2015dr12} database by taking the ratio of spectroscopically confirmed galaxies to target galaxies in each polygon of the mask.
In order to emulate a large-scale foreground contamination, we construct a reddening map that describes dust extinction, illustrated in the right panel of Fig.~\ref{fig:foreground_completeness_maps}. This dust template is derived from the data provided by \cite{schlegel1998maps} via straightforward interpolation, rendered in \texttt{HEALPix} format \citep{gorski2005healpix}\footnote{The construction of this template is described in more depth in Section 3 of \cite{jasche2017foreground}.}. The contamination is produced by multiplying the completeness mask of CMASS, shown in the left panel of Fig. \ref{fig:foreground_completeness_maps}, by a factor of $(1-\eta F)$, where $F$ is the foreground template rescaled to the angular resolution of the colour indexing scheme, and $\eta$ controls the amplitude of this contamination. To obtain a mean contamination of 15\% in the completeness, we arbitrarily chose $\eta = 5$ to ensure that the foreground contaminations are significant. This mean value corresponds to the average contamination per element of the sky completeness. Figure \ref{fig:modified_foreground_completeness_maps} shows the contaminated sky completeness and the percentage difference, with the edges of the survey being more affected by the contamination due to their proximity to the galactic plane where the dust is more abundant. The mock catalogue is produced by drawing random samples from the inhomogeneous Poissonian distribution described by Eq. (\ref{eq:standard_poisson}) and using the modified completeness.
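The contamination step itself is straightforward; a toy Python version (with made-up patch values standing in for the real completeness and dust template) might read:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
comp = rng.uniform(0.8, 1.0, size=3072)  # toy completeness per patch
F = rng.uniform(0.0, 0.1, size=3072)     # toy dust template
eta = 0.15 / F.mean()                    # mean contamination of 15%
comp_cont = comp * (1.0 - eta * F)       # modified completeness
N = rng.poisson(50.0 * comp_cont)        # inhomogeneous Poisson counts
\end{verbatim}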
\section{Results and discussion}
\label{results}
In this section, we discuss results obtained by applying the \textsc{borg} algorithm with the robust likelihood to contaminated mock data. We also compare the performance of our novel likelihood with that of the standard Poissonian likelihood typically employed in large-scale structure analyses. In order to test the effectiveness of our likelihood against unknown systematic errors and foreground contaminations, the algorithm is agnostic about the contamination and assumes the CMASS sky completeness depicted in the left panel of Fig. \ref{fig:foreground_completeness_maps}.
We first study the impact of the large-scale contamination on the inferred non-linearly evolved density field. To this end, we compare the ensemble mean density fields and corresponding standard deviations for the two Markov chains with the Poissonian and novel likelihoods, respectively, illustrated in the top and bottom panels of Fig. \ref{fig:density_correlation}, for a particular slice of the 3D density field. As can be deduced from the top-left panel of Fig. \ref{fig:density_correlation}, the standard Poissonian analysis results in spurious effects in the density field, particularly close to the boundaries of the survey since these are the regions that are the most affected by the dust contamination. In contrast, our novel likelihood analysis yields a homogeneous density distribution through the entire observed domain, with the filamentary nature of the present-day density field clearly seen. While we can recover well-defined structures in the observed regions, the ensemble mean density field tends towards the cosmic mean density in the masked or poorly observed regions, with the corresponding standard deviation being higher to reflect the larger uncertainty in these regions. From this visual comparison, it is evident that our novel likelihood is more robust against unknown large-scale contaminations.
From the realizations of our inferred 3D initial density field, we can reconstruct the corresponding matter power spectra and compare them to the prior cosmological power spectrum adopted for the mock generation. The top panel of Fig. \ref{fig:pk} illustrates the inferred power spectra for both likelihood analyses, with the bottom panel displaying the ratio of the {\it a posteriori} power spectra to the prior power spectrum. While the standard Poissonian analysis yields excessive power on the large scales due to the artefacts in the inferred density field, the analysis with our novel likelihood allows us to recover an unbiased power spectrum across the full range of Fourier modes.
In addition, we tested the combined effects of the foreground and unknown noise amplitudes by estimating the covariance matrix of the Fourier amplitudes of the reconstructed power spectra. As depicted in Fig. \ref{fig:pk_correlation}, our novel likelihood exhibits uncorrelated amplitudes of the Fourier modes, as expected from $\Lambda$CDM cosmology. The strong diagonal shape of the correlation matrix indicates that our proposed data model correctly accounted for any mode coupling introduced by survey geometry and foreground effects.
The above results clearly demonstrate the efficacy of our proposed likelihood in robustly dealing with unknown foreground contaminations in the inference of non-linearly evolved dark matter density fields and the underlying cosmological power spectra from deep galaxy redshift surveys. This method can also be inverted to constrain the properties of the foreground contamination: the inferred dark matter density allows galaxy catalogues to be built free of contamination, and these can be compared to the observed number counts to reconstruct the foreground properties from the mismatch between the two catalogues.
\section{Summary and conclusions}
\label{conclusion}
The increasing requirement to control systematic and stochastic effects to high precision in next-generation deep galaxy surveys is one of the major challenges for the coming decade of surveys. If not accounted for, unknown foreground effects and target contaminations will yield significant erroneous artefacts and bias cosmological conclusions drawn from galaxy observations. A common spurious effect is an erroneous modulation of galaxy number counts across the sky, hindering the inference of 3D density fields and associated matter power spectra.
To address this issue, we propose a novel likelihood to implicitly and efficiently account for unknown foreground and target contaminations in surveys. We described its implementation in a framework of non-linear Bayesian inference of large-scale structures. Our proposed data model is conceptually straightforward and easy to implement. We illustrated the application of our robust likelihood to a mock data set with significant foreground contaminations and evaluated its performance via a comparison with an analysis employing a standard Poissonian likelihood to showcase the contrasting physical constraints obtained with and without the treatment of foreground contamination. We have shown that foregrounds, when unaccounted for, lead to spurious and erroneous large-scale artefacts in density fields and corresponding matter power spectra. In contrast, our novel likelihood allows us to marginalize over unknown large-angle contamination amplitudes, resulting in a homogeneous inferred density field, thereby recovering the fiducial power spectrum amplitudes.
We are convinced that our approach will contribute to optimising the scientific returns of current and coming galaxy redshift surveys. We have demonstrated the effectiveness of our robust likelihood in the context of large-scale structure analysis. Our augmented data model remains nevertheless relevant for more general applications with other cosmological probes, with applications potentially extending even beyond the cosmological context.
\section*{Acknowledgements}
We express our appreciation to the anonymous reviewer for his comments which helped to improve the overall quality of the manuscript. NP would like to thank Torsten En{\ss}lin for discussions and support. NP is supported by the DFG cluster of excellence ``Origin and Structure of the Universe.''\footnote{www.universe-cluster.de} This work has been done within the activities of the Domaine d’Intérêt Majeur (DIM) Astrophysique et Conditions d’Apparition de la Vie (ACAV), and received financial support from Région Ile-de-France. DKR and GL acknowledge financial support from the ILP LABEX, under reference ANR-10-LABX-63, which is financed by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02. GL also acknowledges financial support from the ANR BIG4, under reference ANR-16-CE23-0002. This work is done within the Aquila Consortium.\footnote{\url{https://aquila-consortium.org}}
\bibliographystyle{aa.bst}
Top-quark polarisation is important for revealing hints of physics beyond the standard model, especially the chiral structure of the top sector. For a highly boosted top-quark, the information on the polarisation is translated into that of the helicities, since the chirality coincides with the helicity at high energies.
Once a boosted top-quark is produced, it decays through the weak interaction, and the decay products of the highly boosted top are collimated along the direction of the top momentum. Such highly collimated configurations spoil ordinary methods of distinguishing these decay particles. Jet substructures are useful to overcome this difficulty for jets from boosted particles. Various jet observables have been proposed; for example, ``girth'' \cite{girth} and ``angularity'' \cite{angularity} are useful for discussing the substructure of jets. Readers interested in jet substructure are referred to the recent review (lecture note) \cite{review.jetsubstructure}.
Since top-polarisation effects can be found in the sub-jet-energy distribution \cite{boost.top.pol}, studying the correlation between top polarisation and other jet substructures is important not only for a deeper understanding of the standard model, but also for discussing extended chiral structures in the top sector.
In our previous work \cite{YKLi}, the helicity dependence of the top-jet substructure was discussed; in particular, we considered the energy profile (alternatively, the jet shape), which expresses the distribution of the sub-jet energy in the top-jet cone for a highly boosted polarised top. The energy profile is defined as follows:
\begin{eqnarray}
\Psi(r) &=&
\frac{1}{N_{J_t}}\sum_{J_t}\frac{\sum_{r_i<r, i\in J_t}P_{T_i}}
{\sum_{r_i<R_t, i\in J_t}P_{T_i}},\label{profile}
\end{eqnarray}
where $r \le R_t$ is a test radius in the top jet $J_t$,
$N_{J_t}$ is the number of top jets with top-jet radius $R_t$, and $P_{T_i}$ is the transverse momentum of a particle $i$ in the top jet. It is worth noting that the lepton energy in the semileptonic top-decay is not included in this definition.
In our framework, this energy profile is expressed as a convolution of a hard kernel with the energy function of the light-quark jet evaluated in pQCD. Using this energy profile for a top-jet with a particular helicity, we can theoretically study the distribution of the sub-jet energy in the top decay. In particular, we consider the semileptonic decay for simplicity of the analysis, and we count the energy of the b-jet in this decay. We will discuss the dependence of the energy profile on the top-jet energy and on the helicity. The helicity dependence is converted into a ``helicity minus-plus (chirality left-right) difference'', and this difference will be useful to distinguish the helicities of the top quark. It turns out that the energy profile is sensitive to the helicity of the top quark, and a top quark with helicity-minus accumulates the sub-jet energy faster than one with helicity-plus. This feature is understood within the standard model, namely within the $V-A$ structure of the weak interaction. The theoretical formalism and the results for the energy profile are presented in Section \ref{formalism}, the reason for the difference in the energy profile between the two helicities is discussed in Section \ref{discussion}, and Section \ref{conclusion} is devoted to the conclusion.
\section{Factorisation and Energy profile}\label{formalism}
\subsection{formalism}
We consider the process $q\bar{q} \to t\bar{t}$ as the subprocess of top-pair production to construct the top-jet function $J_t$. In principle, we could also include the subprocess $gg \to t\bar{t}$, but the factorisation procedure for the $gg$ process is the same as for $q\bar{q}$. Therefore we only consider the $q\bar{q} \to t\bar{t}$ process as the production process for simplicity.
The factorisation at the leading order (LO) is simple. We can change the fermion flow thanks to the Fierz identity, and the production part and the decay part are factorised in the squared matrix element as
\begin{eqnarray}
\Big| \overline{\mathcal{M}} \Big|^2
&=&
\Big| \overline{\mathcal{M}}_{pro} \Big|^2
\Big| \overline{\mathcal{M}}_{decay} \Big|^2
\left[1 + O\left(\frac{m^2_t}{s}\right)\right],
\end{eqnarray}
where $\overline{\mathcal{M}}$ is the total probability amplitude of the process $q\bar{q} \to \bar{t} b\ell \nu$, $\overline{\mathcal{M}}_{pro}$ is the production part related to the process $q\bar{q} \to t\bar{t}$, $\overline{\mathcal{M}}_{decay}$ is the decay part related to the process $t\to b\ell \nu$, and $\sqrt{s}$ is the centre-of-mass energy of the $q\bar{q}$ pair. We can neglect the term of $\mathcal{O}(m^2_t/s)$ for a highly boosted top quark. The production part cancels out in the final result for the energy profile, hence we do not write its full expression explicitly. The decay part $\Big| \overline{\mathcal{M}}_{decay} \Big|^2$ is given by the product of the leptonic trace and the trace for the decay part. Factorising the b-quark trace from the decay trace by the Fierz identity and combining it with the phase space, the decay part is converted into a part of the top-jet function \cite{YKLi}.
The LO top-jet function $J^{(0),s_t}_{t}$ specified by the top-spin vector $s_t$ is expressed as the convolution with the hard kernels $F_a, F_b$ and the LO b-jet function $J^{(0)}_{b}$ in the following form:
\begin{eqnarray}
J^{(0),s_t}_{t}(m^2_{J_t},\bar E_{J_t},\bar R_t)
&=& f_{t}(z_{J_t}) \int dz_{J_b}d\bar x_{J_b}d\cos\bar\theta_{J_b}\nn\\
&{}& \times
\left[ F_{a}(z_{J_t}, \bar x_{J_b}, z_{J_b}) + |\vec{s}_t|
F_{b}(z_{J_t}, \bar x_{J_b}, z_{J_b}) \cos\bar\theta_{J_b}
\right] J^{(0)}_{b}(m^2_{J_b}, \bar E_{J_b}, \bar R_t),\label{eq.Jtst.LO}
\end{eqnarray}
where $\bar E_{J_t}=m_{J_t}$, $\bar E_{J_b}$ is the b-jet energy in the
rest frame of the top quark, the dimensionless parameters $z_{J_t}$, $\bar{x}_{J_b}$, $z_{J_b}$ are defined as
\begin{eqnarray}
z_{J_t} = \frac{m^2_{J_t}}{m^2_t}, \hspace{1.0cm}
\bar x_{J_b} = \frac{2\bar{E}_{J_b}}{m_{J_t}}, \hspace{1.0cm}
z_{J_b} = \frac{m^2_{J_b}}{m^2_{J_t}},
\end{eqnarray}
and the polar angle $\bar\theta_{J_b}$ is measured as the relative angle between the top-spin $\vec{s}_t$ and the b-jet momentum, $\bar R_t$ is the top-jet radius supposed to be the upper bound of $\bar\theta_{J_b}$ in the rest frame of the top quark. The hard kernels $F_a$ and $F_b$ are given by
\begin{eqnarray}
F_a(z_{J_t}, \bar{x}_{J_b}, z_{J_b}) &=&
\sqrt{z_{J_t}} \sqrt{\bar{x}^2_{J_b} - 4z_{J_b}} f_{W}(z_{J_t},\bar{x}_{J_b}, z_{J_b})
\left[ - \frac{1}{3}\bar{x}^2_{J_b}
+ \frac{1 + z_{J_b}}{2}\bar{x}_{J_b}
- \frac{2}{3} z_{J_b}
\right],\nn\\
F_b(z_{J_t}, \bar{x}_{J_b}, z_{J_b}) &=&
f_{W}(z_{J_t}, \bar{x}_{J_b}, z_{J_b})
\left[ - \frac{1}{3}\bar{x}^3_{J_b}
+ \frac{1 + 3z_{J_b}}{6}\bar{x}^2_{J_b}
+ \frac{4}{3} z_{J_b} \bar{x}_{J_b}
- \frac{2}{3} z_{J_b} (1 + 3z_{J_b})
\right],
\end{eqnarray}
where $f_{W}(z_{J_t}, \bar{x}_{J_b}, z_{J_b})=1/[(1+z_{J_b}-\bar{x}_{J_b}-\xi)^2 + (\xi\eta)^2]$ is the dimensionless $W$-boson propagator with the mass ratios $\xi=m^2_W/m^2_{J_t}, \eta=\Gamma_W/m_W$. The overall factor $f_{t}(z_{J_t})$ is proportional to the dimensionless top propagator $1/[(1-z_{J_t})^2 + \eta^2_{t}]$ with the mass ratio $\eta_t=\Gamma_t/m_{J_t}$. The spin dependence in the top-jet function is introduced through the spin decomposition $(k_t\sura + m_t)=(k_t\sura+m_t)(1+\gamma^5 s_t\sura)/2+(k_t\sura+m_t)(1-\gamma^5s_t\sura)/2$.
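For completeness, the hard kernels and the dimensionless $W$ propagator translate directly into code; a minimal Python transcription (our own, using the mass and width values quoted later in the text, and $\xi = m_W^2/(z_{J_t} m_t^2)$) is:
\begin{verbatim}
import numpy as np

MT, MW, GW = 172.5, 80.39, 2.09   # m_t, m_W, Gamma_W in GeV

def f_W(z_Jt, x, z_Jb):
    # dimensionless W propagator, with xi = m_W^2 / m_Jt^2
    # and m_Jt^2 = z_Jt * m_t^2
    xi, eta = MW**2 / (z_Jt * MT**2), GW / MW
    return 1.0 / ((1.0 + z_Jb - x - xi)**2 + (xi * eta)**2)

def F_a(z_Jt, x, z_Jb):
    return (np.sqrt(z_Jt) * np.sqrt(x**2 - 4.0 * z_Jb)
            * f_W(z_Jt, x, z_Jb)
            * (-x**2 / 3.0 + (1.0 + z_Jb) * x / 2.0
               - 2.0 * z_Jb / 3.0))

def F_b(z_Jt, x, z_Jb):
    return (f_W(z_Jt, x, z_Jb)
            * (-x**3 / 3.0 + (1.0 + 3.0 * z_Jb) * x**2 / 6.0
               + 4.0 * z_Jb * x / 3.0
               - 2.0 * z_Jb * (1.0 + 3.0 * z_Jb) / 3.0))
\end{verbatim}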
Although the LO b-jet function $J^{(0)}_{b}(m^2_{J_b}, \bar E_{J_b}, R_{b})$ is proportional to the delta function $\delta(m^2_{J_b} - m^2_b)$, by taking into account the soft-gluon contributions to this process, we obtain the expression of the top-jet function $J^{s_t}_t$ including QCD effects in the following form:
\begin{eqnarray}
J^{s_t}_{t}(m^2_{J_t},\bar E_{J_t},\bar R_t)
&=& f_{t}(z_{J_t})\int dz_{J_b}d\bar x_{J_b}d\cos\bar\theta_{J_b}\nn\\
&{}& \times
\left[ F_{a}(z_{J_t}, \bar x_{J_b}, z_{J_b}) + |\vec{s}_t|
F_{b}(z_{J_t}, \bar x_{J_b}, z_{J_b}) \cos\bar\theta_{J_b}
\right] J_{b}(m^2_{J_b}, \bar E_{J_b}, \bar R_t),\label{eq.Jtst.resum}
\end{eqnarray}
where the bottom-jet function $J_{b}(m^2_{J_b}, \bar E_{J_b}, \bar R_t)$ improved by the soft-gluon resummation is available, for instance, in Ref. \cite{energyprofile}.
In order to convert from the rest frame of the top quark to its boosted frame, we relate the top-jet energy $E_{J_t}$ and the decay angle $\theta_{J_b}$ of the b-jet defined in the boosted frame to those in the rest frame through the Lorentz transformation \cite{Shelton}
\begin{eqnarray}
E_{J_t} = \gamma_t \bar{E}_{J_t}, \hspace{1.0cm}
\cos\bar{\theta}_{J_b} = \frac{-v_t + \cos\theta_{J_b}}{1 - v_t \cos\theta_{J_b}},
\end{eqnarray}
where we neglect the b-jet mass, because it is small compared with the top-jet energy or the top-jet mass. We can also neglect the $z_{J_b}$-dependent terms in the hard kernels $F_a, F_b$ for the same reason. Here the Lorentz transformation is performed such that the momentum of the boosted top is along the spin direction of the top quark. Therefore we regard $J^{s_t}_{t}$ as the top-jet function $J^{R}_t$ with helicity-plus (alternatively, the right-hand top) under the above Lorentz boost. On the other hand, the top-jet function $J^{L}_t$ with helicity-minus (left-hand top) is obtained by flipping the sign of the cosine-dependent term, $J^{L}_t = J^{R}_t \big|_{\cos \to -\cos}$.
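For reference, this frame conversion amounts to a pair of one-line helpers (a sketch with our own function names):
\begin{verbatim}
def cos_rest(cos_lab, v_t):
    # decay angle in the top rest frame from the boosted-frame angle
    return (-v_t + cos_lab) / (1.0 - v_t * cos_lab)

def energy_boost(E_rest, gamma_t):
    # E_{J_t} = gamma_t * Ebar_{J_t}
    return gamma_t * E_rest
\end{verbatim}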
The top-jet-energy function $J^{E,R(L)}_{t}$ for the right-hand (left-hand) top is defined in a similar way to the top-jet function $J^{R(L)}_{t}$. Multiplying the integrand of the top-jet function by the transverse energy of the b-jet within a test cone $r<R_t$ and integrating out the top-jet mass, we derive the top-jet-energy function $J^{E,R(L)}_{t}$ in the following form:
\begin{eqnarray}
J^{E,R(L)}_{t}(\bar E_{J_t},\bar R_t, r)
&=& \int \frac{dz_{J_t}}{z_{J_t}}f_{t}(z_{J_t})
\int dz_{J_b}d\bar x_{J_b}d\cos\bar\theta_{J_b}\nn\\
&{}& \times
\left[ F_{a}(z_{J_t}, \bar x_{J_b}, z_{J_b}) \pm |\vec{s}_t|
F_{b}(z_{J_t}, \bar x_{J_b}, z_{J_b}) \cos\bar\theta_{J_b}
\right] J^{E}_{b}(\bar E_{J_b}, \bar R_t, r),\label{eq.JtEst.resum}
\end{eqnarray}
where the hard kernels $F_a, F_b$ are the same functions that appeared in $J^{R(L)}_t$, and the energy function $J^{E}_{b}$ is calculated in Ref. \cite{energyprofile}.
\subsection{Results}
The energy profile $\Psi^{R(L)}(r)$ at the parton level for the helicity-plus (minus) top is expressed in terms of the energy function $J^{E,R(L)}_t$ as the function of the test-cone radius $r$:
\begin{eqnarray}
\Psi^{R(L)}(E_{J_t}, R_{t}, r)
= \frac{J^{E,R(L)}_{t}(E_{J_t},R_t, r)}
{J^{E,R(L)}_{t}(E_{J_t},R_t, r=R_t)}.
\end{eqnarray}
The dependence of the energy profile on the top-jet energy $E_{J_t}$ (velocity $\beta_t$, gamma factor $\gamma_t$) is shown in Fig. \ref{fig-1}(a).
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=7.2cm,clip]{fig1a.eps} &
\includegraphics[width=7.2cm,clip]{fig1b.eps} \\
(a) & (b)
\end{tabular}
\caption{Top-jet-energy $E_{J_t}$ dependence in the energy profile for the boosted top:(a), the helicity minus-plus (chirality left-right) difference $\Delta\Psi(r)$:(b).}
\label{fig-1}
\end{figure}
We use the parameters $m_t=172.5~\mbox{GeV}, m_{W}=80.39~\mbox{GeV}$ for masses, $\Gamma_W=2.09~\mbox{GeV}, \Gamma_{t}=1.33~\mbox{GeV}$ for decay widths, and $\Lambda_{\mbox{\tiny QCD}}=0.1~\mbox{GeV}$ for the scale parameter of QCD with six flavours.
It is obvious that the energy profile of the helicity-minus (left-hand) top is larger than that of the helicity-plus (right-hand) top for $E_{J_t} = 500~\mbox{GeV}~(\beta_t=0.94, \gamma_t=2.9)$, $750~\mbox{GeV}~(\beta_t=0.97, \gamma_t=4.3)$ and $1~\mbox{TeV}~(\beta_t=0.99, \gamma_t=5.8)$ with a fixed top-jet radius $R_t=1.0$. The difference in the energy profile between the helicity-plus top and the helicity-minus top can be quantified by the difference between $\Psi^{L}(r)$ and $\Psi^{R}(r)$, for example with the value $\Delta \Psi(r)$ defined as
\begin{eqnarray}
\Delta \Psi(r) = \frac{\Psi^{L}(r) - \Psi^{R}(r)}
{\frac{\Psi^{L}(r) + \Psi^{R}(r)}{2}},
\end{eqnarray}
where this value is the ratio of the difference between the helicity-minus and helicity-plus to its average. The ratio $\Delta\Psi(r)$ is shown in (b) of Figure \ref{fig-1}.
A typical difference between the helicity-minus and helicity-plus top can be found in the small-$r$ region, for example at $r=0.1$, where the numerical values of $\Delta\Psi(r=0.1)$ are of order $50\%$ for the top-jet energies considered here.
\section{Discussion}\label{discussion}
\noindent
The mechanism by which the energy profile for the helicity-minus (left-hand) top dominates over that of the helicity-plus (right-hand) top is explained in Figure \ref{fig-2}.
\begin{figure}
\centering
\begin{tabular}{c}
\includegraphics[width=12cm,clip]{fig2.eps}\\
\hspace{0cm} (a) \hspace{5cm} (b)
\end{tabular}
\caption{Favoured decay direction of the b-jet for the top quark with the helicity-plus:(a) and helicity-minus:(b). }
\label{fig-2}
\end{figure}
According to the standard $V-A$ weak interaction, the angular distribution of a decay particle of the polarised top quark in the rest frame of the top is summarised in the following form \cite{Spin.analyz1,Spin.analyz2}:
\begin{eqnarray}
\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta_{i}}
&=& \frac{1}{2}(1 + \kappa_i |\vec{\rho}| \cos\theta_{i}), \hspace{1cm}i=b,\ell,\nu,
\end{eqnarray}
where $\Gamma$ is the partial decay width of this decay process, $|\vec{\rho}|$ is the magnitude of the top polarisation vector, the decay angle $\theta_i$ is measured between the top spin and the momentum direction of the decay particle $i$, and the numerical constant $\kappa_i$ is known as the spin-analysing power, which describes the sensitivity of the decay particle $i$ to the top spin. Numerically, the spin-analysing power of the b quark is negative, $\kappa_b \simeq -0.4$ \cite{Spin.analyz3}, which means that the favoured decay direction of the bottom quark is opposite to the top spin direction.
This tendency is kept as long as the boost is not too large, since the top quark is very heavy and the boost parameter is less than unity in actual experiments.
Hence there is a correlation between the momentum direction of the b-jet and the top helicity, translated from the top spin.
For example, according to the definition of the helicity, the b-jet tends to be emitted in the direction opposite to the top spin, which for the helicity-plus top (right-hand top) is opposite to the top momentum; therefore the b-jet tends to go outside the top-jet cone (Figure \ref{fig-2}(a)). On the other hand, for the helicity-minus top (left-hand top) the direction opposite to the top spin coincides with the top momentum, and therefore the b-jet tends to go inside the top-jet cone (Figure \ref{fig-2}(b)).
Comparing the contributions to the jet energy profile for the helicity-plus and helicity-minus tops, the b-jet contribution to the energy profile for the helicity-minus top has a larger probability than that for the helicity-plus top. This is the reason why the helicity-minus top accumulates the b-jet energy faster than the helicity-plus top.
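This angular preference is easy to verify numerically; a short Monte Carlo sketch of the above distribution (our own inverse-transform sampler, with $\kappa_b \simeq -0.4$ and $|\vec{\rho}|=1$) reproduces the expected mean $\langle\cos\theta_b\rangle = \kappa_b |\vec{\rho}|/3 < 0$:
\begin{verbatim}
import numpy as np

def sample_cos_theta(kappa, rho=1.0, n=100000, seed=0):
    # draw cos(theta) from (1/2)(1 + kappa*rho*cos(theta)) on [-1, 1]
    # by inverting the (quadratic) cumulative distribution
    k = kappa * rho
    u = np.random.default_rng(seed).uniform(size=n)
    return (np.sqrt((1.0 - k)**2 + 4.0 * k * u) - 1.0) / k

# kappa_b ~ -0.4: mean cos(theta_b) ~ kappa_b/3 < 0, i.e. the b quark
# is preferentially emitted opposite to the top spin
print(sample_cos_theta(-0.4).mean())
\end{verbatim}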
\section{Conclusion}\label{conclusion}
We have theoretically investigated the helicity dependence of the jet substructure within the standard model, in particular the energy profile of the top-jet for the semileptonic top decay. The main result is expressed as the convolution of the b-jet energy function, improved by pQCD resummation, with the hard kernel calculated from the weak interaction. It indicates that the helicity-minus top accumulates the energy of the b-jet in the semileptonic decay faster than the helicity-plus top.
This tendency can be understood with the standard $V-A$ weak interaction, i.e., as a consequence of the negative spin-analysing power of the b quark. These results imply that the energy profile, one of the simplest jet substructures, is useful for discriminating the helicities of the boosted top quark.
Moreover, this discrimination will be helpful not only for the identification of the top helicities, but also for the study of the chiral structure of the top quark through jet observables. The straightforward application of this formalism to the hadronic top decay is under investigation. We expect that similar differences will appear in other jet-substructure observables like girth \cite{girth} or angularity \cite{angularity}, and in other jet observables discussed in \cite{review.jetsubstructure}.
Although data on the energy profiles of light-jets and b-jets in top events at the LHC have been reported in Ref. \cite{atlas.jetshape}, the comparison of our results with the experimental data is nontrivial, since we focus only on highly boosted top quarks. Nevertheless, such comparisons will be interesting as a test of our formalism, and we leave them for future work.
\section*{Acknowledgements}
I would like to thank Pietro Colangelo, Fulvia De Fazio, Claudio Corian\`o, Luca Trentadue and the other organisers of the international workshop on QCD at Giovinazzo (Italy) for their hospitality. I also acknowledge Hsiang-nan Li for useful discussions and the financial support to participate in this workshop. This work was supported in part by the grant NSC-101-2112-M-001-006-MY3.
Stellar-mass black hole candidates (BHCs) exhibiting transient behavior generally reside in binaries. They show occasional outbursts of variable duration, ranging from a few weeks to months. In between two outbursts, these transient BHCs stay in long periods of quiescence. During an outburst, the compact object (here, a BHC) accretes matter from its companion via Roche-lobe overflow and/or wind accretion, forming a disk-like structure commonly known as an {\it accretion disk}. Electromagnetic radiation from radio to $\gamma$-rays is emitted from the disk, which makes it observable.
It is believed that an outburst is triggered by a sudden rise of viscosity in the disk, which increases the accretion rate in the inner disk (Chakrabarti, 2013). Rapid evolution of the spectral and temporal properties is observed during an outburst of a transient BHC, and these are found to be strongly correlated. In the hardness-intensity diagram (HID; Fender et al. 2004; Debnath et al. 2008) or the accretion rate ratio intensity diagram (ARRID; Jana et al. 2016), different spectral states are found to be associated with different branches.
Generally four spectral states, namely the hard (HS), hard-intermediate (HIMS), soft-intermediate (SIMS) and soft (SS) states, are observed during an outburst. Each state is defined by certain characteristics of its spectral and temporal features. The HS and HIMS are dominated by non-thermal, highly energetic radiation with a monotonic rise/fall of low-frequency quasi-periodic oscillations (QPOs), whereas the SIMS and SS are dominated by thermal radiation with sporadic QPOs (in the SIMS) or no QPOs (in the SS) (for more details, see Nandi et al. 2012; Debnath et al. 2010, 2013 and references therein).
According to Debnath et al. (2017), outbursts are of two types: type-I or classical, where all spectral states are observed, and type-II or harder, where the SS is absent. The latter type of outburst is termed a `failed' outburst. For instance, the 2005 outburst of Swift~J1753.5-0127 is of type-II.
Black hole (BH) X-ray spectrum consists of both thermal and non-thermal components. The thermal component is basically a
multicolor blackbody that is emitted from the standard Keplerian disk (Shakura \& Sunyaev 1973). The non-thermal
component is of power-law (PL) type, and it originates from the so-called `hot corona' or 'Compton cloud' (Sunyaev \& Titarchuk 1980).
In the two-component advective flow (TCAF) solution (Chakrabarti \& Titarchuk 1995), this corona is identified with
the CENtrifugal pressure supported BOundary Layer (CENBOL), which naturally forms behind the centrifugal barrier due
to pile-up of the free-falling, weakly viscous (less than critical viscosity), sub-Keplerian (low angular momentum) matter.
Soft photons from the Keplerian disk gain energy through repeated inverse-Compton scattering off the hot electrons in the CENBOL and emerge as highly energetic photons with a power-law distribution in energy.
Recently, this TCAF solution has been included in HEASARC's spectral analysis software package XSPEC as an additive table
model to fit BH spectra (Debnath et al. 2014, 2015a). A few transient BHCs have been studied by our group to obtain a clear picture of the evolution of the physical properties of these sources during their X-ray outbursts (Mondal et al. 2014, 2016; Debnath et al. 2015a,b, 2017; Chatterjee et al. 2016; Jana et al. 2016; Bhattacharjee et al. 2017; Molla et al. 2017).
Jets and outflows are important features in accretion disk dynamics. According to the TCAF paradigm,
the jets and outflows are produced primarily from the CENBOL region (Chakrabarti 1999a; Das \& Chakrabarti 1999).
If this region remains hot as in hard and hard-intermediate states, jets could be produced, otherwise not.
Generally, inflow rates increase as the object goes from the hard state to the hard-intermediate state, and correspondingly higher outflow rates are observed in the intermediate states. It is also reported in the literature that blobby jets
are possible in intermediate states (Chakrabarti, 1999b; 2001; Nandi et al. 2001) due to higher optical depth
at the base of the jet which episodically cools and separates the jets.
In softer states, this region is quenched and the outflow rates are reduced (also see, Garain et al. 2013).
Collimation of the jets could be accomplished by toroidal flux tubes emerging from generally convective
disks (Chakrabarti \& D’Silva 1994; D’Silva \& Chakrabarti 1994).
There are several papers in the literature that invoke diverse mechanisms for the acceleration of this matter, the discussion of which is beyond the scope of the present paper. Here, we introduce a new method to estimate the X-ray flux emitted from the base of the jet during the entire period of the 2005 outburst of Swift~J1753.5-0127, and we compare it with the radio observations.
Radio jets are common in active galactic nuclei (AGN). They have also been observed in several Galactic BHCs, such as GRS~1758-258 (Rodriguez et al., 1992) and 1E~1740.7-2942 (Mirabel et al., 1992). Compact radio jets have been detected in BHCs such as GRS~1915+105 (Dhawan et al., 2000) and Cyg X-1 (Stirling et al., 2001). The BHCs GRS~1915+105 (Mirabel \& Rodriguez, 1994) and GRO~J1655-40 (Tingay et al., 1995; Hjellming \& Rupen, 1995) show superluminal jets.
Though jets are most prominent in radio, they can also be observed in other energy bands, such as X-rays and $\gamma$-rays. High-energy $\gamma$-ray jets have been observed in Cyg X-1 (Laurent et al. 2011; Jourdain et al. 2012) and V~404~Cyg (Loh et al. 2016).
Large scale, decelerating relativistic X-ray emitting jets have been observed in BHC XTE~J1550-564
(Corbel et al. 2002a, 2002b, Kaaret et al. 2006, Tomsick et al. 2003). In this case, radio blobs were predicted
to move at relativistic speed, with blobs emitting in X-rays. H~1743-322 also showed a similar X-ray jet (Corbel et al. 2005).
Kaaret et al. (2006) reported large scale X-ray jet in BHC 4U~1755-33.
A relation between IR and X-ray jets has been found in the BHC GRS~1915+105 (Eikenberry et al. 1998; Lasso-Cabrera \& Eikenberry, 2013). The X-ray jet of SS~433 is well known, even close to the black hole.
A correlation between X-ray and radio band intensity in compact jets was first found in BHC
GX~339-4 (Hannikainen et al. 1998). The standard correlation is $F_{R} \propto F_{X}^b$ with $b \sim 0.6-0.7$
(Corbel et al. 2003; Gallo et al. 2003). This empirical relation is thought to be universal, although for some BHCs, it is observed
to have a steeper PL index of $\sim 1.4$ (Jonker et al. 2004; Coriat et al. 2011). Some BHCs have also shown a dual track in the correlation plot: dual correlation indices were observed for the BHCs GRO~J1655-40 (Corbel et al. 2004), H~1743-322 (Coriat et al. 2011), XTE~J1752-522 (Ratti et al. 2012) and MAXI~J1659-152 (Jonker et al. 2012).
Until now, radio and X-ray correlation studies have been carried out using quasi-simultaneous radio and X-ray data. Usually, the total X-ray flux (disk plus jet) is used for the correlation. Jets are reported to emit across the entire electromagnetic spectrum, from radio to $\gamma$-rays. Thus, the X-rays emitted from a BHC when jets are present are the net contribution from both the jet and the accretion disk. Till now, there was no way to separate the contributions of these two components. In the present paper, for the first time, we make an attempt to separate these two components from the total observed X-rays, using two unique aspects of spectral studies with the TCAF solution: the radiation in the accretion disk component is contributed by the Keplerian disk (dominating the soft X-ray band) and by the `hot Compton cloud' region, i.e., the CENBOL (dominating the hard X-ray band), and the normalization can be treated as a constant across the spectral states.
Swift~J1753.5-0127 was discovered on 2005 June 30 by the Swift/BAT instrument at RA$=17^h 53^m 28^s.3$, DEC$=-01^\circ 27' 09''.3$ (Palmer et al. 2005). The BHC Swift~J1753.5-0127 has a short orbital period ($2.85$~hrs according to Neustroev et al. 2014; $3.2\pm 0.2$~hrs according to Zurita et al. 2007). Neustroev et al. (2014) also estimated the mass of the source as $< 5~M_{\odot}$ and the companion mass to be between $0.17-0.25 M_{\odot}$, with a disk inclination angle $>40^\circ$. On the contrary, Shaw et al. (2016) estimated the mass as $>7.4~M_{\odot}$. The distance of the source is estimated to be $4-8$~kpc (Cadolle Bel et al. 2007).
Radio jets were also observed during the 2005 outburst of the source (Fender et al. 2005; Soleri et al. 2010). Several authors have studied the radio/X-ray correlation for this source; it does not fall on the traditional correlation track, but rather shows a steeper power-law index of $\sim 1-1.4$ (Soleri et al. 2010; Rushton et al. 2016; Kolehmainen et al. 2016).
In Debnath et al. (2017; hereafter Paper-I), a detailed study of the spectral and temporal properties of this object during its 2005 outburst (from 2005 July 2 to 2005 October 19) was made. They used the TCAF model {\it fits} file to fit the spectra and obtained the accretion flow properties of the source during the outburst. Based on the variations of the TCAF model fitted (spectral) physical flow parameters and the observed QPO frequencies, the entire 2005 outburst was classified into two harder spectral states, HS \& HIMS, observed in the sequence:
HS (Ris.) $\rightarrow$ HIMS (Ris.) $\rightarrow$ HIMS (Dec.) $\rightarrow$ HS (Dec.).
They also estimated the mass of the BHC to be in the range of $4.75-5.90$~M$_{\odot}$ or $5.35^{+0.55}_{-0.60}$~M$_{\odot}$.
According to the TCAF solution, the model normalization ($N$) is a function of intrinsic source parameters, such as the distance, mass, and (constant) disk inclination angle of the binary system. So $N$ is a constant for a particular BHC across its spectral states, unless there is a precession in the disk that changes the projected emission surface area, or there are significant outflow or jet activities, which so far are not included in the current version (v0.3) of the TCAF model {\it fits} file. As reported in Paper-I, there is a significant deviation from the constant $N$ in a few observations during the outburst. This allows us to estimate the jet flux by separating it from the total X-ray luminosity in our spectral study with the current version of the TCAF solution, by keeping the model normalization frozen at the lowest observed value. The spectral properties of the residual X-rays are also found.
The {\it paper} is organized in the following way. In \S 2, we briefly discuss the relation of the jet with the spectral states. In \S 3, we briefly present a method to estimate the jet flux from the total X-ray flux. In \S 4, we present results on the estimated jet flux and its evolution during the entire 2005 outburst of Swift~J1753.5-0127; we compare the estimated jet flux with the radio fluxes observed during the outburst and study the correlation between the X-ray and radio jet flux components. Finally, in \S 5, a brief discussion and concluding remarks are presented.
\section{Disk-Jet Connection with Spectral States}
In general, there are two types of jets: continuous outflows (Compact jets) and discrete ejections (blobby jets:
Chakrabarti \& Nandi 2000; Chakrabarti et al. 2002). In TCAF, CENBOL acts as a base of the jet (Chakrabarti 1999a).
Ejection of the matter depends on the shock location ($X_s$), compression ratio ($R$) and inflow rate.
A schematic diagram of inflow and outflow is shown in the second panel of Fig. 1. Jet move subsonically up to the
sonic surface ($\sim 2.5 X_s$) and then moves away supersonically, thereby reducing its temperature during expansion
and emitting in UV, IR to radio (Chakrabarti 1999ab; Chakrabarti \& Manickam 2000, hereafter CM00).
The subsonic region will upscatter seed photons from the Keplerian disk and downscatter CENBOL photons contributing to softer X-rays,
which we define here as the jet X-ray ($F_{ouf}$) flux in this paper. This does not include the X-rays emitted from
interaction of the jet with ambient medium. If the CENBOL is not hot, i.e., the object is not in the hard
or hard intermediate states, compact jets are not possible. However, as the shock moves in due to
larger inflow rates and consequent post-shock cooling, as in soft-intermediate states,
the outflow rate increases and the subsonic region has relatively high optical depth (Chakrabarti 1999b).
In some outburst sources, Keplerian matter may rise much faster than the sub-Keplerian flow as in the present case (Paper-I).
Thus, the shock disappears even in HIMS and blobby jets may arise in HIMS as well.
In the presence of high Keplerian accretion rates, the CENBOL cools down due to the large supply of soft photons from the Keplerian disk. Hence it is quenched, and we do not see any jet in this state. The results from these considerations are given in Fig. 1 (left panel), where the `generic' variation of the ratio $R_{\dot{m}}=\frac{\dot{M}_{out}}{\dot{M}_{in}}$ of the outflow ($\dot{M}_{out}$) and inflow ($\dot{M}_{in}$) rates with the shock compression ratio ($R$) is shown. Clearly, the ratio $R_{\dot{m}}$ is maximum when the compression ratio is intermediate, as in the hard-intermediate and soft-intermediate states. The observed jet in this spectral state is initially dense and compact, but becomes increasingly blobby as the transition to the soft-intermediate state is approached. This is due to the rapid cooling of the jet base: the outflowing matter gets separated since even the subsonic flow region suddenly becomes supersonic (Chakrabarti, 1999b; Das \& Chakrabarti, 1999; CM00).
\section{Flux and Spectrum of X-rays from the base of the Jet}
A detailed study of the evolution of the spectral and timing properties of the BHC Swift~J1753.5-0127 during its 2005 outburst using the TCAF solution is presented in Paper-I. Depending upon the variation of the TCAF model fitted physical flow parameters and the nature of the QPOs (if present), the entire outburst (from 2005 July 2 to 2005 October 19) was classified into two harder (HS and HIMS) spectral states. No signatures of the softer states (SIMS and SS) were observed. This could be due to a lack of viscosity, which prevented the Keplerian disk from achieving a significant rate close to the black hole.
While fitting the spectra with the current version (v0.3) of the TCAF solution, the model normalization ($N$) was found to vary in a very narrow range ($1.41-1.81$), except for a few days when the radio flux was higher. This may be because the jet mechanism is not included in the current TCAF model {\it fits} file. This motivated us to introduce a new method to detect an X-ray jet and calculate its contribution to the total X-ray flux.
We use $2.5-25$~keV RXTE/PCA data to calculate the X-ray flux from the base of the outflow. In the presence of a jet, the total X-ray flux ($F_{X}$) has contributions from the radiation emitted from both the disk and the base of the jet. Thus, for days with significant X-ray emission from the outflow, we require higher values of the model normalization to fit the spectra, since the present version of our TCAF model {\it fits} file only accounts for the emission from the disk, with no contribution from the jet. If the jet is absent, a constant or nearly constant TCAF model normalization is capable of fitting the entire outburst (see Molla et al. 2016, 2017; Chatterjee et al. 2016). In Paper-I,
the TCAF normalization was found to be constant at $\sim 1.6$ during the entire 2005 outburst of Swift~J1753.5-0127, except for $5$ observations in the initial period of the HIMS (Dec.), when it assumed higher values ($\ge 2.0$). However, in the HS (Dec.), a minimum normalization of $\sim 1.41$ was required to fit the spectral data of 2005 September 17 (MJD=53630.31). We assume that there was very little X-ray jet, or outflowing matter, on that day, and that the entire X-ray flux was contributed only by the accretion disk and the CENBOL, i.e., by the inflowing matter alone. This is also the theoretical expectation (Chakrabarti, 1999b). When we examined the radio data, we found that the radio flux contributions were also minimal during these days of observation.
To calculate the X-ray flux contribution $F_{inf}$ from the inflow alone, we refitted all the spectra by freezing the model normalization at $1.41$. Then we take the difference between the total flux and the resulting flux to calculate the jet X-ray flux $F_{ouf}$. In other words, the flux of the jet, relative to MJD=53630.31, can be written as,
$$
F_{ouf} = F_{X} - F_{inf}. \eqno{(1)}
$$
Here, the $F_{X}$ and $F_{inf}$ fluxes (in units of $10^{-9}~ergs~cm^{-2}~s^{-1}$) are calculated using the `flux 2.5 25.0' command after obtaining the best-fitted spectrum in XSPEC. $F_{X}$ is basically the TCAF model flux in the $2.5-25$~keV energy range with free normalization, as reported in Paper-I, whereas $F_{inf}$ is the TCAF model flux in the same energy range with the normalization kept constant at N=$1.41$.
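Operationally, once $F_{X}$ and $F_{inf}$ are exported from XSPEC, Eq. (1) is a simple subtraction; a toy Python sketch (the flux values below are made up for illustration) is:
\begin{verbatim}
import numpy as np

# F_X: TCAF flux with free normalization; F_inf: flux with N frozen
# at 1.41 (both 2.5-25 keV, in 1e-9 erg/cm^2/s; values are made up)
F_X = np.array([4.10, 3.85, 2.60])
F_inf = np.array([3.30, 2.70, 2.45])
F_ouf = F_X - F_inf          # Eq. (1): jet X-ray flux
jet_frac = F_ouf / F_X       # fractional jet contribution
\end{verbatim}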
\section{Results}
\subsection{Evolution of Jet X-rays}
X-ray fluxes from jets or outflow ($F_{ouf}$) are calculated using Eq. (1).
The variation of the derived jet X-ray flux ($F_{ouf}$) during the
entire phase of the 2005 outburst of Swift~J1753.5-0127 is shown in Fig. 2(c). To make a comparison, we show the
variation of the $4.8$~GHz VLA radio flux, as reported by Soleri et al. (2010), in Fig. 2(d). The first radio observation was made $\sim 5$~days after the first RXTE/PCA observation, and thus missed the initial two harder spectral states. Note that the radio flux is maximum during the middle of the HIMS, namely in the late stage of the HIMS in the rising phase and the early stage of the HIMS in the declining phase, precisely as anticipated from the outflow rate behavior in Fig. 1. As the object started to return to the hard state, the outflow rate went down (Fig. 2c) and thus the radio
flux also started to go down (Fig. 2d). During the initial $5$~days (MJD=53553.05-53557.24), the X-ray flux was completely dominated by the inflowing component ($F_{inf}$) and reached its peak on 2005 July 7 (MJD=53557.24), which was the day of the HS to HIMS transition (Paper-I). The jet X-ray flux ($F_{ouf}$) started to increase from the transition day and reached its maximum on 2005 July 13 (MJD=53564.91). After that, the jet X-ray flux started to decrease; initially it decreased rapidly for the next $\sim 6$~days and then very slowly, remaining roughly constant until the end of our observations, except for a weak local peak observed on 2005 August 11 (MJD=53593.23).
The TCAF normalization ($N$) also shows a behavior similar to that of the jet X-ray flux $F_{ouf}$ plotted in Fig. 2c. It was constant over the first few observations. It then increased and attained its maximum value on the same day that $F_{ouf}$ peaked (MJD=53564.91). After that, it decreased rapidly and became almost constant from $\sim$ MJD=53570 until the end of our observations. This additional requirement on $N$
arises from emission of X-rays from the base of the jet, particularly in the subsonic region, which is
not included in the present version of the TCAF model {\it fits} file.
The four plots in Fig. 3(a-d) show spectra from four different spectral states (dates marked as online red square boxes in Fig. 2e), fitted with free (black solid curve) or frozen (online red dashed curve) normalization of the TCAF model. The jet spectrum is also shown (online blue dot-dashed curve). It clearly shows that the jet became stronger as the outburst progressed and was strongest in the HIMS (Dec.). The contribution from the jet was then rapidly reduced as the shock receded farther away in the HS (Dec.).
In the strong jet-dominated region (HIMS in the rising and declining phases), $F_{ouf}$ is observed to be of the order of $10^{-9}~ergs~cm^{-2}~s^{-1}$, whereas towards the end of the outburst, when the jet is weak, it decreases by a factor of a hundred.
We also calculated the contribution of the jet to the total X-ray emission. On average, the flux of the X-ray jet is $\sim 12.5\%$ of the total X-ray flux ($F_{X}$). When the jet activity is strong, the contribution rises up to $\sim 32\%$ (see Appendix Table I). The spectrum of the X-ray emission from the jet appears to be
harder than the disk spectrum, which is expected when the base of the jet is optically thin.
Note also that, the spectral slope of the jet component is different with a turnover property
at a lower energy than that of the disk as is expected from an expanded system. Though we
did not plot at lower energy, we expect this region to be downscattered radiation emitted
from the inflow.
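For concreteness, the decomposition can be sketched as follows, assuming (as the procedure described in the concluding section suggests) that Eq. (1) amounts to the simple difference $F_{ouf}=F_X-F_{inf}$ between the free- and frozen-normalization fits; the flux values below are hypothetical placeholders, not our measured data.
\begin{verbatim}
# Minimal sketch of the jet/inflow flux decomposition (Python).
# Assumption: Eq. (1) reduces to F_ouf = F_X - F_inf; all numbers are
# hypothetical placeholders, not the measured fluxes of Appendix Table I.
import numpy as np

F_X   = np.array([3.2e-9, 2.9e-9, 1.1e-9, 4.0e-11])  # total 2.5-25 keV flux
F_inf = np.array([2.2e-9, 2.0e-9, 1.0e-9, 3.9e-11])  # frozen-N (inflow) flux

F_ouf = F_X - F_inf                  # outflow (jet) X-ray flux, Eq. (1)
frac  = 100.0 * F_ouf / F_X          # jet contribution in percent
print("mean jet fraction: %.1f%%, max: %.1f%%" % (frac.mean(), frac.max()))
\end{verbatim}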
\subsection{Correlation between the Radio and X-ray Jets}
The first radio observation of Swift~J1753.5-0127 was made with MERLIN on 2005 July 3 at $1.7$~GHz (Fender et al. 2005).
WSRT and VLA also observed the BHC (Soleri et al. 2010). VLA observed the BHC at $1.4$~GHz, $4.8$~GHz and $8.4$~GHz.
The first VLA observation was made on 2005 July 8 (MJD=53558), with a radio flux of $F_R=2.79$~mJy at $4.8$~GHz.
After that, $F_R$ decreased slightly on MJD=53561, before attaining its peak on 2005 July 15 (MJD=53566).
The X-ray jet attained its peak roughly two days prior to the radio, i.e., on 2005 July 13 (MJD=53564.91).
There is a $\sim 9$~day gap between the 2nd and 3rd radio observations, so it is hard to determine the
exact delay between the X-ray jet and the radio peak fluxes, although the apparent gap
is $\sim 2$~days. Similar to $F_{ouf}$, $F_R$ also decreased after
its peak: $F_R$ decreased rapidly until the HIMS (Dec.) to HS (Dec.) transition day (MJD=53589),
and then decreased slowly, becoming almost constant from $\sim$ MJD=53590.
It is known from the literature that there exists a correlation between the radio and X-ray emission from jets.
In Fig. 4(a-d), we show $F_R$ versus $F_X$ plots.
We use the results of the available quasi-simultaneous observations of the $4.8$~GHz VLA and
the $2.5-25$~keV RXTE/PCA.
In an effort to find a relation, we fit the data with $F_R \sim F_{X}^{b}$, where $b$ is a constant
(a schematic of this fitting procedure is sketched at the end of this subsection).
In Fig. 4a, we show the variation of the jet X-ray flux ($F_{ouf}$) with the radio flux ($F_R$) from quasi-simultaneous observations.
We obtained $b \sim 0.59 \pm 0.11$. The relation with the X-ray flux from the inflow ($F_{inf}$), shown in Fig. 4b, required
an index $b\sim 1.28\pm 0.11$. The relation of the soft X-ray ($3-9$ keV)
and radio fluxes (Fig. 4c), which is a standard practice, yields $b\sim1.05 \pm 0.14$.
When we use $F_R$ and the total $F_X$ in the $2.5-25$~keV range, we find $b\sim 1.13 \pm 0.12$ (Fig. 4d).
From these plots, we conclude that the total X-ray flux (the sum of those from the inflow and outflow) is well correlated
with the radio only at lower fluxes, be it in the $3-9$ keV range (Fig. 4c) or in the $2.5-25$ keV range (Fig. 4d).
If we consider the outflow X-ray flux ($F_{ouf}$) instead of $F_X$, the correlation of
$F_{ouf}$ vs. $F_R$ (Fig. 4a) is found to be weak. However, a good correlation is obtained between $F_R$
and the X-ray flux from the inflow ($F_{inf}$) at all flux levels (Fig. 4b). It is possible that the
jet deviates from compactness as the intermediate state is approached.
This behavior is compatible with the observed fact that compact jets are generally
well correlated with the radio flux, while blobby jets are not.
Swift~J1753.5-0127 is less luminous in radio as compared to other BHCs (Soleri et al. 2010).
In fact, even during the strong jet observations, the total X-ray flux is not entirely contributed
by the jets; a large contribution always comes from the accretion disk.
This may be the reason why our result does not follow the standard index $b$ ($0.6-0.7$).
Rushton et al. (2016) found a similar result: a correlation
index of $\sim0.99\pm0.12$ in the soft (0.6-10 keV) and $\sim0.96\pm0.06$ in the hard (15-150 keV)
X-ray bands, using data from the Swift/XRT and Swift/BAT instruments respectively.
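As referred to above, the index $b$ is obtained from a straight-line fit in log-log space. The following is a minimal sketch of such a fit, with placeholder fluxes in place of the actual quasi-simultaneous data:
\begin{verbatim}
# Sketch of the F_R ~ F_X^b fit as a linear regression in log-log space.
# The flux arrays are illustrative placeholders, not the observed data.
import numpy as np

F_X = np.array([4.0e-10, 9.0e-10, 2.0e-9, 3.5e-9])  # X-ray flux (cgs)
F_R = np.array([0.45, 0.80, 1.60, 2.40])            # 4.8 GHz flux (mJy)

# log10 F_R = b * log10 F_X + c; np.polyfit returns [slope, intercept]
b, c = np.polyfit(np.log10(F_X), np.log10(F_R), 1)

# rough 1-sigma uncertainty on b from the fit residuals
x = np.log10(F_X)
resid = np.log10(F_R) - (b * x + c)
db = np.sqrt(resid.var(ddof=2) / ((x - x.mean())**2).sum())
print("b = %.2f +/- %.2f" % (b, db))
\end{verbatim}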
\section{Discussions and Concluding Remarks}
In this paper, we use a novel approach to obtain the spectral evolution of the X-rays from the outflow
component of Swift~J1753.5-0127 during its 2005 outburst, exploiting the fact that the normalization
of a TCAF fit with X-ray contributions from the inflow alone remains constant across the states. We use
$2.5-25$~keV RXTE/PCU2 data of the BHC Swift~J1753.5-0127 during its 2005 outburst.
Much higher normalization values were required to fit the spectra on a few days
belonging to the HIMS (dec.). Assuming the minimum TCAF model normalization of $1.41$, obtained on 2005 September 17 (MJD=53630.31),
to be contributed by the $2.5-25$ keV flux from the accretion flow only, we estimated the outflow contribution
in the rest of the observations. This was done by separating the accretion disk spectrum and flux ($F_{inf}$) from the total
spectrum and flux, refitting all spectra with the normalization frozen at $1.41$. The X-ray
flux ($F_{ouf}$) contribution from the outflow was obtained using Eq. 1. The time dependence of the X-ray flux and spectrum
from the outflow was thus obtained, and the flux variation appears to be similar to the observed radio flux data
(see Fig. 2d).
The variations of $F_{inf}$ and $F_{ouf}$ showed that although initially the disk flux increased rapidly and attained
its maximum on 2005 July 7 (MJD=53557.24), the jet flux stayed roughly constant. Starting from the time when
$F_{inf}$ is maximum, i.e., from the day the spectral state changed from hard to hard-intermediate, the jet flux
also started to increase, attaining its maximum on 2005 July 13 (MJD=53564.91).
In the declining phase, the jet flux decreased, became roughly constant in the later phase of the outburst,
and finally became negligible. If we interpret the radio intensity as directly related to the outflow rate,
then it should follow the behavior of the outflow rate (${\dot m} R_{\dot{m}}$, with the $R_{\dot{m}}$ variation as in Fig. 1)
that was predicted by Chakrabarti (1999a,b) in the presence of shocks.
Here, ${\dot m}$ is the sum of the disk and halo component rates, which increased
from the HS to the HIMS (Mondal et al. 2014, 2016; Debnath et al. 2015a,b; Jana et al. 2016; Molla et al. 2017).
In deriving the properties of the X-rays from the jets, we assumed that the significant variation of the
TCAF model normalization ($N$) is entirely due to the variation of the jet contribution in X-rays.
Since the outflow rate is supposed to increase in the HIMS,
it is likely that the X-ray contribution would also go up. We needed $N=2.61$
(the maximum) on MJD=53564.91 for the fit, when $F_{ouf}$ is observed to be maximum. The correlation between these two is good
as long as the compactness of the jet is maintained. Higher outflow rates may have caused blobbiness (Chakrabarti, 1999b, 2000),
and the variation of the outflow contribution with the radio flux was no longer well correlated at higher flux.
During the radio jet-dominated region, i.e., the
HIMS (dec.), the X-ray jet had a flux of around $10^{-9}~ergs~cm^{-2}~s^{-1}$,
whereas towards the end of the outburst, the flux dropped to $\sim 10^{-11}~ergs~cm^{-2}~s^{-1}$, which is about $100$ times
lower. There are a few examples of X-ray flux measurements of inner jets. For example, Nandi et al. (2005) showed that the
X-ray flux from the jets of the BHC SS~433 is around $10^{-10}~ergs~cm^{-2}~s^{-1}$ in the $3-25$ keV energy band.
For 4U~1755-33, the X-ray flux from the jet was observed to be around $10^{-16}~ergs~cm^{-2}~s^{-1}$ in the quiescent state
(Angelini \& White, 2003).
In the later part of the 2005 outburst of the BHC Swift~J1753.5-0127, the radio flux ($F_{R}$)
was found to be roughly constant at its lower value ($\sim 0.4$~mJy).
Toward the end of our observations, the jets may be moderately stronger in radio but weaker in the X-ray band.
Overall, the jet X-ray contribution is found to be $\sim 12.5$\% of the total X-ray flux. When the jet is strong, i.e.,
in the HIMS, the outflow contribution is about $32$\% of the inflow contribution, surprisingly similar
to the ratio of the flow rates predicted in the HIMS (Chakrabarti, 1999a).
Our result is consistent with what is observed in other similar compact sources.
In the TCAF solution, the jets are considered to emerge out of the CENBOL (Chakrabarti 1999a,b), which is the `hot' puffed-up
region acting as a Compton cloud. The CENBOL acts as the base of the jet. While the CENBOL is the post-shock compressed matter
flowing inward, the matter in the jet is expanding outward and is relatively optically thin. This explains why the spectrum
from the jet is flatter. As matter expands and interacts with entangled magnetic fields, it emits radio waves, generally
far away from the black hole.
Both the X-ray and the radio emission from the outflow depend on the outflow rate. However, the X-ray component is strong only
if the outflow rate is higher, as happens when the object goes to the HIMS. Since the shock is weaker, the
outflow must be radiation driven, rather than thermal-pressure driven. The jets could be blobby when the optical
depth is high, and the correlation between the two fluxes then breaks down. On the
other hand, the X-ray emission from the inflow causes $F_{inf}$ to rise also from the HS to the HIMS. The outflow rate is controlled
by the shock strength, i.e., by the compression ratio $R$ (Fig. 1). Hence, a correlation between $F_{inf}$
and $F_{R}$ is expected to exist. Since $F_{ouf} \ll F_{inf}$, this translates into a correlation between the total $F_X$ and $F_R$.
An empirical relation ($F_{R} \propto F_{X}^b$ with $b \sim 0.6-0.7$) was found by Hannikainen et al. (1998), Corbel et al. (2003)
and Gallo et al. (2003), although some `outliers' were found to have a steeper power-law index ($b \sim 1.4$) (Jonker et al. 2004;
Coriat et al. 2011). Using quasi-simultaneous observations of the VLA at $4.8$~GHz and the $2.5-25$~keV RXTE/PCA TCAF model fitted
total X-ray flux, we find $b\sim 1.13 \pm 0.12$ for $F_{R}$ and $F_X$, i.e., $F_{R} \propto F_{X}^{1.13 \pm 0.12}$.
Using the $3-9$~keV soft X-ray flux instead of the $2.5-25$~keV total X-ray flux ($F_X$), we find a slightly less steep exponent of
$b\sim 1.05\pm 0.14$. Our result is consistent with those of several other authors, who also found a steeper exponent for this
particular BHC, with $b\sim 1.0-1.4$ (Soleri et al. 2010, Rushton et al. 2016, Kolehmainen et al. 2016).
This BHC is less luminous in radio, which may be the reason behind the steeper index (Soleri et al. 2010).
When $F_{inf}$ and $F_R$ are compared, the index is $\sim 1.28 \pm 0.11$; when $F_{ouf}$ and $F_R$ are compared, $b \sim 0.59 \pm 0.11$.
The observed points in the highly jet-dominated region are not well correlated in the latter case ($F_{ouf}$ vs. $F_R$, see Fig. 4a).
This may be due to the possible blobby nature of the jets in the high-flux HIMS (dec.) region of the outburst.
In the future, we would like to estimate the X-ray jet fluxes of a few other transient BHCs, such as MAXI~J1836-194 and XTE~J1118+480,
where deviations from the constancy of the TCAF model normalization have been observed (see Jana et al. 2016; Chatterjee et al. 2016),
using the same method described in this paper, as well as of persistent sources such as GRS~1915+105, GX~339-4 and V~404~Cyg.
\section*{Acknowledgments}
A.J. and D.D. acknowledge the support from ISRO sponsored RESPOND project fund (ISRO/RES/2/388/2014-15).
D.D. also acknowledges support from DST sponsored Fast-track Young Scientist project fund (SR/FTP/PS-188/2012).
\section{Introduction}
\label{sec:intro}
Strongly interacting matter under extreme conditions of temperature and
density has been a matter of interest over the last decades, related to
the understanding of the strong interaction \cite{Rapp:1999ej} as well
as the analysis of compact stars. In particular, strange pseudoscalar
mesons in matter have been throughly investigated in exotic atoms
\cite{Friedman:2007zza},
heavy-ion collisions (HICs)
\cite{Aichelin:1986ss,Ko1,Cass:1999,Fuchs:2005zg,FOPI,KaoS,Forster:2007qk,Hartnack:2011cn,Zinyuk:2014zor}
and neutron stars \cite{Kaplan:1986yq}.
The phenomenology of kaonic atoms \cite{Friedman:2007zza} requires an
attractive potential for $\bar K$ mesons whereas the $\bar K N$
scattering amplitude in vacuum is repulsive at low energies
due to the presence of the $\Lambda(1405)$ resonance below the $\bar K N$ threshold. Indeed, this resonance has been thoroughly analyzed in photon-induced reactions by the CLAS collaboration \cite{Moriya:2013eb}, in proton-proton reactions by the ANKE experiment \cite{Zychor:2007gf}, and more recently by HADES at GSI \cite{Agakishiev:2012xk}.
The onset
of an attractive $\bar K N$ interaction at low densities is a
consequence of an upper shift of the $\Lambda(1405)$ resonance induced
by Pauli blocking on the intermediate nucleon states
\cite{Koch:1994mj,Waas:1996xh,Waas:1996fy,Lutz:1997wt}. Additional
medium effects such as the self-energy of mesons in related coupled
channels and the binding of hyperons in the nuclear environment bring a
smoothened $\Lambda (1405)$ back to its vacuum position
\cite{Ramos:1999ku}, while keeping the attractive character of the
$\bar K N$ interaction in matter.
Unitarized chiral coupled-channel approaches
\cite{Waas:1996xh,Waas:1996fy,Lutz:1997wt,Jido:2003cb,Weise:2008aj}
with a self-consistent evaluation of the kaon self-energy
\cite{Lutz:1997wt,Ramos:1999ku,Tolos:2000fj,Tolos:2006ny,Lutz:2007bh}
have proven to be very successful in describing the $\bar K$ meson
interaction in matter. An attractive potential of about 40-60~MeV at
normal nuclear matter density is obtained when self-consistency is
implemented, rather shallow as compared to relativistic mean-field
calculations \cite{Schaffner:1996kv} or phenomenological analysis of
kaonic atom data with density dependent potentials including
non-linearities \cite{Friedman:2007zza,Cieply:2011fy,Friedman:2012qy}.
Yet, this shallow potential is able to reproduce the data from kaonic
atoms \cite{Hirenzaki:2000da,Baca:2000ic}.
The $\bar K$ meson interaction with nucleons has also been addressed
recently in connection to the possible formation of deeply bound kaonic
states after the prediction of narrow strongly bound states in few-body
systems \cite{Akaishi:2002bg,dote04,Akaishi:2005sn}. This analysis was
strongly criticized \cite{Oset:2005sn} due to the unrealistic treatment
of the $\bar K N$ interaction. Recent improved calculations using
different few-body methods with diverse $\bar KN$ input
\cite{Shevchenko:2006xy,Shevchenko:2007zz,Ikeda:2007nz,Revai:2014twa,Ikeda:2008ub,Ikeda:2010tk,Dote:2008in,Dote:2008hw,Barnea:2012qa,Bayar:2012hn} predict few-nucleon kaonic states with large widths, although the predicted binding energies and widths differ substantially from one model to another. Thus, the experimental quest for such deeply bound kaonic states is an active field of research \cite{FINUDA,OBELIX,DISTO,HADES,LEPS,E15,E27}, which will allow the $\bar K N$ interaction to be further constrained in the near future.
Moreover, the in-medium modification of kaon/antikaon properties has been
explored experimentally close to threshold in heavy-ion collisions
at SIS energies \cite{FOPI,KaoS,Forster:2007qk}. With the help of
microscopic transport models
\cite{Aichelin:1986ss,Ko1,Cass:1999,Aichelin,Fuchs:2005zg,Cassing:2003vz,Hartnack:2011cn}
the creation and transport of kaons/antikaons have been studied, revealing
a complicated multiple-interaction scenario of the strange particles with
hadronic matter, whose consequences show up in the measured
spectra and kaon flow characteristics. The strangeness production in heavy-ion collisions is very
different from that in elementary interactions as the excitation
functions for kaons and antikaons show. The comparison of transport model calculations with
experimental results (such as production cross sections, energy
and polar angular distributions, azimuthal anisotropy coefficients
$v_1, v_2$ etc.) indicate that in matter the kaons are affected by
a shallow repulsive potential whereas the antikaon dynamics are influenced by a much
stronger attractive potential.
For kaons the spectral function is very narrow and therefore it
behaves almost as a good quasi-particle. For antikaons the situation is
much more uncertain. This is due to three reasons: a) They have a
broad spectral function due to strong interactions with the
baryons. b) The simple $t\, \rho$ approximation for the antikaon
optical potential does not work in the $I=0$ channel since this scattering amplitude is dominated by the
$\Lambda(1405)$ resonance and is repulsive in vacuum. c) The measured
excitation function of the antikaon yield close to threshold
energies confirms that the $\pi Y \to {\bar K}N$ reaction is the dominant channel for
antikaon production in heavy-ion collisions
\cite{Aichelin:1986ss} since the hyperons are more abundantly produced together with kaons.
This cross section is expected to be
substantially modified in the hot and dense medium. For all
these reasons it is very important to incorporate a self-consistent treatment of the $\bar K$ self
energy and the $\bar K$ scattering amplitudes in transport
calculations.
The first transport calculations for antikaon observables in
nuclear matter were performed assuming that the finite width
of the antikaon spectral function might be neglected
\cite{Aichelin:1986ss,Cass:1999,Aichelin,Hartnack:2002xc}. These calculations revealed the strangeness exchange reaction as the
dominant production channel and the existence of an attractive
antikaon optical potential. Some years later antikaon production was
studied using off-shell dynamics with in-medium spectral functions
in the Hadron-String-Dynamics (HSD) transport model
\cite{Cassing:2003vz} employing the J\"ulich meson-exchange model
\cite{Tolos:2000fj,Tolos:2002ud} as the effective $\bar KN$
interaction in matter. Multiplicity ratios involving strange mesons
coming from heavy-ion collisions data were analyzed in
\cite{Tolos:2003qj}.
During the last decade, the analysis of experimental data
in conjunction with microscopic transport approaches has led to several conclusions
on strangeness in heavy-ion collisions:
the production mechanisms of strangeness, the different freeze-out
conditions exhibited by $K^+$ and $K^-$ mesons, and the use of $K^+$ as
a probe of the nuclear matter equation of state at high baryon densities.
Still, the analysis of all experimental antikaon observables has so far not allowed
for a consensus on the antikaon cross sections and optical potential
(cf. the recent review \cite{Hartnack:2011cn}).
For example, recent experimental data on the $v_1, v_2$ flow of strange
mesons \cite{Zinyuk:2014zor} show a sensitivity to the
details of the in-medium meson-baryon interaction, leaving room for a
more elaborate description within hadronic models.
A model for the $\bar K N$ interaction at finite density and zero
temperature has been recently developed within a chiral unitarity
approach in coupled channels by incorporating the $s$- and $p$- waves
of the kaon-nucleon interaction in a self-consistent manner
\cite{Tolos:2006ny}. Finite temperature effects have been also
implemented \cite{Tolos:2008di}, although a full self-consistent
solution was reached only for the $s$-wave effective $\bar KN$ interaction, as
the $p$-wave contribution was treated by means of hyperon-hole
insertions. In this work we aim at improving on the chiral effective
scheme in dense matter developed in
Refs.~\cite{Tolos:2006ny,Tolos:2008di} as we incorporate the full
self-consistency in $s$- and $p$-waves at finite density and
temperature. In this way, we are able to generate in-medium
meson-baryon cross sections (amplitudes) at finite temperature as well
as to determine the single-particle properties of hyperons, such as
$\Lambda(1115)$, $\Sigma(1195)$ and $\Sigma^*(1385)$, at finite
momentum, density and temperature. These results will be used to
analyze the antikaon and hyperon production near threshold in HICs in
a subsequent publication \cite{preparation}.
This paper is organized as follows. In Sec.~\ref{sec:model} we present
the improved model for the $S=-1$ meson-baryon amplitudes in hot
nuclear matter. In Sec.~\ref{sec:amplitudes} the $S=-1$ in-medium
amplitudes and the single-particle properties of the $\Lambda$,
$\Sigma$ and $\Sigma^*(1385)$ at finite density, temperature and
momentum are studied, whereas in Sec.~\ref{sec:cross-sec} we show the
results for the in-medium cross sections and transition amplitudes. We
draw our summary, conclusions and outlook in Sec.~\ref{sec:Conclusion}.
\section{Chiral unitarized model for $S=-1$ meson-baryon amplitudes in hot nuclear matter}
\label{sec:model}
In the present work we build upon the recent results of Refs.~\cite{Tolos:2006ny,Tolos:2008di}, where the properties of strange mesons in nuclear matter at finite temperature were studied within a self-consistent coupled-channel approach based on the $SU(3)$ meson-baryon chiral Lagrangian.
In Ref.~\cite{Jido:2002zk} the $p$-wave amplitude in vacuum was added to the $s$-wave contribution coming from the Weinberg-Tomozawa term \cite{Oset:1997it}. The $p$-wave scattering was generated by the pole terms of the octet $\Lambda(1115)$, $\Sigma(1195)$ and the decuplet $\Sigma^*(1385)$ in $s$-channel exchange \cite{Jido:2002zk}. A full self-consistent treatment of the in-medium interaction at zero temperature in $s$- and $p$-waves was performed in a later work \cite{Tolos:2006ny}. In addition, nuclear short range correlations were incorporated in the $p$-wave amplitudes in line with the mechanisms that drive the nucleon-nucleon and nucleon-hyperon interactions in Ref.~\cite{Tolos:2006ny}, thus improving the formalism developed in \cite{Ramos:1999ku}.
The effect of finite temperature was taken into account in Ref.~\cite{Tolos:2008di} by recalculating all the relevant meson-baryon propagators and self-energies within the Imaginary Time (Matsubara) Formalism, thus extending the applicability of the model to the experimental conditions of intermediate energy heavy-ion collisions (FAIR). Still, the $p$-wave self-energy of kaons and antikaons was calculated at the level of single hyperon-hole insertions and not within the present unitarized and self-consistent scheme. Thus, although we were able to obtain the $p$-wave self-energy, which was evaluated in terms of finite-temperature hyperon-hole Lindhard functions including baryonic mean-field potentials, a drawback of this calculation was that the in-medium $p$-wave amplitudes for $\bar K N \to \bar K N$ and related (off-diagonal) coupled channels were not accessible at finite temperature.
With the focus on the implementation of in-medium hadronic scattering amplitudes in microscopic transport simulations, we have improved our previous calculations in \cite{Tolos:2006ny} by adding the unitarization of the $\bar K N$ $p$-wave interaction and keeping the finite temperature formalism of \cite{Tolos:2008di} for the scattering amplitudes and the meson self-energies. This improvement not only generalizes the results of \cite{Tolos:2006ny} to hot and dense matter, but additionally gives access to full off-shell in-medium scattering amplitudes in the $SU(3)$ set of coupled channels.
Moreover, the improved model renders an additional output, namely the in-medium single-particle properties of the hyperons exchanged in the $p$-wave amplitudes, which are consistently generated within the same approach. Previous results in cold nuclear matter were advanced in \cite{Tolos:2006ny} for the mass shift and width of these states at normal matter density, $\rho_0=0.17 {\rm fm^{-3}}$. We generalize and extend those results by providing the density, temperature and momentum dependent single-particle potentials for the $\Lambda(1115)$, $\Sigma(1195)$ and $\Sigma^*(1385)$, which we obtain by analyzing the poles in the scattering amplitudes, cf.~Sec.~\ref{sec:amplitudes}.
The dynamics of strange meson-baryon scattering as can be extracted from our scattering amplitudes is best implemented within transport models in terms of in-medium cross sections or else as off-shell reaction rates when the propagation of unstable particles is taken into account \cite{Hartnack:2011cn}. We explore both scenarios and for the first time we calculate in our model the total cross section of several $\bar K N$ two-body reactions at finite temperature and nuclear density as well as the off-shell transition probabilities for several processes which play a key role in accessing the near sub-threshold region in anti-kaon production dynamics (cf.~Sec.~\ref{sec:cross-sec}).
\subsection{$S=-1$ meson-baryon amplitudes in vacuum}
The extensive details of the formalism for $\bar K N$ scattering and related channels in meson-baryon Chiral Perturbation Theory can be found in \cite{Oset:1997it,Oller:2000fj,Jido:2002zk,Jido:2003cb,Garcia-Recio:2003ks,Hyodo:2002pk,Borasoy:2005ie,Oller:2006jw,Borasoy:2006sr}. Here we provide a brief summary of the leading order $s$- and $p$-wave scattering amplitudes in vacuum, and the unitarization in coupled channels.
The lowest order chiral Lagrangian which couples the octet of light pseudoscalar
mesons to the octet of $1/2^+$ baryons is given by
\begin{eqnarray}
{\cal L}_1^{(B)} &=& \langle \bar{B} i \gamma^{\mu} \nabla_{\mu} B
\rangle - M \langle \bar{B} B\rangle \nonumber \\
&& + \frac{1}{2} D \left\langle \bar{B} \gamma^{\mu} \gamma_5 \left\{
u_{\mu}, B \right\} \right\rangle + \frac{1}{2} F \left\langle \bar{B}
\gamma^{\mu} \gamma_5 \left[u_{\mu}, B\right] \right\rangle \ ,
\label{chiralLag}
\end{eqnarray}
where the symbol $\langle \, \rangle$ denotes the trace of $SU(3)$ flavor
matrices, $M$ is the baryon mass and
$\nabla_{\mu}$ denotes the covariant derivative coupling the baryon fields to the pseudoscalar meson vector current $\Gamma_{\mu}$,
\begin{eqnarray}
\label{eq:defs1}
\nabla_{\mu} B &=& \partial_{\mu} B + [\Gamma_{\mu}, B] \nonumber \ ,\\
\Gamma_{\mu} &=& \frac{1}{2} (u^\dagger \partial_{\mu} u + u\, \partial_{\mu}
u^\dagger) \ .
\end{eqnarray}
The pseudoscalar (Goldstone) bosons are introduced within the non-linear realization of chiral symmetry in exponential parameterization, $U = u^2 = {\rm exp} (i \sqrt{2} \Phi / f)$, and $f$ is the meson decay constant. The two last terms in Eq.~(\ref{chiralLag}) contain the coupling of the baryon fields to the meson axial vector current $u_{\mu}$, with
\begin{equation}
\label{eq:defs2}
u_{\mu} = i u ^\dagger \partial_{\mu} U u^\dagger = i \left( u^\dagger \partial_{\mu} u
- u \partial_{\mu} u^\dagger \right) .
\end{equation}
We note that in the $SU(2)$ sector only the sum $D+F$ is relevant and corresponds to the nucleon axial vector coupling. The $\pi NN$ interaction strength relates to the former via the Goldberger-Treiman relation, $g_{\pi NN}/2M_N = (D+F)/2f$.
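As a quick numerical check, using the values adopted later in this work ($D=0.85$, $F=0.52$, and $f=1.123 f_\pi \simeq 104$~MeV), the Goldberger-Treiman relation yields
\begin{equation}
g_{\pi NN} = \frac{(D+F)\, M_N}{f} \simeq \frac{1.37 \times 939~{\rm MeV}}{104~{\rm MeV}} \simeq 12.4 \ ,
\end{equation}
reasonably close to the empirical $\pi NN$ coupling, $g_{\pi NN} \simeq 13$.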
The $SU(3)$ meson and baryon field matrices are standard in notation and given by
\begin{equation}
\Phi =
\left(
\begin{array}{ccc}
\frac{1}{\sqrt{2}} \pi^0 + \frac{1}{\sqrt{6}} \eta & \pi^+ & K^+ \\
\pi^- & - \frac{1}{\sqrt{2}} \pi^0 + \frac{1}{\sqrt{6}} \eta & K^0 \\
K^- & \bar{K}^0 & - \frac{2}{\sqrt{6}} \eta
\end{array}
\right) \ ,
\end{equation}
\begin{equation}
B =
\left(
\begin{array}{ccc}
\frac{1}{\sqrt{2}} \Sigma^0 + \frac{1}{\sqrt{6}} \Lambda &
\Sigma^+ & p \\
\Sigma^- & - \frac{1}{\sqrt{2}} \Sigma^0 + \frac{1}{\sqrt{6}} \Lambda & n \\
\Xi^- & \Xi^0 & - \frac{2}{\sqrt{6}} \Lambda
\end{array}
\right) \ .
\end{equation}
Let us focus first on the $s$-wave meson-baryon interaction. Keeping terms with up to two meson fields, the covariant derivative term in
Eq.~(\ref{chiralLag}) provides the following interaction Lagrangian,
\begin{equation}
{\cal L}_1^{(B)} \doteq \left\langle \bar{B} i \gamma^{\mu} \frac{1}{4 f^2}
[(\Phi\, \partial_{\mu} \Phi - \partial_{\mu} \Phi \Phi) B
- B (\Phi\, \partial_{\mu} \Phi - \partial_{\mu} \Phi \Phi)]
\right\rangle \ , \label{lowest}
\end{equation}
from which one can derive the meson-baryon (tree-level) transition amplitudes as
\begin{equation}
V_{ij} = - C_{ij} {1 \over 4 f^2} \bar{u}(p^\prime) \gamma^\mu u(p)
(k_\mu + k^\prime_\mu) \ ,\label{fourpoint}
\end{equation}
where $k$, $k^\prime$ ($p$, $p^\prime$) are the initial and final
meson (baryon) momenta, respectively, and the coefficients $C_{ij}$
($i$, $j$ indicate the particular meson-baryon channel) form a
symmetric matrix and can be found explicitly in
\cite{Oset:1997it}.
For low-energy scattering (i.e. neglecting corrections of order $p/M$) the following expression for the $s$-wave scattering amplitude is obtained,
\begin{eqnarray}
V_{i j}^s &=& - C_{i j} \, \frac{1}{4 f^2} \, (2 \, \sqrt{s}-M_{B_i}-M_{B_j})
\left( \frac{M_{B_i}+E_i}{2 \, M_{B_i}} \right)^{1/2} \, \left( \frac{M_{B_j}+E_j}{2 \, M_{B_j}} \right)^{1/2} \nonumber \\
&\simeq& - C_{i j} \, {1 \over 4 f^2} (k^0_i + k^0_j)
\ ,
\label{swa}
\end{eqnarray}
where $\sqrt{s}$ is the center-of-mass (c.m.) energy, $M_{B_{i(j)}}$ and $E_{i(j)}$ are the mass and energy of the baryon in the $i(j)$ channel, respectively, and the second equality holds to a good approximation for practical purposes. Note that in the previous expressions the spin structure is omitted for simplicity of notation and a $\chi^\dagger_s \dots \chi_r$ spinor product has to be understood.
The meson decay constant $f$
is taken as an average value, $f=1.123 f_\pi$ \cite{Oset:2001cn}, as is customary in meson-baryon studies within the strangeness $S=-1$ sector.
The channels included in our study are $K^- p$, $\bar{K}^0n$, $\pi^0
\Lambda$, $\pi^0 \Sigma^0$, $\eta \Lambda$, $\eta \Sigma^0$, $\pi^+
\Sigma^-$, $\pi^- \Sigma^+$, $K^+ \Xi^-$, $K^0 \Xi^0$.
Unitarization along the right-hand cut of the leading order (tree-level) amplitudes in a coupled-channel approach has been thoroughly established as the method to extend the applicability of the effective theory to higher energies and, in particular, to account for the presence of resonant states, such as the $s$-wave $\Lambda(1405)$. Formally, the unitarized solution is obtained by iteration of the leading order amplitude in a Bethe-Salpeter equation in coupled channels (in matrix notation),
\begin{equation}
\label{eq:BS-matrix}
T = V+\overline{VGT} \ ,
\end{equation}
where $V$ is the $s$-wave potential discussed above and the line indicates the phase-space integral over intermediate meson-baryon states in the $VGT$ term. The set of coupled integral equations involved in Eq.~(\ref{eq:BS-matrix}) is notably simplified within the chiral effective theory since both the potential $V$ and the resummed amplitude $T$ can be factorized on-shell, and thus the solution proceeds by algebraic inversion, $T=[1-VG]^{-1} V$.
For $s$-wave amplitudes it has been shown that the off-shell parts in the integral term of the equation lead to structures that can be renormalized by higher-order counterterms and are effectively accounted for by using physical masses and coupling constants \cite{Oset:1997it}. A more general proof of the on-shell factorization in absence of a left-hand cut was given in \cite{Oller:1998zr,Oller:2000fj} based on the $N/D$ method and dispersion relations. The quantity $G$ is a diagonal matrix accounting for the meson-baryon loop function,
\begin{equation}
\label{G_vacuum}
G_l (\sqrt{s}) = {\rm i} \,
\int \frac{d^4q}{(2\, \pi)^4} \,
\frac{M_{l}}{E_l(\vec{P}-\vec{q}\,)} \,
\frac{1}{\sqrt{s} - q_0 - E_l(\vec{P}-\vec{q}\,) + {\rm i} \varepsilon} \,
\frac{1}{q_0^2 - \vec{q}\,^2 - m_l^2 + {\rm i} \varepsilon} \ \,
\end{equation}
with $(P^0,\vec{P})$ being the total four-momentum of the meson-baryon pair and $s=(P^0)^2-\vec{P}\,^2$. Note that we work with a non-relativistic reduction of baryon propagators (leading order in $M_B^{-1}$) in consistency with the approximations done in Eq.~(\ref{swa}), and therefore we neglect contributions from negative-energy poles (we keep, however, full relativistic kinematics for the baryon dispersion relation).
The loop function is divergent and needs to be regularized. This can be done by adopting either a cutoff method or dimensional regularization. Both schemes provide equivalent results as the pertinent regularization parameters (cut-off momentum, $q_{\rm max}$, and subtraction constant, $a_{MB}$) can be related at a given energy scale \cite{Oller:1997ng}. For practical purposes the cutoff method is more convenient and transparent when dealing with particles in the medium. Within this method, and taking advantage of Lorentz invariance to calculate in the c.m. frame, the loop function reads
\begin{eqnarray}
\hspace{-0.5cm}G_{l}(\sqrt{s})&=& i \, \int \frac{d^4 q}{(2
\pi)^4} \, \frac{M_l}{E_l (-\vec{q}\,)} \, \frac{1}{\sqrt{s} - q^0 - E_l
(-\vec{q}\,) + i \epsilon} \, \frac{1}{q^2 - m^2_l + i \epsilon} \nonumber \\
&=& \int_{\mid {\vec q}\, \mid < q_{\rm max}} \, \frac{d^3 q}{(2 \pi)^3} \,
\frac{1}{2 \omega_l (\vec q\,)} \frac{M_l}{E_l (-\vec{q}\,)} \,
\frac{1}{\sqrt{s}- \omega_l (\vec{q}\,) - E_l (-\vec{q}\,) + i \epsilon} \, ,
\label{eq:gprop}
\end{eqnarray}
with $\omega_l$ and $E_l$ being the energy of the meson (baryon) in the intermediate state in the c.m. frame, respectively, and $q_{\rm max}=630$~MeV, which has been fixed in this scheme to reproduce the $\Lambda(1405)$ properties and several threshold branching ratios \cite{Oset:1997it}.
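To make the unitarization procedure concrete, the following is a minimal numerical sketch, not the full ten-channel calculation of this work: it evaluates the cutoff loop function of Eq.~(\ref{eq:gprop}) on a momentum grid and solves $T=[1-VG]^{-1}V$ for a schematic two-channel system. The masses, the $C_{ij}$ entries and the finite imaginary regulator (1~MeV in place of ${\rm i}\varepsilon$) are placeholders.
\begin{verbatim}
# Schematic two-channel Bethe-Salpeter solution T = [1 - V G]^-1 V with
# the cutoff loop function; masses, C_ij and the numerical i*epsilon are
# placeholders, not the full 10-channel input of this work. Units: MeV.
import numpy as np

f = 1.123 * 92.4                      # meson decay constant
m = np.array([495.0, 138.0])          # meson masses (schematic)
M = np.array([939.0, 1193.0])         # baryon masses (schematic)
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # placeholder C_ij couplings
qmax = 630.0                          # cutoff momentum

def G_loop(sqs, ml, Ml, npts=4000):
    q = np.linspace(1e-3, qmax, npts)
    w = np.sqrt(q**2 + ml**2)         # meson energy
    E = np.sqrt(q**2 + Ml**2)         # baryon energy
    integrand = q**2/(2*w) * (Ml/E) / (sqs - w - E + 1j*1.0)
    return np.sum(integrand)*(q[1]-q[0]) / (2*np.pi**2)

def T_matrix(sqs):
    k0 = (sqs**2 + m**2 - M**2)/(2*sqs)          # on-shell meson energies
    V  = -C * np.add.outer(k0, k0) / (4*f**2)    # Weinberg-Tomozawa form
    G  = np.diag([G_loop(sqs, m[i], M[i]) for i in range(2)])
    return np.linalg.solve(np.eye(2) - V @ G, V)

print(T_matrix(1400.0))               # amplitude matrix at sqrt(s)=1.4 GeV
\end{verbatim}
In the actual calculation the same algebraic inversion is performed on a grid of $\sqrt{s}$ with the physical $C_{ij}$ coefficients of \cite{Oset:1997it}.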
The main contribution to the $p$-wave comes from the $\Lambda$ and $\Sigma$ pole terms, which are obtained from the $D$ and $F$ terms of the lowest-order meson-baryon chiral Lagrangian \cite{Jido:2002zk}. The $\Sigma^*(1385)$, belonging to the baryon decuplet, is also accounted for explicitly in our approach. The coupling of the $\Sigma^*$ to the $\bar K N$ system and other channels was elaborated in \cite{Oset:2000eg} according to quark-model $SU(6)$ symmetry.
Due to its spin structure, the $p$-wave terms from the chiral Lagrangian contribute to both the $J=1/2$ and $J=3/2$ $p$-wave meson-baryon amplitudes, with $J$ the total angular momentum. In order to obtain the leading-order amplitudes for $J=1/2,3/2$ we proceed as follows. The general expression for the partial-wave expansion of the scattering amplitude of a spin zero meson and a spin $1/2$ baryon reads
\begin{eqnarray}
f(\vec q\,^{\prime}, \vec q) & = &\sum_{L=0}^{\infty} \left\{ (L+1)
f_{L+} + L f_{L-} \right\} P_{L}(\cos \theta) \nonumber \\
&& -i \vec\sigma
\cdot ( \hat q^{\prime} \times \hat q) \sum_{L=0}^{\infty} \left\{
f_{L+} - f_{L-} \right\} P^{\prime}_{L}(\cos \theta) \ ,
\label{partwaveamp}
\end{eqnarray}
where $\vec{q}$~($\vec{q}\,'$) is the three-momentum of the incoming (outgoing) meson and $\theta=\angle(\vec{q},\vec{q}\,')$.
In the previous expression the separation into spin-non-flip and spin-flip parts is manifest and each partial-wave amplitude $f_{L\pm}$ corresponds to orbital angular momentum $L$ and total angular momentum $J=L\pm1/2$.
In particular, for $L=1$ ($p$-wave interaction) one writes in a more usual notation
\begin{eqnarray}
V^p(\vec q\,^\prime, \vec q\,) = (2L +1) \lbrack f(\sqrt{s})\, \hat
q^{\prime} \cdot \hat q - i g(\sqrt{s})\, (\hat q^{\prime} \times
\hat q) \cdot \vec\sigma \rbrack \label{pwamp} \ ,
\end{eqnarray}
where two amplitudes at tree level, $f_{-}^{\rm tree}$ ($L=1$, $J=1/2$) and
$f_{+}^{\rm tree}$ ($L=1$, $J=3/2$), can be defined as
\begin{eqnarray}
f_{+}^{\rm tree} &=& f+g \label{fg} \\
f_{-}^{\rm tree} &=& f-2g \nonumber \ ,
\end{eqnarray}
with
\begin{eqnarray}
f_{ij}(\sqrt{s}) &=& {1 \over 3} \left\{ - C_{ij} {1 \over
4 f^2}\, a_i\,
a_j \left({1 \over b_i} + {1 \over b_j} \right)
+ { D^{\Lambda}_i D^{\Lambda}_j \left(1+{q_i^0 \over M_i} \right)
\left(1+{q_j^0 \over M_j} \right) \over \sqrt{s} - \tilde M_\Lambda}
\right. \nonumber \\
&& \left. + { D^{\Sigma}_i D^{\Sigma}_j \left(1+{q_i^0 \over M_i}
\right) \left(1+{q_j^0 \over M_j} \right) \over \sqrt{s} - \tilde
M_\Sigma}
+ {2 \over 3} {D^{\Sigma^{*}}_i D^{\Sigma^{*}}_j \over
\sqrt{s} - \tilde M_\Sigma^{*}} \right\} q_{i} q_{j}
\label{f1}\\
g_{ij}(\sqrt{s}) &=& {1 \over 3} \left\{ C_{ij} {1 \over 4
f^2}\, a_i\,
a_j \left({1 \over b_i} + {1 \over b_j} \right)
- { D^{\Lambda}_i D^{\Lambda}_j \left(1+{q_i^0 \over M_i} \right)
\left(1+{q_j^0 \over M_j} \right) \over \sqrt{s} - \tilde M_\Lambda}
\right. \nonumber \\
&& \left. - { D^{\Sigma}_i D^{\Sigma}_j \left(1+{q_i^0 \over M_i}
\right) \left(1+{q_j^0 \over M_j} \right) \over \sqrt{s} - \tilde
M_\Sigma} + {1 \over 3} {D^{\Sigma^{*}}_i D^{\Sigma^{*}}_j \over
\sqrt{s} - \tilde M_\Sigma^{*}} \right\} q_{i} q_{j} \label{g1} \ ,
\end{eqnarray}
where $i,j$ are channel indices and $q_{i(j)}\equiv|\vec{q}_{i(j)}|$ here. The first term in both $f_{ij}$ and $g_{ij}$ comes from the small $p$-wave component in the meson-baryon amplitudes from the lowest order chiral Lagrangian in Eq.~(\ref{lowest}) \cite{Tolos:2006ny}, with
\begin{equation}
a_i = \sqrt{E_i + M_i \over 2 M_i}\ , \hspace{0.7cm} b_i = E_i +
M_i\ , \hspace{0.7cm} E_i = \sqrt{M_i^{\, 2} + \vec q_i\,^{ 2}} \ ,
\end{equation}
given in the c.m. frame. Moreover, the couplings of the hyperons excited in the $p$-wave amplitude to a given meson-baryon pair in channel $i$, $D^Y_i$, read
\begin{eqnarray}
D^\Lambda_i &=& c_i^{D,\Lambda} \sqrt{20 \over 3} {D \over 2 f} -
c_i^{F,\Lambda} \sqrt{12} { F \over 2 f} \nonumber \ , \\
D^\Sigma_i &=& c_i^{D,\Sigma} \sqrt{20 \over 3} {D \over 2 f} -
c_i^{F,\Sigma} \sqrt{12} { F \over 2 f} \ , \\
D^{\Sigma^*}_i &=& c_i^{S,\Sigma^*} {12 \over 5} {D + F\over 2 f}
\nonumber \ .
\end{eqnarray}
The constants $c^D$, $c^F$, $c^S$ are given by the pertinent $SU(3)$ Clebsch-Gordan coefficients and can be found in Table~I of Ref.~\cite{Jido:2002zk}, whereas the leading-order (vector and axial vector) meson-baryon chiral couplings $D$ and $F$ are chosen as $D=0.85$ and $F=0.52$. The masses $\tilde M_\Lambda$,
$\tilde M_\Sigma$, $\tilde M_{\Sigma^*}$ are bare masses of the
hyperons ($\tilde M_\Lambda$$=$1030 MeV, $\tilde M_\Sigma$$=$1120 MeV, $\tilde M_{\Sigma^*}$$=$1371 MeV), which will turn into physical masses upon unitarization.
Unitarization proceeds in a similar way as described for the $s$-wave contribution. The on-shell factorization for $p$-waves in meson-baryon scattering is proven along the same lines as in meson-meson scattering \cite{Cabrera:2000dx}. Using Eq.~(\ref{eq:BS-matrix}), one obtains
\begin{eqnarray}
f_{+} &=& [1-f_{+}^{\rm tree} G ]^{-1} f_{+}^{\rm tree} \ ,
\label{fs} \\
f_{-} &=& [1-f_{-}^{\rm tree} G ]^{-1} f_{-}^{\rm tree} \ , \nonumber
\end{eqnarray}
where the $f^\pm$ amplitudes decouple within the Bethe-Salpeter equation and thus are unitarized independently.
The $\Sigma^*$ pole for $I=1$
is contained in the $f_{+}$ amplitude while the $f_{-}$ amplitude includes the $\Lambda$ and $\Sigma$ poles for
$I=0$ and $I=1$, respectively [cf.~Eqs.~(\ref{fg}-\ref{g1})].
Note that the amplitudes $f_{+}^{\rm tree}$, $f_{-}^{\rm tree}$ in the diagonal meson-baryon channels contain the factor $\vec q\,^{2}$, with $\vec q$ being the on-shell c.m. momentum of the meson in this channel. For transition matrix elements from channel $i$ to $j$ the corresponding factor is $q_{i}q_{j}$, where the energy and momentum of the meson in a certain channel are given by the expressions
\begin{equation}
E_{i} = { s + m_{i}^{2} - M_{i}^{2} \over 2 \sqrt{s}} \ ;
\hspace{0.5cm} q_{i} = \sqrt{E_{i}^{2} - m_{i}^{2}} \ ,
\end{equation}
which also provide the analytical extrapolation below the threshold of the channel, where $q_i$ becomes purely imaginary.
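A short numerical illustration of this continuation (with schematic $\bar K N$ masses) shows how $q_i$ turns purely imaginary below threshold:
\begin{verbatim}
# Channel momentum with its analytic continuation below threshold
# (schematic Kbar N masses in MeV); below sqrt(s) = m + M the complex
# square root returns a purely imaginary q_i.
import numpy as np

def channel_q(sqs, m_i, M_i):
    E_i = (sqs**2 + m_i**2 - M_i**2) / (2.0*sqs)
    return np.sqrt(complex(E_i**2 - m_i**2))

print(channel_q(1500.0, 495.0, 939.0))   # real: above the 1434 MeV threshold
print(channel_q(1400.0, 495.0, 939.0))   # purely imaginary: below threshold
\end{verbatim}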
\subsection{$S=-1$ meson-baryon amplitudes in hot nuclear matter}
We next discuss how the model is modified to account for medium effects in hot and dense nuclear matter.
In order to obtain the effective $s$- and $p$-wave $\bar KN$ amplitudes (and related ones) in hot and
dense matter, the meson-baryon loop functions $G(\sqrt{s})$ have to be calculated at finite temperature and baryonic density, accounting for the in-medium propagators of the particles in the intermediate states.
One of the main sources of density and temperature dependence comes from the Pauli principle. This is
implemented by replacing the free nucleon propagator in the loop function by the
corresponding in-medium one. The other essential source is related to the fact that all mesons and baryons in the intermediate loops interact with the nucleons of the Fermi sea and their properties are modified with respect to those in vacuum.
All these changes are straightforwardly implemented within the Imaginary Time Formalism (ITF), as extensively discussed in Ref.~\cite{Tolos:2008di}. Applying the (finite-temperature) Feynman rules in this approach
the meson-baryon propagator reads \cite{Tolos:2008di}
\begin{eqnarray}
\label{G_ITF0}
{\cal G}_{MB}(W_m,\vec{P};T) &=& - T \int \frac{d^3q}{(2\pi)^3} \,
\sum_n
{\cal D}_B(W_m-\omega_n,\vec{P}-\vec{q};T) \,
{\cal D}_M(\omega_n,\vec{q};T)
\ ,
\end{eqnarray}
where $T$ is the temperature, $\vec{P}$ is the external
total three-momentum, $\vec{q}$ the relative momentum and $W_m$ an external fermionic frequency,
${\rm i} W_m={\rm i} (2m+1)\pi T + \mu_B$, with $\mu_B$ being the baryonic chemical potential. The baryon and meson propagators within the Matsubara sum are given by
\begin{eqnarray}
\label{eq:DmesonDbaryon}
{\cal D}_B(w_n,\vec{p};T) &=& [{\rm i} w_n - E_B(\vec{p},T)]^{-1} \ , \nonumber \\
{\cal D}_M(\omega_n,\vec{q};T) &=& [({\rm i} \omega_n)^2-\vec{q}\,^2 - m_M^2 -
\Pi_M(\omega_n,\vec{q};T)]^{-1} \ ,
\end{eqnarray}
with frequencies ${\rm i} w_n={\rm i} (2n+1)\pi T + \mu_B$ (fermionic) and ${\rm i} \omega_n = {\rm i} 2 \pi n T$ (bosonic). $E_B$ stands for the single-particle baryon energy and $\Pi_M$ denotes the pseudoscalar meson self-energy, which we discuss in more detail below.
The sum over the index $n$ is not straightforward because the meson self-energy depends on $n$ in a non-trivial way. This complication is circumvented by rewriting the meson propagator, $D_M$, in the spectral (Lehmann) representation, i.e.
\begin{eqnarray}
\label{Lehmann}
D_M(\omega_n,\vec{q};T) =
\int_0^{\infty} d\omega \,
\frac{S_M(\omega,\vec{q};T)}{{\rm i}\omega_n - \omega}
-
\int_0^{\infty} d\omega \,
\frac{S_{\bar M}(\omega,\vec{q};T)}{{\rm i}\omega_n + \omega}
\,\,\, ,
\end{eqnarray}
where $S_M$ and $S_{\bar M}$ stand for the spectral functions of the meson and its corresponding anti-particle. The relation between the meson spectral function and the propagator is evident by performing the analytical continuation from the Matsubara frequencies onto the real energy axis [$D_M(\omega,\vec{q};T)={\cal D}_M ({\rm i}\omega_n \to \omega+i\epsilon,\vec{q};T)$],
\begin{equation}
S_M(\omega,{\vec q}; T)= -\frac{1}{\pi} {\rm Im}\, D_M(\omega,{\vec q};T)
= -\frac{1}{\pi}\frac{{\rm Im}\, \Pi_M(\omega,\vec{q};T)}{\mid
\omega^2-\vec{q}\,^2-m_M^2- \Pi_M(\omega,\vec{q};T) \mid^2 } \ .
\label{eq:spec}
\end{equation}
Replacing Eq.~(\ref{Lehmann}) in Eq.~(\ref{G_ITF0}) one has
\begin{eqnarray}
\label{G_ITF}
{\cal G}_{MB}(W_m,\vec{P};T) &=& - T \int \frac{d^3q}{(2\pi)^3} \,
\sum_n \frac{1}{{\rm i} W_m - {\rm i}\omega_n - E_B(\vec{P}-\vec{q},T)} \,
\nonumber \\
&\times&
\int_0^{\infty} d\omega \,
\left[ \frac{S_M(\omega,\vec{q};T)}{{\rm i}\omega_n - \omega}
- \frac{S_{\bar M}(\omega,\vec{q};T)}{{\rm i}\omega_n + \omega} \right]
\,\,\, .
\end{eqnarray}
In this form the analytical structure of the meson-baryon loop is explicit and the Matsubara sums can be solved by using standard complex analysis techniques, leading to
\begin{eqnarray}
\label{G_ITF:Matsu-summed}
{\cal G}_{MB}(W_m,\vec{P};T) &=&
\int \frac{d^3q}{(2\pi)^3} \,
\int_0^{\infty} d\omega \,
\left[ S_M(\omega,\vec{q};T) \,
\frac{1-n_B(\vec{P}-\vec{q},T)+f(\omega,T)}
{{\rm i} W_m - \omega - E_B(\vec{P}-\vec{q},T)} \right.
\nonumber \\
&+&
\left.
S_{\bar M}(\omega,\vec{q};T) \,
\frac{n_B(\vec{P}-\vec{q},T)+f(\omega,T)}
{{\rm i} W_m + \omega - E_B(\vec{P}-\vec{q},T)} \, \right]
\,\,\, .
\end{eqnarray}
The properties of baryons in hot dense matter are implemented in the meson-baryon propagator in a two-fold manner. On the one hand, Pauli blocking is taken into account through the factor $1-n_B(\vec{P}-\vec{q},T)$, where $n_B(\vec{p},T)=[1+\exp((E_B(\vec{p},T)-\mu_B)/T)]^{-1}$ is the baryon Fermi-Dirac distribution. The single-particle baryon energy $E_B$ contains the medium binding effects obtained within a temperature-dependent Walecka-type $\sigma$-$\omega$ model (see Ref.~\cite{KAP-GALE}). These binding effects are thus also present in the energy denominators.
The medium modifications on mesons, such as pions and antikaons, are incorporated in the meson-baryon loop by means of the inclusion of the meson Bose-Einstein distribution at finite temperature, $f(\omega,T) = [\exp (\omega / T) - 1]^{-1}$, as well as the meson and its corresponding anti-particle spectral functions, $S_M(\omega,\vec{q};T)$ and $S_{\bar M}(\omega,\vec{q};T)$, defined above.
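For orientation, Eq.~(\ref{eq:spec}) can be evaluated for a schematic, constant self-energy; the actual $\Pi_M$ of this work is energy, momentum and temperature dependent, so the numbers below are placeholders only.
\begin{verbatim}
# Spectral function S_M = -(1/pi) Im D_M for a schematic, constant
# self-energy Pi; the real part shifts the pole, the (negative)
# imaginary part gives it a finite width. Units: MeV.
import numpy as np

m_M = 495.0
Pi  = -0.08*m_M**2 - 1j*0.04*m_M**2   # placeholder attractive, absorptive Pi

def S_M(w, q):
    Dinv = w**2 - q**2 - m_M**2 - Pi  # inverse in-medium propagator
    return -np.imag(1.0/Dinv)/np.pi

w = np.linspace(300.0, 700.0, 5)
print(S_M(w, q=0.0))                  # strength peaked at the shifted pole
\end{verbatim}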
We consider in this work the dressing of pion and kaon propagators as they participate in the most relevant channels driving the meson-baryon interaction and the dynamical generation of the $\Lambda(1405)$.
For pions, we refer to Ref.~\cite{Tolos:2008di} for a detailed calculation of the pion self-energy at finite temperature within the ITF in the $ph-\Delta h$ model, including relativistic kinematics as well as full analyticity and crossing properties. For antikaons, the self-energy receives contributions of comparable size from both $s$- and $p$-wave interactions with the baryons in the medium. We refer the reader to the end of this section for details about its calculation.
The expression of Eq.~(\ref{G_ITF:Matsu-summed}) can be analytically continued onto the
real energy axis, $G_{MB}(P_0+{\rm i} \varepsilon \, ,\vec{P}; T) = {\cal
G}_{MB}({\rm i} W_m \to P_0 + {\rm i} \varepsilon \, , \vec{P}; T )$, where $P=(P_0,\vec{P})$ is the total
two-particle momentum. Here we provide the detailed expressions for the in-medium loop functions on the real energy axis, where some simplifications are applicable for practical purposes.
For $\bar KN$ states one has
\begin{eqnarray}
\label{eq:gmed}
{G}_{\bar KN}(P_0+{\rm i} \varepsilon,\vec{P};T)
&=&\int \frac{d^3 q}{(2 \pi)^3}
\frac{M_N}{E_N (\vec{P}-\vec{q},T)} \nonumber \\
&\times &\left[ \int_0^\infty d\omega
S_{\bar K}(\omega,{\vec q};T)
\frac{1-n_N(\vec{P}-\vec{q},T)}{P_0 + {\rm i} \varepsilon - \omega
- E_N
(\vec{P}-\vec{q},T) } \right. \nonumber \\
&+& \left. \int_0^\infty d\omega
S_{K}(\omega,{\vec q};T)
\frac{n_N(\vec{P}-\vec{q},T)} {P_0 +{\rm i} \varepsilon + \omega -
E_N(\vec{P}-\vec{q},T)} \right] \ ,
\end{eqnarray}
with ${\vec q}$ being the meson three-momentum\footnote{We note the additional factor $M_B/E_B$ with respect to Eq.~(\ref{G_ITF:Matsu-summed}) in order to keep consistency with the normalization of the baryon propagator in free space.}. The second term in the $\bar KN$ loop function typically provides a small, real contribution for the studied energy range in $P_0$.
Here one can
replace $S_{K}(\omega, \vec q;T )$ by a free-space delta function, which simplifies numerical computations. The latter is a sensible approximation since the $K$ spectral function in the medium still peaks at the quasi-particle energy and the latter does not differ much from the energy in vacuum \cite{Tolos:2008di}. In addition, one finds that the kaon distribution function can be safely neglected at the temperatures of interest (we expect Bose enhancement to be relevant only for pions at $T = 0$ -- $150$~MeV \cite{Tolos:2008di}).
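As an illustration of how Pauli blocking enters Eq.~(\ref{eq:gmed}), the sketch below evaluates the first term at $\vec{P}=0$ in the limit where $S_{\bar K}$ collapses to a free quasi-particle peak; mean-field potentials are neglected and the chemical potential estimate is equally crude.
\begin{verbatim}
# Pauli-blocked Kbar N loop (first term of the in-medium loop) at P=0
# in the narrow, free quasi-particle limit for the antikaon; binding
# potentials and the small K (second) term are neglected. Units: MeV, fm.
import numpy as np

m_K, M_N = 495.0, 939.0
hbarc = 197.327
rho0  = 0.17                                   # fm^-3
pF    = hbarc*(1.5*np.pi**2*rho0)**(1/3)       # Fermi momentum, sym. matter

def n_N(p, T, mu):                             # Fermi-Dirac occupation
    return 1.0/(1.0 + np.exp((np.sqrt(p**2 + M_N**2) - mu)/T))

def G_med(P0, T=50.0, qmax=630.0, npts=4000):
    mu = np.sqrt(pF**2 + M_N**2)               # crude chemical potential
    q  = np.linspace(1e-3, qmax, npts)
    wK = np.sqrt(q**2 + m_K**2)
    EN = np.sqrt(q**2 + M_N**2)
    pauli = 1.0 - n_N(q, T, mu)                # nucleon Pauli blocking
    integ = q**2/(2*wK)*(M_N/EN)*pauli/(P0 - wK - EN + 1j*1.0)
    return np.sum(integ)*(q[1]-q[0])/(2*np.pi**2)

print(G_med(1400.0))
\end{verbatim}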
In the case of $\pi \Lambda$ or $\pi \Sigma$ states one gets
\begin{eqnarray}
\label{eq:gmed_piY}
{G}_{\pi Y}(P_0+{\rm i} \varepsilon,\vec{P}; T)
&= & \int \frac{d^3 q}{(2 \pi)^3} \frac{M_{Y}}{E_{Y}
(\vec{P}-\vec{q},T)} \nonumber \\
& \times &
\int_0^\infty d\omega
S_\pi(\omega,{\vec q},T)
\left[
\frac{1+f(\omega,T)}
{P_0 + {\rm i} \varepsilon - \omega - E_{Y}
(\vec{P}-\vec{q},T) } \right.
\nonumber \\
& + &
\left.
\frac{f(\omega,T)}
{P_0 + {\rm i} \varepsilon + \omega - E_{Y}
(\vec{P}-\vec{q},T) } \right] \ .
\end{eqnarray}
The $\pi Y$ loop function incorporates the $1+f(\omega ,T)$
enhancement factor which accounts for the contribution from thermal pions at
finite temperature.
In this case, we have neglected the fermion distribution for the participating
hyperons, which is a reasonable approximation for the range of temperatures and
baryonic chemical potentials considered here.
Finally, for $\eta \Lambda$, $\eta \Sigma$ and $K \Xi$ intermediate states,
we simply consider
the meson propagator in vacuum and include only the effective baryon
energies modified by the mean-field binding potential for $\Lambda$ and $\Sigma$ hyperons, i.e.
\begin{eqnarray}
G_i(P_0+{\rm i} \varepsilon,\vec{P};T)= \int \frac{d^3 q}{(2 \pi)^3} \,
\frac{1}{2 \omega_i (\vec q\,)} \frac{M_i}{E_i (\vec{P}-\vec{q},T)} \,
\frac{1}{P_0 +
{\rm i} \varepsilon - \omega_i (\vec{q}\,) - E_i (\vec{P}-\vec{q},T) } \, .
\label{eq:gmed-etaY-KXi}
\end{eqnarray}
This approximation is justified as the latter channels are less relevant in the unitarization procedure \cite{Oset:1997it}.
In order to compute the in-medium $s$- and $p$-wave amplitudes of $\bar K N$ at finite temperature, one needs to solve Eq.~(\ref{eq:BS-matrix}) in matter. The on-shell factorization of the amplitudes in the Bethe-Salpeter equation can be maintained in the case of the in-medium calculation for $s$-wave scattering \cite{Tolos:2006ny}. The amplitudes in the $p$-wave, however, require a slightly different treatment since the on-shell factorization is not exactly reproduced in the medium due to remaining tadpole contributions \cite{Tolos:2006ny}. As it was shown in Ref.~\cite{Tolos:2006ny}, the formal algebraic solution of the Bethe-Salpeter equation with on-shell amplitudes can be kept for the $p$-waves with a simple modification of the meson-baryon loop function, modulo some small tadpole corrections. Summarizing the results in Ref.~\cite{Tolos:2006ny}: if we denote by $G_i^L(P^0,\vec{P};T)$ the in-medium meson-baryon propagator for $s$- ($L=0$) and $p$-wave ($L=1$) scattering (and $i$ labels a specific
$MB$ channel), one has:
\begin{eqnarray}
\label{eq:Gsummary}
G_i^{(s)}(P^0,\vec{P};T) &=& G_{i}(P_0+{\rm i} \varepsilon \, ,\vec{P}; T) \ , \nonumber \\
G_i^{(p)}(P^0,\vec{P};T) &=& G_i(s) + \frac{1}{\vec{q}\,^2_{\rm on}} [ \tilde{G}_{i}(P_0 +{\rm i} \varepsilon \,,\vec{P}; T) - \tilde{G}_i(s) ] \ ,
\end{eqnarray}
where the $\tilde{G}$ functions carry an extra $\vec{q}\,^2$ factor in the integrand, corresponding to the off-shell $p$-wave vertex.
As discussed in \cite{Tolos:2006ny}, nuclear short-range correlations have to be taken into account when dealing with $p$-wave amplitudes, since the nucleon-nucleon (hyperon-nucleon) interaction is not driven by one-pion (one-kaon) exchange alone. These correlations arise when the $\pi$ ($\bar K$) in the meson-baryon loops are dressed in the medium and develop $NN^{-1}$ ($YN^{-1}$) excitations. The short-range part of the interaction is mimicked by phenomenological Landau-Migdal contact vertices ($NY$-$NY'$) and is technically implemented by replacing the propagator of the exchanged pion (kaon) in Eq.~(\ref{eq:DmesonDbaryon}) by a correlated interaction which performs the Dyson resummation of the irreducible meson self-energy modified by successive iterations of the contact interaction [cf.~Eqs.~(30-35) in \cite{Tolos:2006ny} for detailed expressions].
Once the in-medium $\bar K N$ amplitudes at finite temperature are obtained, we can compute the $\bar K$ self-energy in either $s$- or $p$-wave by
integrating the effective interaction $T_{\bar K N}$ over the nucleon Fermi distribution at a given
temperature, i.e.
\begin{eqnarray}
\Pi^L_{\bar K}(q_0,{\vec q};T)= 4 \int \frac{d^3p}{(2\pi)^3}\,
n_N(\vec{p},T) \,
\bar{T}^L_{\bar K N}(P_0,\vec{P};T) \ ,
\label{eq:selfd}
\end{eqnarray}
where $P_0=q_0+E_N(\vec{p},T)$ and $\vec{P}=\vec{q}+\vec{p}$ are
the total energy and momentum of the $\bar KN$ pair in the
nuclear medium rest frame, $q$ stands for the
momentum of the $\bar K$ meson also in this frame, and $\bar{T}$ indicates the spin and isospin averaged scattering amplitude for a given partial wave.
We also provide for convenience Eq.~(\ref{eq:selfd}) rewritten in the basis of physical states for antikaons,
\begin{eqnarray}
\Pi^L_{K^-}(q_0,{\vec q};T)= 2 \int \frac{d^3p}{(2\pi)^3}\,
\lbrack
n_p(\vec{p},T) \,
T^L_{K^-p}(P_0,\vec{P};T)
+
n_n(\vec{p},T) \,
T^L_{K^-n}(P_0,\vec{P};T)
\rbrack
\ ,
\label{eq:selfd-v2}
\end{eqnarray}
where the $L=1$ amplitude is defined as in Eq.~(\ref{pwamp}) and reads here $T^{L=1}=3\, \lbrack f_- + 2f_+ \rbrack$, with $f_\pm$ given in Eqs.~(\ref{fg}-\ref{g1},\ref{fs}). A similar expression is obtained for $\bar K^0$ and we recall that $\Pi_{K^-}=\Pi_{\bar K^0}\equiv\Pi_{\bar K}$ in symmetric nuclear matter.
The antikaon self-energy must be determined self-consistently since it is
obtained from the in-medium amplitude, $ T^L_{\bar K N}$, which
requires the evaluation of the $\bar KN$ loop function,
$G^L_{\bar KN}$, and the latter itself is a function of
$\Pi_{\bar K}(q_0, \vec q; T)$ through the antikaon spectral
function, cf.~Eqs.~(\ref{eq:spec}), (\ref{eq:gmed}).
Note that Eq.~(\ref{eq:selfd}) corresponds to a na\"{\i}ve generalization of the zero-temperature result, as discussed in Ref.~\cite{Tolos:2008di}. For completeness we provide a detailed derivation of the finite temperature antikaon self-energy in terms of the $\bar K N$ $T$-matrix in the appendix.
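The structure of this self-consistency cycle can be sketched as a simple fixed-point iteration; everything below (the one-pole amplitude, the single-energy evaluation of the Fermi-sea integral, the numerical scales) is a stand-in for the full coupled-channel machinery.
\begin{verbatim}
# Schematic fixed-point iteration Pi -> T(Pi) -> Pi for the antikaon
# self-energy; the toy amplitude and scales are placeholders for the
# full in-medium coupled-channel T-matrix. Units: MeV.
import numpy as np

m_K = 495.0

def T_KbarN(P0, Pi):
    # toy KbarN amplitude: one pole whose position reacts to the dressing
    return 1.0/(P0 - (1434.0 + 0.2*np.real(Pi)/m_K) + 1j*30.0)

def self_energy(Pi):
    # stand-in for the Fermi-sea integral: one representative energy
    return 4.0e5 * T_KbarN(1400.0, Pi)

Pi = 0.0 + 0.0j
for it in range(100):
    Pi_new = self_energy(Pi)
    if abs(Pi_new - Pi) < 1e-3*m_K**2:         # convergence criterion
        break
    Pi = 0.5*Pi + 0.5*Pi_new                   # damped update
print(it, Pi)
\end{verbatim}
In the full calculation the update acts on the whole $(q_0,\vec{q}\,)$ grid of $\Pi_{\bar K}$ through the spectral function, rather than on a single number.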
\section{Results for $S=-1$ meson-baryon amplitudes and hyperon single-particle properties in matter}
\label{sec:amplitudes}
We discuss in the following our results for the scattering amplitudes in the isospin channels $I=0,1$ and $s$- and $p$- waves at finite nuclear density and temperature. This information is accessible due to the extension of our model to account for unitarized amplitudes in both $s$- and $p$-waves and different isospin and $J^P$ channels. The final goal is to study the excited hyperon resonances and assess how the nuclear environment influences their properties.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth,height=0.5\textwidth]{ImT-KbarN-Lam1405.eps}
\includegraphics[width=0.49\textwidth,height=0.5\textwidth]{ImT-KbarN-Lam1115-v2.eps}
\caption{Imaginary part of the $\bar K N$ scattering amplitude in vacuum and in the medium for specific resonant channels. Left: $I=0$, $L=0$ and $J=1/2$ amplitude ($\Lambda (1405)$ channel). Right: $I=0$, $L=1$ and $J=1/2$ amplitude ($\Lambda (1115)$ channel).}
\label{fig:iso-amp-lambdas}
\end{center}
\end{figure}
In Fig.~\ref{fig:iso-amp-lambdas} we depict the imaginary part of the $\bar K N \to \bar K N$ scattering amplitude in the isoscalar channel with $J^P=1/2^+$, for $L=0$ ($\Lambda(1405)$ channel, left panel) and $L=1$ ($\Lambda(1115)$ channel, right panel). We show two different values of the meson-baryon total momentum (upper and lower panels).\footnote{We note here that our results are available in a full $(P^0,P)$ grid.} We reproduce our previous results for the $\Lambda(1405)$ at nuclear saturation density and zero temperature \cite{Tolos:2006ny,Tolos:2008di}. This resonance strongly dilutes in the nuclear medium mostly due to the pion-related decay channels such as $\Lambda(1405)\to \Lambda NN^{-1},\Sigma NN^{-1}$ and similarly with $\Delta N^{-1}$ components, whereas the peak of the distribution (from here on, the quasi-particle energy) remains slightly above its vacuum position for normal nuclear matter density, $\rho_0$. At $\rho=2\rho_0$ the distribution is substantially broader and appreciably
shifted to higher energies. The effect of the temperature is two-fold: first, it further broadens the resonance as a result of the smearing of the Fermi surface, which increases the available phase space for in-medium decays. Second, the attractive baryonic potentials entering the quasi-particle energies of nucleons and hyperons in the meson-baryon loops become shallower with increasing the temperature, implying that all meson-baryon thresholds are shifted to higher energies with respect to the $T=0$ case. This can be easily appreciated in Fig.~\ref{fig:iso-amp-lambdas} (left), where at $T=100$~MeV the $\Lambda(1405)$ is dynamically generated at a slightly higher $\sqrt{s}$ and the kink corresponding to the opening of the in-medium $\pi\Sigma$ channel, which remains below the range in the plot at $T=0$, is visible at the low-energy tail of the resonance at finite temperature.
The $\Lambda(1115)$ exhibits attraction in the nuclear medium, which in our approach amounts to about -$48$~MeV at normal nuclear matter density, and is essentially dominated by the pion-mediated $\Lambda N \to \Sigma N$ transition, incorporated in our approach through the dressing of pions and the implementation of short-range correlations (the lack of the latter leads to unphysically large attractive shifts, larger by roughly a factor $\sim$2). We note that the apparent width of the resonance at $P=0$ and zero temperature is simply a numerical artifact introduced to solve the matrix inversion problem, whereas at finite total momentum the resonance acquires a physical finite width from intermediate $\Lambda NN^{-1}$ excited states. At finite temperature, however, the broadened Fermi distribution of nucleons allows such excitations to be accommodated even at $P=0$, and the $\Lambda(1115)$ develops a finite decay width, as can be seen in the right panel of Fig.~\ref{fig:iso-amp-lambdas} for the $T=100$~MeV case, whereas the
attraction on the quasi-particle energy is slightly reduced.
The attractive shift at $T=0$ found here for the $\Lambda(1115)$ overestimates previous determinations within the same model in \cite{Tolos:2006ny} and meson exchange models \cite{Reuber:1993ip,Stoks:1999bz,Rijken:1998yy,Vidana:2001rm,Haidenbauer:2005zh}, which estimate an attraction for the $\Lambda$ in nuclear matter of about -$30$~MeV at $\rho=\rho_0$, as required by hypernuclear spectroscopy \cite{hyper}.
Our larger shift is partly due to the input baryonic mean-field potentials for the hyperons ($\Lambda$ and $\Sigma$), which are estimated from those of the nucleon within a $\sigma$-$\omega$ model at finite density and temperature by means of simple quark-model counting rules. The model leads to an attractive binding of approximately -$50$~MeV at $\rho=\rho_0$ for both hyperons. We have used this model because it incorporates the temperature dependence of the baryonic potentials, and in order to compare the present results to our previous calculation \cite{Tolos:2008di}, where $p$-wave unitarization at finite density and temperature was missing.
The hyperon binding potential can be readily improved by modifying the scalar ($\sigma YY$) and vector ($\omega YY$) couplings ($g_{\sigma/\omega YY} = \alpha\, g_{\sigma/\omega NN}$ with $\alpha$=$2/3$ within the strict quark counting scheme) so as to satisfy the phenomenological requirement $U_\Lambda(\rho_0)\simeq$-$30$~MeV.
We find, however, that such modifications barely affect the $\Lambda$ and $\Sigma$ mass shifts obtained from the $p$-wave amplitudes, indicating that
the effect of the input baryonic potentials saturates to some extent within our self-consistent calculation.
On top of this, the impact of these variations on the position and shape of the $\Lambda(1405)$ resonance is marginal. Therefore, although the binding potentials certainly influence the eventual nuclear potential of the hyperons in the $p$-waves, they are not the leading effect in our calculation.
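To make the coupling rescaling explicit, the following Python sketch tunes $\alpha$ so that a schematic hyperon mean field matches $U_\Lambda(\rho_0)\simeq$-$30$~MeV. We stress that the scalar and vector self-energies below are placeholder values of typical $\sigma$-$\omega$ magnitude, not the actual mean fields of our calculation.
\begin{verbatim}
# Illustrative sketch (placeholder numbers, not our actual mean fields):
# scale the nucleon scalar/vector self-energies by a common factor alpha
# and tune alpha so that U_Lambda(rho0) matches hypernuclear data.
SIGMA_S = 350.0   # MeV, nucleon scalar self-energy at rho0 (placeholder)
SIGMA_V = 290.0   # MeV, nucleon vector self-energy at rho0 (placeholder)

def hyperon_potential(alpha):
    """Schematic U_Y = alpha*Sigma_v - alpha*Sigma_s."""
    return alpha * (SIGMA_V - SIGMA_S)

print(hyperon_potential(2.0 / 3.0))   # strict quark counting: -40 MeV
                                      # with these placeholder inputs
alpha_fit = -30.0 / (SIGMA_V - SIGMA_S)
print(alpha_fit)                      # alpha = 0.5 reproduces U ~ -30 MeV
\end{verbatim}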
As discussed before, this test corroborates that the attractive potentials that the $\Lambda$ and $\Sigma$ develop at finite density are mostly due to the pion-mediated coupled channels, when the pion is also dressed in the medium and short-range correlations within vertices related to the $NN$ and $NY$ interaction are simultaneously implemented \cite{Tolos:2006ny}. The strength of these mechanisms depends on a reduced set of parameters, namely the baryonic form factor of the pion (with scale parameter $\Lambda_{\pi}$), accounting for the finite size of the $\pi NN$ and $\pi N\Delta$ vertices, and the Landau-Migdal parameter, $g'$, controlling the size of short-range correlations.
We have checked that varying these parameters within realistic ranges ($\Lambda_{\pi}\simeq 0.8$-$1$~GeV, $g'\simeq 0.6$-$0.8$) one can accommodate the value of $U_\Lambda(\rho_0)\simeq$-$30$~MeV. This can be achieved by using a softer hadronic pion form factor, with $\Lambda_{\pi}\simeq 0.8$~GeV, and $g'\simeq0.6$. For this set of parameters the nuclear potential for the $\Sigma$ is reduced to approximately -$25$~MeV at $\rho_0$ (note that the $\Sigma$ is even more sensitive to the pion properties due to the in-medium open channels $\Sigma N \to \Sigma N, \Lambda N$).
The need for a softer pion form factor in our calculation, as compared to previous studies within similar models in cold nuclear matter, can be justified as follows: within heavy-ion studies, the non-relativistic approximations typically performed to simplify the calculation of $NN^{-1}$, $\Delta N^{-1}$, $YN^{-1}$ excitation functions (Lindhard-Migdal functions) are not suitable, since the meson-baryon pair scans a larger set of states in momentum space. The use of fully relativistic kinematics in the baryon propagators \cite{Tolos:2008di} results in meson self-energies with a slower fall-off at high energy and momentum, leading to stronger effects from the in-medium pion dressing. The use of a slightly softer hadronic form factor for the pion self-energy is enough to compensate for this extra strength from pion-related coupled channels.
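As a rough numerical illustration of this sensitivity, the sketch below evaluates a generic monopole form factor for the two cutoff values discussed in the text; this simple parametrization is only a stand-in and does not reproduce the exact functional form employed in our calculation.
\begin{verbatim}
import numpy as np

def monopole_ff(q, lam):
    """Generic monopole form factor F(q) = lam^2/(lam^2 + q^2) in GeV;
    a stand-in choice, not necessarily the exact form of our model."""
    return lam**2 / (lam**2 + q**2)

q = 0.5  # GeV, a typical loop momentum in the pion self-energy
print(monopole_ff(q, 0.8) / monopole_ff(q, 1.0))
# ~0.9: the softer cutoff gives an extra ~10% vertex suppression here
\end{verbatim}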
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth,height=0.5\textwidth]{ImT-KbarN-Sig1195-v2.eps}
\includegraphics[width=0.49\textwidth,height=0.5\textwidth]{ImT-piLam-Sig1385-v2.eps}
\caption{
Same as in Fig.~\ref{fig:iso-amp-lambdas} for the isovector hyperon channels.
Left: $I=1$, $L=1$ and $J=1/2$ $\bar K N$ amplitude ($\Sigma (1195)$ channel). Right: $I=1$, $L=1$ and $J=3/2$ $\pi\Lambda$ amplitude ($\Sigma^* (1385)$ channel).}
\label{fig:iso-amp-sigmas}
\end{center}
\end{figure}
Our results for the isovector hyperons are shown in Fig.~\ref{fig:iso-amp-sigmas}. The left panel corresponds to the $\bar K N \to \bar K N$ $p$-wave amplitude for $J^P=1/2^+$, where the $\Sigma(1195)$ is excited, whereas the right panel shows the $J^P=3/2^+$ component of the $p$-wave $\pi \Lambda\to\pi\Lambda$ amplitude, dominated by the $\Sigma^*(1385)$. The $\Sigma$ acquires an attractive shift of about -$40$~MeV at normal matter density, about $5$~MeV larger than in \cite{Tolos:2006ny} and again mostly due to the pion-mediated $YN$ interaction.
Its decay width at $P=0$ originates from the $\Sigma N\to\Lambda N$ transition, readily incorporated in the model through the dressing of pions and kaons. At $T=100$~MeV, the $\Sigma$ attraction is reduced by about $1/3$ of the value at zero temperature. The $\Sigma$ also becomes narrower, which seems counterintuitive given the expected enhancement of phase space from a broader nucleon distribution at finite temperature. However, one should keep in mind that the baryons in all intermediate states also become heavier with increasing temperature and that the mass difference between the $\Sigma$ and $\Lambda$ hyperons becomes smaller with temperature. Given the relatively small energies available for $\Sigma\to\Lambda N N^{-1}$ decays, the latter effect dominates and the $\Sigma$ width is reduced. At $\rho=2\rho_0$, the $\Sigma$ profile displays a kink at $\sim 1140$~MeV and peaks below this energy. This is due to the large attraction, which shifts the $\Sigma$ state below the in-medium $\pi \Sigma$ threshold,
and consequently the hyperon in-medium width is reduced. This effect would have been smeared out if we had performed a self-consistent calculation for the hyperon single-particle potential in dense matter.
Our present results regarding the $\Sigma$ self-energy in the medium are comparable to former determinations \cite{Batty:1978sb,pedro} in cold nuclear matter. Other approaches based on phenomenological potentials constrained by $\Sigma$-atom data conclude that the $\Sigma$ experiences repulsion at short distances, while the potential turns attractive at large distances \cite{Batty:1994yx,Mares:1995bm}.
It is worth mentioning the model calculation of \cite{Kaiser:2005tu}, based on the meson-baryon chiral Lagrangian and accounting for long-range dynamics (pion and kaon exchange mechanisms), which finds a net repulsive potential of about $60$~MeV at nuclear matter density.
The theoretical status of the $\Sigma$ potential seems to be far from being settled, whereas the only experimental evidence is that $\Sigma$-atoms require an
attraction at the relatively large distances that are probed in these experiments.
The study of inclusive spectra in $(\pi^-, K^+)$ $\Sigma$-production reactions provides complementary information. In \cite{Noumi:2001tx,Saha:2004ha,Kohno:2004pb} these spectra have been analyzed within the
distorted wave approximation for pions and kaons (see also \cite{Morimatsu:1994sx}, where the equivalent Green's function method is used) with the conclusion of a repulsive
$\Sigma$-nucleus potential at central densities.
These results, however, should be interpreted with care, since the method employed may not be appropriate for inclusive reactions (where one sums over all possible nuclear final states): the distorted wave approximation removes $K$ and $\pi$ quasi-elastic and absorption events from the flux, whereas the resulting final state still contains the particles of interest. Presumably, this method forces a repulsive $\Sigma$-nucleus potential in order to
prevent the $\Sigma$ hyperon from being too close to the nucleus (and scan larger densities), as the distorted pion and kaon waves would then remove too many events from the flux.
Summarizing, in our understanding the experimental situation concerning the $\Sigma$ potential is also unresolved and, again, the only robust information that can be presently extracted is that the $\Sigma$ potential is attractive at the small densities probed in $\Sigma$-atom experiments.
The $\Sigma^*(1385)$ has a finite decay width in vacuum from $\pi \Lambda$ and $\pi \Sigma$ decays, which is correctly accounted for in our model. At finite nuclear density, the opening of additional decay channels related to the pion and kaon dressing (thus $\Sigma^* \to \Lambda N N^{-1}, \Sigma N N^{-1}$) and the $p$-wave character of the interaction considerably enhance the $\Sigma^*$ width, which evolves from $35$~MeV in vacuum to close to $100$~MeV at $\rho=\rho_0$. This value has to be compared to the $80$~MeV obtained in \cite{Tolos:2006ny,Kaskulov:2005uw}; the larger value in our case is related to the more attractive potentials acting on the $\Lambda$ and $\Sigma$ hyperons as well as to the increase of the $\Sigma^*$ mass with density. Indeed, the attractive $\Sigma^*$ mass shift of $10$~MeV at $\rho=\rho_0$ turns into repulsion at larger densities. At two times normal matter density the resonance is so broad that it becomes meaningless to define a quasi-particle energy or a mass shift. The
effect of the temperature in this case is moderate due to the important phase space already available at zero temperature. Due to the baryons picking up larger quasi-particle energies and the slight reduction of the $\Sigma^*$ mass, the $\Sigma^*$ width is actually mildly reduced at $\rho=\rho_0$ and $T=100$~MeV as compared to the zero temperature case.
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{ReU-hyp.eps}\\
\hspace{-0.9cm}
\includegraphics[height=6cm]{ImU-hyp.eps}
\caption{Momentum dependence of the nuclear optical potential for the $\Lambda(1115)$, $\Sigma(1195)$ and $\Sigma^*(1385)$ hyperons at finite nuclear density and temperature. Upper panel: real part. Lower panel: imaginary part.}
\label{fig:UY}
\end{center}
\end{figure}
It is pertinent to make a comparison of our results with similar approaches, particularly that in Ref.~\cite{Lutz:2007bh} by Lutz~{\it et al.}, where a self-consistent and covariant many-body approach based on the chiral $SU(3)$ Lagrangian is employed to study antikaon dynamics in dense nuclear matter at zero temperature.
The most relevant differences can be summarized as follows: a) the angular integration in the meson-baryon loop function [cf.~Eq.~(\ref{G_ITF:Matsu-summed})] is approximated in our calculation by an average over the Fermi distribution, whereas in \cite{Lutz:2007bh} it is also evaluated explicitly. b) We incorporate density- and temperature-dependent scalar and vector mean-field potentials for the nucleon and the ground-state hyperons, whereas in \cite{Lutz:2007bh} this is only implemented for the nucleon. c) The interaction of the hyperons with the $\bar K N$ system in the $p$-wave amplitudes is modified by short-range correlations in our approach, in accordance with the phenomenology of nucleon-nucleon and hyperon-nucleon interactions, which requires a treatment of short distances beyond the one-pion and one-kaon exchange mechanisms.
The effect of the angular average has been analyzed in \cite{Lutz:2007bh} and the authors conclude that the impact of this approximation is marginal for the antikaon spectral function, as previously stated in \cite{Ramos:1999ku}, whereas it becomes more important for the contribution of $d$-wave interactions, driven by the excitation of the $\Lambda(1520)$ (not accounted for in our model).
Still, some differences are found when comparing the $p$-wave scattering amplitudes and the in-medium excitation energies of the $\Lambda$, $\Sigma$ and $\Sigma^*$, which, disregarding the effect of the angular average, can only be ascribed to the different strength of the $p$-wave interaction in the two approaches (short-range phenomena are not implemented in the calculation of Lutz~{\it et al.}).
Overall, the size of the attraction experienced by the $\Lambda$ and $\Sigma$ in \cite{Lutz:2007bh} is larger than in our case (by roughly a factor of $1.5$-$2$), a feature that we can also reproduce if short-range interactions are switched off. The discrepancy in the mass shift of the $\Sigma^*(1385)$ is even more dramatic, of the order of a factor $\sim$4.
Incidentally, a narrow soft mode associated with a highly collective $\Lambda N^{-1}$ excitation is observed by the authors of Ref.~\cite{Lutz:2007bh} in the low-energy tail of the $\bar K$ spectral function. Such a peaky structure is not present in our previous results \cite{Tolos:2006ny}, and one can thus infer that short-range correlations in the $\bar K N$ interaction tame the strength of this low-energy mode.
The emergence of a low-energy tail in the $\bar K$ spectral function due to many-body correlations is an important phenomenon with a direct connection to the possibility of kaon condensation in dense matter (e.g.~in compact stars). Populated by $YN^{-1}$ excitations in the $p$-wave, and enhanced at finite temperature as discussed in \cite{Tolos:2008di}, such soft modes in the $\bar K$ spectral function are likely to increase the reactivity of the $\phi$ meson at FAIR energies by ``stimulated'' $\phi\to\bar K K$ decay and diffusion processes (e.g.~$\phi \bar K \to \bar K$), since Bose enhancement is more effective on the light modes of the system \cite{Cabrera:2013iya}.
Apart from the temperature and density dependence of the scattering amplitudes and the corresponding behavior of the hyperons in matter, an additional output of our model is the momentum dependence of nuclear optical potentials. This is important in order to have a comprehensive description of medium effects on all the hadrons involved in strangeness production near threshold. Moreover, hadronic medium effects can only be implemented by means of the quasi-particle prescription in a certain class of transport models such as the Isospin Quantum Molecular Dynamics approach (see \cite{Hartnack:2011cn} and references therein), where the physical states are implemented according to on-shell kinematics.
Particularly for the $\Lambda$, $\Sigma$ and $\Sigma^*$ hyperons, which essentially keep a quasi-particle nature in the medium (with some caveats for the $\Sigma^*(1385)$, which is largely broadened in the medium), we can determine the hyperon optical potential by analyzing the momentum evolution of the resonance pole in the scattering amplitudes. On the one hand, the real part of the optical potential can be obtained by subtracting the free hyperon dispersion relation from the in-medium quasi-particle energy, $\epsilon_{Y}(P)$,
\begin{equation}
{\rm Re} \ U_{Y}(P) = \epsilon_{Y}(P) - \sqrt{M_Y^2+\vec{P}^2} \ .
\end{equation}
On the other hand, a suitable combination of the amplitude residue at the resonance pole and the imaginary part of the amplitude evaluated at the quasi-particle energy allows us to calculate the width acquired in the medium (the total width, including the vacuum one, in the case of the $\Sigma^*$):
\begin{equation}
{\rm Im}\ U_{Y}(P) = {\rm Im} \ T_{ij}(\epsilon_{Y}(P)) / m
\end{equation}
with $m$ being the slope of ${\rm Re}\ T_{ij}$ at the resonance pole. We note that this definition is equivalent to the more general definition of the hyperon self-energy, $\Sigma_Y$, as elaborated in Ref.~\cite{Kaskulov:2005uw}. Our $p$-wave amplitudes are driven by the spectral function (or, equivalently, the propagator) of the hyperons, which reads
\begin{equation}
\label{SF}
S_{Y}(P^0,\vec{P}) =
-\frac{1}{\pi}
\frac{M_Y}{E_{Y}(\vec{P})}
\frac{{\rm Im} \,\Sigma_{Y}(P^0,\vec{P})}
{[P^0-E_Y(\vec{P})-{\rm Re}\, \Sigma_{Y}(P^0,\vec{P})]^2
+[{\rm Im} \, \Sigma_{Y}(P^0,\vec{P})]^2} \ .
\end{equation}
The optical potential defined above corresponds to the hyperon self-energy evaluated at the quasi-particle energy for a given momentum.
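In practice this extraction is easily automated. The following Python sketch is purely illustrative: it assumes the complex amplitude has been tabulated on an energy grid at fixed momentum, density and temperature, locates the quasi-particle peak, and evaluates ${\rm Re}\,U_Y$ and ${\rm Im}\,U_Y$ as defined above.
\begin{verbatim}
import numpy as np

def optical_potential(e_grid, T_amp, P, M_Y):
    """Illustrative extraction of Re U_Y and Im U_Y from a p-wave
    amplitude T_ij tabulated on the grid e_grid of total energies P^0
    at fixed momentum P (all quantities in MeV)."""
    ipk = np.argmax(np.abs(T_amp.imag))       # quasi-particle peak
    eqp = e_grid[ipk]
    re_U = eqp - np.sqrt(M_Y**2 + P**2)       # Re U = eps_Y - free energy
    slope = np.gradient(T_amp.real, e_grid)[ipk]
    im_U = T_amp.imag[ipk] / slope            # Im U = Im T / m at the pole
    return re_U, im_U
\end{verbatim}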
Following this method we provide in Fig.~\ref{fig:UY} the momentum-dependent nuclear optical potentials for the $\Lambda(1115)$, $\Sigma(1195)$ and $\Sigma^*(1385)$ hyperons for several densities and temperatures. The evolution of the optical potentials with nuclear density and temperature can be easily traced back to the shape and position of the hyperon peaks in the isospin amplitudes previously discussed.
The real (imaginary) part of the optical potential is displayed in the upper (lower) panel of Fig.~\ref{fig:UY}.
At normal nuclear matter density, the $\Lambda$, $\Sigma$ and $\Sigma^*$ acquire attractive potentials of -$48$, -$40$ and -$10$~MeV, respectively, at rest in the nuclear matter rest frame. At densities beyond $\rho_0$, the attraction on the $\Lambda$ and $\Sigma$ is enhanced, whereas the potential for the $\Sigma^*$ turns from attractive to repulsive between $\rho_0$ and $2\rho_0$. The momentum dependence is rather smooth in all three cases: the potentials increase monotonically (so that the attraction is reduced). For the $\Sigma^*$, which experiences a rather small binding, the potential turns from attractive to repulsive at a momentum of about $500$~MeV$/c$.
The temperature mildly reduces the size of the potential for the $\Lambda$ and $\Sigma$ hyperons, in line with the input baryonic binding potentials implemented in the intermediate hyperon propagators. For the $\Sigma^*$ the optical potential is tied to medium effects on the main decay channels already existing in vacuum, as already discussed,
where typically large cancellations between real parts in the self-energy (from different channel contributions) lead to only moderate shifts of the resonance mass \cite{Kaskulov:2005uw}.
We obtain in this case that the real part of the $\Sigma^*$ potential is slightly larger in magnitude (more attractive) at $\rho=\rho_0$ and $T=100$~MeV as compared to the zero temperature case.
The imaginary part of the optical potential is due to the opening of in-medium decay or absorption channels involving the interactions with nucleons. Both the $\Lambda$ and the $\Sigma$ evolve from being stable states in vacuum to having relatively small decay widths, below 20~MeV for the range of density, temperature and momentum studied here. We recall again that the $\Lambda$ can only decay through the excitation of $NN^{-1}$ components, which require a finite hyperon momentum at zero temperature. Whereas one may expect larger widths for $\Lambda$ and $\Sigma$ at $\rho=2\rho_0$ as compared to $\rho=\rho_0$, the enhancement of the available states with density is compensated by the shift in mass of the hyperons, leading to a small reduction in the width for both $J^P=1/2^+$ baryons at low momentum.
We recall that for large densities, self-consistency for the hyperon single-particle potential might be required.
Moreover, the value of the hyperon width at the quasi-particle energy (as obtained from ${\rm Im}\,U_Y$) may differ from the one developed by the (off-shell) spectral function, particularly when the energy dependence of the self-energy is substantial, as is the case for $p$-wave interactions.
The density evolution of the width for the $\Sigma^*$ essentially reflects the enhancement of the in-medium phase space of its decay channels. At $\rho=\rho_0$ and $T=100$~MeV, however, we find a small decrease of the width with respect to the $T=0$ case, which is traced back to the baryons in the final state becoming heavier with increasing temperature.
\section{In-medium transition probabilities and cross sections}
\label{sec:cross-sec}
The dynamics of the $\bar KN$ system and its related coupled channels in the hot and dense medium is encoded in the $S=-1$ meson-baryon scattering amplitudes. With the focus on the implementation, in transport simulations, of strangeness dynamics in heavy-ion collisions at FAIR conditions, we present our analysis in terms of transition probabilities and cross sections for different binary reactions. These results are complementary to the $\bar K$ spectral functions and nuclear optical potentials provided in Ref.~\cite{Tolos:2008di}
and, altogether, permit a systematic accounting of medium effects in the $S=-1$ sector, not only within the relevant binary reactions but also regarding the production and propagation of light strange hadrons.
In general the calculation of dynamical quantities in transport theory will require an appropriate folding of reaction rates or transition probabilities with the spectral functions of the particles in the initial and final states. Such is the case of the model in \cite{Cassing:2003vz}, which is based on a gradient expansion of the Kadanoff-Baym equation and accounts for the transport of off-shell particles.
The transition probability for a given reaction, ${\cal P}(s)$, is determined as the angular integrated average squared amplitude (including all partial waves) and can be defined fully off-shell as a function of the total energy $P^0$ and momentum $\vec{P}$ of the meson-baryon pair. For the process $i \to j$ (where $i,j$ denote meson-baryon channels) one has
\begin{equation}
\label{eq:P-of-s}
{\cal P}_{ij}(P^0,\vec{P};\rho,T) = \int_{-1}^{+1} du
\lbrace
| f^{(s)}_{ij} + (2f^+_{ij}+f^-_{ij}) \, u |^2
+
| f^+_{ij} - f^-_{ij} |^2 \,(1-u^2)
\rbrace \ ,
\end{equation}
where $f^{(s)} = T^{L=0}$, the amplitudes $f^{\pm}$ are given in Eqs.~(\ref{fg}-\ref{g1}) in terms of suitable combinations of spin-flip and spin-non-flip $p$-wave amplitudes, and $u=\cos\theta$, with $\theta$ the scattering angle in the c.m. frame of the meson-baryon pair.
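Since the integrand is at most quadratic in $u$, the angular integral in Eq.~(\ref{eq:P-of-s}) can be performed in closed form (the term linear in $u$ integrates to zero). A minimal Python sketch for a single channel, with the amplitudes entering as complex numbers evaluated at given $(P^0,\vec{P};\rho,T)$, reads:
\begin{verbatim}
def transition_probability(f_s, f_plus, f_minus):
    """Closed-form angular integral of Eq. (eq:P-of-s) for one channel;
    f_s, f_plus, f_minus are complex amplitude values."""
    A = f_s                      # s-wave amplitude
    B = 2.0 * f_plus + f_minus   # spin-non-flip p-wave combination
    C = f_plus - f_minus         # spin-flip combination
    # int du |A + B u|^2 = 2|A|^2 + (2/3)|B|^2   (odd term vanishes)
    # int du (1 - u^2) |C|^2 = (4/3)|C|^2
    return 2.0*abs(A)**2 + (2.0/3.0)*abs(B)**2 + (4.0/3.0)*abs(C)**2
\end{verbatim}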
We note that, modulo kinematical factors related to the flux of the incoming and outgoing particles, the expression in Eq.~(\ref{eq:P-of-s}) resembles that of the total cross section for a binary process in vacuum. The definition of an in-medium cross section, however, is more involved and requires both the knowledge of the pertinent scattering amplitudes at finite temperature and density, and a suitable generalization of the corresponding flux factors. Taking into account that the hadrons in the initial and final states need not be on the mass shell (as they may develop a broad spectral function in the medium), there is no unique and simple way to implement such a definition in the medium, and doing so requires the choice of an on-shell reduction scheme \cite{Cassing:2003vz,Hartnack:2011cn}.
Still, medium effects for strange reactions are best implemented in terms of in-medium cross sections (and pertinent nuclear optical potentials) in transport models which rely on the narrow test quasiparticle approach \cite{Hartnack:2011cn}, and thus we deem it pertinent to also provide in-medium cross sections from our meson-baryon scattering amplitudes, which we discuss in the second half of this section.
The differential cross section for the process $i\to j$ (where $i,j$ denote meson-baryon channels) reads
\begin{equation}
\label{eq:diff-cross-sec}
\frac{d\sigma_{ij}}{d\Omega} = \frac{1}{16 \pi^2} \frac{M_i M_j}{s} \frac{\tilde{q}_j}{\tilde{q}_i}
\lbrace
| f^{(s)}_{ij} + (2f^+_{ij}+f^-_{ij}) \cos\theta |^2
+
| f^+_{ij} - f^-_{ij} |^2 \sin^2\theta
\rbrace \ ,
\end{equation}
with $\tilde{q}_i$ the c.m. three-momentum of meson-baryon pair $i$. The total cross section follows as
\begin{equation}
\sigma_{\rm tot} = \int d\Omega \frac{d\sigma_{ij}}{d\Omega} = 2\pi\int_{-1}^{1} du \, \frac{d\sigma_{ij}}{d\Omega}(u) \ ,
\end{equation}
with $u\equiv\cos\theta$.
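The numerical evaluation of the cross section is equally straightforward; a minimal Python sketch (natural units, with the amplitudes again given as complex numbers at fixed total energy and momentum) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def dsigma_dOmega(u, f_s, f_plus, f_minus, M_i, M_j, s, q_i, q_j):
    """Eq. (eq:diff-cross-sec) evaluated at u = cos(theta)."""
    amp2 = (abs(f_s + (2.0*f_plus + f_minus)*u)**2
            + abs(f_plus - f_minus)**2 * (1.0 - u**2))
    return M_i*M_j / (16.0*np.pi**2*s) * (q_j/q_i) * amp2

def sigma_tot(f_s, f_plus, f_minus, M_i, M_j, s, q_i, q_j):
    """Total cross section: 2*pi times the integral over u in [-1, 1]."""
    val, _ = quad(lambda u: dsigma_dOmega(u, f_s, f_plus, f_minus,
                                          M_i, M_j, s, q_i, q_j), -1.0, 1.0)
    return 2.0 * np.pi * val
\end{verbatim}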
We discuss next the transition probability for several $K^- p$ reactions. From here on we shall denote these rates as ${\cal P}(s)$, keeping in mind that in our model they actually depend separately on the total energy $P^0$ and momentum $\vec{P}$ of the meson-baryon pair. In the following discussion, we present selected results for ${\cal P}(s)$ at zero total momentum, $P=0$, as a function of $s^{1/2}=P^0$ for several nuclear densities and temperatures.
\begin{figure}[t]
\begin{center}
\includegraphics[height=9cm]{P-Kmp-dens-log.eps}
\includegraphics[height=9cm]{P-Kmp-pi0S0-dens-log.eps}
\caption{In-medium transition probability ${\cal P}(s)$ at zero total three-momentum for the elastic $K^-p$ (left) and the inelastic $K^- p \to \pi^0 \Sigma^0$ (right) reactions. The peaks associated to the $\Lambda(1115)$, $\Sigma(1195)$ and $\Lambda(1405)$ resonances are clearly visible in the vacuum case.}
\label{fig:Ptrans-Kmp}
\end{center}
\end{figure}
In Fig.~\ref{fig:Ptrans-Kmp} we depict the transition probability for the $K^-p$ elastic reaction and the $K^-p \to \pi^0 \Sigma^0$ strangeness exchange reaction.
The $K^-p$ state is an admixture of $I=0,1$ and therefore the two isoscalar $\Lambda$ resonances and the isovector $\Sigma(1195)$ show up according to the results discussed in Sec.~\ref{sec:amplitudes}.
The $\Sigma^*(1385)$ couples weakly to the $\bar K N$ system and cannot be resolved in the $K^-p$ elastic case.
The $K^-p \to \pi^0 \Sigma^0$ reaction selects the $I=0$ component of the $\bar K N$ amplitude and consequently only the isoscalar hyperons are present in the right panel of Fig.~\ref{fig:Ptrans-Kmp}.
The resonance profiles exhibit the temperature and density evolution as discussed for the amplitudes in Sec.~\ref{sec:amplitudes}. The structure of the $\Lambda(1405)$ is practically washed out and only some remnants are visible at normal matter density. The effect of temperature is particularly appreciable as a broadening of the $p$-wave resonances as compared to the vacuum case (recall that in vacuum the $\Lambda$ and $\Sigma$ are stable and their apparent width is a numerical artifact).
\begin{figure}[t]
\begin{center}
\includegraphics[height=9cm]{P-Kmp-pi0Lam-dens-log.eps}
\includegraphics[height=9cm]{P-pi0Lam-pi0Lam-dens-log.eps}
\caption{Same as in Fig.~\ref{fig:Ptrans-Kmp} for the inelastic $K^-p\to \pi^0 \Lambda$ (left) and the $\pi^0\Lambda\to \pi^0\Lambda$ (right) $I=1$ reactions. The peaks associated to the $\Sigma(1195)$ and the $\Sigma^*(1385)$ resonances are clearly visible in the vacuum case.}
\label{fig:Ptrans-I1}
\end{center}
\end{figure}
Next we show in Fig.~\ref{fig:Ptrans-I1} the transition probability for the pure isovector processes $\pi^0\Lambda \to K^- p$ and $\pi^0 \Lambda \to \pi^0 \Lambda$, where in this case only the $\Sigma$ resonances populate the spectrum.
The $\Sigma^*(1385)$ couples strongly to $\pi \Lambda$ and more weakly to $\pi\Sigma$ and $\bar K N$ states. The latter channel is actually closed in vacuum. However, at finite nuclear density the $\bar K N$ threshold is lowered because of the attractive potentials acting on the meson and the baryon. Because of the opening of this channel (and related in-medium processes accounted for in the $\bar K$ self-energy) one observes that the $\Sigma^*$ shape is distorted by the in-medium $\bar K N$ threshold and its signal practically disappears in the $\pi^0 \Lambda \to K^- p$ reaction. In the diagonal process $\pi^0 \Lambda \to \pi^0 \Lambda$, one can appreciate the large in-medium width of the $\Sigma^*$ induced by the one- and two-body mechanisms incorporated through the dressing of pions and kaons, as well as the small changes in the position of the resonance.
In the following we present results for the in-medium cross sections of the $K^- p$ elastic and inelastic binary reactions, which we compare with the vacuum ones.
The simplest way to estimate the in-medium cross section for these processes is to replace in Eq.~(\ref{eq:diff-cross-sec}) the amplitudes in vacuum by their in-medium counterparts.
The results are shown in Fig.~\ref{fig:cross-sec-med-effects}, with a solid line (on-shell prescription), and in Fig.~\ref{fig:cross-sec-med} for the elastic case and several inelastic channels involving strangeness exchange (thick lines) at different temperature and density. As a common feature we observe that the rapid fall of the cross section close to threshold is softened and the strength is distributed over a wide range of energies, as expected from the melting of the $\Lambda (1405)$ resonance in matter at finite temperature. Typically, the in-medium cross section overshoots the vacuum one at finite momentum (this happens, e.g., in elastic $K^-p$ for $K^-$ momenta in the lab $\gtrsim 300$~MeV$/c$).
\begin{figure}[t]
\begin{center}
\includegraphics[height=8cm]{sigma-KN-med-compare.eps}
\caption{Total $K^- p \to K^- p$ and $K^- p \to \bar K^0 n$ cross sections with in-medium amplitudes.
The different curves show the cross section if medium effects on the initial/final state two-particle flux, the meson/baryon energies and the Fermi motion are included (see text).}
\label{fig:cross-sec-med-effects}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[height=6.5cm]{sigma-KN-med.v3.eps}
\includegraphics[height=6.5cm]{sigma-pi0S0-and-pi0L-med.v3.eps}
\includegraphics[height=6.5cm]{sigma-piScharged-med.v3.eps}
\caption{Total $K^- p$ cross sections with in-medium amplitudes including $s$- and $p$-waves for several coupled channels.}
\label{fig:cross-sec-med}
\end{center}
\end{figure}
As discussed at the beginning of this section, some caveats emerge from the definition of the in-medium cross section in the on-shell prescription.
First, in vacuum the incident kaon momentum in the lab frame determines the total energy and momentum of the $K^-p$ pair and thus the center-of-mass energy, $\sqrt{s}$. Then the evaluation of the scattering amplitude is straightforward since it only depends on $s$ (in general on the invariants $s$, $t$ and $u$) in vacuum.
However, in the nuclear medium, Lorentz covariance is broken and the (off-shell) scattering amplitudes depend explicitly on $P^0$ and $\vec{P}$. In the nuclear matter rest frame and neglecting the Fermi motion of the initial nucleon one has $\vec{P}=\vec{q}$ and $P^0=q^0+M_N$, where $q^0$ is the off-shell energy of the incoming antikaon. Since there is no unique relation between $q^0$ and $\vec{q}$, the probability for this reaction to occur should be folded with the spectral function of the antikaon. Otherwise the information about the in-medium properties of the strange mesons, encoded in the meson self-energies, is not taken into account. Note, for instance, that the $\bar K$ is attracted by -$45$~MeV at $\rho=\rho_0$, and therefore the total energy of the $\bar K N$ pair at a given lab momentum is lower than within the free on-shell prescription ($P^0=\sqrt{m_K^2+\vec{q}\,^2}+M_N$), thus giving access to the energy region below the nominal $\bar K N$ threshold in vacuum.
An educated estimate of these effects on the effective cross section for $\bar K N$ scattering in the nuclear medium can be obtained as follows. An incident antikaon with momentum $\vec{q}$ in the nuclear matter rest frame will have an energy distribution which we can approximate by the narrow quasi-particle energy, $q^0\simeq \omega(q)+U_{\bar K}(q)$, with $U_{\bar K}=\Pi_{\bar K}/2m_K$ the $\bar K$ nuclear optical potential and $\omega(q)=\sqrt{m_K^2+\vec{q}\,^2}$ (this quasi-particle approximation to the exact dispersion relation is sufficient for the present purpose).
Then the total energy and momentum of the meson-baryon pair is given by
\begin{eqnarray}
\label{eq:P0Pmed}
P^0 &=& \omega(q) + U_{\bar K}(q) + M_N^* + \Sigma_N^v \ , \nonumber \\
P &=& q \ ,
\end{eqnarray}
where the nucleon energy is also modified by the corresponding scalar ($\Sigma_N^s$) and vector ($\Sigma_N^v$) mean field potentials ($M_N^*$ contains the scalar part, $M_N^*=M_N-\Sigma_N^s$), and where we have assumed for simplicity that the initial nucleon is at rest. It is clear from the equations above that the effective squared invariant energy $s^*=(P^0)^2-P^2$ is lower than its value in vacuum due to the attractive potentials acting on both the $\bar K$ and the nucleon.
Of course, in the nuclear matter rest frame the initial nucleon is not at rest but moves with Fermi motion. In order to estimate the effect of the Fermi motion of the initial nucleon we perform the angular average over the nucleon distribution, which amounts to modifying Eq.~(\ref{eq:P0Pmed}) as follows,
\begin{equation}
\label{eq:Fermiav}
M_N^* \to \sqrt{(M_N^*)^2+\frac{3}{5}p_F^2(\rho)} \quad , \quad
P = q \to P = \sqrt{q^2+\frac{3}{5}p_F^2(\rho)} \, ,
\end{equation}
with $p_F(\rho)$ such that $\rho=2p_F^3/(3\pi^2)$.
Similarly, the c.m. incoming and outgoing momenta $\tilde{q}$ and $\tilde{q}'$ in Eq.~(\ref{eq:diff-cross-sec}) are modified to take into account the meson-baryon binding.
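These kinematic corrections are simple to encode. The Python sketch below computes the effective invariant energy for given input potentials (e.g.\ $U_{\bar K}(\rho_0)\simeq -45$~MeV, as quoted above); the potential values themselves are inputs, not outputs, of this snippet.
\begin{verbatim}
import numpy as np

HBARC, M_N, M_K = 197.33, 939.0, 494.0   # MeV fm; vacuum masses in MeV

def s_star(q, U_K, Sigma_s, Sigma_v, rho):
    """Effective invariant energy s* = (P^0)^2 - P^2 for an antikaon of
    lab momentum q on a bound nucleon, Eqs. (eq:P0Pmed)-(eq:Fermiav).
    Potentials in MeV, rho in fm^-3."""
    pF = (1.5 * np.pi**2 * rho)**(1.0/3.0) * HBARC  # rho = 2 pF^3/(3 pi^2)
    M_eff = np.sqrt((M_N - Sigma_s)**2 + 0.6 * pF**2)  # Fermi-averaged mass
    P0 = np.sqrt(M_K**2 + q**2) + U_K + M_eff + Sigma_v
    P = np.sqrt(q**2 + 0.6 * pF**2)
    return P0**2 - P**2
\end{verbatim}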
The effect of these corrections is analyzed in Fig.~\ref{fig:cross-sec-med-effects} for the $K^- p$ elastic and $K^- p \to \bar K^0 n$ reactions at normal nuclear matter density and zero temperature.
In these particular channels the binding in the initial and final states is the same, and thus the modification of the kinematical factors in the cross section simply reflects the reduction of $s^*$ with respect to the vacuum case, which induces a moderate increase in the cross section (compare the curves labelled ``on-shell prescription'' and ``in-medium flux only'', where in the latter only the kinematical factors $\frac{1}{s} \,\tilde{q}_j/\tilde{q}_i$ are modified). When the amplitudes are evaluated at the total energy and momentum accounting for the nuclear potentials, cf.~Eq.~(\ref{eq:P0Pmed}) and the curves labelled ``in-med flux+energy'', we find that the strength is substantially redistributed to higher lab momenta. This results from the fact that, due to the nuclear binding in the initial state, the energy required to excite the $\Lambda(1405)$ can only be reached at a finite momentum of the incident antikaon. Finally, the Fermi motion of the initial nucleon (cf.~the curves ``in-med flux+energy+Fermi'')
only moderately enhances the cross section at small incident momentum in the elastic channel, whereas for $K^- p\to\bar K^0 n$ the threshold is shifted to lower energies and this channel is open even below the $\bar K^0 n$ vacuum threshold (with a relative momentum of the meson-baryon pair in the c.m. frame of about $100$~MeV$/c$).
We refer to the dash-dotted lines in Fig.~\ref{fig:cross-sec-med} for an estimation of these modifications in the inelastic $K^-p$ reactions.
\section{Summary, conclusions and outlook}
\label{sec:Conclusion}
We have extended our model for $S=-1$ meson-baryon interaction in hot and dense nuclear matter by incorporating the $p$-wave amplitudes within the unitarized self-consistent scheme that was already built in for the $s$-wave. This has allowed us to compute scattering amplitudes for binary kaon-nucleon reactions in different diagonal and off-diagonal coupled channels, for isospin $I=0,1$ and total spin $J=1/2,3/2$.
The isoscalar, $s$-wave $\bar K N$ amplitude is dominated by the excitation of the $\Lambda(1405)$ right below threshold, which acquires its physical width dominantly from the decay into $\pi \Sigma$ states. When the nuclear medium is switched on, the resonance is practically washed out and its strength spread out over energy, as a consequence of the in-medium decay mechanisms incorporated through the self-consistent dressing of mesons (e.g. $\Lambda(1405) \to \pi (YN^{-1}) N, \pi (NN^{-1})\Sigma, \pi (\Delta N^{-1})\Sigma$).
The $p$-wave amplitude reflects the excitation of the $\Lambda$ and $\Sigma$ hyperons (in isospin 0 and 1, respectively) in the spin-$1/2$ channel, and of the $\Sigma^*(1385)$ with spin $3/2$. At finite nuclear densities both the $\Lambda$ and the $\Sigma$ experience an attractive potential of roughly -$50$ and -$40$~MeV at normal matter density and zero temperature, consistently with the input mean field of the $\sigma$-$\omega$ model employed to account for medium effects in the baryon propagators of intermediate meson-baryon states. Both hyperons acquire a finite decay width, reflecting the probability to be absorbed by the nuclear medium or to undergo quasi-elastic scattering processes at finite density and temperature. The $\Sigma^*$ develops a much smaller attractive potential of about -$10$~MeV at $\rho=\rho_0$ and zero temperature, which even turns into a small repulsion with increasing density. Its decay width is notably enhanced, by a factor of three at normal density, mostly due to the dressing of pions, which opens new
absorption channels such as $\Sigma^* \to \pi (NN^{-1}) \Lambda, \pi (NN^{-1}) \Sigma$ and similarly with the pion coupling to $\Delta N^{-1}$ excitations. The effect of the temperature in this case is moderate due to the important phase space already available at zero temperature.
An additional output of the model, which can be accessed from the $p$-wave amplitudes, is the momentum, density and temperature dependent optical potential of the $\Lambda$, $\Sigma$ and $\Sigma^*$. In all cases we have observed a smooth behavior with momentum up to 500 MeV/c.
We have exploited the novel features of our model in order to calculate the in-medium total cross sections for the $K^-p$ elastic and inelastic reactions and compared our results with the vacuum case. These cross sections, dominated by the $s$-wave interaction, are particularly smoothed at low incident momenta, and some strength extends to energies below threshold due to the effectively smaller mass of antikaons in the dense medium. As a consequence of the melting of the $\Lambda(1405)$, the cross sections fall off more slowly and eventually remain larger than the vacuum ones with increasing energy.
Our in-medium scattering amplitudes have also been used to generate off-shell transition rates for binary reactions involving strange mesons, such as $\bar K N \to \bar K N$ and $\pi\Lambda \to \bar K N$, of crucial importance to understand strangeness production mechanisms in heavy-ion collisions.
The implementation of this dynamical information together with the spectral functions of $\bar K$ in a suitable off-shell transport model along the line of Ref.~\cite{Cassing:2003vz} is on-going and will be reported elsewhere \cite{preparation}. Also, results on strange vector mesons in matter have been recently reported ($\bar K^*$ \cite{Tolos:2010fq} and $K^*$ \cite{Ilner:2013ksa}) and should be implemented in the transport models in order to have a unified scheme for strangeness production and dynamics in heavy-ion collisions.
\section*{Acknowledgements}
We acknowledge fruitful discussions with Wolfgang Cassing, Eulogio Oset and \`Angels Ramos. This work has been supported by the Helmholtz International Center for FAIR within the framework of the LOEWE program. We also acknowledge support from Grants No. FPA2010-16963 (Ministerio de Ciencia e Innovaci\'on), No. FP7-PEOPLE-2011-CIG under Contract No. PCIG09-GA-2011-291679 and the European Community-Research Infrastructure Integrating Activity Study of Strongly Interacting Matter (acronym HadronPhysics3, Grant Agreement n. 283286) under the Seventh Framework Programme of EU. D.C. acknowledges support from the BMBF (Germany) under project no.~05P12RFFCQ. L.T. acknowledges support from the Ram\'on y Cajal
Research Programme (Ministerio de Ciencia e Innovaci\'on).
Since 2013, the Ebola virus disease (EVD) outbreak in West Africa has become the largest such outbreak known. The epidemic first emerged in December 2013 in southern Guinea and, as of 11 May 2016, there have been about 28,600 cases of EVD in Guinea, Liberia, and Sierra Leone, and isolated cases in Italy, Mali, Nigeria, Senegal, Spain, the United Kingdom, and the United States. Some 11,300 of these cases were fatal \cite{WHOEbola} and, as high as these numbers are, they may be under-estimates due to the poor quality of current data \cite{Spatiotemporal}. The goal of this paper is to better understand the spread of EVD and test the assumptions of leading EVD models.
Individuals have often been assumed to mix homogeneously with each other in many recent EVD models \cite{EbolaMixingModel1,EbolaMixingModel2,EbolaMixingModel3,EbolaMixingModel4,EbolaMixingModelReview}, but we show that, by applying recent work on the migration of diseases \cite{ContagionGeometry}, homogeneous mixing is an especially poor approximation for EVD. We find that human migration patterns help predict where and when EVD originated and will appear, which would not be possible under a homogeneous mixing assumption. We also find evidence that the spread of EVD is much slower than that of other recent diseases, such as H1N1 and SARS \cite{ContagionGeometry}, which may have helped health workers control the disease.
Furthermore, against our expectations, we find that the initial growth rate of EVD can decrease significantly with population density, possibly because higher population density areas are correlated with other attributes, such as better healthcare. This complements previous work in which exponential and sub-exponential growth was found in many diseases, including the most recent EVD epidemic \cite{Chowell15,Chowell16}: variations in the growth rate of diseases were observed, but mechanistic explanations were not explored. A previous model \cite{ScalingHumanInteractions}, in comparison, found that higher-population cities should contribute to a faster rate of disease spread, although we are not aware of previous research on disease spread and population density. Our work suggests that location-specific initial growth rates better model EVD, although the underlying reason for this heterogeneity should be a topic of future research.
Finally, we create novel metrics for the relative transmissibility of EVD strains, which are robust to sparse sampling. These metrics add to previous work on EVD in Sierra Leone \cite{EbolaCompetingClaudes}, and provide a novel understanding of EVD strains in Guinea. We find that the relative transmissibility of strains, as measured from these metrics, is not uniform; therefore, treating EVD as a single disease may be inappropriate \cite{EbolaMixingModel1,EbolaMixingModel2,EbolaMixingModel3,EbolaMixingModel4,EbolaMixingModelReview}.
These results, when taken together, suggest unexpectedly simple ways to improve EVD modeling. In the Discussion section, we will explain how a meta-population model can potentially aid in our understanding of disease spread and growth. Furthermore, incorporating disease strain dynamics into this model could help us better predict which strains will become dominant in the future, which may improve vaccination strategies.
\section*{Results}
Models of the West Africa Ebola outbreak have often assumed that the disease spreads via homogeneous mixing \cite{EbolaMixingModel1,EbolaMixingModel2,EbolaMixingModel3,EbolaMixingModel4,EbolaMixingModelReview}. We find, however, that this assumption may not accurately model EVD when the disease first arrives in a given area. We will first discuss how the arrival time of EVD within a country or administrative area follows a predictable pattern due to the underlying migration model, in contrast to the mixing hypothesis. Next, we model the cumulative number of individuals infected in administrative divisions at the first or second level in Guinea, Liberia, and Sierra Leone to estimate the initial growth rate of EVD. We find this growth rate varies significantly, and appears to decrease with the population density within the administrative division. Finally, we introduce models of how EVD disease strains spread to rule out uniform strain transmissibility.
\subsection*{How Does Ebola Spread?}
Homogeneous mixing models assume that healthy individuals can get sick regardless of where they are, even when they are hundreds of miles from the origin of the infection. If this were true, then the disease should quickly be seen in all susceptible areas almost simultaneously. Although this approximation may be reasonable at short distances, there has to be a length scale at which it breaks down because, in the years since Ebola first emerged, no more than a handful of countries have become infected (Fig. \ref{fig:DeffCountry}).
Alternatively, one might assume that EVD spreads spatially. There is a significant positive correlation (Spearman $\rho = 0.26$, $p<0.05$, $n = 56$ at the spatial resolution of administrative divisions; $0.81$, $p<0.05$, $n = 8$ at the country resolution) between the arrival time of EVD in administrative divisions at the first or second level and the distance from the outbreak origin, Gu\'eck\'edou, Guinea, to division centroids. Furthermore, this assumption has been applied successfully to model EVD in Liberia \cite{Spatiotemporal}.
We find, however, that migration in West Africa is more complex than either of these assumptions \cite{EbolaMigration}. Intuitively, diseases should spread more quickly between administrative divisions or countries with significant travel between them than between isolated areas. Therefore, it is reasonable to rescale distances such that areas with stronger travel ties are considered ``closer'' than areas without these ties, following work by Brockmann and Helbing \cite{ContagionGeometry}. We find that rescaling distances using migration patterns helps us (1) better understand how quickly EVD spreads, and (2) estimate where the outbreak started.
To our surprise, we find that the correlation between the arrival time and effective distance for EVD (Spearman $\rho = 0.95$, $p<10^{-3}$, $n = 8$) is consistently higher than the correlation between the arrival time and geodesic distance (see Figs.~\ref{fig:DeffCountry}, \ref{fig:DeffDistrict}, and Supplementary Fig. S1 online). A high Pearson correlation ($0.96$, $p<10^{-3}$), and agreement with the normality assumption ($p > 0.05$, using the Kolmogorov-Smirnov normality test), also suggest a linear relationship between the arrival time and effective distance. Taking the slope of the plot gives us an effective velocity of spread, which we find to be $0.015~\text{days}^{-1}$, much slower than for previous diseases ($\approx0.1~\text{days}^{-1}$) \cite{ContagionGeometry}.
An intuitive explanation for our finding is that lower overall migration in West Africa reduces the speed at which EVD spreads compared to other diseases.
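For concreteness, the effective distance of \cite{ContagionGeometry} can be computed from any origin with a shortest-path search over the migration network. The Python sketch below is a minimal illustration; the flux matrix \texttt{F} is a generic stand-in for the census or model-based migration data.
\begin{verbatim}
import numpy as np
import networkx as nx

def effective_distances(F, origin):
    """Effective distance from `origin` to every node, following
    Brockmann & Helbing: edge lengths d_mn = 1 - log(P_mn), where
    P_mn is the fraction of travellers leaving n that arrive at m.
    F[m, n] is the migration flux from node n to node m."""
    P = F / F.sum(axis=0, keepdims=True)
    G = nx.DiGraph()
    for n in range(P.shape[0]):
        for m in range(P.shape[0]):
            if m != n and P[m, n] > 0.0:
                G.add_edge(n, m, weight=1.0 - np.log(P[m, n]))
    return nx.single_source_dijkstra_path_length(G, origin)
\end{verbatim}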
\begin{figure}[hbt]
\centering
\includegraphics[width=0.8 \columnwidth]{figures/TaDgDeffCountry-NewTa-crop.eps}
\caption{\label{fig:DeffCountry}The arrival time, $T_a$, of EVD in a country versus (a) the great-arc length distance, $D_\text{g}$ and (b) the migration-based effective distance, $D_\text{eff}$ from the disease's point of origin (Guinea). Error is smaller than marker size. The migration network used to construct the effective distance comes from census microdata \cite{EbolaMigration}. The linear relationship between arrival time and effective distance suggests that there is a constant effective velocity of disease spread, in agreement with previous work on other diseases \cite{ContagionGeometry}.}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.8 \columnwidth]{figures/TaDgDeffDistrict-NewTa-crop.eps}
\caption{\label{fig:DeffDistrict}The arrival time, $T_a$, of patients with EVD in administrative divisions at the first or second level within Guinea, Mali, Liberia, Sierra Leone, or Nigeria, versus (a) the great-arc length distance, $D_\text{g}$, and (b) the migration-based effective distance, $D_\text{eff}$, from the disease's point of origin (Gu\'eck\'edou, Guinea). Error is smaller than marker size. The migration network used to construct the effective distance comes from a radiation migration model \cite{MigrationModel} (similar results were found using the gravity migration models in \cite{EbolaMigration}, see Supplementary Fig. S1 online).}
\end{figure}
The lower correlations at the first or second administrative level are due in part to the fact that we use migration models to determine the effective distance, and the disease spread for several months before it was detected \cite{EbolaMixingModelReview}. Despite the lower quality data, however, we can still use it to determine the most likely origin of EVD, which we compare to the known origin, Gu\'eck\'edou, Guinea. Quickly finding where a disease originated is important to help understand what caused it (e.g., what was the vector), and to predict where and when it will arrive, which can allow health workers to prepare \cite{ContagionGeometry}. Previously, it was found that the correlation between the arrival time and the effective distances from the disease origin is higher than correlations from areas where the disease did not originate \cite{ContagionGeometry}. We therefore ranked the correlation between the arrival times and effective distances from all the administrative divisions where Ebola was found between 2013 and 2016 (Figs.~\ref{fig:DistrictRank} \& Supplementary Fig. S2 online), and found the true origin has the highest correlation. To determine the statistical significance of the ranking, we bootstrapped residuals of the linear regression between $D_{eff}$ and $T_a$, with Gu\'eck\'edou as the origin, to accurately bootstrap arrival times. This is equivalent to simulating what would happen if we wound back the clock and restarted the infection from Gu\'eck\'edou. The true origin has the highest correlation $45\%\pm 1.6\%$ of the time for the radiation model, and $44\%-70\%$ of the time for gravity models (Supplementary Fig. S2 online), therefore the true origin does not always have the highest correlation, but it often does, and can be a good first guess when determining the origin.
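The ranking and the bootstrap described above can be reproduced schematically as follows; this sketch assumes a matrix of effective distances from each candidate origin and the vector of observed arrival times, and is not the exact code used in our analysis.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr, linregress

def origin_win_fraction(D_eff, t_arr, true_origin, n_boot=1000, seed=0):
    """Fraction of bootstrap replicas in which the true origin yields
    the highest arrival-time correlation. D_eff[k, n]: effective
    distance from candidate origin k to division n; t_arr[n]: observed
    arrival times (days)."""
    rng = np.random.default_rng(seed)
    fit = linregress(D_eff[true_origin], t_arr)
    model = fit.intercept + fit.slope * D_eff[true_origin]
    resid = t_arr - model
    wins = 0
    for _ in range(n_boot):
        t_b = model + rng.choice(resid, size=resid.size, replace=True)
        corrs = [spearmanr(D_eff[k], t_b).correlation
                 for k in range(D_eff.shape[0])]
        wins += int(np.argmax(corrs) == true_origin)
    return wins / n_boot
\end{verbatim}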
\begin{figure}[hbt]
\centering
\includegraphics[width=0.4 \columnwidth]{figures/TaDeffCorrelRankDistrict-NewTa-crop.eps}
\caption{\label{fig:DistrictRank}
The (ranked) correlations between arrival time, $T_a$, and migration-based effective distance, $D_{eff}$, between administrative divisions, calculated via Eq.~5 in \cite{ContagionGeometry}. The migration network used to construct the effective distance comes from a radiation migration model \cite{MigrationModel} (similar results were found using the gravity migration models in \cite{EbolaMigration}, see Supplementary Fig. S2 online). In red is the true origin, which is found to have the highest correlation.
}
\end{figure}
In conclusion, we find strong evidence that a migration network can elucidate how quickly Ebola spreads, when and where it will arrive, and where the infection began even with limited data. Furthermore, we see that alternative hypotheses for how EVD spreads, such as homogeneous mixing and nearest neighbor interactions, provide quantitatively poorer agreement with data.
\subsection*{The Growth of EVD Across Administrative Divisions}
In this section, we show that the cumulative number of Ebola cases within administrative divisions at the first or second level is well-approximated by a logistic function (Fig.~\ref{fig:original-scaled-infection-curve-over-time}a). We use this finding to estimate the initial growth rate of EVD for each infected administrative division, where, to our surprise, we find that the initial growth rate decreases with population density. We emphasize that variations in EVD growth rates have been seen before \cite{Chowell15,Chowell16}, but mechanisms that might drive this behavior were not proposed. The logistic function has been used to model initial stages of EVD \cite{Chowell14}, and is equivalent to the Susceptible-Infectious (SI) model when 100\% of individuals are initially susceptible \cite{Bailey75}. To fit the SI model to our data, however, we would have to make the simplistic assumption that only a fraction $p_n$ of individuals is susceptible, in order to explain why only a small fraction of the population is ever infected. We do not claim this describes the actual dynamics of EVD, although we will explain later why the cumulative number of cases should approximately follow this distribution. For an administrative division $n$, the cumulative number of infected individuals at time $t$ under logistic growth is simply:
\begin{equation}
\label{LogisticRegression}
i_n(t)=\frac{p_n}{1+e^{-q_n(t-{t_0}_n)}}.
\end{equation}
The initial growth rate is $q_n$, while ${t_0}_n$ is the time at which the cumulative infection curve grows fastest. The dynamics are highly heterogeneous (Fig.~\ref{fig:original-scaled-infection-curve-over-time}a); therefore, it would seem unintuitive for a single function to fit all the data. However, after fitting each cumulative distribution to the logistic model and rescaling the variables, $p_n\rightarrow 1$, $\tilde{t} = (t-t_0) q_n$, we find that the distributions collapse (Fig.~\ref{fig:original-scaled-infection-curve-over-time}b) to a good approximation.
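The fit and the rescaling are straightforward to implement; a minimal Python sketch is shown below (the starting values are heuristic guesses, not the fitted parameters reported in this paper).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, p, q, t0):
    """Eq. (LogisticRegression): cumulative cases in one division."""
    return p / (1.0 + np.exp(-q * (t - t0)))

def fit_and_rescale(t, cum_cases):
    """Fit one division's cumulative curve and return the rescaled
    data used for the collapse: p_n -> 1 and t~ = (t - t0) q_n."""
    start = [cum_cases.max(), 0.1, np.median(t)]   # heuristic guess
    (p, q, t0), _ = curve_fit(logistic, t, cum_cases, p0=start,
                              maxfev=10000)
    return (t - t0) * q, cum_cases / p, (p, q, t0)
\end{verbatim}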
\begin{figure}[hbt]
\centering
\includegraphics[width=0.6\columnwidth]{figures/raw-and-scaled-cumulative-infection-new.eps}
\caption{\label{fig:original-scaled-infection-curve-over-time} The cumulative number of infected individuals over time within administrative divisions at the first or second level in Guinea, Sierra Leone and Liberia, from the patient database \cite{WHOEbola}. It is clear from (a) that the rate and size of infections are heterogeneous, but when we fit the data to logistic functions and rescale the variables, $p_n\rightarrow 1$, $\tilde{t} = (t-t_0) q_n$, in (b), we see that the distributions collapse. A logistic function is plotted in red.}
\end{figure}
Because small variations in $q_n$ very quickly become substantial variations in the infection size later on, we want to understand how $q_n$ varies across administrative divisions. When plotting $q_n$ versus population density for divisions with more than 20 infections in total (Fig.~\ref{fig:ExpVsPopDen}), we notice that, although $q_n$ is $\approx0.1~\text{days}^{-1}$, which is similar to a previous EVD outbreak \cite{EbolaGrowthRate}, $q_n$ varies significantly across administrative divisions. This result contrasts with many previous models that assume a global parameter can describe the growth of Ebola \cite{EbolaMixingModel1,EbolaMixingModel2,EbolaMixingModel3,EbolaMixingModel4,EbolaMixingModelReview}. Furthermore, we find that $q_n$ decreases significantly with population density (Spearman $\rho = -0.48$, $p< 10^{-3}$, $n = 44$), plausibly because better healthcare may exist in higher-density areas. This contrasts with a previous model, which predicted a positive scaling relation between the mean growth rate and the population of cities \cite{ScalingHumanInteractions}, although we are not aware of research that explored how disease spread is affected by population density. Therefore, not only is the growth rate of Ebola unexpectedly heterogeneous, but its dependence on population density may help us understand why this is the case.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5 \columnwidth]{figures/GrowthRateVsPopDensity-LogScaled-crop.eps}
\caption{\label{fig:ExpVsPopDen}The initial growth rate, $q_n$, versus population density for the PSD dataset with more than 20 infections (not shown is the outlier Kissidougou, Guinea, with a population density of 34 individuals per square kilometer and a growth rate of $0.41~\text{days}^{-1}$), where error bars are standard deviations. We find that the initial growth rate drops significantly with population density (Spearman $\rho = -0.48$, $p< 10^{-3}$, $n = 44$).}
\end{figure}
\paragraph{What Makes the Data Collapse?}
In the SI model, $100\%$ of individuals are eventually infected; therefore, to find agreement with data, our model has to assume that only a small proportion of individuals in each division is susceptible to the disease. This seems implausible; more likely, all individuals are susceptible and, as they become aware of an infection, they reduce their interactions or otherwise reduce the overall disease transmissibility. To explore this hypothesis, we created a more realistic, although still simplistic, disease model, in which susceptible (s) individuals can become infected (i), but then recover or are removed \cite{SIR}. In our data, all individuals who recovered or died were assumed to be ``removed''. The SIR model, like the SI model, significantly overshoots the cumulative number of cases in the absence of intervention. We counter-balance this effect with an exponentially decreasing disease transmissibility as a result of public health interventions \cite{SDIR1,SDIR2}. We call this model the Susceptible - Decreasingly Infectious - Recovered (SDIR) model.
The equations are:
\begin{equation}
\label{SDIR1}
\frac{d s_n}{dt} = -a(t)\,s_n i_n
\end{equation}
\begin{equation}
\label{SDIR2}
\frac{d i_n}{dt} = a(t)\,s_n i_n - b\,i_n
\end{equation}
\begin{equation}
\label{SDIR3}
r_n = 1-s_n -i_n
\end{equation}
in normalized units, where $s_n$ is the susceptible fraction of the population, $i_n$ is the infected population and $r_n$ is the removed population (either by recovery or death), for each administrative division. $b$ is the recovery rate, while the infectious rate, $a(t)$, is defined as
\begin{equation}
\label{InfectionRate}
a(t) = a \,\{1 + [e^{-k(t-t')}-1]\, \theta(t-t')\} \ .
\end{equation}
In Eq.~\ref{InfectionRate}, $a$ represents the overall infection rate, $k$ represents the rate at which control measures reduce transmission, $t'$ is a delay before which individuals are not aware enough of the disease to try to limit its spread, and $\theta$ is the Heaviside step function. Before $t'$, the disease infects at a rate $a$, while afterwards we see a sharp drop-off in the disease transmissibility. It is clear that other, more realistic assumptions could be included in the model, such as an exposed state (e.g., the SEIR model \cite{SEIR}) or the effects of burial practices, hospitalization, and other factors \cite{EbolaMixingModel1,EbolaMixingModel2,EbolaMixingModel3,EbolaMixingModel4}, but we leave these out of our model for simplicity.
We see rough quantitative agreement with empirical data (e.g., Fig.~\ref{fig:all-country-fits}), which suggests that this simple model captures the essence of the real-world dynamics.
In summary, our model assumes only three states and a global time-varying infection rate, $a(t)$. Even though there are reasons to consider this model too simplistic, it is able to effectively describe the roughly s-like curve of infections, and it therefore begins to help us understand the mechanism for a logistic-like cumulative distribution (Fig. \ref{fig:original-scaled-infection-curve-over-time}), such as reductions in infectability due to disease awareness.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.8\columnwidth]{figures/SDIRExampleFits-crop.eps}
\caption{\label{fig:all-country-fits} SDIR fits to empirical data (fits for all infected administrative divisions can be seen in Supplementary Figs. S3, S4, \& S5 online). The cumulative number of infected individuals in (a) Forecariah, Guinea; (b) Montserrado, Liberia; and (c) Bo, Sierra Leone, where the blue line is the best fit and the shaded area represents standard errors. While some fits are poorer than others, we generally capture the qualitative structure of the empirical data, including the cumulative number of infected. This model may therefore begin to explain the mechanism behind the s-curve.}
\end{figure}
\subsection*{Are Strains Uniformly Transmissible?}
In this section, we use EVD genome sequences to determine which strains appear more often than expected by chance in Guinea and Sierra Leone. Our results suggest that EVD strains do not necessarily have uniform transmissibility. We are not aware of any previous models that take the strain of EVD into account, although a previous paper found that certain EVD strains in Sierra Leone have different growth rates \cite{EbolaCompetingClaudes}. Unlike that paper, however, we can use our method to understand the transmissibility of strains in Guinea, where the sampling rate is otherwise too low.
\subsubsection*{Modeling Strain-Dependent Infection Probabilities}
We use metadata from Ebola nucleotide sequences isolated from patients in Guinea \cite{GuineaLineages1} between April, 2014 and January, 2015, and Sierra Leone between late May 2014 and January, 2015 \cite{SLEbolaSeq2014,SLEbolaFreeTown,SLEbolaSeq}, to determine when and where a strain of EVD was found. We then use kernel-density estimation (KDE, see Methods) to estimate the spatial probability density function (PDF) of being infected with an EVD strain $s$ \cite{KDE}:
\begin{equation}
\label{strainEVD}
P_{E}(\vec{x},t,h,\Delta t|s) = \frac{C}{|\mathcal{S}_s| h^2} \sum_{i\in \mathcal{S}_s} K\left(\frac{\vec{x}-\vec{x}_i}{h}\right) H\left(\Delta t - |t-t_i|\right).
\end{equation}
Here, $K$ is a kernel with bandwidth $h$, representing the area around an observed sequence where individuals are likely to be infected, $C$ is an overall constant, and $H$ is the Heaviside step function. $\mathcal{S}_s$ is the set of labels of the pairs $\{\vec{x}_i,t_i\}$ corresponding to known infections of strain $s$.
We add a Heaviside step function in the above equation to implement a sliding time window of width $\Delta t$ -- only sequence data with $|t - t_i| \le \Delta t$ is considered relevant. For the rest of the paper, our kernel is chosen to be a radially symmetric Gaussian:
\begin{equation}
K\left(\frac{\vec{x}-\vec{x}_i}{h}\right) = K\left(\frac{\|\vec{x} - \vec{x}_i\|}{h}\right) = \frac{1}{2 \pi}\exp\left(-\frac{\|\vec{x} - \vec{x}_i\|^2}{2 h^2}\right).
\end{equation}
We do not believe that our results are qualitatively sensitive to the kernel choice. By summing these probabilities, we can then find the probability of being infected with EVD:
\begin{equation}
\label{Prob}
P_{E}(\vec{x},t,h,\Delta t) = \frac{C}{nh^2} \sum_s\sum_{i\in \mathcal{S}_s} K\left(\frac{\vec{x}-\vec{x}_i}{h}\right) H\left(\Delta t - |t-t_i|\right),
\end{equation}
where $n = \sum_s |\mathcal{S}_s|$.
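A minimal Python sketch of these two estimators, Eqs. \ref{strainEVD} \& \ref{Prob}, is given below; \texttt{sequences} is a hypothetical list of \texttt{(strain, x, y, t)} records extracted from the sequence metadata, coordinates are in degrees, and the overall constant $C$ is dropped since only ratios of the two densities are used later.
\begin{verbatim}
# Sketch: strain-conditional KDE and overall KDE (up to the constant C).
import numpy as np

def gaussian_kernel(dx, dy, h):
    # radially symmetric Gaussian kernel (note the minus sign)
    return np.exp(-(dx**2 + dy**2) / (2.0 * h**2)) / (2.0 * np.pi)

def p_strain(x, y, t, strain, sequences, h=0.5, dt=120.0):
    # P_E(x, t, h, dt | s): sum over strain-s records in the window
    hits = [(x_i, y_i) for s_i, x_i, y_i, t_i in sequences
            if s_i == strain and abs(t - t_i) <= dt]
    n_s = sum(1 for s_i, *_ in sequences if s_i == strain)
    total = sum(gaussian_kernel(x - x_i, y - y_i, h) for x_i, y_i in hits)
    return total / (n_s * h**2) if n_s else 0.0

def p_any(x, y, t, sequences, h=0.5, dt=120.0):
    # P_E(x, t, h, dt): sum over all records in the window
    total = sum(gaussian_kernel(x - x_i, y - y_i, h)
                for _, x_i, y_i, t_i in sequences if abs(t - t_i) <= dt)
    return total / (len(sequences) * h**2)
\end{verbatim}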
There are two free parameters: the kernel bandwidth $h$ and sliding time window width $\Delta t$, but knowing the true infection pattern allows for these parameters to be estimated (see Fig. \ref{fig:GuineaProbabilityCorrel} for EVD in Guinea).
\begin{figure}[hbt]
\centering
\includegraphics[width=0.5 \columnwidth]{figures/KDESpearmanCorrelation-Guinea.eps}~~~~~~~~~~~~~~~~~~
\caption{\label{fig:GuineaProbabilityCorrel}
A heat map of the Spearman rank correlation between Eq.~\ref{Prob} and the number of individuals infected with EVD within a time $\Delta t$, as a function of the bandwidth $h$ and the time window width, $\Delta t$ in Eq.~\ref{Prob}. The correlation between the model and data is highest (Spearman $\rho = 0.50$, $p < 0.01$, $n = 68$) when the bandwidth is 0.5 degrees and the time window width is 120 days.}
\end{figure}
We apply KDE to EVD in Guinea and Sierra Leone, and test whether the estimated probabilities of becoming infected correlate with the number of infected individuals within each time window. A high correlation would represent close agreement between the KDE and the actual spatial probability, and would increase our trust in these findings. In Guinea, we find Spearman correlations of up to 0.5 ($p<0.01$, $n = 68$, see Fig. \ref{fig:GuineaProbabilityCorrel}), which suggests that the KDE is in good agreement with data. In Sierra Leone, however, these correlations are negligible. The Guinea dataset will therefore be the focus of the rest of our paper.
From Fig. \ref{fig:GuineaProbabilityCorrel}, we find that the model best correlates with Guinea's infection data if $h=0.5$ and $\Delta t = 120$ days.
A plausible biological reason for the time window to be 120 days is that it may correspond to the effective viral shedding time, which can occur over a timescale of roughly 100 days \cite{ViralShed}. For example, the virus has been found to spread via sexual contact; it has been detected in vaginal fluid for up to 33 days, and in semen for up to 82 days via cell culture and 550 days via RNA. These values are extremes and are unlikely to be encountered often; however, there has been at least one other report of RNA detected in semen 100 days afterwards \cite{EVDSTD}. The length scale of 0.5 degrees, in turn, is roughly the distance between prefectures, which probably explains why the peak is at this bandwidth.
To check whether our results are robust to the choices of $h$ and $\Delta t$, we let $h$ vary between 0.1 and 1 degrees and $\Delta t$ vary between 30 and 300 days, and find that our conclusions are insensitive to these choices.
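A sketch of this parameter scan, reusing \texttt{p\_any} from the sketch above, could look as follows; \texttt{cases} is a hypothetical list of \texttt{(x, y, t, n\_infected)} observations, one per administrative division and time window.
\begin{verbatim}
# Sketch: Spearman correlation between the KDE estimate and observed
# case counts over a grid of (h, dt) values, as in the heat map.
import numpy as np
from scipy.stats import spearmanr

def scan_parameters(cases, sequences,
                    bandwidths=np.arange(0.1, 1.01, 0.1),
                    windows=np.arange(30, 301, 30)):
    rho = np.zeros((len(bandwidths), len(windows)))
    for i, h in enumerate(bandwidths):
        for j, dt in enumerate(windows):
            pred = [p_any(x, y, t, sequences, h, dt)
                    for x, y, t, _ in cases]
            obs = [n for _, _, _, n in cases]
            rho[i, j], _ = spearmanr(pred, obs)
    return rho   # the argmax of rho gives the best (h, dt) pair
\end{verbatim}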
\subsubsection*{Measuring the Relative Transmissibility of Strains}
We have just demonstrated the applicability of KDE to measuring the relative likelihood of being infected with EVD across districts. This is not in and of itself too interesting -- indeed, other models may create better quantitative predictions of how EVD spreads \cite{Spatiotemporal}. Unlike previous work, however, this model can predict the relative transmissibility of individual EVD strains (Eq. \ref{strainEVD}). We will apply these predictions towards determining the relative transmissibility of EVD strains. If a strain $s$ consistently appears more often than expected compared to other strains, then we can conclude that $s$ may be more transmissible.
Let a new infection arrive at position $\vec{y}$ and at time $t_y$. Our model may predict that the probability that this infection is of strain $s$ is, e.g., 20\%, while that of some other strain is 40\%. If we find the disease is indeed strain $s$, we may be a little ``surprised''. If $s$ consistently appears more often than expected, we may believe that $s$ is simply more successful -- it can reach individuals more quickly than other strains. To quantify our ``surprise'' at seeing a new infection of strain $s$, we can write the equation:
\begin{equation}
\label{Qs}
Q_s(i+1) = \frac{P_E(\vec{x}_{i+1},t_{i+1},h,\Delta t|s)\, p(s)}{P_{E}(\vec{x}_{i+1},t_{i+1},h,\Delta t)} - I(i+1 \in \mathcal{S}_s),
\end{equation}
where $I$ is the indicator function of whether the new infection's strain is $s$, and $p(s)$ is the probability that the infection strain is $s$ (see Fig. \ref{fig:EVDSchematic}). On average, the difference between our prediction and the data, $S_s$, defined as
\begin{figure}[hbt]
\centering
\includegraphics[width=0.75\columnwidth]{figures/EVDSchematic-crop.eps}
\caption{\label{fig:EVDSchematic}A schematic for our success metric. (a) When a new infection appears, the probability the infection would be a strain $s$ is $P_{E}(\vec{x},t,h,\Delta t|s)$. We define $Q_s$ to be the difference between the prediction and actual event. (b) $S_s$, the expectation value of $Q_s$, should be 0 if $s$ is no more transmissible than the disease as a whole. If $S_s < 0$, then we would say that $s$ is more transmissible than the typical strain, while the opposite is true if $S_s > 0$.}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.7\columnwidth]{figures/SsHeatmapNew-crop.eps}~~~~~~~~~~~~~
\caption{\label{fig:GNPval}The success metric for strains in Guinea for (a) SL1, (b) GN1, (c) GN2, (d) GN3, (e) GN4, across various time window widths and bandwidths, where red values correspond to strains seen less often, and blue more often, than expected between the time the strain first appeared and last was seen. Sequence data comes from \cite{GuineaLineages1}.}
\end{figure}
\begin{equation}
S_s \equiv \langle Q_s\rangle = \frac{\sum_{j| t_{i_1}\le t_j\le t_{i_2}, i_1,i_2\in\mathcal{S}_s} Q_{s}(j)}{m},
\label{Success}
\end{equation}
should be 0, where the sum is over all sequences between the first and last appearance of strain $s$, and $m$ is the number of sequences that satisfy this constraint. When $S_s = 0$, $s$ is no stronger or weaker than the infection as a whole. If $S_s < 0$, then $s$ is stronger than strains of EVD generally, while $S_s>0$ implies the opposite (see Fig.~\ref{fig:EVDSchematic}). To determine the statistical significance of $S_s$, we bootstrap values for which $S_s = 0$ (see Methods).
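A sketch of both quantities, Eqs. \ref{Qs} \& \ref{Success}, reusing \texttt{p\_strain} and \texttt{p\_any} from above, is given below; for simplicity the KDE is evaluated on the full sequence set, whereas in the text the prediction refers to a newly arriving infection.
\begin{verbatim}
# Sketch: surprise Q_s and success metric S_s for one strain.
def surprise(record, strain, p_s, sequences, h=0.5, dt=120.0):
    s_i, x, y, t = record
    # p_any > 0 at observed records: each record contributes to its sum
    predicted = (p_strain(x, y, t, strain, sequences, h, dt) * p_s
                 / p_any(x, y, t, sequences, h, dt))
    return predicted - (1.0 if s_i == strain else 0.0)

def success_metric(strain, sequences, h=0.5, dt=120.0):
    # mean surprise over all sequences between the first and last
    # appearance of the strain; S_s < 0 marks a "stronger" strain
    times = [t for s, _, _, t in sequences if s == strain]
    t_first, t_last = min(times), max(times)
    p_s = len(times) / len(sequences)    # empirical strain frequency
    window = [r for r in sequences if t_first <= r[3] <= t_last]
    return sum(surprise(r, strain, p_s, sequences, h, dt)
               for r in window) / len(window)
\end{verbatim}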
We find that $S_s$ is significantly different from chance (Fig. \ref{fig:GNPval}); therefore, Ebola strains might not be uniformly transmissible. For example, SL1 has one of the most negative values of $S_s$ (Fig.~\ref{fig:GNPval}), and all of its values are statistically significant ($p<0.05$, $n$ varies with $\Delta t$). Interestingly, we also find that the values for SL1 appear to fluctuate, possibly because we only have 9 SL1 datapoints, versus 37, 19, 13, and 36 datapoints for GN1, GN2, GN3, and GN4, respectively. Most other strains have a success metric indicating a stronger strain than expected by chance, probably because few strains are concurrent in time; comparing $S_s$ across strains is therefore difficult.
Our method may be compared to the method used by Meyer, Elias, and H{\"o}hle \cite{EpidPointProcess}, who compared the success of invasive meningococcal disease (IMD) strains. Common surveillance algorithms were found to be practical, but these appear to be more useful for a chronic rather than an acute outbreak such as EVD.
\section*{Discussion}
In conclusion, we find several factors not often accounted for that may improve the accuracy of modeling EVD. First, EVD appears to spread with a constant effective velocity through a migration network, in disagreement with the homogeneous mixing hypothesis (Figs. \ref{fig:DeffCountry} \& \ref{fig:DeffDistrict}, and Supplementary Fig. S1 online) commonly used to model diseases like Ebola. By taking into account the role a migration network has in disease spread, we can predict where EVD will arrive with greater accuracy than before. This method can also accelerate the process of identifying the index case by determining which administrative division is the origin through arrival time-effective distance correlation maximization. Second, we find that the growth of EVD at the finer spatial resolutions can be well described by three scaling parameters, and the initial growth rate decreases with the population density, contrary to our intuition, which suggests that a population density-dependent EVD model may more accurately predict the spread of Ebola when the disease first arrives at an administrative division. One plausible explanation for this result is that higher population density areas receive better healthcare than other areas, but more work is necessary to understand this behavior. Finally, we find a wide variation in the transmissibility of different strains of EVD, which suggests that modeling each disease strain, when this information is known, can improve the prediction task by reducing the heterogeneity of the data. In addition, our method may improve vaccination strategies if vaccines are made for particularly transmissible strains as well as the most common ones.
One way to take these factors into account would be through a meta-population model on top of a migration network with high spatial resolution. A meta-population model treats areas such as cities, districts, or countries as nodes on a network, with links connecting them to represent the flow of individuals from one area to another. Diseases within each area (node) are modeled with a compartmental model, under the assumption that the homogeneous mixing approximation is more accurate in smaller areas than for the entire network. Previous work has already found that a meta-population model \cite{ContagionGeometry}, or spatio-temporal model \cite{Spatiotemporal}, can accurately predict the spread of diseases. Finally, strain-dependent transmissibility may further improve model accuracy.
Our approach towards testing assumptions in disease modeling is not restricted to EVD, but can be applied toward other diseases of epidemiological concern. Future work is therefore necessary to test our methods on other diseases and check whether a meta-population model will better predict disease spread.
\section*{Methods}
This section explains how data on the cumulative number of infected individuals at the first and second administrative level (e.g., counties or districts) was gathered, how the migration network was constructed, and how we found and used strain data in Fig. \ref{fig:GNPval}.
\subsection*{Infection Data from Humanitarian Data Exchange and World Health Organization}
For the 2014-2016 EVD epidemic, we are aware of two main data sources on the cumulative number of infections: the World Health Organization (WHO) patient database and the WHO weekly situation reports \cite{WHOEbola}. The patient database produces results similar to the Situation Reports, but for consistency, all plots in this paper use the patient database. We focused on data from December 2013 to January 2016 for the major West African countries affected: Guinea, Liberia, Mali, Nigeria, Senegal, and Sierra Leone.
A significant amount of work went into parsing data from the patient database. Administrative division names in the data had multiple spellings, some administrative areas appeared individually but also aggregated with nearby areas, and spaces, accents, and other characters appeared inconsistently. These were cleaned and harmonized. Although in the most affected countries we have the cumulative number of cases at the second administrative level, in three countries (Liberia, Mali, and Nigeria) we only have data at the first administrative level (region, not district).
\subsection*{Migration Data from Flowminder}
Data on intra- and international migration in West Africa is taken from Flowminder \cite{EbolaMigration}. For modeling migration at fine spatial resolutions, the Flowminder data contain three different data sets: (1) within countries (capturing exclusively \underline{intra}national movement of people); (2) between countries (capturing exclusively \underline{inter}national movement of people); and (3) within and between countries, capturing both intra- and international movement of people. All three data sets include the West African countries most affected by EVD as well as Benin, Cote d'Ivoire, Gambia, Ghana, Guinea Bissau, Senegal, and Togo, which had few, if any, EVD cases. For the third dataset, Flowminder collected census microdata from the Public Use Microdata Series (IPUMS) on the country (or countries) in which an individual resided during the previous year.
To create a migration network, we first matched the Flowminder node coordinates to known district centroids (errors between centroids and node coordinates were $\pm 10$km). Having matched nodes to administrative names, we associated each node to the district-level arrival time of EVD, recording the date of the first case the WHO patient database records for each administrative area. Four sets of gravity model parameters were used to estimate the traffic between administrative areas. Three were fit to migration inferred from cell phone data in Cote d'Ivoire, Kenya, and Senegal, respectively, while the final one was fit to IPUMS data. We found that all produce similar fits and work equally well to estimate the effective distances between areas. In addition, using population data from GeoHive (next section), we created a radiation migration model, which has been found to estimate migration patterns more accurately than gravity models \cite{MigrationModel}; we therefore used it for all figures in this paper that use migration networks. Figures using gravity models can be seen in Supplementary Figs. S1 \& S2 online.
\subsection*{Population data from GeoHive}
To determine the population density, and to estimate the migration network from the radiation migration model \cite{MigrationModel}, we first find the population and area of each administrative division. First, we collected the areas of administrative divisions in West Africa from www.geohive.com. Next, we collected the populations of those districts from census records. For districts with multiple census datasets, we used the latest report: Guinea, 2014; Liberia, 2008; Mali, 2009; Nigeria 2011; Senegal 2013; and Sierra Leone, 2004. Some population data is probably out-of-date and may lead to some biases in the population density and radiation model. However, we believe that newer data will confirm our initial conclusions.
\subsection*{Fitting Disease Models}
To find the best-fit parameters and associated errors in the logistic model (Eq. \ref{LogisticRegression}), and the SDIR model (Eqs. \ref{SDIR1}, \ref{SDIR2}, \& \ref{SDIR3}), we used least squares fitting. To reduce the possibility of overfitting data, we focus on districts where more than 20 individuals become infected.
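As an illustration, a least-squares fit for a single division could look like the sketch below; the three-parameter logistic form is our assumption standing in for the logistic model of the text, and \texttt{t}, \texttt{cases} are a division's day numbers and cumulative case counts.
\begin{verbatim}
# Sketch: least-squares fit of a logistic growth curve to the
# cumulative case counts of one administrative division.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, q, t0):
    # final size K, initial growth rate q, midpoint t0
    return K / (1.0 + np.exp(-q * (t - t0)))

def fit_division(t, cases):
    p0 = [cases.max(), 0.1, np.median(t)]     # rough initial guess
    popt, pcov = curve_fit(logistic, t, cases, p0=p0, maxfev=10000)
    return popt, np.sqrt(np.diag(pcov))       # best fit and std errors
\end{verbatim}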
\subsection*{Temporal-Spatial Resolution of Sequences}
To find empirical values of Eqs. \ref{strainEVD} \& \ref{Prob}, we gathered infection metadata (the strain, time, and location of each infected individual) from recent papers \cite{SLEbolaSeq2014,SLEbolaFreeTown,SLEbolaSeq,GuineaLineages1}. In the future we hope to construct our own phylogenetic tree from the sequences, but for now we use the strain labels from the supplementary data itself.
A substantial fraction of sequences do not belong to any significant clade, and some sequences could belong to multiple clades, depending on the phylogenetic tree method used \cite{SLEbolaSeq2014,SLEbolaFreeTown,SLEbolaSeq}; therefore, we found $Q_s$ and $S_s$ for each strain by comparing that strain to all other sequences within the time window.
To determine the statistical significance of values in Fig. \ref{fig:GNPval}, we generate Bernoulli random variables (1 with probability $Pr(s,\vec{x}_i,t_i)$, and 0 with probability $1-Pr(s,\vec{x}_i,t_i)$) with
\begin{equation}
Pr(s,\vec{x}_i,t_i) = \frac{P_E(\vec{x}_{i},t_{i},h,\Delta t|s)\, p(s)}{P_{E}(\vec{x}_{i},t_{i},h,\Delta t)},
\end{equation}
where $\vec{x}_{i}$ is the infection location at a time $t_{i}$, to represent idealized data in which $\langle S_s\rangle=0$.
We determine $S_{s,bootstrap}$ (Eq. \ref{Success}) from these idealized values of $Q_s$.
If the absolute value of the empirical $S_s$ value, $|S_{s,empirical}|$, is greater than 95\% of $|S_{s,bootstrap}|$ values, then the empirical data is statistically significant.
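A sketch of this bootstrap, continuing the code from the previous sections, is given below; the number of resamples is an illustrative choice, and edge cases (e.g. a resample with no occurrences of the strain) are ignored for brevity.
\begin{verbatim}
# Sketch: bootstrap null for S_s under strain-independent transmission,
# i.e. strain labels drawn with probability Pr(s, x_i, t_i).
import numpy as np

def bootstrap_significant(strain, sequences, n_boot=200,
                          h=0.5, dt=120.0, alpha=0.05):
    rng = np.random.default_rng(0)
    s_emp = success_metric(strain, sequences, h, dt)
    p_s = sum(1 for s, *_ in sequences if s == strain) / len(sequences)
    null = []
    for _ in range(n_boot):
        resampled = []
        for s_i, x, y, t in sequences:
            pr = (p_strain(x, y, t, strain, sequences, h, dt) * p_s
                  / p_any(x, y, t, sequences, h, dt))
            label = strain if rng.random() < pr else "other"
            resampled.append((label, x, y, t))
        null.append(success_metric(strain, resampled, h, dt))
    return abs(s_emp) > np.quantile(np.abs(null), 1.0 - alpha)
\end{verbatim}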
\section{Introduction}
The theory for equilibrium systems is far from being complete. For example, the stationary state of a system strongly interacting with a surrounding thermostat is generally not available \cite{talkner2020}. Despite this incompleteness for equilibrium setups there are laws in nature which allow us to predict their reaction to an external perturbation. A celebrated example is the Le Chatelier-Braun principle \cite{chatelier1884, braun1887a, braun1887b, landau}, which loosely speaking states that if a system in equilibrium is subjected to a perturbation, a reaction will occur so that the equilibrium will be shifted towards a new one, counteracting this change. This principle may be regarded as a precursor of linear response theory \cite{kubo1966, marconi2008}, which nowadays is a common tool for predicting the properties of a system in equilibrium perturbed by an external stimulus. An archetypal example is the Sutherland-Einstein relation \cite{sutherland1905, einstein1905,chaos2005}, saying that for such setups the diffusion coefficient is an increasing function of temperature.
Despite many years of active research our current understanding of the fundamentals of nonequilibrium physics is still incomplete, undoubtedly lagging far behind what we know for equilibrium systems. Yet much progress has been achieved over the last decades in modelling certain aspects of such systems like stochastic resonance \cite{gammaitoni1998}, noise assisted transport far from equilibrium \cite{kula,hanggi2009}, absolute negative mobility \cite{eichhorn2002a, machura2007, nagel2008, jsm,spiechowicz2014pre, slapik2019}, anomalous diffusion \cite{metzler2014,rysiek,spiechowicz2019njp} or various recent fluctuation theorems \cite{jarzynski2011, campisi2011}, to name only a few. Here, we aim to demonstrate that nonequilibrium conditions allow for a rich complexity which is not present in a system at thermal equilibrium. The reason behind it is that equilibrium is ruled by various Thermodynamic Laws and symmetries such as for example {\it detailed balance}, which generally lose their validity if taken out of equilibrium. For this purpose we survey recent research on peculiar transport behaviour occurring in temporally driven periodic systems, with particular emphasis put on diffusion anomalies. We rely on the Langevin equation description for nonlinear Brownian motion which, as we shall demonstrate, can be successfully applied also to nonequilibrium systems. It can be derived from a corresponding microscopic Hamiltonian description complemented by fundamentals of equilibrium statistical physics imposed on the thermostat \cite{kubo1966}. The system of interest may look simple at first glance; however, the emerging inertial dynamics is exceptionally rich, as the driven Brownian motion is governed by several parameters which in turn yield a complex dynamics.
\section{Generic model of an inertial Brownian motor}
In this work we consider a classical Brownian motor \cite{hanggi2009}. It is typically modeled as an inertial particle of mass $M$ which moves in a spatially periodic potential $U(x)$ that breaks reflection symmetry and is additionally driven by an unbiased time-periodic force $A\cos{(\Omega t)}$ of amplitude $A$ and angular frequency $\Omega$. The system is coupled to a thermostat of temperature $T$. The corresponding Langevin equation reads
\begin{equation} \label{LL}
M\ddot{x} + \Gamma\dot{x} = -U'(x) + A\cos{(\Omega t)} + \sqrt{2\Gamma k_B T}\,\xi(t),
\end{equation}
where the dot and the prime denote differentiation with respect to time $t$ and the Brownian particle coordinate $x$, respectively.
The parameter $\Gamma$ stands for the kinetic friction coefficient and $k_B$ denotes the Boltzmann constant. The interaction with thermostat is modeled by $\delta$-correlated, Gaussian white noise $\xi(t)$ of vanishing mean and unit intensity, i.e.,
\begin{equation}
\langle \xi(t) \rangle = 0, \quad \langle \xi(t)\xi(s) \rangle = \delta(t-s).
\end{equation}
The spatially periodic potential $U(x)$ is assumed to be reflection non-symmetric, i.e. of a ratchet-type \cite{hanggi2009,denisov2014}. As an example, we choose a double-sine form of period $2\pi L$ and barrier height $2 \Delta U$; explicitly
\begin{equation}
\label{pot}
U(x) = -\Delta U\left[ \sin{\left(\frac{x}{L}\right)} + \frac{1}{4}\sin{\left( 2 \frac{x}{L} + \varphi - \frac{\pi}{2}\right)}\right].
\end{equation}
Before we start the analysis of this setup we need to transform the above equation of motion in its dimensionless form. Towards this aim, we introduce a dimensionless distance and time variables for the system under consideration \cite{spiechowicz2015pre, slapik2018}; i.e. we set
\begin{equation}
\label{scales}
\hat{x} = \frac{x}{L}, \quad \hat{t} = \frac{t}{\varkappa_0}, \quad \varkappa_0 = \frac{\Gamma L^2}{\Delta U},
\end{equation}
so that the dimensionless form of the Langevin dynamics (\ref{LL}) reads
\begin{equation}
\label{dimlessmodel}
m\ddot{\hat{x}} + \dot{\hat{x}} = -\hat{U}'(\hat{x}) + a\cos{(\omega \hat{t})} + \sqrt{2Q} \hat{\xi}(\hat{t})\;.
\end{equation}
Here, the dimensionless potential $\hat{U}(\hat{x}) = U(x)/\Delta U = U(L\hat{x})/\Delta U = \hat{U}(\hat{x} + 2\pi)$ possesses the period $2\pi$ and half of the barrier height is $\Delta {\hat U} = 1$. The remaining parameters are scaled as: $m = M/(\Gamma\varkappa_0)$, $a = (L/\Delta U)A$, $\omega = \varkappa_0\Omega$. The rescaled thermal noise reads \mbox{$\hat{\xi}(\hat{t}) = (L/\Delta U)\xi(t) = (L/\Delta U)\xi(\varkappa_0\hat{t})$} and assumes the same statistical properties as $\xi(t)$, namely $\langle \hat{\xi}(\hat{t}) \rangle = 0$ and \mbox{$\langle \hat{\xi}(\hat{t})\hat{\xi}(\hat{s}) \rangle = \delta(\hat{t} - \hat{s})$}. The dimensionless noise intensity $Q = k_BT/\Delta U$ is the ratio of thermal and half of the activation energy the particle needs to overcome the non-rescaled potential barrier. In order to simplify the notation further, we shall omit the $\wedge$-notation in the above equation (\ref{dimlessmodel}).
The above proposed scaling procedure is not unique as one is free to define other characteristic time scales of the system described by Eq. (\ref{LL}); namely,
\begin{equation}
\varkappa_0 = \frac{\Gamma L^2}{\Delta U}, \qquad \varkappa_1 = \frac{M}{\Gamma}, \qquad \varkappa_2^2 = \frac{ML^2}{\Delta U}, \qquad \varkappa_3 = \frac{2\pi}{\Omega} \;.
\end{equation}
%
Note that only three of them are independent because $\varkappa_0 \varkappa_1 = \varkappa_2^2$. Here we use $\varkappa_0$ as the unit of time, see Eq. (\ref{scales}) above. It corresponds to the characteristic time scale for an overdamped particle to move from the maximum of the potential $U(x)$ to its minimum; it can be extracted from the equation $\Gamma \dot x = -U'(x)$. The scale $\varkappa_1$ denotes the relaxation time of the velocity $v=\dot x$ of a free Brownian particle (i.e. for the choice $U(x)= A = 0$), which is obtained from the relation $M\ddot x +\Gamma\dot x=0$. Note that the dimensionless mass emerges as $m=\varkappa_1/\varkappa_0$; i.e. it equals the ratio of these two characteristic time scales. The quantity $\varkappa_2$ is a characteristic time scale of the conservative system (when $\Gamma=A=0$) and follows from the equation $M \ddot x =-U'(x)$; it is related to the period of the linearized particle oscillations within one potential well. The remaining third time scale $\varkappa_3$ is the period of the external time-periodic force. Thermal fluctuations are modeled here approximately as white noise; in real systems the noise correlation time is never strictly zero, but it is typically much smaller than all other characteristic time scales.\\
The limit $\Gamma \longrightarrow \infty$, implying that $ m \longrightarrow 0$, presents an overdamped thermal rocking ratchet dynamics, whose adiabatic and non-adiabatic driving regimes have been thoroughly studied previously in Refs. \cite{epl1994,lnp1996}, both for the stochastic dynamics at finite temperatures and in the deterministic limit \cite{epl1994,acta2006}. Remarkably, this overdamped deterministic regime is already rather complex, exhibiting for example locking regimes which follow a devil's staircase behaviour \cite{lnp1996,acta2006}.\\
The potential (\ref{pot}) was originally derived for the asymmetric superconducting quantum interference device (SQUID), which is composed of a loop with three capacitively and resistively shunted Josephson junctions
\cite{kautz, zapata1996, spiechowicz2014prb, sterck2005, spiechowicz2015chaos}. The particle coordinate $x$ and velocity $v$ correspond to the Josephson phase and the voltage drop across the device, respectively. The particle mass stands for the capacitance of the SQUID, while the friction coefficient translates to the reciprocal of the SQUID resistance. The time-periodic force corresponds to the modulated external current. The asymmetry parameter $\varphi$ of the potential (\ref{pot}) can be controlled by an external magnetic flux which pierces the device.
From a mathematical point of view Eq. (\ref{dimlessmodel}) is a second order differential equation additionally complemented by a random force. At first glance it seems simple to study. However, note that even the phase space of the noiseless autonomous system modeled by Eq. (\ref{dimlessmodel}) is already three-dimensional $\{x,y=\dot{x},z=\omega t\}$, which is the minimal dimensionality for displaying chaotic dynamics \cite{strogatz}. Moreover, the underlying parameter space $\{m, a, \omega, Q, \varphi\}$ is five-dimensional, implying a rich and correspondingly highly complex behaviour.
The probability density $P(x, v, t)$ for the particle coordinate $x$ and its velocity $v$ obeys a Fokker-Planck equation corresponding to the Langevin equation (\ref{dimlessmodel}) \cite{risken}. It is a parabolic partial differential equation with a time-periodic drift coefficient in the phase space of position and velocity. Combined with the nonlinear periodic potential $U(x)$ and the five-dimensional parameter space, analytic time-dependent solutions become in practice unattainable and we are thus forced to use advanced numerical resources. Details of the latter are elaborated in Ref. \cite{spiechowicz2015cpc}. However, for large dimensionless times $(t \gg 1$) the probability density $P(x, v, t)$ approaches the asymptotic periodic probability distribution $P_{as}(x, v, t)= P_{as}(x, v, t + \mathsf{T})$ with the periodicity $\mathsf{T}=2\pi/\omega$ of the time-periodic driving $a\cos(\omega t)$ \cite{jung1990,jung1991, jung1993}.
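A minimal numerical sketch of such a simulation is given below; it integrates the dimensionless Langevin equation (\ref{dimlessmodel}) with a stochastic Euler (Euler--Maruyama) scheme, where the time step, ensemble size and snapshot stride are illustrative choices rather than the production values of Ref. \cite{spiechowicz2015cpc}.
\begin{verbatim}
# Sketch: Euler-Maruyama integration of the dimensionless equation
# m*x'' + x' = -U'(x) + a*cos(w*t) + sqrt(2Q)*xi(t).
import numpy as np

def U_prime(x, phi=np.pi / 2):
    # derivative of U(x) = -[sin(x) + (1/4) sin(2x + phi - pi/2)]
    return -(np.cos(x) + 0.5 * np.cos(2.0 * x + phi - np.pi / 2))

def simulate(m=6.0, a=1.899, omega=0.403, Q=0.0007, n_traj=1024,
             t_max=1e4, dt=1e-2, stride=100, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    # random initial conditions: the noiseless system is multistable
    x = rng.uniform(0.0, 2.0 * np.pi, n_traj)
    v = rng.uniform(-2.0, 2.0, n_traj)
    n_snap = steps // stride
    xs = np.empty((n_snap, n_traj))
    for k in range(steps):
        t = k * dt
        noise = np.sqrt(2.0 * Q * dt) * rng.standard_normal(n_traj)
        v += ((-v - U_prime(x) + a * np.cos(omega * t)) * dt + noise) / m
        x += v * dt
        if (k + 1) % stride == 0:
            xs[(k + 1) // stride - 1] = x
    t_snap = dt * stride * (1.0 + np.arange(n_snap))
    return t_snap, xs
\end{verbatim}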
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth]{Fig1a.pdf}
\includegraphics[width=0.45\linewidth]{Fig1b.pdf}\\
\includegraphics[width=0.45\linewidth]{Fig1c.pdf}
\caption{Transient anomalous diffusion of an inertial Brownian particle moving in a periodic potential and driven by an unbiased time-periodic force. In panel (a) we present the diffusion coefficient $D(t)$ as defined in Eq. (\ref{diff}). Panel (b) depicts the time evolution of the coordinate variance $\sigma_x^2(t)$. In panel (c) the period averaged velocity variance $\sigma_\mathbf{v}^2(t)$ is shown.
Two cases of the thermal noise intensity $Q$, proportional to temperature, are presented (red and blue line). The region corresponding to the subdiffusive behaviour is for the intensity set at $Q=0.0007$ (region in cyan colour). The remaining parameters are chosen as: $m = 6$, $a = 1.899$, $\omega = 0.403$. The rescaled potential is $U(x)=-\sin(x) -(1/4) \sin(2x)$, which corresponds to $\varphi = \pi/2$. These panels are reproduced from Ref. \cite{spiechowicz2017scirep}.}
\label{fig1}
\end{figure}
\section{Transient regime: anomalous diffusion}
The diffusion behaviour of the particle dynamics and the spread of its trajectories is conventionally characterized by the mean-square deviation (variance) of the particle position $x(t)$ \cite{metzler2014}, namely,
\begin{equation}
\label{msd}
\sigma_x^2(t) = \langle \left[x(t) - \langle x(t) \rangle \right]^2 \rangle = \langle x^2(t) \rangle - \langle x(t) \rangle^2,
\end{equation}
where the averaging $\langle \cdot \rangle$ is over all realizations of thermal fluctuations as well as over the initial conditions for the position $x(0)$ and the velocity $\dot{x}(0)$. The latter is necessary because in the deterministic limit of vanishing thermal noise intensity $Q \to 0$ the dynamics may possess several coexisting attractors thus being non-ergodic and implying that the corresponding results may be affected by a specific choice of those selected initial conditions \cite{spiechowicz2016scirep}. If the coordinate variance grows linearly in evolving time; i.e.,
\begin{equation}
\label{normal}
\sigma_x^2(t) = 2Dt
\end{equation}
we refer to the diffusion as {\it normal} and the parameter $D$ is termed the diffusion coefficient. Any deviation from this strict linearity qualifies as anomalous diffusion \cite{hofling2013, metzler2014, meroz2015, zaburdaev2015}. For anomalous diffusion the variance is an increasing function of elapsing time, growing either according to a subdiffusive or a superdiffusive power law \cite{metzler2014}
\begin{equation}
\label{alpha}
\sigma_x^2(t) \sim t^{\alpha}.
\end{equation}
Normal diffusion is observed for $\alpha = 1$. The case $0 < \alpha < 1$ refers to subdiffusion while the case $\alpha > 1$ is classified as superdiffusion. It becomes appropriate for the following discussion to consider a time-dependent "diffusion coefficient" $D(t)$, defined by the relation \cite{spiechowicz2016scirep}
\begin{equation}
\label{diff}
D(t) := \frac{\sigma_x^2(t)}{2t}.
\end{equation}
If the behaviour is as in (\ref{alpha}) then $D(t) \sim t^{\alpha -1}$ and
\begin{itemize}
\item $D(t)$ is time-decreasing for subdiffusion,
\item $D(t)$ is constant for normal diffusion,
\item $D(t)$ is time-increasing for superdiffusion.
\end{itemize}
We stress that only in the asymptotic long time regime with the exponent $\alpha$ approaching unity we find a properly defined, finite diffusion coefficient $D$, i.e.,
\begin{equation}
D = \lim_{t \to \infty} D(t) < \infty.
\end{equation}
If the diffusion process is anomalous then $D(t)$ either converges to zero (for subdiffusion) or diverges to infinity (for superdiffusion) when $t\to\infty$.
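As an illustration, the variance of Eq. (\ref{msd}) and the running diffusion coefficient of Eq. (\ref{diff}) can be estimated from an ensemble of simulated trajectories; the following sketch reuses the \texttt{simulate} routine from the previous section and computes a local power-law exponent to classify the diffusion regime.
\begin{verbatim}
# Sketch: estimate sigma_x^2(t), D(t) = sigma_x^2(t)/(2t) and the local
# exponent alpha(t) = d ln sigma_x^2 / d ln t from the ensemble above.
import numpy as np

t, xs = simulate()              # snapshots of shape (n_snap, n_traj)
var_x = xs.var(axis=1)          # ensemble variance of the position
D_t = var_x / (2.0 * t)         # running diffusion coefficient
alpha = np.gradient(np.log(var_x), np.log(t))
# alpha < 1: subdiffusion, alpha = 1: normal, alpha > 1: superdiffusion
\end{verbatim}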
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth]{fig2}
\caption{The dependence of the asymptotic diffusion coefficient $D$ (left ordinate) and the directed velocity
$\langle \mathbf{v} \rangle$ (right ordinate, see Eq. (\ref{v})) {\it vs.} the noise intensity $Q$, the latter being proportional to the temperature $T$ of the bath. The chosen parameters are: $m = 6$, $a = 1.899$, $\omega = 0.403$, $\varphi = \pi/2$. The panel is reproduced from Ref. \cite{spiechowicz2017chaos}.}
\label{fig2}
\end{figure}
In panels (a) and (b) of Fig. 1 we depict the time evolution of the diffusion coefficient $D(t)$ and the coordinate mean square deviation $\sigma_x^2(t)$, respectively, for two values of the noise intensity $Q \propto T$. At first glance, it is difficult to identify whether in fact anomalous diffusion takes place by just inspecting the behaviour of $\sigma_x^2(t)$. In distinct contrast, from inspecting instead the behaviour of $D(t)$ it becomes easier to differentiate between the two anomalous types of diffusion: superdiffusion occurs in the interval where $D(t)$ increases, while the case of decreasing $D(t)$ corresponds to subdiffusion. For an invariant $D(t)$ normal diffusion takes place. In panel (a), the evolution of $D(t)$ can be divided into the three following time-intervals: an early behaviour of superdiffusion $(0, \tau_1)$, an intermediate temporal interval $(\tau_1, \tau_2)$ where subdiffusion emerges over several decades, and an asymptotic long time regime $t> \tau_2$ where normal diffusion occurs. The crossover times $\tau_1$ and $\tau_2$ separating these domains can be controlled by the temperature or noise intensity. For a lower temperature (i.e. $Q=0.00016$, see the red curve in panel (a)) the lifetime of superdiffusion is extremely long. In fact it tends to infinity when $Q\to 0$ (the deterministic case). For $Q = 0.00016$ the superdiffusion regime extends to $\tau_1 \approx 3.2 \cdot 10^6$. It is difficult to numerically determine $\tau_2$ due to the limited stability of the utilized algorithm, leading to uncontrolled propagation of roundoff- and truncation-errors. However, if we adopt an extrapolation from other cases then the time $\tau_2$ is at least of order $10^{11} \sim 10^{13}$. For higher temperature ($Q=0.0007$, the blue curve in panel (a)) the lifetime $\tau_1$ is shorter and it tends to zero when the temperature tends to infinity. For $Q = 0.0007$ the superdiffusion lifetime is at $\tau_1 \approx 3.2 \cdot 10^3$ and for subdiffusion it is at $\tau_2 \approx 10^8$. For higher temperatures the dynamics is initially superdiffusive and approaches a normal diffusion behaviour without exhibiting an intermediate subdiffusion time-interval. It is important to note that generally the anomalous diffusion behaviour is only of a transient nature and eventually it always tends to normal diffusion in the asymptotic long time limit.
\section{Asymptotic normal long time diffusion: Non-monotonic temperature dependence}
In the standard Sutherland-Einstein relation \cite{sutherland1905, einstein1905} valid for systems at thermal equilibrium the diffusion coefficient $D$ is a monotonically increasing linear function of temperature $T$, i.e.,
\begin{equation}
\label{einstein}
D=\mu k_B T,
\end{equation}
where $\mu$ is a mobility coefficient. This is in accordance with our intuition because when temperature grows then thermal fluctuations become larger and in consequence fluctuations of the particle position also increase. However, for some parameter regimes of our nonequilibrium setup we observe an atypical, non-monotonic temperature dependence for the emerging asymptotic normal diffusion constant $D$ \cite{spiechowicz2017chaos}. An example is presented in Fig. \ref{fig2}. At low temperatures $Q$ the diffusion coefficient increases with increasing $Q$ until it reaches a local maximum at $Q\approx 2 \cdot 10^{-5}$ (cf. red line). Then it {\it decreases} towards a minimum at $Q\approx 5\cdot 10^{-3}$ before turning over into a monotonically growing function of $Q$; finally, at sufficiently large values of $Q$, the diffusion coefficient $D$ becomes precisely proportional to $Q$; i.e. to the temperature $T$ of the ambient thermal bath. This high temperature behaviour, however, is not depicted in Fig. \ref{fig2}. The decrease of the diffusion constant with increasing temperature $Q \propto T$ is truly counter-intuitive, being in clear contrast with the Sutherland-Einstein relation (\ref{einstein}) as well as with other known relations such as for example Vogel-Fulcher-like laws \cite{goychuk2014} or an Arrhenius-type behaviour for the diffusion of a Brownian particle in periodic potentials \cite{lifsonjackson,festa1978,htb1990}.
\begin{figure}[t]
\includegraphics[width=0.45\linewidth]{fig3.jpg}
\caption{Basins of attraction for the asymptotic long time particle velocity $\mathbf{v}(t)$. The red and blue coloured sets consist of all initial conditions $\{x(0), \dot{x}(0)\}$ eventually evolving to the running states with the positive $v_+ \approx 0.4$ and negative $v_- \approx -0.4$ velocity, respectively. The green colour marks the set of locked states $v_0 \approx 0$. Parameters are: $m = 6$, $a = 1.899$, $\omega = 0.403$, $\varphi = \pi/2$. For this particular regime the deterministic system (\ref{dimlessmodel}) with $Q = 0$ is non-chaotic. Panel is reproduced from \cite{spiechowicz2016scirep}.}
\label{fig3}
\end{figure}
\section{Averaged velocity of the Brownian motor}
In order to explain the above two anomalous transport phenomena one needs first to carefully examine the deterministic structure of the phase space $\{x, v\}$ of all coordinates and velocities of the Brownian motor. For the presented parameter regime, cf. Figs. 1 and 2, the noiseless system with $Q=0$ is non-chaotic with three coexisting attractors $\{v_+, v_0, v_{-}\}$ in the velocity subspace $\{v\}$. These attractors correspond to running solutions with $v_+ \approx 0.4$ and $v_{-} \approx -0.4$, and the locked solution $v_0 \approx 0$. There are three classes of trajectories corresponding to these three states: $x_+(t)\sim 0.4 t$, $x_{-}(t) \sim -0.4 t$ and $x_0(t) \sim 0$.
The basins of attraction for these attractors are shown in Fig. \ref{fig3}.
The red and blue sets consist of all initial conditions $\{x(0), v(0)\}$ evolving into the running states with positive $v_+ \approx 0.4$ and negative $v_- \approx -0.4$ velocity, respectively. The green colour marks the locked states with $v_0 \approx 0$.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth]{Fig4a}
\includegraphics[width=0.45\linewidth]{Fig4}
\caption{Left panel: The potential given by Eq. (\ref{pot}) depicted in the symmetric case $\varphi = 0$ and its ratchet form for the asymmetry parameter $\varphi =\pi/2$. Right panel: The asymptotic long time averaged directed velocity $\langle \mathbf{v} \rangle$ versus the noise intensity
$Q$, being proportional to temperature $T$ of the ambient bath. Parameters are: $m = 6$, $a = 1.899$, $\omega = 0.403$, $\varphi = \pi/2$. Right panel is reproduced from \cite{spiechowicz2019chaos}.}
\label{fig4}
\end{figure}
When the noise intensity is non-vanishing, thermal fluctuations induce a stochastic dynamics which destabilizes those attractors and leads to random transitions between their coexisting basins of attraction. This situation is analogous to an escape dynamics from metastable wells in multistable equilibrium systems \cite{htb1990}. Such transitions between the running and/or locked states may generate the transient anomalous diffusion documented in the previous sections. Because we are interested not only in the asymptotic state but also in the full time dynamics, it is useful to consider the velocity of the Brownian motor in the presence of thermal noise averaged over the realizations and, additionally, over the temporal driving period; i.e.,
\begin{equation}
\label{dvelocity}
\langle \mathbf{v}(t) \rangle = \frac{\omega}{2\pi} \int_t^{t + 2\pi/\omega} ds \, \langle \dot{x}(s) \rangle
\end{equation}
and its variance
\begin{equation}
\label{vvariance}
\sigma_\mathbf{v}^2(t) = \langle \mathbf{v}^2(t) \rangle - \langle \mathbf{v}(t) \rangle^2 \;.
\end{equation}
In the asymptotic long time limit these double-averaged quantities become time-independent, while the quantities averaged over the noise alone become time-periodic, reflecting the asymptotic time-periodic (with period $\mathsf{T}$) phase-space probability $P_{as}(x, v, t)= P_{as}(x, v, t + \mathsf{T})$. Put differently, in the asymptotic long time limit, the mean velocity $\langle \dot{x}(t)\rangle$ takes the form of a Fourier series over all possible higher harmonics of the driving force \cite{jung1990,jung1993, gammaitoni1998}:
\begin{equation}
\lim_{t \gg 1} \, \langle \dot{x}(t) \rangle = \langle \mathbf{v} \rangle + v_{\omega}(t) + v_{2\omega}(t) + ...
\end{equation}
where $\langle \mathbf{v} \rangle$ is the time-independent (dc) component while $v_{n\omega}(t)$ denote time-periodic higher harmonic functions of zero average over the fundamental period $\mathsf{T}=2\pi/\omega$ of the driving. The averaged directed velocity $\langle \mathbf{v} \rangle$ can also be obtained from Eq. (\ref{dvelocity}), namely,
\begin{equation}
\label{v}
\langle \mathbf{v} \rangle = \lim_{t \gg 1} \, \langle \mathbf{v}(t) \rangle.
\end{equation}
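In a simulation, the double average of Eqs. (\ref{dvelocity}) and (\ref{v}) can be approximated by averaging the noise-averaged velocity over the last recorded driving period; a minimal sketch follows, where \texttt{t} and \texttt{v\_mean} are assumed to be the snapshot times and the ensemble-averaged velocities recorded from a simulation like the one sketched above.
\begin{verbatim}
# Sketch: period-averaged directed velocity.
import numpy as np

def directed_velocity(t, v_mean, omega=0.403):
    # average <v(t)> over the last full driving period T = 2*pi/omega
    T = 2.0 * np.pi / omega
    mask = t >= t[-1] - T
    return np.trapz(v_mean[mask], t[mask]) / T
\end{verbatim}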
Due to the presence of the external driving the Brownian motor is taken far away from thermal equilibrium and a time-dependent nonequilibrium state is reached in the asymptotic long time regime. Since all forces on the right-hand side of Eq. (\ref{dimlessmodel}) are non-biased, a necessary condition for the occurrence of directed transport $\langle \mathbf{v} \rangle \neq 0$ is the breaking of the reflection symmetry of the potential $U(x)$ \cite{hanggi2009, denisov2014}, cf. the left panel of Fig. \ref{fig4}.
In the right panel of Fig. \ref{fig4} we plot the directed velocity $\langle \mathbf{v} \rangle$ as a function of temperature $Q \propto T$. For vanishing intensity $Q \to 0$ of thermal fluctuations the directed velocity $\langle \mathbf{v} \rangle \to 0$, which is in agreement with the probability distribution $P(\mathbf{v}(t))$ of the individual asymptotic long time period averaged motor velocity for the deterministic variant of the system (\ref{dimlessmodel}) with $Q = 0$. This is because the weighted average over the two running attractors $v_- \approx -0.4$ and $v_+ \approx 0.4$ as well as the locked state $v_0 \approx 0$ yields a vanishing $\langle \mathbf{v} \rangle = 0$. For slightly higher temperature we observe small fluctuations around the deterministic value $\langle \mathbf{v} \rangle = 0$. A further growth of temperature causes a notable enhancement of the particle velocity $\langle \mathbf{v} \rangle \approx 0.4$. We marked this region with the cyan colour. In this regime there is no directed transport in the deterministic counterpart of the system; thermal fluctuations, however, induce it. The reason for this behaviour lies in thermally activated jumps in the phase space of the nonequilibrium dynamics.
\section{Mechanism responsible for anomalous transport}
Let us now explain the mechanisms which are at the origin of the transport anomalies presented above: \\
(i) {\it Superdiffusion} \cite{spiechowicz2016scirep}. In the deterministic case $Q = 0$ there are three attractors in the velocity subspace and there are three classes of trajectories associated with them: $x_+(t)\sim 0.4 t$, $x_-(t) \sim -0.4 t$ and $x_0(t) \sim 0$. The overall mean value of the particle position is negligibly small, $\langle x(t) \rangle \approx 0$, meaning that also $\langle \mathbf{v} \rangle \approx 0$. Moreover, this fact implies that the mean-square deviation $\langle \Delta x^2(t) \rangle = \langle x^2(t) \rangle - \langle x(t) \rangle^2 \approx \langle x^2(t) \rangle \sim t^2$. As a consequence superdiffusive transport takes place, which in fact is ballistic diffusion in this deterministic case. This superdiffusive regime is persistent only if $Q \to 0$. For $Q > 0$ thermal noise induces repeated stochastic transitions among the deterministic solutions $x_+(t)$, $x_-(t)$ and $x_0(t)$ which in turn allow for the occurrence of finite directed Brownian motor transport $\langle \mathbf{v} \rangle \ne 0$ if the reflection symmetry of the system is broken, c.f. Fig. \ref{fig4}. In particular, as temperature grows, progressively more transitions from the trajectories $x_-(t)$ and $x_0(t)$ to the solution $x_+(t) \sim 0.4 t$ are observed. There is even an intensity interval where almost all particles travel according to $x_+(t) \sim 0.4 t$, as then $\langle \mathbf{v} \rangle \approx 0.4$, c.f. the cyan colour regime in Fig. \ref{fig4}. The relaxation time of the velocity $\langle \mathbf{v}(t) \rangle$ to its asymptotic long-time value $\langle \mathbf{v} \rangle$ is the same as the lifetime $\tau_1$ for superdiffusion. If the intensity of thermal fluctuations increases, the frequency of thermally activated transitions between the deterministic solutions grows and therefore the lifetime of superdiffusion decreases.\\
(ii) {\it Subdiffusion} \cite{spiechowicz2017scirep, spiechowicz2019chaos}. For the noisy system $Q \neq 0$ the directed transport velocity $\langle \mathbf{v} \rangle \ne 0$ and the probability for the particle to be in the positive running state $v_+ \approx 0.4$ grows, whereas the corresponding quantities to stay in the negative running state $v_- \approx -0.4$ as well as in the locked state $v_0 \approx 0$ decrease. Consequently, the spread of trajectories is smaller and subdiffusion develops. It means that once the particles enter the state $v_+$ they move almost coherently. This argument is supported by panel (c) of Fig. \ref{fig1}, where the velocity fluctuations are significantly reduced within the time interval for subdiffusion. These small, but still finite fluctuations are responsible for the ultraslow subdiffusion where the observed scaling index $\alpha$ in Eq. (\ref{alpha}) is tiny but nonzero, $\alpha \ll 1$ \cite{spiechowicz2017scirep}. In the time interval $\tau_1 < t < \tau_2$ the probability for the particle to be in the state $v_+$ is extremely close to unity, meaning that almost all particle trajectories are localized in this regime.
Finally, for sufficiently long times $t > \tau_2$ random dynamics induced by thermal fluctuations again activate jumps between the coexisting trajectories, thus eventually leading to normal diffusion.\\
(iii) {\it Non-monotonic temperature dependence of the diffusion coefficient} \cite{spiechowicz2017chaos}. Let us focus on two exemplary temperatures $Q_1 < Q_2$, e.g. $Q_1 = 10^{-2}$ and $Q_2 = 1$ in Fig. \ref{fig2}, in order to explain the mechanism responsible for the non-monotonic temperature dependence of the diffusion coefficient. Here we observe that $D(Q_1) > D(Q_2)$. For the lower temperature $Q_1 = 10^{-2}$ the averaged directed velocity is $\langle \mathbf{v} \rangle \approx 0.25 > 0$, whereas for larger $Q_2 = 1$ the velocity $\langle \mathbf{v} \rangle \approx 0$, i.e. $\langle \mathbf{v} \rangle (Q_1) > \langle \mathbf{v} \rangle (Q_2) \approx 0$. The latter observation means that for the lower temperature the deterministic structure of the three attractors $\{v_+, v_0, v_- \}$ is still present and plays an important role in controlling the diffusive properties of the system. In particular, because then $\langle \mathbf{v} \rangle > 0$, the majority of trajectories is traveling with the positive velocity $v_+$; nevertheless, a still significant fraction of them follows the locked solution $v_0$ and likewise the negative running solution $v_-$. The probability distribution for the particle velocity can approximately be represented by a sum of three Gaussians of different mean values, representing the corresponding deterministic solutions $\{v_+, v_0, v_- \}$. This causes a large overall spread among the particles and, as a consequence, a large diffusion coefficient. For the larger temperature, where $\langle \mathbf{v} \rangle \approx 0$, the deterministic structure of attractors ceases to be of relevance. In such a case the probability distribution of the particle velocity can be approximated by a single Gaussian with zero mean. Now, the spread between the trajectories is only due to the variance of the latter probability distribution, which is significantly smaller as compared to the case with lower temperatures. Consequently the diffusion coefficient is reduced. For even larger temperature $Q > Q_2$ the diffusion coefficient behaves in a standard way and increases with increasing temperature. This corroborates the finding that for higher temperature the variance of the Gaussian probability distribution for the particle velocity becomes larger.
\section{Summary}
We have shown that for an inertial nonlinear Brownian motor dynamics modeled by an equation which at first glance may appear simple, i.e., a one-dimensional Newton equation driven by random forces and external driving, the resulting diffusive dynamics manifests an exceptionally rich spectrum of physical phenomena, including counter-intuitive anomalous transport behaviour. The main reason responsible for these features is that the system operates far away from thermal equilibrium. Even the asymptotic long-time state is a manifestly nonequilibrium one and its form is far from being accessible analytically. Therefore, the only applicable scheme for the analysis of this archetypal setup is via numerical simulations. The latter can explain the physics of the occurring phenomena, but only on a qualitative level. A remaining open question then is to what extent this simple, stylized setup with its already complex behaviour can still serve as a trustworthy paradigm for describing those anomalous transport features occurring in more realistic complex systems possessing many more degrees of freedom.
\section*{Acknowledgment}
This work was supported by the Grant NCN 2017/26/D/ST2/00543 (J.S. and J.{\L}.) and likewise by
the Deutsche Forschungsgemeinschaft (DFG) Grant No. HA1517/42-1 (P.H.).
\section{Background}\label{sec:Background}
\begin{tabular}{|c|l|}
\hline
\multicolumn{2}{|c|}{\bf Notations}\\
\hline
$\mathrm{N}$ & number of coordinates/bit positions\\ &in the compressed data \\
\hline
$\mathrm{\psi}$ & upper bound on the number of $1$'s in \\&any binary vector.\\
\hline
$\mathrm{\Psi}$ & upper bound on the norm of any \\&real-valued vector.\\
\hline
$||\mathbf{a}||$ & $l_2$ norm of the vector $\mathbf{a}$\\
\hline
$\mathbf{a}[i]$ & $i$-th bit position (coordinate) of \\&binary (real-valued) vector $\mathbf{a}$ .\\
\hline
$\mathrm{d_H}(\mathbf{u}, \mathbf{v})$& Hamming distance between binary\\& vectors $\mathbf{u}$ and $\mathbf{v}.$\\
\hline
$\mathrm{IP}(\mathbf{a}, \mathbf{b})$ & Inner product between binary/\\&real-valued vectors $\mathbf{a}$ and $\mathbf{b}.$\\
\hline
\end{tabular}
\subsection{Probability background}
\begin{definition}\label{definition:varDef}
The Variance of a random variable $X$, denoted $\mathrm{Var}(X)$, is defined as the expected value of the squared deviation
of $X$ from its mean.
\[
\mathrm{Var}(X)=\mathbb{E}[(X-\mathbb{E}(X))^2]=\mathbb{E}(X^2)-\mathbb{E}(X)^2.
\]
\end{definition}
\begin{definition}\label{definition:coVarDef}
Let $X$ and $Y$ be jointly distributed random variables. The \textit{Covariance} of $X$
and $Y$, denoted $\mathrm{Cov}(X, Y)$, is defined as
\[
\mathrm{Cov}(X, Y)=\mathbb{E}[(X-\mathbb{E}(X))(Y-\mathbb{E}(Y))].
\]
\end{definition}
\begin{fact}\label{fact:varProp}
Let $X$ be a random variable and $\lambda$ be a constant. Then,
$\mathrm{Var}(\lambda+X)=\mathrm{Var}(X)$ and $\mathrm{Var}(\lambda X)=\lambda^2\mathrm{Var}(X).$
\end{fact}
\begin{fact}\label{fact:varProp1}
Let $X_1, X_2, \ldots, X_n$ be a set of $n$ random variables. Then,
\[
\mathrm{Var}\left(\sum_{i=1}^n X_i \right)=\sum_{i=1}^n\mathrm{Var}\left( X_i \right)+\sum_{i \neq j} \mathrm{Cov}(X_i, X_j).
\]
\end{fact}
\begin{fact}\label{fact:coVarProp}
Let $X$ and $Y$ be a pair of random variables and $\lambda$ be a constant. Then,
$\mathrm{Cov}(\lambda X, \lambda Y)=\lambda^2\mathrm{Cov}(X, Y).$
\end{fact}
\begin{fact}[Chebyshev's inequality]\label{fact:Chebyshev}
Let $X$ be a random variable having finite mean and finite non-zero variance $\sigma^2$. Then for any real number
$\lambda>0,$
\[
\Pr[|X-\mathbb{E}(X)|\geq\lambda \sigma] \leq \frac{1}{\lambda^2}.
\]
\end{fact}
\begin{comment}
\begin{fact}[Bernstein's inequality]\label{fact:Bernstein}
Let $X_1, X_2, \ldots, X_n$ be a set of $n$ independent real-valued random variables and let
$\sigma^2=\frac{1}{n}\sum_{i=1}^n \mathrm{Var}(X_i)$. Then for any real number $\epsilon>0$,
\[
\Pr\left[\left|\sum_{i=1}^n X_i-\mathbb{E}\left(\sum_{i=1}^n X_i\right)\right|\geq\epsilon\right] \leq 2\exp\left(-\frac{n\epsilon^2}{2(\sigma^2+\epsilon/3)} \right).
\]
\end{fact}
\end{comment}
\subsection{Similarity measures and their respective compression schemes}
\paragraph{Hamming distance} Let $\mathbf{u}, \mathbf{v}\in \{0, 1\}^d$ be two binary vectors; the
Hamming distance between these two vectors is the number of bit positions where they differ.
To the best of
our knowledge, there does not exist any non-trivial compression scheme which provides
compression guarantees for the Hamming distance similar to those the JL-lemma provides for the Euclidean distance.
In the following lemma, we show that for a set of $n$ binary vectors an analogous JL-type binary-to-binary
compression (if it exists) may require a compression length linear in $n$.
Further, collision \footnote{A collision occurs when two objects hash to the same hash value.} based hashing schemes such as LSH (due to
Gionis \textit{et al.}~\cite{GIM99}, see Subsection~\ref{subsection:subsecLSH}) can be considered as
binary-to-binary compression schemes, where the size of the hashtable determines the compression length. Their technique includes randomly choosing bit positions and checking whether the query and input vectors match exactly at those bit positions.
\begin{lem}\label{lem:analogousJL}
Consider a set of $n$ binary vectors; then an analogous JL-type binary-to-binary compression (if it exists)
may require a compression length linear in $n$.
\end{lem}
\begin{proof}
Consider a set of $n$ binary vectors $\{e_i\}_{i=1}^n$ -- the standard unit vectors -- and the zero vector $e_0$.
The Hamming distance between $e_0$ and any $e_i$ is $1$, and the Hamming distance between any pair of vectors
$e_i$ and $e_j$ for $i \neq j$ is $2$.
Let $f$ be a map which maps these points into binary vectors of dimension $k$ while preserving the distance between
any pair of vectors within a factor of $1\pm \varepsilon$, for a parameter $\varepsilon>0$. Then these $n$ points $\{f(e_i)\}_{i=1}^n$ are within a
distance at most $(1+\varepsilon)$ from $f(e_0)$, and any two points $f(e_i)$ and $f(e_j)$ for $i \neq j$ are at distance
at least $2(1-\varepsilon)$. However, the total number of points at distance at most $(1+\varepsilon)$ from
$f(e_0)$ is $O(k^{1+\varepsilon})$, and since the distance between any two points $f(e_i)$ and $f(e_j)$ for $i \neq j$
is non-zero, each point $e_i$ has a distinct image. Thus
$O(k^{1+\varepsilon})$ must be at least $n$,
which gives $k=\Omega(n^{\frac{1}{1+\varepsilon}})$. Hence, for small $\varepsilon$, the compression
length can be (nearly) linear in $n$.
\end{proof}
\paragraph{Euclidean distance}
Given two vectors $\mathbf{a}, \mathbf{b} \in \mathbb{R}^d$, the Euclidean distance
between them is denoted by $||\mathbf{a}, \mathbf{b}||$ and defined as
\\ $\sqrt{\Sigma_{i=1}^d(\mathbf{a}[i]- \mathbf{b}[i])^2}.$
A classical result of Johnson and Lindenstrauss~\cite{JL83} gives
a compression scheme which,
for any set $\mathcal{D}$ of $n$ vectors in $\mathbb{R}^d$, preserves
the Euclidean distance between every pair of vectors in $\mathcal{D}$.
\begin{lem}[JL transform~\cite{JL83}]
For any $\epsilon\in (0, 1)$, and any integer $n$,
let $k$ be a positive integer such that $k=O\left( \frac{1}{\epsilon^2}\log n \right)$.
Then for any set $\mathcal{D}$ of $n$ vectors in $\mathbb{R}^d$, there is a map $f: \mathbb{R}^d\rightarrow \mathbb{R}^k$
such that for any pair of vectors $\mathbf{a}, \mathbf{b}$ in $\mathcal{D}:$
\[
(1-\epsilon)||\mathbf{a}, \mathbf{b}||^2\leq ||f(\mathbf{a}), f(\mathbf{b})||^2\leq (1+\epsilon)||\mathbf{a}, \mathbf{b}||^2.
\]
Furthermore, the mapping $f$ can be found in randomized polynomial time.
\end{lem}
In several follow-up works on the JL lemma, the function $f$ is realized as a random
projection matrix $R\in \mathbb{R}^{d\times k}$, whose entries can be constructed
from a Gaussian distribution, due to Indyk and Motwani~\cite{IM98},
or uniformly from $\{+1,-1\}$, due to Achlioptas~\cite{Achlioptas03}.
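As a concrete illustration, the following minimal Python sketch (our own; the names and the $1/\sqrt{k}$ scaling convention are ours) implements the Achlioptas-style $\{+1,-1\}$ projection:
\begin{verbatim}
import numpy as np

def jl_project(X, k, seed=0):
    # Compress the rows of X from d to k dimensions with a random
    # +/-1 matrix, scaled by 1/sqrt(k) so that squared Euclidean
    # distances are preserved in expectation.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.choice([-1.0, 1.0], size=(d, k))
    return X @ R / np.sqrt(k)

X = np.random.rand(5, 1000)   # 5 points in R^1000
Y = jl_project(X, k=200)
# Pairwise distances of the rows of Y approximate those of X.
\end{verbatim}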
\paragraph{Inner product}
Given two vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^d$, the inner product $\langle \mathbf{u}, \mathbf{v}\rangle $
between them is defined as $$\langle \mathbf{u}, \mathbf{v}\rangle :=\Sigma_{i=1}^d\mathbf{u}[i]\mathbf{v}[i].$$
Compression schemes that preserve the inner product have received considerable attention recently.
In the case of binary data, under a sparsity assumption (a bound on the number of $1$s),
there are schemes which, by padding (adding a few extra bits to the vector),
reduce the inner product of the original data to the Hamming distance~\cite{BeraP16}
or the Jaccard similarity~\cite{ShrivastavaWWW015}; the compression scheme for
Hamming or Jaccard can then be applied to the padded version of the data. Similarly,
in the case of real-valued data, a padding technique is known that
reduces the inner product to the
Euclidean distance~\cite{Shrivastava014}. Recently, an interesting work by
Ata Kab\'{a}n~\cite{Kaban15} suggested a compression scheme \textit{via} the random projection method. The scheme
approximately preserves the inner product between any pair of input points, and its compression bound
matches that of the JL transform~\cite{JL83}.
\paragraph{Jaccard similarity} Binary vectors can also be considered as sets over
the universe of all possible features, where a set contains exactly those elements whose
entries are non-zero in the corresponding binary vector. For example, two vectors $\mathbf{u}, \mathbf{v}\in \{0, 1\}^d$
can be viewed as two sets $\mathbf{u}, \mathbf{v}\subseteq \{1, 2, \ldots, d\}$.
Here, the underlying similarity measure of interest is the Jaccard
similarity, defined as
$\mathrm{JS}(\mathbf{u}, \mathbf{v})=\frac{|\mathbf{u} \cap \mathbf{v}|}{|\mathbf{u} \cup \mathbf{v}|}.$
A celebrated work by Broder
\textit{et al.}~\cite{Broder00,BroderCFM98,BroderCPM00} suggested a
technique to compress a collection of sets while preserving the Jaccard similarity between any pair of sets.
Their technique consists of taking a random permutation
of $\{1, 2, \ldots, d\}$ and assigning to each set the element that maps to the
minimum under that permutation. This compression scheme is popularly known as Minwise hashing.
\begin{definition}[Minwise Hash function]\label{defn:minwise}
Let $\pi$ be a permutation over $\{1, \ldots, d\}$; then for a set $\mathbf{u}\subseteq \{1,\ldots, d\}$,
define $h_\pi(\mathbf{u}) = \arg\min_{i \in \mathbf{u}} \pi(i)$. Then, due to~\cite{Broder00,BroderCFM98,BroderCPM00},
\begin{align*}
\Pr[h_\pi(\mathbf{u})=h_\pi(\mathbf{v})]=\frac{|\mathbf{u}\cap \mathbf{v}|}{|\mathbf{u} \cup \mathbf{v}|}.
\end{align*}
\end{definition}
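The following minimal Python sketch (our own illustration; explicit permutations are used for clarity, whereas practical implementations typically replace them with random hash functions) estimates the Jaccard similarity as the fraction of matching minwise hashes:
\begin{verbatim}
import random

def minhash(u, perm):
    # Minwise hash of the set u under a permutation of
    # {0,...,d-1}: the element of u with the smallest image.
    return min(u, key=lambda i: perm[i])

def jaccard_estimate(u, v, d, num_hashes=200, seed=0):
    rng = random.Random(seed)
    matches = 0
    for _ in range(num_hashes):
        perm = list(range(d))
        rng.shuffle(perm)
        matches += minhash(u, perm) == minhash(v, perm)
    return matches / num_hashes

u, v = {0, 1, 2, 3}, {2, 3, 4, 5}
print(jaccard_estimate(u, v, d=6))  # true JS = 2/6
\end{verbatim}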
\subsection{Locality Sensitive Hashing}\label{subsection:subsecLSH}
LSH provides an algorithm, or alternatively a data structure, for efficient approximate
nearest neighbor ($c$-$\mathrm{NN}$) search in high dimensional spaces.
We formally state the problem as follows:
\begin{definition}[$c$-Approximate Nearest Neighbor ($c$-$\mathrm{NN}$)]\label{definition:cNN}
Let $\mathcal{D}$ be a set of points in $\mathbb{R}^d$, and let $\mathrm{Sim}(.,.)$ be a desired similarity measure. Then for parameters $S, c>0$,
the $c$-$\mathrm{NN}$ problem is to construct a data structure that, given any query point
$\mathbf{q}$,
reports a $cS$-near neighbor of $\mathbf{q}$ in $\mathcal{D}$ if
there is an $S$-near neighbor of $\mathbf{q}$ in $\mathcal{D}$.
Here, we say a point $\mathbf{x}\in \mathcal{D}$ is an $S$-near neighbor of $\mathbf{q}$ if $\mathrm{Sim}(\mathbf{q}, \mathbf{x})>S.$
\end{definition}
In the following we define the concept of locality sensitive hashing (LSH), which yields a
data structure for solving the $c$-$\mathrm{NN}$ problem.
\begin{definition}[Locality sensitive hashing~\cite{IM98}]\label{definition:LSH}
Let $\mathcal{D}$ be a set of $n$ vectors in $\mathbb{R}^d$, and $U$ be the hashing
universe. Then, a family $\mathcal{H}$ of functions from $\mathcal{D}$ to $U$
is called $(S, cS, p_1, p_2)$-sensitive for a similarity measure $\mathrm{Sim}(.,.)$ if for any $\mathbf{x}, \mathbf{y} \in \mathcal{D}$,
\begin{itemize}
\item if $\mathrm{Sim}(\mathbf{x}, \mathbf{y})\geq S$, then $\displaystyle \Pr_{h \in \mathcal{H}}[h(\mathbf{x})=h(\mathbf{y})]\geq p_1$,
\item if $\mathrm{Sim}(\mathbf{x}, \mathbf{y})\leq cS$, then $\displaystyle \Pr_{h \in \mathcal{H}}[h(\mathbf{x})=h(\mathbf{y})]\leq p_2.$
\end{itemize}
\end{definition}
Clearly, any such scheme is interesting only when $p_1> p_2$ and $c<1$. Let $K$ and $L$ be
the parameters of the LSH data structure, where $K$ is the number of hashes in each hash table
and $L$ is the number of hash tables. Then, due to \cite{IM98,GIM99}, we
have $K=O\left(\log_{\frac{1}{p_2}} n\right)$ and
$L=O\left(n^{\rho}\log n\right)$, where $\rho=\frac{\log p_1}{\log p_2}.$
Thus, given a family of $(S, cS, p_1, p_2)$-sensitive hash functions, and using the result of~\cite{IM98,GIM99}, one can
construct a data structure for $c$-$\mathrm{NN}$ with $O(n^{\rho}\log n)$ query time and $O(n^{1+\rho})$ space.
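A minimal Python sketch of this construction is given below (our own illustration; \texttt{hash\_family} is a hypothetical callable returning a fresh random hash function, e.g.\ a single sampled bit position for the Hamming case):
\begin{verbatim}
import random
from collections import defaultdict

def build_lsh_tables(data, hash_family, K, L, seed=0):
    # L tables; each key is the concatenation of K hash values.
    rng = random.Random(seed)
    tables = []
    for _ in range(L):
        hs = [hash_family(rng) for _ in range(K)]
        table = defaultdict(list)
        for idx, x in enumerate(data):
            table[tuple(h(x) for h in hs)].append(idx)
        tables.append((hs, table))
    return tables

def lsh_query(tables, q):
    # Candidates are points colliding with q in at least one table.
    candidates = set()
    for hs, table in tables:
        candidates.update(table.get(tuple(h(q) for h in hs), []))
    return candidates

# Example hash family for binary vectors: sample one bit position.
def bit_hash_family(rng, d=100):
    i = rng.randrange(d)
    return lambda x: x[i]
\end{verbatim}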
\subsubsection{How to convert similarity preserving compression schemes to LSH?}
LSH schemes for various similarity measures can be viewed as first compressing the
input so that the desired similarity measure is preserved, and then applying collision
based hashing on top of it. If a similarity preserving compression scheme provides a
guarantee similar to that of Definition~\ref{definition:LSH}, then for parameters -- the similarity
threshold $S$ and $c$ -- one can construct the data
structure for LSH (hash tables with parameters $K$ and $L$) for the $c$-$\mathrm{NN}$ problem via \cite{IM98,GIM99}.
\section{A compression scheme for high dimensional sparse binary data}\label{sec:BinaryResult}
We first formally define our Compression Scheme as follows:
\begin{definition}[\textbf{B}inary \textbf{C}ompression \textbf{S}cheme]\label{defi:bcs}
Let $\mathrm{N}$ be the number of buckets, for $i=1$ to $d$, we randomly assign
the $i$-th position to a bucket number $b(i)$ $\in \{1, \ldots, \mathrm{N}\}$. Then a vector $\mathbf{u} \in \{0, 1\}^d$
is compressed into a vector $\mathbf{u}' \in \{0, 1\}^{\mathrm{N}}$ as follows:
\[\mathbf{u}'[j] = \sum_{i : b(i) = j} \mathbf{u}[i] \pmod 2.\]
\end{definition}
\begin{note}
For brevity we denote the Binary Compression Scheme as $\mathrm{BCS}$.
\end{note}
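A minimal Python sketch of the $\mathrm{BCS}$ is given below (our own illustration; all names are ours, and $0$-based indexing is used):
\begin{verbatim}
import random

def bcs_compress(u, N, bucket):
    # u'[j] is the parity (sum mod 2) of the bits of u whose
    # positions are mapped to bucket j.
    out = [0] * N
    for i, bit in enumerate(u):
        out[bucket[i]] ^= bit
    return out

# One random bucket assignment b : {0,...,d-1} -> {0,...,N-1},
# shared by all vectors of the data set.
d, N = 1000, 64
rng = random.Random(0)
bucket = [rng.randrange(N) for _ in range(d)]
\end{verbatim}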
\paragraph{Some intuition} Consider two binary vectors $\mathbf{u}, \mathbf{v} \in\{0, 1\}^d$. We call a bit position
\textit{``active''} if at least one of $\mathbf{u}$ and $\mathbf{v}$ has value $1$ in that position.
Let $\mathrm{\psi}$ be the maximum number of $1$s in any vector; then
there can be at most $2\mathrm{\psi}$ active positions shared between the vectors $\mathbf{u}$ and $\mathbf{v}$.
Further, suppose that under the $\mathrm{BCS}$, $\mathbf{u}$ and $\mathbf{v}$ get
compressed into binary vectors $\mathbf{u'}, \mathbf{v'} \in \{0, 1\}^{\mathrm{N}}$.
In the compressed vectors, we call a particular bit position \textit{``pure''} if the number of
active positions mapped to it is at
most one; otherwise we call it \textit{``corrupted''}. It is easy to see that the contribution of the pure bit
positions in $\mathbf{u'}, \mathbf{v'}$ towards the Hamming distance (or inner product similarity) is exactly equal to the
contribution of the bit positions in $\mathbf{u}, \mathbf{v}$ which get mapped to the pure bit positions.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.032]{pure.jpg}
\end{figure}
The maximum possible number of corrupted bits in the compressed data is $\mathrm{\psi}$, because in the worst case
all $2\mathrm{\psi}$ active bit positions get paired up during compression.
The deviation of the Hamming distance (or inner product similarity) between $\mathbf{u'}$ and $\mathbf{v'}$ from that
of $\mathbf{u}$ and $\mathbf{v}$ corresponds to the number of corrupted bit positions shared between
$\mathbf{u'}$ and $\mathbf{v'}$. The figure above illustrates this with an example, and the lemma below analyzes it.
\begin{lem}\label{lem:compression}
Consider two binary vectors $\mathbf{u}, \mathbf{v} \in\{0, 1\}^d$ which get compressed into
vectors $\mathbf{u'}, \mathbf{v'} \in \{0, 1\}^{\mathrm{N}}$
using the $\mathrm{BCS}$, and suppose $\mathrm{\psi}$ is the maximum number of $1$s in any vector.
Then for an integer $r\geq1$ and
$\epsilon>0$, the probability that $\mathbf{u'}$ and $\mathbf{v'}$ share more than $\epsilon r$
corrupted positions is at most
$\left(\frac{2\mathrm{\psi}}{\sqrt{\mathrm{N}}}\right)^{\epsilon r}.$
\end{lem}
\begin{proof}
We first calculate the probability that a particular bit position gets corrupted between $\mathbf{u'}$ and $\mathbf{v'}$.
As there are at most $2\mathrm{\psi}$ active positions shared between the vectors $\mathbf{u}$ and $\mathbf{v}$, the number of ways of
pairing two active positions from $2\mathrm{\psi}$ active positions is at most ${2\mathrm{\psi} \choose 2}$, and any such
pairing results in a corrupted bit position in $\mathbf{u'}$ or $\mathbf{v'}$.
Then, the probability that a particular bit position in
$\mathbf{u'}$ or $\mathbf{v'}$ gets corrupted is at most $\frac{{2\mathrm{\psi} \choose 2}}{{\mathrm{N}}}\leq \left(\frac{4{\mathrm{\psi}}^2}{\mathrm{N}} \right).$
Further, if the deviation of the Hamming distance (or inner product similarity) between $\mathbf{u'}$ and $\mathbf{v'}$
from that of $\mathbf{u}$ and $\mathbf{v}$ is more than $\epsilon r$, then at least $\epsilon r$ corrupted
positions are shared between $\mathbf{u'}$ and $\mathbf{v'}$,
which implies that at least $\frac{\epsilon r}{2}$ pairs of active positions in $\mathbf{u}$ and $\mathbf{v}$
got paired up during compression.
The number of possible ways of choosing $\frac{\epsilon r}{2}$ such pairs from $2\mathrm{\psi}$ active positions
is at most ${2\mathrm{\psi} \choose \frac{\epsilon r}{2}}{2\mathrm{\psi}-\frac{\epsilon r}{2} \choose \frac{\epsilon r}{2}} \frac{\epsilon r}{2}!\leq
(2\mathrm{\psi})^{\epsilon r}.$
Since the probability that a given pair of active positions gets mapped to the same bit position of the compressed data
is $\frac{1}{\mathrm{N}}$, the probability that $\frac{\epsilon r}{2}$ pairs of active positions get mapped
to $\frac{\epsilon r}{2}$ distinct bit positions of the compressed data is at
most $(\frac{1}{\mathrm{N}})^{\frac{\epsilon r}{2}}$.
Thus, by the union bound, the probability that at least $\epsilon r$ corrupted bit
positions are shared between $\mathbf{u'}$ and $\mathbf{v'}$ is at most $\frac{(2\mathrm{\psi})^{\epsilon r}}{{\mathrm{N}}^{\frac{\epsilon r}{2}}}
=\left(\frac{2\mathrm{\psi}}{\sqrt{\mathrm{N}}}\right)^{\epsilon r}.$
\end{proof}
In the following lemma we generalize the above result to a set of $n$ binary vectors.
We give a compression bound such that any pair of compressed vectors shares
only a very small number of corrupted bits, with high probability.
\begin{lem}\label{lem:compressionBound}
Consider a set~$\mathrm{U}$ of $n$ binary vectors \\$\{\mathbf{u_i}\}_{i=1}^n\subseteq \{0, 1\}^d$,
which get compressed into a set $\mathrm{U'}$ of binary
vectors $\{\mathbf{u_i'}\}_{i=1}^n\subseteq\{0, 1\}^{\mathrm{N}}$ using the $\mathrm{BCS}$. Then for any
positive integer $r$ and $\epsilon>0$,
\begin{itemize}
\item if $\epsilon r >3 \log n$ and we set $\mathrm{N}=16{\mathrm{\psi}}^2$, then the probability
that some pair $\mathbf{u_i'}, \mathbf{u_j'}\in \mathrm{U'}$
shares more than $\epsilon r$ corrupted positions is at most $\frac{1}{n}$;
\item if $\epsilon r < 3 \log n$ and we set $\mathrm{N}=144{\mathrm{\psi}}^2\log^2n $, then the probability
that some pair $\mathbf{u_i'}, \mathbf{u_j'}\in \mathrm{U'}$
shares more than $\epsilon r$ corrupted positions is at most $\frac{1}{n}$.
\end{itemize}
\end{lem}
\begin{proof}
In the first case, for a fixed pair of compressed
vectors $\mathbf{u_i'}$ and $\mathbf{u_j'}$, by Lemma~\ref{lem:compression}, the probability that they
share more than $\epsilon r$ corrupted positions is at most $\left(\frac{2{\mathrm{\psi}}}{\sqrt{\mathrm{N}}}\right)^{\epsilon r}.$
If $\epsilon r >3 \log n$ and $\mathrm{N}=16{\mathrm{\psi}}^2$, then this probability is at most
$\left(\frac{2\mathrm{\psi}}{\sqrt{\mathrm{N}}}\right)^{\epsilon r}<\left(\frac{2{\mathrm{\psi}}}{4\mathrm{\psi}}\right)^{3 \log n}=\left(\frac{1}{2}\right)^{3 \log n}<\frac{1}{n^3}.$ As there are at most
${n \choose 2}$ pairs of vectors, the probability that some pair of compressed vectors shares more than $\epsilon r$
corrupted positions is at most $\frac{{n \choose 2}}{n^3}<\frac{1}{n}$.
In the second case,
as $\epsilon r <3 \log n$, we cannot upper bound the desired probability as in the first case.
Here we use a trick: we replicate each bit position of the input data $3 \log n$ times,
which turns a $d$-dimensional
vector into a $3d\log n$-dimensional one, and as a consequence the Hamming distance
(or inner product similarity) is also scaled up by a multiplicative factor of $3 \log n$.
We now apply the compression scheme to these scaled vectors; then for a fixed pair of compressed
vectors $\mathbf{u_i'}$ and $\mathbf{u_j'}$, the probability that they share more than
$3 \epsilon r \log n $ corrupted positions is at most
$\left(\frac{6\mathrm{\psi}\log n }{\sqrt{\mathrm{N}}}\right)^{3 \epsilon r\log n}$. As we set
$\mathrm{N}=144{\mathrm{\psi}}^2\log^2n $, this probability is at most
$\left(\frac{6\mathrm{\psi}\log n }{\sqrt{144{\mathrm{\psi}}^2\log^2n}}\right)^{3 \epsilon r \log n}< \left(\frac{1}{2}\right)^{3 \log n}<\frac{1}{n^3}.$
The final probability follows by applying the union bound over all ${n \choose 2}$ pairs.
\end{proof}
\begin{rem}
We would like to emphasize that under the $\mathrm{BCS}$, for any pair of vectors,
the Hamming distance between them in the compressed version is always less than or equal to
their original Hamming distance. Thus, this compression scheme has only one-sided error in
the Hamming case. However, in the case of inner product similarity the compression
scheme can have two-sided error,
as the inner product in the compressed version can be smaller or larger than the inner product of the original input.
We illustrate this with the following examples,
where the compression scheme assigns both bit positions of the input to one bit of the compressed data.
\begin{itemize}
\item If $\mathbf{u}=[1, 0]~ \mbox{and~} \mathbf{v}=[0, 1]$, then $\mathrm{IP}(\mathbf{u}, \mathbf{v})=0$;
and after compression $\mathbf{u'}=[1]\mbox{~and~} \mathbf{v'}=[1]$ which
gives $\mathrm{IP}(\mathbf{u'}, \mathbf{v'})=1$.
\item If $\mathbf{u}=[1, 1]~ \mbox{and~} \mathbf{v}=[1, 1]$, then $\mathrm{IP}(\mathbf{u}, \mathbf{v})=2$,
and after compression $\mathbf{u'}=[0]\mbox{~and~} \mathbf{v'}=[0]$ which
gives $\mathrm{IP}(\mathbf{u'}, \mathbf{v'})=0.$
\end{itemize}
\end{rem}
As a consequence of Lemma~\ref{lem:compressionBound} and the above remark, we present our compression
guarantee for the Hamming distance and Inner product similarity.
{ \renewcommand{\thetheorem}{\ref{theorem:compressionHamming}}
\begin{theorem}
Consider a set $\mathrm{U}$ of binary vectors \\ $\{\mathbf{u_i}\}_{i=1}^n\subseteq \{0, 1\}^d$,
a positive integer $r$, and $\epsilon>0$.
If $\epsilon r >3 \log n$, we set $\mathrm{N}=O({\mathrm{\psi}}^2)$; if $\epsilon r < 3 \log n$,
we set $\mathrm{N}=O({\mathrm{\psi}}^2\log^2n) $, and compress them
into a set $\mathrm{U'}$ of binary vectors $\{\mathbf{u_i'}\}_{i=1}^n\subseteq\{0, 1\}^{\mathrm{N}}$ using
$\mathrm{BCS}$. Then for all $\mathbf{u_i}, \mathbf{u_j}\in \mathrm{U}$,
\begin{itemize}
\item if $\mathrm{d_H}(\mathbf{u_i}, \mathbf{u_j})< r$, then $\Pr [\mathrm{d_H}({\mathbf{u_i}}', {\mathbf{u_j}}')< r]=1$,
\item if $\mathrm{d_H}(\mathbf{u_i}, \mathbf{u_j})\geq (1+\epsilon)r$, then $\Pr [\mathrm{d_H}({\mathbf{u_i}}', {\mathbf{u_j}}')< r]<\frac{1}{n}.$
\end{itemize}
\end{theorem}\addtocounter{theorem}{-1}}
{\renewcommand{\thetheorem}{\ref{theorem:compressionIP}}
\begin{theorem}
Consider a set $\mathrm{U}$ of binary vectors \\$\{\mathbf{u_i}\}_{i=1}^n\subseteq \{0, 1\}^d$,
a positive integer $r$, and $\epsilon>0$.
If $\epsilon r >3 \log n$, we set $\mathrm{N}=O({\mathrm{\psi}}^2)$; if $\epsilon r < 3 \log n$,
we set $\mathrm{N}=O({\mathrm{\psi}}^2\log^2n) $, and compress them into
a set $\mathrm{U'}$ of binary vectors
$\{\mathbf{u_i'}\}_{i=1}^n\subseteq\{0, 1\}^{\mathrm{N}}$ using
$\mathrm{BCS}$. Then for all $\mathbf{u_i}, \mathbf{u_j}\in \mathrm{U}$ the following is true with probability
at least $1-\frac{1}{n},$
\[
(1-\epsilon)\mathrm{IP}(\mathbf{u_i}, \mathbf{u_j})\leq \mathrm{IP}({\mathbf{u_i}}', {\mathbf{u_j}}')\leq (1+\epsilon)\mathrm{IP}(\mathbf{u_i}, \mathbf{u_j}).
\]
\end{theorem}\addtocounter{theorem}{-1}}
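The following short Python check (our own illustration, repeating the $\mathrm{BCS}$ sketch above for self-containedness; all names are ours) demonstrates the one-sided error for Hamming distance and the two-sided behavior for the inner product on random sparse vectors:
\begin{verbatim}
import random

def bcs_compress(u, N, bucket):
    out = [0] * N
    for i, bit in enumerate(u):
        out[bucket[i]] ^= bit
    return out

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def inner(u, v):
    return sum(a & b for a, b in zip(u, v))

d, N = 1000, 64
rng = random.Random(0)
bucket = [rng.randrange(N) for _ in range(d)]

u, v = [0] * d, [0] * d
for i in rng.sample(range(d), 20):
    u[i] = 1
for i in rng.sample(range(d), 20):
    v[i] = 1

uc, vc = bcs_compress(u, N, bucket), bcs_compress(v, N, bucket)
# The Hamming distance can only shrink under BCS ...
assert hamming(uc, vc) <= hamming(u, v)
# ... while the inner product may move in either direction.
print(inner(u, v), inner(uc, vc))
\end{verbatim}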
\subsection{A tighter analysis for Hamming distance}
In this subsection, we strengthen our analysis for the Hamming case
and show a compression bound which is independent of the dimension
and the sparsity, and depends only on the Hamming distance between
the vectors.
However, we show this result only in expectation, and only for a pair of vectors.
For a pair of vectors $\mathbf{u}, \mathbf{v}\in \{0, 1\}^d$, we say
that a bit position is \textit{``unmatched''} if exactly one of the vectors has value $1$ in that position
and the other has value $0$. We say that a bit position of the compressed data is an \textit{``odd-bit''} position if an odd
number of unmatched positions get mapped to it. Let $\mathbf{u}$ and $\mathbf{v}$ get compressed into vectors $\mathbf{u'}$ and $\mathbf{v'}$ using the
$\mathrm{BCS}$. Our observation is that
each odd-bit position contributes exactly $1$ to the Hamming distance in the compressed data.
We illustrate this with an example: let $\mathbf{u}[i,j,k]=[1,0,1],~\mathbf{v}[i,j,k]=[0,1,0]$,
and let $i, j, k$ get mapped to bit position $i'$ (say) of the compressed data; then $\mathbf{u'}[i']=0, \mathbf{v'}[i']=1$,
and clearly $\mathrm{d_H}(\mathbf{u'}[i'], \mathbf{v'}[i'])=1.$
{\renewcommand{\thetheorem}{\ref{theorem:compressionR}}
\begin{theorem}
Consider two binary vectors $\mathbf{u}, \mathbf{v} \in\{0, 1\}^d$, which get compressed into
vectors $\mathbf{u'}, \mathbf{v'} \in \{0, 1\}^{\mathrm{N}}$ using $\mathrm{BCS}$. If we set $\mathrm{N}=O(r^2)$, then
\begin{itemize}
\item if $\mathrm{d_H}(\mathbf{u}, \mathbf{v})< r$, then $\Pr [\mathrm{d_H}({\mathbf{u}}', {\mathbf{v}}')< r]=1$, and
\item if $\mathrm{d_H}(\mathbf{u}, \mathbf{v})\geq 4r$, then $\mathbb{E}[\mathrm{d_H}(\mathbf{u'}, \mathbf{v'})]>2r.$
\end{itemize}
\end{theorem}\addtocounter{theorem}{-1}}
\begin{proof}
Let $\mathrm{{\psi}_u}$ denote the number of unmatched bit positions between $\mathbf{u}$ and $\mathbf{v}$.
As mentioned earlier, if an odd number of unmatched bit positions gets mapped to a
particular bit of the compressed data, then that bit contributes
$1$ to the Hamming distance; we call such a bit position an \textit{``odd-bit''}
position. In order to bound the Hamming distance in the compressed data,
we need to bound the number of such odd-bit positions. We first calculate the probability
that a particular bit position, say the $k$-th position, of the compressed data is odd.
We denote this probability by ${\Pr}_{\mathrm{odd}}^{\mathrm{(k)}}$.
We compute it using the following binomial distribution:
\begin{align*}
{\Pr}_{\mathrm{odd}}^{\mathrm{(k)}}&=\sum_{{i\bmod 2}=1}^{\mathrm{{\psi}_u}} \frac{1}{\mathrm{N}^i}{\mathrm{{\psi}_u} \choose i}\left(1-\frac{1}{\mathrm{N}}\right)^{\mathrm{{\psi}_u}-i}.
\end{align*}
Similarly, we compute the probability that the $k$-th bit is even:
\[
{\Pr}_{\mathrm{even}}^{\mathrm{(k)}}=\sum_{{i\bmod 2}=0}^{\mathrm{{\psi}_u}} \frac{1}{\mathrm{N}^i}{\mathrm{{\psi}_u} \choose i}\left(1-\frac{1}{\mathrm{N}}\right)^{\mathrm{{\psi}_u}-i}.
\]
We have,
\[
{\Pr}_{\mathrm{even}}^{\mathrm{(k)}}+{\Pr}_{\mathrm{odd}}^{\mathrm{(k)}}=1. \numberthis\label{eq:eq1}
\]
Further,
\begin{align*}
&{{\Pr}_{\mathrm{even}}^{\mathrm{(k)}}-{\Pr}_{\mathrm{odd}}^{\mathrm{(k)}}}\\
&=\sum_{{i\bmod 2}=0}^{\mathrm{{\psi}_u}} \frac{1}{\mathrm{N}^i}{\mathrm{{\psi}_u} \choose i}\left(1-\frac{1}{\mathrm{N}}\right)^{\mathrm{{\psi}_u}-i}\\&-\sum_{{i\bmod 2}=1}^{\mathrm{{\psi}_u}} \frac{1}{\mathrm{N}^i}{\mathrm{{\psi}_u} \choose i}\left(1-\frac{1}{\mathrm{N}}\right)^{\mathrm{{\psi}_u}-i}\\
&=\left(1-\frac{1}{\mathrm{N}} -\frac{1}{\mathrm{N}} \right)^{\mathrm{{\psi}_u}}\\
&=\left(1-\frac{2}{\mathrm{N}}\right)^{\mathrm{{\psi}_u}}.\numberthis\label{eq:eq2}
\end{align*}
Thus, we have the following from Equation~\ref{eq:eq1} and Equation~\ref{eq:eq2}
\begin{align*}
{\Pr}_{\mathrm{odd}}^{\mathrm{(k)}}&=\frac{1}{2}\left(1- \left(1-\frac{2}{\mathrm{N}}\right)^{\mathrm{{\psi}_u}}\right)\\
&\geq \frac{1}{2}\left(1- \exp\left(-\frac{2\mathrm{{\psi}_u}}{\mathrm{N}}\right)\right). \numberthis\label{eq:eq3}
\end{align*}
The last inequality follows since $1-x\leq e^{-x}$ for all $x.$
Thus expected number of odd-bits is at least $$\frac{\mathrm{N}}{2}\left(1- \exp\left(-\frac{2\mathrm{{\psi}_u}}{\mathrm{N}}\right)\right).$$
We now split into two cases: $1)$ $\mathrm{{\psi}_u}<20r$, and $2)$ $\mathrm{{\psi}_u}\geq 20r$. We address them one by one.
\textbf{Case 1:} $\mathrm{{\psi}_u}< 20r$.
We handle this case using Lemma~\ref{lem:compression}. It is easy to verify that in
the case of Hamming distance the analysis of Lemma~\ref{lem:compression} also holds if
we consider ``unmatched'' bits instead of ``active'' bits.
Thus, the probability that at least $r$ corrupted bit
positions are shared between $\mathbf{u'}$ and $\mathbf{v'}$ is at most $\left(\frac{2\mathrm{{\psi}_u}}{\sqrt{\mathrm{N}}}\right)^{r}.$
We wish to set the value of $\mathrm{N}$
such that $\mathbf{u'}$ and $\mathbf{v'}$ share more than $r$ corrupted positions with probability at most $1/3$.
If we set $\mathrm{N}=4{\mathrm{{\psi}_u}^2}3^{\frac{2}{r}}$, then
the above
probability is at most $\left({\frac{2\mathrm{{\psi}_u}}{\sqrt{4{\mathrm{{\psi}_u}^2}3^{\frac{2}{r}}}}}\right)^r=\frac{1}{3}.$
Thus, when $\mathrm{N}=4{\mathrm{{\psi}_u}^2}3^{\frac{2}{r}}=O(\mathrm{{\psi}_u}^2)=O(r^2)$ (as $\mathrm{{\psi}_u}<20r$ and $r\geq 2$), with probability at least $2/3$,
at most $r$ corrupted bits
are shared between $\mathbf{u'}$ and $\mathbf{v'}$, in which case $\mathrm{d_H}(\mathbf{u'}, \mathbf{v'})\geq 4r-r= 3r$. As a consequence, we have $\mathbb{E}[\mathrm{d_H}(\mathbf{u'}, \mathbf{v'})]>\frac{2}{3}\cdot 3r=2r.$
\textbf{Case 2:} $\mathrm{{\psi}_u}\geq 20r$.
We continue here from Equation~\ref{eq:eq3}
\begin{align*}
&\text{Expected number of odd buckets}\\&\geq \frac{\mathrm{N}}{2}\left(1- \exp\left(-\frac{2\mathrm{{\psi}_u}}{\mathrm{N}}\right)\right)\\
&\geq \frac{\mathrm{N}}{2}\left(1- \exp\left(-\frac{40r}{\mathrm{N}}\right)\right)\\
&= 4r^2\left(1- \exp\left(-\frac{5}{r}\right)\right) \numberthis\label{eq:eq4}\\
&> 4r^2\left(\frac{1}{2r}\right) \numberthis\label{eq:eq5}\\
&=2r.
\end{align*}
Equality~\ref{eq:eq4} follows by setting $\mathrm{N}=8r^2$ and
Inequality~\ref{eq:eq5} holds as $1-\exp\left(-\frac{5}{r}\right)>\frac{1}{2r}$ for $r \geq 2$.
Finally, Cases $1$ and $2$ together complete the proof of the theorem.
\end{proof}
\section{Conclusion and open questions}\label{sec:Conclusion}
In this work, to the best of our knowledge, we obtain the first efficient binary to binary
compression scheme for preserving Hamming distance and inner product for high dimensional sparse data. For
Hamming distance, our scheme in fact obtains the ``no-false-negative''
guarantee analogous to the one obtained in the recent paper by Pagh~\cite{Pagh16}.
Contrary to the ``local'' projection approach of previous schemes, we first randomly
partition the dimensions, and then take a ``global summary'' within each partition.
The compression length of our scheme depends only on the sparsity and is independent
of the dimension, as opposed to previously known schemes.
We also obtain a generalization of our result to the real-valued setting. Our work
leaves several open questions: improving the bounds of our compression
scheme, and extending it to other similarity measures such as cosine and Jaccard similarity,
are the major open questions of this work.
\section{Introduction}
Technological advancements have led to the generation of huge
amounts of data over the web, such as text, images, audio, and video.
Needless to say, most of these datasets are high dimensional.
Searching for similar data objects in such massive and high dimensional datasets is becoming a fundamental
subroutine in many scenarios like clustering, classification, nearest neighbor search, and ranking.
However, due to the \textit{``curse of dimensionality''},
a brute-force way to compute the similarity scores on such datasets is very expensive and at times infeasible.
Therefore it is quite natural to investigate techniques that compress the dimension of a dataset
while preserving the similarity between data objects. Various compression schemes
have already been studied for different similarity measures.
We would like to emphasize that any such compression scheme is useful only when it satisfies the following guarantee:
when data objects are ``nearby'' (under the desired similarity measure), they should
remain nearby in the compressed version, and when they are ``far'', they should remain far in the
compressed version. For probabilistic compression schemes, the above should hold with high probability.
Below we discuss a few such notable schemes. In this work we consider binary and real-valued datasets.
For binary data we focus on Hamming distance and inner product, while for real-valued data we focus on
Euclidean distance and inner product.
\subsection{Examples of similarity preserving compressions }
Data objects in a dataset can be considered as points (vectors) in a high dimensional space.
Suppose we have $n$ vectors (binary
or real-valued) in $d$-dimensional space.
\begin{itemize}
\item Gionis, Indyk, and Motwani~\cite{GIM99} proposed a data structure to solve the
approximate nearest neighbor ($c$-$\mathrm{NN}$) problem
in binary data for the {\bf Hamming distance}. Their scheme is popularly known
as {\bf Locality Sensitive Hashing} (LSH). Intuitively, their data structure can be
viewed as a compression of a binary vector, obtained by projecting it onto
randomly chosen bit positions.
\item The \textbf{JL transform}~\cite{JL83} suggests a compression scheme for real-valued data.
For any $\epsilon>0$, it compresses the dimension of the points from $d$ to
$O\left(\frac{1}{\epsilon^2}\log n\right)$ while preserving
the \textbf{Euclidean distance} between any pair of points within a factor of $(1\pm \epsilon)$.
\item Given two vectors $\mathbf{u},\mathbf{v} \in \mathbb{R}^d$, the \textbf{inner product similarity}
between them is defined as $\langle \mathbf{u},\mathbf{v}\rangle :=\Sigma_{i=1}^d\mathbf{u}[i] \mathbf{v}[i].$
Ata Kab\'{a}n~\cite{Kaban15} suggested a compression scheme for real-valued data which preserves the inner product
\textit{via} \textbf{random projection}. On the other hand, if the input data is binary and it
is desirable to obtain a compression that is also binary, then to the best of our
knowledge no compression scheme is available which achieves a non-trivial compression.
However, under some sparsity assumption (a bound on the number of $1$s),
there are schemes available which, \textit{via} asymmetric padding (adding a few extra bits to the vector),
reduce the inner product similarity of the original data to the Hamming distance~\cite{BeraP16}
or the Jaccard similarity (see Preliminaries for a definition)~\cite{ShrivastavaWWW015}. The compression scheme for
Hamming or Jaccard can then be applied to the padded version of the data.
\item Binary data can also be viewed as a collection of sets; then the underlying similarity
measure of interest is the \textbf{Jaccard similarity}.
Broder \textit{et al.}~\cite{Broder00,BroderCFM98,BroderCPM00} suggested a compression scheme for preserving the Jaccard similarity between sets,
which is popularly known as \textbf{Minwise permutations}.
\end{itemize}
\subsection{Our focus: High dimensional (sparse) data}
In this work, we focus on high dimensional sparse data. In many real-life scenarios,
a data object is represented as a very high-dimensional but sparse vector, \textit{i.e.,} the
number of all possible attributes (features) is huge, but each data object has
only a very small subset of the attributes. For example, in the bag-of-words representation of
text data, the number of dimensions equals the size of the vocabulary, which is large.
However, each data point, say a document, contains only a small number of words from
the vocabulary, leading to a sparse vector representation. The bag-of-words representation
is also commonly used for image data, and data sparsity is prevalent in audio and
video data as well.
\subsection{Shortcomings of earlier schemes for high dimensional (sparse) data}
The quality of any compression scheme can be evaluated based on the following two parameters: 1)
the \textit{compression-length}, and 2) the amount of \textit{randomness} required for the compression.
The compression-length is defined as the dimension of the data after compression.
Ideally, it is desirable to
have both of these be small while preserving the desired accuracy of the compression. Below
we note that most of the above mentioned compression schemes
become infeasible in the case of high dimensional sparse datasets, as 1) their
compression-length is very high, and
2) the amount of randomness required for the
compression is quite large.
\begin{itemize}
\item \textbf{Hamming distance:} Consider the problem of finding $c$-$\mathrm{NN}$ (see Definition~\ref{definition:cNN})
for the Hamming distance in binary data. In the LSH scheme, the size of the hash table determines the compression-length.
The size of the hash table is $K=O\left(\log_ {\frac{1}{p_2}} n \right)$ (see Definition~\ref{definition:LSH}).
If $r=O(1)$, then $K=O\left(\log_ {\frac{1}{p_2}} n \right)= O(\frac{d}{cr}\log n)=O(d\log n)$,
which is linear in the dimension. Further, in order to randomly choose a
bit position (between $1$ and $d$), it is required to generate $O(\log d)$ random bits.
Moreover, as the size of each hash table is $K$
and the number of hash tables is $L$, it is required to generate $O(KL\log d)$ random bits to create the hash tables,
which becomes quite large, especially when $K$ is linear in $d$.
\item \textbf{Euclidean distance:} In order to achieve a compression that preserves the distance
between any pair of points via the JL transform~\cite{JL83, Achlioptas03}, it is required
to project the input matrix onto a random matrix of
dimensions $d\times k$, where $k=O\left(\frac{1}{\epsilon^2}\log n\right)$. Each entry of the
random matrix is chosen from $\{\pm 1\}$ with probability $\frac{1}{2}$ (see~\cite{Achlioptas03}),
or from a normal distribution (see~\cite{JL83}).
The compression-length of this scheme is $O\left(\frac{1}{\epsilon^2}\log n\right) $, and it requires \\$O\left(\frac{1}{\epsilon^2}d\log n\right)$
randomness.
\item \textbf{Inner product:}
Compression schemes which compress binary data into binary data while preserving the inner
product are not known. However, using the \textit{``asymmetric padding schemes''} of~\cite{BeraP16,ShrivastavaWWW015},
it is possible to obtain a compression via the Hamming or Jaccard similarity measures, and then the shortcomings of the Jaccard and Hamming schemes
carry over to such a scheme. Further, in the case of real-valued data, the compression scheme of
Ata Kab\'{a}n~\cite{Kaban15} has
compression-length $O\left(\frac{1}{\epsilon^2}\log n\right) $ and requires $O\left(\frac{1}{\epsilon^2}d\log n\right)$
randomness.
\item\textbf{Jaccard similarity:} Minwise permutations~\cite{Broder00,BroderCFM98,BroderCPM00} provide a compression scheme preserving the Jaccard similarity
of a collection of sets. A major disadvantage of this scheme is that for high dimensional
data computing the permutations is very expensive, and in order to achieve a reasonable
accuracy in the compression a large number of repetitions might be required.
Moreover, the scheme requires a substantially large
amount of randomness, which grows polynomially in the dimension.
\vspace{-0.5cm}
\paragraph{Lack of good binary to binary compression schemes}
To summarize the above, there are two main compression schemes currently available for binary data.
The first one is LSH and the second one is the JL transform.
LSH requires a compression size linear in the dimension, while the JL transform can achieve a logarithmic
compression size but compresses binary vectors to real-valued vectors. An analogue of the JL transform which compresses
binary vectors to binary vectors requires a compression-length linear in the number of data points
(see Lemma~\ref{lem:analogousJL}).
Since both the dimension and the number of data points can be large, these schemes are inefficient.
In this paper we propose an efficient binary to binary compression scheme for sparse data which works simultaneously
for both Hamming distance and inner product.
\subsection{Our contribution}
In this work we present a compression scheme for high dimensional sparse data.
In contrast with the ``local projection''
strategies used by most of the previous schemes, such as LSH~\cite{IM98,GIM99} and JL~\cite{JL83},
our scheme (exploiting sparsity) combines the following two-step approach: 1) partitioning the dimensions
into several buckets, and 2) obtaining ``global linear summaries'' of each of these buckets.
We present our results below.
\subsubsection{For binary data}
For binary data, our compression scheme provides a one-shot solution for both Hamming distance and inner
product -- the compressed data preserves both Hamming distance and inner product.
Moreover, the compression-length depends only on the sparsity of the data and is independent of its dimension.
We first informally state our compression scheme for binary data; see Definition~\ref{defi:bcs}
for a formal definition.
Given a binary vector $\textbf{u}\in \{0,1\}^{d}$, our scheme compresses it into an
$\mathrm{N}$-dimensional binary vector $\mathbf{u'}\in\{0,1\}^{\mathrm{N}}$ as follows, where $\mathrm{N}$ is specified later.
We randomly map each bit position $i\in\{1,\ldots,d\}$ of the original
data to a bucket $j\in\{1,\ldots,\mathrm{N}\}$. To compute the $j$-th bit of the compressed vector $\mathbf{u'}$,
we check which bit positions have been mapped to $j$, compute
the parity of the bits located at those positions, and assign it to $\mathbf{u'}[j].$
The following figure illustrates an example of the compression.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.033]{binary.jpg}
\end{figure}
In the following theorems, let $\mathrm{\psi}$ denote the maximum number of $1$s in any vector.
We state our results for binary data as follows:
\begin{theorem}\label{theorem:compressionHamming}
Consider a set $\mathrm{U}$ of binary vectors \\$\{\mathbf{u_i}\}_{i=1}^n\subseteq \{0, 1\}^d$,
a positive integer $r$, and $\epsilon>0$.
If $\epsilon r >3 \log n$, we set $\mathrm{N}=O({\mathrm{\psi}}^2)$; if $\epsilon r < 3 \log n$,
we set $\mathrm{N}=O({\mathrm{\psi}}^2\log^2n) $, and compress them
into a set $\mathrm{U'}$ of binary vectors $\{\mathbf{u_i'}\}_{i=1}^n\subseteq\{0, 1\}^{\mathrm{N}}$ using
our Binary Compression Scheme.
Then for all $\mathbf{u_i}, \mathbf{u_j}\in \mathrm{U}$,
\begin{itemize}
\item if $\mathrm{d_H}(\mathbf{u_i}, \mathbf{u_j})< r$, then $\Pr [\mathrm{d_H}({\mathbf{u_i}}', {\mathbf{u_j}}')< r]=1$,
\item if $\mathrm{d_H}(\mathbf{u_i}, \mathbf{u_j})\geq (1+\epsilon)r$, then $\Pr [\mathrm{d_H}({\mathbf{u_i}}', {\mathbf{u_j}}')< r]<\frac{1}{n}.$
\end{itemize}
\end{theorem}
\begin{theorem}\label{theorem:compressionIP}
Consider a set $\mathrm{U}$ of binary vectors \\$\{\mathbf{u_i}\}_{i=1}^n\subseteq \{0, 1\}^d$,
a positive integer $r$, and $\epsilon>0$.
If $\epsilon r >3 \log n$, we set $\mathrm{N}=O({\mathrm{\psi}}^2)$; if $\epsilon r < 3 \log n$,
we set $\mathrm{N}=O({\mathrm{\psi}}^2\log^2n) $, and compress them into
a set $\mathrm{U'}$ of binary vectors
$\{\mathbf{u_i'}\}_{i=1}^n\subseteq\{0, 1\}^{\mathrm{N}}$ using our Binary Compression Scheme.
Then for all $\mathbf{u_i}, \mathbf{u_j}\in \mathrm{U}$ the following is true with probability
at least $1-\frac{1}{n}$,
\[
(1-\epsilon)\mathrm{IP}(\mathbf{u_i}, \mathbf{u_j})\leq \mathrm{IP}({\mathbf{u_i}}', {\mathbf{u_j}}')\leq (1+\epsilon)\mathrm{IP}(\mathbf{u_i}, \mathbf{u_j}).
\]
\end{theorem}
In the following theorem, we strengthen the result of Theorem~\ref{theorem:compressionHamming}
and show a compression bound which is independent of the dimension
and the sparsity, and depends only on the Hamming distance between
the vectors. However, we show this result only in expectation, and only for a pair of vectors.
\begin{theorem}\label{theorem:compressionR}
Consider two binary vectors $\mathbf{u}, \mathbf{v} \in\{0, 1\}^d$, which get compressed into
vectors $\mathbf{u'}, \mathbf{v'} \in \{0, 1\}^{\mathrm{N}}$ using our Binary Compression Scheme.
If we set $\mathrm{N}=O(r^2)$, then
\begin{itemize}
\item if $\mathrm{d_H}(\mathbf{u}, \mathbf{v})< r$, then $\Pr [\mathrm{d_H}({\mathbf{u}}', {\mathbf{v}}')< r]=1$, and
\item if $\mathrm{d_H}(\mathbf{u}, \mathbf{v})\geq 4r$, then $\mathbb{E}[\mathrm{d_H}(\mathbf{u'}, \mathbf{v'})]>2r.$
\end{itemize}
\end{theorem}
\begin{rem}
To the best of our knowledge, ours is the first efficient binary to binary
compression scheme for preserving Hamming distance and inner product. For
Hamming distance, our scheme in fact obtains the ``no-false-negative''
guarantee analogous to the one obtained in the recent paper by Pagh~\cite{Pagh16}.
\end{rem}
\begin{rem}
When $r$ is constant, as mentioned above,
LSH~\cite{GIM99} requires compression
length linear in the dimension. However, due to Theorem~\ref{theorem:compressionR}, our compression length
is only constant.
\end{rem}
\begin{rem}
Our compression length is $O({\mathrm{\psi}}^2 \log^2n)$, which is independent of the dimension $d$;
whereas other schemes such as LSH may require the compression length growing linearly in $d$
and the analogue of JL-transform for binary to binary compression requires compression
length growing linearly in $n$ (see Lemma~\ref{lem:analogousJL}).
\end{rem}
\begin{rem} The randomness used by our compression scheme is $O(d \log \mathrm{N})$,
which grows logarithmically in the compression length $\mathrm{N}$, whereas the
JL transform uses randomness growing linearly in the compression length.
For all-pairs compression of $n$ data points we use $O(d (\log \mathrm{\psi} + \log \log n))$ randomness,
which grows logarithmically in the sparsity and sub-logarithmically in the number of data points.
\end{rem}
\vspace{-0.2cm}
\subsubsection{For real-valued data}
We also generalize our scheme to real-valued data and obtain compressions for Euclidean distance,
inner product, and $k$-way inner product. We first state our compression scheme as follows:
Given a vector $\textbf{a}\in \mathbb{R}^{d}$, our scheme compresses it into an
$\mathrm{N}$-dimensional vector $\boldsymbol{\alpha}\in\mathbb{R}^{\mathrm{N}}$ as follows.
We randomly map each coordinate position $i\in\{1,\ldots,d\}$ of the original
data to an integer $j\in\{1,\ldots,\mathrm{N}\}$. To compute the $j$-th coordinate of the
compressed vector $\boldsymbol{\alpha}$, we check which coordinates of the original data have been
mapped to $j$, multiply the numbers located at those positions by random variables $x_i$,
compute their sum, and assign it to $\boldsymbol{\alpha}[j]$,
where each $x_i$ takes a value in $\{-1, +1\}$ with probability $1/2$ each.
The following figure illustrates an example of the compression.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.033]{real.jpg}
\end{figure}
In the following we present our main result for real-valued data, which is a compression bound for preserving
the $k$-way inner product. For a set of $k$ vectors $\{\boldsymbol{\alpha_i}\}_{i=1}^k\in \mathbb{R}^d$, their
$k$-way inner product is defined as
$$\langle \boldsymbol{\alpha}_1\boldsymbol{\alpha}_2\ldots\boldsymbol{\alpha}_k\rangle
=\sum_{j=1}^d\boldsymbol{\alpha}_1[j]\boldsymbol{\alpha}_2[j]\ldots\boldsymbol{\alpha}_k[j],$$
where $\boldsymbol{\alpha}_1[j]$ denotes the $j$-th coordinate of the vector $\boldsymbol{\alpha}_1$.
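For concreteness, the following tiny Python helper (our own illustration) computes this quantity directly from the definition:
\begin{verbatim}
from math import prod

def kway_inner(vectors):
    # Sum over coordinates of the product of the entries.
    d = len(vectors[0])
    return sum(prod(v[j] for v in vectors) for j in range(d))

# <a1 a2 a3> = 1*3*5 + 2*4*6 = 63
print(kway_inner([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))
\end{verbatim}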
\begin{theorem}\label{theorem:compressionRealKway}
Consider a set of $k$ vectors $\{\mathbf{a_i}\}_{i=1}^k\in \mathbb{R}^d$,
which get compressed into vectors $\{\boldsymbol{\alpha_i}\}_{i=1}^k \in \mathbb{R}^{\mathrm{N}}$ using our Real Compression Scheme.
If we set $\mathrm{N}=\frac{10\mathrm{\Psi}^k}{\epsilon^2}$,
where $\mathrm{\Psi}=\max\{||{\mathbf{a_i}||^2}\}_{i=1}^k$ and $\epsilon>0$, then the following holds
\[
\Pr\left[\left|\langle\boldsymbol{\alpha}_1\boldsymbol{\alpha}_2\ldots\boldsymbol{\alpha}_k\rangle- \langle\mathbf{a}_1\mathbf{a}_2\ldots\mathbf{a}_k\rangle \right|>\epsilon \right]<1/10.
\]
\end{theorem}
\begin{rem}
An advantage of our compression scheme is that it can be constructed quite efficiently in the
streaming model. The only requirement is that, in the case of
binary data, the maximum number of $1$s in the vectors of the stream should be bounded,
and in the case of real-valued data, the norms of the vectors should be bounded.
\end{rem}
\vspace{-0.2cm}
\subsection{Comparison with previous work}
A major advantage of our compression scheme is that it provides a one-shot solution for different similarity
measures -- the binary compression scheme preserves both Hamming distance and inner product,
and the real-valued compression scheme preserves Euclidean distance, inner product, and $k$-way inner product.
The second main advantage of our compression scheme for binary data is that it gives a binary to binary
compression, as opposed to the binary to real compression of the JL transform.
The third main advantage is that its compression
size is independent of the dimension and depends only on the sparsity, as opposed to
the scheme of Gionis, Indyk, and Motwani~\cite{GIM99}, which requires linear size compression. For real-valued
data our results are weaker compared to previously known works, but they generalize to the $k$-way
inner product, which none of the previous works do.
Another advantage of our real-valued compression scheme is that
when the number of points is small (constant), then for preserving
pairwise inner products or Euclidean distances
we have a clear advantage in the amount of randomness required for the
compression:
the randomness required by our scheme
grows logarithmically in the compression length, whereas the
other schemes require randomness which grows linearly in the compression length.
\subsection*{Potential applications}
A potential use of our result is to improve approximate nearest neighbor search via composition with LSH.
Due to the ``curse of dimensionality'', many search algorithms scale poorly in high dimensional data.
So, if it is possible to obtain a succinct compression of the data while preserving the similarity
scores between pairs of data points, then such a compression naturally helps efficient search.
One can first compress the input so that it preserves the desired
similarity measure, and then apply a collision-based hashing algorithm such as
LSH~\cite{GIM99, IM98} for efficient approximate nearest neighbor ($c$-$\mathrm{NN}$) search on the compressed data.
As our compression scheme provides a guarantee similar to that of
Definition~\ref{definition:LSH},
one can construct the LSH data structure
for the approximate nearest neighbor problem on top of it.
Thus, our similarity preserving compression scheme leads to an efficient approximate nearest neighbor search.
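A minimal Python sketch of this two-stage pipeline is given below (our own illustration; all names are ours, bit sampling is used as the second-stage hash, and \texttt{bcs\_compress} repeats the $\mathrm{BCS}$ sketch for self-containedness):
\begin{verbatim}
import random
from collections import defaultdict

def bcs_compress(u, N, bucket):
    out = [0] * N
    for i, bit in enumerate(u):
        out[bucket[i]] ^= bit
    return out

def index_compressed(data, N, bucket, K, L, seed=0):
    # Stage 1: compress every vector with BCS.
    compressed = [bcs_compress(u, N, bucket) for u in data]
    # Stage 2: L bit-sampling hash tables over compressed vectors.
    rng = random.Random(seed)
    tables = []
    for _ in range(L):
        pos = [rng.randrange(N) for _ in range(K)]
        table = defaultdict(list)
        for idx, c in enumerate(compressed):
            table[tuple(c[p] for p in pos)].append(idx)
        tables.append((pos, table))
    return compressed, tables
\end{verbatim}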
There are many similarity based algorithmic methods used in large scale learning and information retrieval,
e.g., frequent itemset mining~\cite{AgrawalS94} and ROCK clustering~\cite{ROCK}. One could potentially obtain algorithmic
speed-ups in these methods via our compression schemes.
Recently, compression based on LSH for the inner product has been used to speed up the forward and backward propagation
in neural networks~\cite{deeplearning}. One could potentially use our scheme to take advantage of sparsity and
obtain further speed-ups.
\vspace{-0.4cm}
\subsection*{Organization of the paper}
In Section~\ref{sec:Background}, we present the necessary background which
helps to understand the paper. In Section~\ref{sec:BinaryResult}, we present
our compression scheme for high dimensional sparse binary data.
In Section~\ref{sec:RealResult}, we present our compression scheme for high
dimensional sparse real data. Finally in Section~\ref{sec:Conclusion}, we
conclude our discussion, and state some possible extensions of the work.
\begin{comment}
High-dimensional datasets are ubiquitous in a variety of applications
such as e-commence, computer vision, text processing, bioinformatics,
and world wide web. In these applications, data object is represented
as very high-dimensional but sparse vectors, \textit{i.e.} number of
all possible attributes (features) is huge, however, each data object
has only a very small subset of attributes. The problem of computing
significant similarity scores between these data objects is a fundamental
problem in my scenarios like clustering, classification, nearest neighbours, ranking etc.
However, due to the curse of dimensionality (i.e. due to the size of data objects)
a brute-force way to compute the similarity scores of on such data sets is very expensive.
Then it is quite natural to investigate the techniques that compress the dimension of dataset
while preserving the similarity between data objects. There are various compressing schemes
have been studied for different similarity measure.
Please note that any compressing scheme is useful only when it satisfies the two side guarantee,
\textit{i.e.} when data objects are near by, then they should remain nearby in the compressed version,
and whey they are far, they should remain far in the compressed version. In the case of probabilistic compression
schemes the above should happen with high probability.
\subsection{Existing compression schemes for binary data}
In this paragraph we briefly discuss some notable similarity measures and their
respective similarity-preserving compression schemes for binary data. Binary data
can be considered as binary vectors in some high-dimensional space. Hamming distance is
one of the most popular similarity (distance) measures for binary vectors.
Hamming similarity is the number of bit positions where two vectors have the same bit value.
In a seminal work, Gionis \textit{et al.}~\cite{GionisIM99} suggested a collision-based
hashing scheme for Hamming distance on binary data. They studied the problem of
ANN, \textit{i.e.,} given a query vector $q$, a set of $n$ input vectors, a threshold
$r$, and a constant $c>1$, the goal is to output a vector within Hamming distance $cr$
of $q$ if there is a vector within distance $r$ in the data set. Their technique
involves randomly choosing bit positions and checking whether the query and input vectors
match exactly at those bit positions. However, the main disadvantage of their
work is that when $r\ll d$, the size of their projection, $K=O(\frac{d}{cr}\log n)$,
becomes linear in the dimension.
Binary data can also be interpreted as sets. For example, two vectors $u, v\in \{0, 1\}^d$
can be considered as two sets $u, v\subseteq \{1, 2, \ldots, d\}$, where a set contains the
features corresponding to those bit positions where the vector has value $1$. The underlying
similarity measure of interest is the Jaccard similarity, which is defined as
$JS(u, v)=\frac{|u \cap v|}{|u \cup v|}.$ A celebrated line of work by Broder
\textit{et al.}~\cite{Broder00,BroderCFM98,BroderCPM00} suggested a
technique to compress a collection of sets while preserving their
Jaccard similarity. Their technique consists of taking a random permutation
of $(1, 2, \ldots, d)$ and assigning to each set the element that maps to the
minimum under that permutation. A larger number of repetitions gives a
more accurate compression. However, a main disadvantage of this
work is that for high-dimensional data computing permutations is very
expensive, and possibly not feasible in many real-life applications.
\begin{comment}
\subsection{Existing compression schemes for real data}
A classic result of Johnson and Lindenstrauss~\cite{JL83} suggests
a compression scheme for real data which preserves all pairwise Euclidean distances.
We would like to emphasize that, to the best of our knowledge, there is no direct
compression scheme known for preserving inner product similarity for real or binary data.
However, in the case of binary data, under some sparsity assumption (a bound on the number of $1$'s),
there are schemes available which, by padding (adding a few extra bits to the vector),
reduce the inner product similarity (of the original data) to Hamming~\cite{BeraP16}
or Jaccard similarity~\cite{ShrivastavaWWW015}. The compression schemes for
Hamming and Jaccard similarity can then be applied to the padded version of the data. Similarly,
in the case of real data, padding techniques are known that
reduce inner product similarity to cosine similarity~\cite{NeyshaburS15}
and Euclidean distance~\cite{Shrivastava014}.
\end{comment}
\section{A compression scheme for high dimensional sparse real data}\label{sec:RealResult}
We first define our compression scheme for real-valued data.
\begin{definition}(\textbf{R}eal-valued \textbf{C}ompression \textbf{S}cheme)\label{defi:rcs}
Let $\mathrm{N}$ be the number of buckets. For $i=1$ to $d$, we randomly assign
the $i$-th position to a bucket number $b(i) \in \{1, \ldots, \mathrm{N}\}$.
Then, for $j=1 \text{~to~} \mathrm{N}$, the $j$-th coordinate of the compressed vector
$\boldsymbol{\alpha}$ is computed as follows:
\[\boldsymbol{\alpha}[j] = \sum_{i : b(i) = j} \mathbf{a}[i]x_i,\]
where each $x_i$ is an independent random variable that takes a value in $\{-1, +1\}$, each with probability $1/2.$
\end{definition}
\begin{note}
For brevity we denote our Real-valued Compression Scheme as $\mathrm{RCS}$.
\end{note}
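For concreteness, the following minimal sketch (in Python with NumPy; the function names are ours and purely illustrative) shows one way $\mathrm{RCS}$ might be implemented. Note that the bucket assignment $b(\cdot)$ and the signs $x_i$ constitute the shared randomness of the scheme and must be reused for every vector that is compressed, so that compressed inner products remain comparable.
\begin{verbatim}
import numpy as np

def make_rcs(d, N, seed=0):
    # Draw the shared randomness of the scheme once:
    # a uniform bucket b(i) and a Rademacher sign x_i per position i.
    rng = np.random.default_rng(seed)
    b = rng.integers(0, N, size=d)       # bucket assignment b(i)
    x = rng.choice([-1.0, 1.0], size=d)  # signs x_i

    def compress(a):
        # alpha[j] = sum_{i : b(i) = j} a[i] * x[i]
        alpha = np.zeros(N)
        np.add.at(alpha, b, a * x)
        return alpha

    return compress
\end{verbatim}
Compressing a sparse vector touches only its non-zero coordinates, so in a sparse-vector representation the compression time is proportional to the number of non-zeros rather than to $d$.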
We now present our compression guarantee for preserving the inner product of a pair of real-valued vectors.
\begin{lem}\label{lem:innerprod}
Consider two vectors $\mathbf{a}, \mathbf{b} \in\mathbb{R}^d$, which get compressed into
vectors $\boldsymbol{\alpha}, \boldsymbol{\beta} \in \mathbb{R}^{\mathrm{N}}$ using the
$\mathrm{RCS}$. If we set $\mathrm{N}=\frac{10\mathrm{\Psi}^2}{\epsilon^2}$,
where $\mathrm{\Psi}=\max\{||\mathbf{a}||^2, ||\mathbf{b}||^2\}$ and $\epsilon>0$, then the following holds,
\[
\Pr\left[\left|\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle- \langle\mathbf{a}, \mathbf{b}\rangle \right|>\epsilon \right]<1/10.
\]
\end{lem}
\begin{proof}
Let $\mathbf{a}, \mathbf{b}\in \mathbb{R}^d$ be two vectors such that $\mathbf{a}=[a_1, a_2,\ldots, a_d]$ and
$\mathbf{b}=[b_1, b_2,\ldots, b_d]$. Let $\{x_i\}_{i=1}^d$ be a set of $d$ independent random variables such that each
$x_i$ takes a value in $\{-1, +1\}$, each with probability $1/2$, and let $z_i^{(k)}$ be the indicator random variable that takes
the value $1$ if the $i$-th dimension of the vector is mapped to the $k$-th bucket of the compressed vector, and $0$ otherwise.
Using the compression scheme $\mathrm{RCS}$, let the vectors $\mathbf{a}, \mathbf{b}$ get
compressed into vectors $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$,
where $\boldsymbol{\alpha}=[\alpha_1, \ldots, \alpha_k, \ldots, \alpha_{\mathrm{N}}]$ with $\alpha_k=\Sigma_{i=1}^da_ix_iz_i^{(k)}$,
and $\boldsymbol{\beta}=[\beta_1, \ldots, \beta_k, \ldots, \beta_{\mathrm{N}}]$ with $\beta_k=\Sigma_{i=1}^db_ix_iz_i^{(k)}.$
We now compute the inner product of the compressed vectors $\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle$.
\begin{align*}
\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle&=\sum_{k=1}^{\mathrm{N}} \alpha_k \beta_k
=\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i=1}^da_ix_iz_i^{(k)} \right)\left( \Sigma_{i=1}^db_ix_iz_i^{(k)} \right)\\
&=\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i=1}^da_ib_ix_i^2{z_i^{(k)}}^2 + \Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right)\numberthis\label{eq:eq10}\\
&=\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i=1}^da_ib_i{z_i^{(k)}} + \Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right)\numberthis\label{eq:eq11}\\
&= \Sigma_{i=1}^da_ib_i\sum_{k=1}^{\mathrm{N}}{z_i^{(k)}} +\sum_{k=1}^{\mathrm{N}} \left(\Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right)\\
&=\Sigma_{i=1}^da_ib_i +\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right)\\
&= \langle\mathbf{a}, \mathbf{b}\rangle +\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right).\numberthis\label{eq:eq12}
\end{align*}
Equation~\ref{eq:eq11} follows from Equation~\ref{eq:eq10} because $x_i^2=1$ as $x_i=\pm1$, and ${z_i^{(k)}}^2=z_i^{(k)}$ as $z_i^{(k)}$
takes the value either $1$ or $0.$ We continue from Equation~\ref{eq:eq12} and
compute the expectation and the variance of the random variable
$\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle$. We first compute the expectation of the
random variable $\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle$
as follows:
\begin{align*}
\mathbb{E}[\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle]&=\mathbb{E}\left[\langle\mathbf{a}, \mathbf{b}\rangle +\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right)\right]\\
&=\mathbb{E} [\langle\mathbf{a}, \mathbf{b}\rangle] +\mathbb{E}\left[\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right) \right]\\
&=\langle\mathbf{a}, \mathbf{b}\rangle +\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}\mathbb{E}[a_ib_jx_ix_jz_i^{(k)} z_j^{(k)}] \right) \numberthis\label{eq:eq13} \\
&=\langle\mathbf{a}, \mathbf{b}\rangle +\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}a_ib_j\mathbb{E}[x_ix_jz_i^{(k)} z_j^{(k)}] \right) \\
&=\langle\mathbf{a}, \mathbf{b}\rangle. \numberthis\label{eq:eq14}
\end{align*}
Equation~\ref{eq:eq13} holds due to the linearity of expectation.
Equation~\ref{eq:eq14} holds because $\mathbb{E}[x_ix_jz_i^{(k)} z_j^{(k)}]=0$:
for $i\neq j$, the signs $x_i$ and $x_j$ are independent of each other and of the bucket assignment, and each takes the values $-1$ and $+1$ with probability $0.5$, which leads to $\mathbb{E}[x_ix_j]=0$.
We now compute the variance of the random variable $\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle$ as follows:
\begin{align*}
\mathrm{Var}[\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle]&=\mathrm{Var}\left[\langle\mathbf{a}, \mathbf{b}\rangle +\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right)\right]\\
&=\mathrm{Var}\left[\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}a_ib_jx_ix_jz_i^{(k)} z_j^{(k)} \right)\right]\numberthis\label{eq:eq15}\\
&=\mathrm{Var}\left[\sum_{k=1}^{\mathrm{N}} \left( \Sigma_{i\neq j}\xi_{ij}^{(k)} \right)\right]\numberthis\label{eq:eq17}\\
&=\mathrm{Var}\left[\sum_{i\neq j} \sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)} \right]\\
&=\sum_{i\neq j}\mathrm{Var}\left[\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)} \right]\ldots \\&+\sum_{i\neq j, i'\neq j', i\neq i', j\neq j'}\mathrm{Cov}\left[\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)}, \sum_{k=1}^{\mathrm{N}}\xi_{i'j'}^{(k)} \right]\numberthis\label{eq:eq16}
\end{align*}
Equation~\ref{eq:eq15} holds due to Fact~\ref{fact:varProp}; Equation~\ref{eq:eq17} holds as
we denote the expression $a_ib_jx_ix_jz_i^{(k)} z_j^{(k)}$ by the variable $\xi_{ij}^{(k)}$;
Equation~\ref{eq:eq16} holds due to Fact~\ref{fact:varProp1}. We now bound the two terms
of Equation~\ref{eq:eq16}.
\begin{align*}
\sum_{i\neq j}\mathrm{Var}\left[\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)} \right]&=\sum_{i\neq j}\sum_{k=1}^{\mathrm{N}}\mathrm{Var}\left[\xi_{ij}^{(k)} \right]\ldots \\&~~~~+\sum_{i\neq j}\sum_{k\neq l}\mathrm{Cov}\left[\xi_{ij}^{(k)}, \xi_{ij}^{(l)} \right]\numberthis\label{eq:eq18}
\end{align*}
Equation~\ref{eq:eq18} holds due to Fact~\ref{fact:varProp1}.
We bound the two terms
of Equation~\ref{eq:eq18} one by one as follows.
\begin{align*}
& \sum_{i\neq j}\sum_{k=1}^{\mathrm{N}}\mathrm{Var}\left[\xi_{ij}^{(k)} \right]=\sum_{i\neq j}\sum_{k=1}^{\mathrm{N}}\mathrm{Var}\left[a_ib_jx_ix_jz_i^{(k)} z_j^{(k)}\right]\\
&=\sum_{i\neq j}a_i^2b_j^2\sum_{k=1}^{\mathrm{N}}\mathrm{Var}\left[x_ix_jz_i^{(k)} z_j^{(k)}\right]\numberthis\label{eq:eq19}\\
&=\sum_{i\neq j}a_i^2b_j^2\sum_{k=1}^{\mathrm{N}} \left( \mathbb{E}[x_i^2x_j^2{z_i^{(k)}}^2 {z_j^{(k)}}^2]-\mathbb{E}[x_ix_jz_i^{(k)} z_j^{(k)}]^2\right)\numberthis\label{eq:eq20}\\
&=\sum_{i\neq j}a_i^2b_j^2\sum_{k=1}^{\mathrm{N}} \mathbb{E}\left[{z_i^{(k)}} {z_j^{(k)}}\right]\numberthis\label{eq:eq21}\\
&=\sum_{i\neq j}a_i^2b_j^2/\mathrm{N}\leq ||\textbf{a}||^2||\textbf{b}||^2/\mathrm{N}.\numberthis\label{eq:eq22}
\end{align*}
Equation~\ref{eq:eq19} holds due to Fact~\ref{fact:varProp}; Equation~\ref{eq:eq20} holds due to Definition~\ref{definition:varDef};
Equation~\ref{eq:eq21} holds as $x_i^2=x_j^2=1$, ${z_i^{(k)}}^2={z_i^{(k)}}$, and $\mathbb{E}[x_ix_j]=0$. In Equation~\ref{eq:eq22} we use that
$\sum_{k=1}^{\mathrm{N}}\mathbb{E}[{z_i^{(k)}}{z_j^{(k)}}]=\mathrm{N}\cdot(1/\mathrm{N}^2)=1/\mathrm{N}$ for $i\neq j$, since the positions are assigned to buckets independently and uniformly at random; finally,
the inequality in Equation~\ref{eq:eq22} holds as $\sum_{i\neq j}a_i^2b_j^2\leq \sum_{i}a_i^2\sum_{i}b_i^2=||\textbf{a}||^2||\textbf{b}||^2.$
We now bound the second term of Equation~\ref{eq:eq18}.
\begin{align*}
&\mathrm{Cov}\left[\xi_{ij}^{(k)}, \xi_{ij}^{(l)} \right]\\&=\mathrm{Cov}\left[a_ib_jx_ix_jz_i^{(k)} z_j^{(k)}, a_ib_jx_ix_jz_i^{(l)} z_j^{(l)} \right]\\
&={a_i}^2{b_j}^2\mathrm{Cov}\left[x_ix_jz_i^{(k)} z_j^{(k)}, x_ix_jz_i^{(l)} z_j^{(l)} \right]\numberthis\label{eq:eq23}\\
&={a_i}^2{b_j}^2\mathbb{E}[(x_ix_jz_i^{(k)} z_j^{(k)}-\mathbb{E}(x_ix_jz_i^{(k)} z_j^{(k)}))\\&~~~~~~~~~~~~~~(x_ix_jz_i^{(l)} z_j^{(l)}-\mathbb{E}(x_ix_jz_i^{(l)} z_j^{(l)})) ]\numberthis\label{eq:eq24}\\
&={a_i}^2{b_j}^2\mathbb{E}\left[{x_i}^2{x_j}^2 z_i^{(k)} z_j^{(k)} z_i^{(l)} z_j^{(l)} \right]\numberthis\label{eq:eq25}\\
&={a_i}^2{b_j}^2\mathbb{E}\left[z_i^{(k)} z_j^{(k)} z_i^{(l)} z_j^{(l)}\right]=0\numberthis\label{eq:eq26}
\end{align*}
Equation~\ref{eq:eq23} holds due to Fact~\ref{fact:coVarProp}; Equation~\ref{eq:eq24} holds
due to Definition~\ref{definition:coVarDef}; Equation~\ref{eq:eq25} holds as $\mathbb{E}(x_ix_j)=0$; finally,
Equation~\ref{eq:eq26} holds because in our compression scheme each
dimension of the input is mapped to a unique coordinate (bucket) of the compressed vector,
so for $k\neq l$ at least one of $z_i^{(k)}$ and $z_i^{(l)}$ must be zero, and hence the product $z_i^{(k)} z_j^{(k)} z_i^{(l)} z_j^{(l)}$ vanishes.
We now bound the second term of Equation~\ref{eq:eq16}.
\begin{align*}
& \mathrm{Cov}\left[\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)}, \sum_{k=1}^{\mathrm{N}}\xi_{i'j'}^{(k)} \right]\\
&=\mathbb{E}\left[\left(\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)}-\mathbb{E}(\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)}) \right) \left(\sum_{k=1}^{\mathrm{N}}\xi_{i'j'}^{(k)}-\mathbb{E}(\sum_{k=1}^{\mathrm{N}}\xi_{i'j'}^{(k)}) \right) \right]\\
&=\mathbb{E}\left[(\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)})(\sum_{k=1}^{\mathrm{N}}\xi_{i'j'}^{(k)}) \right]\numberthis\label{eq:eq27}\\
&=\mathbb{E}\left[\left(\sum_{k=1}^{\mathrm{N}}a_ib_jx_ix_j{z_i}^{(k)}{z_j}^{(k)}\right)\left(\sum_{k=1}^{\mathrm{N}}a_{i'}b_{j'}x_{i'}x_{j'}{z_{i'}}^{(k)} {z_{j'}}^{(k)}\right)\right]\\
&=a_ib_ja_{i'}b_{j'}\mathbb{E} \left[x_ix_jx_{i'}x_{j'}\left(\sum_{k=1}^{\mathrm{N}}{z_i}^{(k)}{z_j}^{(k)}\right)\left(\sum_{k=1}^{\mathrm{N}}{z_{i'}}^{(k)} {z_{j'}}^{(k)}\right)\right]\\
&=0 \numberthis\label{eq:eq28}
\end{align*}
Equation~\ref{eq:eq27} holds as $\mathbb{E}(\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)})$ and $\mathbb{E}(\sum_{k=1}^{\mathrm{N}}\xi_{i'j'}^{(k)})$ are equal to zero because
$$\mathbb{E}(\sum_{k=1}^{\mathrm{N}}\xi_{ij}^{(k)})=\sum_{k=1}^{\mathrm{N}}\mathbb{E}(\xi_{ij}^{(k)})=\sum_{k=1}^{\mathrm{N}}\mathbb{E}(a_ib_jx_ix_jz_i^{(k)} z_j^{(k)})=0.$$
A similar argument applies to the other term. Equation~\ref{eq:eq28} holds as $\mathbb{E}[x_ix_jx_{i'}x_{j'}]$ is equal
to zero because the sign variables in the product are independent and each takes the values $+1$ and $-1$ with probability $0.5.$
Thus, we have
$$\mathbb{E}[\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle]=\langle\mathbf{a}, \mathbf{b}\rangle,$$ and
Equation~\ref{eq:eq16} in conjunction with Equations~\ref{eq:eq18}, \ref{eq:eq22}, \ref{eq:eq26}, \ref{eq:eq28} gives
$$\mathrm{Var}[\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle]\leq ||\textbf{a}||^2||\textbf{b}||^2/\mathrm{N}\leq {\mathrm{\Psi}}^2/{\mathrm{N}},$$
where $\mathrm{\Psi}=\max\{||\mathbf{a}||^2, ||\mathbf{b}||^2\}$.
Thus, by Chebyshev's inequality (see Fact~\ref{fact:Chebyshev}), we have
\[
\Pr\left[\left|\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle- \langle\mathbf{a}, \mathbf{b}\rangle \right|>\epsilon \right]<\frac{\mathrm{\Psi}^2}{\epsilon^2\mathrm{N}}=1/10.
\]
The last inequality follows as we set $\mathrm{N}=\frac{10\mathrm{\Psi}^2}{\epsilon^2}$.
\end{proof}
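To make the guarantee of Lemma~\ref{lem:innerprod} concrete, the following sketch (continuing the illustrative Python setup above; the test vectors are arbitrary) empirically estimates the failure probability $\Pr[|\langle\boldsymbol{\alpha}, \boldsymbol{\beta}\rangle- \langle\mathbf{a}, \mathbf{b}\rangle|>\epsilon]$ over independent draws of the scheme's randomness; with $\mathrm{N}=10\mathrm{\Psi}^2/\epsilon^2$ the observed rate should stay below $1/10$.
\begin{verbatim}
eps, d, trials = 0.5, 5000, 1000
rng = np.random.default_rng(1)
a = rng.normal(size=d) * (rng.random(d) < 0.01)  # sparse test vectors
v = rng.normal(size=d) * (rng.random(d) < 0.01)
Psi = max(a @ a, v @ v)
N = int(np.ceil(10 * Psi ** 2 / eps ** 2))

fails = 0
for t in range(trials):
    compress = make_rcs(d, N, seed=t)
    if abs(compress(a) @ compress(v) - a @ v) > eps:
        fails += 1
print(fails / trials)  # empirical failure rate, expected < 0.1
\end{verbatim}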
Using a similar analysis we can generalize our result to the $k$-way inner product,
defined for vectors $\mathbf{a}_1,\ldots,\mathbf{a}_k$ as $\langle\mathbf{a}_1\mathbf{a}_2\ldots\mathbf{a}_k\rangle=\sum_{i=1}^d\prod_{j=1}^k\mathbf{a}_j[i]$. We state our result as follows:
{\renewcommand{\thetheorem}{\ref{theorem:compressionRealKway}}
\begin{theorem}
Consider a set of $k$ vectors $\{\mathbf{a_i}\}_{i=1}^k\in \mathbb{R}^d$,
which get compressed into vectors $\{\boldsymbol{\alpha_i}\}_{i=1}^k \in \mathbb{R}^{\mathrm{N}}$ using the $\mathrm{RCS}$.
If we set $\mathrm{N}=\frac{10\mathrm{\Psi}^k}{\epsilon^2}$,
where $\mathrm{\Psi}=\max\{||{\mathbf{a_i}||^2}\}_{i=1}^k$ and $\epsilon>0$, then the following holds
\[
\Pr\left[\left|\langle\boldsymbol{\alpha}_1\boldsymbol{\alpha}_2\ldots\boldsymbol{\alpha}_k\rangle- \langle\mathbf{a}_1\mathbf{a}_2\ldots\mathbf{a}_k\rangle \right|>\epsilon \right]<1/10.
\]
\end{theorem}\addtocounter{theorem}{-1}}
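Continuing the illustrative sketch above, the $k$-way inner product of the compressed vectors is again computed coordinate-wise over the $\mathrm{N}$ buckets:
\begin{verbatim}
def kway_inner(vectors):
    # <v_1 v_2 ... v_k> = sum over coordinates of the entrywise product
    out = np.ones_like(vectors[0])
    for v in vectors:
        out = out * v
    return out.sum()

# compress all k vectors with the same shared randomness, then compare
# kway_inner(vs) against kway_inner([compress(v) for v in vs]).
\end{verbatim}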
We can also generalize the result of Lemma~\ref{lem:innerprod} to Euclidean distance.
Consider a pair of vectors $\mathbf{a}, \mathbf{b}\in \mathbb{R}^d$ which get compressed into vectors
$\boldsymbol{\alpha}, \boldsymbol{\beta}\in \mathbb{R}^{\mathrm{N}}$ using the compression scheme
$\mathrm{RCS}$.
Let $||\boldsymbol{\alpha}, \boldsymbol{\beta}||^2$ denote the squared Euclidean distance between
the vectors $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. Note that since the map defined by
$\mathrm{RCS}$ is linear, $\boldsymbol{\alpha}-\boldsymbol{\beta}$ is exactly the $\mathrm{RCS}$ compression of $\mathbf{a}-\mathbf{b}$.
Using an analysis similar to that of
Lemma~\ref{lem:innerprod} we can compute the expectation and the variance of the random variable $||\boldsymbol{\alpha}, \boldsymbol{\beta}||^2$:
\[
\mathbb{E}[||\boldsymbol{\alpha}, \boldsymbol{\beta}||^2]=||\mathbf{a}, \mathbf{b}||^2,
\]
and
\[
\mathrm{Var}[||\boldsymbol{\alpha}, \boldsymbol{\beta}||^2]\leq \frac{(||\mathbf{a}||^2-||\mathbf{b}||^2)^2}{\mathrm{N}}\leq \frac{\mathrm{\Psi}^2}{\mathrm{N}},
\]
where $\mathrm{\Psi}=\max\{||\mathbf{a}||^2, ||\mathbf{b}||^2\}$.
Thus, due to Chebyshev's inequality (see Fact~\ref{fact:Chebyshev}), we have the following result for Euclidean distance.
\begin{theorem}\label{theorem:compressionEuclidean}
Consider two vectors $\mathbf{a}, \mathbf{b} \in\mathbb{R}^d$, which get compressed into
vectors $\boldsymbol{\alpha}, \boldsymbol{\beta} \in \mathbb{R}^{\mathrm{N}}$ using the
$\mathrm{RCS}$. If we set $\mathrm{N}=\frac{10\mathrm{\Psi}^2}{\epsilon^2}$,
where $\mathrm{\Psi}=\max\{||\mathbf{a}||^2, ||\mathbf{b}||^2\}$ and $\epsilon>0$, then the following holds
\[
\Pr\left[\left|||\boldsymbol{\alpha}, \boldsymbol{\beta}||^2- ||\mathbf{a}, \mathbf{b}||^2 \right|>\epsilon \right]<1/10.
\]
\end{theorem}
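Continuing the illustrative sketch, the distance guarantee of Theorem~\ref{theorem:compressionEuclidean} can be checked in the same way, using the fact that both vectors are compressed with the same shared randomness:
\begin{verbatim}
compress = make_rcs(d, N, seed=42)
alpha, beta = compress(a), compress(v)
d2_orig = np.sum((a - v) ** 2)
d2_comp = np.sum((alpha - beta) ** 2)
print(d2_orig, d2_comp)  # close with high probability for N as above
\end{verbatim}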
\begin{rem}
In order to compress a pair of data points, our scheme requires $O(d\log \mathrm{N})$ bits of randomness,
which grows only logarithmically in the compression length, whereas the
other schemes require randomness that grows linearly in the compression length.
Thus, when the number
of points is small (constant),
we have a clear advantage in the amount of
randomness required for compression while preserving pairwise inner products or Euclidean distances.
We also believe that using a more sophisticated
concentration result (such as a martingale-based bound) it is possible to obtain a tighter
concentration guarantee and, as a consequence, a smaller compression length.
\end{rem}
\section{Introduction}\label{intro}
Stars can maintain their observable magnetic activity from the pre-main sequence (hereafter MS) until the tip of the red giant branch. We define an `active star' as one for which observations allow us to infer surface spots caused by variable magnetic fields.
Observable activity develops in those stars where extra circumstances strengthen the magnetic field; among these is rapid rotation, which is maintained in binary stars via tidal synchronization.
Strong magnetic fields have been directly observed from Stokes vectors already in many active stars (Reiners,~\cite{reiners}). In the case of pre-main sequence stars, it is known that the strong magnetic fields can suppress the convection in the spotted regions and alter the entire stellar atmosphere which, in turn, makes the derived physical parameters of the stars uncertain (Bouvier \& Bertout, \cite{bouvier}). Having similar rotation rates, red giant stars with deep convection zones can also possess surface magnetic fields maintained by the magnetic dynamo. Therefore, their atmospheric structure can also be altered by the strong magnetic fields making their astrophysical parameters more uncertain (see Ol\'ah et al.~\cite{overactive}). In the case of large spotted areas, the observed temperature is lower and the resulting masses and ages inferred from their location on evolution tracks can be quite inaccurate. This is why observing an active giant star in an eclipsing binary is of great value, since this makes it possible to determine the stellar masses independently. By now, direct evidence -- i.e., an interferometric map -- shows that the atmospheres of active giants can be covered with dark and bright features of possibly magnetic origin (for further details see Roettenbacher et al.~\cite{rachael1}).
The number of known active giants is much lower than that of active stars on the main sequence, since on the giant branch the stars evolve rapidly, spending there only about 10\% of their MS lifetime. This is supported by the results of van Doorsselaere et al. (\cite{vandoor}), who searched for flares in a sample of {\em Kepler} stars in which giants made up 10\% of the combined group of F+G+K+M+giant stars. Of the giant stars, 3.18\% show flares---a clear sign of magnetic activity. The occurrence rate of flaring among giants is similar to that of their progenitor F and G stars on the MS (2.37\% and 2.96\%, respectively), whereas K+M stars have a twice higher flaring occurrence rate of 5.87\%.
Due to their rapid evolution it is difficult to estimate the stellar parameters of giant stars. A possibility for obtaining more reliable stellar parameters of an active giant arises when it is a member of an eclipsing binary system. Although quite a number of active stars are found in eclipsing binaries, most of those are main-sequence stars or subgiants. Only about a dozen well-studied systems have a giant star as an active component, but at the moment the only well-studied {\em active} giant star that is the primary of an eclipsing binary is BE~Psc. BE~Psc was studied in detail by Strassmeier et al.~(\cite{klausetal}) using photometry spanning 19 years, and 61 high-resolution spectra. However, it is a lucky case, since the two components have fairly similar masses (1.56 and 1.31\,$M_{\odot}$); one star is a giant already and the other is just leaving the MS (temperatures are 4500\,K and 6300\,K). Therefore, one eclipse is deep, more than 0.15~mag., but the other eclipse was more difficult to find. Even the deep eclipse was recognized only well after the discovery of the star's light variations.
In the case of solar-size, subsolar, or red-dwarf secondaries, the eclipses (if they occur at all, depending on the orbital inclination) are increasingly shallow. A shallow eclipse in a long-period light curve (such as those of the active giants with periods of tens of days) can last from several hours to days, and can easily be missed in ground-based data due to photometric uncertainties. Another very important factor is that these stars are usually observed only once a night with ground-based telescopes because of their long rotation periods. Even surveys that observe a stellar field regularly and collect many data points can fail to detect shallow eclipses.
The long-cadence {\em Kepler} data have a photometric precision per sample below a millimagnitude, and thus shallow eclipses of red giants in binary systems can be more readily recognized. Excellent examples are given in Gaulme et al. (\cite{gaulmeetal2}) for 16 eclipsing systems with red giant (or subgiant) stars using {\em Kepler} photometry, together with high-resolution spectroscopy, aimed at testing how well the asteroseismic scaling relations agree with those derived from dynamical modeling. The observed eclipses make it possible to determine the stellar parameters. The depths of the observed eclipses are typically a few hundredths (between $0.01-0.05$) of a magnitude, and only in 2 cases out of 16 binaries does the eclipse depth exceed 0.1 magnitude. In addition, half of the studied systems have periods longer than 100 days. Apart from their genuine rareness arising from the rapid evolution of these stars, all this evidence suggests that the main reason for not observing more eclipsing active giants is the quality and quantity of the ground-based observations.
Stellar activity, which can be seen in the data themselves and in the residuals from the model fits, is detected in only 8 of the 16 systems (see Fig. 3. of Gaulme et al., \cite{gaulmeetal2}). Combining the results of Table 1 from Gaulme et al. (\cite{gaulmeetal1}), and Fig.~3 and Table 4 of Gaulme et al. (\cite{gaulmeetal2}), we find 8 systems with a total out-of-eclipse flux variability of over 1\%. The higher-amplitude rotationally-modulated light curves in 4 primaries among the active giants, having over 15\% peak-to-peak flux variability, belong to the fastest rotators ($P_{\rm rot}$ $\lesssim 41$ days) of the sample; the rotational modulation periods are given in Gaulme et al. (\cite{gaulmeetal1}). For these four primaries no solar-like p-mode oscillations were detected, probably because of suppression by stellar activity (for more discussion see Gaulme et al., \cite{gaulmeetal1}).
Three very active giants in {\em non-eclipsing} binary systems (IL~Hya, XX~Tri and DM~UMa) were studied by Ol\'ah et al. (\cite{overactive}) using decades-long photometry, all of which show long-term variability with an amplitude of about a magnitude. It was found that the derived luminosities were so low that it was impossible to get reliable ages for the systems from evolutionary tracks. This difficulty was further exacerbated due to the high magnetic activity which prevented Ol\'ah et al. (\cite{overactive}) from getting accurate stellar parameters. One may speculate that in the case of XX~Tri, the very high amplitude rotational modulation indicates a relatively high inclination angle, and additionally, since the secondary star is not seen in the spectra, it is very likely to be a dwarf star. Therefore, even if the geometry were favorable, the eclipses would not have been observed due to the very large luminosity difference in the two stars, leaving us with an uncertain orbital inclination.
This paper reports an active red giant star, EPIC~211759736, in an eclipsing binary showing both primary and secondary eclipses, found in Campaign 5 (GO\,5069, 2015) and re-observed in Campaign 18 (GO\,19033, 2018) of the {\em Kepler} K2 program (Huber et al. \cite{huberetal}). The star first appears in Schmidt et al. (\cite{schmidtetal}) as a Cepheid variable, based on ASAS measurements, with a period of 36.31 days, and in the same year in Hoffman et al.~(\cite{hoffmanetal}) as a long-period variable (Cepheid or Mira) based on NSVS data, with a period of 34.483~days. These variable-star classifications were, however, incorrect. The large-amplitude long-period modulations turn out to be due to starspots on a slowly rotating star.
The eclipsing nature of the active red giant EPIC~211759736 should allow us to much better constrain the stellar parameters than in most cases of active giant stars. The binary solution is supported by new spectroscopic data. Follow-up ground-based $BVR_CI_C$ photometry was also obtained covering one stellar rotation. We make use of archival observations by ASAS (Pojmanski \cite{ASAS}) and by HATNet (Bakos et al., \cite{gazsi1}, Bakos ~\cite{gazsi3}), and also the DASCH database (Grindlay et al., \cite{grindlayetal}) which has some $\sim$900 photographic measurements of brightness spanning about 100 years. We note that in the Gaulme et al.~(\cite{gaulmeetal2}) sample of 16 eclipsing giants in the {\em Kepler} field, only one star has supplemental ASAS data and none was found in the DASCH historical database.
EPIC\,211759736\,=\,2MASS\,08151296+1644414 has J, H, and K magnitudes of 9.516$\pm$0.022, 8.921$\pm$0.016 and 8.766$\pm$0.018 mag., respectively (Skrutskie et al.~\cite{2MASS}), and WISE W1, W2, W3, and W4 magnitudes of 8.686$\pm$0.022, 8.743$\pm$0.019, 8.631$\pm$0.025 and $\gtrsim$8.197, respectively (the AllWise catalog; Wright et al.~\cite{WISE}).
{\it Gaia} DR2 puts the star at a distance of $1368^{+77}_{-69}$\,pc, and gives the following parameters: radial velocity is $40.6\pm4.8$\,km/sec (note that {\it Gaia} DR2 treats single-lined binaries as one star), $T_{\rm eff} = 4747\pm200$\,K, $R = 11.0\pm1\,R_\odot$, $L = 55\pm4\,L_\odot$.
The paper is structured as follows: Section 2 describes the observational data; Section 3 presents the results from photometry; Section 4 deals with the binary solution and the results of spot modeling; Section 5 discusses the results; and Section 6 gives a summary and conclusions.
\section{Applied data}\label{obs}
\subsection{Archival Photometric data}
EPIC~211759736 has a $\sim$100-year-long photometric record in the DASCH database (Grindlay et al. \cite{grindlayetal}) from scanned photographic plates taken between 1888 and 1989, thereby allowing us to check the cyclic nature of the active giant.
From the public All Sky Automated Survey (ASAS) database (Pojmanski \cite{ASAS}) a 7-yr-long dataset measured in the $V$ band was downloaded and analyzed. The ASAS project automatically monitors the entire sky with wide-field instruments, targeting all stars brighter than 14th magnitude, searching for and following up photometric variability.
Data for the star EPIC~211759736 were obtained with the Hungarian-made Automated Telescope Network (HATNet; Bakos et al., \cite{gazsi1}, Bakos~\cite{gazsi3}), using the HAT-5 and HAT-10 telescopes at the Fred Lawrence Whipple Observatory, Arizona, and the HAT-8 telescope at the Smithsonian Astrophysical Observatory's Mauna Kea site in Hawaii. Altogether, over 15,000 data points were obtained between November 21, 2008 and January 17, 2012 in the Sloan $r$ filter. Data were reduced to light curves as described in, e.g., Bakos et al.~(\cite{gazsi2}). We used the ``magnitude fitted'' values, which correct for any smooth flux changes across the field and as a function of time (e.g.~due to extinction or focus changes), with respect to a selected reference frame. We did {\em not} use the de-correlated or trend filtered magnitude values in the light curves, as such methods can potentially distort the light curve shape for variable stars (unless done in a signal reconstruction mode, which was not available).
\subsection{New $BVR_CI_C$ observations}
$BVR_CI_C$ observations of EPIC\,211759736 were made with the $0.5$~m telescope of Baja Observatory of the University of Szeged, located at Baja, Hungary, and equipped with an SBIG ST-6303 CCD detector. The target was observed on 20 nights between March~13 and May~19, 2018, covering almost two orbital periods; however, gaps induced by weather conditions effectively limited the coverage to one orbital period only. The usual data reduction and photometric analysis were performed using IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} routines in a PyRAF\footnote{PyRAF is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.} environment. Nearby Landolt photometric standard fields SA-26, -29, and -32 (Landolt, \cite{landolt13}) were also observed on nights with appropriate photometric quality, and were used for determining standard magnitudes of a set of stars in the field of the target. These in turn were used to obtain standard magnitudes of the target itself for all dates. The data were corrected for interstellar reddening using the maps of Schlafly \& Finkbeiner (\cite{schlafly}).
Table\,\ref{tab:bvri-obslog} lists the $BVR_CI_C$ observations.
\begin{table*}
\centering
\caption{The log of $BVR_CI_C$ observations$^a$ of EPIC\,211759736.}
\label{tab:bvri-obslog}
\begin{tabular}{ccccccccc}
\hline\hline \noalign{\smallskip}
{HJD} & $B$ & $\pm$ & $V$ & $\pm$ & $R_C$ & $\pm$ & $I_C$ & $\pm$\\
$[2\,400\,000+]$ & mag. & & mag. & & mag. & & mag. & \\
\hline
\noalign{\smallskip}\hline
58190.4475 & 12.574 & 0.010 & 11.646 & 0.005 & 11.100 & 0.004 &10.444 & 0.003 \\
58191.3205 & 12.567 & 0.010 & 11.654 & 0.014 & 11.108 & 0.016 &10.442 & 0.028 \\
58211.3385 & 12.596 & 0.023 & 11.635 & 0.008 & 11.088 & 0.002 & 10.438& 0.003 \\
58213.4545 & 12.436 & 0.025 & 11.561 & 0.013 & 11.022 & 0.008 &10.383 & 0.014 \\
58217.3445 & 12.407 & 0.006 & 11.492 & 0.004 & 10.964 & 0.008 &10.333 & 0.009 \\
58220.3445 & 12.472 & 0.017 & 11.555 & 0.005 & 11.023 & 0.007 &10.382 & 0.009 \\
58221.3395 & 12.496 & 0.015 & 11.583 & 0.008 & 11.041 & 0.010 &10.412 & 0.027 \\
58222.3345 & 12.525 & 0.006 & 11.609 & 0.001 & 11.068 & 0.004 &10.415 & 0.011 \\
58227.3405 & 12.553 & 0.011 & 11.628 & 0.003 & 11.090 & 0.006 & 10.434& 0.013 \\
58228.3265 & 12.524 & 0.024 & 11.614 & 0.013 & 11.068 & 0.004 &10.419 & 0.035 \\
58229.3295 & 12.514 & 0.006 & 11.589 & 0.003 & 11.048 & 0.005 &10.401 & 0.005 \\
58230.3215 & 12.501 & 0.009 & 11.554 & 0.008 & 11.023 & 0.008 &10.385 & 0.006 \\
58232.3425 & 12.425 & 0.023 & 11.534 & 0.003 & 11.017 & 0.025 &10.363 & 0.015 \\
58233.3445 & 12.411 & 0.050 & 11.505 & 0.001 & 10.978 & 0.005 &10.380 & 0.015 \\
58234.3325 & 12.443 & 0.018 & 11.532 & 0.009 & 10.987 & 0.001 &10.372 & 0.010 \\
58236.3095 & 12.464 & 0.017 & 11.533 & 0.010 & 11.000 & 0.001 &10.367 & 0.001 \\
58246.3395 & 12.608 & 0.004 & 11.661 & 0.005 & 11.110 & 0.005 &10.452 & 0.004 \\
58247.3305 & 12.562 & 0.045 & 11.632 & 0.008 & 11.083 & 0.013 & 10.454& 0.020 \\
58250.3355 & 12.430 & 0.006 & 11.535 & 0.009 & 11.004 & 0.009 &10.360 & 0.017 \\
58251.3505 & 12.367 & 0.018 & 11.487 & 0.019 & 10.968 & 0.002 &10.339 & 0.016 \\
\noalign{\smallskip}\hline
\end{tabular}
{\bf Notes.} $^a$Carried out at the Baja Observatory of the University of Szeged.
\end{table*}
\subsection{K2 Observations}
\begin{figure}[t]
\centering
\includegraphics[width=0.98 \columnwidth]{fig1a.pdf} \\
\includegraphics[width=0.982 \columnwidth]{fig1b.pdf}
\caption{K2 light curve of EPIC 211759736 spanning 75 days. {\em Top panel:} The black curve is the raw light curve, while the thin red curve is a spline smoothed version. {\em Bottom panel:} The difference between the black and red curves in the top panel, showing more clearly both the primary and secondary eclipses. Note the difference in vertical scale by a factor of $\sim$15. The eclipse depths are at most 5\% of the full amplitude ($\approx0.2$ mag) of the light variations.}
\label{fig:K2}
\end{figure}
After the {\em Kepler} main mission, the {\em Kepler} spacecraft was re-purposed to observe a set of fields along the ecliptic plane. Each {\em Kepler} K2 campaign typically monitors some 25,000 stars in a given field for about 80 days (Howell et al.~2014), and a similar precision to that of the original {\em Kepler} mission is often achieved (see, e.g., Vanderburg et al.~\cite{vanderburg16}). EPIC 211759736 was observed during Campaign 5 (`C5') from April 28, 2015 to July 10, 2015. The light curves were extracted from the {\em Kepler} pipeline calibrated target pixel files from the Mikulski Archive for Space Telescopes\footnote{MAST; https://archive.stsci.edu/}. The data were corrected for the K2 spacecraft-motion induced systematics following the approach described in Vanderburg \& Johnson (\cite{vanderburg14}) and Vanderburg et al.~(\cite{vanderburg16}). We also utilized the {\em raw data} of the very recent 2018 K2 {\em Kepler} observations of the star (Campaign 18; `C18').
In addition to systematic searches for periodic events, e.g., planetary transits, binary eclipses, and stellar pulsations, a number of Citizen Scientist groups inspect all the light curves by eye in search of aperiodic phenomena and/or events of an unusual nature that might well escape the systematic searches. Such was the case here when two of us (T.\,J. and D.\,L.), using {\tt LcTools} (Kipping et al.~2015), found two pairs of shallow eclipses in the light curve of the highly modulated giant star EPIC~211759736.
This K2 light curve for EPIC~211759736 is shown in Fig.~\ref{fig:K2}. The top panel is the raw K2 light curve along with a spline fit to indicate the smoothly varying starspot modulations. The bottom panel shows the difference between the data and the fit, thereby clearly revealing the primary and secondary eclipses. The deeper eclipses occur when the giant passes in front of the higher-temperature, i.e., higher surface brightness, but much smaller, secondary star. The shallower eclipses, by contrast, occur when the smaller secondary star passes in front of the cooler giant and traces out the limb-darkened profile of the giant.
\subsection{Spectroscopic data}
\label{sec:TRES}
We observed EPIC~211759736 with the Tillinghast Reflector Echelle Spectrograph (TRES; F\H ur\'esz et al. \cite{TRES}) on the 1.5 m Tillinghast Reflector telescope at the Fred Lawrence Whipple Observatory (FLWO) on Mt. Hopkins, Arizona. TRES has a spectral range of 3900--9100~$\AA$ and a resolving power of R $\simeq$ 44,000. The spectra were reduced and extracted as described in Buchhave et al. (\cite{buch1}).
We obtained 10 radial velocity observations between UT 2018 February 04 and 2018 March 22. The spectra had an average signal-to-noise per resolution element (SNRe) of $\sim$30 at the peak continuum near the Mg b triplet at 519 nm with exposure times averaging 1200 seconds. A multi-order velocity analysis was performed by cross-correlating the spectra, order by order, against the observation with the strongest SNRe as a template. Twenty-two orders were used, excluding low S/N orders in the blue part of the spectrum and some red orders with telluric fringing.
The {\tt Stellar Parameter Classification} ({\tt SPC}; Buchhave et al. \cite{buch2}) tool was used to derive the stellar parameters of the giant. {\tt SPC} cross correlates the observed spectra against a library of synthetic spectra based on Kurucz model atmospheres (Kurucz et al. \cite{kurucz}). We calculated the weighted average of the parameters taking into account the cross correlation function peak height. Our results are fully consistent with those given in the EPIC input catalog (Huber et al. \cite{huberetal}).
\begin{table}[t]
\caption{Radial velocity measurements of EPIC~211759736}
\label{radvel}
\centering
\begin{tabular}{l l l l}
\hline\hline \noalign{\smallskip}
BJD & rad. vel. & error & $T_{\rm eff}$ \\
& km/sec & km/sec & K \\
\hline
2458153.846186 & 26.340 & 0.058 & 4668.06 \\
2458156.728078 & 41.115 & 0.055 & 4734.32 \\
2458170.820695 & 39.937 & 0.080 & 4706.88 \\
2458173.834510 & 22.483 & 0.098 & 4600.17 \\
2458179.747478 & 0.000 & 0.080 & 4785.54 \\
2458181.614651 & $-$0.441 & 0.059 & 4773.20 \\
2458184.830288 & 4.787 & 0.134 & 4702.90 \\
2458186.698832 & 10.669 & 0.086 & 4748.23 \\
2458189.707202 & 22.973 & 0.080 & 4708.55 \\
2458199.682131 & 60.673 & 0.072 & 4754.30 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Results}
\subsection{Long-term variations?}
The long-term ($\sim$100 years) photometric history of EPIC~211759736 from the DASCH archival data (Grindlay et al.~\cite{grindlayetal}) is shown in Fig.~\ref{dasch}. The historical photometric record shows hints of long-term changes, possibly due to variations in the overall spottedness, i.e., an activity cycle. This can be seen in Fig.~\ref{dasch}, where a change on a timescale of a few decades ($\approx$40~yr) is indicated by the solid curve, drawn as the beating of two close periods. Another, weaker signal with a possible 7--8 year cycle time is also found.
For comparison, the bottom panel of Fig.~\ref{dasch} shows the DASCH photometry of the $\delta$~Sct-type star GP~Cnc, with low amplitude light variation (below 0.1 mag.; Wetterer et al. \cite{ibvs}) and without long-term changes in its mean magnitude. GP~Cnc is very close to EPIC~211759736 on the sky, which rules out systematics in the long-term photographic-plate records of our object.
\begin{figure}[tbp]
\centering
\includegraphics[width=8cm]{DASCH_GP.pdf}
\caption{{\em Top:} Long-term photometric variations based on scanned photographic plates of EPIC~211759736 from the DASCH database. Data are fitted with the possible decades-long cyclic changes of the star. {\em Bottom}: For comparison, DASCH magnitudes of the nearby $\delta$~Sct-type star GP~Cnc, which is not expected to change its mean brightness. The two light curves are plotted to the same magnitude scale.}
\label{dasch}
\end{figure}
\subsection{Rotational periods and differential rotation}
We extracted rotational periods from the data with the multiple-frequency analysis tool {\tt MuFrAn} (Csubry \& Koll\'ath \cite{mufran1}).
The rotational period of the star is fairly long (about 10 rotations per year); therefore, one ground-based observing season covers only a few (6--8) rotations. Thus, more than one season of observations is advisable to obtain a well-determined rotation period. However, the period is expected to change due to the anticipated differential rotation and the possible emergence, dissolution, and movement of the spots. Therefore, time-spans longer than 2~years should not be used to derive rotational periods, since this is the timescale on which the light variation is stable, i.e., on which the phases of the extrema of the light curve do not change appreciably.
The Fourier transform (`FT') of the 100 years of DASCH photometric data for EPIC 211759736 is shown in Fig.~\ref{dasch_sp}. The top two panels show the FT down to periods as short as 25 days, and a zoom-in on the lower-frequency portion of the spectrum, respectively. Not much evidence for the 36-day rotation period is seen. However, after removing the long-term variation from the dataset, we find a group of weak peaks near periods around 36 days, indicating that the rotational signal of the giant is indeed present in the historical data. See Figure \ref{dasch_sp}.
We did a period search of the entire 7-yr long ASAS dataset (Pojmanski \cite{ASAS}). The raw ASAS light curve is plotted in the top panel of Fig.~\ref{ASASfig}. The rotational period for all the data is 36.27 days (second panel of the figure). After removing this signal, the difference light curve (third panel of Fig.~\ref{ASASfig}) shows an increased amplitude during the last three years relative to the previous ones, and the half period appears in the Fourier amplitude spectrum (fourth panel). This indicates that the light curve was sinusoidal during the first 4 years, but from 2006 onward the light curves became asymmetric and the half period appeared significantly with about 1/3 the amplitude of the fundamental period, suggesting two well separated active regions on the stellar surface. Additionally, in the case of both the ASAS and HAT surveys we used 1-2 seasons of data to derive independent periods for different epochs. We list the rotational periods in Table~\ref{periods} at different epochs found from the ASAS and HAT surveys. The 7th year of the ASAS data and the 1st one of the HAT data (2008-2009) cover nearly the same time interval, and the consistency of the results demonstrates the reliability of the derived periods. The resulting rotational periods of these overlapping observations are within their mutual uncertainties.
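For illustration, a period search of this kind on unevenly sampled ground-based photometry can be reproduced with a standard Lomb-Scargle periodogram (a sketch only, not the actual {\tt MuFrAn} analysis; the input file name and column layout are hypothetical):
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# hypothetical input: HJD, V magnitude, error (one row per observation)
hjd, vmag, err = np.loadtxt("asas_epic211759736.txt", unpack=True)

# search periods between 2 and 100 days
freq = np.linspace(1.0 / 100.0, 1.0 / 2.0, 20000)
power = LombScargle(hjd, vmag, err).power(freq)
p_rot = 1.0 / freq[np.argmax(power)]
print("rotation period ~ %.2f d" % p_rot)

# fold on the orbital period to inspect the spot modulation
p_orb = 36.522
phase = ((hjd - hjd[0]) / p_orb) % 1.0
\end{verbatim}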
We plotted all the photometric data, folded with the orbital period (see footnote to Table~\ref{periods}), in Fig.~\ref{lightcurves_orbital}.
This is the only reliable and completely coherent period in the system, and differs from the rotational period by only about 0.2 day, with the orbital period being longer.
\begin{figure}[tbp]
\centering
\includegraphics[width=9cm]{DASCH_sp.pdf}
\caption{Fourier amplitude spectra of EPIC~211759736 from the DASCH dataset. {\em Top:} Fourier amplitude spectrum. {\em Middle:} Detail of the long-period range of the amplitude spectrum. {\em Bottom:} Weak signals around the 36-d rotational period after removing the long-term trends from the data.}
\label{dasch_sp}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=8.5cm]{ASAS_2sp.pdf}
\caption{{\em Top two panels:} The 7-yr-long ASAS dataset and its Fourier amplitude spectrum, yielding a 36.27-day rotational period. The two `satellite peaks' in the amplitude spectrum are the sidebands of the 1-year observational window function. {\em Bottom two panels:} Data `cleaned' of the rotational period, and the corresponding amplitude spectrum. Apart from the remaining signals near the 36-d period, the half period also shows up significantly due to the non-sinusoidal light curves in the second half of the dataset.}
\label{ASASfig}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=9cm]{2117_lc_orbital.pdf}
\caption{Light curves of EPIC~211759736 folded using the time of primary minimum and the orbital period from Table~\ref{specorbit}. ASAS data from 2008-2009 are overplotted with magenta points on the 2006-2008 ASAS light curve. Note that the different flux levels are not real and merely reflect the different observational bandpasses and automated reduction processes (HAT). The deeper minimum is always between phases of about 0.1-0.6.}
\label{lightcurves_orbital}
\end{figure}
\begin{table}[tbh]
\caption{Observed rotational periods of EPIC~211759736}
\label{periods}
\centering
\begin{tabular}{c c c c l}
\hline\hline \noalign{\smallskip}
survey & JD start - end & years & $P_{rot}{^1}$ & ampl.${^2}$ \\
\hline
ASAS & 52623 - 53143 & 2002-2004 & 36.32 & 0.21 \\
ASAS & 53322 - 53884 & 2004-2006 & 36.39 & 0.19 \\
ASAS & 54092 - 54612 & 2006-2008 & 36.35 & 0.13 \\
& & &18.11 & 0.04 \\
ASAS & 54774 - 54975 & 2008-2009 & 36.30 & 0.08 \\
HAT & 54792 - 54971 & 2008-2009 & 36.22 & 0.07 \\
& & & 18.11 & 0.02 \\
HAT & 55507 - 55674 & 2009-2010 & 36.40 & 0.09 \\
& & & 18.19 & 0.03 \\
\noalign{\smallskip}\hline
Average & 52623 - 55674 & 2002-2010 & $36.33 \pm 0.03$ & ... \\
\noalign{\smallskip}\hline
\end{tabular}
\tablefoot{For comparison: $P_{\rm orb}$ = 36.522 days. ${^1}$The uncertainties in the spot-rotation periods are approximately 0.2 day, which means that periods differing by more than this amount result in noticeable changes in the folded light curve. ${^2}$Fourier amplitude.}
\end{table}
\subsection{Starspots and spot temperatures from $BVR_CI_C$ data}\label{5col}
Only two active giant stars, $\zeta$~And and $\sigma$~Gem, have so far been directly mapped using interferometry, and both exhibit large-scale magnetic structures on their surfaces (see Roettenbacher et al., \cite{rachael1}, \cite{rachael2}). By contrast, the starlight we observe from EPIC~211759736 comes from a point source, and only approximate inferences can be made about the starspots' structure. We know from the Sun that spots are not uniform (umbra-penumbra) and that the larger active regions (in solar terminology, `active nests') contain both dark (cool spots) and bright (hot faculae) areas. On other stars, very probably, we are observing a mixture of these regions with an averaged effective temperature. In the following we use both the terms `spots' and `active regions', meaning the areas on the stellar surface where the activity is concentrated.
The shape of the photometric variations due to starspots in EPIC~211759736 (see Fig.~\ref{lightcurves_orbital}) changes relatively slowly. Between 2002 and 2007 the basic light curve shape was nearly sinusoidal and was probably caused by a single dominant region on the surface of the giant. If there were more than one region they were likely close to each other in longitude. From 2008 the light curves started to reveal two dominant, presumably detached spotted regions, manifested by non-sinusoidal light curves (see Fig.~\ref{lightcurves_orbital}). By the time of the K2 observations in 2015, a small secondary maximum appeared showing that the spotted regions had moved farther away from each other. The double-humped light variation observed in $BVR_CI_C$ colors and by K2 during 2018, shown in Fig.~\ref{BVRI_data}, upper panel, clearly demonstrates two active regions on the stellar surface with the maximum possible longitude difference of about 180$^\circ$ between them. The dominant minimum of the light curve drifts slowly from phase 0.6 to phase 0.1 in Fig.~\ref{lightcurves_orbital} showing that the average rotational period of the star is indeed shorter than the orbital one. In 2018 the light curves have two minima of nearly equal depths and one of those is near phase 0.9, continuing the migration of a long-lasting (from 2002 to date) active region. The observed slow drift of the light curve minima is possibly due to the latitudinal differential rotation (see Table~\ref{periods}).
To model the spotted light variations in the $BVR_CI_C$ dataset we used our own software based on the analytical equation applied to spot modeling by Budding (\cite{budding}), assuming circular spots. Fixed parameters were the effective wavelengths of the filters used, linear limb darkening coefficients (Howarth, \cite{howarth}) taking into account the stellar parameters, and the unspotted brightness in all bandpasses. We note that there is no way to deduce the true unspotted brightness of an active star showing the usual rotational and long-term light variations. On the other hand, one would expect a higher brightness level and smaller rotational variation with fewer and fewer spots. Therefore, we took the observed maximum magnitude as the unspotted reference brightness level. The rest of the activity is supposed to be distributed evenly and/or remain on the poles. Note that we have observed only two rotations of the star in a standard photometric system, and the archival and K2 data were obtained in different, non-standard bandpasses (except ASAS which has a $V$ magnitude bandpass), and therefore cannot be compared quantitatively.
We modeled in parallel the $B-V$, $V-R_C$, $V-I_C$ color observations, assuming that there are just two circular starspots. The parallel modeling of each of these colors results in spot coordinates (longitudes, latitudes), sizes, and a single spot temperature. For EPIC~211759736 we have a high rotational inclination angle (81.85$^\circ$), and thus the two stellar hemispheres are nearly interchangeable (supposing that the rotational axis is perpendicular to the orbit, the star is seen nearly edge-on), i.e., approximately invariant under inversion. Therefore, in the course of the modeling, the spot latitudes were kept fixed at the equator; generally, it is not possible to obtain reliable spot latitudes from photometric data. In this way we had 5 free parameters to fit: two longitudes, two sizes, and a common spot temperature.
The results show that in early 2018 there were two cool spots (active regions) on the stellar surface, one facing the secondary component and the other on the opposite side, covering altogether about 10\% of the total stellar surface, with a temperature of $3960 \pm 300$~K, or about 800~K below the surface temperature of 4750~K. The individual results of the spot temperature modeling from the three different color indices are as follows: $B-V$: $3968 \pm 305$~K, $V-R_C$: $3936 \pm 364$~K, and $V-I_C$: $3978 \pm 215$~K. Although the spot temperatures from the three color indices have substantial uncertainties, their values are remarkably close to each other. The resulting spot longitudes and sizes from the three color indices are within their mutual 1$\sigma$ uncertainties. The 4-color light curves and the fitted color indices are plotted in Fig.~\ref{BVRI_data}. (We note that experiments allowing the spots' latitudes to also be free parameters gave essentially the same results for the longitudes, sizes and temperature, but with much higher uncertainty due to the error propagation.)
By contrast, the average stellar temperatures from the color indices are as follows: $B-V$: $5003 \pm 48$~K, $V-R_C$: $4759 \pm 39$~K, $V-I_C$: $4456 \pm 36$~K, where the errors are rms values. By comparison, the photospheric temperature from the TRES spectra is $4734 \pm 93$~K (rms). The three different results from the color indices show the presence of bright, hotter faculae (from $B-V$) and cool spots (from $V-I_C$), and a combination of these is reflected in the spot temperatures. Again, active regions are generally presumed to consist of both hotter and cooler regions than the surrounding quiet photosphere.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.99\columnwidth]{BarniK2_data_fit.pdf}
\caption{$BVR_CI_C$ light curves and color indices of EPIC~211759736 folded using the time of primary minimum and the orbital period from Table~\ref{specorbit}. At this epoch a clear double humped light curve is observed. The magenta curve represents the nearly contemporaneous K2 {\it Kepler raw} data from C18.
The three color indices are fitted with 3960~K spots (dots), see text for details.}
\label{BVRI_data}
\end{figure}
\section{Binary modeling and spots from K2 data}
\label{binary_model}
The accurate, two orbital-period-long $K2$ photometry made it possible to disentangle the brightness variations arising from the binarity and the spottedness. In order to do this, we first made an orbital phase-folded, binned mean light curve from the $K2$ data from C5 (see Fig.~\ref{fig:foldedlightcurve}). Then we carried out a simultaneous analysis of this folded $K2$ light curve and the RV curve with our MCMC-based light curve emulator code {\tt Lightcurvefactory} (Borkovits~et~al.,\,\cite{Borko13}, Rappaport~et~al.,\,\cite{Rappaport17}). Similarly to the method described in Borkovits~et~al.\,(\cite{borkovitsetal18}), the light curve variations arising from stellar spots rather than from the binarity are simply modeled mathematically by a harmonic function of the form:
\begin{equation}
\Delta\mathcal{L}=\sum_{i=1}^4a_i\sin(2\pi f_it)+b_i\cos(2\pi f_it),
\label{eqn:spots}
\end{equation}
where the four frequencies ($f_1=0.027367\,\mathrm{d}^{-1}$, $f_2=0.054747\,\mathrm{d}^{-1}$, $f_3=0.078965\,\mathrm{d}^{-1}$, $f_4=0.134952\,\mathrm{d}^{-1}$) represent the four highest peaks in the Fourier spectrum of the $K2$ light curve from C5. The coefficients $a_i$ and $b_i$ are calculated via a linear least-squares fit. This function is applied to the residual light curve formed by subtracting the pure eclipsing binary model from the observed light curve, at each step in the MCMC process. Then, this mathematical model of the residual light curve is added to the binary model light curve, and the actual $\chi^2$ value is calculated for this mixed model light curve.
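As an illustration (a minimal sketch with variable names of our choosing, not the actual {\tt Lightcurvefactory} implementation), the coefficients $a_i$ and $b_i$ of Eqn.~(\ref{eqn:spots}) can be obtained from the residual light curve by an ordinary linear least-squares fit:
\begin{verbatim}
import numpy as np

# the four fixed frequencies (cycles/day) from the K2 C5 Fourier spectrum
freqs = np.array([0.027367, 0.054747, 0.078965, 0.134952])

def fit_harmonics(t, resid):
    # resid = observed light curve minus the pure EB model;
    # build a design matrix with sine and cosine columns per frequency
    cols = []
    for f in freqs:
        cols.append(np.sin(2 * np.pi * f * t))
        cols.append(np.cos(2 * np.pi * f * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return A @ coef  # the harmonic model of the spot variations
\end{verbatim}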
In order to obtain the preliminary, unspotted binary model, the following nine parameters were adjusted:
\begin{itemize}
\item[(i)]{5 orbital parameters: $P_\mathrm{orb}$, eccentricity $e$, argument of periastron $\omega$, inclination $i$, time of periastron passage $\tau$;}
\item[(ii)]{2 RV-curve-related parameters: systemic velocity $\gamma$ and spectroscopic mass function $f(m_2)$;}
\item[(iii)]{2 light-curve-related parameters: the duration of the transit of the main-sequence secondary component ($\Delta t$), and the temperature ratio ($T_2/T_1$) of the two components.}
\end{itemize}
The temperature of the giant component was taken from our spectroscopic measurements (via template fitting; see Sect.\ref{sec:TRES}). The resulting temperature of $4734\pm93~K$ was rounded to 4750~K (Table \ref{params}). Furthermore, regarding the secondary component, we assumed that it is an unevolved MS star: note the difference between the rounded limb-darkened profile of the secondary eclipse and the sharp ingress and egress of the primary eclipse in Fig.~\ref{eclipses}. In keeping with this assumption, the mass and the radius of the secondary were calculated internally in the fitting code from its effective temperature via the use of the main-sequence $T(M)$ and $R(M)$ relations of Tout~et~al.\,(\cite{Tout}).
The parameters we obtained from this fit are listed in Tables\,\ref{specorbit} and \ref{params}, while the RV solution is plotted in Fig.\,\ref{RVfit}. These results are in good agreement with the {\it Gaia} parameters listed for this system ({\it Gaia} Collaboration et al., \cite{GAIA1}, \cite{GAIA2}), especially the temperature, which is essentially the same; the radius and luminosity values are also within their mutual 2$\sigma$ and 3$\sigma$ error bars (see the Introduction).
Once we have the basic orbital and stellar parameters determined, we can examine the starspots that were present on EPIC~211759736 during the K2 observations. We proceed by subtracting off the pure EB light curve from the original $K2$ light curve (see Fig.\,\ref{fig:disentangledK2lightcurve}).
\begin{figure}[tbp]
\centering
\includegraphics[width=8cm]{E211759736lcEBfoldfit_new.pdf}
\caption{The orbital-phase folded $K2$ light curve of EPIC\,211759736 from C5 (red points), together with the pure EB model (grey) and the sum of the EB model and the mathematically described photospheric variations (black, see text for details and Eqn.~(\ref{eqn:spots}) in particular). The lower panel shows the residuals after subtracting the combined light curve model from the data.}
\label{fig:foldedlightcurve}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=8.5cm]{eclipses_new.pdf}
\caption{The two eclipses of EPIC~211759736 from C5, from the first (blue) and second (magenta) rotations with the rotational modulation removed. Phases were calculated using the time of primary minimum and the orbital period from Table~\ref{specorbit}. {\it Top:} primary minimum (giant eclipses secondary), {\it bottom:} secondary minimum (secondary star transits the giant). }
\label{eclipses}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=9 cm]{RV_fit.pdf}
\caption{Radial velocity curve for EPIC~211759736 obtained with the Tillinghast Reflector Echelle Spectrograph (TRES). See Sect.~\ref{sec:TRES}.}
\label{RVfit}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=8.7cm]{E211759736lcwoEBfit.pdf}
\caption{The original, preprocessed $K2$ light curve (red dots) together with the spottedness-only light curve (black curve, upper panel), obtained by the removal of the binarity-produced light variations, i.e. the eclipses, ellipsoidal light variation and Doppler boosting effect (lower panel).}
\label{fig:disentangledK2lightcurve}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=8.9 cm]{Saul_lc_final.pdf}
\caption{Time-series spot modeling of the $K2$ data free from the eclipses, ellipsoidal light variations, and any other effects of binarity, i.e., showing the spot modulations only. In the upper panel the $K2$ light curve is plotted in magenta, together with the almost identical fit from our 4-spot model (blue line). The residuals from the 4-spot model are plotted in the lower panel (black curve), while the deviations of the 2-spot model from the data is plotted in red.}
\label{spots_timeser}
\end{figure}
In the course of the modeling we took the spot temperature to be that derived from the $BVR_CI_C$ data (i.e., 3960~K, see Sect.~\ref{5col}), since the \emph{Kepler} data have only one bandpass ($\sim$4500--8500~\AA), i.e., with basically no colour information. First, we assumed two active regions; since the precision of the {\em Kepler} photometry is very high, we set the spots' latitudes, longitudes, and sizes as free parameters to be fitted. As another experiment we put 4 spots on the stellar surface with fixed latitudes, two at $+20^\circ$ and two at $-20^\circ$, leaving 8 free parameters (the spot longitudes and sizes). The fit of the time-series analysis from the 4-spot model is shown in Fig.~\ref{spots_timeser}. The small temporal change of the light curve over the two subsequent rotations can be well understood with small motions and size variations of the assumed circular spots (active regions), probably reflecting the emergence and decay of smaller spots within the assumed circular active regions. The goodness of the fit is shown in the lower panel of Fig.~\ref{spots_timeser} for both the 2-spot (red) and the 4-spot (black) models.
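As a rough illustration of the forward model, the modulation of a single small circular spot can be written analytically (point-spot approximation, no limb darkening); the sketch below uses the inclination from Table~\ref{specorbit} and the adopted photospheric and spot temperatures, while the spot positions and sizes are arbitrary toy values:
\begin{verbatim}
import numpy as np

def spot_deficit(phase, lon_deg, lat_deg, rad_deg, incl_deg, contrast):
    # mu = foreshortening of the spot centre (cosine of the angle
    # between the local normal and the line of sight)
    i, b = np.radians(incl_deg), np.radians(lat_deg)
    phi = 2.0 * np.pi * phase - np.radians(lon_deg)
    mu = np.cos(i) * np.sin(b) + np.sin(i) * np.cos(b) * np.cos(phi)
    area = 2.0 * (1.0 - np.cos(np.radians(rad_deg)))  # spot area / (pi R^2)
    return area * contrast * np.clip(mu, 0.0, None)   # zero when hidden

contrast = 1.0 - (3960.0 / 4750.0) ** 4      # bolometric spot contrast
phase = np.linspace(0.0, 1.0, 500)
toy_spots = [(30.0, 20.0, 12.0), (210.0, -20.0, 9.0)]  # lon, lat, radius
flux = 1.0 - sum(spot_deficit(phase, l, b, r, 81.85, contrast)
                 for l, b, r in toy_spots)
\end{verbatim}
A bolometric $(T_{\rm spot}/T_{\rm phot})^4$ contrast is only an approximation to the contrast in the broad \emph{Kepler} band.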
The formal errors in the fitted spot longitudes for the $K2$ data are always $\lesssim 1^\circ$, while errors in the spot latitudes (if set as free parameters in the 2-spot model) are between $1^\circ$ and $2^\circ$, and the radii are accurate to a few tenths of a degree. As a comparison, in the case of the ground-based data the errors in the spot longitudes are about $1^\circ - 2^\circ$, while the spot latitudes are undetermined. Furthermore, using ground-based observations, we find that the spot radii are accurate only to $3^\circ-4^\circ$, since the fit is not ideal due to the fixed spot latitudes and the much larger observational errors. We note that these errors should be regarded as internal errors of the method, i.e., they do not reflect the true uncertainties of the actual spots, since those are not simple circular or steady features. An early paper by K\H ov\'ari \& Bartus (\cite{test}) gives good insight into the drawbacks of photometric starspot modeling.
Figure~\ref{spots} shows the locations of the active regions on the giant star's surface in 2015 and 2018 at the phases of the minima and the quadratures. Note that only the longitudes and sizes of the spots are reliably determined (see above and Sect.~\ref{5col}), and only the fixed-latitude results are displayed.
\begin{figure}[tbp]
\centering
\includegraphics[width=8 cm]{spots.pdf}
\caption{The positions of the active regions in 2015 from our 4-spot model fit to the $K2$ C5 data (top row)
and in 2018 from a 2-spot model to the ground-based data (middle row) and from a 4-spot model to the $K2$ C18 data (bottom row). The surface maps from left to right show the stellar hemispheres at primary minimum, first quadrature, secondary minimum and second quadrature, respectively.}
\label{spots}
\end{figure}
\subsection{Correlation between photometry and RV residuals}
It is interesting to compare the radial velocity residuals with the light variations using the nearly contemporaneous RV and $BVR_CI_C$ photometry from 2018, since the effects of the spots can alter the radial velocities (referred to as `radial velocity jitter'). As shown by \"Ozdarcan et al. (\cite{jitter}, see their Fig.~2), the radial velocity residuals show the same rotational period as the star, and variations in the light curve are correlated with changes in the jitter curve, clearly demonstrating the effect of the spots. Looking at Fig.~\ref{RV_phot_fit} we see a similar feature, i.e., the radial velocity jitter of EPIC~211759736 (upper panel) follows a similar pattern and period as the rotational modulation in brightness caused by spots (lower panel). The full jitter amplitude is somewhat high, but strong activity, higher $v\sin i$ (19.6~km\,s$^{-1}$ in our case), and lower spectral resolution can all cause higher amplitude jitter. Many important details on how radial velocity jitter appears in spotted stars can be found in Korhonen et al.~(\cite{heidi}).
\begin{figure}[tbp]
\centering
\includegraphics[width=9 cm]{RV_phot.pdf}
\caption{Comparison of the radial velocity residuals ({\it top}, red dots) with the nearly contemporaneous photometry ({\it bottom}, green dots). The curves have similar shapes, probably reflecting radial velocity jitter caused by the spots.}
\label{RV_phot_fit}
\end{figure}
\begin{table}[t]
\caption{Orbital elements of EPIC~211759736}
\label{specorbit}
\centering
\begin{tabular}{l l}
\hline\hline \noalign{\smallskip}
Parameter & value\\
\noalign{\smallskip}\hline
\noalign{\smallskip}
P (days) & 36.522 $\pm$ 0.001 \\
T$_0$ (BJD) & 2457168.181 $\pm$ 0.001 \\
$\tau$ (HJD) & 2457149.82 $\pm$ 0.05 \\
a ($R_\odot$) & 63.8 $\pm$ 1.3 \\
e & 0.057 $\pm$ 0.003\\
$\omega$ (deg)& 269.2 $\pm$ 0.5\\
i (deg) & 81.85 $\pm$ 0.26\\
$\gamma$ (km/s) & 29.74 $\pm$ 0.08\\
$K_1$ (km/s) & 30.826\\
$K_2$ (km/s) & 57.257 \\
q & 0.54 $\pm$ 0.04\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Parameters of the components of EPIC\,211759736}
\label{params}
\centering
\begin{tabular}{l l l}
\hline\hline \noalign{\smallskip}
Parameter & Primary & Secondary\\
\noalign{\smallskip}\hline
\noalign{\smallskip}
Temperature, K & 4750 (adopted)& 5283$\pm$154\\
Radius, $R_\odot$ & 12.95$\pm$0.34 & 0.82$\pm$0.03\\
Mass, $M_\odot$ & 1.69$\pm$0.12 & 0.92$\pm$0.04\\
Luminosity, $L_\odot$ & 77.96$\pm$4.19 & 0.47$\pm$0.07\\
$M_V$, mag & 0.45$\pm$0.06 & 5.77$\pm$0.15\\
Gravity, $\log g$ (cgs) & 2.44 & 4.58 \\
Metallicity & $-$0.27$\pm$0.13 & --- \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Discussion}
\begin{table*}[t]
\caption{EPIC~211759736 and other binaries with giant components: parameters used for the evolutionary track comparison}
\label{comp_params}
\centering
\begin{tabular}{l l l l l l l l l}
\hline\hline \noalign{\smallskip}
Binary & [Fe/H] & log($T_{\rm eff,1}$) (K) & log($L_1$) ($L_\odot$) & log($T_{\rm eff,2}$) (K) & log($L_2$) ($L_\odot$) & $M_1$ ($M_\odot$) & $R_1$ ($R_\odot$) & ref.\\
\noalign{\smallskip}\hline\\
EPIC~211759736 & $-0.27\pm0.13$ & $3.677$ & $1.892\pm0.023$ & $3.723\pm0.013$ & $-0.331\pm0.066$ & $1.69\pm0.12$ & $13.0\pm0.3$ &1 \\
\noalign{\smallskip}
KIC~4569590& $-0.34\pm0.09$ & $3.673\pm0.014$ & $1.942\pm0.062$ & $3.810\pm0.014$ & $0.157\pm0.070$ &$1.56\pm0.10$ & $14.1\pm0.2$ & 2\\
KIC~9540226& $-0.33\pm0.04$ & $3.671\pm0.006$ & $1.853\pm0.029$ & $3.806\pm0.006$ & $0.168\pm0.032$ & $1.33\pm0.05$& $12.8\pm0.1$ & 2\\
& & & $1.905\pm0.035^a$ & & & $1.45\pm0.05$ & $13.6\pm0.2$ & 2\\
XX~Tri & $-0.27\pm0.03$ &$3.663$ & $1.452$ & & & $1.26\pm0.15$ & $10.9\pm1.2$ & 3,6\\
& &$3.661\pm0.010$ & $1.532\pm0.006$ & & & & &4\\
$\zeta$~And & $-0.30\pm0.05$ &$3.663\pm0.009$ & $1.98$ & & & $2.6\pm0.4$ & $16.0\pm0.2$ & 5\\
& &$3.657\pm0.012$ & $1.984\pm0.002$ & & & & &4\\
BE~Psc & 0.0 & $3.653\pm0.007$ & $1.723\pm0.076$ & $3.799\pm0.007$ & $0.708\pm0.070$ &$1.56\pm0.03$ &$12.0\pm0.7$& 7\\
\noalign{\smallskip}\hline
\noalign{\smallskip}
\end{tabular}
\tablefoot{$^a$: asteroseismic value, 1: present work, 2: Gaulme et al. (\cite{gaulmeetal2}), 3: Ol\'ah et al. (\cite{overactive}), 4: {\it Gaia} DR2, 5: K\H ov\'ari et al. (\cite{zetaand}), 6: K\"unstler et al. (\cite{andreas}), 7: Strassmeier et al. (\cite{klausetal}).}
\end{table*}
It has been shown that some overactive K-giants in binaries, apart from their rotational modulation, exhibit cyclic, long-term light variations (`activity cycles') on the order of one magnitude, due to strong and variable spot activity. The primaries of these binaries do not fit the theoretical evolutionary tracks, resulting in irreconcilable age estimates (see, e.g., Ol\'ah et al. \cite{overactive}). The three stars studied in Ol\'ah et al. (\cite{overactive}), IL~Hya, XX~Tri and DM~UMa, are single-lined systems without eclipses, situated in the solar vicinity, and redder than the majority of giant stars in the solar neighborhood.
Of these three stars, XX~Tri has the same metallicity as EPIC~211759736, so it is sensible to compare them. Additionally, we chose two systems containing K-giant stars, KIC~9540226 and KIC~4569590, from Gaulme et al. (\cite{gaulmeetal1}, \cite{gaulmeetal2}), which have metallicities similar to our target and XX~Tri. These two systems have well-determined stellar parameters from the combined {\em Kepler} light curve and radial velocity analysis. One of them, KIC~9540226, in an eccentric orbit, does not show any observable spot activity. The giant star of this system has parameters from both asteroseismic and binary modeling, and was chosen because its lack of magnetic activity makes it a useful reference for the more active stars. On the other hand, both components of KIC~4569590 seem to have spots, and the rotational modulation period of the primary is equal to the orbital period of the binary, whose orbit is circular. The giant primary of this system does not show any asteroseismic signal and is the only one of the three systems in Gaulme et al.~\cite{gaulmeetal2} (their Table 1) with an active component whose metallicity is similar to that of our target star. Finally, we took the well-studied $\zeta$~And (K\H ov\'ari et al. \cite{zetaand}), also of similar metallicity; this is the first active giant star with a direct interferometric image (Roettenbacher et al. \cite{rachael1}), which is a major advantage when trying to understand the degree of spottedness.
The properties of the four comparison stars and EPIC~211759736 are summarized in Table~\ref{comp_params}, where the parameters of BE~Psc, an active eclipsing binary with a giant primary (of solar composition), are also given.
In Fig.~\ref{tracks} we compare the positions of all five of these systems in the H-R diagram using the evolutionary tracks of Bressan et al. (\cite{bressan}). The plotted tracks correspond to the metallicity of the studied stars, which is near [Fe/H]$\sim-0.3$ (see Table~\ref{comp_params}). This translates to Z$\sim0.006$ with the standard abundances of Asplund et al. (\cite{asplund}). The star and the evolutionary track belonging to a given object are shown in the same color. In the case of XX~Tri and $\zeta$~And, the locations from both the literature (filled symbols) and from the {\it Gaia} results (open symbols, {\it Gaia} Collaboration, \cite{GAIA1}, \cite{GAIA2}) are plotted. For $\zeta$~And the two values are essentially the same, while for XX~Tri they are very close, emphasizing the reliability of the temperatures and luminosities derived from earlier observations (Table~\ref{comp_params}) of these two active giant stars.
The masses of the secondary stars in the KIC systems match their corresponding evolutionary tracks to within 1$\sigma$ (Table~\ref{comp_params}). The secondary of EPIC~211759736 seems to be a bit too red for our assumption that it is an unevolved MS star. However, this star is of solar type and may well show magnetic activity (as the primary does), which could result in a lower average temperature due to surface spots; for temperature changes of active solar-type stars see Frasca \& Biazzo (\cite{fra_bia}).
Looking at Fig.~\ref{tracks} we see that all five giant primaries are situated below the tracks corresponding to their respective masses, though to differing degrees. In the middle panel of Fig.~\ref{tracks} the positions of EPIC~211759736 and the two KIC binaries are seen, enlarged. These latter two seem to be 1$\sigma$ below the corresponding evolutionary tracks for their respective masses, while the discrepancy is about 0.25~$M_\odot$ or 2$\sigma$ for EPIC~211759736.
A very large discrepancy ($\sim$1.4~$M_\odot$) is found for $\zeta$~And (K\H ov\'ari et al. \cite{zetaand}), which has large spots on its surface; its inclination is well constrained to be $70\pm2.8$ degrees (Roettenbacher et al. \cite{rachael1}). However, the mass of $\zeta$~And was derived using evolutionary tracks (for solar metallicity) that are now outdated. According to the present evolutionary tracks in Fig.~\ref{tracks}, the mass of $\zeta$~And is about 1.3~$M_\odot$, i.e., about half of its old value, and its age is around 3.5--4~Gyr; both values appear reliable.
A similar discrepancy is seen for XX~Tri ($\gtrsim 0.5\,M_\odot$), as noted already in Ol\'ah et al. (\cite{overactive}), which has a long-term light variation with an amplitude of about one magnitude and, from time to time, huge rotational modulations. Large rotational modulations are observed only for stars with high rotational inclination, since low-inclination or pole-on stars exhibit very small amplitude or no rotational modulation at all. On low-inclination objects it is nevertheless still possible to observe large cyclic light variations, even though only about half of the stellar surface is visible to the observer (an example is V833~Tau, see Fig.~2 in Ol\'ah et al.~\cite{cycles}).
During the last decade stellar evolution calculations have become much more reliable (see the case of $\zeta$~And above), although magnetic fields are still missing from the evolution models. The masses of the two KIC stars are not too discrepant from the theoretical values, and EPIC~211759736 is only 15\% higher in mass than implied by the evolutionary tracks. KIC~9540226 does not show light variations due to spots, and KIC~4569590 and EPIC~211759736 seem not to have strong activity, as inferred from their relatively small spot-induced light-curve amplitudes. The deviations from the direct modeling results for these three primary stars among the eclipsing binaries could be due to the still imperfect evolutionary models and/or to evolution proceeding differently in binary systems. We do not have an independent mass determination for $\zeta$~And, since it has no eclipses, but its light variation is not very strong either in spot modulation or on long-term timescales (cf. K\H ov\'ari et al. \cite{zetaand}). Therefore, its recently determined mass is very probably not too far from the correct one.
The other well-studied active giant in an eclipsing binary, BE~Psc, has strong long-term and rotational variability (Strassmeier et al. \cite{klausetal}), and in the quoted paper the masses of the components are well determined from photometric and radial velocity data. The components are plotted on an HRD in Fig.~\ref{bepsc} for a metallicity close to solar. While the {\em secondary} component is well matched by the evolutionary track corresponding to its known mass, the active {\em primary} deviates by about 30\%, in the sense that it has a higher mass than its temperature and luminosity imply.
The case of XX~Tri is different than the others. Ol\'ah et al. (\cite{overactive}) did not find an acceptable mass for the star using {\it the same} evolutionary tracks as in the present paper; the derived temperature and luminosity point to an inconsistent mass and age on the HRD. This star is `overactive' with huge rotational modulations and long-term variability, and in this case we believe that the strong magnetic field was able to alter even its stellar structure. Present stellar evolution models that do not take into account the magnetic field are unable to predict a reasonable mass for XX~Tri. We note here that in the case of another `overactive' star, IL~Hya, weak evidence is found for changes in its stellar radius during the long-term cycle (Ol\'ah et al. \cite{overactive}). Even the question of radius changes in the well-observed nearest star, the Sun, is still open (cf. Kosovichev \& Rozelot \cite{kos_roz}).
The internal structure of giant stars with strong magnetic fields between the core and the extended, tenuous atmosphere is not well studied. In a flux tube dynamo scenario, the flux ropes created by the magnetic dynamo in the shear layer between the core and the convection zone rise from the tachocline to the surface, causing the observable activity features, but they can also remain trapped below the stellar surface (Holzwarth \& Sch\"ussler, \cite{volkmar}), with unknown consequences. The oscillations of some red giant stars are suppressed to an undetectable level by internal magnetic fields (Gaulme et al. \cite{gaulmeetal2}), but the mechanism for this is not completely clear.
\begin{figure}[tbp]
\centering
\includegraphics[width=8.2cm]{Z006_hrd.pdf}
\includegraphics[width=8.2cm]{Z006_hrd_giant1.pdf}
\includegraphics[width=8.2cm]{Z006_hrd_dwarf.pdf}
\caption{Position of EPIC~211759736 (large red square) on the HRD (Bressan et al. \cite{bressan}), in comparison with KIC~4569590 (green) and KIC~9540226 (blue: dynamically derived values; magenta: asteroseismic values). Additionally, two well-known active giants are plotted, XX~Tri and $\zeta$~And, with both the literature and {\it Gaia} DR2 results (filled and empty squares, respectively). Tracks for 2.6 and 1.25 solar masses are displayed as gray dashed lines for the traditionally determined $\zeta$~And and XX~Tri masses.
\label{tracks}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=8.5cm]{Z014_hrd_BEPsc.pdf}
\caption{Position of BE~Psc on the HRD with metallicity close to solar. The positions of the primary and secondary are marked with red and blue squares, respectively. The corresponding evolutionary tracks are plotted with the same colors. While the secondary star matches its evolutionary track, slightly after having left the MS, the active primary deviates by about 30\% from its corresponding track (data from Strassmeier et al. \cite{klausetal}).}
\label{bepsc}
\end{figure}
\section{Summary and Conclusions}
In this paper we studied in detail EPIC~211759736, which is now the second best-characterized active giant in an eclipsing binary, after BE~Psc. We determined the physical parameters of both stellar components and provided a description of the rotational and long-term activity of the primary component. The temperatures and luminosities of both components were examined in the context of the HRD. We find that both the primary and the secondary components deviate from the evolutionary tracks corresponding to their masses, in the sense that the stars appear on the HRD at lower masses than their true masses.
We compared EPIC~211759736 with the well-studied active giants BE~Psc, $\zeta$~And and XX~Tri. Among these, only BE~Psc has eclipses. Two KIC systems were also included in the comparison sample. Except for BE~Psc, all the stars have very similar metallicities. We suggest three possible reasons for the inferred mass deviations, if any, from the evolutionary tracks: inexact evolution calculations for evolved stars, differences in stellar evolution when a star is in a binary system, and the effect of a strong magnetic field on the stellar structure.
Our results suggest that in the case of no or low activity the evolutionary tracks agree fairly well with the derived masses of the stars: these are the two KIC stars, one without any observable activity and the other with low-amplitude variability. The target of our present study, EPIC~211759736, shows a higher amplitude rotational modulation and also long-term variability; in this case, the primary is about 15\% more massive than is implied by the evolutionary tracks. BE~Psc has even stronger activity, and its primary component is about 30\% more massive than the evolutionary tracks suggest. Finally, XX~Tri has by far the strongest activity, but it lacks eclipses, so only the mass function can be determined; for this star it is not possible to obtain a reliable mass from the HRD.
The mass discrepancies seem to grow with the strength of the stellar activity in our sample of active giant stars. Possibly the lack of magnetic fields in the evolution calculations and the effects of evolving in a binary play a role in the HRD positions of all of our stars, but these would seem to be minor effects. The strength of the magnetic activity, however, may significantly alter even the structure of the stars. The KIC stars and $\zeta$~And possibly have weaker internal magnetic fields, while EPIC~211759736, BE~Psc and XX~Tri have progressively stronger magnetic fields, as revealed by their photometric behaviour. This is manifested clearly in the deviations of their masses from the theoretical predictions.
Observations of eclipsing binaries with active giant components are of crucial importance for describing stellar evolution after the MS in the presence of magnetic fields. EPIC~211759736 is now one of the two well-studied active giants in eclipsing binaries, which should lead to better constraints on the structure and evolutionary trends of this type of magnetically active star.
Eclipsing spotted giant stars, such as EPIC~211759736, would benefit greatly from long-term, multicolor observations. These would allow us to study the average temperature changes of the active regions, which may reflect possible changes in the spot/plage ratio. More effort is needed to further characterise systems such as EPIC~211759736 and BE~Psc, as well as most of the active KIC stars from Gaulme et al. (\cite{gaulmeetal2}), all of which have primary components of 1.0--1.6~$M_{\odot}$ and secondaries of 0.8--1.3~$M_{\odot}$. The different evolutionary stages of similar-mass, solar-like stars in close binaries, with different activity levels of possibly magnetic origin in one or both components, should shed light on the effect of the magnetic field during evolution on the MS and on the giant branch of the HRD.
\begin{acknowledgements}
Thanks are due to an anonymous referee for carefully reading the manuscript and for good suggestions.
We are grateful to H. Korhonen for advice concerning radial velocity jitter in active stars. DL and TJ gratefully acknowledge Allan R. Schmitt for making his light curve examining software {\tt LcTools} freely available.
This work has been supported by the Hungarian Science Research Program OTKA-K-113117.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. The DASCH project at Harvard is grateful for partial support from NSF grants AST-0407380, AST-0909073, and AST-1313370.
\end{acknowledgements}
\section{Introduction}
The possibility to study the primordial phases of our
Universe and its properties and evolution through CMB anisotropies
relies on our capability to precisely extract the cosmological signal
from observations \citep{teg00_fore}.
Maps of the microwave sky include many
Galactic and extragalactic astrophysical contributions.
A correct recovery of the CMB anisotropy field requires
an accurate removal of those foreground signals from the
observed maps. Current knowledge of the foreground components
permits one to retrieve the bulk of the cosmological
information encoded in the CMB anisotropy, and, in particular,
its angular power spectrum (APS).
Nevertheless, a deeper understanding of the foregrounds is
crucial to settle important cosmological
issues, which arose from the
{\sc WMAP}~\footnote{http://lambda.gsfc.nasa.gov/} results
\citep{naselski06_wmap1_et_fore,naselski06_wmap_ell4,chiang07_lowLanom}
and could potentially be addressed by the forthcoming
{\sc Planck}~\footnote{http://www.rssd.esa.int/Planck}
mission \citep{tauber04_cospar}.
Moreover, it would enable a precise reconstruction of the
individual foreground components and therefore
a complete astrophysical exploitation of the satellite data.
Galactic synchrotron emission is the major source of
contamination at frequencies below 50~GHz
for intermediate and large angular scales,
as recently confirmed by the impressive {\sc WMAP} results
\citep{ben03_wmap_1yr_fore,hins06_wmap_3yr_temp}.
Synchrotron radiation \citep{rybicki79} arises
from cosmic ray electrons gyrating in the magnetic field
of our Galaxy.
The energy spectrum and the density of the cosmic ray electrons
as well as the magnetic field strength vary across the Galaxy,
therefore the observed synchrotron emission will depend
on the frequency and on the region of the sky.
Radio observations at $\nu \lesssim 5$~GHz provide
the clearest picture of the Galactic synchrotron morphology,
since at these frequencies the diffuse non-thermal radiation
clearly dominates over all other emission components
outside the Galactic plane.
In the past, the 408~MHz all-sky map \citep{haslam82_408mhz}
has extensively been used as a template for the
Galactic synchrotron emission in foreground separation attempts
(e.g. Bouchet \& Gispert~1999; Bennett et al.~2003).
In addition, that map as well as other less suited
surveys have been exploited to find an appropriate
parametrization of the synchrotron emission APS to be
$i)$ extrapolated to the microwave range for estimating the
contamination of CMB anisotropies at different angular scales
\citep{giard01_hardtrao,bacci01_synch}
and
$ii)$ used to set priors in foreground separation applications
\citep{teg96_lf,bouchet99_wf,bouchet99_foresim}.
The outcome of these analyses is that the APS of the
synchrotron emission computed over large portions of the sky
can be modelled
by a power law, i.e. $C_{\ell} \sim k\; \ell^{\alpha}$
($\ell \sim 180^{\circ}/{\theta}$), with a spectral
index $\alpha \sim [-3,-2.5]$ for $\ell \lesssim 200$, corresponding
to angular scales $\theta \gtrsim 1^{\circ}$.
Clearly, such a general result is just a first step as it
does not describe the complexity of the synchrotron emission APS,
whose parameters are expected to change with frequency
and with sky direction.
We have carried out a detailed analysis of all-sky radio
maps and improved on previous attempts by providing an
accurate characterization of the synchrotron
emission APS.
The results obtained for the new 1.4~GHz polarization
all-sky survey (Reich et al., in prep.) will be
reported in a forthcoming companion paper.
The analysis presented in this paper focuses on
the synchrotron emission APS in total intensity.
A substantial improvement was possible by using
a new all-sky map at 1.42~GHz (Reich et al., in prep.),
which has a higher angular resolution and better sensitivity,
in addition to the all-sky map at 408~MHz.
These maps are currently the best suited data
for studying the Galactic synchrotron emission
at largest angular scales.
A more detailed description of some technical aspects related to this work
can be found in \citet{phdthesis}.
We extensively used the
{\tt HEALPix}\footnote{http://healpix.jpl.nasa.gov/}
software package \citep{gorski05_healpix}.
{\tt HEALPix} (Hierarchical Equal Area
isoLatitude Pixelization) is a curvilinear partition of
the sphere optimized for fast spherical harmonics
transforms and angular power spectrum estimation.
The latter task is performed by the facility {\tt Anafast}.
We also made use of the data reduction package
based on the NOD2-software \citep{haslam74_nod}. \\
The paper is organized as follows.
Sect.~2 describes the characteristics of the 408~MHz
and 1.42~GHz total intensity surveys, their projection
onto {\tt HEALPix} maps and noise considerations.
In Sect.~3 the Galactic radio emission APS
over large areas is examined, which reveals
the necessity of a discrete source subtraction
for a correct evaluation of the diffuse synchrotron APS.
The two all-sky maps are decomposed into a map of the
diffuse component and a map of discrete sources.
Their angular power spectra are derived and discussed.
The results obtained by fitting the angular power
spectra of the diffuse component maps are presented.
In Sect. 4 the radio survey angular power spectra
are extrapolated to the microwave range for a
comparison with the WMAP 3-yr results.
Sect. 5 is dedicated to a local analysis of the radio map APS.
We summarize our results and conclusions in Sect.~6. \\
\section{The data}
The present analysis focuses on the APS of the
Galactic synchrotron emission at radio frequencies.
However, the 23 GHz synchrotron component obtained
by \citet{hins06_wmap_3yr_temp}
using the WMAP 3-yr data has also been
considered to some extent and will be further
discussed in Sect.~\ref{res_extrapolation}.\\
\subsection{The 408~MHz and 1420~MHz surveys}
The 408~MHz map \citep{haslam82_408mhz} was produced by merging
different datasets obtained with large parabolic reflector
telescopes (Jodrell Bank 76~m,
Effelsberg 100~m and Parkes 64~m telescopes -
see Fig.~1 of Haslam et al.~1982), using a similar
observing strategy and the same calibration procedure.
The final map is characterized by an angular resolution of
$\theta_{HPBW} \sim 0\fdg85$ and a $20\arcmin$ pixel rms-noise
of about 670~mK.
The version used in the present analysis was corrected
for a zero level problem concerning the portion of the sky observed
from Jodrell Bank \citep{reich88_betasynchr}.\\
The total intensity map at 1420~MHz has been obtained by combining a northern and
a southern sky survey (Reich et al., in preparation).
Both surveys are on an absolute temperature scale and zero level
by using low resolution sky horn measurements \citep{testori01_ssky}.
This includes a correction for far-side lobe contamination
for single-dish telescopes.
Receiving systems were used, which provide total intensities unaffected by
linear polarization.
The northern sky survey
was observed with the Stockert 25-m telescope and
extends in declination from $-19^{\circ}$ to $90^{\circ}$
\citep{reich82_stock,reich86_stock}.
The southern sky survey was carried out with the Villa Elisa 30-m telescope
in Argentina and covers the declination range
$\delta \in [-90^{\circ}, -10^{\circ}]$
\citep{reich01_ssky}.
Both have an angular resolution $\theta_{HPBW} \sim 36'$
and overlap in the declination range $[-19^{\circ}, -10^{\circ}]$.
The resulting map has a $15\arcmin$ pixel rms-noise of $\sim 17 {\rm mK}$.
The original maps are provided
as equidistant cylindrical ({\tt ECP}) maps.
For the present analysis these maps have been projected
into the {\tt HEALPix} \citep{gorski05_healpix}
pixelization scheme adopted by the {\sc WMAP} and
{\sc Planck } Consortia.
For this purpose a simple regridding algorithm has been implemented,
which is described in detail by \citet{techrep}.
The reliability of the projection provided by this simple
approach has been verified by successfully performing forward
and backward transformations between the two tessellation
schemes.
The produced {\tt HEALPix} maps have a pixel size
of $\sim 7'$ (the number of pixels for an all-sky map
is $N_{pix}=12\,n_{side}^2$; here we used $n_{side}=512$).
\subsection{Noise estimate}
The authors of the radio maps have estimated the rms-noise
directly on the {\tt ECP} maps
by calculating the standard deviation of
low emission regions.
Going from a Cartesian representation of the sphere to the
{\tt HEALPix} scheme, the rms-noise per pixel should theoretically decrease
toward the polar caps, according to the formula:
$$\sigma_{pixel,{\rm {\tt HEALPix}}} \sim \frac{1}{\sqrt N} \times \sigma_{pixel,{\rm {\tt ECP}} }$$
where $N$ is the number of {\tt ECP} pixels corresponding to each
{\tt HEALPix} pixel at a fixed latitude. Such a relation
holds under the hypothesis that the noise is Gaussian
and uncorrelated among the pixels. Neither assumption
is necessarily satisfied in the examined surveys,
primarily because the pixel size is about half the angular resolution.
Furthermore, the rms-noise quoted for the 408~MHz and 1420~MHz surveys
quantifies the temperature fluctuations per pixel, which are due
not only to the receiver white noise, but also to the
contribution of unresolved sources
and of residual systematic effects (such as ``scanning strategy effects'')
that could not be fully eliminated in the data
reduction procedure.
The overall rms-noise in the {\tt HEALPix} maps
is therefore probably higher than the estimate determined
in this way, so the formula provides a lower limit.
With this formula we constructed
a map of rms-noise at both frequencies.
The rms-noise per pixel decreases for increasing latitude
and varies between $\sim 10$~mK and $\sim 18.7$~mK at 1420 MHz and
$\sim 0.5$~K and $\sim 0.7$~K at 408 MHz.
Due to varying integration times,
the rms-noise is not constant over the {\tt ECP} map,
but is expected to diminish toward the celestial poles.
Taking this effect into account as well when applying
the above formula, the derived estimate
decreases at most by a factor of $\sim 2$.
In this way we obtain an optimistic lower limit for the rms-noise
in the {\tt HEALPix} maps.
The corresponding APS is computed as:
$$C^{noise}_{\ell} \sim c^{noise} \sim 4\pi \sigma^2/N_{pix}$$
where $\sigma$ is the mean value of the rms-noise
in the considered area and $N_{pix}$ is the total
number of pixels in the {\tt HEALPix} map.
A generous upper limit for the noise APS is provided instead by
the high multipole tail of the APS.
This way we have bracketed the noise APS for each
considered sky region.
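For concreteness, both formulas can be evaluated with a few lines of code; the sketch below simply restates them, using the quoted $\sim$17~mK pixel noise of the 1420~MHz survey as an example:
\begin{verbatim}
import numpy as np

NSIDE = 512
NPIX = 12 * NSIDE**2

def sigma_healpix_lower_limit(sigma_ecp, n):
    # n = number of ECP pixels falling into one HEALPix pixel;
    # valid only for Gaussian noise, uncorrelated among pixels
    return sigma_ecp / np.sqrt(n)

def c_noise(sigma_mean):
    # Flat (white) noise APS from the mean pixel rms of the region
    return 4.0 * np.pi * sigma_mean**2 / NPIX

print(c_noise(17.0))  # ~1.2e-3 mK^2 for the all-sky 1420 MHz map
\end{verbatim}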
\subsection{Statistically significant multipole range}
The {\tt HEALPix} maps produced at 408~MHz and 1420~MHz contain precise
statistical information only for angular scales that are:
\begin{itemize}
\item[-] larger than the beam,
i.e. for $\theta \gtrsim \theta_{HPBW}$, therefore the maximum multipole
relevant in the APS analysis of these surveys is
$\ell_{max} \sim 180^{\circ}/\theta_{HPBW}[^{\circ}]$;
\item[-] smaller than the maximum angular extent of the considered
area, $\theta_{cov}$, so that the coverage sets the minimum
multipole. A safe choice is
$\ell_{min} \sim 5 \times 180^{\circ}/\theta_{cov}[^{\circ}]$.
\end{itemize}
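These two constraints translate into a simple recipe for the usable multipole window; a minimal sketch follows, with numbers appropriate for the 1420~MHz survey:
\begin{verbatim}
import numpy as np

def ell_window(theta_hpbw_deg, theta_cov_deg):
    ell_max = int(round(180.0 / theta_hpbw_deg))          # beam limit
    ell_min = int(np.ceil(5.0 * 180.0 / theta_cov_deg))   # coverage limit
    return ell_min, ell_max

print(ell_window(0.6, 180.0))  # 36' beam, all-sky coverage -> (5, 300)
\end{verbatim}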
\section{The APS of large areas: analysis for various cuts}
\begin{figure}[!t]
\vskip -0.17cm
\hskip +0.8cm
\includegraphics[width=5cm,height=8cm,angle=90]{8435f01.ps}
\vskip +0.3cm
\caption{ Angular power spectra of the Galactic plane cut-offs
($|b_{gal}| \ge b_{cut}$) at radio frequencies.
Each line refers to a certain $b_{cut}$ (from the top: black $\to 40^{\circ}$,
fuchsia $\to 50^{\circ}$, green $\to 60^{\circ}$).
}
\label{simmaps}
\end{figure}
To investigate the dependence of the synchrotron emission APS parameters
on latitude, the Galactic plane has been ``cut off'' from the maps at
different latitudes by setting to zero pixels with $|b_{gal}| \le b_{cut}$,
where $b_{cut}=5^{\circ},10^{\circ},20^{\circ},
30^{\circ},40^{\circ},50^{\circ},60^{\circ}$. At the same time,
this approach preserves the largest possible coverage,
important to keep the widest range of statistically significant multipoles.
We also considered ``asymmetric cuts'', i.e. regions
with $b_{gal} \ge b_{cut}$ (northern cuts)
and $b_{gal} \le -b_{cut}$ (southern cuts),
thus taking into account the difference
between the two Galactic hemispheres.
In fact, the northern hemisphere contains a large and bright feature of
the radio sky, the North Polar Spur (NPS), which is
widely believed to be an old supernova remnant in the solar
vicinity \citep{salter83,egger95}.
We computed the corresponding APS by using the facility
{\tt Anafast} of the {\tt HEALPix} package and renormalized
it to account for the incomplete sky coverage.
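For illustration, this step can be reproduced with the {\tt healpy} Python bindings of the {\tt HEALPix} library; the sketch below masks the plane, calls {\tt anafast} and applies a first-order $1/f_{sky}$ renormalization, the simplest correction for incomplete sky coverage (helper names other than the {\tt healpy} calls are illustrative):
\begin{verbatim}
import numpy as np
import healpy as hp

def cut_aps(sky_map, b_cut_deg, lmax=300):
    # sky_map: HEALPix map in Galactic coordinates (RING ordering)
    nside = hp.get_nside(sky_map)
    theta, _ = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
    b = 90.0 - np.degrees(theta)             # Galactic latitude per pixel
    cut = np.array(sky_map, copy=True)
    cut[np.abs(b) <= b_cut_deg] = 0.0        # "cut off" the Galactic plane
    f_sky = np.mean(np.abs(b) > b_cut_deg)   # retained sky fraction
    return hp.anafast(cut, lmax=lmax) / f_sky
\end{verbatim}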
The angular power spectra derived for
$|b_{gal}| \ge b_{cut}$ with $b_{cut} \ge 40^{\circ}$
are shown in Fig.~\ref{simmaps}, as representative examples.
All the recovered angular power spectra
flatten towards higher multipoles.
Such a behaviour of the APS might be due to noise,
systematic effects (``stripes''), discrete sources or might be an
intrinsic characteristic of the synchrotron emission
fluctuation field.
Instrumental white noise can be discarded because its APS should be
constant, whereas after the flattening the angular power spectra
decrease with $\ell$, as expected in the presence of beam smoothing.
``Stripes'' are systematic baseline
distortions in the telescope's scanning direction
and are mainly due to the limited stability of the
receiving system and to the influence of weather conditions.
They can also be excluded from the list of possible
causes through a comparison with a destriped version
of the 408~MHz map
\citep{plat03_destr}.
The cut-off APS of the two versions of the 408~MHz map
present only marginal differences at intermediate multipoles
(see Fig.~\ref{haslamvsplatania}).\\
\begin{figure}[!t]
\vskip -0.3cm
\hskip +0.3cm
\includegraphics[width=6cm,height=9.5cm,angle=90]{8435f02.ps}
\caption{Comparison between the cut-off angular power spectra of the original
(fuchsia lines at the top) and destriped (blue) version of the 408~MHz map.
The cut-off angular power spectra of the difference map are also shown
(green lines at the bottom).}
\label{haslamvsplatania}
\end{figure}
\subsection{Discrete source subtraction}
\label{DS}
\begin{figure}[!t]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=5cm,angle=90,clip=]{8435f03a.eps}\\
\includegraphics[width=5cm,angle=90,clip=]{8435f03b.eps} \\
\end{tabular}
\end{center}
\caption{ Mollweide projection of the {\tt HEALPix} maps produced
at 408 MHz (top) and 1420 MHz (bottom)
by subtracting the discrete sources.
The maps are in Galactic coordinates.
The center of the maps is $l = 0 , b = 0$.
The Galactic longitude increases
toward the left up to $180^{\circ}$. }
\label{nosrc_et_src_maps_radio}
\end{figure}
Beside diffuse emission, a large number of discrete sources (DSs)
are visible in the radio maps. A DS subtraction has been done
by performing a 2-dimensional Gaussian fitting that
also provides an estimate of the diffuse
background, which is approximated by a tilted plane.
Such an estimate has been used to fill the pixels originally
corresponding to the subtracted DSs.
Where the background emission shows strong gradients
the source fitting is more difficult.
Consequently, the flux limit above which all discrete
sources most likely have been subtracted is different close to the
plane and far out of it.
Namely, for $|b| \gtrsim 45^{\circ}$ all the sources with
peak flux above $\sim 0.8$~Jy (resp. $\sim 6.4$~Jy)
have been subtracted from the 1420~MHz (resp. 408~MHz) map,
whereas for $|b| \lesssim 45^{\circ}$ the
source detection threshold
is $\sim 4.6$~Jy (resp. $\sim 63.8$~Jy).
All discrete sources that could be reasonably fitted by
a Gaussian profile have been eliminated and two new maps have been
generated at 408~MHz and at 1420~MHz (Fig.~\ref{nosrc_et_src_maps_radio}).
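For illustration, the fitting step can be sketched as follows: each DS is modelled as an elliptical Gaussian on top of a tilted plane, and the plane is kept as the estimate of the diffuse background used to refill the pixels. The sketch below uses {\tt scipy}'s {\tt curve\_fit}; the helper names are illustrative and this is not the actual pipeline used to produce the maps:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_plane(xy, amp, x0, y0, sx, sy, p0, px, py):
    x, y = xy
    g = amp * np.exp(-0.5 * (((x - x0) / sx)**2 + ((y - y0) / sy)**2))
    return g + p0 + px * x + py * y   # source + tilted-plane background

def background_estimate(patch):
    # patch: small 2-D array centred on a candidate discrete source
    ny, nx = patch.shape
    y, x = np.mgrid[0:ny, 0:nx]
    guess = [patch.max() - np.median(patch), nx / 2.0, ny / 2.0,
             2.0, 2.0, np.median(patch), 0.0, 0.0]
    popt, _ = curve_fit(gauss_plus_plane, (x.ravel(), y.ravel()),
                        patch.ravel(), p0=guess)
    return popt[5] + popt[6] * x + popt[7] * y  # plane used to refill pixels
\end{verbatim}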
\begin{figure}[!t]
\vskip +0.2cm
\centering
\begin{tabular}{cc}
\includegraphics[width=2.5cm,angle=90]{8435f04a.eps}&
\includegraphics[width=2.5cm,angle=90]{8435f04b.eps}\\
\includegraphics[width=3cm]{8435f04c.eps} &
\includegraphics[width=3cm]{8435f04d.eps} \\
\includegraphics[width=3cm]{8435f04e.eps} &
\includegraphics[width=3cm]{8435f04f.eps} \\
\end{tabular}
\caption{ {\tt HEALPix} maps of the Galactic plane cut-offs
extracted from the 1.4~GHz all-sky map after DS subtraction.
A mollweide projection is displayed for the cuts with
$|b_{cut}|\le 20^{\circ}$ and a gnomonic view (centered on a
Galactic pole)
for those with $|b_{cut}|\ge 40^{\circ}$.
}
\label{mapcut1420nosrc}
\end{figure}
\begin{figure}[!h]
\vskip +0.8cm
\centering
\begin{tabular}{cc}
\includegraphics[width=2.5cm,angle=90]{8435f05a.eps}&
\includegraphics[width=2.5cm,angle=90]{8435f05b.eps}\\
\includegraphics[width=3cm]{8435f05c.eps} &
\includegraphics[width=3cm]{8435f05d.eps} \\
\includegraphics[width=3cm]{8435f05e.eps} &
\includegraphics[width=3cm]{8435f05f.eps} \\
\end{tabular}
\vskip -0.3cm
\caption{ As in Fig.~\ref{mapcut1420nosrc}, but at 408~MHz.}
\label{mapcut408nosrc}
\end{figure}
The subtracted DSs (see Burigana et al.~2006 for
a map) are mostly point sources,
except for some rather extended objects, as
for instance the radiogalaxy Centaurus A,
appearing in the original radio maps right of the
Galactic center at $b_{gal} \sim 20^{\circ}$.
Such extended objects are among the brighter subtracted DSs and concentrate
along or in the proximity of the Galactic plane.
They are mainly Galactic sources, i.e. {\tt HII}-regions
or supernova remnants.
On the contrary, the DSs subtracted at medium and high latitudes
are nearly all extragalactic sources.
The maps of some Galactic plane cut-offs at 1420~MHz (resp. 408~MHz)
after DS subtraction are shown in
Fig.~\ref{mapcut1420nosrc} (resp. Fig.~\ref{mapcut408nosrc}).
The maps are displayed adopting different scales, in order
to emphasize the relative importance of the various components.
Note that ``scanning strategy effects'' are clearly visible
in the southern sky at high latitude in the map at 408 MHz.
As discussed above,
the angular power spectra of the destriped and original
versions of the Haslam map do not exhibit significant differences,
thus implying that ``stripes'' are not
an issue for the APS analysis at 1420 MHz.
Figure~\ref{allapsaftersub1420} (resp. Fig.~\ref{allapsaftersub408})
shows the APS of the Galactic plane cut-offs at 1420~MHz (resp. 408~MHz)
for the original map, for the DS-subtracted map and
for the map of subtracted DSs.
\begin{figure}[!h]
\hskip +1.0cm
\includegraphics[width=5cm,height=8cm,angle=90]{8435f06.ps}
\vskip +0.7cm
\caption{ APS of some Galactic plane cut-offs for the 1420 MHz maps:
(from the top in each panel)
original ($\to$ fuchsia), after discrete source subtraction ($\to$ blue)
and DSs only ($\to$ green). }
\label{allapsaftersub1420}
\end{figure}
The APS of the DS maps almost perfectly matches the flat
part of the original map APS at large $\ell$. This result
identifies DSs as the reason for the flattening of
the original APS and also confirms that the major contribution
from source contamination has been eliminated in the
DS-subtracted maps.
At high latitude the APS of the Galactic fluctuation
field is dominated by the DS contribution
for $\ell \gtrsim 100$, due to the enhanced
relative contribution of the DSs with respect to the weak diffuse
background emission.
\begin{figure}[!b]
\vskip +0.2cm
\hskip +1cm
\includegraphics[width=5cm,height=8cm,angle=90]{8435f07.ps}
\vskip +0.5cm
\caption{ As in Fig.~\ref{allapsaftersub1420}, but at 408 MHz.}
\label{allapsaftersub408}
\end{figure}
Figure~\ref{allNScut_aps_src_408_1420} shows
the APS of the various cuts for the DS map at 1420 MHz.
Note that for all the southern cuts and for the northern cuts
with $b_{cut} \ge 30^{\circ}$ the DS angular power spectra
are rather flat up to $\ell \sim 100$ and then
decrease as expected for beam smoothing.
On the contrary, for the northern cuts at $5^{\circ} - 20^{\circ}$
the DS angular power spectra present a power law behaviour
at lower multipoles, which implies the existence of significant
fluctuations also at the larger angular scales, as expected
in the presence of relatively extended discrete structures.
The DS APS of the cut at $20^{\circ}$ is superimposed
on that of the cut at $30^{\circ}$ for $\ell \gtrsim 100$,
whereas it exhibits a power law behaviour at lower multipoles.
The difference between the angular power spectra of the northern cuts
at $20^{\circ}$ and $30^{\circ}$ is therefore due to the DSs located in
the sky region with
$20^{\circ} \le b_{gal} \le 30^{\circ}$,
which includes Centaurus A,
thus explaining the power law behaviour of the APS at the
lower multipoles.
The same behaviour is found at 408 MHz.
\begin{figure}
\hskip +0.7cm
\includegraphics[width=4cm,height=9cm,angle=90]{8435f08.ps}\\
\caption{Angular power spectra of the northern ($b_{gal} \ge b_{cut}\;\to$
left panel) and southern ($b_{gal} \le -b_{cut}\;\to$ right)
cuts for the map of discrete sources at 1420~MHz.
Color legend (see online version):
black (dotted) $\to |b_{cut}|=5^{\circ}$,
black $\to |b_{cut}|=10^{\circ}$,
green $\to |b_{cut}|=20^{\circ}$, red $\to |b_{cut}|=30^{\circ}$,
dark blue $\to |b_{cut}|=40^{\circ}$, fuchsia $\to |b_{cut}|=50^{\circ}$,
light blue $\to |b_{cut}|=60^{\circ}$.
}
\label{allNScut_aps_src_408_1420}
\end{figure}
\subsection{The APS after source subtraction}
The angular power spectra of the maps after source subtraction
approximately follow a power law, as expected for the diffuse
Galactic synchrotron emission.
For $|b_{cut}| \le 40^{\circ}$ the angular power spectra of
all symmetric ($|b| \ge b_{cut}$) and
asymmetric ($b \le -b_{cut}$, $b \ge b_{cut}$) Galactic cuts
are very similar to each other and appear progressively shifted
downward (see top panels of Fig.~\ref{allNScut_aps_nosrc_408_1420}).
This result reflects the fact that the Galactic
diffuse emission becomes weaker for increasing latitude.
\begin{figure}[!t]
\hskip +0.8cm
\includegraphics[height=9cm,angle=90]{8435f09.ps}\\
\caption{
Comparison between the APS of the cuts for the DS-subtracted map
at 1420~MHz.
The left (respec. right) panels display the angular power spectra
of the northern (respec. southern) cuts.
First (respec. second) row panels:
$b_{cut}=5^{\circ}-40^{\circ}$ (respec. $b_{cut}=40^{\circ}-60^{\circ}$)~.
Color legend (see online version):
black dotted $\to |b_{cut}|=5^{\circ}$, black $\to |b_{cut}|=10^{\circ}$,
green $\to |b_{cut}|=20^{\circ}$, red $\to |b_{cut}|=30^{\circ}$,
dark blue $\to |b_{cut}|=40^{\circ}$, fuchsia $\to |b_{cut}|=50^{\circ}$,
dark green $\to |b_{cut}|=60^{\circ}$.
}
\label{allNScut_aps_nosrc_408_1420}
\end{figure}
The angular power spectra of the symmetric cuts
with $|b_{cut}| \ge 40^{\circ}$ are superimposed.
The same result holds for the angular power
spectra of the northern cuts, whereas in the southern hemisphere
the APS amplitude decreases for increasing $|b_{cut}|$.
This discrepancy leads to the conclusion that the angular
power spectra of the symmetric cuts with $|b_{cut}| \ge 40^{\circ}$
are mainly influenced by the northern hemisphere.
Indeed, the angular power spectra of the northern
cuts at $b_{cut} \ge 20^{\circ}$ have amplitudes
larger than those of the southern cuts at both frequencies.
As an example, Fig.~\ref{aps_NvsS_nosrc_1420} shows the
comparison between the angular power spectra of the
northern and the southern cuts at 1420 MHz.
\begin{figure}
\centering
\includegraphics[width=4cm,height=9cm,angle=90]{8435f10.ps}\\
\caption{Comparison between the angular power spectra of the northern
($b_{gal} > b_{cut}$, $\to$ blue - upper lines) and
southern ($b_{gal} < -b_{cut}$ $\to$ red - lower)
cuts for the DS-subtracted map at 1420 MHz.
}
\label{aps_NvsS_nosrc_1420}
\end{figure}
The angular power spectra of the two Galactic hemispheres
can reasonably be expected to be similar for the Galactic diffuse
synchrotron emission, while they turn out to be different in
amplitude and to some extent (mostly at smaller scales)
in shape. That difference results from the combination of two effects.
In the southern sky, the angular power spectra of the DS-subtracted maps
tend to flatten at $\ell \sim 150-200$ due to the presence
of unsubtracted sources, whose relative contribution to the fluctuation
field increases because of the low background signal.
In the northern hemisphere, the Galactic diffuse synchrotron
emission is strongly influenced by the radiation of the NPS.
\begin{figure}[!b]
\hskip +1.0cm
\includegraphics[width=5.5cm,height=8.5cm,angle=90]{8435f11.ps}\\
\caption{Angular power spectra of the northern (top panels)
and southern cuts (bottom) for the DS-subtracted map at 1420 MHz
together with the best fit curves obtained. The individual contributions
of synchrotron emission (blue lines) and of sources (green lines)
are also plotted (smoothed by the beam).
}
\label{bfit1420}
\end{figure}
\subsection{Fit of the APS after source subtraction}
\label{sect_fit_aps_nos}
The radio maps after source subtraction include
two astrophysical components:
the Galactic diffuse emission and the
(mainly) extragalactic source contribution, which
are convolved with the telescope beam and
contaminated by the instrument noise, which
can be approximately treated as white noise.
We therefore express the corresponding APS as
\begin{equation}
C^{map}_{\ell} \sim ( C_{\ell}^{synch}+ C_{\ell}^{src} ) W_{\ell} + c^{noise}
\label{aps_best_model}
\end{equation}
where
$W_{\ell}={\rm e}^{-\ell (\ell+1)\sigma_{b}^2}$
is the window function of the symmetric and Gaussian
beam\footnote{The pixel window function has been
also taken into account, but its effect is not
important here, because the pixel size is
significantly smaller than $\sigma_{b}$.},
with
$\sigma_{b}=\theta_{HPBW}[{\rm rad}]/\sqrt{8\,{\rm ln}\,2}$.
The synchrotron emission APS is empirically modelled
as $C_{\ell}^{synch}=k \ell^{\alpha}$.
We note that such an empirical choice qualitatively can be explained by
magnetohydrodynamic turbulence arguments
\citep{chep98_synch_aps,cho02_synch_aps,cho03_synch_aps}.
The contribution of the unsubtracted DSs is approximated
by a constant term, according to the formalism of Poisson
fluctuations from extragalactic point sources \citep{franc89}.
The contribution of fluctuations due to source clustering
is expected to be negligible with respect to the Poisson term
at the source detection threshold achieved in our
maps \citep{toffo98_src}.
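For illustration, Eq.~(\ref{aps_best_model}) can be implemented in a few lines; here {\tt scipy}'s {\tt curve\_fit} merely stands in for the adaptive-grid least-squares search described below, and a weighting or a fit in $\log C_\ell$ would be advisable in practice:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

SIGMA_B = np.radians(36.0 / 60.0) / np.sqrt(8.0 * np.log(2.0))  # 36' beam

def aps_model(ell, k, alpha, c_src, c_noise):
    # Eq. (1): beam-smoothed power law + flat source term + white noise
    w = np.exp(-ell * (ell + 1.0) * SIGMA_B**2)
    return (k * ell**alpha + c_src) * w + c_noise

# ell, cl: multipoles and APS of a DS-subtracted cut, e.g.:
# popt, pcov = curve_fit(aps_model, ell, cl, p0=[1.0, -2.8, 0.1, 0.01])
# k100 = popt[0] * 100.0**popt[1]   # normalized amplitude, = C_(ell=100)
\end{verbatim}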
\begin{table}[!b]
\newcommand\T{\rule{0pt}{2.6ex}}
\newcommand\B{\rule[-1.2ex]{0pt}{0pt}}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Coverage \T \B & \multicolumn{4}{|c|}{Best fit parameters at 1420 MHz}\\
\hline
\T \B & $k_{100}$ (mK$^2$) & $\alpha$ & $c^{src}$ (mK$^2$) & $c^{noise}$ (mK$^2$)\\
\hline
$b_{gal} \ge 5^{\circ}$ \T \B & $ 4.57 ^{+ 0.95}_{-0.13 }$ &
$-2.75 ^{+ 0.16}_{-0.03 }$ &
$ 0.621 ^{-0.621}_{ +0.079 }$ &
$0.0164 ^{+0.1222 }_{-0.0164}$ \\
\hline
$b_{gal} \le -5^{\circ}$ \T \B & $ 4.00 ^{+ 0.78}_{-0.09 }$ &
$-2.79 ^{+ 0.15}_{-0.02 }$ &
$ 0.522 ^{-0.522 }_{ +0.069 }$ &
$0.0164 ^{+0.1003 }_{-0.0164}$ \\
\hline
$b_{gal} \ge 10^{\circ}$ \T \B & $ 1.61 ^{+ 0.38}_{-0.10 }$ &
$-2.88 ^{+ 0.21}_{-0.12 }$ &
$ 0.227 ^{-0.227}_{ +0.061 }$ &
$0.0151 ^{+0.0425 }_{-0.0151}$ \\
\hline
$b_{gal} \le -10^{\circ}$ \T \B & $ 1.41 ^{+ 0.13}_{-0.27 }$ &
$-2.74 ^{+ 0.06}_{-0.28 }$ &
$ 0.128 ^{-0.128}_{ +0.107 }$ &
$0.0158 ^{+0.0279 }_{-0.0158}$ \\
\hline
$b_{gal} \ge 20^{\circ}$ \T \B & $ 0.78 ^{+ 0.22}_{-0.07 }$ &
$-2.88 ^{+ 0.21}_{-0.16 }$ &
$ 0.128 ^{-0.128}_{ +0.042 }$ &
$0.0077 ^{+0.0242 }_{-0.0077}$ \\
\hline
$b_{gal} \le -20^{\circ}$ \T \B & $ 0.41 ^{+ 0.10}_{-0.09 }$ &
$-2.83 ^{+ 0.13}_{-0.20 }$ &
$ 0.030 ^{-0.030}_{ +0.067 }$ &
$0.0146 ^{+0.0047 }_{-0.0146}$ \\
\hline
$b_{gal} \ge 30^{\circ}$ \T \B & $ 0.43 ^{+ 0.17}_{-0.05 }$ &
$-3.02 ^{+ 0.39}_{-0.02 }$ &
$ 0.128 ^{-0.128}_{ +0.004 }$ &
$0.0008 ^{+0.0251 }_{-0.0008}$ \\
\hline
$b_{gal} \le -30^{\circ}$ \T \B & $ 0.22 ^{+ 0.02}_{-0.06 }$ &
$-2.77 ^{+ 0.12}_{-0.28 }$ &
$ 0.030 ^{-0.030}_{ +0.034 }$ &
$0.0065 ^{+0.0051 }_{-0.0065}$ \\
\hline
$b_{gal} \ge 40^{\circ}$ \T \B & $ 0.40 ^{+ 0.01}_{-0.13 }$ &
$-2.66 ^{+ 0.25}_{-0.34 }$ &
$ 0.030 ^{-0.030}_{ +0.065 }$ &
$0.0113 ^{+0.0071 }_{-0.0113}$ \\
\hline
$b_{gal} \le -40^{\circ}$ \T \B & $ 0.11 ^{+ 0.05}_{-0.02 }$ &
$-2.77 ^{+ 0.25}_{-0.34 }$ &
$ 0.030 ^{-0.030}_{ +0.022 }$ &
$0.0049 ^{+0.0050 }_{-0.0049}$ \\
\hline
$b_{gal} \ge 50^{\circ}$ \T \B & $ 0.21 ^{+ 0.09}_{-0.02 }$ &
$-2.86 ^{+ 0.46}_{-0.26 }$ &
$ 0.056 ^{-0.056}_{ +0.027 }$ &
$0.0063 ^{+0.0086 }_{-0.0063}$ \\
\hline
$b_{gal} \le -50^{\circ}$ \T \B & $ 0.05 ^{+ 0.04}_{-0.00 }$ &
$-3.00 ^{+ 0.63}_{-0.07 }$ &
$ 0.030 ^{-0.030}_{ +0.007 }$ &
$ 0.0026 ^{+0.0066 }_{-0.0026}$ \\
\hline
$b_{gal} \ge 60^{\circ}$ \T \B & $ 0.23 ^{+ 0.06}_{-0.05 }$ &
$-2.81 ^{+ 0.40}_{-0.33 }$ &
$ 0.056 ^{-0.056}_{ +0.030 }$ &
$0.0062 ^{+0.0103 }_{-0.0062}$ \\
\hline
$b_{gal} \le -60^{\circ}$ \T \B & $ 0.03 ^{+ 0.03}_{-0.00 }$ &
$-3.02 ^{+ 0.81}_{-0.15 }$ &
$ 0.030 ^{-0.030}_{ +0.006 }$ &
$0.0021 ^{+0.0067 }_{-0.0021}$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{ Best fit parameters obtained by modelling
the angular power spectra of the northern and southern
cuts at 1420 MHz according to Eq.~\ref{aps_best_model}.
}
\label{BFpar_tab1420}
\end{table}
In order to derive the range of variability of the synchrotron
emission amplitude and slope, two extreme
cases have been considered (see Appendix C of La~Porta~2007
for
details).
The flattest synchrotron APS compatible with the data
is found by neglecting
the source term in Eq.~\ref{aps_best_model}
and the steepest one is recovered by assuming a
null noise contribution
and maximizing the source term.
We performed a least-squares fit to the APS by exploring the
parameter space on adaptive grids\footnote{For this
purpose we implemented a specific algorithm and
tested its reliability against the MINUIT package of the CERN
libraries \citep{james75_minuits}.}.
The uncertainties on the best fit parameters are
derived as the difference with those obtained in
the two extreme cases.
Figures~\ref{bfit1420} and \ref{bfit408}
show the angular power spectra and the best fit curves
corresponding to the best model at the two frequencies,
while Tables~\ref{BFpar_tab1420} and \ref{BFpar_tab408}
list the obtained parameters and their uncertainties.
For the synchrotron term, we quote the value of the normalized
amplitude $k_{100}=k \times 100^\alpha$, which
corresponds to a physical quantity.
In fact, $k_{100}=C_{\ell=100}$, thus implying that
the normalized amplitude gives the mean temperature
fluctuations at angular scales of $\sim 2^{\circ}$.
Figure~\ref{bf_vs_lat_ti408_ti1420} shows the best
fit parameters of the synchrotron APS as a function of the
Galactic latitude. The normalized amplitude, $k_{100}$,
is maximum when the considered cut includes the lower
latitudes, where the Galactic radio emission peaks.
In particular, at 408 MHz (resp. at 1420 MHz)
$k_{100} \in [488,6527]\;{\rm mK}^2$
(resp. $[0.21,4.57]\;{\rm mK}^2$)
for the northern cuts and
$k_{100} \in [138,6734]\;{\rm mK}^2$
(resp. $[0.03,4.00]\;{\rm mK}^2$)
for the southern cuts.
The mean error on $k_{100}$ is of
$\sim 18\%$ for the cuts at the lower
latitude ($|b_{cut}| \in [5^{\circ},30^{\circ}]$)
and of $\sim 30-40\%$ for the others. The
uncertainty is larger for the cuts at higher
latitude due to the reduced multipole range suitable
for the fitting procedure.
The slope of the synchrotron APS for the northern cuts
varies in the interval $\sim [-3.0,-2.8]$ at 408 MHz and
$\sim [-3.0,-2.7]$ at 1420 MHz, while in
the southern cuts $\alpha \sim [-2.9,-2.6]$
and $\alpha \sim [-3.0,-2.7]$, respectively. The errors
on $\alpha$ are on average $\sim (5-7)\%$
for $|b_{cut}| < 30^{\circ}-40^{\circ}$ and
typically increase to $\sim 18\%$ for cuts at higher
latitude. \\
\begin{figure}[!t]
\hskip +0.5cm
\includegraphics[width=5.5cm,height=8.5cm,angle=90]{8435f12.ps}\\
\vskip +0.2cm
\caption{ As in Fig.~\ref{bfit1420}, but at 408 MHz. }
\label{bfit408}
\end{figure}
At both frequencies there is no evidence of a
systematic dependence of the synchrotron
emission APS slope on latitude
(see Fig.~\ref{bf_vs_lat_ti408_ti1420}). \\
\begin{figure}[!hb]
\begin{tabular}{cc}
\includegraphics[width=4.5cm,height=4cm]{8435f13a.ps}&
\includegraphics[width=4.5cm,height=4cm]{8435f13b.ps}\\
\end{tabular}
\vskip +0.4cm
\caption{ Best-fit parameters obtained for the Galactic radio synchrotron emission APS
against Galactic latitude. }
\label{bf_vs_lat_ti408_ti1420}
\end{figure}
The source term $c^{src}$ increases
for decreasing latitude, as expected given that the
source subtraction is less complete below $\sim 45^{\circ}$.
From extragalactic source counts at 1.4~GHz in total intensity
\citep{prand01_src_counts}, the expected source contribution is
$c^{src} \simeq 0.06 {\rm mK}^2$ for flux densities below $\sim 1$~Jy
($c^{src}\simeq [0.03 - 0.3] {\rm~mK}^2$ including
the quoted $1\sigma$ errors and also considering
the effect of the finite sampling) and
$c^{src} \simeq 0.30$~mK$^2$ ($c^{src}\simeq [0.15 - 1.50]$~mK$^2$ )
for flux densities below $\sim 5$~Jy.
At 408~MHz the available source counts \citep{jamr04_src_counts}
lead to $c^{src} \simeq 200$~mK$^2$ for
flux densities below $\sim 6.4$~Jy
($c^{src}\simeq [150 - 360]$~mK$^2$) and
$c^{src} \simeq 660$~mK$^2$ for flux densities below $\sim 64$~Jy
($c^{src}\simeq [260 - 1150]\,{\rm mK}^2$).
The values resulting from the fits are consistent with the
above estimates.
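For reference, the Poisson term can be evaluated from differential source counts $n(S)$ via the standard formula $c^{src} = (\partial B/\partial T)^{-2}\int_0^{S_{cut}} S^2\, n(S)\,{\rm d}S$; the sketch below assumes a Rayleigh-Jeans conversion and leaves the counts function as user input:
\begin{verbatim}
import numpy as np

KB = 1.380649e-23        # J/K
C_LIGHT = 2.99792458e8   # m/s

def c_src_poisson(nu_ghz, s_cut_jy, n_of_s, s_min_jy=1e-4, npts=2000):
    # n_of_s: differential counts dN/dS in Jy^-1 sr^-1
    nu = nu_ghz * 1e9
    jy_to_k = 1e-26 * C_LIGHT**2 / (2.0 * KB * nu**2)  # RJ conversion
    s = np.logspace(np.log10(s_min_jy), np.log10(s_cut_jy), npts)
    return jy_to_k**2 * np.trapz(s**2 * n_of_s(s), s)  # K^2 (sr suppressed)
\end{verbatim}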
\begin{table}[!h]
\newcommand\T{\rule{0pt}{2.6ex}}
\newcommand\B{\rule[-1.2ex]{0pt}{0pt}}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Coverage \T \B & \multicolumn{4}{|c|}{Best fit parameters at 408 MHz}\\
\hline
\T \B & $k_{100}$ (mK$^2$) & $\alpha$ & $c^{src}$ (mK$^2$) & $c^{noise}$ (mK$^2$)\\
\hline
$b_{gal} \ge 5^{\circ}$ \T \B & $ 6527. ^{ +1306.}_{-1349. }$ &
$-2.78 ^{ +0.14}_{-0.22 }$ &
$1797. ^{-1797.}_{+1223. }$ &
$187.09 ^{+377.84 }_{-187.09}$ \\
\hline
$b_{gal} \le -5^{\circ}$ \T \B & $ 6734. ^{+ 743.}_{ -789. }$ &
$-2.68 ^{ +0.06}_{-0.11 }$ &
$ 882. ^{-882.}_{+1244. }$ &
$291.87 ^{+143.62 }_{-291.87}$ \\
\hline
$b_{gal} \ge 10^{\circ}$ \T \B & $ 2700. ^{ + 467.}_{ -444. }$ &
$-2.80 ^{ +0.12}_{-0.21 }$ &
$ 530. ^{-530.}_{ +520. }$ &
$ 95.16 ^{+106.22 }_{-95.16}$ \\
\hline
$b_{gal} \le -10^{\circ}$ \T \B & $ 1919. ^{+ 111.}_{ -227. }$ &
$-2.70 ^{+ 0.04}_{-0.10 }$ &
$ 211. ^{-211.}_{ +438. }$ &
$103.99 ^{+ 53.09 }_{-103.99}$ \\
\hline
$b_{gal} \ge 20^{\circ}$ \T \B & $ 1147. ^{ + 215.}_{ -200. }$ &
$-2.83 ^{ +0.16}_{-0.19 }$ &
$ 236. ^{-236.}_{ +231. }$ &
$ 48.98 ^{ +39.85 }_{-48.98}$ \\
\hline
$b_{gal} \le -20^{\circ}$ \T \B & $ 493. ^{ + 103.}_{ -86. }$ &
$-2.87 ^{ +0.13}_{-0.19 }$ &
$ 158. ^{-158.}_{ +129. }$ &
$ 31.71 ^{ +37.20 }_{-31.71}$ \\
\hline
$b_{gal} \ge 30^{\circ}$ \T \B & $ 700. ^{ + 135.}_{ -104. }$ &
$-2.91 ^{+ 0.17}_{-0.16 }$ &
$ 181. ^{-181.}_{ +148. }$ &
$ 32.95 ^{ +38.54 }_{-32.95}$ \\
\hline
$b_{gal} \le -30^{\circ}$ \T \B & $ 305. ^{+ 112.}_{ -52. }$ &
$-2.88 ^{ +0.26}_{-0.14 }$ &
$ 155. ^{-155.}_{ +87. }$ &
$ 21.81 ^{ +32.50 }_{-21.81}$ \\
\hline
$b_{gal} \ge 40^{\circ}$ \T \B & $ 572. ^{ +205.}_{ -111. }$ &
$-2.86 ^{ +0.34}_{-0.18 }$ &
$ 230. ^{-230.}_{ +106. }$ &
$ 19.56 ^{ +41.28 }_{-19.56}$ \\
\hline
$b_{gal} \le -40^{\circ}$ \T \B & $ 234. ^{ + 97.}_{ -66. }$ &
$-2.77 ^{ +0.35}_{-0.29 }$ &
$ 105. ^{-105.}_{ +123. }$ &
$ 32.47 ^{ +15.08 }_{-32.47}$ \\
\hline
$b_{gal} \ge 50^{\circ}$ \T \B & $ 488. ^{ + 246.}_{ -73. }$ &
$-3.01 ^{ +0.46}_{-0.11 }$ &
$ 208. ^{-208.}_{ +117. }$ &
$ 27.07 ^{ +23.67 }_{-27.07}$ \\
\hline
$b_{gal} \le -50^{\circ}$ \T \B & $ 146. ^{ + 122.}_{ -34. }$ &
$-2.82 ^{ +0.66}_{-0.22 }$ &
$ 137. ^{-137.}_{ +77. }$ &
$ 19.95 ^{ +22.47 }_{-19.95}$ \\
\hline
$b_{gal} \ge 60^{\circ}$ \T \B & $ 509. ^{ +226.}_{ -80. }$ &
$-3.04 ^{ +0.39}_{-0.26 }$ &
$ 246. ^{-246.}_{ +91. }$ &
$ 19.11 ^{ +45.89 }_{-19.11}$ \\
\hline
$b_{gal} \le -60^{\circ}$ \T \B & $ 138. ^{ +71.}_{ -76. }$ &
$-2.59 ^{ +0.53}_{-0.80 }$ &
$ 77. ^{-77.}_{ +131. }$ &
$ 32.67 ^{ +11.66 }_{-32.67}$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{As in Table~\ref{BFpar_tab1420} but at 408 MHz.
}
\label{BFpar_tab408}
\end{table}
\section{Extrapolation to the microwave range}
\label{res_extrapolation}
In this section we extrapolate the results obtained
from the analysis of the 408 MHz and 1420 MHz surveys
over large areas to the microwave range, in order to
make a direct comparison with the WMAP 3-yr results.
The main objective of the WMAP mission was the
realization of a CMB anisotropy total intensity map
and the estimation of the corresponding APS.
By-products of the mission are maps of the foregrounds
contaminating the cosmological signal at
the five frequencies observed by
the satellite ($\nu \sim 23, 33, 41, 61, 94$~GHz), i.e.
the microwave emission from the Milky Way, characterized
by diffuse (dust, free-free, synchrotron)
and discrete (e.g, {\sc HII} regions, SNRs) components,
and from extragalactic sources.
The maps were worked out \citep{hins06_wmap_3yr_temp} by using
templates of the various astrophysical components,
constructed by exploiting ancillary data.
Then a pixel-by-pixel (MEM-based) fit of all the maps
(see Bennett et al.~2003 for a description of the method),
i.e. templates and {\sc WMAP} frequency maps after subtraction
of the CMB anisotropy field,
was performed, imposing priors on the
spectral behaviour of the foregrounds.\\
\subsection{Comparison with the WMAP K-band synchrotron component}
\label{extrap_kmem_comp}
The 23 GHz (K-band) synchrotron map by \citet{hins06_wmap_3yr_temp}
is considered in this section for a comparison
with the 408 MHz and 1420 MHz data.
It is evident that such a map provides a picture of the global
non-thermal emission observed by the satellite at 23 GHz, rather
than the Galactic synchrotron component only. Several
extragalactic sources are clearly recognizable.
Furthermore, as pointed out by \citet{hins06_wmap_3yr_temp},
the diffuse non-thermal emission is concentrated at low
latitudes and appears remarkably well correlated with the
dust component~\footnote{Such a tight synchrotron-dust
correlation holds at all WMAP frequencies.}.
This might suggest the presence at 23 GHz
of anomalous dust emission, due for example to spinning
dust grains \citep{draine98_spinndust}. In fact, most dusty active star-forming
regions are localized along the Galactic plane.
This hypothesis seems further
supported by the joint analysis of the {\sc WMAP} maps
(1-yr release) and the Green Bank Galactic Plane
Survey by Finkbeiner~(2004).
De Oliveira-Costa et al.~(2004) estimated the fluctuations
expected at 10 GHz and 15 GHz for the foreground component traced
by the {\tt K-MEM} synchrotron map by Bennett et al.~(2003) (1-yr
results), by cross-correlating the latter with the
Tenerife 10 GHz and 15 GHz CMB maps and all the {\sc WMAP} CMB maps.
They found values one order of magnitude below what is expected
for the synchrotron emission and
concluded that the {\tt K-MEM} synchrotron component
by Bennett et al.~(2003) is dominated by anomalous dust
emission even at $|b_{gal}|\gtrsim 20^{\circ}$.
\citet{hilde07_cosmosomas} also found evidence for
anomalous microwave emission at high Galactic latitudes
by cross-correlating the COSMOSOMAS 11 GHz observations
with the {\sc WMAP} K- and Ka-band map.
The same conclusion was also reached by \citet{dav06_foresepa},
who cross-correlated the {\sc WMAP} 1-yr map with foreground templates
in a dozen small patches located at medium and high latitude.
However, the origin of the spatial correlation
found at {\sc WMAP} frequencies between synchrotron
and dust emission is still a matter of debate.
Bennett et al.~(2003) claim that
the observed correlation is
the result of a spatially varying synchrotron
spectral index, which significantly alters the
morphology of the synchrotron emission with frequency.
\citet{hins06_wmap_3yr_temp} affirm that the issue is
left open also by the {\sc WMAP} 3-yr results and that
high quality and large coverage surveys at
$\nu \sim 5 - 15\;{\rm GHz}$ are needed
for a decisive test of both the above discussed
explanations of the synchrotron-dust correlation. \\
The separation of the free-free and synchrotron
emission in low latitude regions is also very uncertain.
On one hand, the 408 MHz map used as a template of the non-thermal
emission contains a non-negligible contribution ($\lesssim 10\%$) of
free-free at lower latitudes \citep{dickinson03_fftemplate,paladini05_ff}.
On the other hand, the H-$\alpha$ map used as
template for the free-free emission \citep{fink03_halpha_map}
cannot be properly corrected for dust extinction
for $|b_{gal}| \lesssim 5^{\circ}$, thus potentially leading
to an underestimation of the expected
thermal emission at 23 GHz.
The situation in the vicinity of the Galactic plane
is extremely complicated and remains unclear.\\
The extrapolation of the angular power
spectra derived at 408 MHz and 1420 MHz
to the microwave range is a delicate issue.
The astrophysical components contributing to
the fluctuation field APS of the radio
surveys scale with frequency in a different way.
For the Galactic radio emission between 408 MHz and 1420 MHz
\citet{reich04_betagalemiss} compiled a map of the
spectral index $\beta$ ($T_{b} \propto \nu^{\beta}$) that
reveals a complex structure, due to superposition
of the spectral behaviour of the map components
(synchrotron emission, sources, free-free).
The situation is further complicated by
a possible but poorly constrained
steepening of the diffuse synchrotron emission
power spectrum above 10~GHz, due to the steepening
of the cosmic ray electron energy spectrum
\citep{band90_crspec,band91_crspec,strong07_CRrev}.
Last but not least, the fact that the astrophysical
components of the map scale with frequency in a different way
may imply a change in the overall
APS shape, since the relative weight of the
foreground contribution to the APS could
vary significantly. \\
Given the complexity of the open issues discussed above,
the following analysis merely aims to verify the consistency
between the information about the non-thermal radiation
APS coming from the 408 MHz and 1420 MHz data and
from the 23 GHz {\sc WMAP} data.
We focused on what happens at medium and high
Galactic latitudes, since the problems in interpreting the Galactic
emission are more complicated close to the plane. \\
We first carried out a source subtraction on the 23 GHz
map at intermediate and high latitudes ($|b_{gal}| \gtrsim 40^{\circ}$),
similar to that performed on the radio surveys.
The comparison of the map of subtracted sources with
the mask of sources produced by the {\sc WMAP} team
shows that most ($\sim 80\%$) of the objects have been
identified
and subtracted.
We have derived and compared the APS of the
original, source-subtracted and source map at 23 GHz
for some northern and southern cuts. Namely, we considered
$|b_{cut}|=40^{\circ},50^{\circ},60^{\circ}$ and
verified that $C_{\ell}^{orig.\;map} \sim c^{src}$ over
the significant multipole range (i.e. $\ell \gtrsim 20$). This means that in
the 23 GHz map the source contribution dominates the high latitude cut
APS at all angular scales. In Figure~\ref{radioaps_vs_wmap}
we show the APS of the 23 GHz map after source
subtraction for the asymmetric cuts with $b_{cut}=\pm40^{\circ}$.
For comparison, we also display the extrapolated radio APS
(from Tables ~\ref{BFpar_tab1420} and \ref{BFpar_tab408}), derived as
$$C_{\ell}(23)=C_{\ell}(\nu_{radio})\cdot (23/\nu_{radio})^{2\beta}$$
where $\nu_{radio}=0.408,1.420$~GHz. The frequency spectral
index $\beta$ is chosen case by case as the value that
brings the radio APS to overlay the {\sc WMAP} one
at the lower multipoles (the exact values are reported
in the figure caption).
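In code, this scaling is a one-liner (a minimal sketch; the input
value is a placeholder):
\begin{verbatim}
def extrapolate_aps(c_ell, nu_from_ghz, nu_to_ghz, beta):
    """C_ell(nu_to) = C_ell(nu_from) * (nu_to/nu_from)**(2*beta),
    following T_b ~ nu**beta in brightness temperature."""
    return c_ell * (nu_to_ghz / nu_from_ghz) ** (2.0 * beta)

# e.g. a synchrotron C_ell of 1 mK^2 at 1420 MHz brought to 23 GHz
c_ell_23 = extrapolate_aps(1.0, 1.420, 23.0, beta=-2.90)
\end{verbatim}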
\begin{figure}[!t]
\vskip +0.5cm
\centering
\hskip +0.8cm
\includegraphics[width=6cm,height=8cm,angle=90]{8435f14.ps}
\vskip +0.6cm
\caption{ Angular power spectra of the DS-subtracted radio maps
(green $\to$ 408 MHz, fuchsia $\to$ 1420 MHz) extrapolated
to 23 GHz for a direct comparison with those of the WMAP 3-yr
DS-subtracted synchrotron component (blue).
The radio angular power spectra have been smoothed to $1^{\circ}$
to match the angular resolution of the 23 GHz map.
The frequency spectral indices
adopted in the extrapolation are $\beta_{(0.408-23){\rm GHz}}=-2.95$
and $\beta_{(1.4-23){\rm GHz}}=-2.90$ for the northern cut and
$\beta_{(0.408-23){\rm GHz}}=-2.90$ and $\beta_{(1.4-23){\rm GHz}}=-2.83$
for the southern one.}
\label{radioaps_vs_wmap}
\end{figure}
The extrapolated angular power spectra
are very similar to each other, but
steeper than those of {\sc WMAP}. Furthermore,
the frequency spectral indices needed in the extrapolation
suggest the existence of a steeper spectral behaviour
between 408 MHz and 23 GHz than between 1420 MHz and 23 GHz.
For the Galactic diffuse synchrotron emission we would
expect instead
$\beta_{(0.408-23){\rm GHz}} \ge \beta_{(1.4-23){\rm GHz}}$
because of the possible steepening of the cosmic ray
energy spectrum.
In order to obtain a more quantitative estimate of
the mean spectral index between the lower frequencies
and 23 GHz, we used the mean value of the APS
of the source-subtracted maps for $\ell \in [20,40]$.
In fact, in this range the APS is dominated
by the diffuse synchrotron emission, whereas at higher
multipoles the APS could still be influenced by the
contribution of unsubtracted sources.
Figure~\ref{beta_radio_wmap} shows the mean APS as a function
of frequency for the cuts with $b_{cut}=\pm 40^{\circ}$.
The spectral index $\beta$ is obtained as \\
$$<C_{\ell}(\nu_{1})>_{\ell\in[20,40]}=<C_{\ell}(\nu_{2})>_{\ell\in[20,40]} (\nu_{1}/\nu_{2})^{2\beta}\;{\rm .}$$
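A minimal sketch of this inversion (the placeholder mean APS values
are chosen so as to give $\beta \sim -2.9$):
\begin{verbatim}
import numpy as np

def beta_from_mean_aps(mean_c1, mean_c2, nu1_ghz, nu2_ghz):
    """Invert <C_ell(nu1)> = <C_ell(nu2)> * (nu1/nu2)**(2*beta)."""
    return 0.5 * np.log(mean_c1 / mean_c2) / np.log(nu1_ghz / nu2_ghz)

# Placeholder mean APS values over ell in [20, 40], in mK^2
print(beta_from_mean_aps(5.0e2, 4.0e-8, 0.408, 23.0))   # ~ -2.9
\end{verbatim}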
For the northern cut we found $\beta_{(0.408-23){\rm GHz}} \sim -2.92$
and $\beta_{(1.42-23){\rm GHz}} \sim -2.85$.
For the southern cuts, $\beta_{(0.408-23){\rm GHz}} \sim -2.76$
and $\beta_{(1.42-23){\rm GHz}} \sim -2.59$.
These values of $\beta$ have a typical uncertainty
of a few percent according to the
choice of the multipole range adopted to compute
the mean value of the APS. However, it turns out that
$\beta_{(0.408-23){\rm GHz}} < \beta_{(1.42-23){\rm GHz}}$
for all reasonable choices of the multipole range.
One possible explanation for such a result is that
the 23 GHz map includes one or more astrophysical components
besides synchrotron emission. On the one hand, the 23 GHz
map could still include some Galactic free-free emission,
residual from the component separation. Another and likely
more relevant candidate is Galactic spinning dust emission.
A rough estimate of the excess signal in the 23 GHz
map can be obtained by extrapolating
the radio results to 23 GHz, as shown in Fig.~\ref{beta_radio_wmap}.
The observed and extrapolated mean APS differ by a
factor of $\sim 2.5$ in the northern cut
and $\sim 8.4$ in the southern cut. Thus, they differ by
factors of $\sim 1.6$ and $\sim 2.9$ respectively
in terms of signal in the map. We repeated this
calculation by computing the mean APS over other
reasonable multipole intervals and found in this way
that the uncertainty on the quoted values of the excess
signal is about 20\%.
The difference between the observed and extrapolated
values of the mean APS is smaller in the northern
hemisphere, which is likely due to the compensatory
contribution of the NPS.
Finally, we find that the angular power spectra
of the northern and southern sky are almost superimposed
at 23 GHz for $|b_{cut}|\gtrsim 40^{\circ}$.
This can be interpreted as the combination of two effects.
The synchrotron emission of the NPS has a steeper frequency
spectrum than the average one \citep{reich88_betasynchr}
and the contribution of emission processes
other than synchrotron may be significant
at 23~GHz. Therefore, the relative importance of the NPS with
respect to the overall diffuse background diminishes from
408 MHz and 1420 MHz to microwave frequencies.
\begin{figure}[!t]
\hskip +1.5cm
\vskip +0.5cm
\includegraphics[width=6cm,height=10cm,angle=90]{8435f15.ps}
\vskip -0.4cm
\caption{ Mean angular power spectra, $<C_{\ell}(\nu)>_{\ell\in[20,40]}$, of
high latitude cuts ($b_{cut} = \pm 40^{\circ}$) against frequency.
The best fit of the mean APS of the two lower frequencies
is also plotted.
}
\label{beta_radio_wmap}
\end{figure}
\begin{table}[!t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{8}{|c|}{$\beta_{(0.408-1.420)GHz}$}\\
\hline
$|b_{cut}|$ & $5^{\circ}$ & $10^{\circ}$ & $20^{\circ}$ & $30^{\circ}$ & $40^{\circ}$ & $50^{\circ}$ & $60^{\circ}$ \\
\hline
$b_{gal} > |b_{cut}|$ & -2.9 & -2.9 & -2.9 & -3.0 & -3.1 & -3.2 & -3.2 \\
\hline
$b_{gal} < -|b_{cut}|$ & -2.9 & -2.9 & -2.9 & -2.9 & -3.0 & -3.1 & -3.1 \\
\hline
\end{tabular}
\end{center}
\caption{Frequency spectral index of the Galactic synchrotron
fluctuations between 408 MHz and 1420 MHz
derived from the APS.}
\label{beta_408_1420}
\end{table}
\subsection{Synchrotron contamination of the CMB anisotropies}
It is a standard practice to estimate the foreground
contamination of CMB anisotropies by means of the
corresponding APS, which is usually extrapolated
from the frequency range where the foreground component
is best observed. For the Galactic synchrotron emission
a constant spectral index in the interval $\sim[-2.5,-3.0]$
is commonly adopted, as suggested by the spectral
behaviour of the Galactic diffuse emission at radio frequencies.
We instead derive the spectral index directly from the
results of our APS analysis, thus identifying a proper
value for each cut considered.
As in the previous section, we compute the mean value
of the APS at 408 MHz and at 1420 MHz over the lower
multipoles ($\ell \in [20,40]$) and perform a linear
extrapolation based on the two points.
We prefer to work with the APS of the source-subtracted
map rather than with the synchrotron power law
derived by fitting it, since the former provides us with
a value that exclusively depends on observed data.
Our results are reported in Table~\ref{beta_408_1420}.
The obtained spectral indices vary by a few percent
for a different choice of the multipole range
used to calculate the mean APS.
Figure~\ref{cmbTT_vs_radio} shows the CMB APS recovered by WMAP
\citep{hins06_wmap_3yr_temp}, together with the synchrotron APS
derived from the 1420 MHz survey, extrapolated
to 30~GHz and 70~GHz (corresponding to the lowest
and the highest {\sc Planck-LFI} channels). We display
the results obtained for four coverage cases
($b_{cut}=\pm 5^{\circ},\pm 20^{\circ}$).
For comparison, we also extrapolated as above
the APS directly extracted from the map
for the region at $|b| \ge 5^\circ$
and for the all-sky.
The foreground dominates over the CMB at 30~GHz for a wide multipole range
if a mask excluding the Galactic plane is not applied.
The frequency spectrum of free-free emission,
relevant at low latitudes, is flatter than
that of the synchrotron emission. Thus, the
extrapolated APS provides a lower limit to the overall
Galactic foreground, even neglecting dust emission.
\begin{figure*}
\vskip +0.8cm
\hskip +4cm
\includegraphics[width=8cm,height=12cm,angle=90]{8435f16.ps}
\vskip +0.7cm
\caption{ Comparison between the CMB APS retrieved by \citet{hins06_wmap_3yr_temp}
and the synchrotron angular power spectra derived for some cuts of the
1420 MHz map (from Table~\ref{BFpar_tab1420} and Fig.~\ref{bfit1420}),
extrapolated to 30 GHz and 70 GHz with a spectral index of $-2.9$.
Color legend (see online version):
blue (upper straight lines) $\to$ $C_{\ell}^{N}$ (northern cut),
red (lower) $\to$ $C_{\ell}^{S}$ (southern cut).
The left panels also display the APS for the all-sky (black line)
extrapolated as above.
The empty square in the top right panel marks the upper limit
on synchrotron contamination inferred from COBE-DMR observations
\citep{kogut96b_spinndust}.
}
\label{cmbTT_vs_radio}
\end{figure*}
The analysis of the low-frequency maps shows that the
APS amplitude of the northern Galactic hemisphere is
strongly influenced by the presence of the NPS.
Consequently, the results obtained at 1420 MHz
for the northern cuts constitute a conservative
upper limit for the Galactic diffuse synchrotron
emission and can be used together with those
of the southern cuts to bracket the synchrotron
APS at microwave frequencies.
At 30 GHz, a severe contamination is expected from the
synchrotron emission up to $\ell \sim 50$ for
an almost complete sky coverage ($b_{cut}=5^{\circ}$).
A mask excluding the region with $|b_{gal}| \le 20^{\circ}$
reduces the expected synchrotron signal to about half of
the CMB anisotropies for $\ell \gtrsim 10$, whereas
for lower multipoles the two are comparable.
\citet{kogut96b_spinndust} examined the COBE-DMR
results at 31.5~GHz for $|b_{gal}| \gtrsim 20^{\circ}$
and derived an upper limit
of $\sim 11 \mu$K on the temperature fluctuations
due to synchrotron emission on angular scales
of $\sim 7^{\circ}$. This value, marked in
Figure~\ref{cmbTT_vs_radio} by an empty
square, is in good agreement with the extrapolated APS for
the northern cut at $20^{\circ}$.
At 70 GHz, which is the most promising channel
for CMB anisotropy measurements
since
the overall foreground emission reaches a
minimum for $\nu \sim [60,80]$~GHz
(Bennett et al.~2003), the contribution of the
Galactic synchrotron emission to the microwave sky
fluctuation field is small over the multipole
range explored in our analysis ($\ell \gtrsim 10$).
The CMB anisotropies are larger than
the foreseen foreground fluctuations by a factor
$\gtrsim 10$ for a cut at $5^{\circ}$.
For $b_{cut} \sim 20^{\circ}$ the foreground signal
further decreases by a factor $\sim 2$.
The extrapolation of our results to $\ell \lesssim 10$
indicates that the cosmological signal should be a factor
$\gtrsim 2$ larger than the foreground at the
largest angular scales.
The precise recovery of the CMB APS for $\ell \lesssim 10$
therefore remains a delicate issue, since the
foreground emission is a competitive signal.
However, we note that the APS extracted directly from the map
shows a certain flattening toward the lowest multipoles, slightly
improving the situation with respect to the above
power law extrapolation.
\section{The APS dependence on sky position: the local analysis}
We have also carried out the analysis of the APS on patches
of roughly $14\fdg7 \times 14\fdg7$, in
order to describe the local variations of the Galactic
emission at 408 MHz and 1420 MHz.
Significant changes in the amplitude of the synchrotron APS
with the considered portion of the sky are expected,
since the diffuse radio background gradually increases
toward the Galactic plane, where it reaches maximum intensity. \\
These patches correspond to the pixels of an {\tt HEALPix} map
at $n_{side}=4$ and allow the study of the angular power spectra
over the multipole range $\sim[60,200-300]$.
An angular size $\theta_{patch}\sim 14\fdg7$ is a good compromise
between the wish to divide the sky into a large number of
areas and the need to preserve a relatively wide
interval of statistically relevant multipoles
($\ell \sim 180^{\circ}/\theta$).
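For reference, the patch count and size follow directly from the
{\tt HEALPix} pixelization and can be reproduced with the {\tt healpy}
package (a sketch; the patch size is taken as the square root of the
pixel area):
\begin{verbatim}
import healpy as hp
import numpy as np

nside_patch = 4
npix = hp.nside2npix(nside_patch)                      # 192 patches
area_sqdeg = 4.0 * np.pi * (180.0 / np.pi) ** 2 / npix
print(npix, np.sqrt(area_sqdeg))                       # 192, ~14.7 deg
\end{verbatim}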
We have computed the patch angular power spectra
for all the versions of the radio maps (original,
DS-subtracted and DSs only), both by using
the {\tt HEALPix} facility {\tt Anafast} and by integrating
the two point correlation function (see Appendix~D
of La Porta~2007 for details).
Despite the differences found in individual cases,
on the average there is a good agreement between
the angular power spectra derived with the two methods
(in Fig.~\ref{anaf_vs_cf_ex} some examples of bad,
fair and good cases are shown for the map at 1420 MHz
after DS subtraction).\\
The angular power spectra obtained by using {\tt Anafast}
typically present more oscillations\footnote{
{\tt Anafast} computes the APS in the Fourier space
by expressing the temperature fluctuation
field in spherical harmonics.
The APS is obtained working over the whole sky,
even if the map is zero outside
the patch taken into account (the derived APS is
then renormalized to the case of a full sky coverage).
This operation is heuristically equivalent
to computing the Fourier transform of a discontinuous function,
thus implying a Gibbs effect \citep{arfken01_book}.}
and tend to be slightly flatter than the correlation function
angular power spectra at $\ell \gtrsim 200$. However,
the correlation function results are less reliable at higher
multipoles, where the choice of the window function might
have a non-negligible influence.
Consequently, we exploited the {\tt Anafast}
angular power spectra in the following analysis.
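The masked-{\tt Anafast} estimate sketched in the footnote can be
reproduced as follows (illustrative Python on a synthetic map; the
division by the visible sky fraction renormalizes the APS to the
full-sky case):
\begin{verbatim}
import healpy as hp
import numpy as np

nside = 512
npix = hp.nside2npix(nside)
sky = np.random.default_rng(0).normal(size=npix)  # stand-in for the map

# Zero the map outside one nside=4 patch, then renormalize the APS
ipatch = 42
theta, phi = hp.pix2ang(nside, np.arange(npix))
mask = hp.ang2pix(4, theta, phi) == ipatch

cl = hp.anafast(np.where(mask, sky, 0.0)) / mask.mean()
\end{verbatim}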
\begin{figure}
\hskip +0.6cm
\includegraphics[width=4cm,height=9cm,angle=90]{8435f17.ps}
\vskip +0.2cm
\caption{
Comparison between the angular power spectra derived using
{\tt Anafast} (fuchsia) and via integration of the correlation
function. Some examples of good (left panel), fair and
poor (right) agreement are shown for the
map at 1420 MHz after DSs subtraction.
}
\label{anaf_vs_cf_ex}
\end{figure}
\subsection{Results}
The patch angular power spectra for the map after DS subtraction
are fitted exactly as done in the case of the
Galactic cuts (see Sect.~\ref{sect_fit_aps_nos}).
The maps of the obtained parameters are shown in
Figs.~\ref{BFparmap_ti1420} and \ref{BFparmap_ti408}.
The results derived by using the best model are summarized
in Tables~\ref{tab_local_res_04} and \ref{tab_local_res_14}.
The estimated relative error of the synchrotron APS slope
averaged over the ensemble of patches is
$|\Delta\alpha/\alpha| \sim 25\%$ at 408~MHz and 22\% at 1420~MHz.
The mean relative error of the normalized
amplitude, $k_{100}$, is $\sim 25\%$ at 408~MHz
and $\sim 20\%$ at 1420~MHz.
\begin{table}[!t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
parameter, $x$ & $x_{min}$ & $x_{max}$ & $< x >$ & $\sigma_{x}$ & $\%\,(x \in < x > \pm \sigma_{x})$ \\
\hline
$\alpha$ & -3.50 & -0.70 & -2.70 & 0.60 & 54 \\
${\rm log}(k_{100}^{1}/{\rm mK}^2)$ & 2.90 & 5.90 & 3.80 & 0.81 & 70 \\
${\rm log}(k_{100}^{2}/{\rm mK}^2)$ & 1.00 & 2.90 & 2.40 & 0.36 & 69 \\
\hline
\end{tabular}
\end{center}
\caption{Characteristics of the synchrotron APS best fit parameters
derived in the local analysis of the 408 MHz map. $k_{100}^{1}$ refers to
the patches covering about
the brightest half of the sky, which includes the Galactic plane and
the NPS. $k_{100}^{2}$ corresponds to the other half with weak
high latitude emission. }
\label{tab_local_res_04}
\end{table}
\begin{table}[!t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
parameter, $x$ & $x_{min}$ & $x_{max}$ & $< x >$ & $\sigma_{x}$ & $\%\,(x \in < x > \pm \sigma_{x})$ \\
\hline
$\alpha$ & -4.00 & -1.00 & -2.80 & 0.60 & 70 \\
${\rm log}(k_{100}^{1}/{\rm mK}^2)$ & -0.30 & 3.00 & 0.66 & 0.91 & 78 \\
${\rm log}(k_{100}^{2}/{\rm mK}^2)$ & -1.90 & -0.31 & -0.85 & 0.38 & 67 \\
\hline
\end{tabular}
\end{center}
\caption{As in Table~\ref{tab_local_res_04}, but at 1420 MHz.}
\label{tab_local_res_14}
\end{table}
\begin{figure*}
\hskip -0.5cm
\begin{tabular}{ccc}
Fit no SRC & Best Model & Fit no Noise \\
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18a.eps}&
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18b.eps}&
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18c.eps}\\
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18d.eps}&
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18e.eps}&
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18f.eps}\\&
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18g.eps}&
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18h.eps}\\
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18i.eps} &
\includegraphics[width=4.0cm,height=6cm,angle=90]{8435f18j.eps} & \\
\end{tabular}
\caption{
Maps of the best fit parameters obtained by fitting the
angular power spectra of the local analysis patches at 1420 MHz.
The maps aligned along each row refer to the same parameter.
From the top, ${\rm log}(k_{100}^{synch}/{\rm mK}^2)$, $\alpha^{synch}$ ,
${\rm log} (c^{src}/{\rm mK}^2)$ and ${\rm log}(c^{noise}/{\rm mK}^2)$.
The first and third columns correspond to the extreme cases,
assuming respectively that the source contribution
or the noise contamination is negligible.
}
\label{BFparmap_ti1420}
\end{figure*}
\begin{figure*}
\hskip -0.5cm
\begin{tabular}{ccc}
Fit no SRC & Best Model & Fit no Noise \\
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19a.eps}&
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19b.eps}&
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19c.eps}\\
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19d.eps}&
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19e.eps}&
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19f.eps}\\
& \includegraphics[width=4cm,height=6cm,angle=90]{8435f19g.eps}&
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19h.eps}\\
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19i.eps} &
\includegraphics[width=4cm,height=6cm,angle=90]{8435f19j.eps} & \\
\end{tabular}
\caption{ As in Fig.~\ref{BFparmap_ti1420}, but at 408 MHz.}
\label{BFparmap_ti408}
\end{figure*}
The most striking result is that at each frequency
the maps of the corresponding parameters, derived
by adopting the three different fitting models,
show a very similar morphology.
Such a resemblance proves that the parameter patterns
revealed by the local analysis are reliable,
despite the uncertainties in the obtained
parameter values.\\
The slope of the synchrotron APS does not show a
systematic dependence on Galactic latitude, in agreement
with the findings of Sect.~3.3.
The normalized amplitude of the synchrotron APS, $k_{100}$, peaks
close to the Galactic plane, which reflects the
observed morphology.
A good correlation is found between the
normalized amplitude of the synchrotron APS
at 408 MHz and 1420 MHz,
which is defined by
${\rm log}(k_{100}^{408}/{\rm mK}^2) \sim {\rm A} + {\rm B}\,{\rm log}(k_{100}^{1420}/{\rm mK}^2)\,$,
where ${\rm A}=3.15\pm0.02$ and ${\rm B}=0.88\pm0.02$
(see Fig.~\ref{par_corr_2freq}).
We note that $10^{\rm A} \sim (408/1420)^{2\beta}$
with $\beta \sim -2.9$, in agreement with the results
of Table~\ref{beta_408_1420}.
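Explicitly, solving $10^{\rm A}=(408/1420)^{2\beta}$ for $\beta$
(a one-line consistency check):
\begin{verbatim}
import numpy as np

A = 3.15                                     # intercept of the k100 relation
beta = A / (2.0 * np.log10(408.0 / 1420.0))  # ~ -2.9
print(beta)
\end{verbatim}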
\begin{figure}[!t]
\includegraphics[width=6cm,height=9cm,angle=90]{8435f20.ps}
\vskip -0.2cm
\caption{ Correlation between the best fit values obtained
for the synchrotron emission APS
normalized amplitude ($k_{100}$)
at 408 MHz and 1420 MHz.
}
\label{par_corr_2freq}
\end{figure}
The contribution of sources reaches a maximum in
the vicinity of the Galactic plane,
mainly because a less complete source subtraction was possible for
$|b_{gal}| \lesssim 45^{\circ}$ than at higher
latitudes (see Sect.~\ref{DS}).
The obtained source terms are in fair agreement with
the values estimated by using source counts.
This comparison is particularly significant
at 1420 MHz, where such estimates are more reliable.
We summarize the results obtained in this case in
Table~\ref{tab_csrc_14}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
parameter, $x$ & $x_{min}$ & $x_{max}$ & $< x >$ & $\sigma_{x}$ & $\%\,(x \in < x > \pm \sigma_{x})$ \\
\hline
${\rm log}\,(c^{src}_{1}/{\rm mK}^2)$ & -2.30 & -0.75 & -1.50 & 0.42 & 66 \\
\hline
${\rm log}\,(c^{src}_{2}/{\rm mK}^2)$ & -2.00 & 0.18 & -0.86 & 0.53 & 72 \\
\hline
\end{tabular}
\end{center}
\caption{Characteristics of the source term derived in the local
analysis at 1420 MHz for $|b_{gal}| \gtrsim 45^{\circ}$ ($c^{src}_{1}$)
and $|b_{gal}| \lesssim 45^{\circ}$ ($c^{src}_{2}$).}
\label{tab_csrc_14}
\end{table}
\section{Summary and conclusions}
The aim of our analysis is to improve our
understanding of the Galactic synchrotron
emission as a foreground for CMB dedicated experiments.
For this purpose, we carried out an unprecedented detailed
study of the Galactic radio emission, in terms of its
angular power spectrum (APS), using total intensity
all-sky maps at 408 MHz and 1420 MHz. \\
An accurate modeling of the synchrotron APS is missing
in the literature so far, but is urgently required for a more
precise and complete exploitation of the information
awaited from the {\sc Planck} satellite.
It constitutes a precious input for component separation
activities, both for the realization of spatial templates
of the foreground and for the definition of priors on
its spatial and frequency dependence.
\begin{itemize}
\item[{\bf 1.}] Being interested in the diffuse component of the
synchrotron emission, the brighter discrete sources (DS)
have been eliminated from the radio maps by
2-dimensional Gaussian fitting.
This approach is very flexible and also permits
the removal of extended structures.
\item[{\bf 2.}] The APS was computed
for both large areas and
small patches and several consistency tests were
used to check the reliability of the recovered
APS in the case of limited sky coverage (see Appendix~B of
La Porta 2007). \\
The study of the APS for various cuts, i.e. of
regions with Galactic latitudes above or below a
certain value $b_{cut}$ ($|b_{cut}| \in [5^{\circ},60^{\circ}]$),
allowed us to explore a possible dependence of
the mean properties of the Galactic synchrotron
emission on latitude, preserving at the same time
the largest possible coverage, which is important when
estimating the CMB APS because of the sampling variance.
Such cuts provided information for $\ell \in [\ell_{min},\ell_{max}]$,
where $10 \lesssim \ell_{min} \lesssim 30$ for increasing $b_{cut}$
and $\ell_{max} \sim 200 -300$ at 408 MHz and 1420 MHz, respectively. \\
The patches correspond to the pixels of an {\tt HEALPix} map at
$n_{side}=4$, which have an angular dimension of $\sim 15^{\circ}$
and permit us to investigate the local variations of the synchrotron APS
for multipoles larger than $\sim 60$. \\
The derived angular power spectra were modelled in both cases
according to Eq.~\ref{aps_best_model}
and a specific method was set up to find
the best least square fit on adaptive grids of the parameter
space and to evaluate the uncertainties on the retrieved
parameters (see Appendix~C of La Porta~2007 for details).
\item[{\bf 3.}] An indirect cross-check of the fit result reliability
was provided in the case of the Galactic cuts
by the estimated source terms, which are consistent with the
expectations from extragalactic source counts
at both frequencies.
Nowadays, source counts at 1.4 GHz are well established down to
very low flux limits.
It is remarkable that the source angular power spectra
obtained independently from source counts and from the
fit of the survey APS are in good agreement.
\item[{\bf 4.}] The slope of the synchrotron APS,
$C_{\ell} \sim k \ell^{\alpha}$,
changes with $b_{cut}$ without showing a well-defined regular trend,
although it is found to be typically
steeper for $b_{gal} \gtrsim 20^{\circ}$.
For the cuts, $\alpha$ varies in the range
$\sim [-3.0,-2.6]$ at both frequencies.
However, the analysis of the small patches gives evidence
that locally the synchrotron APS can be much flatter
for $\ell \gtrsim 60$, reaching in some cases
values of $\alpha \sim -0.8$.
The normalized amplitude,
$k_{100}=k \times 100^{\alpha}$,
gradually increases toward the Galactic plane,
following, as expected, the background radio
emission gradient.
A good correlation exists between the
results obtained for $k_{100}$
at 408 MHz and 1420 MHz, for both cuts and patches.
This is expected, given that
the spectral properties of the electron density distribution
responsible for the Galactic diffuse non-thermal emission
should be the same in that frequency range
and further supports the reliability of
the obtained estimates of the synchrotron APS.
\item[{\bf 5.}] The maps of $k_{100}$ and $\alpha$ resulting
from the local analysis represent the starting point for the
simulation of small-scale fluctuation fields to be added to
the DS-subtracted maps to build phenomenological
templates of the Galactic synchrotron emission.
At present, an empirical approach is the most reliable
way to proceed in the realization of realistic templates
of the foreground, given the poor knowledge
of the Galactic magnetic field and of the cosmic ray
electron density distribution needed for a 3-dimensional
physical modelling.
\end{itemize}
The issues (discussed in Sect.~\ref{extrap_kmem_comp})
related to the K-band synchrotron component retrieved
by \citet{hins06_wmap_3yr_temp} are a clear example
of the difficulties encountered in the foreground separation
due to the lack of adequate priors and initial guesses.
We performed a source subtraction on that map and produced a
map of the Galactic diffuse non-thermal emission
at 23 GHz to be compared with those at lower frequencies.
\begin{itemize}
\item[{\bf 6.}] The extrapolation to 23 GHz of the APS
obtained at 408 MHz and 1420 MHz for higher latitude regions
($|b_{gal}|\gtrsim 40^{\circ}$) reveals that the
mean spectral index
($C_{\ell}(\nu) \propto \nu^{2\beta}$)
$\beta_{(0.408-23){\rm GHz}} \lesssim \beta_{(1.4-23){\rm GHz}}$, which
is the opposite of what is expected for synchrotron emission.
We estimate the excess of the
signal in the 23 GHz map to be $\gtrsim 50\%$ by using
the mean value of the APS at lower multipoles.
This result can be interpreted in terms of additional
contributions to the 23 GHz Galactic non-thermal emission,
which could be mainly due to anomalous dust.
\end{itemize}
A direct application of the presented analysis is to
determine the level of contamination of the CMB anisotropies
due to the Galactic diffuse synchrotron emission
at different angular scales.
The conclusions reported below refer to the cuts,
which are more relevant for CMB measurements
because of the large coverage.
However, similar considerations could be
repeated using the results of the local analysis,
thus allowing the identification of the clearest
sky areas, which is essential for the success of
ground-based CMB experiments.
\begin{itemize}
\item[{\bf 7.}] An important - although not unexpected - outcome
of the cut analysis is that
the amplitude of the APS for the northern hemisphere cuts
above $\sim 20^{\circ}$ is raised
by the presence of the NPS.
Consequently, the results obtained separately for the
two Galactic hemispheres were used to bracket the APS
of the synchrotron emission.
\item[{\bf 8.}] We used the APS results at 408 MHz and 1420
MHz to determine the frequency spectral index to be
adopted in the extrapolation to the microwave range
for each coverage case. In particular, we found that
$\beta_{(0.408-1.4){\rm GHz}} \in [-3.2,-2.9]$ with an
uncertainty of a few percent.
\item[{\bf 9.}] The extrapolation to 30~GHz of the
synchrotron APS obtained at 1420 MHz
led to a signal in good agreement
with the upper limit fixed by COBE-DMR, thus
supporting the reliability of our results.
At this frequency the synchrotron emission
constitutes a severe contamination of
CMB anisotropies at the largest angular scales
($\ell \lesssim 40$). Nevertheless, a cut at $\sim 20^{\circ}$
would reduce the synchrotron fluctuations to about
half the cosmological ones. The same holds
at $\nu \sim 70$~GHz for a cut at $\sim 5^{\circ}$ and
the situation further improves when excluding a larger
portion of the sky around the Galactic plane.
This implies that even though the current treatment of the
foregrounds does not permit an accurate removal of the
synchrotron emission, the latter does not prevent
the recovery of the bulk of the cosmological information
encoded in the CMB temperature APS.\\
\end{itemize}
A deeper understanding of the foreground
is indispensable to settle other important issues
in the perspective of the forthcoming {\sc Planck}
mission. {\sc Planck} should achieve a sensitivity comparable
to the cosmic variance and its performance should
be limited mainly by foregrounds, thanks to the
extremely accurate control of all instrumental systematic
effects \citep{buri04_PafterW,menn04_lfi}. \\
A first issue is the estimate of the CMB
temperature-polarization APS, which carries information
about the reionization history and the tensor-to-scalar
ratio (see, e.g. Kogut~2003, Kogut et al.~2003, and
references therein). Possible foreground residual
contamination in the total intensity CMB anisotropy
map would affect fine analysis based on
the estimate of the cross-correlation APS,
also because the polarized component of the
cosmological signal is orders of magnitude lower. \\
Another issue is the evaluation of the Gaussianity
of the primordial fluctuations, which in the standard
inflationary paradigm\footnote{
Simple (standard) inflationary scenarios
(see the reviews by Lyth \& Riotto~1999 and Linde~2005)
predict the existence of Gaussian density fluctuations
and of gravitational waves
with a nearly scale-invariant spectrum. }
generate the
structures observed in the Universe today.
Gaussianity tests are a powerful tool, complementary to
the tests exploiting the APS, which allow us
to probe the ``concordance''
model \citep{sperg06_wmap_3yr_param}
and also to distinguish among inflationary
models \citep{bart04_nonG}.
The level of non-Gaussianity predicted
by the ``concordance'' model
cannot be detected by {\sc WMAP},
but in principle it should be observable with {\sc Planck}.
Galactic foregrounds are non-Gaussian and anisotropic
and even low-level contamination in the maps
can produce detectable non-Gaussianities (see, e.g.
Naselsky et al.~2005), although they have minimal
effects on the estimated APS \citep{hin03_wmap_1yr_aps}.
Consequently, the foreground removal has to be
extremely accurate, so as not to limit {\sc Planck}
in verifying this crucial\footnote{Standard cosmologies
predict a minimum level of non-Gaussianity for
the primordial perturbations. In alternative
cosmologies \citep{lyth03_curvaton,ark04_ghostinfl,alish04_altercosmol}
such a lower limit is even higher. Consequently,
detection or non-detection of non-Gaussianities
sheds light on the physics of the early Universe.}
prediction.
\begin{acknowledgements}
We are grateful to R. Wielebinski for a careful reading of the
original manuscript. We wish to thank G. De Zotti, L. Toffolatti
and R. Rebolo for helpful discussions.
L.L.P. warmly thanks R. Wielebinski and A. Zensus for
granting a post-doc fellowship.
C.B. acknowledges the support of
the ASI contract ``Planck LFI Activity of Phase E2''.
We are grateful to M. Genghini for technical support.
Some of the results in this paper have been derived
using the HEALPix \citep{gorski05_healpix} package.
The availability of the WMAP 3-yr maps is acknowledged.
We warmly thank the anonymous referee for useful comments.
\end{acknowledgements}
\section{Introduction}
\blfootnote{
\hspace{-0.65cm}
This work is licenced under a Creative Commons
Attribution 4.0 International License.
License details:
\url{http://creativecommons.org/licenses/by/4.0/}
}
Classifying relations between two entities in a given context is an important task in natural language processing (NLP). Take the following sentence as an example: ``Jewelry and other smaller [valuables]$_{e_1}$ were locked in a [safe]$_{e_2}$ or a closet with a deadbolt.'' The marked entities \textit{valuables} and \textit{safe} are of relation {\tt Content-Container}$({e_1}, {e_2})$. Relation classification plays a key role in various NLP applications, and has become a hot research topic in recent years.
Nowadays, neural network-based approaches have made significant improvement in relation classification, compared with traditional methods based on either human-designed features \cite{MaxEntRE,2010SVM} or kernels \cite{SpdKernel,EmbedTreeK}. For example, \newcite{CNN} and \newcite{CNN-NG} utilize convolutional neural networks (CNNs) for relation classification. \newcite{SDP-LSTM} apply long short term memory (LSTM)-based recurrent neural networks (RNNs) along the shortest dependency path. \newcite{EnsembleNN} build ensembles of gated recurrent unit (GRU)-based RNNs and CNNs.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{relation.pdf}
\caption{(a) The dependency parse tree corresponding to the sentence ``Jewelry and other smaller [valuables]$_{e_1}$ were locked in a [safe]$_{e_2}$ or a closet with a deadbolt.''
Red arrows indicate the shortest dependency path between $e_1$ and $e_2$. (b) The augmented data sample.}\label{fig:example}
\end{figure}
We have noticed that these neural models are typically designed in shallow architectures, e.g., one layer of CNN or RNN, whereas evidence in the deep learning community suggests that deep architectures are more capable of information integration and abstraction \cite{SpeechDRNN,TrainDRNN,OpinionDRNN}. A natural question is then whether such deep architectures are beneficial to the relation classification task.
In this paper, we propose the deep recurrent neural networks (DRNNs) to classify relations. The deep RNNs can explore the representation space in different levels of abstraction and granularity. By visualizing how RNN units are related to the ultimate classification, we demonstrate that different layers indeed learn different representations: low-level layers enable sufficient information mix, while high-level layers are more capable of precisely locating the information relevant to the target relation between two entities. Following our previous work \cite{SDP-LSTM}, we leverage the shortest dependency path~(SDP, Figure~\ref{fig:example}) as the backbone of our RNNs.
We further observe that the relationship between two entities are directed. Two sub-paths, separated by entities' common ancestor, can be mapped to {\tt subject-predicate} and {\tt object-predicate} components of a relation. By changing the order of these two sub-paths, we obtain a new data sample with the inversed relationship (Figure~\ref{fig:example}b). Such data augmentation technique can provide additional data samples without using external data resources.
We evaluated our proposed method on the SemEval-2010 relation classification task. Even if we do not apply data augmentation, the DRNNs model has achieved a high performance of 84.2\% $F_1$-score with a depth of 3, but the performance decreases when the depth is too large. This is because the deep RNN is a large model, which necessitates more data samples for training. Applying data augmentation can alleviate the problem of data sparseness and sustain a deeper RNN to improve the performance to 86.1\%.
The results show that both our deep networks and the data augmentation strategy have contributed to the relation classification task, and that they are coupled well together for further performance improvement.
The rest of this paper is organized as follows. Section~\ref{sRelatedwork} reviews related work; Section~\ref{sModel} describes our DRNNs model in detail. Section \ref{sExperiment} presents in-depth experimental results. Finally, we have conclusion in Section \ref{sConclusion}.
\section{Related Work}\label{sRelatedwork}
Traditional methods for relation classification mainly fall into two groups: feature-based or kernel-based. The former approaches extract different types of features and feed them into a classifier, e.g., a maximum entropy model~\cite{MaxEntRE}. Various features, including lexical, syntactic, as well as semantic ones, are shown to be useful to relation classification~\cite{2010SVM}. By contrast, kernel-based methods do not have explicit feature representations, but require predefined similarity measure of two data samples. \newcite{SpdKernel} design a kernel along the shortest dependency path (SDP) between two entities by observing that the relation strongly relies on SDPs.
\newcite {EmbedTreeK} combine structural information and semantic information in a tree kernel.
Neural networks have now become a prevailing technique in this task.
\newcite{RAE} design a recursive neural network along the constituency parse tree.
\newcite{CustomRNN}, also on the basis of recursive networks, emphasize more on important phrases;
\newcite{chainRNN} restrict recursive networks to SDP.
In our previous study~\cite{SDP-LSTM}, we introduce SDP-based recurrent neural network to classify relations.
\newcite{CNN}, on the other hand, apply CNNs to relation classification.
Along this line, \newcite{RankCNN} replace the common softmax loss function with a ranking loss in their CNN model.
\newcite{CNN-NG} design a negative sampling method for SDP-based CNNs.
Besides, representative hybrid models of CNNs and recursive/recurrent networks include \newcite{DepNN} and \newcite{EnsembleNN}.
\section{The Proposed Methodology}\label{sModel}
In this section, we describe our methodology in detail. Subsection~\ref{ssOverview} provides an overall picture of our DRNNs model. Subsections~\ref{ssRNN} and \ref{ssDeepRNN} describe deep recurrent neural networks. The proposed data augmentation technique is introduced in Subsection~\ref{ssDataAug}. Finally, we present our training objective in Subsection~\ref{ssObjective}.
\subsection{Overview}\label{ssOverview}
Figure~\ref{fArchitecture} depicts the overall architecture of the DRNNs model.
Given a sentence and its dependency parse tree,\footnote{
Parsed by the Stanford parser \cite{TypeDep}.} we follow our previous work \cite{SDP-LSTM} and build DRNNs on the shortest dependency path (SDP), which serves as a backbone. In particular, an RNN picks up information along each sub-path, separated by the common ancestor of marked entities. Also, we take advantage of four information channels, namely, word embeddings, POS embeddings, grammatical relation embeddings, and WordNet embeddings.
Different from \newcite{SDP-LSTM}, we design deep RNNs with up to four hidden layers so as to capture information in different levels of abstraction. For each RNN layer, max pooling gathers information from different recurrent nodes. Notice that the four channels (with eight sub-paths) are processed in a similar way. Then all pooling layers are concatenated and fed into a hidden layer for information integration. Finally, we have a softmax output layer for classification.
\subsection{Recurrent Neural Networks on Shortest Dependency Path}\label{ssRNN}
In this subsection, we introduce a single layer of RNN based on SDP, serving as a building block of our deep architecture.
Compared with a raw word sequence or a whole parse tree, the shortest dependency path (SDP) between two entities has two main advantages. First, it reduces irrelevant information; second, grammatical relations between words focus on the action and agents in a sentence and are naturally suitable for relation classification. Existing studies have demonstrated the effectiveness of SDP \cite{chainRNN,DepNN,SDP-LSTM,CNN-NG}; details are not repeated here.
Focused on the SDP, an RNN keeps a hidden state vector $\bm h$, changing with the input word at each step accordingly.
Concretely, the hidden state $\bm h_t$, for the $t$-th word in the sub-path, depends on its previous state $\bm h_{t-1}$ and the current word's embedding $\bm x_t$. For simplicity and without loss of generality, we use vanilla recurrent networks with perceptron-like interaction, that is, the input is linearly transformed by a weight matrix and non-linearly squashed by an activation function, i.e.,
\begin{equation}
\bm h_t=f(W_\text{in}\bm x_t+W_\text{rec}\bm h_{t-1}+\bm b_h)\label{eqn:shallow}
\end{equation}
where $W_\text{in}$ and $W_\text{rec}$ are weight matrices for the input and recurrent connections, respectively.
$\bm b_h$ is a bias term, and $f$ is a non-linear activation function ($\operatorname{ReLU}$ in our experiment).
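For concreteness, a single step of Equation~\ref{eqn:shallow} can be sketched in plain Python as follows (random placeholder weights; the 200-dimensional sizes match the word-embedding channel):
\begin{verbatim}
import numpy as np

def rnn_step(x_t, h_prev, W_in, W_rec, b_h):
    """h_t = ReLU(W_in x_t + W_rec h_{t-1} + b_h), Equation (1)."""
    return np.maximum(0.0, W_in @ x_t + W_rec @ h_prev + b_h)

d_in, d_h = 200, 200                 # embedding / hidden sizes
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.01, size=(d_h, d_in))
W_rec = rng.normal(scale=0.01, size=(d_h, d_h))
b_h = np.zeros(d_h)

h = np.zeros(d_h)
for x_t in rng.normal(size=(5, d_in)):  # five words along a sub-path
    h = rnn_step(x_t, h, W_in, W_rec, b_h)
\end{verbatim}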
\begin{figure*}
\centering
\bigskip
\includegraphics[width=.9\textwidth]{overview.pdf}
\caption{The overall architecture of DRNNs.
Two recurrent neural networks pick up information along the shortest dependency path, separated by its common ancestor. We use four information channels, namely words, part-of-speech tags, grammatical relations (GR), and WordNet hypernyms.}\label{fArchitecture}
\end{figure*}
\subsection{Deep Recurrent Neural Networks}\label{ssDeepRNN}
Although an RNN, as described above, is suitable for picking information along a sequence (a subpath in our task) by its iterative nature, the machine learning community suggests that deep architectures may be more capable of information integration, and can capture different levels of abstraction.
A single-layer RNN can be viewed as deep along \textit{time steps}. When unfolded, however, the RNN has only one hidden layer to capture the current input, as well as to retain the information in its previous step. In this sense, single-layer RNNs are actually shallow in information processing \cite{TrainDRNN,OpinionDRNN}.
In the relation classification task, words along SDPs provide information from different perspectives. On the one hand, the marked entities themselves are informative. On the other hand, the entities' common ancestor (typically verbs) tells how the two entities are related to each other. Such heterogeneous information might necessitate more complex machinery than a single RNN layer.
Following such intuition, we investigate deep RNNs by stacking multiple hidden layers on the top of one another, that is, every layer treats its previous layer as input, and computes its activation similar to Equation~\ref{eqn:shallow}.
Formally, we have
\begin{equation}
\bm h_t^{(i)}=f(W_{\text{in}}^{(i-1)}\bm h_t^{(i-1)} + W_{\text{rec}}^{(i)}\bm h_{t-1}^{(i)}
+ W_\text{cross}^{(i-1)}\bm h_{t-1}^{(i-1)} + \bm b^{(i)})\label{eqn:deep}
\end{equation}
where the subscripts refer to time steps, and superscripts indicate the layer number.
To enhance information propagation, we add a ``cross'' connection for hidden layers ($i\ge 2$) from the lower layer in the previous time step, given by $W_\text{cross}^{(i-1)}\bm h_{t-1}^{(i-1)}$ in Equation~\ref{eqn:deep}. (See also $\nearrow$ and $\nwarrow$ arrows in Figure~\ref{fArchitecture}).
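A sketch of the layer-$i$ update in Equation~\ref{eqn:deep} (illustrative Python; sizes and weight scales are placeholders):
\begin{verbatim}
import numpy as np

def deep_rnn_step(h_in_t, h_prev, h_in_prev, W_in, W_rec, W_cross, b):
    """Layer-i update of Equation (2): feed-forward input from layer
    i-1 at time t, recurrent input from layer i at t-1, and the
    'cross' connection from layer i-1 at t-1."""
    return np.maximum(0.0, W_in @ h_in_t + W_rec @ h_prev
                      + W_cross @ h_in_prev + b)

d = 200
rng = np.random.default_rng(1)
W_in, W_rec, W_cross = rng.normal(scale=0.01, size=(3, d, d))
h1 = deep_rnn_step(rng.normal(size=d), np.zeros(d), np.zeros(d),
                   W_in, W_rec, W_cross, np.zeros(d))
\end{verbatim}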
\bigskip
\subsection{Data Augmentation}\label{ssDataAug}
Neural networks, especially deep ones, are likely to be prone to overfitting. The SemEval-2010 relation classification dataset we use comprises only several thousand samples, which may not fully sustain the training of deep RNNs.
To mitigate this problem, we propose a data augmentation technique for relation classification by making use of the directionality of relationships.
The two sub-paths
\begin{compactitem}
\item[] $[$valuables$]_{e_1}$ $\rightarrow$ jewelry $\rightarrow$ locked
\item[] locked $\leftarrow$ in $\leftarrow$ closet $\leftarrow$ $[$safe$]_{e_2}$
\end{compactitem}
in Figure~\ref{fig:example}, for example, can be mapped to the {\tt subject-predicate} and
{\tt object-} {\tt predicate} components in the relation {\tt Content-}{\tt Container}$(e_1,e_2)$. If we change the order of these two sub-paths, we obtain
\begin{compactitem}
\item[] $[$safe$]_{e_1}$ $\rightarrow$ closet $\rightarrow$ in $\rightarrow$ locked
\item[] locked $\leftarrow$ jewelry $\leftarrow$ $[$valuables$]_{e_2}$
\end{compactitem}
Then the relationship becomes {\tt Container-}{\tt Content}$(e_1,e_2)$, which is exactly the inverse of {\tt Content-}{\tt Container}$(e_1,e_2)$. In this way, we can augment the dataset without using additional resources.
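Operationally, the augmentation swaps the two sub-paths and inverts the relation label, e.g. (the sample representation below is hypothetical; SemEval-2010 relation names have the form {\tt A-B}):
\begin{verbatim}
def augment(sample):
    """Swap the SDP sub-paths and inverse the directed relation."""
    rel, args = sample["relation"].split("(")
    a, b = rel.split("-")
    return {"left": sample["right"], "right": sample["left"],
            "relation": "%s-%s(%s" % (b, a, args)}

sample = {"left": ["valuables", "jewelry", "locked"],
          "right": ["locked", "in", "closet", "safe"],
          "relation": "Content-Container(e1,e2)"}
print(augment(sample)["relation"])   # Container-Content(e1,e2)
\end{verbatim}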
\subsection{Training Objective}\label{ssObjective}
For each recurrent layer and embedding layer (over each sub-path for each channel), we apply a max pooling layer to gather information. In total, we have 40 pools, which are concatenated and fed to a hidden layer for information integration.
Finally, a softmax layer outputs the estimated probability that two sub-paths ($s^\text{left}$ and $s^\text{right}$) are of relation $r$. For a single data sample $i$, we apply the standard cross-entropy loss, denoted as $J(s_i^\text{left}, s_i^\text{right}, r_i)$. With the data augmentation technique, our overall training objective is
\begin{align}\nonumber
J=&\sum_{i=1}^m \Big[J(s_i^\text{left}, s_i^\text{right}, r_i)
+ J(s_i^\text{right},s_i^\text{left}, r_i^{-1})\Big]
+\lambda\sum_{i=1}^\omega\|W_i\|_F
\end{align}
where $r^{-1}$ refers to the inverse of relation $r$. $m$ is the number of data samples in the original training set. $\omega$ is the number of weight matrices in DRNNs. $\lambda$ is a regularization coefficient, and $\|\cdot\|_F$ denotes Frobenius norm of a matrix.
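For a single sample, the two cross-entropy terms of the objective can be sketched as follows (illustrative Python; the 19-way logits and the inverse-label index are placeholders):
\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_loss(logits_fwd, logits_rev, r, r_inv):
    """J(s_left, s_right, r) + J(s_right, s_left, r^{-1})
    as cross-entropy over the 19 class probabilities."""
    return (-np.log(softmax(logits_fwd)[r])
            - np.log(softmax(logits_rev)[r_inv]))

rng = np.random.default_rng(0)
loss = sample_loss(rng.normal(size=19), rng.normal(size=19), 3, 4)
\end{verbatim}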
For decoding (predicting the relation of an unseen sample), the data augmentation technique provides new opportunities, because we can use the probability of $r(e_1, e_2)$, $r^{-1}(e_2, e_1)$, or both. Section~\ref{ssExpDataAug} provides detailed discussion.
\section{Experiments}\label{sExperiment}
In this section, we present our experiments in detail.
Subsection~\ref{ssData} introduces the dataset;
Subsection~\ref{ssSetting} describes hyperparameter settings.
We discuss the details of data augmentation in Subsection~\ref{ssExpDataAug} and
the rationale for using RNNs in Subsection~\ref{ssRNNCNN}.
Subsection~\ref{ssResult} compares our DRNNs model with other methods in the literature.
In Subsection~\ref{ssDRNNsDepth}, we have quantitative and qualitative analysis of how the depth affects our model.
\subsection{Dataset}\label{ssData}
We evaluated our DRNNs model on the SemEval-2010 Task 8 dataset, which is an established benchmark for relation classification \cite{2010SVM}.
The dataset contains 8000 sentences for training, and 2717 for testing. We split 800 samples out of the training set for validation.
There are 9 directed relations and an undirected default relation \verb|Other|; thus, we have 19 different labels in total. However, the \verb|Other| class is not taken into consideration when we compute the official measures.
\subsection{Hyperparameter Settings}\label{ssSetting}
This subsection presents hyperparameters of our proposed model.
We basically followed the settings in our previous work \cite{SDP-LSTM}.
Word embeddings were 200-dimensional, pretrained by ourselves using {\tt word2vec} \cite{Word2vce} on the Wikipedia corpus; embeddings in other channels were 50-dimensional and initialized randomly.
The hidden layers in each channel had the same number of units as their embeddings (either 200 or 50); the penultimate hidden layer was 100-dimensional. An $\ell_2$ penalty of $10^{-5}$ was also applied as in \newcite{SDP-LSTM}, but we chose the dropout rate by validation with a granularity of 5\% for our model variants (with different depths).
We also chose the depth of DRNNs by validation from the set $\{1,2,\cdots,6\}$. The 3-layer and 4-layer DRNNs yield the highest performance with and without data augmentation, respectively. Section~\ref{ssDRNNsDepth} provides both quantitative and qualitative analysis regarding the effect of depth.
We applied mini-batched stochastic gradient descent for optimization, where gradients were computed by standard back-propagation.
\subsection{Data Augmentation Details}\label{ssExpDataAug}
\begin{table}[!t]
\centering
\bigskip
\begin{minipage}{0.55\textwidth}
\centering
\begin{tabular}{lc}
\hline
\textbf{Variant of Data augmentation\quad\quad} & \textbf{$F_1$}\\
\hline
No Augmentation & 84.16\\
Augment all relations & 83.43\\
Augment {\tt Other} only & 83.01\\
Augment directed relations only & 86.10\\
\hline
\end{tabular}
\caption{Comparing variants of data augmentation.}\label{tab:aug}
\end{minipage}~~
\begin{minipage}{.4\textwidth}
\centering
\begin{tabular}{c|cc}
\hline
& \multicolumn{2}{c}{Depth}\\
\cline{2-3}
& \hspace{.3cm}1\hspace{.3cm} &\hspace{.3cm} 2\hspace{.3cm}\\
\hline
\hspace{.3cm}CNN \hspace{.3cm}&\hspace{.3cm} 84.01 \hspace{.3cm}& \hspace{.3cm}83.78\hspace{.3cm}\\
\hspace{.3cm}RNN \hspace{.3cm}&\hspace{.3cm} 84.43 \hspace{.3cm}& \hspace{.3cm}85.04\hspace{.3cm}\\
\hline
\end{tabular}
\caption{Comparing CNNs and RNNs (also using $F_1$-score as the measurement).}\label{tab:CNN}
\end{minipage}
\end{table}
As mentioned in Section~\ref{ssData}, the SemEval-2010 Task 8 dataset contains an undirected class {\tt Other} in addition to 9 directed relations (18 classes). For data augmentation, it is natural that the reverse of an {\tt Other} sample also falls into the {\tt Other} class itself. However, if we augment all the relations, we observe a performance degradation of 0.7\% (Table~\ref{tab:aug}). We conjecture that the {\tt Other} class mainly contains noise, which is inimical to our model. We then conducted another experiment where we augmented only the {\tt Other} class. The result verifies our conjecture, as we obtained an even larger degradation of 1.1\% in this setting.
These pilot experiments suggest that we should take unfavorable noise into consideration when performing data augmentation. Indeed, if we reverse the directed relations only and leave the {\tt Other} class intact, the performance is improved by a large margin of 1.9\%.
This shows that our proposed data augmentation technique does help to mitigate the problem of data sparseness, if we carefully rule out the impact of noise.
During validation and testing, we shall decode the target label of an unseen data sample (with two entities $e_1$ and $e_2$). Through data augmentation, we are equipped with the probability of $r^{-1}(e_2, e_1)$ in addition to $r(e_1, e_2)$. In our experiment, we tried several settings and chose to use $r^{-1}(e_2, e_1)$ only, because it yields the highest validation result. We think this is probably because the {\tt Other} class brings more noise to $r$ than $r^{-1}$, as the {\tt Other} class is not augmented (and hence asymmetric).
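In code, this decoding rule (scoring an unseen pair through the reversed direction only) could be sketched as follows; the label-index lookup table is a hypothetical ingredient:
\begin{verbatim}
import numpy as np

def decode(model, left, right, to_direct_label):
    # Feed the reversed sub-paths to obtain the distribution over the
    # inverse relations r^{-1}(e2, e1), take the argmax, and map the
    # predicted inverse label back to the original direction.
    probs_rev = model(right, left)
    k = int(np.argmax(probs_rev))
    return to_direct_label[k]   # lookup table: r^{-1} -> r
\end{verbatim}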
We would like to point out that our data augmentation method is a general technique for relation classification and not \textit{ad hoc} to a specific dataset, and that the methodology for dealing with noise is also potentially applicable to other datasets.
\subsection{RNNs vs. CNNs}\label{ssRNNCNN}
As both RNNs and CNNs are prevailing neural models for NLP, we are curious whether deep architectures are also beneficial to CNNs. We tried a CNN with a sliding window of size 3 based on SDPs, similar to~\newcite{CNN-NG}; other settings were the same as in our DRNNs.
The results are shown in Table~\ref{tab:CNN}. We observe that a single layer of CNN is also effective, yielding an $F_1$-score slightly worse than our RNN. But the deep architecture hurts the performance of CNNs in this task. One plausible explanation is that, when convolution is performed, the beginning and end of a sentence are typically padded with a special symbol or simply zero. However, the shortest dependency path between two entities is usually not very long ($\sim$4 on average). Hence, sentence boundaries may play a large role in convolution, which makes CNNs vulnerable.
In contrast, RNNs can deal with sentence boundaries smoothly, and the performance continues to increase with up to 4 hidden layers. (Details are deferred to Subsection~\ref{ssDRNNsDepth}.)
\bigskip
\subsection{Overall Performance}\label{ssResult}
\begin{table*}[!t]
\bigskip
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c|l|c}
\hline
\hline
\textbf{Model} &\centering \textbf{Features} &\textbf{$F_1$}\\
\hline
\multirow{2}{*}{SVM} & POS, WordNet, prefixes and other morphological features, & \multirow{3}{*}{82.2}\\
\multirow{2}{*}{\footnotesize\cite{2010SVM} } & dependency parse, Levin classes, PropBank, FrameNet, & \\
& NomLex-Plus, Google $n$-gram, paraphrases, TextRunner & \\
\hline
RNN & Word embeddings & 74.8\\
\footnotesize\cite{RAE} & + POS, NER, WordNet & 77.6\\
\hline
MVRNN & Word embeddings & 79.1\\
\footnotesize\cite{MVRNN} & + POS, NER, WordNet & 82.4\\
\hline
CNN & Word embeddings & 69.7\\
\footnotesize\cite{CNN} & + position embeddings, WordNet & 82.7\\
\hline
Chain CNN & \multirow{2}{*}{Word embeddings, POS, NER, WordNet} &\multirow{2}{*}{82.7}\\
\footnotesize\cite{chainRNN}&&\\
\hline
CR-CNN & Word embeddings & 82.8\\
\footnotesize\cite{RankCNN} & + position embeddings & 84.1\\
\hline
FCM & Word embeddings & 80.6\\
\footnotesize\cite{FCM} & + dependency parsing, NER & 83.0\\
\hline
SDP-LSTM & Word embeddings & 82.4\\
\footnotesize\cite{SDP-LSTM} & Word + POS + GR + WordNet embeddings & 83.7\\
\hline
DepNN & Word embeddings + WordNet & 83.0\\
\footnotesize\cite{DepNN} & Word embeddings + NER & 83.6\\
\hline
depLCNN & Word + WordNet + words around nominals & 83.7\\
\footnotesize\cite{CNN-NG} & + negative sampling from NYT dataset & 85.6\\
\hline
Ensemble Methods & Word+POS+NER+WordNet embeddings, CNNs, RNNs + Stacking & 83.4 \\
\footnotesize\cite{EnsembleNN} & Word+POS+NER+WordNet embeddings, CNNs, RNNs + Voting & 84.1\\
\hline
\multirow{2}{*}{DRNNs} & Word+POS+GR+WordNet embeddings w/o data augmentation & 84.2\\
& + data augmentation &\textbf{86.1}\\
\hline
\hline
\end{tabular}
}
\caption{Comparison of previous relation classification systems.}
\label{tab:result}
\end{table*}
Table~\ref{tab:result} compares our DRNNs model with previous state-of-the-art methods.\footnote{This paper was preprinted on arXiv on 14 Jan 2016.}
The first entry in the table presents the highest performance achieved by traditional feature-based methods.
\newcite{2010SVM} feed a variety of handcrafted features to the SVM classifier and achieve an $F_1$-score of 82.2\%.
Recent performance improvements on this dataset are mostly achieved with the help of neural networks.
In an early study, \newcite{RAE} build a recursive network on constituency trees, but achieve a performance worse than \newcite{2010SVM}.
\newcite{MVRNN} extend the recursive network with matrix-vector interaction and elevate the $F_1$-score to 82.4\%.
\newcite{chainRNN} restrict the recursive network to SDP, which is slightly better than a sentence-wide network.
In our previous study \cite{SDP-LSTM}, we introduce recurrent neural networks based on SDP and improve the $F_1$-score to 83.7\%.
In the school of convolution, \newcite{CNN} construct a CNN on the word sequence; they also integrate word position embeddings, which benefit the CNN architecture.
\newcite{RankCNN} propose a similar CNN model, named CR-CNN, by replacing the common softmax cost function with a ranking-based cost function.
By diminishing the impact of the \verb|Other| class, they achieve an $F_1$-score of 84.1\%.
\newcite{CNN-NG} design an SDP-based CNN with negative sampling, improving the performance to 85.6\%.
Hybrid models of CNNs and RNNs do not appear to be very useful, achieving up to an $F_1$-score of 84.1\%~\cite{DepNN,EnsembleNN}.
\newcite{FCM} propose a Feature-rich Compositional Embedding Model (FCM), which combines unlexicalized linguistic contexts and word embeddings.
They do not use neural networks (at least in the usual sense) and achieve an $F_1$-score of 83.0\%.
Our DRNNs model, along with data augmentation, achieves an $F_1$-score of 86.1\%. Even if we do not apply data augmentation, the DRNNs model yields an $F_1$-score of 84.2\%, which is also the highest score achieved without special treatment of the noisy \texttt{Other} class. The above results show the effectiveness of DRNNs, especially when trained with a larger (augmented) dataset.
\subsection{Analysis of DRNNs' Depth}\label{ssDRNNsDepth}
\begin{figwindow}[0,r,%
\mbox{
\centering
\includegraphics[width=.4\textwidth]{depth.pdf}
},
\label{fDepEffect}
{Analysis of the depth.$^\text{3}$
}
]
In this subsection, we analyze the effect of depth in our DRNNs model.
We have tested the depth from the set $\{1, 2, \cdots, 6\}$, and plot the results in Figure~\ref{fDepEffect}. Initially, the performance increases if the depth is larger in both settings with and without augmentation. However, if we do not augment data, the performance peaks when the depth is 3. Provided with augmented training samples, the $F_1$-score continues to increase with up to 4 layers, and ends up with an $F_1$-score of 86.1\%.
\indent We next investigate how RNN units in different layers are related to the ultimate task of interest. This is accomplished by tracing back information from the pooling layers. Noticing that the pooling layer takes the maximum value in each dimension, we can compute what proportion of a hidden layer's units is gathered by pooling for further processing. In this way, we are able to demonstrate the information flow in RNN hidden units. We plot three examples in Figure~\ref{fVis}. Here, rectangles refer to RNN hidden layers, unfolded along time. (Rounded rectangles are word embeddings.) The intensity of color reflects the pooling proportion.
\end{figwindow}
\bigskip
\begin{itemize}
\item Sample 1: ``Until 1864 $[$vessels$]$$_{e_1}$ in the service of certain UK public offices defaced the Red Ensign with the $[$badge$]$$_{e_2}$ of their office'' with label \verb|Instrument-Agency|$({e_2}, {e_1})$.
Its two sub-paths of SDP are
\begin{compactitem}
\item[]$\text{$[$vessels$]$}_{e_1}\rightarrow\text{until}\rightarrow\text{defaced}$
\item[]defaced $\leftarrow$ with $\leftarrow$ $[$badge$]_{e_2}$
\end{compactitem}
From Figure~\ref{fVis}a, we see that entities like \textit{vessels} and \textit{badge}
are darker than the verb phrase \textit{defaced with} on the embedding layer.
When information is propagating horizontally and vertically, these entities are getting lighter, while the verb phrase becomes darker gradually.
Intuitively, we think that, for the relation \verb|Instrument-Agency|$({e_2}, {e_1})$, the two entities \textit{vessels} and \textit{badge} alone are less informative.
Adding the semantics of the verb phrase \textit{defaced with} makes us more aware of the target relation.{\color{white}\footnote{ Using vanilla RNN with a depth of 1, we obtained a slightly better accuracy in this paper than~\newcite{SDP-LSTM}.}}
\item Sample 2: ``Most of the $[$verses$]$$_{e_1}$ of the plantation songs had some reference to $[$freedom$]$$_{e_2}$'' with label \verb|Message-Topic|$({e_1}, {e_2})$.
Its two sub-paths of SDP are
\begin{compactitem}
\item[] $\text{$[$verses$]$}_{e_1}\rightarrow\text{of}\rightarrow\text{most}\rightarrow\text{had}$
\item[] had $\leftarrow$ reference $\leftarrow$ to $\leftarrow$ $[$freedom$]_{e_2}$
\end{compactitem}
Similar to Sample 1, we see from Figure \ref{fVis}b that the color of the ``pivot'' verb \textit{had} is getting darker vertically, and becomes the darkest in the fourth RNN layer, indicating the highest pooling proportion.
This is probably because \textit{had} links two ends of the relation, \verb|Message| and \verb|Topic|.
\item Sample 3: ``A more spare, less robust use of classical $[$motifs$]_{e_1}$ is evident in a $[$ewer$]_{e_2}$ of 1784-85'' with label \verb|Component-Whole|$({e_1}, {e_2})$.
Its two sub-paths of SDP are
\begin{compactitem}
\item[]$\text{$[$motifs$]$}_{e_1}\rightarrow\text{of}\rightarrow\text{use}\rightarrow
\text{evident}$
\item[] evident $\leftarrow$ in $\leftarrow$ $[$ewer$]_{e_2}$
\end{compactitem}
Different from Figures~\ref{fVis}a and \ref{fVis}b, higher layers pay more attention to the entities rather than their common ancestor. In this example, \textit{motifs} and \textit{ewer} appear to be more relevant to the relation \verb|Component-|\verb|Whole| than their common ancestor \textit{evident}.
The pooling proportion of entities (\textit{motifs}, \textit{ewer}) is increasing, while other words' proportion is decreasing.
\end{itemize}
\begin{figure*}[!t]
\centering
\includegraphics[width=.97\textwidth]{vis.pdf}
\caption{Visualization of information propagation along multiple RNN layers.}
\label{fVis}
\end{figure*}
We summarize our findings as follows. (1) Pooled information usually peaks at one or a few words in the embedding layer. This makes sense because there is no information flow in this layer. (2) Information scatters over a wider range in hidden layers, showing that the recurrent propagation does mix information. (3) For a higher-level layer, the network pays more attention to those words that are more relevant to the relation, but whether entities or their common ancestor is more relevant is not consistent among different data samples.
\section{Conclusion}\label{sConclusion}
In this paper, we proposed deep recurrent neural networks, named DRNNs, to improve the performance of relation classification. The DRNNs model, consisting of several RNN layers, explores the representation space of different abstraction levels. By visualizing DRNNs' units, we demonstrated that high-level layers are more capable of integrating information relevant to target relations. In addition, we have designed a data augmentation strategy by leveraging the directionality of relations.
When evaluated on the SemEval dataset, our DRNNs model results in a substantial performance boost. The performance generally improves as the depth increases; with a depth of 4, our model reaches the highest $F_1$-measure of 86.1\%.
\section*{Acknowledgments}
We thank all reviewers for their constructive comments.
This research is supported by the National Basic Research Program of China (the 973 Program) under Grant No.\@ 2015CB352201, the National Natural Science Foundation of China under Grant Nos.\@ 61232015, 91318301, 61421091, and 61502014.
\bibliographystyle{acl}
\section{Introduction}\label{chap:intro}
Over the last two decades, law enforcement agencies have been relying more and more on statistical tools to build an objective criminal justice system, leading to a meteoric rise of ``predictive policing", loosely defined as ``\textit{the application of analytical techniques - particularly quantitative techniques - to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions}" \citep{perry2013predictive}. The proposed algorithms and methods attempt to uncover and exploit different aspects of crime activity data. For example, \citet{gotway1997generalized} use a spatial generalized linear model, which has been extended both by incorporating temporal patterns and by adopting a non-linear modeling approach using generalized additive modeling in ST-GAM or LST-GAM \citep{wang2012spatio}. In a series of papers, \citet{mohler2011self, mohler2013modeling, mohler2015randomized} propose a self-exciting point process model that treats the near-repeat nature of crimes \citep{townsley2000repeat} as aftershocks of an earthquake. This is the main driving force behind the popular crime forecasting software called PredPol (\url{https://predpol.com/}) that has since been adopted by many policing agencies throughout the US.
Apart from increasing the accuracy of prediction of future crime, it is also important to understand which geographical factors significantly contribute to crime. Such knowledge can inform a plan for allocating resources or making policy changes to either counteract the effect of a `risky' place or increase the intensity or presence of a `protective' place. This is also closely related to the goal of ensuring that a prediction rule does not suffer from algorithmic or systemic biases. This is particularly important, as with the increase in complexity and use of such data-based tools, there is growing concern and additional effort devoted to reducing the racial disparities in predictive policing, while producing dynamic and real-time forecasts and insights about spatio-temporal crime activities. For example, using a combination of demographically representative synthetic data and survey data on drug use, \citet{lum2016predict} point out that predictive policing estimates based on biased policing records often accentuate the racial bias instead of removing it. A natural solution seems to be the risk terrain modeling (RTM) framework of \citet{caplan2011risk}, that uses a simple but interpretable approach. In RTM, a separate map layer is created for each predictor, that are then combined to produce a composite map where contribution or importance of each factor can be evaluated in a model-based way.
We start with a brief review of the existing statistical methodology behind the most common crime forecasting tools.
\subsection{Literature Review}
\noindent \textbf{Self-exciting Point Process:} One of the popular statistical approaches to modeling criminal activities is the class of self-exciting processes \citep{mohler2011self, mohler2013modeling, mohler2015randomized}, which are characterized by an increasing probability of repeated events following an event, similar to aftershocks of an earthquake. Here the intensity of a discrete-time point process (criminal activities, in this context) is determined as a log-Gaussian Cox process (LGCP) whose intensity is self-excited by the occurrence of many events in a short time-window. \citet{mohler2015randomized} found their approach outperformed a dedicated crime analyst who relied on existing intel and hotspot mapping.
\noindent \textbf{Generalized Additive Modeling for Spatio-temporal Data:} \cite{wang2012spatio} developed a more sophisticated model using generalized additive modeling for spatio-temporal data (ST-GAM), which can be thought of as an extension of grid-based regression approaches that accounts for non-linear relationships. Here, spatio-temporal features include previous crime activities, socio-economic and built-environment features at the grid-cell resolution indexed over time, and \cite{wang2012spatio} showed that their method outperforms the spatial Generalized Linear Model (GLM) \citep{gotway1997generalized}, where temporal information is not incorporated.
\noindent \textbf{Risk Terrain Modeling:} Risk terrain modeling, henceforth abbreviated as RTM, \citep{caplan2011risk, caplan2015risk, drawve2016metric} is a class of statistical methods that combines geographic features such as built, physical-environment and socioeconomic variables in a supervised learning set-up to provide insights and forecasts for crime activities at a chosen grid-level based on the proximity to features and social factors or density of features. A typical RTM approach involves three steps: (1) identify potentially relevant factors for the spatial varying response variable, (2) assign a value for each factor considered for each location or grid-cell spanning a common geography, and (3) combine the factor-specific raster maps in a supervised regression framework so that each factor can be judged in terms of its relevance for the crime outcome. The RTM approach, like several other models, alleviates some racial disparity concerns by moving the focus of the modeling approach from people to places. However, there are some key advantages of the RTM approach over the LST-GAM or Hawkes process based algorithms. Firstly, the underlying statistical methodology for RTM immediately provides interpretability to the factors influencing spatial clustering of crime or other response variables. Secondly, the raster-map based modeling framework lets us easily incorporate different machine learning and statistical tools of choice depending on their performance for a given jurisdiction. In this paper, we use Poisson GLM, spatial error model and random forest, but it is straightforward to add any number of methodologies to the mix and choose the best performing method or combine the disparate tools in an ensemble learning framework.
While these developments have been mostly focused on crime prediction and prevention, there is relatively less emphasis on other spatial events such as mental health calls that also require resource allocation from law enforcement agencies or the city. The goal of this paper is to extend the powerful and interpretable statistics and machine learning methodologies under the general umbrella of risk terrain modeling to the geo-spatial predictive modeling of mental health call locations in Little Rock, AR.
The outline of the paper is as follows: in Section \ref{chap:sp}, we describe the modeling approach and the different methodologies used in developing the risk terrain model for mental health calls. In Section \ref{chap:mental} we illustrate the spatial clustering and other descriptive features of the data as well as demonstrating the performance of the proposed framework. Finally, in Section \ref{chap:end}, we provide some new directions for research in this area.
\section{Spatial Forecasting}\label{chap:sp}
\subsection{Modeling Approach}
Our spatial modeling and forecasting framework is similar to RTM, with a key difference being the underlying statistical methodologies. In this paper, we use the following methodologies and compare both the important predictors chosen by the model as well as their predictive performance for forecasting mental health incidents in Little Rock, Arkansas. Little Rock regularly has above average violent and property crime rates when compared to other large U.S. cities \citep{chillar2020unpacking}. Data were obtained from several city departments, including the Little Rock Police Department, through an ongoing data-sharing Memorandum of Understanding (MOU) between researchers and Little Rock. Social data were obtained from the American Community Survey (5-year estimates). Mental health incidents from 2015 through 2017 are used to predict 2018 incidents.
\begin{description}
\item [Poisson Generalized Linear Model] The Poisson regression model belongs to a family of regression models called the generalized linear model (GLM). As a special case of the GLM family, the fitted Poisson regression model uses the canonical link $\eta_i = \ln(\lambda_i)$ and is of the form:
\[
\hat{y_i} = g^{-1}(x^T_i \hat{\beta}) = {\rm e}^{x^T_i \hat{\beta}}.
\]
Among several link functions commonly used with the Poisson distribution, the log link function ensures that $\lambda_i \geq 0$, which is crucial for the expected value of a count outcome of the response variable (mental health incidents) \citep{Montgomery}. In terms of model interpretation, parameters may be interpreted in a probabilistic sense, which arises as an advantage from the fact that Poisson regression belongs to the GLM family. Consequently, significant factors present in the fitted model may be explained in strict probabilistic terms with respective levels of uncertainty. A minimal fitting sketch for this model and the random forest below is given after this list.
\item [Random Forest] Random forest \citep{breiman2001random} falls into the non-linear/non-parametric category of supervised learning approaches known as decision trees. Decision trees are particularly known due to their inherent ease of use and interpretability in both regression and classification problems. For regression problems, which we focus on here, decision trees divide the predictor space into $J$ distinct and non-overlapping regions, $R_1,R_2,...,R_J$ also known as terminal nodes or leaves using the training data through a recursive binary splitting procedure. Note that a threshold is implemented so that the recursive binary splitting procedure ends when the number of observations at any terminal node falls below the set threshold. In addition to the preceding criteria, the aim is to obtain terminal nodes that minimize the residual sum of squares:
$$\sum^J _{j=1} \sum_{i \in R_j}(y_i - \hat{y}_{R_j})^2.$$
The results obtained are likely to over-fit the data due to the complexity of the resulting tree, so a cost-complexity pruning procedure is implemented to find a subtree which minimizes the objective function:
$$\sum^{|T|}_{j = 1} \sum_{i:x_i \in R_j} (y_i - \hat{y}_{R_j})^2 + \alpha |T|,$$
thereby reducing the variance at the cost of a little bias, for better interpretation. As a preventative measure against over-fitting the training data and to control the size of the tree, the penalty term $\alpha |T|$, proportional to the number of terminal nodes $|T|$, is added to the objective. The predicted response for any observation that falls into the $j^{th}$ region $R_j$ is the mean response of all observations from the training data set that are in that same terminal node.
Single decision trees, however, are not as competitive when compared to other forms of linear or non-linear supervised learning models. One solution that builds a more robust predictor is known as random forests. Random forests build $B$-many trees to improve performance, using bootstrapped samples from the training data in a strategic manner that decorrelates the $B$-many trees, with the final prediction obtained by averaging the predictions from the individual trees. In the process of building each decision tree, at every split, a random sample of $m = \sqrt{p}$ predictors is chosen as candidates from the pool of $p$ predictors. As a result, strong predictors do not influence the building order of every tree (making the trees not look alike). This process decorrelates the $B$-many trees, as on average $\frac{p-m}{p}$ of the splits would not even consider such strong predictors, thus reducing the variance and improving results. We refer the reader to \citet{James2014Introduction} for an in-depth discussion of random forests. In relation to crime, \citet{wheeler2021mapping} found their random forests model outperformed RTM and Kernel Density Estimation (KDE) for robbery prediction in Dallas, Texas.
\item [Spatial Econometric Model: Spatial Durbin Model]
Data containing a location or geographic component exhibit spatial dependencies among observations, which may give rise to spatial relationships. Spatial relationships occur not only in the dependent variables (response variables), but also in the independent variables (covariates) and residual terms ($\epsilon$). The proper terms for spatial relationships among dependent variables, independent variables and residual terms are endogenous interaction, exogenous interaction and error interaction, respectively. A model that accounts for all spatial relationships is the \textbf{Manski model}\footnote{The Manski model is also known as the Generalized Nesting Spatial Model (GNS) \citep{Elhorst2014Spatial}}, with the form:
\begin{equation}
{\bf Y} = \delta{\bf W}{\bf Y} + {\bf X}\beta + {\bf W}{\bf X}\theta + {\bf u}; \hspace{0.5cm} {\bf u} = \lambda{\bf W}{\bf u} + \epsilon.
\label{eq:Manski}
\end{equation}
Here $\delta$ is known as the spatial autoregressive coefficient, $\lambda$ is the spatial autocorrelation coefficient, ${\bf W}$ represents the spatial weights matrix that describes the spatial configuration of the unit samples, ${\bf X}$ is a matrix of exogenous variables or covariates and lastly $\theta \text{ and } \beta$ are unknown parameters to be estimated that explain the contribution of each predictor and their spatially lagged version \citep{Elhorst2014Spatial}.
For the purpose of this paper, both Manski and spatial Durbin error models were fitted to the mental health spatial data. The Manski model, otherwise known as the general nesting spatial model, expresses spatial events (mental health incidents) as a function of endogenous interactions (neighboring values or spatial lags), exogenous interactions (built environment, social factors, etc.) and error interactions (spatial autocorrelation \& spatial heterogeneity). The spatial Durbin error model is a special case of the Manski model with $\delta = 0$, thus having the endogenous interactions removed. The spatial Durbin error model is of the form:
\begin{equation}
{\bf Y} = {\bf X}\beta + {\bf W}{\bf X}\theta + {\bf u} ; \hspace{0.5cm} {\bf u} = \lambda{\bf W}{\bf u} + \epsilon.
\label{eq:SDEM}
\end{equation}
\end{description}
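To make the preceding descriptions concrete, the sketch below fits the first two models on synthetic placeholder data; the feature matrix, response, and parameter choices are illustrative stand-ins rather than the actual Little Rock variables:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                     # placeholder features
y = rng.poisson(lam=np.exp(0.3 * X[:, 0] + 0.1))  # placeholder counts

# Poisson GLM with the canonical log link
glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
glm_pred = glm.predict(sm.add_constant(X))

# Random forest averaging B = 500 trees, m = sqrt(p) features per split
rf = RandomForestRegressor(n_estimators=500, max_features="sqrt",
                           random_state=0).fit(X, y)
rf_pred = rf.predict(X)
\end{verbatim}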
\section{Analyzing mental health incidents in Little Rock}\label{chap:mental}
\subsection{Descriptive Statistics}
\subsubsection{Evidence of Clustering: Moran's I}
The underlying assumption at the start of this study was that mental health incident events in Little Rock were distributed as spatially heterogeneous points (\textit{i.e.,} clusters) rather than uniformly over the geographic region. To put matters into visual perspective, see Fig. \ref{fig:Incidents} where panel 1 represents the geographic locations of the recorded 2018 mental health incidents in Little Rock and panel 2 represents the same number of incidents but simulated as if they were following an uniform spatial distribution. Fig. \ref{fig:Incidents} shows the presence of spatial clusters of mental health incidents in Little Rock when compared with the uniform distribution. However, as visual comparisons could be interpreted as being subjective, we consider a measure of spatial auto-correlation to test the spatial heterogeneity. To be precise, we want to test the null hypothesis that the mental health incidents are uniformly distributed across the area of study (Little Rock) against the alternative hypothesis that they are more clustered than might be expected from usual randomness.
Clustering, when referring to the whole spatial pattern, can be described by a global statistic for spatial auto-correlation. However, to properly identify the location of clustered and non-clustered regions, a Local Indicator of Spatial Association (LISA) must be implemented. A LISA is any statistic that provides the extent of significant spatial clustering of similar values around a given observation (\textit{i.e.,} Local Spatial Statistic). It also establishes the connection between the local and global statistic for spatial association having the sum of all local spatial statistics be proportional to the global statistic thereby allowing the decomposition of global indicators \citep{LISA}.
Among a handful of global tests for spatial auto-correlation, including Geary's $C$ and the global Getis-$G$, Moran's $I$ is perhaps the most common global test, and is implemented in almost all common spatial toolboxes for testing auto-correlation \citep{bivand2008applied}. Spatial auto-correlation quantifies the degree to which similar features cluster and identifies their location. In the presence of spatial auto-correlation, we can predict the values of observation $i$ from the values observed at $j \in N_{i}$, the set of its proximate neighbors \citep{Pebesma2019Spatial}. As in typical correlation, Moran's $I$ value generally ranges from $-1$ to $+1$ inclusively as a result of having a normalizing factor, $n/(\sum^{n}_{i = 1} \sum^{n}_{j = 1} w_{ij})$ \citep{Moran_Range}. The contrast between the spatial auto-correlation Moran's $I$ and Pearson or Spearman's correlation lies in the presence of the spatial weights matrix in Moran's $I$ statistic. The inclusion of the spatial weights matrix in Moran's $I$ enables the possibility of obtaining extreme values beyond the usual $[-1,1]$ bounds, depending on the structure and composition of the weights matrix. Extreme values are obtained via the relation between the minimum and maximum eigenvalues of the spatial weights matrix. For a thorough discussion regarding the range and extreme values of Moran's $I$ we refer readers to \citep{Extreme_Morans}. A negative and significant Moran's $I$ value represents negative spatial auto-correlation, indicating dissimilar values are next to each other. A positive and significant Moran's $I$ value represents positive spatial auto-correlation, indicating evidence of clustering of like values.
\begin{figure}[!ht]
\centering
\includegraphics[scale = 0.45]{Images/Incidents_Randomness.pdf}
\caption{Panel 1 shows the observed mental health incidents in Little Rock in 2018. Panel 2 shows the distribution of simulated mental health incidents following a Uniform distribution, keeping the total number of incidents fixed.}
\label{fig:Incidents}
\end{figure}
In order to apply the spatial auto-correlation tests (both Global and Local Moran's $I$) to the spatial data and induce a supervised learning framework, two critical prerequisite steps had to be executed, \textit{viz.} (a) identification of the $k$ nearest neighbors, and (b) assignment of their respective weights using the package \textbf{spdep} \citep{SPDEP}. We first create a fishnet of grid cell size of $1000$m by $1000$m covering Little Rock containing all the necessary attributes for the analysis, with each cell mapped to a centroid, which is necessary in order to extend the neighborhood criteria from contiguity to distance-based neighbors ($k$-nearest neighbors) \citep{Pebesma2019Spatial}.
Using $k$-nearest neighbors typically leads to asymmetric neighbors. However, this is not the case here, as all centroids are uniformly spaced. A key advantage of using distance-based neighbors over ordinary polygon contiguity is that it ensures that all fishnet grid cell polygon representations (centroids) have $k$ neighbors. It is common practice to use $k = 8$ or $k = 4$ neighbors, which are formally known as the ``Queen case" and ``Rook case". For this paper, $k = 8$ nearest neighbors were used, located using the functions \emph{knearneigh} and \emph{knn2nb} from the package \textbf{spdep}. Following the identification of the $8$ nearest neighbors for each centroid, their respective weights were assigned using the function \emph{nb2listw} from the package \textbf{spdep}.
After the identification of the neighbors of a given grid cell, spatial weights are assigned to the list of neighbors. The entries in the weight matrix specify how much value we want to attribute to each neighbor. In the current work, we assign equal weights to each grid cell's neighbors, so that each neighbor has a corresponding weight of $\frac{1}{8}$. These weights are then used to compute the spatially lagged value of a grid cell as the weighted mean $\frac{1}{8} \sum_{j=1}^{8} y_j$ over its eight neighbors, which is equivalent to averaging the mental health incident counts over the eight neighboring grid cells. Having obtained both neighbors and their respective weights, we test for the presence of spatial auto-correlation using both Global Moran's I and Local Moran's I, as described below.
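The neighbor-and-weights pipeline just described reduces to a few lines of linear algebra. The following NumPy sketch shows what it computes; our actual analysis used the \textbf{spdep} functions named above:
\begin{verbatim}
import numpy as np

def knn_spatial_lag(centroids, values, k=8):
    # For each fishnet centroid, find its k nearest centroids, give
    # each a weight of 1/k, and return the spatially lagged value:
    # the mean incident count over the k neighbors.
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :],
                       axis=-1)
    np.fill_diagonal(d, np.inf)          # a cell is not its own neighbor
    nbrs = np.argsort(d, axis=1)[:, :k]  # indices of k nearest centroids
    return values[nbrs].mean(axis=1)     # equal weights of 1/k
\end{verbatim}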
\noindent \textbf{Global Moran's I} The process for calculating the global test for spatial auto-correlation uses local relationships between the observed spatial entity value and its defining neighbors \citep{bivand2008applied}.
\begin{definition}[Global Moran's I] Let $y_i$ be the $i^{th}$ observation with mean $\bar{y}$, and let $w_{ij}$ be the spatial weight of the link between $i$ and $j$. Then the Global Moran's $I$ statistic is given by the following formula:
\begin{align*}
I = \frac{n}{\sum^{n}_{i = 1} \sum^{n}_{j = 1} w_{ij}} \frac{\sum^{n}_{i = 1} \sum^{n}_{j = 1} w_{ij} (y_i - \bar{y}) (y_j - \bar{y})}{ \sum^{n}_{i = 1}(y_i - \bar{y})^2},
\end{align*}
where $I$ represents the ratio of the spatially weighted cross-product of the centered variable of interest to its variance, adjusted for the spatial weights used.
\label{Global}
\end{definition}
Centering on the mean is equivalent to asserting that the correct model has a constant mean, and that any remaining patterning after centering is caused by the spatial relationships encoded in the spatial weights.
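In code, the statistic of Definition~\ref{Global} takes only a few lines; the following NumPy sketch computes the point estimate (the \textbf{spdep} machinery used below additionally provides inference):
\begin{verbatim}
import numpy as np

def morans_i(y, W):
    # Global Moran's I for values y and weights matrix W; here
    # W[i, j] = 1/8 when j is one of i's eight nearest neighbors.
    z = y - y.mean()
    return (len(y) / W.sum()) * (z @ W @ z) / (z @ z)
\end{verbatim}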
\noindent \textbf{Local Moran's $I$} Localized tests are built by breaking global measures into components which aids in the detection of clusters and hot-spots, where clusters are defined as groups of observations where neighbors have similar features and hot-spots are groups of observations with distinct neighbors \citep{bivand2008applied}.
\begin{definition}[Local Moran's I]
Local Moran's $I_i$ values consist of the $n$ individual components that decompose the global Moran's $I$ (Definition \ref{Global}), under the assumption that the global mean $\bar{y}$ is an accurate summary of the variable of interest $y$. Note the two components in the numerator, $(y_i - \bar{y})$ and $\sum^{n}_{j = 1} w_{ij} (y_j - \bar{y})$, both centered on the global mean.
\begin{align*}
I_i = \frac{(y_i - \bar{y}) \sum^{n}_{j = 1} w_{ij} (y_j - \bar{y})}{ \frac{\sum^{n}_{i = 1} (y_i - \bar{y})^2}{n} }.
\end{align*}
\end{definition}
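Extending the global sketch above, the local values can be computed as follows:
\begin{verbatim}
def local_morans_i(y, W):
    # Returns the n local values I_i; their sum is proportional to
    # the global Moran's I of the previous sketch.
    z = y - y.mean()
    return z * (W @ z) / ((z @ z) / len(y))
\end{verbatim}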
The global Moran's I value for the mental health incidents data was obtained as $0.22923$, computed using the function \emph{moran.test} from the \textbf{spdep} \textsc{R} package.
To test for the significance of the Global Moran's $I$ statistic, a permutation bootstrap test with $999$ simulations was conducted via the \texttt{moran.mc} function from the \texttt{spdep} \textsc{R} package. The permutation test produces a sampling distribution of the test statistic Moran's $I$ under the null hypothesis of no spatial auto-correlation, which was used to derive a (pseudo) permutation p-value, calculated using the formula: $p\text{-value} = \frac{N_{\mathrm{extreme}} + 1 }{N + 1}$, where $N_{\mathrm{extreme}}$ represents the number of simulated Moran's $I$ values more extreme than the observed Moran's $I$ statistic and $N$ denotes the total number of simulations \citep{gimond2019}.
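A minimal sketch of this permutation test, reusing the \texttt{morans\_i} function from the earlier sketch, might read:
\begin{verbatim}
import numpy as np

def moran_permutation_pvalue(y, W, n_sim=999, seed=0):
    # Permute y to break any spatial structure, recompute Moran's I,
    # and count simulated values at least as extreme as the observed
    # one (one-sided test for positive auto-correlation).
    rng = np.random.default_rng(seed)
    i_obs = morans_i(y, W)
    sims = np.array([morans_i(rng.permutation(y), W)
                     for _ in range(n_sim)])
    n_extreme = int((sims >= i_obs).sum())
    return (n_extreme + 1) / (n_sim + 1)
\end{verbatim}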
The observed value of the Global Moran's $I$ statistic produces a pseudo p-value of $1/1000 = 0.001$ when compared to the simulated values obtained from the permutation test, indicating that the probability of observing a test statistic as or more extreme than the current observed Moran's $I$ value is $0.001$ under the null hypothesis $H_0$. With the statistical significance of the Global Moran's $I$ established, a localized Moran's test was conducted to identify the location(s) of the possible mental health incident clustering using the function \textit{localmoran} from the \textbf{spdep} package. Similar to the global Moran's $I$ described above, the local Moran's $I$ evaluates the level of spatial auto-correlation among the $k$-nearest fishnet grid cells ($k = 8$, here) surrounding a given fishnet grid cell. The local Moran's test also computes the (pseudo) p-value indicating the significance of the spatial auto-correlation at the level of each fishnet grid cell. Note that na\"ively using a significance threshold of $\alpha = 0.05$ to determine which grid cells indicate a significant level of clustering will be flawed, as one needs to adjust for multiple comparisons \citep{LISA}. To address the multiplicity issues, a Bonferroni adjustment was applied using the function \textit{p.adjustSP} from the \textbf{spdep} package. In the following figure (Fig.~\ref{fig:local_morans}), Panel 1 shows the count of mental health incident events throughout Little Rock; Panel 2 shows the local Moran's $I$ statistics at each grid cell, and the final panel shows areas that exhibit statistically significant clustering \citep{gimond2019}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.75\linewidth]{Images/local_morans_I_Adjusted.pdf}
\caption{Local Moran's I plot illustrating the spatial clusters of mental health incident calls in Little Rock, AR.}
\label{fig:local_morans}
\end{figure}
The presence of positive and significant spatial auto-correlation in the mental health incidents data clearly substantiates our claim that such events are clustered in space, instead of uniformly distributed over the entire region of interest. Having obtained such results is essentially the first step in the process of identifying a proper model \citep{Pebesma2019Spatial}.
\subsubsection{Performance Comparison}
We compare the predictive performance of the four candidate methods in Table \ref{tab:modcomp}, and report the mean and standard deviation for each error measure. To better assess the accuracy of the models, we use three different error measures: Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) \& Root Mean Square Error (RMSE); see Table \ref{tab:modcomp}. The errors were calculated in a supervised learning set-up, where both Poisson regression and Random Forest models were built using leave-one-group-out cross-validation with the number of folds being equal to five. Below, we define the different error measures used to compare the models and describe the best performing model according to each criterion.
\begin{table}[!ht]
\centering
\footnotesize{
\begin{tabular}{| c | llllll|}
\hline
& \begin{tabular}[c]{@{}l@{}}MAPE\\ Mean\end{tabular} & \begin{tabular}[c]{@{}l@{}}MAPE \\ SD\end{tabular} & \begin{tabular}[c]{@{}l@{}}MAE \\ Mean\end{tabular} & \begin{tabular}[c]{@{}l@{}}MAE\\ SD\end{tabular} & \begin{tabular}[c]{@{}l@{}}RMSE \\ Mean\end{tabular} & \begin{tabular}[c]{@{}l@{}}RMSE\\ SD\end{tabular} \\
\hline
Poisson GLM & 1.3112 & 0.0308 & 0.9098 & 0.2699 & 2.9166 & 1.5893 \\
\hline
Random Forest & 1.306 & 0.0346 & 0.8677 & 0.1708 & 2.1904 & 0.9008 \\
\hline
Manski Model & 1.302 & NA & 0.7708 & NA & 2.5832 & NA \\
\hline
Spatial Durbin & 1.316 & NA & 0.6356 & NA & 2.135 & NA \\
\hline
\end{tabular}}
\caption{Model performance comparison.}
\label{tab:modcomp}
\end{table}
First, the Mean Absolute percentage Error (MAPE) statistic captures the model's accuracy in terms of percentage error. The MAPE is calculated using the following formula:
\[
MAPE = \frac{1}{n} \sum^{n}_{i = 1} \abs{\frac{A_i - F_i}{A_i}} \times 100,
\]
where $A_i$ is the $i^{th}$ actual observation and $F_i$ is the $i^{th}$ forecast value. Since the MAPE expresses the error as a percentage, it can be relatively easier to interpret when compared to other statistical measures. The lower the percentage error, the more accurately the model represents the data. For a given model, it can be concluded that, on average, the forecast is off by the MAPE. We can clearly see that on average all models' forecasts were off by approximately 1.3\%, with a standard deviation of approximately 0.0308 and 0.0346 for the Poisson GLM and Random Forest respectively. In terms of MAPE, all models perform relatively the same, with the Manski model having the smallest MAPE.
The Mean Absolute Error (MAE) statistic captures on average how large the forecast error is expected. The MAE is given by the formula
\[
MAE = \frac{ \sum^{n}_{i = 1} \abs{A_i - F_i} }{n},
\]
where $A_i$ is the $i^{th}$ actual observation and $F_i$ is the $i^{th}$ forecast value. The spatial Durbin error model had, on average, the smallest forecast error of 0.6356, followed by the Manski model with an MAE of 0.7708, with the Poisson GLM having the largest forecast error of 0.9098.
The Root Mean Square Error (RMSE) or otherwise also known as the Root Mean Square Deviation calculates the square root of the average of the square errors. The RMSE measures the spread of the prediction errors. The RMSE is given by the formula
\[
RMSE = \sqrt{ \frac{\sum^n_{i = 1} (F_i - A_i)^2}{n}}.
\]
The spatial Durbin error model had the smallest RMSE value of 2.135, followed by the Random Forest with an RMSE of 2.1904, with the Poisson GLM having the largest RMSE of 2.9166.
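For reference, the three error measures can be computed as in the NumPy sketch below; note that the MAPE is undefined for grid cells with zero observed counts, which must be handled in practice:
\begin{verbatim}
import numpy as np

def mape(actual, forecast):
    # Mean absolute percentage error (undefined where actual == 0)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mae(actual, forecast):
    return np.mean(np.abs(actual - forecast))

def rmse(actual, forecast):
    return np.sqrt(np.mean((forecast - actual) ** 2))
\end{verbatim}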
\subsubsection{Goodness of fit metrics}
\begin{table}[!ht]
\centering
\begin{tabular}{|l|llll|}
\hline
& \begin{tabular}[c]{@{}l@{}}$R^2$\\ Mean\end{tabular} & \begin{tabular}[c]{@{}l@{}}$R^2$\\ SD\end{tabular} & \begin{tabular}[c]{@{}l@{}}LogDev\\ Mean\end{tabular} & \begin{tabular}[c]{@{}l@{}}LogDev\\ SD\end{tabular} \\ \hline
Poisson GLM & 0.3927 & 0.1517 & 0.6141 & 0.0509 \\ \hline
Random Forest & 0.3822 & 0.0582 & 0.5844 & 0.0403 \\ \hline
Manski Model & 0.4366 & NA & 0.6124 & NA \\ \hline
Spatial Durbin & 0.4735 & NA & 0.7102 & NA \\ \hline
\end{tabular}
\caption{Model goodness of fit comparison.}
\label{tab:Gofit}
\end{table}
In terms of Goodness of fit metrics, the R squared ($R^2$) values and logarithmic deviance score were used to evaluate the models. The most common measure is perhaps the $R^2$ that represents the percentage of variation explained by the model,
\[
R^2 = 1- \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},\; \hat{y}_i \doteq \text{ predicted value of } y_i, \; \bar{y} = \text{ grand mean},
\]
thus a larger $R^2$ is indicative of a better model fit. Note that the adjusted $R^2$ value was not computed, as it is rather difficult to compute for random forest models and, thus, difficult to use in goodness-of-fit comparisons. The Logarithmic Deviance score is a measure of the deviance between the predicted and observed counts, via the log likelihood ratio. To measure this, we calculate the likelihood ratio of the observed value and the predicted value based on a Poisson distribution. The goodness of fit reported here is the negative log of the probability density, so a lower value indicates a better predictive ability. As seen in Table \ref{tab:Gofit}, the spatial Durbin error model obtained the largest $R^2$ value, followed by the Manski model. Note that despite obtaining the largest $R^2$ value, i.e., being the best model under the $R^2$ goodness-of-fit metric, the spatial Durbin error model also obtained the largest Logarithmic Deviance score, and is thus the worst model under that metric for the mental health data. In terms of the Logarithmic Deviance score, the random forest model obtained the smallest value, suggesting that it had the smallest deviance between the predicted and observed counts of mental health incidents, i.e., it is the best model in that category.
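A minimal sketch of these two metrics is given below; the deviance score shown is one plausible implementation of the description above, and the exact normalization used in our computations may differ:
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def r_squared(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def log_deviance_score(y, y_hat):
    # Negative log of the Poisson probability of the observed counts
    # given the predicted means; lower values indicate a better fit.
    return -np.mean(poisson.logpmf(y, mu=y_hat))
\end{verbatim}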
\begin{figure}[!ht]
\centering
\includegraphics[scale = 0.6]{Images/Comp_plots.pdf}
\caption{Predicted versus observed mental health incident cases plots by the candidate models.}
\end{figure}
\subsubsection{Feature Importance Comparison}
Finally, we look at the important features or variables driving the prediction for each of the four candidate methods. We call these measures `variable importance' following the nomenclature used by the random forest literature, but for purely statistical models such as Poisson regression or spatial Durbin models, the quantities being compared are a measure of each variable's significance. As discussed before, this is a key step in the prediction process, as the identification of important variables helps us determine which environmental and social features predominantly drive each of these predictive processes, investigate whether they play a risky or protective role, and then allocate resources accordingly.
A note about nomenclature for the features plotted in the following figures. There are three unique prefixes, one linked with each type of feature. Nearest neighbor (`NN') refers to features obtained by calculating the average distance between a fishnet grid cell centroid and its nearest neighbor in the Queen case definition. Euclidean distance (`ed') refers to features obtained by calculating the Euclidean distance between a fishnet grid cell centroid and its first nearest neighbor. The prefix `agg' refers to the count of the given feature in a fishnet grid cell. The term `agg' was coined based on the \emph{aggregate} function used in R to obtain the count of cases associated with each fishnet cell.
\begin{table}[!ht
\centering
\scalebox{0.6}{
\begin{tabular}{|l|l|l|l|}
\hline
\rowcolor[HTML]{C0C0C0}
Poisson\_GLM & Random\_Forest
& Spatial\_Durbin & Manski
\\ \hline
agg\_Rentals\_Apts\_Over100units & NN\_PoliceFacilities
& agg\_Rentals\_Apts\_Over100units & agg\_Rentals\_Apts\_Over100units
\\ \hline
agg\_Rentals\_Apts\_LessThan100units & NN\_Banks
& agg\_FastFoodAndBeverage & agg\_FastFoodAndBeverage
\\ \hline
agg\_MajorDeptRetailDiscount & agg\_BusStops
& agg\_BusStops & agg\_BusStops
\\ \hline
agg\_FastFoodAndBeverage & agg\_GasStationAndConvMart
& agg\_Rentals\_Apts\_LessThan100units & agg\_GasStationAndConvMart
\\ \hline
agg\_MixedDrink\_BarRestClub & agg\_FastFoodAndBeverage
& agg\_GasStationAndConvMart & agg\_MajorDeptRetailDiscount
\\ \hline
agg\_BusStops & NN\_ChildCareServices
& agg\_MajorDeptRetailDiscount & agg\_Rentals\_Apts\_LessThan100units
\\ \hline
NN\_ReligiousOrgs & NN\_BarberAndBeautyShops
& agg\_HotelMotel.x & agg\_HotelMotel.x
\\ \hline
agg\_LiquorStores & NN\_ChildYouthServices
& agg\_MixedDrink\_BarRestClub & agg\_MixedDrink\_BarRestClub
\\ \hline
agg\_GasStationAndConvMart & agg\_LiquorStores
& NN\_Unsafe\_Vacant\_BldgsNEW & NN\_Unsafe\_Vacant\_BldgsNEW
\\ \hline
NN\_Unsafe\_Vacant\_BldgsNEW & NN\_ReligiousOrgs
& agg\_LiquorStores & agg\_LiquorStores
\\ \hline
\end{tabular}
}
\caption{Top ten covariates with decreasing order of significance for each model.}
\label{tab: top10}
\end{table}
Table \ref{tab: top10} summarizes the top ten most influential features from each model. We note here that four similar features were found among the set of top features selected for each of the four models. These common features were: \texttt{agg\_FastFoodAndBeverage}, \texttt{agg\_BusStops}, \texttt{agg\_LiquorStores}, \texttt{agg\_GasStationAndConvMart}. As the four models highlight the importance of the influence these features had on the models, further interdisciplinary study involving experts from criminology and local law enforcement is required to understand whether any causal relationship exists between these environmental factors and mental health incidents in Little Rock, Arkansas.
Figure \ref{fig:var_imp} illustrates the feature importance in descending order for each model. In order to create a visual feature comparison between the random forest model feature importance and the remaining models, the $-\log_{10}P$-values of each predictor were plotted for the other three models.
\begin{figure}[!ht]
\centering
\includegraphics[height=4.5in]{Images/n20.pdf}
\caption{Variable importance/significance for each model.}
\label{fig:var_imp}
\end{figure}
\section{Conclusion}\label{chap:end}
In this paper we used a machine learning framework to understand the effect of socio-demographics as well as environmental factors in predicting the spatial clusters of mental health incidents in Little Rock, Arkansas. The use of Moran's I for exploratory data analysis of existent spatial auto-correlation revealed an uneven distribution of mental health incidents across the area of study. The primary aim of this paper was to expand the methodology under RTM by incorporating statistical models to predict mental health incidents based on socio-economic predictors and environmental factors. We compared four different statistical methods' prediction accuracy and goodness of fit to provide insight into the list of factors affecting mental health incidents in Little Rock, Arkansas. Results indicate that in terms of prediction accuracy, the spatial econometric models (Manski and spatial Durbin error model) performed better than their counterparts by a small margin. In terms of model goodness of fit, the spatial Durbin error model and the random forest model performed best under the $R^2$ and Logarithmic Deviance metrics, respectively. The incorporation of these models under the risk terrain framework would serve law enforcement agencies in properly allocating resources and addressing the unequal distribution of these mental health incidents.
Furthermore, if law enforcement agencies adopt this framework, creating a meta model from the models generated may serve as a better tool when it is unclear which model to select based on prediction accuracy or goodness of fit. In addition to creating a meta model, the implementation of temporal features and regularization parameters could further improve prediction and goodness-of-fit results. The U.S. Federal Government has shown interest in crime prediction, with the National Institute of Justice holding a Real-Time Crime Forecasting Challenge in 2017. Beyond the above, it would also be meaningful to determine how these associations or patterns changed in relation to the ongoing Covid-19 pandemic, when mental and behavioral health services are needed even more and police are often the first responders to these types of calls.
\bibliographystyle{biometrika}
\section{Introduction}
\label{sec:introduction}
Population III (Pop.~III) stars, metal-free stars, or first stars are
epoch-making objects in the history of the universe. They bring an end to the
universe's dark ages, and mark the onset of metal enrichment in the
universe. It is also interesting that their formation mode is
completely different from those of Pop.~I and II stars. Their typical
mass is theoretically predicted to be $10$ -- $1000M_\odot$
\citep{1998ApJ...508..141O,2002Sci...295...93A,2004ARA&A..42...79B,2008Sci...321..669Y,2011Sci...334.1250H,2011MNRAS.413..543S,2012MNRAS.422..290S,2013RPPh...76k2901B,2013ApJ...773..185S,2014ApJ...792...32S,2015MNRAS.448..568H}. Direct
observations of Pop.~III stars are essential to investigate the
Pop.~III star era and Pop.~III stars themselves. Since massive stars
with $>10M_\odot$ have short lifetimes $\sim 10$~Myr, Pop.~III stars
should be explored in the high-redshift universe. Thus, direct
observation is quite difficult, and consequently they have not
been detected so far. \cite{2018Natur.555...67B} have reported an
observation for a relic of Pop.~III stars, although further
confirmation is required, since the signal is much stronger than
predicted by existing cosmological models \citep{2018Natur.555...71B}.
Alternatively, Pop.~III stars can be explored in the Galaxy. If they
are born as low-mass stars, they have longer lifetimes than the Hubble
time. Low-mass Pop.~III stars are thought to be formed in the
circumstellar disk around massive Pop.~III stars
\citep{2008ApJ...677..813M,2011ApJ...727..110C,2011Sci...331.1040C,2011ApJ...737...75G,2012MNRAS.424..399G,2013MNRAS.435.3283M,2014ApJ...792...32S,2016MNRAS.463.2781C}.
We call such low-mass Pop.~III stars ``Pop.~III survivors''. However,
Pop.~III survivors have not been found, although great efforts have
been made \citep[e.g.][]{2006ApJ...639..897A,2015ARA&A..53..631F}.
One possible explanation for the absence of Pop.~III survivors is that Pop.~III
survivors suffer from metal pollution through accretion of
interstellar medium (ISM)
\citep{1981A&A....97..280Y,2015ApJ...808L..47K,2017MNRAS.469.4012S}.
\cite{2015ApJ...808L..47K} have considered Bondi-Hoyle-Lyttleton
accretion of ISM, and have asserted that some metal-poor stars can be
Pop.~III survivors polluted by ISM. However,
\cite{2015MNRAS.453.2771J} have shown radiation pressure prevents
accretion of dust in ISM, and \cite{2017ApJ...844..137T} have shown
stellar wind prevents accretion of gas in ISM. Although stellar wind
in their model is Pop.~I stellar wind, \cite{2018PASJ..tmp...35S} have
made clear that stellar wind of metal-poor stars (Pop.~II and III
stars) prevents the ISM accretion more strongly than that of Pop.~I
stars. Eventually, Pop.~III survivors have iron abundance [Fe/H] only
up to $\sim -14$ \citep{2017ApJ...844..137T}. This metallicity is much
lower than that of currently discovered very metal-deficient stars
\citep[e.g.][]{2014Natur.506..463K}.
Recently, \cite{2017Natur.552..378M} have discovered the first
interstellar object (ISO) or interstellar asteroid, called
`Oumuamua. They have estimated the ISO number density to be $\sim
0.1$~au$^{-3}$. \cite{2018ApJ...855L..10D} have also inferred the ISO
number density $\sim 0.2$~au$^{-3}$ from an estimate of the Pan-STARRS
survey volume. This number density is so high that ISOs can plunge
into and pollute Pop.~III survivors many times during their
lifetimes. In this paper, we calculate an ISO accretion rate
onto Pop.~III survivors, and their metal pollution.
This paper is structured as follows. In
section~\ref{sec:AccretionRate}, we calculate an ISO accretion rate
onto Pop.~III survivors. In section~\ref{sec:Discussion}, we estimate
metallicity of polluted Pop.~III survivors, taking into account
surface convection zones of Pop.~III survivors. In
section~\ref{sec:Summary}, we summarize this paper.
\section{Accretion Rate}
\label{sec:AccretionRate}
We can express an ISO accretion rate onto Pop.~III survivors in number as
\begin{eqnarray}
\dot{N}_{\rm acc} = f n \sigma v, \label{eq:dnacc1}
\end{eqnarray}
where $n$ is the cumulative number density of ISOs with radii larger
than $D$, $\sigma$ is the cross section for collision between ISOs and
Pop.~III survivors, and $v$ is the relative speed between ISOs and
Pop.~III survivors. The value $f$ is the fraction of a Pop.~III
survivor's orbit that lies in the ISO-rich region. Next, we write an ISO accretion
rate in mass as
\begin{eqnarray}
\dot{M}_{\rm acc} = \int_{D_{\rm max}}^{D_{\rm min}} \left\{ f \frac{dn}{dD} \sigma v
\left[ m_0 \left( \frac{D}{D_0} \right)^3 \right] \right\}
dD, \label{eq:dmacc1}
\end{eqnarray}
where $m_0$ is the mass of an ISO with radius $D_0$, $D_{\rm min}$ is
the minimum radius of an ISO that reaches the surface of a Pop.~III
survivor without being completely sublimated, and $D_{\rm max}$ is the
maximum radius of an ISO that collides with a Pop.~III survivor at
least once; the minus sign accounts for the fact that $dn/dD < 0$. We
assume that the ISO cumulative number density can be written as a
single power-law function,
\begin{eqnarray}
n = n_0 \left( \frac{D}{D_0} \right)^{- \alpha}, \label{eq:ndens}
\end{eqnarray}
where $n_0$ is the cumulative number density of ISOs with radii larger
than $D_0$. From the observation of `Oumuamua, we adopt $n_0 \sim
0.2$~au$^{-3}$ and $D_0 \sim 100$~m in this paper
\citep{2018ApJ...855L..10D}. Since the power $\alpha$ has not yet been
tightly constrained, even by the estimate of the Pan-STARRS survey
volume \citep{2018ApJ...855L..10D}, we consider a wide range of
$\alpha$. Rewriting Equation~(\ref{eq:dmacc1}), we finally obtain the
following expression:
\begin{eqnarray}
&\dot{M}_{\rm acc} = \dot{M}_{{\rm acc}, 0} \nonumber \\
&\times \left\{
\begin{array}{lc}
\displaystyle \frac{\alpha}{\alpha-3} \left[ \left(
\frac{D_{\rm min}}{D_0} \right)^{-\alpha+3} - \left( \frac{D_{\rm max}}{D_0}
\right)^{-\alpha+3} \right] & (\alpha > 3), \\
\displaystyle \alpha \, \ln\!\left( \frac{D_{\rm max}}{D_{\rm min}} \right) &
(\alpha = 3), \\
\displaystyle \frac{\alpha}{3-\alpha} \left[ \left(
\frac{D_{\rm max}}{D_0} \right)^{3-\alpha} - \left( \frac{D_{\rm min}}{D_0}
\right)^{3-\alpha} \right] & (\alpha < 3),
\end{array}
\right. \label{eq:dmacc2}
\end{eqnarray}
where
\begin{eqnarray}
\dot{M}_{{\rm acc}, 0} &= m_0 \dot{N}_{{\rm acc}, 0}, \\
\dot{N}_{{\rm acc}, 0} &= f n_0 \sigma v. \label{eq:dnacc0}
\end{eqnarray}
The right-hand sides of Equation~(\ref{eq:dmacc2}) are in fact
identical for $\alpha>3$ and $\alpha<3$; we separate the two cases
only for readability. The total mass density of ISOs can be written as
\begin{eqnarray}
M_{\rm iso} &= \int \left( - \frac{dn}{dD} \right) \left[ m_0 \left( \frac{D}{D_0}
\right)^3 \right] dD \\
&= \frac{\alpha m_0 n_0}{D_0} \int \left( \frac{D}{D_0}
\right)^{-\alpha+2} dD.
Note that the total mass density of ISOs diverges for $\alpha \le 3$
if the power $\alpha$ remains constant as $D \rightarrow \infty$. When
we adopt $\alpha \le 3$, we therefore suppose that there is a knee or
a cutoff at some large size $D$.
Now, we calculate the accretion rate in number, $\dot{N}_{{\rm acc}, 0}$. The
distribution of ISOs is concentrated in the Galactic disk region,
which consists of the more metal-rich Pop.~I stars, because ISOs are
themselves made of heavy elements. Therefore, we can safely assume
that ISOs co-rotate with the Galactic disk at the circular velocity of
the Galaxy, $\sim 220 {\rm km~s^{-1}}$. On the other hand, Pop.~III survivors
must have been formed before the formation of the Galactic disk. They
would wander in the Galactic halo \citep[e.g.][]{2016ApJ...826....9I},
with an isotropic velocity distribution whose typical speed equals the
circular velocity, $\sim 220$~km~s$^{-1}$. Hence, a typical relative
speed between ISOs and Pop.~III survivors would be $\sqrt{2}$ times
the circular velocity, i.e. $v \sim 310 {\rm km~s^{-1}}$. Pop.~III survivors
accrete ISOs only while they traverse the Galactic disk, which happens
twice per orbit. Let us consider, as a typical example, a Pop.~III
survivor that orbits at a Galactocentric distance of $8$~kpc with an
inclination angle of $30$ degrees with respect to the Galactic
plane. This inclination angle is the average value for an isotropic
velocity distribution. If we take $400$~pc for the thickness of the
Galactic disk, we obtain $f \sim 0.032$ in
equation~(\ref{eq:dnacc1}). We may underestimate $f$: Pop.~III
survivors spend a longer time orbiting in the ISO-rich region as the
Galactocentric distance decreases, since the disk thickness grows
relative to the orbit size, and the Galactic bulge is present at the
Galactic center. Note also that Pop.~III survivors could be
preferentially concentrated toward the Galactic center, for example in
the Galactic bulge
\citep{2006ApJ...653..285S,2010MNRAS.401L...5S,2010ApJ...708.1398T}.
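As a simple cross-check (an illustration we add here, not part of the
original derivation), the value of $f$ follows from elementary
geometry under the stated assumptions, i.e. a circular orbit of radius
$8$~kpc inclined by $30$ degrees crossing a plane-parallel disk of
full thickness $400$~pc:
\begin{verbatim}
import math

# assumed geometry: circular orbit of radius r at inclination inc,
# plane-parallel Galactic disk of full thickness h (half-thickness h/2)
r   = 8000.0                 # Galactocentric orbital radius [pc]
h   = 400.0                  # disk thickness [pc]
inc = math.radians(30.0)     # inclination w.r.t. the Galactic plane

# height above the plane along the orbit: z(phi) = r sin(inc) sin(phi);
# the star is inside the disk while |sin(phi)| < (h/2)/(r sin(inc))
f = (2.0 / math.pi) * math.asin(0.5 * h / (r * math.sin(inc)))
print(f"f = {f:.3f}")        # -> f = 0.032
\end{verbatim}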
Considering gravitational focusing, we obtain the cross section
$\sigma$ as
\begin{eqnarray}
\sigma &= \pi r_{*}^2 \left( 1 + \frac{2 G M_{*}}{r_{*} v^2}
\right),
\end{eqnarray}
where $r_{*}$ and $M_{*}$ are the radius and mass of a Pop.~III
survivor, respectively, and $G$ is the gravitational constant. We
adopt the solar radius and mass for $r_{*}$ and $M_{*}$, because
Pop.~III survivors have masses $\lesssim 0.8M_\odot$ and $M_{*}/r_{*}$
ratios similar to that of the Sun \citep{2002ApJ...580.1100R}. Then,
we obtain $\sigma \sim 7.6 \cdot 10^{22}$~cm$^{2}$. Using the above
$f$, $\sigma$, and $v$, we get $\dot{N}_{{\rm acc}, 0}$ as
\begin{eqnarray}
\dot{N}_{{\rm acc}, 0} \sim 1.4 \cdot 10^{-4} \left( \frac{n_0}{0.2~\mbox{au}^{-3}}
\right)~\mbox{[yr$^{-1}$]}.
\end{eqnarray}
As is clear from the above equation, a Pop.~III survivor has the
chance to accrete a large number of ISOs during its life, about $1.4
\cdot 10^5$ per Gyr.
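These numbers are straightforward to verify. The following minimal
Python sketch (ours, for illustration only) evaluates $\sigma$ and
$\dot{N}_{{\rm acc}, 0}$ from the adopted values of $f$, $n_0$, and
$v$:
\begin{verbatim}
import math

# constants in cgs units
G, M_sun, R_sun = 6.674e-8, 1.989e33, 6.957e10
au, yr = 1.496e13, 3.156e7

v  = 3.1e7          # relative speed ~ sqrt(2) x 220 km/s [cm/s]
f  = 0.032          # fraction of the orbit spent in the ISO-rich region
n0 = 0.2 / au**3    # ISO cumulative number density [cm^-3]

# geometric cross section enhanced by gravitational focusing
sigma = math.pi * R_sun**2 * (1.0 + 2.0 * G * M_sun / (R_sun * v**2))
Ndot0 = f * n0 * sigma * v * yr          # accretion rate in number [yr^-1]

print(f"sigma = {sigma:.1e} cm^2")       # -> ~7.6e+22 cm^2
print(f"Ndot0 = {Ndot0:.1e} per yr")     # -> ~1.4e-04 per yr
print(f"per Gyr: {Ndot0 * 1e9:.1e}")     # -> ~1.4e+05 accretion events
\end{verbatim}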
Before proceeding to this calculation, we show that the accretion (or
collision) rates of larger objects, such as stars and planets, are
extremely small. In the solar neighborhood, the stellar number density
is $\sim 0.1$~pc$^{-3}$, which yields $\dot{N}_{{\rm acc}, 0} \sim 8.8 \cdot
10^{-21}$~yr$^{-1}$ for stars. The number density of free-floating
planets \citep{2011Natur.473..349S} could be $2000$ times higher than
the stellar number density \citep{2018ApJ...853L..27D}. Nevertheless,
the resulting collision rate is only $\sim 1.8 \cdot
10^{-17}$~yr$^{-1}$. It is clear that Pop.~III survivors have
virtually no chance of colliding with other stars or free-floating
planets.
We can obtain the accretion rate in mass, $\dot{M}_{{\rm acc}, 0}$, as
\begin{eqnarray}
\dot{M}_{{\rm acc}, 0} \sim 9.9 \cdot 10^{-25} \left( \frac{m_0}{1.4 \cdot
10^{13}~\mbox{g}} \right) \left( \frac{n_0}{0.2~\mbox{au}^{-3}}
\right)~\mbox{[$M_\odot$~yr$^{-1}$]},
\end{eqnarray}
where we derive $m_0$ for $D_0=100$~m assuming a spherical ISO with a
mass density of $3$~g~cm$^{-3}$, a typical value for asteroids
\citep{2012P&SS...73...98C}.
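Continuing the illustration above, $m_0$ and $\dot{M}_{{\rm acc}, 0}$
follow from the same inputs (a self-contained sketch; the small
difference between $(4\pi/3)\rho D_0^3 \simeq 1.3 \cdot 10^{13}$~g and
the quoted $1.4 \cdot 10^{13}$~g reflects rounding):
\begin{verbatim}
import math

M_sun = 1.989e33                     # [g]
Ndot0 = 1.4e-4                       # accretion rate in number [yr^-1]
rho, D0 = 3.0, 1.0e4                 # density [g/cm^3], D0 = 100 m [cm]

m0 = (4.0 / 3.0) * math.pi * rho * D0**3     # ~1.3e13 g
Mdot0 = m0 * Ndot0 / M_sun                   # ~1e-24 M_sun/yr
print(f"m0 = {m0:.1e} g, Mdot0 = {Mdot0:.1e} M_sun/yr")
\end{verbatim}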
When ISOs approach Pop.~III survivors, they are strongly irradiated
and, if they are small, completely sublimated. If they are sublimated,
their debris would be blown away by the stellar wind and would not be
accreted onto the Pop.~III survivors. Here, we estimate $D_{\rm min}$,
the minimum size of ISOs that reach Pop.~III survivors without being
completely sublimated. An ISO with radius $D$ takes a certain amount
of time ($\Delta t_{\rm cond}$) to sublimate completely after its
surface attains the sublimation temperature. Supposing that thermal
energy is conducted through a diffusion process, we can estimate
$\Delta t_{\rm cond}$ as
\begin{eqnarray}
\Delta t_{\rm cond} \sim \frac{D^2}{\kappa}, \label{eq:diffusiontime}
\end{eqnarray}
where $\kappa$ is the thermal conductivity of the ISO. The distance
between an ISO and a Pop.~III survivor at which the ISO surface
attains the sublimation temperature, assuming radiative equilibrium,
is given by
\begin{eqnarray}
&R = \left( \frac{L_{*}}{4 \pi \sigma_{\rm s}T^4} \right)^{1/2},
\\
&\sim 6.9 \cdot 10^{-2} \left( \frac{L_{*}}{3.8 \cdot 10^{33}
\mbox{erg~s$^{-1}$}} \right)^{1/2} \left( \frac{T}{1500 \mbox{K}}
\right)^{-2} \; \mbox{[au]} \label{eq:distance}
\end{eqnarray}
where $L_{*}$ is the bolometric luminosity of a Pop.~III survivor, $T$
is the sublimation temperature of an ISO, and $\sigma_{\rm s}$ is the
Stefan-Boltzmann constant. For the numerical estimate, we adopt the
solar luminosity for $L_{*}$, and a typical sublimation temperature of
dust grains \citep[e.g.][]{1994ApJ...421..640N} for $T$. We set the
ISO albedo to zero, motivated by the small albedo of `Oumuamua assumed
by \cite{2017Natur.552..378M}, $\sim 0.04$. This assumption increases
$D_{\rm min}$ and thus conservatively reduces the metal pollution of
Pop.~III survivors, although some asteroids have albedos of $\sim 0.2$
\citep{2016AJ....152...79W}. The velocity of an ISO at a distance $R$
is calculated as
\begin{eqnarray}
v_{\rm R} &= \left( v^2 + \frac{2GM_{*}}{R} \right)^{1/2} \\
&\sim 3.5 \cdot 10^7 \; \mbox{[cm~s$^{-1}$]},
\end{eqnarray}
where we adopt $v=310$~km~s$^{-1}$, $M_{*}=1M_\odot$, and $R=0.069$~au
for the numerical estimate. An ISO takes a time $R/v_{\rm R}$ to reach
the Pop.~III survivor after its surface starts to sublimate. We equate
$R/v_{\rm R}$ with $\Delta t_{\rm cond}$ for $D=D_{\rm min}$ in
Equation~(\ref{eq:diffusiontime}). Using Equation~(\ref{eq:distance}),
we can estimate $D_{\rm min}$ as
\begin{eqnarray}
&D_{\rm min} \sim 3.0 \left( \frac{\kappa}{3 \cdot 10^6
\mbox{erg~cm$^{-1}$~s$^{-1}$~K$^{-1}$}} \right)^{1/2} \\
&\times \left( \frac{L_{*}}{ 3.8 \cdot 10^{33} \mbox{erg~s$^{-1}$}}
\right)^{1/4} \left( \frac{T}{1500 \mbox{K}} \right)^{-1} \;
\mbox{[km]}.
\end{eqnarray}
We adopt the thermal conductivity of iron at $1000$~K for $\kappa$,
since `Oumuamua is a rocky asteroid, not an icy comet. Although a hot
corona is expected to exist in the stellar atmosphere
\citep{2018PASJ..tmp...35S}, we expect that its effect is not
essential for ISOs with $D>1$~km, because the coronal density is very
low.
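The numerical chain leading to $D_{\rm min} \sim 3$~km can be
reproduced as follows (an illustrative sketch; $\kappa$ is used with
the value and in the way adopted above):
\begin{verbatim}
import math

G, M_sun, au = 6.674e-8, 1.989e33, 1.496e13  # cgs units
L_star = 3.8e33     # bolometric luminosity [erg/s]
sig_SB = 5.67e-5    # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
T_sub  = 1500.0     # sublimation temperature [K]
kappa  = 3.0e6      # thermal conductivity of iron at 1000 K (as adopted)
v      = 3.1e7      # relative speed at infinity [cm/s]

R     = math.sqrt(L_star / (4.0 * math.pi * sig_SB * T_sub**4))
v_R   = math.sqrt(v**2 + 2.0 * G * M_sun / R)
t_in  = R / v_R                       # time from R to the stellar surface
D_min = math.sqrt(kappa * t_in)       # from D_min^2 / kappa = R / v_R

print(f"R = {R/au:.3f} au")           # -> 0.069 au
print(f"v_R = {v_R:.1e} cm/s")        # -> ~3.5e7 cm/s
print(f"D_min = {D_min/1e5:.1f} km")  # -> ~3.0 km
\end{verbatim}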
An asteroid with a radius of $3.0$~km has a mass of $\sim 3.4 \cdot
10^{17}$~g for an assumed mass density of $3$~g~cm$^{-3}$. On the
other hand, comets with masses of $\sim 10^{18}$~g can reach the solar
photosphere \citep{2015ApJ...807..165B}. Our $D_{\rm min}$ could thus
be consistent with the minimum size of comets plunging into the Sun,
since comets are volatile whereas asteroids are not.
Next, we derive $D_{\rm max}$, the maximum radius of ISOs that collide
with Pop.~III survivors at least once. The number density of ISOs
increases with time through the metal enrichment of the Galaxy. As a
result, the ISO cumulative number density is expected to have been
comparable to the present value only during the last few Gyr; we
define $\Delta t_{\rm iso}$ as this duration. Then we can derive
$D_{\rm max}$ from $\dot{N}_{\rm acc} \Delta t_{\rm iso} \sim 1$. Using
Equations~(\ref{eq:dnacc1}), (\ref{eq:ndens}), and (\ref{eq:dnacc0}),
we can write $D_{\rm max}$ as
\begin{eqnarray}
D_{\rm max} \sim D_0 \left(\dot{N}_{{\rm acc}, 0} \Delta t_{\rm iso} \right)^{1/\alpha}.
\end{eqnarray}
The actual value of $\Delta t_{\rm iso}$ is unknown, so we consider
$\Delta t_{\rm iso} \sim 5$~Gyr and $1$~Gyr. The former ($\Delta
t_{\rm iso} \sim 5$~Gyr) is comparable to the solar age, or to the age
of the Galactic disk at the solar neighborhood
\citep[e.g.][]{2017MNRAS.472.3637G}. ISOs would be formed
simultaneously with the Galactic disk, if they are ejected from the
inner regions of protoplanetary disks
\citep{2017RNAAS...1...13G,2017arXiv171103558P}. ISOs would be formed
within $1$~Gyr after the Galactic disk formation, if their progenitors
are Oort-cloud-like objects around intermediate-mass ($2-8M_\odot$)
stars, released when these stars enter the asymptotic giant branch
phase \citep{2011MNRAS.417.2104V}. Regardless of the formation
scenario, ISOs could thus have been present in the Galactic disk for
$\Delta t_{\rm iso} \sim 5$~Gyr. We adopt the latter value ($\Delta
t_{\rm iso} \sim 1$~Gyr) in order to take into account the timescale
on which ISOs accumulate in the Galactic disk, which gives more
conservative constraints. Figure~\ref{fig:dmax} shows $D_{\rm max}$,
together with $D_{\rm min}$ as a reference.
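The qualitative behaviour of Figure~\ref{fig:dmax} can be reproduced
with a few lines (our sketch, using the reference values derived
above):
\begin{verbatim}
def D_max_km(alpha, dt_yr, Ndot0=1.4e-4, D0_m=100.0):
    """Maximum accreted ISO radius, D0*(Ndot0*dt)^(1/alpha), in km."""
    return D0_m * (Ndot0 * dt_yr)**(1.0 / alpha) / 1.0e3

for alpha in (2.5, 3.0, 3.5, 4.0):
    print(alpha, round(D_max_km(alpha, 5e9), 1),
                 round(D_max_km(alpha, 1e9), 1))
# alpha   5 Gyr   1 Gyr
# 2.5     21.8    11.4
# 3.0      8.9     5.2
# 3.5      4.7     3.0   <- reaches D_min ~ 3 km for dt = 1 Gyr
# 4.0      2.9     1.9   <- reaches D_min ~ 3 km for dt = 5 Gyr
\end{verbatim}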
\begin{figure}[ht!]
\includegraphics[width=8cm]{dmax.eps}
\caption{Maximum and minimum size of ISOs which Pop.~III survivors
can accrete ($D_{\rm max}$ and $D_{\rm min}$, respectively) as a function of
the power $\alpha$. The solid and dashed curves indicate $D_{\rm max}$
for $\Delta t_{\rm iso}=5$~Gyr and $1$~Gyr, respectively. The
dotted curve indicates $D_{\rm min}$. \label{fig:dmax}}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=8cm]{macc-alpha.eps}
\caption{Total accretion mass as a function of the power $\alpha$.
Black and gray curves show the cases of $n_0=0.2$ and
$0.02$~au$^{-3}$. Solid and dashed curves indicate $\Delta t_{\rm
iso}=5$~Gyr and $1$~Gyr, respectively. \label{fig:macc}}
\end{figure}
We calculate the total mass of ISOs accreted onto a Pop.~III survivor,
$M_{\rm acc} \sim \dot{M}_{\rm acc} \Delta t_{\rm iso}$, using
Equation~(\ref{eq:dmacc2}), and show $M_{\rm acc}$ as a function of
$\alpha$ in Figure~\ref{fig:macc}. For $n_0=0.2$~au$^{-3}$, $M_{\rm acc}$
decreases steeply at $\alpha \sim 4$ for $\Delta t_{\rm iso}=5$~Gyr
and at $\alpha \sim 3.5$ for $\Delta t_{\rm iso}=1$~Gyr, because there
$D_{\rm max} < D_{\rm min}$ (see Figure~\ref{fig:dmax}). We can see that
$M_{\rm acc} \gtrsim 10^{-16}M_\odot$, unless $D_{\rm max} < D_{\rm min}$ or
$\alpha$ is large. By analogy with the cumulative number densities of
main-belt asteroids, Edgeworth-Kuiper belt objects, and long-period
comets in the sub-km to km range
\citep[][respectively]{2009Icar..202..104G,2004AJ....128.1916K,2012MNRAS.423.1674F},
the power $\alpha$ could be close to, or shallower than, $3$.
Our $M_{\rm acc}$ is larger than the corresponding ISM value by several
orders of magnitude. ISOs would contain about $10$~\% iron by mass,
similar to the solar composition \citep{2009ARA&A..47..481A}. Thus,
Pop.~III survivors accrete an iron mass of $\gtrsim 10^{-17}M_\odot$
through collisions with ISOs. On the other hand,
\cite{2017ApJ...844..137T} have shown that the total iron mass
accreted from the gas component of the ISM is $\lesssim
10^{-19}M_\odot$.
The estimated value of $n_0$ may contain large uncertainties, since
`Oumuamua is the only ISO discovered so far. We therefore
pessimistically decrease $n_0$ from $0.2$~au$^{-3}$ to
$0.02$~au$^{-3}$ in order to examine the effect of $n_0$ on the
metallicity of polluted Pop.~III survivors. Figure~\ref{fig:macc} also
shows $M_{\rm acc}$ for $n_0=0.02$~au$^{-3}$; $M_{\rm acc}$ decreases by more
than an order of magnitude, because both $D_{\rm max}$ and
$\dot{N}_{{\rm acc}, 0}$ become smaller with decreasing $n_0$. Moreover,
$M_{\rm acc}$ drops steeply at a smaller $\alpha$ than in the case of $n_0
= 0.2$~au$^{-3}$, due to the smaller $D_{\rm max}$. Nevertheless, $M_{\rm acc}
\gtrsim 10^{-16}M_\odot$ if $\alpha \lesssim 3$. In conclusion, ISOs
can be the dominant polluters of Pop.~III survivors.
\section{Discussion}
\label{sec:Discussion}
We estimate the surface pollution of Pop.~III survivors, considering
the thickness of the convection zones below their surfaces. Accreted
metals are mixed only within the surface convection zone and do not
leak downward into the stable radiative zone. According to
\cite{2002ApJ...580.1100R}, metal-poor stars with $\lesssim
0.8M_\odot$ have lifetimes $> 12$~Gyr. We therefore consider Pop.~III
survivors that were born within $1$~Gyr after the Big Bang and have
masses of $\lesssim 0.8M_\odot$. For $0.75M_\odot$ and $0.7M_\odot$
stars, the mass fractions of the convection zones have been
$10^{-2.5}$ and $10^{-2}$, respectively, over the last $5$~Gyr. In a
$0.8M_\odot$ star, on the other hand, the mass fraction of the
convection zone rapidly decreases with time, from $10^{-3.5}$ at
$\approx 5$~Gyr ago to $10^{-6}$ at $\approx 1$~Gyr ago.
We calculate the metallicity of a Pop.~III survivor as follows:
\begin{eqnarray}
\mbox{[Fe/H]} \sim \log_{10} \left( \frac{1}{f_{\rm conv}}
\frac{\dot{M}_{\rm acc} \Delta t_{\rm pol}}{M_{*} Z_\odot} \right).
\end{eqnarray}
We set the mass fraction of metals in the Sun, $Z_\odot$, to $1.4$~\%
\citep{2009ARA&A..47..481A}, and the mass fraction of the surface
convection zone, $f_{\rm conv}$, following \cite{2002ApJ...580.1100R},
as follows. For $M_{*}=0.7$ and $0.75M_\odot$, we adopt $f_{\rm
conv}=10^{-2}$ and $10^{-2.5}$, respectively, and $\Delta t_{\rm pol}
= \Delta t_{\rm iso}$. For $M_{*}=0.8M_\odot$ with $\Delta t_{\rm iso}
= 1$~Gyr, we adopt $f_{\rm conv}=10^{-6}$ and $\Delta t_{\rm pol} =
\Delta t_{\rm iso}$. For $M_{*}=0.8M_\odot$ with $\Delta t_{\rm iso} =
5$~Gyr, we calculate [Fe/H] taking into account the time dependence of
the mass fraction of the convection zone: if the Pop.~III survivor is
dominantly polluted during the last $1$~Gyr, we adopt the same [Fe/H]
as in the case of $M_{*}=0.8M_\odot$ with $\Delta t_{\rm iso} =
1$~Gyr; if not, we calculate [Fe/H] adopting $f_{\rm conv}=10^{-3.5}$
and $\Delta t_{\rm pol} = \Delta t_{\rm iso}$.
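For concreteness, the following sketch (ours) combines
Equation~(\ref{eq:dmacc2}) with the metallicity formula above, using
the reference values $\dot{M}_{{\rm acc},0}$, $\dot{N}_{{\rm acc},0}$,
and $D_{\rm min}$ derived earlier; it reproduces the representative
numbers discussed below:
\begin{verbatim}
import math

Mdot0, Ndot0 = 9.9e-25, 1.4e-4   # [M_sun/yr], [yr^-1] for n0 = 0.2 au^-3
D0, D_min = 100.0, 3.0e3         # [m]

def M_acc(alpha, dt_yr):
    """Total accreted ISO mass [M_sun]; alpha<3 and alpha>3 coincide."""
    D_max = D0 * (Ndot0 * dt_yr)**(1.0 / alpha)
    if D_max <= D_min:
        return 0.0
    if abs(alpha - 3.0) < 1e-9:
        fac = alpha * math.log(D_max / D_min)
    else:
        fac = alpha / (3.0 - alpha) * ((D_max / D0)**(3.0 - alpha)
                                       - (D_min / D0)**(3.0 - alpha))
    return Mdot0 * fac * dt_yr

def FeH(alpha, M_star, f_conv, dt_yr, Z_sun=0.014):
    return math.log10(M_acc(alpha, dt_yr) / (f_conv * M_star * Z_sun))

print(FeH(2.5, 0.75, 10**-2.5, 5e9))  # -> ~ -8   (0.75 M_sun)
print(FeH(2.5, 0.80, 1e-6, 1e9))      # -> ~ -5.6 (0.8 M_sun, last 1 Gyr)
\end{verbatim}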
\begin{figure}[ht!]
\includegraphics[width=8cm]{metal-alpha.eps}
\caption{Metallicity of Pop.~III survivors as a function of the
power $\alpha$. Black, red, and blue curves indicate Pop.~III
survivors with $0.8$, $0.75$, and $0.7M_\odot$,
respectively. Solid and dashed curves show metallicity in the
cases of $\Delta t_{\rm iso} = 5$~Gyr and $1$~Gyr,
respectively. For the $0.8M_\odot$ case, the solid and dashed
curves overlap, when $\alpha<3.5$. \label{fig:pol}}
\end{figure}
We summarize the surface pollution of Pop.~III survivors in
Figure~\ref{fig:pol}, where we set $n_0 = 0.2$~au$^{-3}$. Since we
suppose that the ISO composition is the same as the metal composition
of the Sun, [Fe/H] equals the total metallicity [M/H]. For Pop.~III
survivors with $0.8M_\odot$, the metallicity for $\Delta t_{\rm
iso}=5$~Gyr is the same as for $\Delta t_{\rm iso}=1$~Gyr when $\alpha
\lesssim 3.5$, since the metal pollution during the last $1$~Gyr
dominates. Pop.~III survivors with $0.7$ and $0.75M_\odot$ reach a
metallicity of [Fe/H] $\sim -8$ at most, even if $\alpha \sim 2.5$. On
the other hand, Pop.~III survivors with $0.8M_\odot$ can reach a
metallicity of [Fe/H] $\gtrsim -6$ if $\alpha \lesssim 2.5$. The
metallicity steeply decreases at $\alpha \sim 4$ for $\Delta t_{\rm
iso} = 5$~Gyr and at $\alpha \sim 3.5$ for $\Delta t_{\rm iso} =
1$~Gyr, since there $D_{\rm max} < D_{\rm min}$.
We use the SAGA database \citep[e.g.][]{2008PASJ...60.1159S} to search
for metal-poor stars with [Fe/H] $<-5$. Additionally, we use their
effective temperatures to infer their masses: according to
\cite{2002ApJ...580.1100R}, the mass of a Pop.~III survivor is $\sim
0.8M_\odot$ if its effective temperature is $> 6000$~K, and $\lesssim
0.75M_\odot$ otherwise. We find three stars with [Fe/H] $< -5$:
SMSS~J031300.36-670839.3 with [Fe/H] $< -7.3$ and $\sim 5100$~K
\citep{2014Natur.506..463K}, SDSS~J1035+0641 with [Fe/H] $< -5.07$ and
$\sim 6300$~K \citep{2015A&A...579A..28B}, and
SDSS~J131326.89-001941.4 with [Fe/H] $\sim -5.00$ and $\sim 5200$~K
\citep{2015ApJ...810L..27F}. SMSS~J031300.36-670839.3 and
SDSS~J131326.89-001941.4 could be Pop.~III survivors with $\sim
0.75M_\odot$, if $\alpha < 2$ and the power law extends up to $D
\gtrsim 10^2$~km. SDSS~J1035+0641 could be a Pop.~III survivor with
$\sim 0.8M_\odot$, if $\alpha \gtrsim 2.5$ up to $D \sim
10$~km. Therefore, among the three metal-poor stars, SDSS~J1035+0641
requires the least restrictive ISO conditions to be a Pop.~III
survivor.
\section{Summary}
\label{sec:Summary}
We have calculated the total mass of ISOs accreted onto Pop.~III
survivors. The accreted mass is $\gtrsim 10^{-16}M_\odot$ if the power
$\alpha$ of the ISO cumulative number density is $\lesssim 4$,
corresponding to an accreted iron mass of $\gtrsim
10^{-17}M_\odot$. This is larger than the mass accreted from the ISM
by several orders of magnitude. Therefore, ISOs can be the dominant
polluters of Pop.~III survivors.
We have estimated the surface metallicity of Pop.~III survivors
polluted by ISOs, taking into account their convection zones. If
Pop.~III survivors have $0.7M_\odot$ or $0.75M_\odot$, their
metallicity reaches at most [Fe/H] $\sim -8$. On the other hand, if
Pop.~III survivors have $0.8M_\odot$, their metallicity can be
enhanced to [Fe/H] $\gtrsim -6$, because the mass fraction of their
convection zone decreases to $10^{-6}$ at ages $> 10$~Gyr.
The star SDSS~J1035+0641 has a metallicity of [Fe/H] $\sim -5$ and an
effective temperature of $6300$~K. It can therefore have a thin
convection zone, and could be a Pop.~III survivor if the ISO
cumulative number density follows a single power law with $\alpha
\gtrsim 2.5$ up to $D \sim 10$~km. In order to decide whether
SDSS~J1035+0641 and other metal-poor stars are Pop.~III survivors, we
need the ISO cumulative number density up to $D \sim 10$~km.
We note that the ISO accretion mass depends strongly on the power of
the ISO cumulative number density, $\alpha$. For Pop.~III survivors
located at a Galactocentric distance of $8$~kpc, the dependence of the
metal pollution on $\alpha$ is as follows. If $\alpha>4$, Pop.~III
survivors are never polluted by ISOs. If $3 < \alpha < 4$, Pop.~III
survivors can be polluted up to [Fe/H] $\sim -7$. If $\alpha<3$ up to
$D \sim 10$~km, Pop.~III survivors could hide among the metal-poor
stars discovered so far. We expect that the ISO cumulative number
density will be determined in the near future.
Since Pop.~III survivors could be preferentially concentrated toward
the Galactic center
\citep{2006ApJ...653..285S,2010MNRAS.401L...5S,2010ApJ...708.1398T},
we may underestimate their metal pollution. This is because Pop.~III
survivors spend a longer time orbiting in the ISO-rich region as the
Galactocentric distance decreases: the disk thickness grows relative
to the orbit size, and the Galactic bulge is present at the Galactic
center. In other words, $f$ becomes larger as the Galactocentric
distance becomes smaller.
Although we have discussed only [Fe/H] in this paper, one should
derive the full chemical abundance pattern of Pop.~III survivors in
order to confirm observationally that they are predominantly polluted
by ISOs. The abundance pattern would be determined by a combination of
ISO composition and volatility. In future work, we will obtain the
chemical abundances of Pop.~III survivors polluted by ISOs.
\ack
A. Tanikawa thanks I. Hachisu and K. Kakiuchi for fruitful
discussions. This research has been supported in part by MEXT program
for the Development and Improvement for the Next Generation Ultra
High-Speed Computer System under its Subsidies for Operating the
Specific Advanced Large Research Facilities, and by Grants-in-Aid for
Scientific Research (16K17656, 17H01105, 17H06360,
18H01250) from the Japan Society for the Promotion of
Science.
\section{Introduction}
Flavour changing neutral currents (FCNC) play an essential role in the search for New Physics (NP) effects. The leading-order Standard Model (SM) process occurs only at the loop level, and consequently any NP effects beyond the SM may enter at the same level. However, apart from some $2-3 \sigma$ deviations, no signal of NP has been detected in FCNCs yet.
Given this lack of really significant deviations from the SM predictions, any NP is either out of reach of the current colliders or has a peculiar flavour structure.
This is the famous flavour problem, i.e. the question why FCNCs are suppressed (for a review see Ref.~\cite{Isidori:2010kg}), which must be solved in any viable NP model. In both cases, a thorough investigation of the flavour structure is mandatory in order to explore the underlying NP model.
The inclusive decay mode $\bar B \to X_s \ell^+\ell^-$ is one of the most important, theoretically clean modes in the indirect search for new physics via flavour observables (for reviews see Refs.~\cite{Hurth:2003vb,Hurth:2010tk,Hurth:2012vp}).
Compared with the $\bar B \rightarrow X_s \gamma$ decay,
the inclusive $\bar B \rightarrow X_s \ell^+ \ell^-$ decay presents a
complementary and more complex test of the SM, given that different
perturbative electroweak contributions add to the decay rate. As a three-body decay process it also offers more observables. Due to the presence of the lepton-antilepton pair, more structures contribute to the decay rate, and some subtleties in the formal theoretical description arise which one needs to scrutinize.
It is generally assumed that this inclusive mode, like the inclusive $\bar B \to X_s \gamma$ decay, is dominated by perturbative contributions if one eliminates the $c \bar c$ resonances with the help of kinematic cuts.
In the so-called perturbative $q^2$ windows below and above the
resonances, namely in the low-dilepton mass region $1\;{\rm GeV}^2 < q^2
= m_{\ell\ell}^2 < 6\;{\rm GeV}^2$ as well as in the high-dilepton mass
region where $q^2 > 14.4\;{\rm GeV}^2$,
these perturbative contributions are well explored and have already reached a highly sophisticated level. The most recent analysis of all angular observables in the $\bar B \rightarrow X_s \ell^+\ell^-$ decay was given
in Ref.~\cite{Huber:2015sra}; it includes all available perturbative NNLO QCD, NLO QED corrections and also the {\it known} subleading power corrections.
The inclusive mode $\bar B\rightarrow X_s \ell^+ \ell^-$ allows for an important crosscheck of the recent LHCb data on the corresponding exclusive mode. The so-called anomalies found in some angular observables of the exclusive decay $B \to K^* \mu^+\mu^-$~\cite{Aaij:2013qta,Aaij:2015oid}
cannot be interpreted unambiguously because of the unknown subleading power corrections in the theoretical framework of QCD-improved factorization. One cannot decide at the moment whether these deviations from the SM are
first signs of new physics beyond the SM, a consequence of the unknown hadronic power corrections, or just statistical fluctuations.
As was shown in Refs.~\cite{Hurth:2013ssa,Hurth:2014zja}, future measurements of the inclusive mode will be able to resolve this puzzle.
Belle and BaBar have measured the branching ratio using the sum-of-exclusive technique only. Unfortunately, the latest published measurement of Belle~\cite{Iwasaki:2005sy} is based on less than $30\%$ of the data set
available at the end of the Belle experiment, i.e. on a sample of only $152 \times 10^6$ $B \bar B$ events.
BaBar, on the other hand, has published an analysis based on its full data set, using a sample of $471 \times 10^6$ $B \bar B$ events~\cite{Lees:2013nxa}, which updated the former analysis of 2004~\cite{Aubert:2004it}. However, Belle has already measured the forward-backward asymmetry~\cite{Sato:2014pjr}, while BaBar presented a measurement of the CP violation in this channel~\cite{Lees:2013nxa}. All these measurements are still limited by statistical errors.
The super flavour factory Belle~II at KEK will accumulate data samples that are two orders of magnitude larger~\cite{Belle2}, which will push the experimental precision to its limit. Thus, a correspondingly precise understanding of the theoretical predictions is necessary.
The inclusive modes $B \rightarrow X_s \gamma$ and $B
\rightarrow X_s \ell^+ \ell^-$ are dominated by the partonic contributions, which can be calculated perturbatively.
It is well known that the heavy mass expansion (HME) makes it possible to calculate
the inclusive decay rates of a hadron containing a heavy quark, {\it if} only the leading operator in the effective
Hamiltonian (${\cal O}_7$ for $B \to X_s \gamma$, ${\cal O}_9$ for $B \to X_s \ell^+\ell^-$)
is considered~\cite{Chay:1990da,Bigi:1992su}. The HME represents a local operator product expansion (OPE) based on the optical theorem, and the partonic contribution is the leading term in this expansion in powers of $1/m_b$.
Due to the equations of motion, there is no contribution of order $\Lambda/m_b$; thus, the corrections to the partonic contribution start only at order $1/m_b^2$ and have a rather small numerical impact. For the inclusive decay $\bar B \to X_s \ell^+ \ell^-$
these leading hadronic power corrections of order $1/m_b^2$ and $1/m_b^3$ have already been analysed in Refs.~\cite{Falk:1993dh, Ali:1996bm, Chen:1997dj, Buchalla:1998mt, Bauer:1999kf} (for the inclusive decay $\bar B \to X_s \gamma$ see Ref.~\cite{Mannel:2010wj}).
However, as already noted in Ref.~\cite{Ligeti:1997tc}, there is no OPE for the
inclusive decay $B \rightarrow X_s \gamma$ if one includes operators beyond the leading electromagnetic dipole operator ${\cal
O}_7$ in the analysis. Voloshin~\cite{Voloshin:1996gw} has identified such
a contribution to the total decay rate in the interference of the $b \to s
\gamma$ amplitude due to the electromagnetic dipole operator ${\cal
O}_7$ with the charming penguin amplitude due to the current-current
operator ${\cal O}_2$. It is described by the matrix element of a non-local operator. This is an example of a so-called resolved photon contribution. Such contributions contain subprocesses in which the photon couples to light partons
instead of connecting directly to the effective weak-interaction vertex~\cite{Benzke:2010js}.\footnote{It is possible to expand this non-local contribution in terms of local operators again if one assumes that the charm is a heavy quark; the
first term in this expansion is then the dominant one~\cite{Ligeti:1997tc,Grant:1997ec,Buchalla:1997ky}. This
non-perturbative correction is suppressed by $\lambda_2/m_c^2$ and is
estimated to be of order $3\%$ compared with the leading-order
(perturbative) contribution to $\Gamma_{b \to s \gamma}$.
But if one assumes that the charm mass scales as
$m_c^2\sim\Lambda_{\text{QCD}} m_b$, the charm penguin contribution must be
described by the matrix element of a non-local
operator~\cite{Benzke:2010js}.}
An analysis of all resolved photon contributions to the inclusive decay $\bar B \to X_s\gamma$
related to other operators in the weak
Hamiltonian has been presented in Ref.~\cite{Benzke:2010js} (see also Ref.~\cite{Lee:2006wn}).
All these non-local contributions manifestly demonstrate
the breakdown of the local OPE within the
hadronic power corrections. Such non-local power corrections lead to a multi-scale problem which can be analysed well within soft-collinear effective theory (SCET).
The corresponding non-local matrix elements are very difficult to estimate. It has been shown that they induce
an irreducible theoretical uncertainty of $\pm
(4-5)\%$ for the total $CP$-averaged decay rate, defined with a
photon-energy cut of $E_\gamma = 1.6$~GeV~\cite{Benzke:2010js}.
In the present paper we explore the factorization of the inclusive decay $\bar B\rightarrow X_s \ell^+ \ell^-$ at subleading power and its implications for observables.
Within the inclusive decay $\bar B \to X_s \ell^+ \ell^-$,
the hadronic invariant mass ($M_X$) and the dilepton invariant mass ($q^2$) are independent
kinematical quantities. In order to suppress potentially huge backgrounds one needs an invariant-mass cut on the hadronic final-state system ($M_X \lesssim 2\,\text{GeV}$). This cut poses no additional constraints in the high-dilepton-mass region, but in the low-dilepton-mass region it implies specific kinematics in which the standard OPE breaks down and non-perturbative $b$-quark distributions, so-called shape functions, have to be introduced. The specific kinematics of low dilepton masses $q^2$ and small hadronic masses $M_X$ leads to a multi-scale problem for which soft-collinear effective theory (SCET) is the appropriate tool.
A former SCET analysis used the universality of the leading shape
function to show that the reduction due to the $M_X$-cut in all angular observables of the inclusive decay $\bar B\rightarrow X_s \ell^+ \ell^-$
can be accurately computed. The effects of subleading shape functions
lead to an additional uncertainty of $5\%$~\cite{Lee:2005pk,Lee:2005pw}.\footnote{In a later analysis~\cite{Lee:2008xc}
the uncertainties due to subleading shape functions were estimated
conservatively. Using combined $B \to
X_s\gamma$, $B \to X_u\ell \bar\nu$ and $B \to X_s \ell^+\ell^-$
data, the uncertainties due to leading and subleading shape functions can be reduced in the future~\cite{Lee:2008xc}.}
However, in all these previous analyses a problematic assumption is made, namely that $q^2$ represents a hard scale in the kinematical region of low $q^2$ and small $M_X$.
As we will show explicitly in our present SCET analysis, the hadronic mass cut implies that $q$ is not hard but (anti-)hard-collinear in the low-$q^2$ region.
The main goal of this paper is to identify the correct power counting of all the variables in the low-$q^2$ window of the inclusive decay $\bar B \rightarrow X_s \ell^+\ell^-$ within the effective theory SCET
when a hadronic mass cut is imposed. Furthermore, we will analyse the resolved power corrections in a systematic way and present numerical estimates of the corresponding uncertainties.
As mentioned above, in these contributions the virtual photon couples to light partons instead of connecting directly to the effective weak-interaction vertex. Moreover, we will show that, as a special feature, the resolved contributions remain non-local even when the hadronic mass cut is released. In this sense they represent an irreducible uncertainty independent of the hadronic mass cut.
The paper is organized as follows.
In section~\ref{sec:theory} we introduce the theoretical framework; in particular, we identify the correct power counting and the factorization properties of the subleading contributions. In section~\ref{sec:differential} we derive the fully differential decay rate. In section~\ref{sec:example} we present the explicit calculation of the interference term of the ${\cal O}_7$ and ${\cal O}_2$ operators. In section~\ref{sec:contributions} we present the analytical results for all resolved contributions at first subleading power, and their numerical impact is investigated in section~\ref{sec:numerics}. Finally, we summarize and discuss the results in
section~\ref{sec:conclusion}.
\newpage
\section{Theoretical Framework} \label{sec:theory}
The effective operator basis for the underlying parton interaction of the semi-leptonic flavour changing neutral current decay $\bar B \to X_s \ell^+\ell^-$ is well-known~\cite{Buchalla:1995vs}. Many higher-order calculations have led to the availability of NNLO precision and NNLL resummation in the strong coupling $\alpha_s$. At the relevant scale $m_b$ of the $b$-quark, all heavier fields are integrated out, and the effective operator basis contains only active flavours. In our convention, corresponding to the one used in~\cite{Beneke:2001at}, the contributing operators are given by
\begin{subequations}\label{eq:op_basis}
\begin{alignat}{2}
{\cal O}_1^q &= (\bar q b)_{V-A} (\bar s q)_{V-A} & \qquad {\cal O}^q_2 &= (\bar q_i b_j)_{V-A} (\bar s_j q_i)_{V-A} \,, \\
{\cal O}_3 &= (\bar s b)_{V-A} \sum_{q}\,(\bar q q)_{V-A} & \qquad {\cal O}_4 &= (\bar s_i b_j)_{V-A} \sum_{q}\,(\bar q_j q_i)_{V-A} \,, \\
{\cal O}_5 &= (\bar s b)_{V-A} \sum_{q}\,(\bar q q)_{V+A} & \qquad {\cal O}_6 &= (\bar s_i b_j)_{V-A} \sum_{q}\,(\bar q_j q_i)_{V+A} \,, \\
{\cal O}_{7\gamma} &= -\frac{e}{8\pi^2}\,m_b\,
\bar s\sigma_{\mu\nu}(1+\gamma_5) F^{\mu\nu} b & \qquad {\cal O}_{8g} &= -\frac{g_s}{8\pi^2}\,m_b\,
\bar s\sigma_{\mu\nu}(1+\gamma_5) G^{\mu\nu} b\,, \\
{\cal O}_9 &= \frac{\alpha}{2\pi} (\bar sb)_{V-A} (\bar \ell \ell)_{V}& \qquad {\cal O}_{10} &= \frac{\alpha}{2\pi} (\bar sb)_{V-A} (\bar \ell\ell)_{A} \,,
\end{alignat}
\end{subequations}
with $q=u,c$ and $i,j$ denoting the color indices and $(\bar q_1 q_2)_{V\pm A} = \bar q_1 \gamma_\mu (1 \pm \gamma_5) q_2$.
Our sign convention is such that $iD_\mu=i\partial_\mu+g_s\,T^a A_\mu^a+e\,Q_f A_\mu$, where $T^a$ are the $SU(3)$ color generators, and $Q_f$ is the electric charge of the fermion in units of $e$. Using Standard Model CKM unitarity, with $\lambda_q=V_{qb} V_{qs}^*$ and $\lambda_u + \lambda_c + \lambda_t = 0$, we may write the effective Hamiltonian as
\begin{equation}\label{eq:WeakHamiltonian}
{\cal H}_\text{eff} = \frac{G_F}{\sqrt{2}} \sum_{q=u,c} \lambda_q\,
\bigg( C_1\,{\cal O}_1^q + C_2\,{\cal O}_2^q+ C_{7\gamma}\,{\cal O}_{7\gamma} + C_{8g}\,{\cal O}_{8g} + \sum_{i=3,...,6,9,10} C_i\,{\cal O}_i
\bigg) \,.
\end{equation}
The Wilson coefficients $C_i$ depend on the scale $\mu$ at which the operators are renormalized, and in our convention $C_{7\gamma}$ is negative. The four-quark and QCD-penguin operators ${\cal O}_{1-6}$ and the QED and QCD dipole operators ${\cal O}_{7\gamma,8 g}$ contribute to the process in question via an appropriate contraction with the QED Lagrangian.
\subsection{Set-up of the SCET ansatz}
In calculating the inclusive decay mode $\bar B \to X_s \ell^+\ell^-$ we face two problems. On the one hand, the integrated branching fraction is dominated by
resonant $q\bar q$ background, especially for $q=c$, i.e. resonant $J/\psi
\rightarrow \ell^+ \ell^-$ intermediate states of the (virtual) photon, which exceed the non-resonant
charm-loop contribution by two orders of magnitude. This feature should not be misinterpreted as a striking failure of global parton-hadron duality, as shown in Ref.~\cite{Beneke:2009az}.
However, the $c \bar c$ resonances that show up as large peaks in the dilepton invariant mass spectrum are removed by appropriate kinematic cuts, leading to so-called `perturbative $q^2$-windows', namely the low-dilepton-mass region $1\,{\rm GeV}^2 < q^2 = m_{\ell\ell}^2 < 6\,{\rm GeV}^2$ and the high-dilepton-mass region with $q^2 > 14.4\,{\rm GeV}^2$.
On the other hand, in a realistic experimental environment we need to suppress potentially huge backgrounds by an invariant-mass cut on the hadronic final-state system ($M_X \lesssim 2\,\text{GeV}$). This cut poses no additional constraints in the high-dilepton-mass region. In the low-dilepton-mass region, however, we have in the $B$-meson rest frame, due to $q= P_B -P_X$,
\begin{equation}
2\, M_B\, E_X \, = M_B^2 +M_X^2 -q^2\,.
\end{equation}
Thus, for low enough $q^2$ in combination with $M_X^2 \ll E_X^2$, the $X_s$ system is jet-like with $E_X \sim M_B$. This further implies that $P_X$ is near the light cone.
Within these kinematic constraints, soft-collinear effective theory (SCET)~\cite{Bauer:2001yt} is the appropriate tool to study the factorization properties of inclusive $B$-meson decays and to
analyse the multi-scale problem. The cuts on the two independent kinematic variables, namely the hadronic and dilepton invariant masses, force us to study the process in the so-called shape-function region
with a large energy $E_X$ of order $M_B$ and a low invariant mass $ M_X \sim \sqrt{m_b \Lambda_\text{QCD}}$ of the hadronic system. SCET enables us to systematically obtain the scaling of the momentum components. In our set-up the relevant scales are $\Lambda_\text{QCD}$, $M_X$, and $M_B$. For their ratios, one finds the following hierarchy:
\begin{equation}
\frac{\Lambda_\text{QCD}}{M_B} \ll \frac{M_X}{M_B} \ll 1\,.
\end{equation}
Hence, resumming logarithms between these scales becomes important. SCET allows us to systematically resum the logarithms of these scale ratios and, more importantly, factorizes the effects stemming from the different regions. This enables us to calculate the process in a consistent expansion and to factor off the effects that can be calculated perturbatively, reducing the non-perturbative input to a limited set of soft functions. Defining $\lambda = \Lambda_\text{QCD}/M_B$, we numerically have $M_X \lesssim \sqrt{M_B \Lambda_\text{QCD}} \sim M_B \sqrt{\lambda}$. This sets the power-counting scale for the possible momentum components in light-cone coordinates $n^\mu = (1,0,0,1)$ and $\bar n^\mu = (1,0,0,-1)$. Any four-vector may be decomposed according to $a^\mu = n\cdot a \,\,\bar n^\mu/2 + \bar n\cdot a \,\, n^\mu/2 + a_\perp^\mu\,.$
We use the short-hand notation $a \sim (n\cdot a, \bar n\cdot a, a_\perp)$ to indicate the scaling of the momentum components in powers of $\lambda$. Within the validity of SCET, we have a hard momentum region $p_\text{hard} \sim (1, 1, 1)$, a hard-collinear region $p_\text{hc} \sim ( \lambda, 1 , \sqrt{\lambda})$, an anti-hard-collinear region $p_{\overline{\text{hc}}} \sim ( 1, \lambda , \sqrt{\lambda})$ and a soft region $p_\text{soft} \sim (\lambda, \lambda, \lambda)$.
\begin{figure}[hpt]
\centering\includegraphics[scale=0.36]{ahc_scaling_kinematics}\hfill\includegraphics[scale=0.36]{hard_scaling_kinematics}
\caption{$q^2 = (n \cdot q) (\bar n \cdot q)$ with $q_\perp = 0$ for the two perturbative mass windows. The gray band shows the experimental hadronic invariant mass cut with the $K$ as the lowest mass state, and the red band corresponds to the $q^2$ cut. The blue lines indicate the scaling of the two light-cone components. Left: Low invariant mass window. Scaling of $q_{\overline{\text{hc}}}$ is indicated. Right: High invariant mass window, with the maximally allowed value of $M_B$. Scaling of $q_\text{hard}$ is indicated.}\label{Fig:scaling_law}
\end{figure}
As far as the two-body radiative decay is concerned, the kinematics imply $q^2 = 0$ and $E_\gamma \sim m_b/2$, and, taking into account the invariant mass and photon energy requirements, the scaling is fixed to be a hard-collinear hadronic jet recoiling against an anti-hard-collinear photon.
In the case of a lepton-antilepton pair in the final state, we need to restrict the momentum transfer to the leptons to lie outside the mass window of the $c\bar c$ resonances, as described above. In Fig.~\ref{Fig:scaling_law} we compare the momentum scaling of the lepton-antilepton pair in terms of the light-cone coordinate decomposition with the experimental cuts. The gray band corresponds to the hadronic invariant mass cut that suppresses the background, while the red band is the $q^2$ constraint that rejects the $c\bar c$ resonances. The blue lines show the validity of SCET in terms of the momentum-component scaling, in the left panel for an anti-hard-collinear scaling and in the right panel for a hard momentum scaling. Note that there exist two solutions in the left panel, as we may either view the leptons as anti-hard-collinear and the hadronic jet as hard-collinear, or vice versa.
Obviously, the high-mass window corresponds to hard leptons and lies outside the validity of a description in terms of SCET; it can be readily seen that the current mass cuts have no impact on this scenario. This is in contrast to the low-$q^2$ region. The overlap of the red and gray bands is the allowed region after the experimental cuts, and it is in good agreement with our assumptions for the effective theory, which are approximately represented by the blue rectangle. Therefore, assigning an anti-hard-collinear momentum to the virtual photon and a hard-collinear one to the hadronic system, we are to a good approximation within the validity window of both the experimental requirements and the effective theory.
To show this more explicitly, we can introduce the two light-cone components of the hadronic momentum with $n\cdot P_X\,\bar n\cdot P_X = M^2_X$ and $P_X^\perp = 0$
\begin{equation}
\bar n\cdot P_X = E_X + | \vec{P}_X | \sim {O}(M_B)\,\,,\, n\cdot P_X = E_X - | \vec{P}_X | \sim {O}(\Lambda_{\rm QCD})\,.
\end{equation}
Using the kinematical relations, the leptonic light-cone variables are given by
\begin{equation}
n\cdot q = M_B - n\cdot P_X\,,\, \bar n\cdot q = M_B - \bar n\cdot P_X = q^2 / (M_B - n\cdot P_X)\,.
\end{equation}
In Fig.~\ref{Fig:scaling_law_momentum} we show the scaling of the momentum components of the hadronic system, $n\cdot P_X$ and $\bar n \cdot P_X$ (left plot),
and of the lepton system, $n\cdot q$ and $\bar n \cdot q$ (right plot), as a function of $q^2$ for three different values of the hadronic mass cut.
It can be clearly seen that, for the experimentally imposed cuts and with no further assumption other than the effective two-body system $B\rightarrow X_s \gamma^*$ being aligned along the light-cone axis without a perpendicular component, the hadronic system scales as hard-collinear, while the lepton system scales as anti-hard-collinear.
However, as can also be extracted from these plots, a lower cut of $q^2 \lesssim 5 \text{ GeV}^2$ instead of $q^2 \lesssim 6 \text{ GeV}^2$ is preferred, because a higher value of the $q^2$ cut pushes the small component to values slightly beyond our assumptions on the momentum-component scaling, so that neglected higher-order terms may give a more sizable contribution. In any case, the assumption of a hard momentum $q$, as used
in the calculations of Refs.~\cite{Lee:2005pk,Lee:2005pw,Lee:2008xc}, is not appropriate: it implies a different scaling and also a different matching of the operators. As we will show below, this assumption would imply that there are no resolved contributions in the effective field theory.
\begin{figure}[hpt]
\centering\includegraphics[scale=0.36]{px_pm}\hfill\includegraphics[scale=0.36]{q_pm}
\caption{The scaling of the momentum components of the hadronic system $P_X^+ = n\cdot P_X$ and $P_X^- = \bar n \cdot P_X$ (left) and the lepton system $q^+ = n\cdot q$ and $q^- = \bar n \cdot q$ is plotted as a function of $q^2$ each for three values of the hadronic invariant mass.}\label{Fig:scaling_law_momentum}
\end{figure}
\subsection{Factorization theorem, operator matching and scaling}
We describe the hadronic effects within SCET, corresponding to an expansion of the forward scattering amplitude in non-local operator matrix elements.
One derives a factorization formula for the considered process, in complete analogy to the radiative decay in \cite{Benzke:2010js}
\begin{align}\label{fact2}
&d\Gamma(\bar B\to X_s\ell^+ \ell^-)
= \sum_{n=0}^\infty\,\frac{1}{m_b^n}\, \sum_i\,H_i^{(n)} J_i^{(n)}\otimes S_i^{(n)} \nonumber \\
&\qquad + \sum_{n=1}^\infty\,\frac{1}{m_b^n}\,\bigg[ \sum_i\,H_i^{(n)} J_i^{(n)}\otimes S_i^{(n)}\otimes\bar J_i^{(n)}
+ \sum_i\,H_i^{(n)} J_i^{(n)}\otimes S_i^{(n)} \otimes\bar J_i^{(n)}\otimes\bar J_i^{(n)} \bigg] \,.
\end{align}
The formula contains the so-called direct contributions in the first line, while the second line describes the resolved contributions, which first occur at order $1/m_b$ in the heavy-quark expansion.
Fig.~\ref{Fig:theorem} shows a graphical illustration of the three terms in the factorization theorem in the shape function region.
Here $H_i^{(n)}$ are hard functions describing physics at the high scale $m_b$, and $J_i^{(n)}$ are so-called jet functions characterizing the physics of the hadronic final state $X_s$ with invariant mass in the range described above.
The hadronic physics associated with the scale $\Lambda_\text{QCD}$ is parametrized by the soft functions $S_i^{(n)}$. In analogy to the radiative decay investigated in Ref.~\cite{Benzke:2010js}, we have in addition the resolved virtual-photon contributions in the second line, whose effects are
described by new jet functions $\bar J_i^{(n)}$. They occur because virtual photons with virtualities of order $\sqrt{m_b \Lambda_\text{QCD}}$ couple to light partons instead of coupling to the weak vertex directly; consequently, they probe the hadronic substructure at this scale.
Resolved effects may appear as single or double resolved contributions in the interferences of the various operators, which in general also possess direct virtual-photon contributions.
Finally, the soft functions, or shape functions, are defined in terms of forward matrix elements of non-local heavy-quark effective theory (HQET) operators. This limited set of shape functions cannot be calculated perturbatively, but it allows for a systematic analysis of the hadronic effects in this decay mode.
We denote by the symbol $\otimes$ the convolution of the soft and jet functions over their common variables. Finally, we note that, as already discussed in Ref.~\cite{Benzke:2010js}, there is no complete proof of this factorization formula: there is one case in which a UV-divergent convolution integral arises within the resolved contributions.
The contribution from ${\cal O}_8 - {\cal O}_8$ possesses an ultraviolet divergence which cancels the $\mu$-dependence of the corresponding subleading jet function. This cancellation is expected and needed. However, a proper factorization of the anti-jet functions is required for a consistent description. This issue has been handled by considering the convolution of the two anti-jet functions with the soft function: the limit of the dimensional-regularization parameter $\epsilon$ has to be taken only after the convolution has been performed in order to obtain the proper result, which is in tension with the factorized form assumed in the factorization formula.~\footnote{We note that there are also divergent convolution integrals in SCET in power-suppressed contributions to hadronic $B$-meson decays. The important difference to our present case is that those divergences have an IR origin.}
\begin{figure}
\begin{center}
\epsfig{file=FZ0,width=5.9cm}\hspace{-1.5cm}\epsfig{file=FZ1,width=5.9cm}\hspace{-1.5cm}\epsfig{file=FZ2,width=5.9cm}
{\caption{\label{Fig:theorem}
Graphical illustration of the three terms in the QCD factorization theorem (\ref{fact2}). The dashed lines represent soft interactions, which must be power expanded and factored off the remaining building blocks to derive factorization.}}
\end{center}
\end{figure}
Within this context, we consider only the low-$q^2$ region. In this region, obeying the invariant-mass constraint, the only sensible power counting, as shown above, is to assume that $q$ scales as an anti-hard-collinear momentum, while $P_X$ scales as a hard-collinear momentum, just as in the radiative decay. In this sense, at least one of the leptons has to be anti-hard-collinear, while the other may be soft.
In our effective theory, we have, besides the initial heavy quark, active hard-collinear and anti-hard-collinear fermions, whose fields scale in $x$-space as
\begin{equation}
\xi_\text{hc} = W_{\bar n}^\dagger \xi_n \sim \sqrt{\lambda}\,, \qquad \xi_{\overline{\text{hc}}} = W_{ n}^\dagger \xi_{\bar n} \sim \sqrt{\lambda}\,.
\end{equation}
These two-component spinor fields obey the projection identities $P_n \xi_n = \xi_n$, $P_{\bar n} \xi_n = 0$ (and $n \leftrightarrow \bar n$) with $P_n = \frac{\slashed{n} \slashed{\bar n}}{4}$ and $P_{\bar n} = \frac{\slashed{\bar n} \slashed{ n}}{4}$. The quantities $W_n\,,W_{\bar n}$ are the familiar (anti-)hard-collinear Wilson lines in SCET that ensure gauge invariance. The soft and heavy-quark fields scale as $h, q \sim \lambda^{3/2}$. The $b$-quark is described in terms of an HQET field; its velocity is given by $v^\mu = \frac12 (n^\mu + \bar{n}^\mu)$, and at leading order the $b$-field satisfies $\slashed{v}\, b = b$.
The projections of the gauge fields onto the components scale the same as the corresponding momentum components
\begin{equation}
\mathcal{A}^\mu_\text{hc} = W_{\bar n}^\dagger ( i D^\mu_\text{hc} W_{\bar n}) \sim (\lambda, 0, \sqrt{\lambda})\,\qquad\mathcal{A}^\mu_{\overline{\text{hc}}} = W_{n}^\dagger ( i D^\mu_{\overline{\text{hc} }} W_{n}) \sim (0, \lambda, \sqrt{\lambda})\,.
\end{equation}
Using this scaling, we can match the operators \eqref{eq:op_basis} onto the corresponding SCET operators and order them in the scaling parameter $\lambda$. The relevant SCET Lagrangian for hard-collinear and soft fields (for anti-hard-collinear fields we need to replace $n \leftrightarrow \bar n$), obtained by matching from the QCD (QED) Lagrangian, is given by \cite{Beneke:2002ph,Beneke:2002ni}
\begin{equation} \label{eq:scet_lagrange}
{\cal L} = \bar{\xi}_n \left(i n\cdot D + i \slashed{D}_{\perp hc} \frac{1}{i \bar n\cdot D_{hc}}\, i\slashed{D}_{\perp hc} \right)
\frac{\slashed{\bar n}}{2} \, \xi_n + \bar{q}\, i \slashed{D}_{\rm s}(x) q + {\cal L}_{\xi q}^{(1)} ,
\end{equation}
where the superscript denotes the suppression in powers of $\sqrt{\lambda}$. The terms are explicitly given by
\begin{align}
{\cal L}^{(1)}_{\xi q} &= \bar q \,W_{\bar n}^\dagger i\slashed{D}_{\perp hc} \,\xi_n -
\bar{\xi}_n \,i\overleftarrow{\slashed{D}}_{\perp hc} W_{\bar n}\, q \, . \label{eq:scet_colsoft_sl}
\end{align}
In order to describe the process in question, we need to combine QCD and QED in a single SCET framework. Kinematically we are in a situation where the hadronic part must be described in terms of SCET for a proper and consistent description, but the QED fields, too, have to be described in terms of an SCET-like theory.
Thus, we investigate the matching of ${\cal O}_7$ onto SCET fields, where we consider the (virtual) photon to be power-counted as well.
The electromagnetic dipole operator is then written as
\begin{equation}
{\cal O}_{7\gamma}= -\frac{e}{8\pi^2}\,m_b\,
\bar s\sigma_{\mu\nu}(1+\gamma_5) F^{\mu\nu} b
\,.
\end{equation}
Suppressing the $-\frac{e m_b}{4\pi^2}\,e^{-im_b\,v\cdot x}$ factor and following the notation of \cite{Beneke:2002ph}, ${\cal O}_{7\gamma}$ is matched onto the leading-power operator, with $\mathcal{A}^\text{em}$ the Wilson-line-dressed, gauge-invariant photon field,
\begin{equation}\label{eq:Q7_A}
{\cal O}_{7\gamma A}^{(0)} = \bar{\xi}_\text{hc}\,\frac{\bar{\slashed{n}}}{2}\, [i n\cdot \partial \slashed{\mathcal{A}}^\text{em}_{\perp}]\,(1+\gamma_5) h \,.
\end{equation}
We count the photon field as $ (n\cdot \mathcal{A}^\text{em}, \bar n\cdot \mathcal{A}^\text{em}, \mathcal{A}^\text{em}_\perp) \sim ( 0, \lambda , \sqrt{\lambda})$, where $n\cdot \mathcal{A}^\text{em} = 0$ follows from gauge invariance, despite the photon being off-shell.
We need to contract the photon from this operator with the QED Lagrangian in order to convert this virtual photon into a lepton-antilepton pair. Note that the contribution of ${\cal O}_7$ scales as $\lambda^\frac{5}{2}$.
The conversion of the virtual photon into hard-collinear leptons introduces no further suppression.
For the semi-leptonic operators, the matching leads to the following SCET operators
\begin{align}
{\cal O}_9 &= \frac{\alpha}{2\pi} (\bar sb)_{V-A} (\bar ll)_{V} &\quad &\rightarrow &\quad {\cal O}_9^{(1)} &= \frac{\alpha}{2\pi} ( \bar{\xi}_\text{hc}^s [1+\gamma^5 ] h) ( \bar{\xi}_\text{hc}^\ell \frac{\slashed{n}}{2} \xi_\text{hc}^\ell)\label{eqn:Q9scet} \\
{\cal O}_{10} &= \frac{\alpha}{2\pi} (\bar sb)_{V-A} (\bar ll)_{A} &\quad &\rightarrow &\quad {\cal O}_{10}^{(1)} &=\frac{\alpha}{2\pi} ( \bar{\xi}_\text{hc}^s [1+\gamma^5 ] h) ( \bar{\xi}_\text{hc}^\ell \frac{\slashed{n}}{2} \gamma^5 \xi_\text{hc}^\ell)\label{eqn:Q10scet}\,.
\end{align}
Both operators scale as $\lambda^{\frac12 +\frac32 +2 \frac12} = \lambda^3$, which is suppressed by $\lambda^\frac12$ against the contribution from ${\cal O}_7$. Note that this changes in the high $q^2$ region as in this case the leptons are hard and do not add a power suppression.
Thus, according to the power counting in the low-$q^2$ region, the leading-order reference is given by the direct ${\cal O}_7 - {\cal O}_7$ contribution at order $\lambda^5$. If one takes into account all contributions up to $1/m_b$ corrections, i.e. terms up to $\lambda^6$, corresponding to ${\cal O}(\lambda)$ corrections to the leading direct contribution, then within the direct contributions one has to include only the leading part of ${\cal O}_{9,10} - {\cal O}_{9,10}$, but the subleading part of ${\cal O}_7 - {\cal O}_7$. At this order one has to include subleading soft and jet functions as well as the resolved contributions due to the interference with other operators.
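The $\lambda$-bookkeeping behind these statements can be made explicit
in a few lines (our sketch, based on the field scalings quoted above;
we use that $i n\cdot\partial$ acting on an anti-hard-collinear field
counts as $\lambda^0$, since $n\cdot q \sim 1$ for anti-hard-collinear
momenta):
\begin{verbatim}
from fractions import Fraction as F

# lambda-powers of the SCET building blocks
xi, h, A_perp, n_del = F(1, 2), F(3, 2), F(1, 2), F(0)

O7 = xi + n_del + A_perp + h  # O_7gammaA^(0): xi_hc [i n.d A_perp] h
O9 = xi + h + 2 * xi          # O_9^(1), O_10^(1): (xi^s h)(xi^l xi^l)

print(O7, O9, O9 - O7)        # 5/2, 3 -> relative suppression lambda^(1/2)
print(2 * O7, 2 * O9)         # rates: lambda^5 (O7-O7) vs lambda^6 (O9,10)
\end{verbatim}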
In this paper we calculate the resolved contributions to order $1/m_b$. To this end we need to compute the resolved contributions from ${\cal O}_1 - {\cal O}_7$, ${\cal O}_7 - {\cal O}_8$ and ${\cal O}_8 - {\cal O}_8$,
as in $\bar B\to X_s\gamma$. They appear at the same order in the power counting in $\bar B\rightarrow X_s \ell^+ \ell^-$, since the conversion of the photon into the hard-collinear leptons is not power suppressed. Are there additional contributions? Indeed, the virtual photon could give rise to additional structures in the operator matching which were zero in the real-photon case. In particular, subleading operators might contain factors of $\bar n\cdot q$ and $\bar n\cdot{\cal A}^\text{em}$. However, these operators contain the photon field directly (i.e. they do not couple to the photon via a Lagrangian insertion) and therefore do not give rise to resolved contributions. Also, there are no additional operators at leading power that contain these factors.
The usual observables can be obtained from the double differential rate in the form
\begin{equation}
\frac{\text{d}^2 \Gamma}{ \text{d}q^2 \text{d} z} = \frac38 \left[(1 + z^2) H_T (q^2) + 2 (1 - z^2) H_L(q^2) + 2 z H_A(q^2) \right] \label{eq:DiffRate}
\end{equation}
as given in \cite{Huber:2015sra}, and we will calculate the corrections to the structure functions $H_i$ below.
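As a consistency check of the normalization (an illustrative sketch we
add here), integrating Eq.~\eqref{eq:DiffRate} over the angular
variable with sympy yields the $q^2$ spectrum and the unnormalized
forward-backward asymmetry:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
HT, HL, HA = sp.symbols('H_T H_L H_A')

d2Gamma = sp.Rational(3, 8) * ((1 + z**2) * HT
                               + 2 * (1 - z**2) * HL + 2 * z * HA)

rate = sp.integrate(d2Gamma, (z, -1, 1))
afb_num = (sp.integrate(d2Gamma, (z, 0, 1))
           - sp.integrate(d2Gamma, (z, -1, 0)))

print(sp.simplify(rate))      # -> H_T + H_L
print(sp.simplify(afb_num))   # -> 3*H_A/4
\end{verbatim}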
\section{Obtaining the Fully Differential Decay Rate}
\label{sec:differential}
The differential rate is obtained by calculating the restricted discontinuity
\begin{align}\label{eq:Discontinuity}
d\Gamma(\bar B\to X_s\ell^+ \ell^-) \propto \text{Disc}_\text{\,restr.}\,
\Big[ i\int d^4 x\,\langle\bar B| {\cal H}_\text{eff}^\dagger(x)\,
{\cal H}_\text{eff}(0) |\bar B\rangle \Big]
\,,
\end{align}
where the restriction implies that only cuts that contain the appropriate final states are taken into account.
At first order in the electromagnetic coupling the resulting expression can be decomposed into a hadronic and a leptonic tensor, $W^{\alpha \beta}$ and $L_{\alpha \beta}$ respectively
\begin{align}
d\Gamma(\bar B\to X_s\ell^+ \ell^-) = d\Pi^\text{Lept}\, L_{\alpha \beta}(p_{\ell^+},p_{\ell^-})\, W^{\alpha \beta}(v,p_{\ell^+}+p_{\ell^-}) \,,
\end{align}
with the leptonic phase space indicated by $d\Pi^\text{Lept}$.
The hadronic tensor $W^{\alpha \beta}$ contains the integration over the final state hadronic momentum and the total momentum conservation in its definition
\begin{equation}
W_{\alpha \beta} = \sum_{X_s}\int\frac{\text{d}^3 p_{X_s} }{(2\pi)^3 2E_{X_s}} \frac{1}{2 M_B}\langle B | {\cal O}^{\dagger,\text{had}}_\beta | X_s \rangle\langle X_s|{\cal O}^\text{had}_\alpha | B \rangle (2\pi)^4 \delta^{(4)} (P_B - p_{X_s} - p_{\ell^+}-p_{\ell^-} )\,,
\end{equation}
with the Fourier-transformed operators ${\cal O}^\text{had}_\alpha$. This definition explicitly contains the on-shell condition.
For the leptonic tensor we have to distinguish between the contribution from the QED current insertion, and the direct contributions from ${\cal O}_{9,10}$, with the former defined as including the virtual photon propagator
\begin{align}
L_{\alpha \beta}^\text{QED} = -\left (\frac{-i}{q^2}\right)^2 \big(- i e \big)^2\,
\text{tr}\big(\slashed{p}_{\ell^+}\gamma_\alpha\,\slashed{p}_{\ell^-}\gamma_\beta\big)\,.
\end{align}
As will be shown below, for the current insertions only terms containing perpendicular components survive the contraction with the hadronic tensor at first order. For the semi-leptonic contributions, on the other hand, the leptonic tensor is contracted with $n^\alpha n^\beta$, as can be seen from equations (\ref{eqn:Q9scet}) and (\ref{eqn:Q10scet}). But as explained above, there are no resolved contributions involving the semi-leptonic operators at first order in $1/m_b$. Thus, we can restrict ourselves to the insertion of a QED current in the following.
Below, the resolved $1 / m_b$ corrections to this hadronic tensor are calculated within the framework of SCET. Any desired distribution can then be recovered by performing the phase-space integration over the lepton momenta outlined below for our numerical study.
For an unpolarized three-body decay with fixed invariant masses we have two degrees of freedom. Remember that the hadronic on-shell condition leads to a delta distribution (or its derivative for the power corrections) in the hadronic tensor. This condition is implicitly contained in the non-local matrix element, and therefore we can have at most a triple differential rate from the phase space, in which this on-shell condition still needs to be evaluated. It is convenient to use the following three kinematic variables, as already indicated in Eq.~\eqref{eq:DiffRate}:
\begin{align}
v\cdot q\,; \qquad q^2\,; \qquad
z &= \cos \theta = \frac{v\cdot p_{\ell^+} - v\cdot p_{\ell^-} }{ \sqrt{(v\cdot p_{\ell^-} + v\cdot p_{\ell^+} )^2 - q^2}\, \sqrt{1- 4 m_\ell^2/q^2}}\,, \label{eq:LeptonAngle}
\end{align}
where $q = p_{\ell^+} + p_{\ell^-}$, $v = \frac12 (n +\bar n)$, and $\theta$ is the angle between the positively charged lepton and the flight direction of the
$B$-meson in the rest frame of the lepton pair ($\vec q = 0$). We keep the leptons massless in the following discussion. Then the structure functions in Eq.~\eqref{eq:DiffRate} can easily be identified.
In this notation it is obvious that $z$ is a Lorentz scalar, and in the $B$-rest frame $v\cdot p_{\ell_\pm} = E_{\ell_\pm}$.
We derive the phase-space result in full QED kinematics. It can be shown that expanding this calculation to leading order in $\lambda$ reproduces the result calculated directly in leading-order SCET.
Furthermore it is easy to verify that the leptonic part $I_{\alpha \beta}(v,q,z)$ defined in the contraction
\begin{align}
&\int d\Pi^\text{Lept}\, L_{\alpha \beta}^\text{QED}(p_{\ell^+},p_{\ell^-})\, W^{\alpha \beta}(v,p_{\ell^+}+p_{\ell^-})\nonumber\\
=\,&\int d\Pi^\text{Lept}\,\frac{d^4 q}{(2\pi)^4}\,(2\pi)^4 \delta^{(4)} (q-p_+-p_-)\,dz\,\delta\left(z - \frac{v\cdot p_{\ell^+} - v\cdot p_{\ell^-} }{ \sqrt{(v\cdot q)^2 - q^2} } \right)\nonumber\\
& \times\frac{-e^2}{(q^2)^2} \,
\text{tr}\big(\slashed{p}_{\ell^+}\gamma_\alpha\,\slashed{p}_{\ell^-}\gamma_\beta\big)\, W^{\alpha \beta}(v,q)\nonumber\\
\equiv\,&\int dv\cdot q\,dq^2\,dz\,\frac{\sqrt{(v\cdot q)^2 -q^2}}{(2\pi)^3}\,\frac{4\pi\alpha}{(q^2)^2}\big(- I_{\alpha \beta}(v,q,z) \big)\,W^{\alpha \beta}(v,q)\label{Eq:WLcont}
\end{align}
transforms as a tensor under Lorentz transformations. Here, we have explicitly included the dependence on the variable $z$. The only invariants which occur in the integrand are $v\cdot p_{\ell^-}$ and $q \cdot p_{\ell^-}$. Therefore, using current conservation $q_\mu L^{\mu \nu} = 0 = q_\nu L^{\mu \nu}$ for massless leptons, we can decompose $I_{\alpha \beta}(v,q,z)$ as
\begin{align}
I^{\alpha \beta} (v, q, z) = \phantom{+}& I_1 (v\cdot q, q^2, z) \left( - g^{\alpha \beta} + \frac{q^\alpha q^\beta}{q^2} \right) \nonumber \\
+&I_2 (v\cdot q, q^2, z) \left(v^\alpha v^\beta + \frac{q^\alpha q^\beta (v\cdot q)^2}{q^4} - \frac{(v^\alpha q^\beta +v^\beta q^\alpha)(v\cdot q)}{q^2}\right) \nonumber \\
+&I_3 (v\cdot q, q^2, z) i \epsilon^{\alpha \beta \rho \sigma} v_\rho q_\sigma \,.\label{eqn:Idecomp}
\end{align}
Note that for the same reasons we may decompose the hadronic tensor $W^{\alpha \beta} (v,q)$ in a similar way, as it depends only on $v^\mu$ and $q^\mu$.
In the case relevant for the resolved contribution we have to explicitly compute this decomposition for the insertion of a QED current. Then the leptonic structure functions are given by
\begin{align}
I_1 (v\cdot q, q^2, z) &= -\frac{q^2}{16 \pi} (1+z^2)\\
I_2 (v\cdot q, q^2, z) &= -\frac{q^2}{16 \pi} \frac{q^2}{(v\cdot q)^2-q^2} (1-3 z^2)\\
I_3 (v\cdot q, q^2, z) &= 0\,.
\end{align}
The absence of a component linear in $z$ shows that there is no resolved contribution to the forward-backward asymmetry at this order. This result is anticipated, since neither ${\cal O}_9$ nor ${\cal O}_{10}$ contributes to the resolved corrections at this order.
Expanding this result to leading order in $\lambda$, taking into account that $q_\perp=0$ and that the open indices couple to a virtual photon field scaling as anti-hard-collinear, we obtain
\begin{equation}
I^{\alpha \beta} (v, q, z) = - g_\perp^{\alpha \beta} \frac{n\cdot q\, \bar{n} \cdot q}{16 \pi} (1+z^2) + {O}(\lambda)\,.
\end{equation}
In this sense, the Dirac structure reduces to the on-shell photon case. Combining this expanded result with the phase-space integration in Eq.~\eqref{Eq:WLcont}, we obtain
\begin{align}
&d\Pi^\text{Lept}\, L_{\alpha \beta}^\text{QED}(p_{\ell^+},p_{\ell^-})\, W^{\alpha \beta}(v,p_{\ell^+}+p_{\ell^-})\nonumber\\
= &\,dv\cdot q\,dq^2\,dz\, \frac{\alpha}{32\pi^3} (1+z^2)\,\frac{\sqrt{(v{\cdot}q)^2 - q^2}}{q^2} \,g_{\perp,\alpha \beta}\,W^{\alpha \beta}(v,q)\nonumber\\
\equiv &\, d\Lambda_{\alpha\beta}\,W^{\alpha \beta}(v,q)\,,
\end{align}
where we have defined the abbreviation $d\Lambda_{\alpha\beta}$ for later convenience. The transition to light-cone coordinates is easily obtained by using
\begin{align}
n \cdot q &= v\cdot q + \sqrt{(v{\cdot}q)^2 - q^2}\\
\bar{n} \cdot q &= v\cdot q - \sqrt{(v{\cdot}q)^2 - q^2}\,,
\end{align}
for an anti-hard-collinear momentum $q$. Neglecting $\lambda$ corrections, it is easy to derive
\begin{equation}
\text{d}v {\cdot} q\, \text{d} q^2 = \frac{n \cdot q}{2}\, \text{d}n{\cdot} q\, \text{d} \bar{n}{\cdot} q\,,
\end{equation}
where we have approximated $\sqrt{(v\cdot q)^2 - q^2} \approx \frac12 n\cdot q$.
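For completeness, the exact Jacobian follows from $v\cdot q=\frac12(n\cdot q+\bar n\cdot q)$ and $q^2=n\cdot q\,\bar n\cdot q$ (valid here since $q_\perp=0$),
\begin{equation}
\text{d}v {\cdot} q\, \text{d} q^2 = \frac{n\cdot q-\bar n\cdot q}{2}\, \text{d}n{\cdot} q\, \text{d} \bar{n}{\cdot} q = \sqrt{(v\cdot q)^2-q^2}\; \text{d}n{\cdot} q\, \text{d} \bar{n}{\cdot} q\,,
\end{equation}
which reduces to the expression above for an anti-hard-collinear $q$.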
Furthermore we find that, in comparison with Eq.~\eqref{eq:DiffRate}, the only structure function that receives corrections of this type at the considered order is $H_T (q^2)$, while $H_A(q^2)$ and $H_L(q^2)$ do not.
Thus, we find
\begin{align}
d\Lambda_{\alpha\beta} =
dn\cdot q\,d\bar n\cdot q\,dz \frac{\alpha}{128\pi^3} (1+z^2)\,\frac{n\cdot q}{\bar n\cdot q} \,g_{\perp,\alpha \beta}\,.
\end{align}
With the appropriate replacements derived above we can therefore translate between the two differential rates, obeying the power counting when replacing the variables:
\begin{equation}
\frac{ \text{d}^3 \Gamma}{\text{d}v {\cdot} q\, \text{d} q^2\,\text{d} z} = \frac{4 \bar n\cdot q}{n\cdot q} \frac{\sqrt{(v{\cdot}q)^2 - q^2}}{q^2} \frac{ \text{d}^3 \Gamma}{\text{d}n{\cdot}q\,\text{d}\bar n {\cdot}q\,\text{d} z}\,.
\end{equation}
Finally, we can compare our results to the known results for $B\rightarrow X_s \gamma$. This can be done by recomputing the phase space and setting $\bar n \cdot q = 0$:
\begin{equation}
d\Gamma(\bar B\to X_s\gamma) = dE_\gamma\, \frac{n\cdot q}{8\pi^2} \,g_{\perp,\alpha \beta} \,W^{\alpha \beta}(v,q)\label{Eq:PSgamma}\,.
\end{equation}
This corresponds to
\begin{align}
\frac{4\pi}{\alpha (1+z^2)}\frac{n\cdot q\,\,q^2}{\sqrt{(v{\cdot}q)^2 - q^2}} \frac{ \text{d}^3 \Gamma}{\text{d}v {\cdot} q \text{d} q^2\,\text{d} z} \bigg|_{\bar n\cdot q \rightarrow 0} &\rightarrow \frac{ \text{d} \Gamma}{\text{d}E_\gamma} \nonumber \\
\frac{16\pi}{\alpha (1+z^2)} \bar n\cdot q \frac{ \text{d}^3 \Gamma}{\text{d}n{\cdot}q\,\text{d}\bar n {\cdot}q\,\text{d} z} \bigg|_{\bar n\cdot q \rightarrow 0} &\rightarrow \frac{ \text{d} \Gamma}{\text{d}E_\gamma}\,.
\end{align}
\newpage
\section{\texorpdfstring{Explicit calculation of the ${\cal O}_1 - {\cal O}_{7\gamma}$ contribution}{Explicit calculation of the O1-O7 contribution}}
\label{sec:example}
For the explicit calculation of this resolved contribution we need to derive the expression for the loop with the emission of an anti-hard-collinear virtual photon and a soft gluon. We define (see Fig.~\ref{Fig:Q1loop})
\begin{figure}
\begin{center}
\epsfig{file=Q1loop1,width=5cm}\hspace{1cm}\epsfig{file=Q1loop2,width=5cm}
{\caption{\label{Fig:Q1loop}
Graphical illustration of the leading charm-quark loop contribution with the emission of an off-shell photon and a soft gluon, induced by the operator ${\cal O}_1$.}}
\end{center}
\end{figure}
\begin{equation}
{\cal A} = \frac{e q_q}{4\pi}\,\frac{g}{4\pi}\,\bar s\Gamma_2\,A\,\Gamma_1 b\,.
\end{equation}
Considering only those contributions that do not vanish between the Dirac structures $\Gamma_2\otimes\Gamma_1=\gamma^\mu(1-\gamma_5)\otimes\gamma_\mu(1-\gamma_5)$ the leading charm-quark loop contribution with the emission of an off-shell photon $q$ and a soft gluon $l_1$ is given in gauge invariant form by
\begin{align}
A =
\frac{i \gamma_{\beta } \gamma^5}{2\left(l_1\cdot q\right){}^2}
&\bigg[ \left(F_{\mu \alpha} \tilde{G}^{\mu \beta}+G_{\mu \alpha}\tilde{F}^{\mu \beta }\right) (q+l_1)^{\alpha }
\Big \lbrace
(q+l_1)^2\left(1-F\left(\frac{m_c^2}{(q+l_1)^2}\right)\right) \nonumber\\
&- q^2\left(1-F\left(\frac{m_c^2}{q^2}\right)\right) - q^2\left(G\left(\frac{m_c^2}{(q+l_1)^2}\right) - G\left(\frac{m_c^2}{q^2}\right)\right)
\Big \rbrace
\nonumber \\
&- F_{\mu \alpha } \tilde {G}^{\mu \beta } q^{\alpha } \Big \lbrace
(q+l_1)^2\left(1-F\left(\frac{m_c^2}{(q+l_1)^2}\right)\right) \nonumber\\
&- q^2\left(1-F\left(\frac{m_c^2}{q^2}\right)\right) - (q+l_1)^2\left(G\left(\frac{m_c^2}{(q+l_1)^2}\right) - G\left(\frac{m_c^2}{q^2}\right)\right)
\Big\rbrace \bigg]\,,
\end{align}
where we are using the convention
\begin{equation}
\tilde F^{\mu\nu} = -\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}\quad(\epsilon^{0123}=-1)\,,
\end{equation}
and have defined the penguin functions
\begin{align}
F(x)&=4x\arctan^2\frac{1}{\sqrt{4x-1}}\,, \label{FF}\\
G(x)&=2\sqrt{4x-1}\arctan\frac{1}{\sqrt{4x-1}}-2 \,.
\end{align}
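For later numerical use, the penguin functions can be implemented in a few lines. The following Python sketch is our own illustration and not part of the derivation; the continuation to $x<1/4$, i.e. above the $c\bar c$ threshold, is performed with a small imaginary shift $x\to x-i0$, which we take as an assumption for the branch implied by the $i\epsilon$ prescription of the propagators:
\begin{verbatim}
import numpy as np

def F(x):
    # F(x) = 4x arctan^2(1/sqrt(4x-1)); the -1e-12j shifts off the branch cut
    s = np.sqrt(4*(x - 1e-12j) - 1)
    return 4*x*np.arctan(1/s)**2

def G(x):
    # G(x) = 2 sqrt(4x-1) arctan(1/sqrt(4x-1)) - 2
    s = np.sqrt(4*(x - 1e-12j) - 1)
    return 2*s*np.arctan(1/s) - 2
\end{verbatim}
For $x\to\infty$ one finds $F\to1$ and $G\to0$, so the combinations $1-F$ and the differences of $G$ appearing above vanish in the formal limit $m_c\to\infty$.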
For a real photon, where $q^2=0$ and $q^\alpha F_{\alpha\beta}=0$, the above expression reduces to
\begin{align}
A =
\frac{i \gamma_{\beta } \gamma^5}{2\left(l_1\cdot q\right){}^2}
\bigg[ \left(F_{\mu \alpha} \tilde{G}^{\mu \beta}+G_{\mu \alpha}\tilde{F}^{\mu \beta }\right) (q+l_1)^{\alpha }
\Big \lbrace
2\,l_1\cdot q\left(1-F\left(\frac{m_c^2}{2\,l_1\cdot q}\right)\right)\Big\rbrace \bigg]\,,
\end{align}
and we reproduce the result from $B \rightarrow X_s \gamma$. Note that in the soft limit, where also $l_1\cdot q\to 0$, the product of the prefactor $1/(l_1\cdot q)^2$ with the specific combination of the penguin functions given above remains finite. As far as the field-strength tensors are concerned, the leading power is given by
\begin{align}
&(q+l_1)^\alpha \gamma_\beta \gamma^5 (G_{\mu \alpha} \tilde F^{\mu \beta} )\nonumber\\
=&\,\frac{1}{4}\,(n\cdot q)^2 i\epsilon^{\beta\sigma\mu\rho}\bar n_\rho\, \bar n^\alpha G_{\perp\alpha\mu}\,\epsilon_{\perp\sigma}^{(\gamma)*}
+{\cal O}(\lambda^3)\,,
\end{align}
where the polarization vector $\epsilon^{(\gamma)}$ represents an off-shell photon, which gives rise to the anti-hard-collinear propagator, when contracted with the QED current. Calculating the interference with the operator ${\cal O}_{7\gamma}$ we obtain the differential rate as
\begin{align}
d\Gamma_{1 7} =& \frac{1}{m_b}\,\text{Re}\Big[\hat\Gamma_{1 7}\frac{-\lambda_t^*\lambda_c}{|\lambda_t|^2}\Big]\,
d\Lambda_{\alpha\beta}\, e_c\, (n\cdot q)^2\,
\text{Re}\int d\omega \delta( \omega + m_b - n\cdot q)\int d\omega_1 \frac{1}{\omega_1 + i \epsilon}\nonumber\\
&\times\frac{1}{\omega_1}\left[
(\overline{n}\cdot q+\omega_1)\left(1-F\left(\frac{m_c^2}{n\cdot q(\overline{n}\cdot q+\omega_1)}\right)\right)
-\overline{n}\cdot q \left(1-F\left(\frac{m_c^2}{n\cdot q\overline{n}\cdot q}\right)\right) \right.\nonumber\\
&\left.-\overline{n}\cdot q \left( G\left(\frac{m_c^2}{n\cdot q(\overline{n}\cdot q+\omega_1)}\right) - G\left(\frac{m_c^2}{n\cdot q\overline{n}\cdot q}\right)\right)\right]\nonumber\\
&\times\int \!\frac{dt}{2\pi} e^{-i \omega t}\!\!\int \!\frac{dr}{2\pi} e^{-i \omega_1 r}
\frac{\langle B | \bar h(nt) \slashed{\bar n} [1+\gamma^5]\frac{i}{2} [\gamma^\mu_\perp, \gamma^\beta_\perp]\gamma_\perp^\alpha \bar{n}^\kappa g G_{\mu \kappa}(\bar n r) h(0) | B \rangle}{2M_B}
\end{align}
where we have defined the shorthand notation
\begin{equation}
\hat\Gamma_{i j} = \frac{G_F^2\alpha m_b^2}{4\pi^2}\, C_i C_j^*\,|\lambda_t|^2\,,
\end{equation}
and the $i\epsilon$ prescription may be dropped if we assume the soft function is well behaved in the limit $\omega_1\to 0$. The result obviously reproduces the known structure function result in the limit of a real photon.
For this we have to replace the leptonic tensor by $- g_{\alpha \beta}$, identify the photon energy via $n\cdot q = 2 E_\gamma$, and set $\bar n \cdot q = 0$. We then obtain for the contraction of the matrix element
\begin{align}
g_{\alpha \beta}[\gamma^\mu_\perp, \gamma^\beta_\perp]\gamma_\perp^\alpha
= 2 \gamma^\mu_\perp \,,
\end{align}
which exactly reproduces the soft function in the radiative decay (the identity follows from $\gamma_\perp^\beta\gamma_{\perp\beta}=2$ and $\gamma_\perp^\beta\gamma_\perp^\mu\gamma_{\perp\beta}=0$ in the two transverse dimensions).
{The same is true for the semi-leptonic decay. Due to $q_\perp=0$ the only remaining term of the decomposition of the leptonic tensor in (\ref{eqn:Idecomp}) is again $g_\perp^{\alpha\beta}$ and the Dirac structure in the shape function again reduces to the radiative case.}
Hence, no new structure function is involved to this order in the power-counting.
\newpage
\section{\texorpdfstring{Results to first order in $1/m_b$}{Results to first order in 1/mb}}
\label{sec:contributions}
Using standard relations explained in Section \ref{sec:differential}, we automatically obtain the decomposition of the hadronic tensor into Lorentz structure functions. Below we list the results for the resolved contributions at order $\lambda$ for the hadronic tensor. The smooth limit $q^2\rightarrow 0$ reproduces the known results from Ref.~\cite{Benzke:2010js}. In the following we state our results for the CP-averaged rate, i.e. the result
factorizes into the real part of the strong matrix element and the weak prefactors.
We have three resolved operator combinations to order $1/m_b$, namely ${\cal O}_{7\gamma} - {\cal O}_{8g}$,\, ${\cal O}_{8g} - {\cal O}_{8g}$, and ${\cal O}_1 - {\cal O}_{7\gamma}$.
Within the ${\cal O}_{7\gamma} - {\cal O}_{8g}$ contribution, there are three cut diagrams. Maintaining the same notation as in Ref.~\cite{Benzke:2010js}, we have for the two cuts of the diagram with a hard-collinear gluon (see the left diagrams in Figs.~\ref{Fig:diagrams0708} and~\ref{Fig:diagrams0708soft})
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale=0.4]{Q7Q8SCETbc}\includegraphics[scale=0.4]{Q7Q8SCETcc}
\caption{Three cut diagrams arising from the matching of the ${\cal O}_{7\gamma} - {\cal O}_{8g}$ contribution onto SCET. Red indicates soft fields, black (anti-) hard-collinear fields. Hard fields are already integrated out. The left diagram with a hard-collinear gluon allows for two different cuts, while the diagram with the anti-hard-collinear gluon allows for one cut only.} \label{Fig:diagrams0708}
\end{center}
\end{figure*}
\begin{align}
d\Gamma_{78}^{(b)} &= -\frac{1}{m_b} \,\text{Re} \big[ \hat \Gamma_{7 8} \big] d\Lambda_{\alpha \beta}\, 16\pi\alpha_s\, e_q\, m_b\, n\cdot q\,
(g^{\alpha \beta}_\perp + i \epsilon_\perp^{\alpha\beta})\,\text{Re}\! \int d\omega \delta( \omega + m_b - n\cdot q) \nonumber \\
&\phantom{=\,}\,\times \int\frac{d\omega_1}{\omega_1 + \bar n \cdot q + i \epsilon}\, \frac{d \omega_2}{\omega_2 - i \epsilon} \left [\bar g_{78} (\omega, \omega_1,\omega_2,\mu) - \bar g_{78}^\text{cut} (\omega, \omega_1,\omega_2,\mu)\right]\,.
\end{align}
Here the hadronic functions $g_{7 8}$ are defined in exactly the same way as in the case of $B\rightarrow X_s \gamma$:
\begin{align}
&\bar g_{78}(\omega,\omega_1,\omega_2,\mu)
= \int\frac{dr}{2\pi}\,e^{-i\omega_1 r}\!\int\frac{du}{2\pi}\,e^{i\omega_2 u}\!
\int\frac{dt}{2\pi}\,e^{-i\omega t} \nonumber \\
&\times
\frac{\langle\bar B| \big(\bar h S_n\big)(tn)\,T^A\,
\overline{\Gamma}_n\,\big(S_n^\dagger s\big)(un)
\big(\bar s S_{\bar n}\big)(r\bar n)\,\Gamma_{\bar n}\,
\big(S_{\bar n}^\dagger S_{n}\big)(0)\,T^A
\big(S_n^\dagger h\big)(0) |\bar B\rangle}{2M_B} \nonumber \\
&\bar g_{78}^{\rm cut}(\omega,\omega_1,\omega_2,\mu)
= \int\frac{dr}{2\pi}\,e^{-i\omega_1 r}\!
\int\frac{du}{2\pi}\,e^{i\omega_2 u}\!
\int\frac{dt}{2\pi}\,e^{-i\omega t} \nonumber \\
& \times
\frac{\langle\bar B| \big(\bar h S_n\big)(tn)\,T^A\,
\overline{\Gamma}_n\,\big(S_n^\dagger s\big)((t+u)n)
\big(\bar s S_{\bar n}\big)(r\bar n)\,\Gamma_{\bar n}\,
\big(S_{\bar n}^\dagger S_{n}\big)(0)\,T^A
\big(S_n^\dagger h\big)(0) |\bar B\rangle}{2M_B} \,,
\label{eq:g78def1}
\end{align}
{where $S_n$ and $S_{\bar n}$ are soft Wilson lines connecting the soft fields in the matrix element and thereby ensuring gauge invariance. The exact space-time structure of the operator is depicted in the left of Fig.~\ref{Fig:diagrams0708soft}.}
However, for the cut diagram with an anti-hard-collinear gluon (see the right diagrams in Figs.~\ref{Fig:diagrams0708}
and~\ref{Fig:diagrams0708soft}), we obtain
\begin{align}
d\Gamma_{78}^{(c)} &= \frac{1}{m_b} \text{Re} \big[ \hat \Gamma_{7 8} \big]d\Lambda_{\alpha \beta}\, 4\pi\alpha_s\, m_b\, n\cdot q\,
(g^{\alpha \beta}_\perp - i \epsilon_\perp^{\alpha \beta}) \,\text{Re}\!\int d\omega \delta( \omega + m_b - n\cdot q) \nonumber\\
&\phantom{=\,}\,\times \int\frac{d\omega_1}{\omega_1 - \omega_2 + \bar n \cdot q + i \epsilon}\, d \omega_2
\Big [ \left(\frac{1}{\omega_1 +\bar n \cdot q+ i \epsilon} + \frac{1}{\omega_2 - \bar n \cdot q - i \epsilon} \right) g_{78}^{(1)} (\omega, \omega_1,\omega_2,\mu) \nonumber \\
&\phantom{\times\Big[ }- \left(\frac{1}{\omega_1 + \bar n\cdot q + i
\epsilon} - \frac{1}{\omega_2 - \bar n \cdot q - i \epsilon} \right)
g_{78}^{(5)} (\omega, \omega_1,\omega_2,\mu)\Big]\,.
\end{align}
Again we find the same shape functions, which are defined as
\begin{align}
& g_{78}^{(1)}(\omega,\omega_1,\omega_2,\mu)
= \int\frac{dr}{2\pi}\,e^{-i\omega_1 r}\!
\int\frac{du}{2\pi}\,e^{i\omega_2 u}\!
\int\frac{dt}{2\pi}\,e^{-i\omega t} \nonumber\\
&\times
\frac{\langle\bar B| \big(\bar h S_n\big)(tn)
\big(S_n^\dagger S_{\bar n}\big)(0)\,T^A\,
\slashed{\bar n} (1+\gamma_5)\,
\big(S_{\bar n}^\dagger h\big)(0)\,{\bf T}
\sum{}_q\,e_q\,\big(\bar q S_{\bar n}\big)(r\bar n)\,
\slashed{\bar n} \,T^A
\big(S_{\bar n}^\dagger q\big)(u\bar n)
|\bar B\rangle}{2M_B} \,, \nonumber\\
&g_{78}^{(5)}(\omega,\omega_1,\omega_2,\mu)
= \int\frac{dr}{2\pi}\,e^{-i\omega_1 r}\!
\int\frac{du}{2\pi}\,e^{i\omega_2 u}\!
\int\frac{dt}{2\pi}\,e^{-i\omega t} \nonumber \\
&\times
\frac{\langle\bar B| \big(\bar h S_n\big)(tn)
\big(S_n^\dagger S_{\bar n}\big)(0)\,T^A
\slashed{\bar n} (1+\gamma_5)\,
\big(S_{\bar n}^\dagger h\big)(0)\,{\bf T}
\sum{}_q\,e_q\,\big(\bar q S_{\bar n}\big)(r\bar n)\,
\slashed{\bar n}\gamma_5T^A
\big(S_{\bar n}^\dagger q\big)(u\bar n)
|\bar B\rangle}{2M_B} \,.\label{eq:g78def2}
\end{align}
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale=0.4]{Q7Q8HQETbc}\includegraphics[scale=0.4]{Q7Q8HQETcc}
\caption{Diagrams arising from the matching of the two ${\cal O}_{7\gamma} - {\cal O}_{8g}$ contributions onto HQET. Red indicates soft fields. Integrating out (anti-) hard-collinear fields leads to non-localities which are denoted by dashed lines.
} \label{Fig:diagrams0708soft}
\end{center}
\end{figure*}
{It is clear that the difference from the radiative decay is introduced through the non-vanishing $\bar n\cdot q$, which shifts the small component of the anti-hard-collinear propagator corresponding to the anti-hard-collinear jet function. By the same argument, we can already see that the direct contributions will not be affected in such a way, since $\bar n\cdot q$ is suppressed relative to the large component of any hard-collinear propagator.}
For the double resolved ${\cal O}_{8g} - {\cal O}_{8g}$ contribution involving twice the QCD dipole operator
(see diagrams in Fig.~\ref{Fig:diagrams0808}) we find
\begin{align}\label{O8O8}
d\Gamma_{88} &= \frac{1}{m_b} \,\text{Re}\big[\hat \Gamma_{8 8}\big]\,d\Lambda_{\alpha \beta}\,8\pi\alpha_s\,e_s^2\,m_b^2\,
(g^{\alpha \beta}_\perp + i
\epsilon_\perp^{\alpha \beta}) \,\text{Re}\!\int d\omega \delta( \omega + m_b - n\cdot
q) \nonumber \\
&\phantom{=\,}\,\times \int \frac{d\omega_1}{\omega_1 + \bar n \cdot q + i \epsilon}\, \frac{d \omega_2}{\omega_2 +\bar n \cdot q - i \epsilon} \bar g_{88} (\omega, \omega_1,\omega_2,\mu) \,.
\end{align}
Here the shape function $\bar g_{88}$ is again defined as in the radiative decay
\begin{align}
& \bar g_{88}(\omega,\omega_1,\omega_2,\mu)
= \int\frac{dr}{2\pi}\,e^{-i\omega_1 r}\!\int\frac{du}{2\pi}\,e^{i\omega_2 u}\!
\int\frac{dt}{2\pi}\,e^{-i\omega t} \label{eq:g88def}\\
& \times
\frac{\langle\bar B| \big(\bar h S_n\big)(tn)\,T^A
\big(S_n^\dagger S_{\bar n}\big)(tn)\,
\overline{\Gamma}_{\bar n}
\big(S_{\bar n}^\dagger s\big)(tn+u\bar n)
\big(\bar s S_{\bar n}\big)(r\bar n)
\Gamma_{\bar n}
\big(S_{\bar n}^\dagger S_{n}\big)(0)\,T^A
\big(S_n^\dagger h\big)(0)
|\bar B\rangle}{2M_B} \,. \nonumber
\end{align}
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale=0.4]{Q8Q8SCETc}\includegraphics[scale=0.4]{Q8Q8HQETc}
\caption{The cut diagram arising from the matching of the ${\cal O}_{8g} - {\cal O}_{8g}$ contribution onto SCET
(left) and onto HQET (right). Red indicates soft fields, black (anti-) hard-collinear fields. Hard fields are already integrated out. Dashed lines correspond to non-localities.} \label{Fig:diagrams0808}
\end{center}
\end{figure*}
As mentioned already in Section 2.2, there is a subtlety concerning the convolution integral in Eq.~(\ref{O8O8}). When calculating the asymptotic behaviour of the soft function for $\omega_{1,2} \gg \Lambda_{\rm QCD}$, one finds that the convolution integrals are UV divergent \cite{Benzke:2010js}. This divergence is mirrored by an IR divergence in the direct ${\cal O}_{8g}-{\cal O}_{8g}$ contribution. In order to properly define all quantities it is necessary to split the convolution integrals in Eq.~(\ref{O8O8}) into a UV part with $\omega_{1,2} > \Lambda_{{\rm UV}}$ and an IR part with $\omega_{1,2} < \Lambda_{{\rm UV}}$. In the sum of direct and resolved contributions the divergence cancels; there remains, however, a logarithmic dependence on the parameter $\Lambda_{{\rm UV}}$ {in the perturbative part.}
For the ${\cal O}_1 - {\cal O}_{7\gamma}$ contribution (see Fig.~\ref{Fig:diagrams0107}) we have explicitly derived
\begin{align}
d\Gamma_{1 7} =& \frac{1}{m_b}\,\text{Re}\Big[\hat\Gamma_{1 7}\frac{-\lambda_t^*\lambda_c}{|\lambda_t|^2}\Big]\,
d\Lambda_{\alpha\beta}\, e_c\, (n\cdot q)^2\,
\text{Re}\int d\omega \delta( \omega + m_b - n\cdot q)\int d\omega_1 \frac{1}{\omega_1 + i \epsilon}\nonumber\\
&\times\frac{1}{\omega_1}\left[
(\overline{n}\cdot q+\omega_1)\left(1-F\left(\frac{m_c^2}{n\cdot q(\overline{n}\cdot q+\omega_1)}\right)\right)
-\overline{n}\cdot q \left(1-F\left(\frac{m_c^2}{n\cdot q\overline{n}\cdot q}\right)\right) \right.\nonumber\\
&\left.-\overline{n}\cdot q \left( G\left(\frac{m_c^2}{n\cdot q(\overline{n}\cdot q+\omega_1)}\right) - G\left(\frac{m_c^2}{n\cdot q\overline{n}\cdot q}\right)\right)\right]\nonumber\\
&\times\int \!\frac{dt}{2\pi} e^{-i \omega t}\!\!\int \!\frac{dr}{2\pi} e^{-i \omega_1 r}
\frac{\langle B | \bar h(nt) \slashed{\bar n} [1+\gamma^5]\frac{i}{2} [\gamma^\mu_\perp, \gamma^\beta_\perp]\gamma_\perp^\alpha \bar{n}^\kappa g G_{\mu \kappa}(\bar n r) h(0) | B \rangle}{2M_B}\,.
\end{align}
The decomposition of the Lorentz structure has been carried out above (see Section~\ref{sec:example}); inserting it yields
\begin{align}
d\Gamma_{1 7} =& \frac{1}{m_b}\,\text{Re}\Big[\hat\Gamma_{1 7}\frac{-\lambda_t^*\lambda_c}{|\lambda_t|^2}\Big]
\frac{\alpha}{24\pi^3}\,dn\cdot q\,d\bar n\cdot q\,\frac{(n\cdot q)^3}{\bar n\cdot q}\,
\text{Re}\int d\omega \delta( \omega + m_b - n\cdot q)\int d\omega_1 \frac{1}{\omega_1 + i \epsilon}\nonumber\\
&\times\frac{1}{\omega_1}\left[
(\overline{n}\cdot q+\omega_1)\left(1-F\left(\frac{m_c^2}{n\cdot q(\overline{n}\cdot q+\omega_1)}\right)\right)
-\overline{n}\cdot q \left(1-F\left(\frac{m_c^2}{n\cdot q\overline{n}\cdot q}\right)\right) \right.\nonumber\\
&\left.-\overline{n}\cdot q \left( G\left(\frac{m_c^2}{n\cdot q(\overline{n}\cdot q+\omega_1)}\right) - G\left(\frac{m_c^2}{n\cdot q\overline{n}\cdot q}\right)\right)\right]
\,g_{17}(\omega,\omega_1,\mu)\,,
\end{align}
with
\begin{eqnarray}
g_{17}(\omega,\omega_1,\mu)
&=& \int\frac{dr}{2\pi}\,e^{-i\omega_1 r}\!
\int\frac{dt}{2\pi}\,e^{-i\omega t} \\
&&\times \frac{\langle\bar B| \big(\bar h S_n\big)(tn)\,
\slashed{\bar n} (1+\gamma_5) \big(S_n^\dagger S_{\bar n}\big)(0)\,
i\gamma_\alpha^\perp\bar n_\beta\,
\big(S_{\bar n}^\dagger\,g G_s^{\alpha\beta} S_{\bar n}
\big)(r\bar n)\,
\big(S_{\bar n}^\dagger h\big)(0) |\bar B\rangle}{2M_B} \,.
\nonumber
\end{eqnarray}
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale=0.4]{Q1Q7SCETc}\includegraphics[scale=0.4]{Q1Q7HQETc}
\caption{The cut diagram arising from the matching of the ${\cal O}_{1} - {\cal O}_{7\gamma}$ contribution onto SCET
(left) and onto HQET (right). Red indicates soft fields, black (anti-) hard-collinear fields. Hard fields are already integrated out. Dashed lines correspond to non-localities.} \label{Fig:diagrams0107}
\end{center}
\end{figure*}
Finally some remarks are in order:
\begin{itemize}
\item Having listed above our results for the triple differential decay rate with the calculated phase space inserted, we find that no term odd in the variable $z$ appears. Thus, there is no resolved contribution to the forward-backward asymmetry at first subleading order.
\item Strictly speaking the CP averaging with the real part prescription is only valid because no linear term in $z$ appears, as for the CP conjugated rate we would have to replace $z\rightarrow -z$.
\item All diagrams show that, if we considered the lepton momenta as hard, the resolved contributions would not exist: the hard momentum of the leptons would also imply a hard momentum of the intermediate parton, which would be integrated out at the hard scale, so that the virtual photon would be directly connected to the effective electroweak interaction vertex.
\item As the various results show, the shape function is non-local in both light cone directions. Thus, the resolved contributions stay non-local even when the hadronic mass cut is relaxed.
{In that case $n\cdot P_X=M_B-n\cdot q$ is not necessarily small anymore. We can therefore expand the shape function in powers of $\Lambda_\text{QCD}/(m_b-n\cdot q)$, which leads to a series of matrix elements that are local in the $n$ direction. However, the non-locality in the $\bar n$ direction is retained.}
In this sense the resolved contributions represent an irreducible uncertainty within the inclusive decay $\bar B \to X_s \ell^+\ell^-$.
\end{itemize}
\section{Numerical Analysis } \label{sec:numerics}
{First we discuss our input parameters. For the bottom-quark mass we use the low-scale subtracted heavy-quark mass defined in the shape-function scheme: $m_b = 4.65\, \text{GeV}$~\cite{Bosch:2004th}. However, we vary the mass between the running mass in the MS scheme, $\overline{m}_b^{\rm MS}(m_b) = 4.2\, \text{GeV}$, and the pole mass, $m_b^{\rm pole} = 4.8\, \text{GeV}$. The charm-quark mass enters as a running mass in the charm-penguin diagrams with a soft-gluon emission (within the interference of ${\cal O}_1$ with ${\cal O}_{7\gamma}$).
This contribution lives at the hard-collinear scale $\mu_{\rm hc} = \sqrt{m_b \Lambda_{\rm QCD}}$; thus we choose $\overline{m}_c^{\rm MS}(\mu_{\rm hc} = 1.5\, \text{GeV}) = 1.131\, \text{GeV}$. The variation of $m_b$ implies
$1.45\, \text{GeV} < \mu_{\rm hc} < 1.55\, \text{GeV} $. We conservatively vary the hard-collinear scale
between $\mu_{\rm hc} = 1.4\, \text{GeV}$ and $\mu_{\rm hc} = 1.6\, \text{GeV}$.
Regarding the HQET parameters we adopt the choices of Ref.~\cite{Benzke:2010js}: we use $\lambda_2=(0.12\pm 0.02)$\,GeV$^2$. For the first inverse moment of the $B$-meson light-cone distribution amplitude, we take the range
$0.25\,\mbox{GeV}<\lambda_B<0.75\,\mbox{GeV}$. For the parameter $F$ we use the relation $F=f_B\sqrt{M_B}$, and with $f_B=(193\pm 10)$\,MeV we finally obtain $0.177\,{\rm GeV}^3<F^2<0.217\,{\rm GeV}^3$.
We use NLO Wilson coefficients. However, in the BBL basis used in our analysis, the coefficients $C_{7\gamma}$ and $C_{8g}$ are only known to LO. We cross-checked the numerical impact against using the
CMM basis~\cite{Chetyrkin:1996vx}, for which all coefficients are known at least to NLO accuracy. We find that the numerical effect of the change is negligible in view of the other uncertainties within our analysis.}
\subsection{\texorpdfstring{Interference of ${\cal O}_1$ with ${\cal O}_{7\gamma}$}{Interference of Q1 with Q7}}
We are interested in the relative magnitude of the resolved contributions compared to the total decay rate, {i.e. the leading direct contributions to the decay rate, which one would also obtain if the decay rate were calculated within the OPE}
\begin{equation}
{\mathcal F}(q_\mathrm{min}^2,q_\mathrm{max}^2,M_{X,\mathrm{max}}^2)
=\frac{\Gamma_\mathrm{resolved}(q_\mathrm{min}^2,q_\mathrm{max}^2,M_{X,\mathrm{max}}^2)}{\Gamma_\mathrm{OPE}(q_\mathrm{min}^2,q_\mathrm{max}^2,M_{X,\mathrm{max}}^2)}\,,
\label{eqn:def}
\end{equation}
where the rate $\Gamma_\mathrm{OPE}$ is given by
\begin{align}
\Gamma_\mathrm{OPE} =&\, \frac{G_F^2\alpha m_b^5}{32\pi^4}\,|V_{tb}^*V_{ts}|^2\frac{1}{3}\frac{\alpha}{\pi}\int\frac{d\bar n\cdot q}{\bar n \cdot q}
\left(1-\frac{\bar n\cdot q}{m_b}\right)^2\nonumber\\
&\,\Bigg[ C_{7\gamma}^2\Bigg(1+\frac{1}{2}\frac{\bar n\cdot q}{m_b}\Bigg)
+(C_9^2+C_{10}^2)\Bigg(\frac{1}{8}\frac{\bar n\cdot q}{m_b}+\frac{1}{4}\left(\frac{\bar n\cdot q}{m_b}\right)^2\Bigg)
+C_{7\gamma}C_9\frac{3}{2}\frac{\bar n\cdot q}{m_b}\Bigg]\nonumber\\
\equiv&\,\frac{G_F^2\alpha m_b^5}{32\pi^4}\,|V_{tb}^*V_{ts}|^2\frac{1}{3}\frac{\alpha}{\pi}\, C_{\rm OPE}\,.
\end{align}
The last line defines the quantity $C_{\rm OPE}$. The integration limits are specified below.
The first term in the square brackets is the leading power in the $1/m_b$ expansion and corresponds to the direct contribution due to the interference of ${\cal O}_{7\gamma}$ with itself. The other terms are formally suppressed { in the shape function region in which we evaluate these direct contributions}. But
the large magnitude of the Wilson coefficients, $|C_{9/10}|\sim 13\,|C_{7\gamma}|$, necessitates their inclusion in our uncertainty estimate.
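For orientation, $C_{\rm OPE}$ can be evaluated numerically with a few lines of code. The following Python sketch is our own illustration; the Wilson-coefficient values are indicative numbers only (an assumption on our part, standing in for the NLO coefficients of the analysis):
\begin{verbatim}
from scipy.integrate import quad

MB, mb = 5.279, 4.65             # GeV (assumed values)
q2min, q2max = 1.0, 6.0          # GeV^2
C7, C9, C10 = -0.30, 4.2, -4.3   # indicative values only

def c_ope_integrand(nbq):
    # integrand of the nbar.q integral defining C_OPE above
    x = nbq/mb
    return (1 - x)**2/nbq*(C7**2*(1 + 0.5*x)
                           + (C9**2 + C10**2)*(x/8 + x**2/4)
                           + 1.5*C7*C9*x)

C_OPE, _ = quad(c_ope_integrand, q2min/MB, q2max/MB)
\end{verbatim}
The relative sizes of the three terms directly exhibit the partial compensation of the formal power suppression by the large ratio $|C_{9/10}/C_{7\gamma}|$.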
For the resolved contribution from the interference of ${\cal O}_1$ with ${\cal O}_{7\gamma}$ we find
\begin{align}
&{\mathcal F}_{17}
=\frac{1}{m_b^4}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c\,
\mathrm{Re} \int d\overline{n}\cdot q\,dn\cdot q\, \frac{(n\cdot q)^3}{\overline{n}\cdot q}
\int_{-p_+}^{\bar{\Lambda}} d\omega\,\delta(m_b - n\cdot q +\omega)
\int_{-\infty}^{+\infty}\frac{d\omega_1}{\omega_1+i\epsilon}\nonumber\\
&\frac{1}{\omega_1}\left[
(\overline{n}\cdot q+\omega_1)\left(1-F\left(\frac{m_c^2}{n\cdot q(\overline{n}\cdot q+\omega_1)}\right)\right)
-\overline{n}\cdot q \left(1-F\left(\frac{m_c^2}{n\cdot q\overline{n}\cdot q}\right)\right)\right.\nonumber\\
&\left.-\overline{n}\cdot q \left( G\left(\frac{m_c^2}{n\cdot q(\overline{n}\cdot q+\omega_1)}\right) - G\left(\frac{m_c^2}{n\cdot q\overline{n}\cdot q}\right)\right)\right]
g_{17}(\omega,\omega_1,\mu)\,,
\label{eqn:f17}
\end{align}
where we have neglected terms proportional to $V_{ub}$. Here $p_+ \equiv n\cdot p = m_b - n\cdot q$, $\bar \Lambda = M_B - m_b$, and
the penguin functions $F$ and $G$ are defined in Eq.~(\ref{FF}).
The integration limits of the $n\cdot q$ and $\bar n\cdot q$ integrations can be read off from Fig.~\ref{Fig:scaling_law}. To order
$\lambda^2$ they are
\begin{align}
&\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{min}^2}{M_B}+\frac{M_{X,\mathrm{max}}^2q_\mathrm{min}^2}{M_B^3}}d\overline{n}\cdot q
\int_{\frac{q_\mathrm{min}^2}{\overline{n}\cdot q}}^{M_B} dn\cdot q \nonumber\\
+&\int_{\frac{q_\mathrm{min}^2}{M_B}+\frac{M_{X,\mathrm{max}}^2q_\mathrm{min}^2}{M_B^3}}^{\frac{q_\mathrm{max}^2}{M_B}}d\overline{n}\cdot q
\int_{M_B-\frac{M_{X,\mathrm{max}}^2}{M_B-\overline{n}\cdot q}}^{M_B} dn\cdot q \nonumber \\
+&\int_{\frac{q_\mathrm{max}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}+\frac{M_{X,\mathrm{max}}^2q_\mathrm{max}^2}{M_B^3}}d\overline{n}\cdot q
\int_{M_B-\frac{M_{X,\mathrm{max}}^2}{M_B-\overline{n}\cdot q}}^{\frac{q_\mathrm{max}^2}{\overline{n}\cdot q}}dn\cdot q
\end{align}
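These limits follow from the exact relations (using $q_\perp=0$)
\begin{equation}
q^2=n\cdot q\,\bar n\cdot q\,,\qquad M_X^2=(M_B-n\cdot q)(M_B-\bar n\cdot q)\,,
\end{equation}
so that the hadronic mass cut translates into $n\cdot q\ge M_B-M_{X,\mathrm{max}}^2/(M_B-\overline{n}\cdot q)$, while the cuts on $q^2$ bound $n\cdot q$ by $q_\mathrm{min/max}^2/\overline{n}\cdot q$; the boundaries between the three regions are the values of $\overline{n}\cdot q$ at which these constraints cross, expanded to order $\lambda^2$.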
Since the integrand of the $\overline{n}\cdot q$ integration is not singular, the first and third lines do not give a leading-power contribution.
Note that the integration limits of $n\cdot q$ are of $\mathcal{O}(1)$ in all terms, as they have to be, but the integration region is only of $\mathcal{O}(\lambda)$. To illustrate this, we substitute $n\cdot q\rightarrow m_b-p_+$ and, for convenience, reverse the sign, $p_+\rightarrow -p_+$. The integration in the first line of equation (\ref{eqn:f17}) can then be written as
\begin{equation}
\dots\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}}d\overline{n}\cdot q
\int_{\bar{\Lambda}-\frac{M_{X,\mathrm{max}}^2}{M_B-\overline{n}\cdot q}}^{\bar{\Lambda}} dp_+
\, \frac{(m_b+p_+)^3}{\overline{n}\cdot q} \int_{p_+}^{\bar{\Lambda}} d\omega\,\delta(\omega-p_+)\dots\,.
\end{equation}
Changing the order of the $\omega$ and $p_+$ integrations,
\begin{equation}
\int_{\bar{\Lambda}-\frac{M_{X,\mathrm{max}}^2}{M_B-\overline{n}\cdot q}}^{\bar{\Lambda}} dp_+
\int_{p_+}^{\bar{\Lambda}} d\omega
=\int_{\bar{\Lambda}-\frac{M_{X,\mathrm{max}}^2}{M_B-\overline{n}\cdot q}}^{\bar{\Lambda}} d\omega
\int_{\bar{\Lambda}-\frac{M_{X,\mathrm{max}}^2}{M_B-\overline{n}\cdot q}}^{\omega} dp_+
\end{equation}
and performing the $p_+$ integration to eliminate the $\delta$ distribution, yields for ${\mathcal F}_{17}$
\begin{align}
&{\mathcal F}_{17}
=\frac{1}{m_b^4}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c\,
\mathrm{Re} \int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}}d\overline{n}\cdot q
\, \frac{m_b^3}{\overline{n}\cdot q}
\int_{-\infty}^{+\infty}\frac{d\omega_1}{\omega_1+i\epsilon}\,\nonumber\\
&\frac{1}{\omega_1}\left[
(\overline{n}\cdot q+\omega_1)\left(1-F\left(\frac{m_c^2}{m_b(\overline{n}\cdot q+\omega_1)}\right)\right)
-\overline{n}\cdot q \left(1-F\left(\frac{m_c^2}{m_b\overline{n}\cdot q}\right)\right)\right.\nonumber\\
&\left.-\overline{n}\cdot q \left( G\left(\frac{m_c^2}{m_b(\overline{n}\cdot q+\omega_1)}\right) - G\left(\frac{m_c^2}{m_b\overline{n}\cdot q}\right)\right)\right]
\int_{\bar{\Lambda}-\frac{M_{X,\mathrm{max}}^2}{M_B-\overline{n}\cdot q}}^{\bar{\Lambda}} d\omega\, g_{17}(\omega,\omega_1,\mu)\,.
\label{eqn:f17mb}
\end{align}
From this we define
\begin{equation}
h_{17}(M_{X,\mathrm{max}},\omega_1,\mu) = \int_{\bar{\Lambda}-\frac{M_{X,\mathrm{max}}^2}{M_B-\overline{n}\cdot q}}^{\bar{\Lambda}} d\omega\, g_{17}(\omega,\omega_1,\mu)\,.
\end{equation}
Since the soft function has support only for $\omega\sim\Lambda_\mathrm{QCD}$, we can take the limit $M_{X,\mathrm{max}}\rightarrow M_B$ to get\footnote{Note that this does not work for $g_{88}$, since we would put two light-quark fields at a light-like distance, which yields a divergent propagator.}
\begin{equation}
h_{17}(\omega_1,\mu) = \int\frac{dr}{2\pi}\,e^{-i\omega_1r}\,\frac{\langle B \!\mid\, \bar{h}(0)\,\slashed{\overline{n}}\, i \gamma_\alpha^\perp\overline{n}_\beta\, gG^{\alpha\beta}(r\overline{n})\,h(0)\,\mid\! B \rangle}{2M_B}\,.
\end{equation}
{Knowing the explicit form of the HQET matrix element we can derive general properties of the shape function $h_{17}$. Following the arguments given in Ref.~\cite{Benzke:2010js},
one can derive from PT invariance that the function is real and even in $\omega_1$.
One can also explicitly derive the general normalization of the soft function
\begin{equation}
\int^\infty_{-\infty} d\omega_1 h_{17}(\omega_1,\mu) = 2\, \lambda_2\,.
\end{equation}
Finally, the soft function $h_{17}$ should not have any significant structure (maxima or zeros) outside the hadronic range, and its values should be of hadronic size.}
In summary, we can write the relative contribution due to the interference of ${\cal O}_1$ with ${\cal O}_{7\gamma}$ as
\begin{align}
&{\mathcal F}_{17}
=\frac{1}{m_b}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c\,
\int_{-\infty}^{+\infty}d\omega_1\,
J_{17}(q_\mathrm{min}^2,q_\mathrm{max}^2,\omega_1)\,
h_{17}(\omega_1,\mu)
\label{eqn:full}
\end{align}
with
\begin{align}
&J_{17}(q_\mathrm{min}^2,q_\mathrm{max}^2,\omega_1) =
\mathrm{Re} \frac{1}{\omega_1+i\epsilon}
\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}} \frac{d\overline{n}\cdot q}{\overline{n}\cdot q}\,
\frac{1}{\omega_1} \nonumber\\
&\left[
(\overline{n}\cdot q+\omega_1)\left(1-F\left(\frac{m_c^2}{m_b(\overline{n}\cdot q+\omega_1)}\right)\right)
-\overline{n}\cdot q \left(1-F\left(\frac{m_c^2}{m_b\overline{n}\cdot q}\right)\right)\right.\nonumber\\
&\left.-\overline{n}\cdot q \left( G\left(\frac{m_c^2}{m_b(\overline{n}\cdot q+\omega_1)}\right) - G\left(\frac{m_c^2}{m_b\overline{n}\cdot q}\right)\right)\right]\,.
\end{align}
For the standard values of $q_\mathrm{min}^2$ and $q_\mathrm{max}^2$, the function $J_{17}$ is plotted in
Fig.~\ref{fig:J17}.
\begin{figure}[hpt]
\centering
\includegraphics[width=120mm]{J17plot}
\caption{$J_{17}$ for $q_\mathrm{min}^2=1\,$GeV$^2$ and $q_\mathrm{max}^2=6\,$GeV$^2$, together with the model function of Eq.~(\ref{eqn:h17b}).}\label{fig:J17}
\end{figure}
It is largest around $\omega_1=0$.
As a first trial for a model function for $h_{17}$, we use a Gaussian:
\begin{equation}
h_{17}(\omega_1)=\frac{2\lambda_2}{\sqrt{2\pi}\sigma}e^{-\frac{\omega_1^2}{2\sigma^2}}\,,
\end{equation}
with $\sigma =0.5\,$GeV as a typical hadronic scale. This model function has all the properties one derives from the explicit
HQET matrix element. Calculating the convolution integral, we find
\begin{align}
{\mathcal F}_{17\mathrm{Gaussian}}=\frac{1}{m_b}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c
\,(-0.252
\,\mathrm{GeV})\,.
\end{align}
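This number can be reproduced by a direct numerical integration. The following Python sketch is again our own illustration (penguin functions as in the sketch above, with the branch choice taken as an assumption; near $\omega_1=0$ the square bracket vanishes like $\omega_1^2$, so the principal-value integrand is finite, though numerically delicate):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

MB, mb, mc, lam2 = 5.279, 4.65, 1.131, 0.12   # GeV, GeV, GeV, GeV^2
q2min, q2max = 1.0, 6.0                       # GeV^2

def F(x):
    s = np.sqrt(4*(x - 1e-12j) - 1)           # branch choice: an assumption
    return 4*x*np.arctan(1/s)**2

def G(x):
    s = np.sqrt(4*(x - 1e-12j) - 1)
    return 2*s*np.arctan(1/s) - 2

def bracket(nbq, w1):
    # square bracket of the master formula; real part, O(w1^2) at w1 -> 0
    a, b = mc**2/(mb*(nbq + w1)), mc**2/(mb*nbq)
    return ((nbq + w1)*(1 - F(a)) - nbq*(1 - F(b)) - nbq*(G(a) - G(b))).real

def J17(w1):
    val, _ = quad(lambda nbq: bracket(nbq, w1)/nbq, q2min/MB, q2max/MB)
    return val/w1**2                          # finite since bracket ~ w1^2

def h17(w1, sigma=0.5):
    # Gaussian model, normalized to 2*lambda_2
    return 2*lam2/(np.sqrt(2*np.pi)*sigma)*np.exp(-0.5*(w1/sigma)**2)

conv, _ = quad(lambda w1: J17(w1)*h17(w1), -6.0, 6.0, points=[0.0], limit=200)
print(conv)   # convolution in GeV; compare with -0.252 GeV quoted above
\end{verbatim}
Replacing \texttt{h17} by the oscillating model of Eq.~(\ref{eqn:h17b}) below gives the corresponding values for that choice.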
Using a smaller $\sigma=0.1\,$GeV leads to $-0.304\,$GeV. We can express our numbers in percentages
\begin{align}
{\mathcal F}_{17\mathrm{exp}}\approx&\, +1.9\,\%\,.
\end{align}
Using a Gaussian for the soft function only yields negative numbers (positive percentages) for the convolution integral.
Thus, this model function does not lead to a conservative bound on the size of ${\mathcal F}_{17}$.
Using the same function as in Ref.~\cite{Benzke:2010js}
\begin{equation}
h_{17}(\omega_1)=\frac{2\lambda_2}{\sqrt{2\pi}\sigma}\frac{\omega_1^2-\Lambda^2}{\sigma^2-\Lambda^2}e^{-\frac{\omega_1^2}{2\sigma^2}}\,,
\label{eqn:h17b}
\end{equation}
we can also obtain positive values for this expression. If $\Lambda$ and $\sigma$ are chosen of order $\Lambda_{\rm QCD}$, all general properties derived for $h_{17}$ are again fulfilled. For the parameter choice $\sigma=0.5\,$GeV and $\Lambda=0.425\,$GeV one finds
\begin{equation} \label{convolution17}
{\mathcal F}_{17}=\frac{1}{m_b}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c\,(+0.075\,\mathrm{GeV}).
\end{equation}
For a different parameter choice, $\Lambda=0.575\,$GeV, one finds on the other hand
\begin{equation}
{\mathcal F}_{17}=\frac{1}{m_b}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c\,(-0.532\,\mathrm{GeV})\,,
\end{equation}
which leads us to the conservative estimate
\begin{equation}
{\mathcal F}_{17}\in [-0.5,+3.4]\,\%\,.
\end{equation}
By reducing the separation between $\Lambda$ and $\sigma$ one could reach larger values, but this would also push the values of the soft function outside the hadronic range.
As mentioned in the introduction, for the decay $\bar B \to X_s \gamma$, it is possible
to expand this non-local contribution to local operators {\it if} the charm quark is treated as heavy.
The first term in this expansion is the dominant one~\cite{Voloshin:1996gw,Ligeti:1997tc,Grant:1997ec,Buchalla:1997ky}, which corresponds to the so-called Voloshin term.
This non-perturbative correction is suppressed by $\lambda_2/m_c^2$.
But if the charm mass is assumed to scale as
$m_c^2\sim\Lambda_{\text{QCD}} m_b$, which seems a more reasonable assumption, the charm-penguin contribution must be described by the matrix element of a non-local
operator~\cite{Benzke:2010js}.
The same can be shown in the decay $\bar B \to X_s \ell^+\ell^-$.
In Ref.~\cite{Buchalla:1997ky}, the local Voloshin term was
derived from a local expansion assuming
{$\Lambda_\text{QCD}m_b/m_c^2$ to be small.}
We can rederive the leading term (according to our power counting) of their result from our general result above under the following assumptions.
Using a Gaussian as shape function and assuming it to be narrow enough,
one can expand the part of the integrand in square brackets in Eq.~(\ref{eqn:f17mb}) around $\omega_1=0$\footnote{{ The variable $(m_b \omega_1) / m_c^2$ corresponds to the parameter $t = k\cdot q / m_c^2$ in Ref.~\cite{Buchalla:1997ky} which is used there as expansion parameter. Note that we have already expanded in $\bar n\cdot q/m_b$ within the non-local contribution in order to single out the $1/m_b$ term.}}
\begin{align}
\big[\dots\big] =\,& \omega_1^2\bar n\cdot q\,\Bigg[
\frac{1}{2(\bar n\cdot q)^2} \nonumber\\
-\,&\frac{2m_c^2}{(\bar n\cdot q)^2}\frac{1}{4m_c^2-m_b\bar n\cdot q}\sqrt{\frac{4m_c^2-m_b\bar n\cdot q}{m_b\bar n\cdot q}}
\arctan\frac{1}{\sqrt{\frac{4m_c^2-m_b\bar n\cdot q}{m_b\bar n\cdot q}}}\Bigg] \nonumber\\
=&\,-\frac{m_b\omega_1^2}{12m_c^2} F_\mathrm{V}(r)\,,
\end{align}
where $ F_\mathrm{V}(r)$ is defined in Eq.~(4) of \cite{Buchalla:1997ky} with $r=q^2/(4m_c^2)$ (and is different from the function $F$ defined in Eq.~(\ref{FF})). This corresponds exactly to the leading power in $1/m_b$ of the Voloshin term for $\bar B \to X_s \ell^+ \ell^-$ given in Ref.~\cite{Buchalla:1997ky}. Since $F_\mathrm{V}(0) =1$, the limit $q^2\to0$ reproduces the Voloshin term for $\bar B \to X_s \gamma$.
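Carrying out the algebra in the brackets with $q^2=m_b\,\bar n\cdot q$ at leading power, one obtains the compact closed form
\begin{equation}
F_\mathrm{V}(r)=\frac{3}{2r}\left[\frac{1}{\sqrt{r(1-r)}}\,\arctan\sqrt{\frac{r}{1-r}}-1\right]
\end{equation}
for $0<r<1$ (with the usual analytic continuation above the threshold $r=1$), which indeed satisfies $F_\mathrm{V}(0)=1$.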
Numerically this approach is not advisable. Evaluating the leading $1/m_b$ Voloshin term yields
\begin{align}
{\mathcal F}_\mathrm{Voloshin,m_b^{-1}}=&\,\frac{1}{m_b}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c
\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}} \frac{d\overline{n}\cdot q}{\overline{n}\cdot q}\,
\left( -\frac{2 m_b \lambda_2}{12m_c^2} \right)
F_\mathrm{V}\left(\frac{m_b\bar n\cdot q}{4m_c^2}\right)\nonumber\\
=&\,\frac{1}{m_b}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c
\,(-0.306\,\mathrm{GeV})\,.
\end{align}
Compared to our final estimate, we find that the Voloshin term significantly underestimates the possible charm contributions.
For comparison we finally consider the higher orders in $1/m_b$ of the Voloshin term
derived in Ref.~\cite{Buchalla:1997ky}. They are given by
\begin{align}
{\mathcal F}_\mathrm{Voloshin}=&\,\frac{1}{m_b}\frac{C_1(\mu)}{C_{\rm OPE}} e_c
\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}} \frac{d\overline{n}\cdot q}{\overline{n}\cdot q}\,
\left( -\frac{2 m_b \lambda_2}{12m_c^2} \right)
F_\mathrm{B.I.}\left(\frac{m_b\bar n\cdot q}{4m_c^2}\right)\nonumber\\
&\,\Bigg[C_{7\gamma}(\mu)\Bigg(1+6\frac{\bar n\cdot q}{m_b}-\left(\frac{\bar n\cdot q}{m_b}\right)^2\Bigg)
+C_9(\mu)\Bigg(2\frac{\bar n\cdot q}{m_b}+\left(\frac{\bar n\cdot q}{m_b}\right)^2\Bigg)\Bigg]\nonumber\\
=&\,\frac{1}{m_b}\frac{C_1(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}} e_c
\,(+0.481\,\mathrm{GeV})\,.
\end{align}
We note that the higher orders in $\bar n\cdot q$ are numerically small, but the first subleading $C_9$ term is numerically significant, taking into account $ |C_{9/10}| \sim 13 |C_{7\gamma}|$. We also find that these
subleading contributions change the sign:
\begin{align}
{\mathcal F}_\mathrm{Voloshin,m_b^{-1}}\approx&\, +1.9\,\%\nonumber\\
{\mathcal F}_\mathrm{Voloshin}\approx&\, -3.0\,\%\,.
\end{align}
Clearly, within the Voloshin term there is a cancellation between the $C_{7\gamma}$ and the subleading
$C_9$ contribution but in our analysis in which we use $m_c^2 \sim m_b \Lambda_{\rm QCD}$ both terms get smeared out by different shape functions and, thus, the corresponding uncertainties have to be added up. These findings call for a calculation of the resolved contributions to order $1/m_b^2$ to collect all numerically relevant contributions~\cite{workinprogress}.
\subsection{\texorpdfstring{Interference of ${\cal O}_{7\gamma}$ with ${\cal O}_{8g}$}{Interference of Q7 with Q8}}
The relative uncertainty due to the interference of ${\cal O}_{7\gamma}$ and ${\cal O}_{8g}$ consists of two contributions ${\mathcal F}_{78}^{(b)}$ and ${\mathcal F}_{78}^{(c)}$.
{From the explicit form of the shape functions given in Eqs.~(\ref{eq:g78def1}) and~(\ref{eq:g78def2}) it can be deduced (see Ref.~\cite{Benzke:2010js}) that
the soft functions $\bar g_{78}$ and $g_{78}^{(1,5)}$ have support for $-\infty<\omega\le\bar\Lambda$ and $-\infty<\omega_{1,2}<\infty$, and
\begin{equation}\label{g78sym}
\int_{-\infty}^{\bar\Lambda}\!d\omega
\left[ g_{78}^{(1,5)}(\omega,\omega_1,\omega_2,\mu) \right]^*
= \int_{-\infty}^{\bar\Lambda}\!d\omega\,
g_{78}^{(1,5)}(\omega,\omega_2,\omega_1,\mu) \,.
\end{equation}
From PT invariance of the matrix element it follows that all the shape functions are real, implying that the functions
\begin{equation}
h_{78}^{(1,5)} := \int_{-\infty}^{\bar \Lambda} d\omega\, g_{78}^{(1,5)}(\omega,\omega_1,\omega_2)
\end{equation}
are symmetric under the exchange of $\omega_1$ and $\omega_2$. Moreover, one also derives from the explicit form of the shape functions that }
\begin{equation}
\int d\omega\, \bar g_{78}(\omega,\omega_1,\omega_2) = \int d\omega\, \bar g_{78}^\mathrm{cut}(\omega,\omega_1,\omega_2)\,.
\end{equation}
Thus, the contribution ${\mathcal F}_{78}^{(b)}$ vanishes.
The other contribution is given by
\begin{align}
{\mathcal F}_{78}^{(c)} = \frac{1}{m_b}\,\frac{C_{8g}(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}}\,4\pi\alpha_s(\mu)\,
&\mathrm{Re}\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}}\frac{d\bar n\cdot q}{\bar n\cdot q}
\int d\omega_1\,d\omega_2\,\frac{1}{\omega_1-\omega_2+\bar n\cdot q+i\epsilon}\nonumber\\
\Bigg[&\Bigg(\frac{1}{\omega_1+\bar n\cdot q+i\epsilon}+\frac{1}{\omega_2-\bar n\cdot q-i\epsilon}\Bigg)h_{78}^{(1)}(\omega_1,\omega_2,\mu)\nonumber\\
-&\Bigg(\frac{1}{\omega_1+\bar n\cdot q+i\epsilon}-\frac{1}{\omega_2-\bar n\cdot q-i\epsilon}\Bigg)h_{78}^{(5)}(\omega_1,\omega_2,\mu)\Bigg]\,.
\end{align}
In the vacuum insertion approximation (see again Ref.~\cite{Benzke:2010js})
\begin{equation}\label{vacuumapproximation}
h_{78}^{(1)}(\omega_1,\omega_2,\mu) = h_{78}^{(5)}(\omega_1,\omega_2,\mu) =
-e_\mathrm{spec}\,\frac{F^2(\mu)}{8}\left(1-\frac{1}{N_c^2}\right)\phi_+^B(-\omega_1,\mu)\,\phi_+^B(-\omega_2,\mu)\,,
\end{equation}
where $F=f_B\sqrt{M_B}$, $e_\mathrm{spec}$ is the charge of the $B$ meson spectator quark, and $\phi_+^B$ is the light-cone distribution amplitude (LCDA). Since the LCDAs vanish for $\omega_i\to 0$, the $\omega_i$ integrals yield
\begin{align}
-e_\mathrm{spec}\,\frac{F^2(\mu)}{8}\left(1-\frac{1}{N_c^2}\right)
(-2)\mathrm{P}\int\frac{d\omega_1}{\omega_1-\bar n\cdot q}\,\phi_+^B(-\omega_1)\,\mathrm{P}\int\frac{d\omega_2}{\omega_1-\omega_2-\bar n\cdot q}\,\phi_+^B(-\omega_2)\,.
\label{eqn:pvints}
\end{align}
In order to estimate the magnitude of this contribution we use the model for the LCDAs given in \cite{Grozin:1996pq}
\begin{equation}
\phi_+^B(\omega)=\frac{\omega}{\omega_0}e^{-\omega/\omega_0}\,,
\end{equation}
where $\omega_0=\frac{2}{3}\bar \Lambda$. Then the principal value integrals of (\ref{eqn:pvints}) can be computed analytically and we find for the uncertainty
\begin{align}
{\mathcal F}_{78}^{(c)} = \frac{1}{m_b}\,\frac{C_{8g}(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}}\,&4\pi\alpha_s(\mu)\,e_\mathrm{spec}\,
\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}}\frac{d\bar n\cdot q}{\bar n\cdot q}
\frac{F^2(\mu)}{4}\left(1-\frac{1}{N_c^2}\right)
\frac{1}{4\omega_0^3}\nonumber\\
&\Bigg[-2\omega_0-(2\bar n\cdot q+\omega_0)e^\frac{\bar n\cdot q}{\omega_0}\mathrm{Ei}\left(-\frac{\bar n\cdot q}{\omega_0}\right)+\omega_0e^{-\frac{\bar n\cdot q}{\omega_0}}\mathrm{Ei}\left(\frac{\bar n\cdot q}{\omega_0}\right)\Bigg]\,,
\label{eqn:F78}
\end{align}
where the exponential integral is defined as
\begin{equation}
\mathrm{Ei}(z)=-\mathrm{P}\int_{-z}^\infty\frac{e^{-t}}{t}\,dt\,.
\end{equation}
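The $\bar n\cdot q$ integral can be evaluated straightforwardly; the following Python sketch is our own illustration (\texttt{scipy.special.expi} implements $\mathrm{Ei}$; $\omega_0=2\bar\Lambda/3$ is the model value used here, whereas the scan over $\lambda_B$ quoted above amounts to varying this parameter):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

MB, mb = 5.279, 4.65             # GeV
w0 = 2.0*(MB - mb)/3.0           # GeV; omega_0 = 2*Lbar/3 (model parameter)
q2min, q2max = 1.0, 6.0          # GeV^2
F2, Nc = 0.197, 3                # GeV^3 (central value of F^2), colors

def integrand(nbq):
    # bracket of the formula above, including the prefactors except
    # (1/m_b)(C_8g C_7gamma/C_OPE) 4 pi alpha_s e_spec
    r = nbq/w0
    br = (-2.0*w0 - (2.0*nbq + w0)*np.exp(r)*expi(-r)
          + w0*np.exp(-r)*expi(r))
    return (F2/4.0)*(1.0 - 1.0/Nc**2)*br/(4.0*w0**3*nbq)

val, _ = quad(integrand, q2min/MB, q2max/MB)
print(val)   # GeV; to be multiplied by the prefactors of the equation above
\end{verbatim}
The precise value depends on the choices for $F^2$ and $\omega_0$, which is reflected in the range quoted below.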
{ Using our standard set of parameters, in particular the uncertainty of the parameter $F$ (see above), we integrate (\ref{eqn:F78}) numerically and find
\begin{equation} \label{convolution78}
{\mathcal F}_{78}^{(c)} \in \frac{1}{m_b}\frac{C_{8g}(\mu)C_{7\gamma}(\mu)}{C_{\rm OPE}}\,\, 4\pi\alpha_s(\mu)\,
e_{\rm spec}\, [0.058\, \mathrm{GeV},0.068\, \mathrm{GeV}]\,.
\end{equation}
We note that this estimate does not include any uncertainty due to the use of the vacuum insertion approximation in Eq.~(\ref{vacuumapproximation}). We can again express our numbers in percentages:
\begin{align}
&{\mathcal F}_{78}^{(c)} \in [-0.2,-0.1]\,\%.
\end{align}
}
\subsection{\texorpdfstring{Interference of ${\cal O}_{8g}$ with ${\cal O}_{8g}$}{Interference of Q8 with Q8}}
{The shape function $\bar g_{88}$
is more difficult to handle than the ones in the previous cases, because not much is known about it. But from its explicit form and PT invariance one can derive that $\bar g_{88}$ is real. One can show in addition that the convolution with the hard-collinear function is real (see Ref.~\cite{Benzke:2010js}). With $\bar h_{88}:= \int d \omega\, \bar g_{88}(\omega,\omega_1,\omega_2,\mu)$ we find for the convolution integral
\begin{align} \label{O8O82}
{\cal F}_{88} &= \frac{1}{m_b} \,\frac{C_{8g}(\mu)C_{8g}(\mu)}{C_\text{OPE}}\,4\pi\alpha_s(\mu)\,e_s^2\,
\text{Re}\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}}\frac{d\bar n\cdot q}{\bar n\cdot q} \nonumber \\
&\phantom{=\,}\,\times
\int \frac{d\omega_1}{\omega_1 + \bar n \cdot q + i \epsilon}\, \frac{d \omega_2}{\omega_2 +\bar n \cdot q - i \epsilon}\, 2\bar h_{88} (\omega_1,\omega_2,\mu) \,.
\end{align}
We cannot obtain any stricter estimate of the convolution; however, having separated out factors like $e_s^2$, we estimate the quantity
\begin{align}
\Lambda(\mu) =&\,\text{Re}\int_{\frac{q_\mathrm{min}^2}{M_B}}^{\frac{q_\mathrm{max}^2}{M_B}}\frac{d\bar n\cdot q}{\bar n\cdot q}
\int \frac{d\omega_1}{\omega_1 + \bar n \cdot q + i \epsilon}\, \frac{d \omega_2}{\omega_2 +\bar n \cdot q - i \epsilon}\, 2\bar h_{88} (\omega_1,\omega_2,\mu)
\end{align}
to be of ${\cal O}(\Lambda_\text{QCD})$, so we assume $0\,\text{GeV}<\Lambda(\mu)<1\,\text{GeV}$\footnote{As mentioned below Eq.~(\ref{eq:g88def}), there is a subtlety concerning the convolution integral in Eq.~(\ref{O8O82}). The logarithmic dependence on the parameter $\Lambda_{{\rm UV}}$ is assumed to be included in our hadronic parameter $\Lambda(\mu)$, which is therefore independent of $\Lambda_{{\rm UV}}$.}.
Compared to the estimates found in Eqs.~(\ref{convolution17}) and (\ref{convolution78}), this leads to a rather conservative estimate of the convolution integral
\begin{align}
&{\mathcal F}_{88} \in [0,0.5]\,\%\,.
\end{align}
\subsection{Summary of the numerical analysis}
Our estimates of the resolved contributions at first order in $1/m_b$,
\begin{equation}
{\mathcal F}_{17}\in [-0.5,+3.4]\,\%,\,\,\,\,\,
{\mathcal F}_{78} \in [-0.2,-0.1]\,\%,\,\,\,\,\,
{\mathcal F}_{88} \in [0,0.5]\,\%\,,
\end{equation}
can now be summed up using the scanning method, i.e. by adding the individual minima and maxima linearly. Our final result is
\begin{align}
{\mathcal F}_{{1/m_b}}\in [-0.7,+3.8]\,\%\,.
\end{align}
As discussed, this estimate of the resolved contributions represents an irreducible theoretical uncertainty of the total decay rate of the inclusive decay $\bar B \to X_s \ell^+\ell^-$. The results in
Section~\ref{sec:contributions} allow us to make analogous estimates for the other two independent
angular observables of the inclusive decay $\bar B \to X_s \ell^+\ell^-$.
\section{Conclusions} \label{sec:conclusion}
{The present and future measurements of the inclusive decay
$\bar B \rightarrow X_s \ell^+\ell^-$ need a hadronic mass cut in order to suppress potentially huge backgrounds. The cut on the hadronic mass implies specific kinematics in which the standard local OPE breaks down and non-perturbative $b$-quark distributions, so-called shape functions, have to be introduced. The specific kinematics of low dilepton masses $q^2$ and small hadronic mass $M_X$ leads to a multi-scale problem, for which soft-collinear effective theory is the appropriate tool.
In this paper, we have identified the correct power counting of all variables in the low-$q^2$ window of the inclusive decay $\bar B \rightarrow X_s \ell^+\ell^-$ within the effective theory SCET if such a hadronic mass cut is imposed. We have analysed the resolved power corrections at order $1/m_b$ in a systematic way. Resolved contributions are those in which the virtual photon couples to light partons instead of connecting directly to the effective weak-interaction vertex. They stay non-local even if the hadronic mass cut is relaxed. Thus, they represent an irreducible uncertainty independent of the hadronic mass cut.
We have presented numerical estimates of the corresponding uncertainties to the first order in $1/m_b$.
We find an overall uncertainty of ${\mathcal F}_{{1/m_b}}\in [-0.7,+3.8]\,\%$ for the decay rate.
Numerical estimates of the uncertainties in the case of the other two independent angular observables in the inclusive decay $\bar B \rightarrow X_s \ell^+\ell^-$ can be easily derived from the analytical results of this paper.
However, we have found indications that the subleading contributions at order $1/m_b^2$ might be numerically relevant due to the large ratio $C_9/C_{7\gamma}$, which calls for an additional calculation~\cite{workinprogress}.}
\acknowledgments
We thank Tobias Huber for valuable help and discussions. TH thanks the CERN theory group for its hospitality during his regular visits to CERN where part of this work was written. We thank Michael Fickinger for crucial input at an early stage of the project.
\newpage
\section{Introduction}
\label{sec:intro}
In this paper we describe a connection between the realizations of motivic fundamental groups of CM elliptic curves and the geometry of Bianchi hyperbolic threefolds. The first instance of this connection was described by A.Goncharov in \cite{goncharov-euler}.
\subsubsection*{Motivation} We aim to study the action of the motivic Galois group on the motivic fundamental group of an elliptic curve punctured at the $\mathfrak{p}$-torsion points with tangential base point $v_0$:
\begin{equation}
\Gal_\text{\rm Mot}\circlearrowright\pi_1^\text{\rm Mot}(E-E[\mathfrak{p}],v_0).
\label{eqn:gal_mot_action}
\end{equation}
The objects in (\ref{eqn:gal_mot_action}) are still conjectural, but we can study them in their realizations. The results of this paper are in the Hodge realization. However, the picture is easiest to introduce in the $\ell$-adic realization.
As a running example, take $E$ to be the CM elliptic curve $E=\mathbb{C}/(\mathbb{Z}+\mathbb{Z}[i])$ and $\mathfrak{p}\subset\mathbb{Z}[i]$ an ideal. The $\ell$-adic realization of the motivic fundamental group, $\pi_1^{(\ell)}(E-E[\mathfrak{p}],v_0)$, is simply the pro-$\ell$ completion of the topological fundamental group $\pi_1(E-E[\mathfrak{p}],0)$. It is equipped with an action of the absolute Galois group $\Gal(\ol\mathbb{Q}/\mathbb{Q})$ by automorphisms.
The Maltsev construction (\cite{deligne-p1p1}, \S9) makes out of $\pi_1^{(\ell)}(E-E[\mathfrak{p}],v_0)$ a pro-$\ell$ Lie algebra $A_{E,\mathfrak{p}}$ over $\mathbb{Q}_\ell$, generated by $H_1(E;\mathbb{Z})$ and loops around the punctures in $E[\mathfrak{p}]$. It carries two filtrations: by \emph{weight} and by \emph{depth}. The increasing weight filtration $W$ (see \cite{deligne-hodge1}) is invariant under the Galois action, and the geometric Frobenius element %
acts on $\text{\rm gr}^wA_{E,\mathfrak{p}}$ with eigenvalues of norm $\ell^{w/2}$. The decreasing depth filtration $D$ is defined by the lower central series of the linearization of \[\ker\pq{\pi_1^{(\ell)}(E-E[\mathfrak{p}],v_0)\to\pi_1^{(\ell)}(E,v_0)}.\]
The filtrations $W$ and $D$ induce filtrations on $\text{\rm End}(A_{E,\mathfrak{p}})$, and, by restriction, on the image of the action of $\Gal(\ol\mathbb{Q}/\mathbb{Q})$. Taking its associated graded Lie algebra for the weight filtration, we obtain a graded Lie algebra $\text{\rm Lie}_{(\ell)}(E,E[\mathfrak{p}])$, the \emph{elliptic Galois Lie algebra}. We study the quotient of this Lie algebra induced by the quotient of $A_{E,\mathfrak{p}}$ by the adjoint action of $H_1(E;\mathbb{Z})$, and take the coinvariants of the translation action of $E[\mathfrak{p}]$ on $E$ (amounting to averaging the base point).%
This quotient is called the \emph{symmetric} Galois Lie algebra $\text{\rm Lie}_{(\ell)}^{\rm sym}(E,E[\mathfrak{p}])$.
The structure of the depth-$d$ graded quotients of $\text{\rm Lie}_{(\ell)}^{\rm sym}(E,E[\mathfrak{p}])$ is well understood in depths 0 and 1. In depth 0, this algebra simply vanishes. In depth 1, it is abelian, and spanned over $\mathbb{Q}_\ell$ by the classes constructed by A.Beilinson \cite{beilinson-modular} and in a different way by A.Beilinson and A.Levin \cite{beilinson-levin}. These classes are parametrized by a $\mathfrak{p}$-torsion point on $E$ and an element of the symmetric algebra of $H_1(E;\mathbb{Z})$. These constructions work in the Hodge realization as well as in the $\ell$-adic one, and the mechanism of motivic correlators (described in \S2) gives alternative proofs of these statements. In particular, Beilinson and Levin's elliptic polylogarithms can be expressed in terms of the depth-1 Hodge correlator integrals -- Kronecker-Eisenstein series (\cite{beilinson-levin}, \S3).
In this paper, we focus on the depth 2, the first case in which there is a nonzero Lie bracket. To describe the structure of the elliptic Galois Lie algebra, we can consider its standard cochain complex. Recall that the standard cochain complex of a Lie algebra $L$ is a complex of the exterior powers of its dual $L^\vee$, where the coboundary map $\delta$ is the dualization of the Lie bracket $[\,,\,]:L\wedge L\to L$:
\[{\rm CE}\chain(L^\vee)=\pq{0\to L^\vee\tto\delta L^\vee\wedge L^\vee\to L^\vee\wedge L^\vee\wedge L^\vee\to\dots}.\]
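In the lowest degree, $\delta$ is literally the dual of the bracket: for $f\in L^\vee$ and $x,y\in L$, \[(\delta f)(x\wedge y)=f([x,y]),\] and the Jacobi identity for $[\,,\,]$ is equivalent to $\delta^2=0$.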
If $L^\vee$ is a graded Lie coalgebra, then ${\rm CE}\chain(L^\vee)$ is also graded. Applying the construction to the associated graded for the depth filtration of $\text{\rm Lie}_{(\ell)}^{\rm sym}(E,E[\mathfrak{p}])$, we obtain a cochain complex that is graded by weight and depth. In depth 1 the complex is concentrated in degree 1. However, the depth-2 part of this complex has a nontrivial coboundary map: the depth-2 elements map to wedge products of Beilinson-Levin classes:
\begin{equation}
\text{\rm gr}^{D=2}\text{\rm Lie}^{\rm sym}_{(\ell)}(E,E[\mathfrak{p}])^\vee\to\pq{\text{\rm gr}^{D=1}\text{\rm Lie}^{\rm sym}_{(\ell)}(E,E[\mathfrak{p}])^\vee}^{\wedge2}.
\label{eqn:d2_complex}
\end{equation}
We connect (the Hodge analogue of) this complex to the geometry of Bianchi hyperbolic threefolds.
\subsubsection*{Bianchi tesselation} Now let us describe the other side of the story. The Bianchi tesselation of the upper half-space $\mathbb{H}^3$ for the ring $\mathbb{Z}[i]$ is the 3-dimensional version of the famous modular triangulation of the upper half-plane; the latter is the restriction of the Bianchi tesselation to the plane in $\mathbb{H}^3$ lying above the real line (see Fig.~\ref{fig:bianchi}). This beautiful construction was given by L.Bianchi in 1892 \cite{bianchi}; see \cite{goncharov-euler} for a modern review. The fundamental domain is an octahedron with vertices at $0,1,i,i+1,\f{1+i}{2},\infty$. Through the standard action of $\mathrm{GL}_2(\mathbb{C})$ on $\mathbb{H}^3$, the group $\mathrm{GL}_2(\mathbb{Z}[i])$ acts transitively on the cells of the tesselation. If $\mathfrak{p}$ is a prime ideal in $\mathbb{Z}[i]$ and $\Gamma_1(\mathfrak{p})\subset\mathrm{GL}_2(\mathbb{Z}[i])$ is the congruence subgroup \[\Gamma_1(\mathfrak{p})=\cq{\begin{pmatrix}a&b\\c&d\end{pmatrix}\equiv\begin{pmatrix}1&0\\ * &1\end{pmatrix}\mod{\mathfrak{p}}},\]
the quotient $\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3$ is a finite-volume hyperbolic manifold with cusps.
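For concreteness, let us recall the standard model of this action: identifying $\mathbb{H}^3$ with the set of quaternions $P=z+tj$, $z\in\mathbb{C}$, $t>0$, a matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\mathrm{SL}_2(\mathbb{C})$ acts by \[P\longmapsto(aP+b)(cP+d)^{-1},\] the inverse being taken in the quaternions; the action of $\mathrm{GL}_2(\mathbb{Z}[i])$ factors through $\mathrm{PGL}_2(\mathbb{C})\cong\mathrm{PSL}_2(\mathbb{C})$.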
\begin{figure}
%
%
%
\includegraphics[height=3in]{figures/bianchi.pdf}
\hspace{1in}
\includegraphics[height=3in]{figures/modular.pdf}
\simplecap{fig:bianchi}{\emph{Left:} The fundamental octahedron of the Bianchi tesselation for $\mathbb{Z}[i]$.\\\emph{Right:} The modular triangulation of the upper half-plane.}
\end{figure}
We build the following local system on this manifold. The group $H_1(E;\mathbb{Z})$ has the structure of a $\mathbb{Z}[i]$-module, giving $H_1(E;\mathbb{Z})\oplus H_1(E;\mathbb{Z})$ the structure of a $\mathrm{GL}_2(\mathbb{Z}[i])$-module. We take its symmetric algebra $\Sym\chain(H_1(E;\mathbb{Z})\oplus H_1(E;\mathbb{Z}))$. This $\mathrm{GL}_2(\mathbb{Z}[i])$-module determines an infinite-dimensional graded local system on $\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3$. Denote this local system by $T_2$.
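Concretely, for $\mathcal{O}=\mathbb{Z}[i]$ the module $H_1(E;\mathbb{Z})=\mathbb{Z}+\mathbb{Z} i$ is free of rank 1 over $\mathbb{Z}[i]$, so $H_1(E;\mathbb{Z})\oplus H_1(E;\mathbb{Z})\cong\mathbb{Z}[i]^2$ is the standard module, with $\mathrm{GL}_2(\mathbb{Z}[i])$ acting by matrix multiplication, and $T_2$ is the graded local system attached to its symmetric algebra.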
Consider the chain complex of the Bianchi tesselation, placed in the cohomological degrees $[0,2]$. It is generated by the octahedral cells in degree 0, by ideal triangles in degree 1, and by geodesics in degree 2. Tensoring over $\Gamma_1(\mathfrak{p})$ with $\Sym\chain(H_1(E;\mathbb{Z})\oplus H_1(E;\mathbb{Z}))$, we get the chain complex of $\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3$ with coefficients in the local system $T_2$.
\subsubsection*{The main construction}
There is a Hodge analogue of $\text{\rm Lie}^{\rm sym}_{(\ell)}(E,E[\mathfrak{p}])$, denoted $\text{\rm Lie}^{\rm sym}_{\text{\rm Hod}}(E,E[\mathfrak{p}])$. In \S\ref{sec:relating} (Theorem~\ref{thm:bianchi_to_lie}), we construct a surjective morphism of complexes of graded $\mathbb{Z}[i]$-modules:
\begin{equation}
\pq{\text{\parbox{0.45\textwidth}{\centering chain complex of the Bianchi orbifold $\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3$ with coefficients in $T_2$}}}
\to\pq{\text{\parbox{0.3\textwidth}{\centering Hodge analogue of\\the complex (\ref{eqn:d2_complex})}}}
\label{eqn:main_morphism}
\end{equation}
In particular, we get surjective homomorphisms: \[H^i(\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3,T_2)\to H^i(\text{\rm gr}^{D=2}\text{\rm Lie}_{\text{\rm Hod}}^{\rm sym}(E,E[\mathfrak{p}]),\mathbb{Q}).\]
A key idea of A.Goncharov \cite{goncharov-euler}, which we develop further in this paper, is to map the cusps of the Bianchi orbifold $\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3$ to $\mathfrak{p}$-torsion points of $E$.
This itself generalizes a similar picture for modular curves, first described in \cite{goncharov-manin}, where cusps of modular curves are identified with $p$-torsion points of $\mathbb{G}_m$ in the study of the double logarithm at roots of unity.
When we advance to depth 2, the geodesics of the Bianchi tesselation map to wedge products of elements parametrized by $\mathfrak{p}$-torsion points, and the triangles must map to certain elements parametrized by three $\mathfrak{p}$-torsion points.
Let us describe the map (\ref{eqn:main_morphism}) in each degree. In degree 2, we map a geodesic $(\alpha,\beta)$ of the Bianchi tesselation modulo $\Gamma_1(\mathfrak{p})$, with a coefficient in the local system described above, to a wedge product of two depth-1 classes in the Galois Lie coalgebra. The data parametrizing a geodesic with a coefficient in $\Sym\chain(H_1(E;\mathbb{Z})\oplus H_1(E;\mathbb{Z}))\cong\Sym\chain(H_1(E;\mathbb{Z}))^{\otimes2}$ is \emph{identical} to the data parametrizing a pair of classes in the image. In degree 1, the domain is generated by triangles in the Bianchi tesselation with a coefficient in the local system. Thus the image should be described in terms of elements depending on three $\mathfrak{p}$-torsion points and three elements of $\Sym\chain(H_1(E;\mathbb{Z}))$. These elements are \emph{motivic correlators}, which we introduce in the remainder of the introduction and in \S2. They can be visualized as a sequence of $\mathfrak{p}$-torsion points and 1-forms on $E$ written around a circle, modulo some relations. Their coproduct has a simple combinatorial description, and their real Hodge periods can be explicitly computed via Feynman integrals.
In summary, the maps are constructed as follows:
\begin{align*}
\text{ideal triangle $(\alpha,\beta,\gamma)$}&\mapsto\text{element in $\text{\rm gr}^{D=2}\text{\rm Lie}^{\rm sym}(E,E[\mathfrak{p}])$ depending on $\mathfrak{p}$-torsion points $\alpha,\beta,\gamma$},\\
\text{geodesic $(\alpha,\beta)$}&\mapsto\text{wedge product of Beilinson-Levin elements determined by $\alpha$ and $\beta$}.
\end{align*}
Remarkably, the combinatorial structure of the Bianchi tesselation is preserved in the space of motivic correlators, and thus the chain complex of the Bianchi orbifold \emph{maps surjectively} onto the standard cochain complex of a quotient of the Galois Lie algebra of $E\setminus E[\mathfrak{p}]$.
\subsubsection*{Hodge realization} Now we will sketch this picture in the Hodge realization, which is the focus of this paper.
Let $E$ be a complex elliptic curve and $S\subset E$ a finite set of punctures. The pronilpotent completion of the fundamental group $\pi_1^\text{\rm nil}(E-S,v_0)$, with tangential base point at $v_0$, is a Lie algebra in the category of mixed $\mathbb{Q}$-Hodge structures. The category of mixed $\mathbb{Q}$-Hodge structures is canonically equivalent to the category of representations of a graded Lie algebra over $\mathbb{Q}$. Let us take its image in the representation defining $\pi_1^\text{\rm nil}(E-S,v_0)$, and consider the graded dual Lie coalgebra $\text{\rm Lie}_\text{\rm Hod}^\vee(E,S)$.
The Hodge correlators, introduced by A.Goncharov in \cite{goncharov-hodge-correlators}, are canonical elements
\begin{equation}
\text{\rm Cor}_\text{\rm Hod}(\Omega_0,z_0,\dots,\Omega_n,z_n)\in\text{\rm Lie}_\text{\rm Hod}^\vee(E,S),
\label{eqn:cor_hod}
\end{equation}
where $z_0,\dots,z_n\in S$ and $\Omega_0,\dots,\Omega_n$ are elements in the tensor algebra of $H^1(E;\mathbb{C})$. The coalgebra $\text{\rm Lie}_\text{\rm Hod}^\vee(E,S)$ carries a filtration by depth; the element (\ref{eqn:cor_hod}) has depth $n$. These elements describe the real mixed Hodge structure on $\pi_1^\text{\rm nil}(E-S,v_0)\otimes\mathbb{R}$. Their canonical real periods are the Hodge correlator functions, functions of $n+1$ points on $E$. We find new linear relations among the elements (\ref{eqn:cor_hod}).
At a cusp on the modular curve, as $E$ degenerates to the nodal projective line, these relations specialize to known relations among periods of the mixed Tate motive associated with $\mathbb{P}^1$ punctured at a finite set of points. If $n=2$, our elliptic relations specialize to the full set of \emph{double shuffle relations}, the most general known relations, which were previously described by the author using Hodge correlators (\cite{malkin-shuffle}).
Suppose that $E$ is one of the CM elliptic curves $\mathbb{C}/(\mathbb{Z}+\mathbb{Z} i)$ or $\mathbb{C}/(\mathbb{Z}+\mathbb{Z}\pq{\f{1+\sqrt{-3}}{2}})$, $\mathcal{O}=\text{\rm End} E$, and $\mathfrak{p}$ is a prime in $\mathcal{O}$. The subalgebra $\text{\rm Lie}_\text{\rm Hod}^{\rm sym}(E,E[\mathfrak{p}])$ of $\text{\rm Lie}_\text{\rm Hod}(E,S)$ is constructed as in the $\ell$-adic case. We construct the morphism (\ref{eqn:main_morphism}) in this setting, where the object standing on the right is the complex ${\rm CE}\chain\pq{\text{\rm gr}^{D=2}\text{\rm Lie}_\text{\rm Hod}^{\rm sym}(E,E[\mathfrak{p}])}$.
Our construction simultaneously generalizes several results of A.Goncharov:
\begin{enumerate}
\item \emph{The relation between Voronoi complexes and mixed Tate motives: } The Bianchi complexes are analogues, for imaginary quadratic fields, of the Voronoi complexes, complexes of $\mathrm{GL}_k(\mathbb{Z})$-modules arising from tesselations of the upper half-plane $\mathbb{H}^2$. A map from the Voronoi complexes to motivic objects associated with rational curves punctured at roots of unity was constructed for $k=2,3,4$, using either multiple polylogarithms (\cite{goncharov-polylogs-modular}) or motivic correlators (\cite{goncharov-motivic-modular}), which satisfy the double shuffle relations. The relations we find for elliptic motivic correlators in depth 2 are deformations of the second shuffle relations.
\item \emph{Euler complexes: } A map from the Bianchi complexes to a space of motivic theta functions on elliptic curves was constructed in \cite{goncharov-euler} in depth 2 and weight 4. We generalize this construction to all weights: the map of \cite{goncharov-euler} is the restriction of our map to the trivial local system.
\end{enumerate}
\subsubsection*{Structure}
In \S\ref{sec:hodge_and_motivic}, we review the construction of Hodge correlators. In particular, we explain our results on the level of Hodge correlator integrals.
In \S\ref{sec:mc_elliptic_curves} we establish some properties of motivic correlators on elliptic curves. The main new result of this section is the dihedral symmetry relation for depth 2 correlators (Theorem~\ref{thm:dihedral_depth2}).
In \S\ref{sec:bianchi_and_modular} we review the definitions of the Bianchi complexes, define the modular complexes for imaginary quadratic fields, and construct a map between the two in the Gaussian and Eisenstein cases. In \S\ref{sec:relating} we combine the results of the two preceding sections to prove the main results relating Bianchi complexes and the elliptic Galois Lie algebra.
In \S\ref{sec:application}, we show how our results generalize those of \cite{goncharov-polylogs-modular,goncharov-euler,malkin-shuffle}.
\subsubsection*{Acknowledgements}
The author is grateful to A.B.\ Goncharov for suggesting this problem, for many helpful explanations, and for comments on a draft of this paper.
This material is based in part upon work supported by NSF grants
DMS-1440140, 1107452, 1107263, and 1107367.
\section{Hodge and motivic correlators}
\label{sec:hodge_and_motivic}
\subsection{Real Hodge point of view: Relations on Hodge correlator integrals}
Let $X$ be a complex curve (in this paper, $X=\mathbb{P}^1(\mathbb{C})$ or an elliptic curve). The Hodge correlator functions, defined in \cite{goncharov-hodge-correlators}, are functions
\[\text{\rm Cor}_\mathcal{H}(x_0,\dots,x_n),\]
where each $x_i$ is either a point of $X$ or a 1-form representing a class in $H^1(X;\mathbb{C})$. The \emph{depth} of this expression is the number of points among the $x_i$ minus one, if $X$ is an elliptic curve, or the number of nonzero points among the $x_i$ minus one, if $X=\mathbb{P}^1$. The \emph{weight} is $n$ plus the depth.
The Hodge correlators depend on a choice of a base point $s\in X$ and a tangent vector $v_0$ at $s$. If $n=1$ and $x_0,x_1\in X$, then $\text{\rm Cor}_\mathcal{H}(x_0,x_1)$ is a (normalized) Green's function with pole at $s$. In particular,
\begin{itemize}
\item If $X=\mathbb{P}^1$ and $s=\infty$, then \[\text{\rm Cor}_\mathcal{H}(x_0,x_1)=G_\infty(x_0,x_1)=(2\pi i)^{-1}\log\aq{x_0-x_1}+C.\] The constant $C$ depends on the choice of tangent vector at $\infty$, but the correlator is independent of this constant in weight $>2$, so we will ignore it when convenient. The correlator for other tangential base points can be derived using the fact that it is invariant under automorphisms of $\mathbb{P}^1$ acting on the base point and the arguments.
\item If $X$ is the elliptic curve $\mathbb{C}/(\mathbb{Z}+\mathbb{Z}\tau)$, where $\Im(\tau)>0$, with coordinate $z$ inherited from the complex plane, then \[\text{\rm Cor}_\mathcal{H}(x_0,x_1)=G_s(x_0,x_1)=G_{\rm Ar}(x_0,x_1)-G_{\rm Ar}(x_0,s)-G_{\rm Ar}(s,x_1)+C.\] Here $G_{\rm Ar}$ is the Arakelov Green's function, the unique solution to the elliptic partial differential equation $(2\pi i)^{-1}\partial\ol\partial G_{\rm Ar}(x)={\rm vol}_E-\delta_0$. It has the Fourier expansion
\begin{equation}
G_{\rm Ar}(z)=\f{2\Im(\tau)}{2\pi i}\sum_{\gamma\in(\mathbb{Z}+\mathbb{Z}\tau)\setminus\cq0}\f{\exp\pq{2\pi i\Im(z\ol\gamma)/\Im(\tau)}}{\aq\gamma^2}.
\label{eqn:fourier}
\end{equation}
The Arakelov Green's function has a logarithmic singularity at 0. Hence, the function $\text{\rm Cor}_\mathcal{H}(x_0,x_1)$ has singularities of the form $\log\aq z$ at the divisors $x_0=x_1$, $x_0=s$, $x_1=s$.
\item \textbf{Remark: } The Green's function on $\mathbb{P}^1$ is a specialization of the one on $E$. Precisely, write $G^{E_\tau}$ for the Green's function on $E=\mathbb{C}/(\mathbb{Z}+\mathbb{Z}\tau)$ with base point 0. Then, taking $z$ to be the coordinate on $E_\tau$ inherited from the complex plane, such that the section $z\in E_\tau$ approaches $e^{2\pi iz}\in\mathbb{P}^1$, with appropriate choice of tangential base points, \[\lim_{\tau\to+i\infty}G^{E_\tau}(z_1,z_2)=G_{1}\pq{e^{2\pi iz_1},e^{2\pi iz_2}}=\log\aq{\f{1}{e^{2\pi iz_1}-1}-\f{1}{e^{2\pi iz_2}-1}}.\] (This can be shown by a residue computation or an application of the Kronecker limit formula. We will require this fact in \S\ref{sec:degen}.)
\end{itemize}
If $n\geq2$, the Hodge correlators are defined as a sum of integrals depending on plane trivalent trees. Picture the $x_0,\dots,x_n$ written counterclockwise along the boundary of a disc, and consider a trivalent tree $T$ embedded in the disc with leaves at the $n+1$ boundary points. The tree has $n-1$ interior vertices $V^\circ$ and $2n-1$ edges $E_0,\dots,E_{2n-2}$. The embedding into the plane gives a canonical orientation ${\rm Or}_T\in\cq{\pm1}$ (an ordering of the edges up to even permutation).
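(The number of such trees is the Catalan number $C_{n-1}$: rooting a tree at the leaf $x_0$ identifies it with a plane binary tree with $n$ leaves. For example, for $n+1=4$ boundary points there are $C_2=2$ trees; one of them is shown in Fig.~\ref{fig:hc_example}.)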
Assign to each interior vertex $v\in V^\circ$ a copy of $X$, called $X_v$, with coordinate $x_v$. Then assign to each edge $E_i$ either a function $f_i$ or a 1-form $\omega_i$, as follows:
\begin{enumerate}[(1)]
\item If $E_i=(u,v)$ is an interior edge, let $f_i=G_s(x_u,x_v)$, a function on $X_u\times X_v$.
\item If $E_i=(u,x_j)$ is a boundary edge with the leaf decorated by a point $x_j\in X$, let $f_i=G_s(x_u,x_j)$, a function on $X_u$.
\item If $E_i=(u,x_j)$ is a boundary edge with $x_j=\omega$ a 1-form, let $\omega_i=\omega(x_u)$, a 1-form on $X_u$.
\end{enumerate}
Without loss of generality, $E_0,\dots,E_k$ are the edges labeled by a function (i.e., not boundary edges decorated by a 1-form). Suppose also that each form is either purely holomorphic or purely antiholomorphic (which we may do because the Hodge correlators are linear in the forms); let there be $p$ and $q$ forms of these types, respectively. Then, setting $d^\mathbb{C}=\partial-\ol\partial$, we define
\begin{equation}
c_T(x_0,\dots,x_n)=(-2)^k\binom{k}{\f12(k+p-q)}^{-1}{\rm Or}_T\int_{X^{V^\circ}}f_0\,d^\mathbb{C} f_1\wedge\dots\wedge d^\mathbb{C} f_k\wedge\omega_{k+1}\wedge\dots\wedge\omega_{2n-2}.
\end{equation}
The Hodge correlator is the sum of such expressions over all plane trivalent trees,
\begin{equation}
\text{\rm Cor}_\mathcal{H}(x_0,\dots,x_n)=\sum_Tc_T(x_0,\dots,x_n).\label{eqn:def_hodge_corr}
\end{equation}
The Hodge correlator is independent of the choice of ordering of edges. As a function of the arguments that are points on $X$, it is either purely real or purely imaginary.
Fig.~\ref{fig:hc_example} shows a simple example of the integral corresponding to one of the two trees contributing to $\text{\rm Cor}_\mathcal{H}(a,b,c,\omega)$.
\begin{figure}
\centering
\includegraphics[width=0.2\textwidth]{figures/elliptic1.pdf}
\[\int_{z_1,z_2}G(z_1,a)\,d^\mathbb{C} G(z_1,z_2)\wedge d^\mathbb{C} G(z_2,b)\wedge d^\mathbb{C} G(z_2,c)\wedge\omega(z_1)\]
\simplecap{fig:hc_example}{One of the trees contributing to $\text{\rm Cor}_\mathcal{H}(a,b,c,\omega)$.}
\end{figure}
The Hodge correlators satisfy a family of \emph{(first) shuffle relations}. For $i,j>0$, let $\Sigma_{i,j}$ be the set of $(i,j)$-shuffles, permutations $\sigma\in S_{i+j}$ such that $\sigma(1)<\dots<\sigma(i)$ and $\sigma(i+1)<\dots<\sigma(i+j)$. The $(i,j)$-shuffle relation states:
\begin{equation}
\sum_{\sigma\in\Sigma_{i,j}}\text{\rm Cor}_\mathcal{H}(x_0,x_{\sigma^{-1}(1)},x_{\sigma^{-1}(2)},\dots,x_{\sigma^{-1}(i+j)})=0.
\end{equation}
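For instance, the $(1,1)$-shuffle relation reads \[\text{\rm Cor}_\mathcal{H}(x_0,x_1,x_2)+\text{\rm Cor}_\mathcal{H}(x_0,x_2,x_1)=0.\]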
For Hodge correlators of depth 2 on an elliptic curve with arbitrary base point, we found a second shuffle relation. It has the form:
\begin{equation}
\text{\rm Cor}_\mathcal{H}(S_{n_0,n_0'},0,S_{n_1,n_1'},a,S_{n_2,n_2'},a+b)
+\text{\rm Cor}_\mathcal{H}(S_{n_0,n_0'},0,S_{n_2,n_2'},b,S_{n_1,n_1'},a+b)
+\text{lower-depth terms}=0,
\label{eqn:ec_second_shuffle}
\end{equation}
where an argument $S_{n,n'}$ indicates that we sum over all possible ways to insert in some order the arguments \[\underbrace{\omega,\dots,\omega}_n,\underbrace{\ol\omega,\dots,\ol\omega}_{n'}.\]
The highest-depth terms in these relations arise from shuffles of the \emph{differences} between successive arguments, $x_i-x_{i-1}$, together with the 1-forms between those arguments. For example, in (\ref{eqn:ec_second_shuffle}) we have shuffled $a$ (with $n_1$ copies of $\omega$ and $n_1'$ of $\ol\omega$) with $b$ (with $n_2$ $\omega$'s and $n_2'$ $\ol\omega$'s).
We describe the lower-depth correction terms in \S\ref{sec:mc_modulo_depth}. In the simplest case -- weight 4 -- the full relation is:
\begin{align*}
\text{\rm Cor}_\mathcal{H}(0,a,a+b)
+\text{\rm Cor}_\mathcal{H}(0,b,a+b)
-(&\text{\rm Cor}_\mathcal{H}(0,\omega,\ol\omega,a+b)
+\text{\rm Cor}_\mathcal{H}(0,\ol\omega,\omega,a+b))\\
-\f12\biggl(
&\text{\rm Cor}_\mathcal{H}(0,a,\omega,\ol\omega)
-\text{\rm Cor}_\mathcal{H}(0,a,\ol\omega,\omega)\\
&+\text{\rm Cor}_\mathcal{H}(0,b,\omega,\ol\omega)
-\text{\rm Cor}_\mathcal{H}(0,b,\ol\omega,\omega)\\
&+\text{\rm Cor}_\mathcal{H}(\omega,\ol\omega,a,a+b)
-\text{\rm Cor}_\mathcal{H}(\ol\omega,\omega,a,a+b)\\
&+\text{\rm Cor}_\mathcal{H}(\omega,\ol\omega,b,a+b)
-\text{\rm Cor}_\mathcal{H}(\ol\omega,\omega,b,a+b)
\biggr)&=0.
\end{align*}
The relation (\ref{eqn:ec_second_shuffle}) can be formulated simply as a functional equation on biperiodic functions of three complex variables. It states that a sum of several integrals over an elliptic curve is equal to 0, modulo correlators of depth 1 (which are expressed by Kronecker-Eisenstein series). However, this functional equation is difficult to prove. To understand it, we will need to upgrade it to the Hodge-theoretic or motivic setting.
\begin{figure}
\centering
\includegraphics[height=0.2\textwidth]{figures/elliptic2.pdf}
\parbox{0.1\textwidth}{\centering\vskip-0.1\textwidth\Huge+\vskip0.1\textwidth}
\includegraphics[height=0.2\textwidth]{figures/elliptic3.pdf}
\simplecap{fig:second_shuffle}{The highest-depth terms of the second shuffle relation on an elliptic curve.}
\end{figure}
The second shuffle relations have a prehistory. The first objects known to satisfy a system of first and second shuffle relations of this form were the multiple polylogarithms (see \cite{goncharov-polylogs-modular}). These relations follow from two alternative expressions for multiple polylogarithms: as power series and as iterated integrals. In \cite{malkin-shuffle}, for $X=\mathbb{P}^1$, the author found second shuffle relations for Hodge correlators, in every depth, and described the lower-depth terms. In depth 2, these relations depend on integers $n_0,n_1,n_2\geq0$ and points $a,b\in\mathbb{G}_m\setminus\cq1$. They state:
\begin{align}
\text{\rm Cor}_\mathcal{H}(\underbrace{0,\dots,0}_{n_0},1,\underbrace{0,\dots,0}_{n_1},a,\underbrace{0,\dots,0}_{n_2},ab)
+\text{\rm Cor}_\mathcal{H}(\underbrace{0,\dots,0}_{n_0},1,\underbrace{0,\dots,0}_{n_2},b,\underbrace{0,\dots,0}_{n_1},ab)&\nonumber\\
+\,\text{lower-depth terms}&=0.
\label{eqn:p1_second_shuffle}
\end{align}
The highest-depth terms in these relations arise from shuffles of the quotients of successive arguments, $\f{x_i}{x_{i-1}}$, together with the 0s between those arguments. For example, in (\ref{eqn:p1_second_shuffle}) we have shuffled $a$ (with $n_1$ 0s) with $b$ (with $n_2$ 0s). See Fig.~\ref{fig:second_shuffle} for an illustration.
Conjecturally, the first and second shuffle relations give all linear relations among the Hodge correlators on $\mathbb{P}^1$. While the first shuffle relations emerge from the trivalent tree construction -- they hold on the level of the \emph{integrands} in (\ref{eqn:def_hodge_corr}) -- the proof of the second shuffle relations is difficult, requiring motivic or Hodge-theoretic arguments even in depth 2. %
Note the similarity between (\ref{eqn:p1_second_shuffle}) and (\ref{eqn:ec_second_shuffle}). In fact, as an elliptic curve degenerates to a nodal projective line, a variant of the second shuffle relation (\ref{eqn:ec_second_shuffle}) specializes to (\ref{eqn:p1_second_shuffle}).
\subsection{Hodge-theoretic / motivic point of view: Correlators and motivic $\pi_1$}
We briefly review the construction of Hodge and motivic correlators from \cite{goncharov-hodge-correlators}. Hodge correlators are objects in the fundamental Lie coalgebra of the category of $\mathbb{R}$-mixed Hodge structures, and are Hodge-theoretic upgrades of the Hodge correlator functions.
\subsubsection{Summary}
In \cite{goncharov-hodge-correlators}, the Hodge correlator functions $\text{\rm Cor}_\mathcal{H}(x_0,\dots,x_n)$ of the previous section were upgraded to elements of the Tannakian Lie coalgebra $\text{\rm Lie}_\text{\rm Hod}^\vee$ of the category of real mixed Hodge structures
\begin{equation}
\text{\rm Cor}_\text{\rm Hod}(z_0,\dots,z_n)\in\text{\rm Lie}_\text{\rm Hod}^\vee.\label{eqn:elem_cor_hod}
\end{equation}
The upgraded Hodge correlators (\ref{eqn:elem_cor_hod}) satisfy the first shuffle relations, and their coproduct in the coalgebra $\text{\rm Lie}_\text{\rm Hod}^\vee$ is given by a simple formula, which we give below.
One of the main results of this paper is that the elements (\ref{eqn:elem_cor_hod}) satisfy a second shuffle relation in depth 2.
\subsubsection{Hodge-theoretic setup}
Let $\mathrm{MHS}_\mathbb{R}$ be the tensor category of $\mathbb{R}$-mixed Hodge structures and $\mathrm{HS}_\mathbb{R}$ the category of $\mathbb{R}$-pure Hodge structures. Every object of $\mathrm{MHS}_\mathbb{R}$ is filtered by weight, and $\mathrm{MHS}_\mathbb{R}$ is generated by the simple objects $\mathbb{R}(p,q)+\mathbb{R}(q,p)$ ($p,q\in\mathbb{Z}$). By Deligne's theory \cite{deligne-hodge3}, the cohomology of a (possibly singular) complex variety is a mixed Hodge structure.
The \emph{Galois Lie algebra} of the category of mixed Hodge structures, $\text{\rm Lie}_\text{\rm Hod}$, is the algebra of tensor derivations of the functor $\text{\rm gr}^W:\mathrm{MHS}_\mathbb{R}\to\mathrm{HS}_\mathbb{R}$. It is a graded Lie algebra in the category $\mathrm{HS}_\mathbb{R}$, and $\mathrm{MHS}_\mathbb{R}$ is equivalent to the category of graded $\text{\rm Lie}_\text{\rm Hod}$-modules in $\mathrm{HS}_\mathbb{R}$. Let $\text{\rm Lie}_\text{\rm Hod}^\vee$ be its graded dual. A canonical \emph{period map} \[p:\text{\rm Lie}_\text{\rm Hod}^\vee\to\mathbb{R}\] was defined in \cite{goncharov-hodge-correlators}, \S1.11.
Let $X$ be a smooth curve, $S\subset X$ a finite set of punctures, $s\in S$ a distinguished puncture (called the base point), and $v_0$ a distinguished tangent vector at $s$. The pronilpotent completion $\pi_1^\text{\rm nil}(X\setminus S,v_0)$ of the fundamental group $\pi_1(X\setminus S,v_0)$ carries a mixed Hodge structure, depending on $v_0$, and thus there is a map
\[\text{\rm Lie}_\text{\rm Hod}\to\text{\rm Der}\pq{\text{\rm gr}^W\pi_1^\text{\rm nil}(X\setminus S,v_0)}.\]
\subsubsection{Hodge correlator coalgebra}
\label{sec:hodge_cor_coalg}
The \emph{Hodge correlator coalgebra} is defined by \cite{goncharov-hodge-correlators} as
\begin{equation*}
\CLie_{X,S,v_0}^\vee:=\f{T(\mathbb{C}\bq{S}^\vee\oplus H^1(X;\mathbb{C}))}{\text{relations}}\otimes H_2(X).
\end{equation*}
Note that $H_2(X)\cong\mathbb{R}(1)$. If $[h]\in H_2(X)$ is the fundamental class, we write $x(1)$ for $x\otimes[h]$. This coalgebra is graded by weight. It is more finely graded by the Hodge bidegree, or \emph{type}, where points in $S$ have type $(1,1)$, holomorphic and antiholomorphic 1-forms have type $(1,0)$ or $(0,1)$, respectively, and $H_2(X)$ has type $(-1,-1)$, extended to be additive with respect to the tensor product. The weight of an element of type $(p,q)$ is $p+q$.
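For example, the generator $(x_0\otimes\omega\otimes x_1)(1)$, with $x_0,x_1$ punctures and $\omega$ a holomorphic 1-form, has type $(1,1)+(1,0)+(1,1)+(-1,-1)=(2,1)$ and weight $3$.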
The relations are the following:
\begin{enumerate}[(1)]
\item Cyclic symmetry: $x_0\otimes\dots\otimes x_n=x_1\otimes\dots\otimes x_n\otimes x_0$.
\item (First) shuffle relations:
\begin{equation*}
\sum_{\sigma\in\Sigma_{i,j}}x_0\otimes x_{\sigma^{-1}(1)}\otimes\dots\otimes x_{\sigma^{-1}(i+j)}=0.
\end{equation*}
\item Take the quotient by the elements of nonpositive weight.
\end{enumerate}
An action of the graded dual Lie algebra $\CLie_{X,S,v_0}$ by derivations on $\text{\rm gr}^W\pi_1^\text{\rm nil}(X\setminus S,v_0)\otimes\mathbb{C}$
was constructed by \cite{goncharov-hodge-correlators}. This action is injective; its image consists of the \emph{special derivations} \[\text{\rm Der}^S\pq{\text{\rm gr}^W\pi_1^\text{\rm nil}(X\setminus S,v_0)\otimes\mathbb{C}},\] those which act by 0 on the loop around the base point $s$ and preserve the conjugacy classes of the loops around the punctures in $S\setminus\cq{s}$.
Dualizing this map composed with the action of $\text{\rm Lie}_\text{\rm Hod}$, we get the \emph{Hodge correlator morphism} of Lie coalgebras:
\[\text{\rm Cor}_\text{\rm Hod}:\CLie^\vee_{X,S,v_0}\to\text{\rm Lie}_\text{\rm Hod}^\vee.\]
Let $\text{\rm Lie}_\text{\rm Hod}^\vee(X,S,v_0)$ denote the image of this action, and let $\text{\rm Lie}_\text{\rm Hod}^\vee(X,S)$ denote the algebra generated by the $\text{\rm Lie}_\text{\rm Hod}^\vee(X,S,v_0)$ for all choices of base point. (Below, we will fix $S=E[\mathfrak{p}]$ for $E$ an elliptic curve, so $\text{\rm Lie}_\text{\rm Hod}^\vee(X,S)$ does not depend on the choice of base point in $S$.)
We will also write $\text{\rm Cor}_\text{\rm Hod}(x_0,\dots,x_n)$ for $\text{\rm Cor}_\text{\rm Hod}\pq{(x_0\otimes\dots\otimes x_n)(1)}$, or $\text{\rm Cor}_s(\dots)$, when we wish to specify the base point.
The Lie coalgebra structure on $\CLie_{X,S,v_0}^\vee$ has a simple description on the generators. There are two terms in the coproduct, $\delta_S$ and $\delta_{\rm Cas}$, which are each sums over ``cuts'' of the element \[C=\pq{x_0\otimes\dots\otimes x_n}\otimes[h],\] which we picture as $x_0,\dots,x_n$ written counterclockwise around a circle.
\begin{enumerate}[(1)]
\item Term $\delta_S$: Consider a line inside the circle beginning at a point on the circle labeled by a puncture $x_i$ and ending between two adjacent points. It cuts the circle into two parts $C_1$ and $C_2$, which share only the point $x_i$, where $C_1$ lies clockwise of $x_i$. This contributes to the coproduct the term $C_1\wedge C_2$, and $\delta_SC$ is the sum of these terms over all such cuts (a worked example is given after this list). That is,
\[\delta_SC=\sum_{\stackrel{\rm cyc}{x_0\in S}}\sum_{p=1}^n\pq{\pq{x_0\otimes x_p\otimes\dots\otimes x_n}\otimes[h]}\wedge\pq{\pq{x_0\otimes x_1\otimes\dots\otimes x_{p-1}}\otimes[h]},\] where the outer sum is taken only over those cyclic reorderings for which $x_0$ is a puncture.
(See Fig.~\ref{fig:coproduct}, top.)
\item Term $\delta_{\rm Cas}$: Consider a line inside the circle beginning between two points $y_1$ and $z_1$ and ending between two points $y_2$ and $z_2$. It cuts the circle into two parts $C_1$ and $C_2$, in which $y_1$ and $z_2$ are adjacent and in which $y_2$ and $z_1$ are adjacent. We insert a point labeled $\omega$ between $y_1$ and $z_2$ on $C_1$ and a point labeled $\omega^\vee$ between $y_2$ and $z_1$ on $C_2$ to obtain $C_1'$ and $C_2'$, then take the sum over $\omega$ in a fixed symplectic basis $\cq{\omega_i}$ of $H^1(X;\mathbb{C})$. This contributes the term $C_1'\wedge C_2'$, and $\delta_{\rm Cas}C$ is the sum of these terms over all such cuts. That is,
\begin{equation}
\delta_{\rm Cas}C=\sum_{p=0}^n\sum_{q=0}^n\sum_{i=1,2}\pq{\pq{x_p\otimes\dots\otimes x_{q-1}\otimes\omega_i}\otimes[h]}\wedge\pq{\pq{x_q\otimes\dots\otimes x_{p-1}\otimes\omega_i^\vee}\otimes[h]}.
\end{equation}
(See Fig.~\ref{fig:coproduct}, bottom.)
\end{enumerate}
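As an illustration, take $C=(x_0\otimes x_1\otimes x_2)\otimes[h]$ with all $x_i$ punctures. The cuts producing a factor of nonpositive weight vanish by relation (3), and the formula for $\delta_S$ gives \[\delta_SC=(x_0\otimes x_2)(1)\wedge(x_0\otimes x_1)(1)+(x_1\otimes x_0)(1)\wedge(x_1\otimes x_2)(1)+(x_2\otimes x_1)(1)\wedge(x_2\otimes x_0)(1).\]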
The terms $\delta_{\rm Cas}$ are absent if $X=\mathbb{P}^1$. If $E$ is an elliptic curve, $\CLie_{X,S,v_0}^\vee$ is graded by weight and filtered by depth, and the terms $\delta_{\rm Cas}$ disappear in the associated graded $\text{\rm gr}^D\CLie_{X,S,v_0}^\vee$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figures/coproduct-ec1.pdf}\\
\includegraphics[width=0.7\textwidth]{figures/coproduct-ec2.pdf}
\simplecap{fig:coproduct}{\emph{Above:} The typical term in the $\delta_S$ component of the coproduct.\\\emph{Below:} The typical term in the $\delta_{\rm Cas}$ component.}
\end{figure}
\subsubsection{Period map and Hodge correlator functions}
Recall that the Hodge correlator functions $\text{\rm Cor}_\mathcal{H}(x_0,\dots,x_n)$ satisfy cyclic symmetry and shuffle relations, so we may also denote by $\text{\rm Cor}_\mathcal{H}$ the function
\begin{align*}
\text{\rm Cor}_\mathcal{H}:\CLie^\vee_{X,S,v_0}&\to\mathbb{C},\\
(x_0\otimes\dots\otimes x_n)(1)&\mapsto\text{\rm Cor}_\mathcal{H}(x_0,\dots,x_n).
\end{align*}
The dual to the Hodge correlator $\text{\rm Cor}_\mathcal{H}:\CLie_{X,S,v_0}^\vee\to\mathbb{C}$, an element of $\CLie_{X,S,v_0}$, is called the \emph{Green operator} $\mathbf{G}_{v_0}$. It can be viewed as a special derivation of $\text{\rm gr}^W\pi_1^\text{\rm nil}(X\setminus S,v_0)\otimes\mathbb{C}$, and defines a real mixed Hodge structure on $\pi_1^\text{\rm nil}(X\setminus S,v_0)$. An element $x\in\CLie^\vee_{X,S,v_0}$ of type $(p,q)$ provides a framing $\mathbb{R}(p,q)+\mathbb{R}(q,p)\to\text{\rm gr}^W_{p+q}\pi_1^\text{\rm nil}(X\setminus S,v_0)$, and $\text{\rm Cor}_\text{\rm Hod}(x)$ is the element of $\text{\rm Lie}_\text{\rm Hod}^\vee$ induced by this framing.
As made precise by a main result of \cite{goncharov-hodge-correlators}, $\text{\rm Cor}_\mathcal{H}$ factors through the Hodge correlator map to $\text{\rm Lie}_\text{\rm Hod}^\vee$ and the period map $\text{\rm Lie}_\text{\rm Hod}^\vee\to\mathbb{C}$, and the resulting mixed Hodge structure on $\pi_1^\text{\rm nil}$ coincides with the standard one.
\begin{thm}[\cite{goncharov-hodge-correlators}, Theorem 1.12]
\begin{enumerate}[(a)]
\item For $x\in\CLie^\vee_{X,S,v_0}$, $\text{\rm Cor}_\mathcal{H}(x)=(2\pi i)^{-n}p(\text{\rm Cor}_\text{\rm Hod}(x))$, where $p$ is the canonical period map $\text{\rm Lie}_\text{\rm Hod}^\vee\to\mathbb{R}$.
\item The mixed Hodge structure on $\pi_1^\text{\rm nil}$ determined by the dual Hodge correlator map coincides with the standard mixed Hodge structure on $\pi_1^\text{\rm nil}$.
\end{enumerate}
\label{thm:hc_main_point}
\end{thm}
Furthermore, let $X/B$ be a smooth curve over a base $B$. For a collection of nonintersecting sections $S$ and a choice of relative tangent vector $v_0$, we can analogously define $\CLie^\vee_{X/B,S,v_0}$. In this setting, for $x\in\CLie^\vee_{X/B,S,v_0}$, \cite{goncharov-hodge-correlators} constructs a connection on the family of fiberwise elements $\text{\rm Cor}_\text{\rm Hod}(x)$ that makes $\text{\rm Cor}_\text{\rm Hod}(x)$ a variation of mixed Hodge structures over $B$. We have the following essential fact, which follows from the Griffiths transversality condition:
\begin{lma}
If $x\in\CLie^\vee_{X/B,S,v_0}$ of type $(p,q)$, and weight $p+q=n>2$, has $\delta(\text{\rm Cor}_\text{\rm Hod}(x))=0$, and $\text{\rm Cor}_\mathcal{H}(x|_b)=0$ at some $b\in B$, then $\text{\rm Cor}_\text{\rm Hod}(x)=0$.
\label{lma:d0rigid}
\end{lma}
\begin{proof}
If $\delta(\text{\rm Cor}_\text{\rm Hod}(x))=0$, then $\text{\rm Cor}_\text{\rm Hod}(x)\in\Ext^1(\mathbb{R}(0),\mathbb{R}(p,q)+\mathbb{R}(q,p))$, which is one-dimensional and rigid by the Griffiths transversality condition. Hence the variation is constant and captured by the period $p:\Ext^1(\mathbb{R}(0),\mathbb{R}(p,q)+\mathbb{R}(q,p))\to\mathbb{C}$.
\end{proof}
One of the main results of this paper is that the relations (\ref{eqn:ec_second_shuffle}) hold for the elements $\text{\rm Cor}_\text{\rm Hod}$: the equality between functions is upgraded to a relation in the fundamental Lie coalgebra of mixed Hodge structures.
\subsubsection{Motivic correlators}
Let $F$ be a number field. Beilinson's conjectures (\cite{beilinson-height-pairing}) predict that there is a category $\mathcal{MM}_F$ of mixed motives over $F$. Every object in $\mathcal{MM}_F$ should have a weight filtration, and there should be a functor $\text{\rm gr}^W:\mathcal{MM}_F\to\mathcal{PM}_F$, where $\mathcal{PM}_F$ is the category of pure motives over $F$. For every embedding $\sigma:F\to\mathbb{C}$, there should be a realization functor $r_\sigma:\mathcal{MM}_F\to\mathrm{MHS}_F$. For every simple object $M\in\mathcal{MM}_F$, there should be an injective regulator map
\[\text{\rm reg}:\Ext_{\mathcal{MM}_F}^1(\mathbb{Q}(0),M)\to\bigoplus_{F\to\mathbb{C}/\text{complex conj.}}\Ext^1_{\mathrm{MHS}_\mathbb{R}}(\mathbb{R}(0),r_\sigma(M)).\]
The \emph{fundamental (motivic) Lie algebra} $\text{\rm Lie}_{\text{\rm Mot}/F}$ is the algebra of tensor derivations of the functor $\text{\rm gr}^W$, a graded Lie algebra in the category $\mathcal{PM}_F$, and $\mathcal{MM}_F$ is equivalent to the category of graded $\text{\rm Lie}_{\text{\rm Mot}/F}$-modules. An embedding $\sigma$ induces a map $r_\sigma:\text{\rm Lie}_{\text{\rm Mot}/F}^\vee\to\text{\rm Lie}_\text{\rm Hod}^\vee$.
Let $X$ be a curve defined over $F$, $S\subset X(F)$ a finite set of punctures, and $v_0$ the distinguished tangent vector at $s\in S$. There is expected to be a \emph{motivic fundamental group} $\pi_1^\text{\rm Mot}(X\setminus S,v_0)$, a prounipotent group scheme in the category $\mathcal{MM}_F$. The Hodge realization of its Lie algebra should be $\pi_1^\text{\rm nil}(X\setminus S,v_0)$. As it is an object in $\mathcal{MM}_F$, there is an action $\text{\rm Lie}_{\text{\rm Mot}/F}\to\text{\rm Der}\pq{\text{\rm gr}^W\pi_1^\text{\rm Mot}}$.
The construction of the Hodge correlator coalgebra $\CLie^\vee_{X,S,v_0}$ can be upgraded to the motivic setting, simply by replacing all the Hodge-theoretic objects by their motivic avatars.
For example, the definition of the \emph{motivic correlator coalgebra} mimics that of its Hodge realization:
\begin{equation*}
\pq{\CLie^\text{\rm Mot}_{X,S,v_0}}^\vee:=\f{T\pq{(\mathbb{Q}(1)^{S})^\vee\oplus H^1(X)}}{\text{relations}}\otimes H_2(X),
\end{equation*}
a graded Lie coalgebra in the category of pure motives over $F$, where the relations imposed are the cyclic symmetry, first shuffles, and quotient by nonpositive weight. Then $\CLie^\text{\rm Mot}_{X,S,v_0}$ is isomorphic to the algebra of special derivations of $\text{\rm gr}^W\pi_1^\text{\rm Mot}(X-S,v_0)$, and there is a \emph{motivic correlator map}
\begin{equation*}
\text{\rm Cor}_\text{\rm Mot}:\pq{\CLie^\text{\rm Mot}_{X,S,v_0}}^\vee\to\text{\rm Lie}_{\text{\rm Mot}/F}^\vee.
\end{equation*}
We will write $\text{\rm Cor}_\text{\rm Mot}(x_0,\dots,x_n)$ for $\text{\rm Cor}_\text{\rm Mot}(\pq{x_0\otimes\dots\otimes x_n}(1))$.
Fix an embedding $\sigma:F\to\mathbb{C}$. We have the composition of the realization functor with the period map:
\[
\text{\rm Cor}_\mathcal{H}\circ r_\sigma:\pq{\CLie^\text{\rm Mot}_{X,S,v_0}}^\vee\otimes\mathbb{C}\to\CLie_{X,S,v_0}^\vee\otimes\mathbb{C}\to\mathbb{C}.
\]
By Theorem~\ref{thm:hc_main_point}, it coincides with the composition
\[
\pq{\CLie^\text{\rm Mot}_{X,S,v_0}}^\vee\to\text{\rm Lie}_\text{\rm Mot}^\vee\to\text{\rm Lie}_\text{\rm Hod}^\vee\to\mathbb{C}.
\]
We can summarize the objects and maps described above as follows:
\[
\xymatrix{
\text{\rm Der}^S(\text{\rm gr}^W\pi_1^\text{\rm Mot}(X\setminus S,v_0))^\vee\ar@{-}[r]
&(\CLie_{X,S,v_0}^\text{\rm Mot})^\vee\ar[r]^{\quad\text{\rm Cor}_\text{\rm Mot}}\ar[d]^r
&\text{\rm Lie}_{\text{\rm Mot}/F}^\vee\ar[d]^r
\\
\text{\rm Der}^S(\text{\rm gr}^W\pi_1^\text{\rm nil}(X\setminus S,v_0))^\vee\ar@{-}[r]
&(\CLie_{X,S,v_0}^\vee)\ar[r]^{\quad\text{\rm Cor}_\text{\rm Hod}}\ar[dr]_{\text{\rm Cor}_\mathcal{H}}&\text{\rm Lie}_\text{\rm Hod}^\vee\ar[d]^p
\\
&&\mathbb{C}.
}
\]
Relations among the motivic correlators can be proven by showing that they hold in the Hodge realization under any complex embedding. Precisely, there is the following fact, which is an immediate consequence of the (hypothetical) injectivity of the regulator and Lemma~\ref{lma:d0rigid}.
\begin{lma}
Suppose $x\in\pq{\CLie_{X,S,v_0}^\text{\rm Mot}}^\vee$ is of type $(p,q)$ with weight $p+q>2$, $\delta\text{\rm Cor}_\text{\rm Mot}(x)=0$, and $\text{\rm Cor}_\mathcal{H}(r(x))=0$ for every embedding $r:F\to\mathbb{C}$. Then $\text{\rm Cor}_\text{\rm Mot}(x)=0$.
\label{lma:d0h0_rational}
\end{lma}
This fact allows us to lift relations on Hodge correlators to relations on motivic correlators. In particular, all results in this paper -- the second shuffle relations for Hodge correlators and the map from the Bianchi complexes to an algebra of Hodge correlators -- should hold with ``Hodge'' replaced by ``motivic''.
Assuming the motivic formalism, the results in the Hodge realization can then be translated to the $\ell$-adic realization, via the motivic correlators. In particular, the results stated in the introduction would hold for the $\ell$-adic elliptic Galois algebra.
\section{Motivic correlators on elliptic curves}
\label{sec:mc_elliptic_curves}
\subsection{Main properties}
\subsubsection{Definitions}
We work with a complex elliptic curve $E$. Recall $S\subset E$ is a finite set of punctures. Let $\mathcal{O}=\text{\rm End}(E)$, so $\mathcal{O}$ is either $\mathbb{Z}$ or an order in an imaginary quadratic field.
Let $\omega,\ol\omega$ be a symplectic basis for $H^1(E;\mathbb{C})$. $\mathcal{CL}_{E,S,v_0}^\vee$ is generated by elements
\begin{align*}
C_s(\Omega_0,s_0,\dots,\Omega_n,s_n)
=&\underbrace{\omega_{0,1}\otimes\dots\otimes\omega_{0,k_0}}_{\Omega_0}\otimes\cq{s_0}\\
&\otimes\underbrace{\omega_{1,1}\otimes\dots\otimes\omega_{1,k_1}}_{\Omega_1}\otimes\cq{s_1}\\
&\otimes\cdots\\
&\otimes\underbrace{\omega_{n,1}\otimes\dots\otimes\omega_{n,k_n}}_{\Omega_n}\otimes\cq{s_n}%
\end{align*}
where $s_i\in S$ and the $\Omega_i$ range over the basis of $T_\mathbb{Z}(H^1(E,\mathbb{C}))$ consisting of elements $\bigotimes_{j=1}^{k_i}\omega_{i,j}$ with $\omega_{i,j}\in\cq{\omega,\ol\omega}$. This generator lies in the component of $\mathcal{CL}_{E,S,v_0}^\vee$ of depth $n$ and weight $2n+\sum_{i=0}^nk_i$.
Suppose a tangent vector $v_s$ has been chosen at each $s\in S$. We assemble the $\mathcal{CL}_{E,S,v_s}^\vee$ as the base point $s$ ranges over $S$ into a Lie coalgebra
\[\widetilde\mathcal{CL}_{E,S}^\vee:=\bigoplus_{s\in S}\mathcal{CL}_{E,S,v_s}^\vee.\] All direct summands are isomorphic, but the maps $\text{\rm Cor}_\text{\rm Hod}$ on different components do not coincide. We will write $\text{\rm Cor}_s$ as a short notation for the map $\text{\rm Cor}_\text{\rm Hod}$ on the component corresponding to $s$, extended so that $\text{\rm Cor}_s(s,\dots)=0$, i.e., the correlator of an element that contains the base point vanishes.
\subsubsection{Generating series}
We will package the correlators of depth $n$ into generating series in $2(n+1)$ commuting formal variables $t_0,\ol t_0,t_1,\ol t_1,\dots,t_n,\ol t_n$. We identify $t_i,\ol t_i$ with generators of $H_1(E,\mathbb{Z})$ dual to $\omega,\ol\omega$. That is, the monomials in the $t_i,\ol t_i$ are identified with the generators of $\bigotimes_{i=0}^n\Sym(H_1(E,\mathbb{Z}))$.
For $x_0,\dots,x_n\in S$ and $s\in S$, define the generating series
\begin{align}
&\Theta_s\pg{x_0:x_1:\dots:x_n}{t_0:t_1:\dots:t_n}\\
=\,&\sum_{\Omega_0,\dots,\Omega_n}\text{\rm Cor}_s\pq{\Omega_0,x_0,\dots,\Omega_n,x_n}\,(\Omega_0^{*}\otimes\dots\otimes\Omega_n^{*}),\label{eqn:def_theta}
\end{align}
where the sum is taken over the basis of $T_\mathbb{Z}(H^1(E,\mathbb{C}))$ as above. The coefficient of $\prod_it_i^{m_i}\ol t_i^{m_i'}$ is the sum of all generators in which $m_i$ copies of $\omega$ and $m_i'$ copies of $\ol\omega$ appear between $x_{i-1}$ and $x_i$ (indices modulo $n+1$). Letting $S_{m,m'}$ be the sum of generators of the degree-$(m,m')$ component of $T_\mathbb{Z}(H^1(E,\mathbb{C}))$, i.e., the sum of all permutations of $\omega^{\otimes m}\otimes\ol\omega^{\otimes m'}$, this sum can be written
\begin{equation}
\text{\rm Cor}_s\pq{S_{m_0,m_0'}\otimes(x_0)\otimes S_{m_1,m_1'}\otimes(x_1)\otimes\dots\otimes S_{m_n,m_n'}\otimes(x_n)}.
\label{eqn:symm_corr}
\end{equation}
These coefficients are called the \emph{symmetric} Hodge correlators.
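For example, $S_{0,0}=1$, $S_{1,0}=\omega$, $S_{0,1}=\ol\omega$, and \[S_{1,1}=\omega\otimes\ol\omega+\ol\omega\otimes\omega,\qquad S_{2,1}=\omega\otimes\omega\otimes\ol\omega+\omega\otimes\ol\omega\otimes\omega+\ol\omega\otimes\omega\otimes\omega.\]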
We also define, for $w_0,\dots,w_n\in E$ with $w_0+\dots+w_n=0_E$,
\begin{align*}
\Theta^{*}_s\pg{w_0,w_1,\dots,w_n}{t_0:t_1:\dots:t_n}&=\Theta_s\pg{0:w_1:w_1+w_2:\dots:w_1+\dots+w_n}{t_0:t_1:\dots:t_n},
\end{align*}
and, for $u_0+\dots+u_n=0$,
\begin{align*}
\Theta_s\pg{x_0:x_1:\dots:x_n}{u_0,u_1,\dots,u_n}&=\Theta_s\pg{x_0:x_1:\dots:x_n}{0:u_1:u_1+u_2:\dots:u_1+\dots+u_n}.
\end{align*}
The subspace generated by the elements of $\CLie_{E,S}^\vee$ having the form of the argument of (\ref{eqn:symm_corr}) is dual to a certain quotient of the Lie algebra $\text{\rm Der}^S(\text{\rm gr}^W\pi_1^{\rm nil}(E-S,v_0))$. This is the quotient by the image of the adjoint action of $H_1(E;\mathbb{Z})$ mentioned in the introduction. In depth 0 and weight $>1$, the elements (\ref{eqn:symm_corr}) vanish, by the shuffle relations. In depth 0 and weight 1 -- i.e., elements $\text{\rm Cor}(\omega_1,s_0)$ -- the elements are identified with elements $[s_0]-[s]$ in the Jacobian of $E$ (see \cite{goncharov-hodge-correlators}, \S10.5), and, in particular, vanish if $s$ and $s_0$ are torsion points. As we will see below, modulo the depth filtration, the symmetric correlators form a subcoalgebra, as the terms $\delta_{\rm Cas}$ of the coproduct vanish.
Now let us establish some basic properties of the generating series.
\begin{lma}
\begin{enumerate}[(a)]
\item For $n>0$, the generating series $\Theta\pg{:}{:}$ are homogeneous in the $t_i$ and satisfy the dihedral symmetry relations:
\begin{align*}
&\Theta_s\pg{x_0:\dots:x_n}{t_0:\dots:t_n}\\
=\,&\Theta_s\pg{x_0+x:\dots:x_n+x}{t_0+t:\dots:t_n+t}&\text{(homogeneity)}\\
=\,&\Theta_s\pg{x_1:\dots:x_n:x_0}{t_1:\dots:t_n:t_0}&\text{(cyclic symmetry)}\\
=\,&(-1)^{n+1}\Theta_s\pg{x_n:\dots:x_1:x_0}{t_n:\dots:t_1:t_0}.&\text{(reflection)}
\end{align*}
\item For an automorphism $\phi\in\Aut(E)$,
\[
\Theta_s\pg{x_0:\dots:x_n}{t_0:\dots:t_n}
=
\Theta_s\pg{\phi(x_0):\dots:\phi(x_n)}{\phi\cdot t_0:\dots:\phi\cdot t_n},
\]
where $\phi$ acts on the $t_i$ by the adjoint action on $H_1(E,\mathbb{Z})$.
\item The elements $\Theta_s\pg{x_0:x_1:\dots:x_n}{u_0,u_1,\dots,u_n}$ satisfy the first shuffle relations:
\begin{equation}
\sum_{\sigma\in\Sigma_{i,j}}\Theta_s\pg{x_{\sigma^{-1}(1)}:\dots:x_{\sigma^{-1}(i+j)}:x_0}{u_{\sigma^{-1}(1)},\dots,u_{\sigma^{-1}(i+j)},u_0}=0.
\label{eqn:gf_first_shuffle}
\end{equation}
\end{enumerate}
\label{lma:theta_dihedral}
\end{lma}
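For instance, for $(i,j)=(1,1)$, the relation (\ref{eqn:gf_first_shuffle}) reads \[\Theta_s\pg{x_1:x_2:x_0}{u_1,u_2,u_0}+\Theta_s\pg{x_2:x_1:x_0}{u_2,u_1,u_0}=0.\]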
\begin{proof}
The dihedral symmetry relations in (a) and the relation (b) are clear from the definition of Hodge correlators.
The difficult part is homogeneity in $t_i$ and the first shuffle relation. For the former, it is enough to show
\[
\Theta_s\pg{x_0:\dots:x_n}{0:t_1:\dots:t_n}
=
\Theta_s\pg{x_0:\dots:x_n}{t_0:t_0+t_1:\dots:t_0+t_n}.
\]
Consider the coefficient of $\prod_it_i^{m_i}\ol{t_i}^{m_i'}$ in the sum (\ref{eqn:def_theta}) defining each side. For each $i$, fix an ordering $\omega_{i,1}\dots\omega_{i,m_i+m_i'}$ of the word $\omega^{m_i}\ol\omega^{m_i'}$ and look at the terms in this coefficient in which the elements indexed by $t_i$ appear in the order specified by the word.
If $m_0=m_0'=0$, then both sides have exactly one such term, \[\text{\rm Cor}_\text{\rm Hod}\pq{x_0,\omega_{1,1}\otimes\dots\otimes\omega_{1,m_1+m_1'},x_1,\dots,\omega_{n,1}\otimes\dots\otimes\omega_{n,m_n+m_n'},x_n},\] and they coincide. Otherwise, the coefficient on the left side is 0, while the terms on the right side add up to the first shuffle relation in which the word $\omega_{0,1}\otimes\dots\otimes\omega_{0,m_0+m_0'}$ is shuffled with the remaining arguments after the fixed point $x_0$; hence their sum is also 0. This proves homogeneity in the $t_i$.
Finally, (c) also follows from the first shuffle relation on the coefficients. To obtain the relation where $\cq{1,\dots,i}$ is shuffled with $\cq{i+1,\dots,i+j}$, we keep $x_0$ fixed and shuffle $x_1,\dots,x_i$ and the forms indexed by $u_1,\dots,u_i$ with the other elements. (The proofs are identical to those for correlators on $\mathbb{P}^1$; see \cite{malkin-shuffle}, Lemma 17.)
\end{proof}
\subsubsection{Coproduct}
The coproduct of the generating function $\Theta_s$ is in general difficult to write down. However, we can describe the terms of highest depth, which come from the $\delta_S$ component of the coproduct.
\begin{lma}
The coproduct of the generating functions $\Theta_s$ is given by
\begin{align*}
&\delta\Theta_s\pg{x_0:\dots:x_n}{t_0:\dots:t_n}\\
=\,&\sum_{\rm cyc}\sum_{k=0}^n\Theta_s\pg{x_0:\dots:x_k}{t_0:\dots:t_k}\wedge\Theta_s\pg{x_k:x_{k+1}:\dots:x_n}{t_0:t_{k+1}:\dots:t_n} \\&+ \text{\rm lower-depth terms}.
\end{align*}
The coproduct of the generating functions $\Theta_s^{*}$ is given by
\begin{align}
&\delta\Theta_s^{*}\pg{x_0,\dots,x_n}{t_0:\dots:t_n}\nonumber\\
=\,&\sum_{\rm cyc}\sum_{k=0}^n\Theta_s^{*}\pg{-(x_1+\dots+x_k),x_1,\dots,x_k}{t_0:t_1:\dots:t_k}\wedge\Theta_s^{*}\pg{x_0,x_{k+1},\dots,x_n}{t_0:t_{k+1}:\dots:t_n} \nonumber\\&+ \text{\rm lower-depth terms}.\label{eqn:thetastar_coproduct}
\end{align}
The lower-depth terms are Hodge correlators of elements that do not depend on $s$.
\end{lma}
\begin{proof}
The formula for the coproduct of $\Theta_s$ arises from the definition of the $\delta_S$ term of the coproduct. The formula for the coproduct of $\Theta_s^{*}$ would follow immediately from that for $\Theta_s$ if the $\Theta_s$ were invariant under an additive shift of the arguments $x_i$. This is Theorem~\ref{thm:depth_indep} below, which is independent of (\ref{eqn:thetastar_coproduct}).
\end{proof}
These formulas for the coproduct formally coincide with those for the dihedral Lie coalgebra, defined by A.Goncharov in \cite{goncharov-dihedral} in order to study multiple polylogarithms, and with those for the quasidihedral Lie coalgebra modulo the depth filtration, defined by the author in \cite{malkin-shuffle} to study Hodge correlators on $\mathbb{P}^1$.
\subsection{Symmetric correlators modulo depth}
\label{sec:mc_modulo_depth}
In this section, $H^1(X)$ always refers to $H^1(X;\mathbb{C})$.
\subsubsection{Change of base point formula}
Fix $p\in S$. Let us define a map $\rho_p:T(H^1(X))\to T(H^1(X)\oplus \mathbb{Q}[S])$ as follows.
For a word $\omega_1\otimes\dots\otimes\omega_n\in T(H^1(X))$,
\[\rho_p(\omega_1\otimes\dots\otimes\omega_n)=\sum_k(-1)^k\sum_{\stackrel{i_1<i_2<\dots<i_k<n}{i_{j+1}>i_j+1}}\omega_1\otimes\dots\otimes(\gen{\omega_{i_j},\omega_{i_j+1}}(p))\otimes\dots\otimes\omega_n,\] where $\gen{,}$ is the skew-symmetric pairing: $\gen{\omega,\ol\omega}=-\gen{\ol\omega,\omega}=1$. That is, it is the sum over all possible replacements of non-overlapping adjacent pairs $(\omega\otimes\ol\omega)$ and $(\ol\omega\otimes\omega)$ by the puncture $p$, taken with the appropriate sign. For example, we have:
\begin{align*}
\rho_p(1)&=1,\\
\rho_p(\omega)&=\omega,\\
\rho_p(\omega\otimes\omega)&=\omega\otimes\omega,\\
\rho_p(\omega\otimes\ol\omega)&=(\omega\otimes\ol\omega)-(p),\\
\rho_p(\omega\otimes\ol\omega\otimes\omega)&=(\omega\otimes\ol\omega\otimes\omega)-(p\otimes\omega)+(\omega\otimes p),\\
\rho_p(\omega\otimes\ol\omega\otimes\omega\otimes\ol\omega)&=(\omega\otimes\ol\omega\otimes\omega\otimes\ol\omega)-(p\otimes\omega\otimes\ol\omega)-(\omega\otimes\ol\omega\otimes p)+(\omega\otimes p\otimes\ol\omega)+(p\otimes p).
\end{align*}
For $a\in S$, define $\rho_p(a)=(a)-(p)$, extended by linearity to $\mathbb{Q}[S]$. Then, extend $\rho_p$ to the cyclic tensor algebra $CT(H^1(X)\oplus\mathbb{Q}[S])$: if $x_0,\dots,x_k\in\mathbb{Q}[S]$, and $\Omega_0,\dots,\Omega_k\in T(H^1(X))$, then
\[\rho_p(\Omega_0\otimes x_0\otimes\dots\otimes \Omega_k\otimes x_k)=\rho_p(\Omega_0)\otimes\rho_p(x_0)\otimes\dots\otimes\rho_p(\Omega_k)\otimes\rho_p(x_k).\]
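For instance, \[\rho_p\pq{(\omega\otimes\ol\omega)\otimes(a)}=\pq{\omega\otimes\ol\omega-(p)}\otimes\pq{(a)-(p)}=\omega\otimes\ol\omega\otimes a-\omega\otimes\ol\omega\otimes p-p\otimes a+p\otimes p.\]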
\begin{lma}[Change of base point formula]
Suppose that $p\neq q$. Then the following relation holds for Hodge correlators in weight $>2$:
\begin{equation}
\text{\rm Cor}_p(x)=\text{\rm Cor}_q\pq{\rho_p(x)}.
\label{eqn:change_base_point}
\end{equation}
\label{lma:change_base_point}
\end{lma}
On the right side stands the sum of the correlators obtained from the one on the left by all possible replacements of punctures and of pairs of adjacent cohomology classes with $(p)$, taken with the appropriate signs.
Before proceeding to the proof, let us illustrate the formula on some examples. In weight $4$,
\begin{align*}
\text{\rm Cor}_p(a,b,c)&=\text{\rm Cor}_q(a,b,c)-\text{\rm Cor}_q(p,b,c)-\text{\rm Cor}_q(a,p,c)-\text{\rm Cor}_q(a,b,p),\\
\text{\rm Cor}_p(a,b,\omega,\ol\omega)&=\text{\rm Cor}_q(a,b,\omega,\ol\omega)\\&\quad-\text{\rm Cor}_q(p,b,\omega,\ol\omega)-\text{\rm Cor}_q(a,p,\omega,\ol\omega)-\text{\rm Cor}_q(a,b,p)+\text{\rm Cor}_q(p,p,\omega,\ol\omega),\\
\text{\rm Cor}_p(a,\omega,\ol\omega,\omega,\ol\omega)&=\text{\rm Cor}_q(a,\omega,\ol\omega,\omega,\ol\omega)\\&\quad-\text{\rm Cor}_q(p,\omega,\ol\omega,\omega,\ol\omega)-\text{\rm Cor}_q(a,p,\omega,\ol\omega)+\text{\rm Cor}_q(a,\omega,p,\ol\omega)-\text{\rm Cor}_q(a,\omega,\ol\omega,p)\\&\quad+\text{\rm Cor}_q(p,p,\omega,\ol\omega)-\text{\rm Cor}_q(p,\omega,p,\ol\omega)+\text{\rm Cor}_q(p,\omega,\ol\omega,p).
\end{align*}
If the left side of the expression only contains punctures, we recover a formula identical to the one found by \cite{goncharov-rudenko}, Theorem 2.6, for Hodge correlators on the punctured $\mathbb{P}^1$. More generally, for symmetric correlators, we have:
\begin{cly}
Suppose that $p\neq q$. Then we have the relation in weight $>2$:
\begin{align*}
&\text{\rm Cor}_p(S_{m_0,m_0'}\otimes x_0\otimes S_{m_1,m_1'}\otimes x_1\otimes\dots\otimes S_{m_n,m_n'}\otimes x_n)
\\&=
\text{\rm Cor}_q(S_{m_0,m_0'}\otimes((x_0)-(p))\otimes\dots\otimes S_{m_n,m_n'}\otimes((x_n)-(p)))
\\&=
\sum_k(-1)^k\sum_{i_1<\dots<i_k}\text{\rm Cor}_q(S_{m_0,m_0'}\otimes x_0\otimes \dots \otimes p\otimes\dots\otimes p \otimes\dots\otimes S_{m_n,m_n'}\otimes x_n),
\end{align*}
where on the right the punctures $x_{i_1},\dots,x_{i_k}$ are replaced with $p$.
\label{cly:sym_change_base_point}
\end{cly}
\begin{proof}
For all $m,m'\geq0$, $\rho_p(S_{m,m'})=S_{m,m'}$.
\end{proof}
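For instance, in the simplest nontrivial case, \[\rho_p(S_{1,1})=\pq{\omega\otimes\ol\omega-\gen{\omega,\ol\omega}(p)}+\pq{\ol\omega\otimes\omega-\gen{\ol\omega,\omega}(p)}=S_{1,1}-(p)+(p)=S_{1,1},\] since $\gen{\omega,\ol\omega}=-\gen{\ol\omega,\omega}=1$.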
\begin{proof}[Proof of Lemma~\ref{lma:change_base_point}]
We first prove the change of base point formula in the real Hodge realization, i.e., that it holds on the level of the Hodge correlator functions $\text{\rm Cor}_\mathcal{H}$.
The Green's functions associated to the points $p$ and $q$ are related by \[G_p(x,y)=G_q(x,y)-G_q(x,p)-G_q(y,p)+C,\] where $C$ is a constant that depends on the choices of tangent vectors at $p$ and $q$. Now consider any tree contributing to the Hodge correlator of $\Omega_0\otimes(x_0)\otimes\dots\otimes\Omega_k\otimes x_k$. Write the Green's function $G_p(x,y)$ assigned to each edge in terms of the $G_q$, and examine the contribution of the three terms $-G_q(x,p)$, $-G_q(y,p)$, and $C$ in $G_p(x,y)-G_q(x,y)$ for a given edge. There are three cases:
\begin{enumerate}[(1)]
\item An external edge $E$ decorated by a puncture $a$, assigned the function $G_p(x,a)$. Assigning the function $-G_q(x,p)$ to $E$ gives the correlator where $a$ has been replaced by $-(p)$. The terms $C$ and $-G_q(a,p)$ are constants. Because the Hodge correlator has weight $>2$, there is at least one internal edge in the tree, so the correlator where a constant has been placed on $E$ is the integral of an exact form $d^\mathbb{C}(\dots)$ and therefore vanishes.
\item An internal edge $E$ that splits the tree into two parts, one of which is decorated by two 1-forms. Suppose that in $G_p(x,y)$, the vertex assigned the variable $x$ is adjacent to external vertices labeled $\omega_1$ and $\omega_2$. Then the terms $G_q(x,p)$ and $C$ are independent of $y$, and the integral splits into a product; the integrand for the subtree growing from $y$ is an exact form, so we get 0. For the term $-G_q(y,p)$, the integral also splits into a product of $\int_E\omega_1\wedge\omega_2$ and the correlator with $x$ replaced by an external vertex $-(p)$.
\item An internal edge $E$ that splits the tree into two parts, each of which is decorated by at least one puncture. Then, as in the previous case, each term in the expression for $G_p(x,y)$ is independent of either $x$ or $y$. The integral splits into a product of two factors, one of which is 0.
\end{enumerate}
We conclude that the change of base point is computed by adding all possible replacements of external punctures $a$ by $-(p)$ and pairs $\omega_1\otimes\omega_2$ by $-\gen{\omega_1,\omega_2}(p)$. This implies the lemma.
(Note that the assumption of weight $>2$ was crucial to all arguments involving integration of the exact form.)
One easily checks by induction that the coproducts of the two sides of (\ref{eqn:change_base_point}) are equal. This implies the result on the level of the Hodge correlators $\text{\rm Cor}_\text{\rm Hod}$.
\end{proof}
\subsubsection{Independence on base point}
In this part, we prove the following important result.
\begin{thm}
The symmetric Hodge correlators in weight $>2$ are independent of the base point modulo the depth filtration.
Precisely, let $x\in CT(H^1(X)\oplus\mathbb{Q}[S])$. Then there exists $\widetilde x$, equal to $x$ modulo lower-depth terms, such that $\text{\rm Cor}_p(\widetilde x)$ is independent of $p$.
\label{thm:depth_indep}
\end{thm}
In terms of generating functions, this theorem implies:
\begin{cly}
The generating functions $\Theta^{*}$ satisfy the dihedral symmetry relations of Lemma~\ref{lma:theta_dihedral}:
\begin{align*}
&\Theta^{*}_s\pg{w_0,\dots,w_n}{t_0:\dots:t_n}\\
=\,&\Theta^{*}_s\pg{w_1,\dots,w_n,w_0}{t_1:\dots:t_n:t_0}\\
=\,&(-1)^{n+1}\Theta^{*}_s\pg{w_n,\dots,w_1,w_0}{t_n:\dots:t_1:t_0}
\end{align*}
modulo lower-depth terms that are independent of $s$.
\label{cly:thetastar_dihedral}
\end{cly}
\begin{proof}
By the cyclic symmetry and dihedral relations on correlators, these expressions are equal up to an additive shift of the correlators' arguments, equivalently a change of base point; by Theorem~\ref{thm:depth_indep}, such a change only affects lower-depth terms that are independent of $s$.
\end{proof}
Notice that all terms on the right side of (\ref{eqn:change_base_point}) have depth \emph{higher} than or equal to that of the left side. It will be necessary to find correction terms of lower depth to obtain a formula of the form \[\text{\rm Cor}_p(h_0\otimes x_0\otimes\dots\otimes h_k\otimes x_k)+\text{\rm Cor}_p(\text{lower depth})=\text{\rm Cor}_q(h_0\otimes x_0\otimes\dots\otimes h_k\otimes x_k)+\text{\rm Cor}_q(\text{l.d.})\] when each $h_i$ is a symmetric expression $S_{m,m'}$.
The proof of the theorem relies on a key construction. We will find elements:
\[S_{m_0,m_0'}*S_{m_1,m_1'}*\dots*S_{m_n,m_n'}\in T(H^1(X))\]
such that
\begin{align}
&\rho_p(S_{m_0,m_0'}*S_{m_1,m_1'}*\dots*S_{m_n,m_n'})\label{eqn:star_property}
\\&=
\sum_k\sum_{i_1<\dots<i_k} \pq{S_{m_0,m_0'}*\dots*S_{m_{i_1-1},m_{i_1-1}'}}\otimes(p)\otimes\pq{S_{m_{i_1},m_{i_1}'}*\dots*S_{m_{i_2-1},m_{i_2-1}'}}\otimes(p)\otimes\dots.\nonumber
\end{align}
Before showing how to construct these elements, let us prove the theorem, assuming these elements exist.
\begin{proof}[Proof of Theorem~\ref{thm:depth_indep}]
Consider an element
\[x=S_{m_0,m_0'}\otimes x_0\otimes S_{m_1,m_1'}\otimes x_1\otimes\dots\otimes S_{m_n,m_n'}\otimes x_n.\]
Let $I$ be a proper subset of $\cq{0,\dots,n}$. Write $I$ as the union of its cyclically contiguous subsets, each of the form $\cq{i,i+1,\dots,i+k}$ (indices modulo $n+1$). Let $x_{/I}$ be the element formed by replacing each segment \[S_{m_i,m_i'}\otimes x_i\otimes S_{m_{i+1},m_{i+1}'}\otimes\dots\otimes x_{i+k}\otimes S_{m_{i+k+1},m_{i+k+1}'}\] by $S_{m_i,m_i'}*\dots*S_{m_{i+k+1},m_{i+k+1}'}$.
Now consider the corrected element:
\[\widetilde x=\sum_I(-1)^{\aq I}x_{/I}.\]
It is equal to $x$ modulo the depth filtration. Also, let \[y_q=S_{m_0,m_0'}\otimes q\otimes S_{m_1,m_1'}\otimes q\otimes\dots\otimes S_{m_n,m_n'}\otimes q,\] and define $\widetilde y_q$ in the same way. By a standard inclusion-exclusion argument, the property (\ref{eqn:star_property}) implies that
\[\rho_p(\widetilde x+\widetilde y_q)=\widetilde x + \text{(terms containing $q$)}.\]
Because the correlator with base point $q$ is zero for the terms containing $q$, this gives
\[\text{\rm Cor}_p(\widetilde x+\widetilde y_q) = \text{\rm Cor}_q(\widetilde x).\] On the other hand, the Hodge correlator $\text{\rm Cor}_p(\widetilde y_q)$ depends only on $p-q$, so $p\mapsto\text{\rm Cor}_p(\widetilde x)-\text{\rm Cor}_0(\widetilde x)$ provides a group homomorphism $E\to\mathbb{R}$, which must be 0 because $E$ is compact and connected. Therefore, $\text{\rm Cor}_p(\widetilde x)$ is independent of $p$.
\end{proof}
\begin{lma}
There exist elements, independent of choice of symplectic basis of $H^1(X)$, satisfying (\ref{eqn:star_property}).
\label{lma:depth_correction_term}
\end{lma}
\begin{proof}
We produce such elements explicitly:
\[S_{m_0,m_0'}*\dots*S_{m_k,m_k'}=\f{1}{2^k}\sum_{n_0,n_0',\dots,n_k,n_k'}\pm S_{n_0,n_0'}\otimes S_{n_1,n_1'}\otimes\dots\otimes S_{n_k,n_k'},\]
where the sum is taken over the $n_i,n_i'\geq0$ such that:
\begin{align*}
n_i+n_i'&=\begin{cases}m_i+m_i'+1&i=0,k\\m_i+m_i'+2&0<i<k\end{cases},\\
(n_0-n_0')+\dots+(n_k-n_k')&=(m_0-m_0')+\dots+(m_k-m_k').
\end{align*}
A term is taken with the sign $-$ if there is an odd number of $i$ ($i=0,\dots,k-1$) such that
\[(n_0-n_0')+\dots+(n_i-n_i')<(m_0-m_0')+\dots+(m_i-m_i'),\]
otherwise with the sign $+$.
Examples:
\begin{align*}
S_{0,0}*S_{0,0}&=\f12\pq{\omega\ol\omega-\ol\omega\omega},\\
S_{0,0}*S_{1,0}&=\f12\pq{\omega\omega\ol\omega+\omega\ol\omega\omega-\ol\omega\omega\omega},\\
S_{0,0}*S_{0,0}*S_{0,0}&=\f14\pq{\omega\omega\ol\omega\ol\omega+\omega\ol\omega\omega\ol\omega-\omega\ol\omega\ol\omega\omega-\ol\omega\omega\omega\ol\omega+\ol\omega\omega\ol\omega\omega+\ol\omega\ol\omega\omega\omega}.
\end{align*}
We explain the construction by picture. The basis elements of $T(H^1(X))$ of a given weight are in bijection with lattice paths: a word $\omega_1\otimes\dots\otimes\omega_n$ corresponds to the path whose $i$-th step is $(1,0)$ if $\omega_i=\omega$ and $(0,1)$ if $\omega_i=\ol\omega$. The basis elements of $T(H^1(X)\oplus\mathbb{Q}[p])$ correspond to lattice paths that also allow the diagonal step $(1,1)$, corresponding to $(p)$. (The points of the lattice path are simply the Hodge bidegrees of the initial subwords.) The map $\rho_p$ replaces a path by the sum of all paths obtained by replacing pairs of consecutive steps (up, right) or (right, up) with diagonal steps, in the former case changing the sign.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[height=0.25\textwidth]{figures/lp1.pdf}&
\includegraphics[height=0.25\textwidth]{figures/lp2.pdf}\\
$S_{m_0,m_0'}\otimes(p)\otimes S_{m_1,m_1'}$&$S_{m_0,m_0'} * S_{m_1,m_1'}$
\end{tabular}
\simplecap{fig:lp1}{Construction of the element $S_{m_0,m_0'}*S_{m_1,m_1'}$: the paths crossing the rays marked $-$ and $+$ are taken with the corresponding sign.}
\end{figure}
To construct the element, we first consider the concatenation of the paths in $S_{m_0,m_0'}$, \dots, $S_{m_k,m_k'}$, with a step $(1,1)$ inserted between each pair. Draw a diagonal line $\ell_i$ bisecting the step that was inserted between $S_{m_{i-1},m_{i-1}'}$ and $S_{m_i,m_i'}$. Any path of Hodge bidegree $\pq{\sum m_i+k,\sum m_i'+k}$ appears in a unique term $S_{n_0,n_0'}\otimes\dots\otimes S_{n_k,n_k'}$, with each factor $S_{n_i,n_i'}$ corresponding to the segment of the path between the lines $\ell_i$ and $\ell_{i+1}$. The sign of a path is determined by the rays on which it crosses the diagonal lines: $+$ if below the inserted step, $-$ if above. See Figure~\ref{fig:lp1}.
\begin{figure}[t]
\includegraphics[height=0.25\textwidth]{figures/lp3.pdf}
\includegraphics[height=0.25\textwidth]{figures/lp4.pdf}
\simplecap{fig:lp2}{The point of nonconcavity contributing a term to the right side of (\ref{eqn:star_property}).}
\end{figure}
Now fix a choice of a ray of each such diagonal, and consider the terms coming from lattice paths crossing these rays. We claim that the sum of these terms satisfies (\ref{eqn:star_property}) up to a factor of $\f{1}{2^k}$. Indeed, these are the lattice paths lying in a certain rectilinear region (right part of the figure). Most terms in $\rho_p$ are canceled; the only terms remaining are those with steps $(1,1)$ at the points of nonconcavity of this region. This is precisely the expression on the right of (\ref{eqn:star_property}). See Figure~\ref{fig:lp2}.
\end{proof}
The simplest example of the corrected correlator, for $(a)\otimes(b)\otimes(c)$:
\begin{align*}
(a)\otimes(b)\otimes(c)&-\f12\pq{\omega\otimes\ol\omega-\ol\omega\otimes\omega}\otimes(b)\otimes(c)\\&-\f12(a)\otimes\pq{\omega\otimes\ol\omega-\ol\omega\otimes\omega}\otimes(c)\\&-\f12(a)\otimes(b)\otimes\pq{\omega\otimes\ol\omega-\ol\omega\otimes\omega}.
\end{align*}
(The terms where two points were replaced are 0, because of the reflection relations.)
\subsection{Second shuffle relations}
\subsubsection{The depth 2 case: dihedral symmetry}
\begin{thm}
The corrected symmetric Hodge correlators in depth 2 satisfy the second shuffle (dihedral symmetry) relations modulo terms of lower depth that are independent of the base point.
Precisely, the corrected element for
\begin{align}
&S_{m_0,m_0'}\otimes(0)\otimes S_{m_1,m_1'}\otimes(x_1)\otimes S_{m_2,m_2'}\otimes(x_1+x_2) \nonumber\\+\, &S_{m_0,m_0'}\otimes(0)\otimes S_{m_2,m_2'}\otimes(x_2)\otimes S_{m_1,m_1'}\otimes(x_1+x_2)\label{eqn:elliptic_second_shuffle}
\end{align}
lies in the kernel of the map $\text{\rm Cor}_s$ for every $s$.
\label{thm:dihedral_depth2}
\end{thm}
\begin{proof}
The corrected element for (\ref{eqn:elliptic_second_shuffle}) changes sign under the map $x\mapsto(x_1+x_2-x)$ composed with reflection. On the other hand, it is invariant under this operation up to an additive shift (i.e., a change of base point).
\end{proof}
\subsubsection{Relations in higher depth}
The \emph{second shuffle relations} are relations of the form
\[\sum_{\sigma\in\Sigma_{i,j}}\Theta^{*}\pg{x_0,x_{\sigma^{-1}(1)},x_{\sigma^{-1}(2)},\dots,x_{\sigma^{-1}(i+j)}}{t_0:t_{\sigma^{-1}(1)}:t_{\sigma^{-1}(2)}:\dots:t_{\sigma^{-1}(i+j)}}+\dots,\]
perhaps with additional terms of lower depth. The Hodge correlators on $\mathbb{P}^1$ are known to obey such relations in addition to the first shuffle relations, which are the structural relations in $\mathcal{CL}^\vee_{X,S,v_0}$; the lower-depth terms were described precisely by \cite{malkin-shuffle}.
The relation of Theorem~\ref{thm:dihedral_depth2} is a special case of a second shuffle relation. In depth $>2$, the second shuffle relations are not equivalent to dihedral symmetry. However, one hopes for a generalization.
\begin{conj}
The second shuffle relations for symmetric elliptic Hodge correlators hold modulo the depth filtration. The lower-depth terms are independent of the base point $s$.
\label{cnj:second_shuffle}
\end{conj}
The lower-depth correction terms in depth $>2$ are not known. In particular, the corrected correlators do not satisfy the second shuffle relations in higher depth. However, calculations in low weight support this conjecture. We may expect the elliptic relations to be deformations of the relations for $\mathbb{P}^1$ (see \S\ref{sec:degen}).
\section{Bianchi hyperbolic threefolds and modular complexes}
\label{sec:bianchi_and_modular}
\subsection{Bianchi tesselations and orbifolds}
\subsubsection{Definition}
Let $K=\mathbb{Q}[\sqrt{-d}]$ be an imaginary quadratic field with ring of integers $\mathcal{O}$. The \emph{Bianchi tesselation} (\cite{bianchi}) is an ideal polyhedral tesselation of the upper half-space $\mathbb{H}^3$ associated with $\mathcal{O}$, whose cell complex has a natural structure of a complex of $\mathrm{GL}_2(\mathcal{O})$-modules. We define it now.
Let $\ol\mathcal{F}$ be the space of positive semidefinite Hermitian forms on $(\mathcal{O}^2\otimes_\mathcal{O}\mathbb{C})^*$. The subset $\mathcal{F}$ of positive definite forms is a dense open subset of $\ol\mathcal{F}$. We identify $\mathbb{H}^3$ and its compactification $\ol\mathbb{H}^3=\mathbb{H}^3\cup\mathbb{P}^1(\mathbb{C})$ with the real projectivizations of $\mathcal{F}$ and $\ol\mathcal{F}$, respectively. The action of $\mathrm{GL}_2(\mathbb{C})$ on $\mathbb{C}^2$ provides an action on $\ol\mathcal{F}$ that descends to an action on $\ol\mathbb{H}^3$.
Every $v\in\mathcal{O}^2$ provides a positive semidefinite form $\aq{\gen{-,v}}^2\in\partial\ol\mathcal{F}$. The convex hull of the set \[\cq{\aq{\gen{-,v}}^2:\text{$v$ a primitive vector in $\mathcal{O}^2$}}\] is a polyhedron in $\ol\mathcal{F}$ with vertices on the boundary. The polyhedron projects to an ideal tesselation of $\mathbb{H}^3$ with vertices on $\mathbb{P}^1(\mathcal{O})\subset\mathbb{P}^1(\mathbb{C})$. Let $B^\bullet$ be the polyhedral cell complex over $\mathbb{Z}$ of this ideal tesselation. We will shift this complex in degree so that the space of $i$-dimensional cells is in degree $3-i$ ($i=0,1,2,3$). We get a cohomological complex \[B^0\tto\partial B^1\tto\partial B^2\tto\partial B^3.\]
The group $\mathrm{GL}_2(\mathcal{O})$ acts on the Bianchi tesselation, giving $B^\bullet$ the structure of a complex of left $\mathrm{GL}_2(\mathcal{O})$-modules.
The quotient $\mathrm{GL}_2(\mathcal{O})\setminus\mathbb{H}^3$ is a finite-volume hyperbolic threefold with cusps in bijection with the ideal class group of $\mathcal{O}$. If $\Gamma$ is a finite-index subgroup of $\mathrm{GL}_2(\mathcal{O})$, the quotient $\Gamma\setminus\mathbb{H}^3$ is also a finite-volume hyperbolic threefold with a finite map to $\mathrm{GL}_2(\mathcal{O})\setminus\mathbb{H}^3$.
A right $\mathrm{GL}_2(\mathcal{O})$-module $T$ provides a local system on $\Gamma\setminus\mathbb{H}^3$, which we also denote by $T$. Then the chain complex of $\mathrm{GL}_2(\mathcal{O})\setminus\mathbb{H}^3$ with coefficients in $T$ is \begin{equation}T\otimes_\Gamma B^\bullet\cong\pq{\mathbb{Z}[\Gamma\setminus\mathrm{GL}_2(\mathcal{O})]\otimes T}\otimes_{\mathrm{GL}_2(\mathcal{O})}B^\bullet.\label{eqn:loc_sys_cc}\end{equation}
\subsubsection{The Gaussian and Eisenstein cases}
Following \cite{goncharov-euler}, for $d=1$ ($\mathcal{O}=\mathbb{Z}[i]$) and $d=3$ ($\mathcal{O}=\mathbb{Z}[\rho]$) we have the following description of the Bianchi complexes in degrees 1 and 2.
The action of $\mathrm{GL}_2(\mathcal{O})$ is transitive on the $i$-dimensional cells for each $i$. Choose $\mathrm{GL}_2(\mathcal{O})$-generators $G_i\in B^i$: we may take
\begin{align*}
G_1&=\text{(the ideal triangle $(1,0,\infty)$)}\\
G_2&=\text{(the geodesic $(0,\infty)$)}
\end{align*}
where $(v_1,\dots,v_n)$, $v_i\in\mathbb{P}^1(\mathcal{O})=\mathbb{P}(V^2(\mathcal{O}))$, denotes the oriented cell with ideal vertices at $v_1,\dots,v_n$ under the identification of $\mathbb{P}^1(\mathbb{C})$ with the boundary of $\ol\mathbb{H}^3$.
Let $D_i$ be the subgroup of $\mathrm{GL}_2(\mathcal{O})$ stabilizing $G_i$.
The group $D_1$ stabilizing the triangle $(0,1,\infty)$ is isomorphic to \[S_3\times \mathcal{O}^\times.\] The first component $S_3$ acts on $(v,w)\in \mathcal{O}\oplus\mathcal{O}$ by permutations of the triple $(v,w,-v-w)$, i.e., the generators of $S_3$ are represented by
\[(123)\mapsto\begin{pmatrix}0&-1\\1&-1\end{pmatrix},\quad(12)\mapsto\begin{pmatrix}0&1\\1&0\end{pmatrix}.\] The second component acts by scalars. There is a sign homomorphism $\chi_1:D_1\to\cq{\pm1}$ keeping track of the action of $D_1$ on the orientation, with $\chi_1((123))=1$ and $\chi_1((12))=-1$. So the space of 2-cells is \[B^1=\mathbb{Z}[\mathrm{GL}_2(\mathcal{O})]\otimes_{D_1}\chi_1.\]
The group $D_2$ stabilizing the geodesic $(0,\infty)$ is isomorphic to \[S_2\ltimes(\mathcal{O}^\times\times\mathcal{O}^\times),\] with $S_2$ acting on $\mathcal{O}^\times\times\mathcal{O}^\times$ by permutation of the factors. The nontrivial element of $S_2$ acts by $(v,w)\mapsto(w,v)$ and $\mathcal{O}^\times\times\mathcal{O}^\times$ acts diagonally. There is a sign homomorphism $\chi_2:D_2\to\cq{\pm1}$, and the space of 1-cells is \[B^2=\mathbb{Z}[\mathrm{GL}_2(\mathcal{O})]\otimes_{D_2}\chi_2.\]
Let $\mathfrak{p}$ be a prime ideal in $\mathcal{O}$. The group $\mathrm{GL}_2(\mathcal{O})$ acts on the quotient $(\mathcal{O}/\mathfrak{p})^2$. Let $\Gamma_1(\mathfrak{p})$ be the stabilizer in $\mathrm{GL}_2(\mathcal{O})$ of the vector $(0,1)\in(\mathcal{O}/\mathfrak{p})^2$. The action on the vector $(0,1)$ provides an isomorphism of $\mathrm{GL}_2(\mathcal{O})$-modules \[\mathbb{Z}[\Gamma_1(\mathfrak{p})\setminus\mathrm{GL}_2(\mathcal{O})]\cong\mathbb{Z}[\mathbb{F}_\mathfrak{p}^2-0],\quad\mathbb{F}_\mathfrak{p}=\mathcal{O}/\mathfrak{p}.\]
The chain complex (\ref{eqn:loc_sys_cc}) of $\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3$ with coefficients in a local system $T$ is then identified in degrees 1 and 2 with
\begin{align*}
T\otimes_{\Gamma_1(\mathfrak{p})}B^\bullet
&\cong\pq{\mathbb{Z}[\Gamma_1(\mathfrak{p})\setminus\mathrm{GL}_2(\mathcal{O})]\otimes T}\otimes_{\mathrm{GL}_2(\mathcal{O})}B^\bullet
\\
&\cong\pq{\mathbb{Z}[\mathbb{F}_\mathfrak{p}^2-0]\otimes T}\otimes_{\mathrm{GL}_2(\mathcal{O})}\pq{\mathbb{Z}[\mathrm{GL}_2(\mathcal{O})]\otimes_{D_\bullet}\chi_\bullet}.%
\end{align*}
This space is generated in degree $i$ by elements
\[\pq{(\alpha,\beta)\otimes t}\otimes(G_i),\quad(\alpha,\beta)\in\mathbb{F}_\mathfrak{p}^2-0,\quad t\in T.\]
\subsection{Modular complexes}
\subsubsection{Definition}
Let $\mathcal{O}=\mathbb{Z}$ or the ring of integers in an imaginary quadratic field. We are going to define the modular complexes $M^\bullet_k$, complexes of left $\mathrm{GL}_k(\mathcal{O})$-modules that generalize the complexes defined by \cite{goncharov-polylogs-modular} for $\mathrm{GL}_k(\mathbb{Z})$.
Fix a free $\mathcal{O}$-module $V$ of rank $k$. An \emph{extended basis} of $V$ is a sequence of vectors $\gen{v_0,v_1,\dots,v_k}$, $v_i\in V$, such that $v_0+\dots+v_k=0$ and $v_1,\dots,v_k$ form a basis of $V$. (Consequently, any $k$ of the vectors $v_0,\dots,v_k$ form a basis.) We also use the notation
\begin{align*}
\bq{v_1,\dots,v_k}&=\gen{-v_1-\dots-v_k,v_1,v_2,\dots,v_k},\\
\bq{v_1:\dots:v_k}&=\bq{v_2-v_1,v_3-v_2,\dots,v_k-v_{k-1},-v_k}.
\end{align*}
The set $B_V$ of extended bases of $V$ is a principal homogeneous space for $\mathrm{GL}(V)$.
The complex of left $\mathrm{GL}_k(\mathcal{O})$-modules $M^\bullet_k$ lies in degrees $1,\dots,k$. The module $M^1_k$ is the quotient of $\mathbb{Z}[B_V]$ by the double shuffle relations
\begin{align}
\sum_{\sigma\in\Sigma_{i,j}}\bq{v_{\sigma^{-1}(1)}:\dots:v_{\sigma^{-1}(i+j)}}&=0,&\text{\it(first shuffle)} \label{eqn:modular_first_shuffle}\\
\sum_{\sigma\in\Sigma_{i,j}}\bq{v_{\sigma^{-1}(1)},\dots,v_{\sigma^{-1}(i+j)}}&=0.&\text{\it(second shuffle)}\label{eqn:modular_second_shuffle}
\end{align}
\begin{lma}[\cite{goncharov-polylogs-modular}, Theorem 4.1]
The double shuffle relations imply the dihedral symmetry relations:
\begin{align}
\gen{v_0,v_1,\dots,v_k}=\gen{v_1,\dots,v_k,v_0}=(-1)^{k+1}\gen{v_k,\dots,v_1,v_0}=\gen{-v_0,-v_1,\dots,-v_k}.\label{eqn:modular_dihedral}
\end{align}
\end{lma}
The module $M^n_k$ is generated by elements
\[[v_1,\dots,v_{k_1}]\wedge\dots\wedge[v_{k_{n-1}+1},\dots,v_{k_n}],\]
where each block $\bq{v_{k_{i-1}+1},\dots,v_{k_i}}$ is an extended basis of a sublattice $V_i$ in $V$, and $V=V_1\oplus\dots\oplus V_n$ (from which it follows that $k_n=k$). The double shuffle relations are imposed on each of the blocks, and the blocks anticommute.
The coproduct $\delta:M^1_k\to M^2_k$ is defined by
\[\delta\gen{v_0,v_1,\dots,v_k}=\sum_{\rm cyc}\sum_{i=1}^{k-1}[v_0,\dots,v_{i-1}]\wedge[v_{i+1},\dots,v_k],\]
with the outer cyclic sum over $\cq{0,1,\dots,k}$. The coproduct is extended by the Leibniz rule to the higher degrees, i.e.,
\[\delta(x_1\wedge\dots\wedge x_n)=\sum_{i=1}^n(-1)^{i+1}x_1\wedge\dots\wedge\delta(x_i)\wedge\dots\wedge x_n.\]
We will also consider the \emph{relaxed modular complex} $\widetilde M_k^n$, in which we impose only the first shuffle relations (\ref{eqn:modular_first_shuffle}) and the dihedral symmetry relations (\ref{eqn:modular_dihedral}). By the lemma, the modular complex is the quotient of the relaxed modular complex by the second shuffle relations (\ref{eqn:modular_second_shuffle}).
\subsubsection{Relating the Gaussian and Eisenstein Bianchi and modular complexes for $k=2$}
In this section, suppose $\mathcal{O}=\mathbb{Z}[i]$ or $\mathbb{Z}[\rho]$. We will construct an isomorphism between the modular complex $M_2\chain$ and the Bianchi complex $B\chain$ in degrees 1 and 2.
Recall that $B\chain$ is generated by the ideal triangle $(1,0,\infty)$ in degree 1 and the geodesic $(0,\infty)$ in degree 2, with the boundary map given by
\[(1,0,\infty)\mapsto(1,0)+(0,\infty)+(\infty,1).\]
The modular complex $M\chain_2$ is generated in degree 1 by the extended basis $[e_1,e_2]$, with the coproduct
\[\bq{e_1,e_2}\mapsto\bq{-e_1-e_2}\wedge\bq{e_2}+\bq{e_1}\wedge\bq{-e_1-e_2}+\bq{e_2}\wedge\bq{e_1}.\]
Making as before the identification of $\mathbb{P}^1(\mathcal{O})$ with $\mathbb{P}^1(V)$, define the map $\psi:M\chain_2\to B\chain$ by
\[
\psi\pq{\gen{v_1,v_2,v_3}}=\text{the triangle $(v_1,v_2,v_3)$},\quad \psi\pq{[v_1]\wedge[v_2]}=\text{the geodesic $(v_1,v_2)$}.
\]
\begin{lma}
The map $\psi$ is an isomorphism of complexes of $\mathrm{GL}_2(\mathcal{O})$-modules.
\label{lma:modular_bianchi}
\end{lma}
\begin{proof}
By construction, $\psi$ is a surjective map of abelian groups. We must verify (1) $\psi$ commutes with the action of $\mathrm{GL}_2(\mathcal{O})$, (2) $\psi$ commutes with the coproduct, (3) $\psi$ respects the double shuffle relations, and the images of the double shuffle and anticommutation relations are all relations in $B\chain$.
(1) holds by construction. For (2), notice that
\begin{align*}
\delta\bq{e_1,e_2}
&=\bq{-e_1-e_2}\wedge\bq{e_2}+\bq{e_1}\wedge\bq{-e_1-e_2}+\bq{e_2}\wedge\bq{e_1}\\
&=\bq{e_2}\wedge\bq{e_1} + \begin{pmatrix}0&-1\\1&-1\end{pmatrix}\bq{e_2}\wedge\bq{e_1} + \begin{pmatrix}0&-1\\1&-1\end{pmatrix}^2\bq{e_2}\wedge\bq{e_1}
\end{align*}
and that $\begin{pmatrix}0&-1\\1&-1\end{pmatrix}$ acts by cyclic permutation on $(0,1,\infty)$.
For (3), the double shuffle relations in $M^1_2$ are equivalent to dihedral symmetry, which is precisely the relation imposed by $\otimes_{D_1}\chi_1$. The only relations in $M_2^2$ are the anticommutation relation and the relation $[v_1]=[-v_1]$, whose images are the only relations among the 1-cells in $B^2$.
\end{proof}
As a consequence, the chain complex of $\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3$ with coefficients in a local system $T$ is identified with
\[\pq{\mathbb{Z}[\mathbb{F}_\mathfrak{p}^2-0]\otimes T}\otimes_{\mathrm{GL}_2(\mathcal{O})}M^\bullet_2\]
and generated in degree $i$ by
\[\pq{(\alpha,\beta)\otimes t}\otimes[v_1,v_2],\quad(\alpha,\beta)\in\mathbb{F}_\mathfrak{p}^2-0,\quad t\in T.\]
\subsection{Relating the modular and Bianchi complexes to the Galois Lie coalgebra}
\label{sec:relating}
\subsubsection{Motivic correlators at torsion points and averaged base point Hodge correlators}
Let $E$ be an elliptic curve, $p$ a prime, $\mathfrak{p}\subset\text{\rm End} E$ a prime over $p$, and $S=E[\mathfrak{p}]$. There is a canonical (up to a root of unity) choice of tangent vector $v_0$ at $0\in E$, given by the Dedekind eta function. Extend it to a translation-invariant vector field on $E$ and take $v_s$ to be its fiber at $s$.
Recall that we packaged the Lie coalgebras $\mathcal{CL}^\vee_{E,S,v_s}$ into a coalgebra $\mathcal{CL}^\vee_{E,S}=\bigoplus_s\mathcal{CL}^\vee_{E,S,v_s}$. The $\mathcal{CL}^\vee_{E,S,v_s}$ for different $s$ are canonically isomorphic, so there is a natural diagonal $D\subset\mathcal{CL}^\vee_{E,S}$. The image of $D$ under $\text{\rm Cor}_\text{\rm Hod}$ is the space of \emph{averaged base point correlators}. Equivalently, it is the image of the \emph{averaged base point correlator map} $\text{\rm Cor}_{\rm av}=\f{1}{\aq{E[\mathfrak{p}]}}\sum_s\text{\rm Cor}_s$. The image of the restriction to the space of symmetric correlators is called the coalgebra of \emph{symmetric averaged base point Hodge correlators} and denoted $\text{\rm Lie}_{\rm sym}^\vee(E,E[\mathfrak{p}])$. It is dual to the quotient of $\text{\rm Lie}_\text{\rm Hod}(E,E[\mathfrak{p}])$ induced by the quotient of $\text{\rm gr}^W\pi_1^{\rm nil}(E-E[\mathfrak{p}],v_0)$ by the adjoint action of $H_1(E;\mathbb{Z})$ and the translation action of $E[\mathfrak{p}]$ on $E$.
\subsubsection{Relaxed modular complexes and Hodge correlators}
\label{sec:modular_motivic}
Suppose that $E$ is an elliptic curve, $\mathcal{O}$ its endomorphism ring, and $\mathfrak{p}$ a prime in $\mathcal{O}$.
Let $T_k$ denote the graded right $\mathrm{GL}_k(\mathcal{O})$-module $\Sym\pq{H_1(E;\mathbb{Z})^{\oplus k}}\otimes\mathbb{Q}$, identified with the algebra of polynomials in the variables $t_1,\ol t_1,\dots,t_k,\ol t_k$, and let $\Gamma_1(\mathfrak{p})\subset\mathrm{GL}_k(\mathcal{O})$ be the stabilizer of the vector $(0,\dots,0,1)\in(\mathcal{O}/\mathfrak{p})^k$. We will define a map $\theta$ from the relaxed modular complex with coefficients in $T_k$ to the depth $k$ component of the standard cochain complex of the Lie coalgebra $\text{\rm gr}^D\text{\rm Lie}_{\rm sym}^\vee(E,E[\mathfrak{p}])$:
\[\theta:T_k\otimes_{\Gamma_1(\mathfrak{p})}\widetilde M_k\chain\to{\rm CE}\chain\pq{\text{\rm gr}^D\text{\rm Lie}_{\rm sym}^\vee(E,E[\mathfrak{p}])}_{D=k}.\]
Fix an extended basis $\gen{v_1,\dots,v_k,v_0}$ of $V_k(\mathcal{O})$. Also fix an identification of $\mathbb{F}_\mathfrak{p}$ with $E[\mathfrak{p}]$.%
We will abuse notation and identify $\alpha\in\mathbb{F}_\mathfrak{p}$ with $\alpha\in E[\mathfrak{p}]$. Lastly, we identify the domain of $\theta$ with
\[\pq{\mathbb{Z}[\mathbb{F}_\mathfrak{p}^k-0]\otimes T_k}\otimes_{\mathrm{GL}_k(\mathcal{O})}\widetilde M_k\chain.\]
In the degree 1 component, define the map on the level of generating series by
\begin{align}
&\sum_{n_1,n_1',\dots,n_k,n_k'}\pq{(\alpha_1,\dots,\alpha_k)\otimes t_1^{n_1}\ol t_1^{n_1'}\dots t_k^{n_k}\ol t_k^{n_k'}}\otimes\bq{v_1,\dots,v_k}\nonumber
\\
&\mapsto
\f{1}{\aq{E[\mathfrak{p}]}}\sum_{s\in E[\mathfrak{p}]}\Theta_s^{*}\pg{\alpha_1,\dots,\alpha_k,-\pq{\alpha_1+\dots+\alpha_k}}{t_1:\dots:t_k:0}.
\label{eqn:theta}
\end{align}
The maps in higher degrees are given by
\begin{align*}
&\pq{\mathbb{Z}[\mathbb{F}_\mathfrak{p}^k-0]\otimes T_k}\otimes_{\mathrm{GL}_k(\mathcal{O})}\widetilde M_k^n\to\pq{\bigwedge^n\text{\rm gr}^D\text{\rm Lie}_{\rm sym}^\vee(E,E[\mathfrak{p}])}_{D=k},\\
\sum_{n_1,n_1',\dots,n_k,n_k'}&\pq{(\alpha_1,\dots,\alpha_k)\otimes t_1^{n_1}\ol t_1^{n_1'}\dots t_k^{n_k}\ol t_k^{n_k'}}\otimes\pq{\bq{v_1,\dots,v_{k_1}}\wedge\dots\wedge\bq{v_{k_{n-1}+1},\dots,v_{k_n}}}\\
\mapsto\f{1}{\aq{E[\mathfrak{p}]}}\sum_{s\in E[\mathfrak{p}]}&\Theta_s^{*}\pg{\alpha_1,\dots,\alpha_{k_1},-(\alpha_1+\dots+\alpha_{k_1})}{t_1:\dots:t_{k_1}:0}\wedge\dots\wedge\\&\wedge\Theta_s^{*}\pg{\alpha_{k_{n-1}+1},\dots,\alpha_{k_n},-(\alpha_{k_{n-1}+1}+\dots+\alpha_{k_n})}{t_{k_{n-1}+1}:\dots:t_{k_n}:0}.
\end{align*}
\begin{thm}
The map $\theta$ is a well-defined surjective morphism of complexes of graded $\mathcal{O}$-modules.
\label{thm:relating}
\end{thm}
\begin{proof}
The map $\theta$ is a morphism of graded $\mathrm{GL}_k(\mathcal{O})$-modules by construction (recall the $t_i,\ol t_i$ are dual to the cohomology generators $\omega,\ol\omega$), and is surjective by construction. We need to verify that the map $\theta$ respects (1) the first shuffle relations, (2) the dihedral symmetry relations, (3) the coproduct. We show the three in order.
The first shuffle relation on the image holds termwise -- for each $s\in E[\mathfrak{p}]$ -- and is equivalent to the relation on the dual generating series (Lemma~\ref{lma:theta_dihedral}(c)):
\[\sum_{\sigma\in\Sigma_{i,j}}\Theta_s\pg{\beta_{\sigma^{-1}(1)}:\dots:\beta_{\sigma^{-1}(k)}:0}{u_{\sigma^{-1}(1)},u_{\sigma^{-1}(2)},\dots,u_{\sigma^{-1}(i+j)},-(u_1+\dots+u_{i+j})},\]
where $t_i=u_1+\dots+u_i$, $\alpha_i=\beta_i-\beta_{i-1}$. This is the first shuffle relation on the generating series $\Theta_s\pg{:}{,}$, which holds a priori.
The images of the dihedral symmetry relations are exactly the relations of Corollary~\ref{cly:thetastar_dihedral}, which hold modulo the correlators of elements that are independent of $s$.
Finally, the map $\theta$ intertwines the coproduct. The general case follows from the degree 1 case by the Leibniz rule. Set $t_0=0$. By (\ref{eqn:thetastar_coproduct}), we have
\begin{align}
\delta\theta\biggl(\sum_{n_1,n_1',\dots,n_k,n_k'}&\pq{(\alpha_1,\dots,\alpha_k)\otimes t_1^{n_1}\ol t_1^{n_1'}\dots t_k^{n_k}\ol t_k^{n_k'}}\otimes\bq{v_1,\dots,v_k}\biggr)=\nonumber\\
=\f{1}{\aq{E[\mathfrak{p}]}}\sum_{s\in E[\mathfrak{p}]}\biggl(\sum_{\rm cyc}\sum_{i=0}^k&\Theta_s^{*}\pg{\alpha_1,\dots,\alpha_i,-(\alpha_1+\dots+\alpha_i)}{t_1:\dots:t_i:t_0}\wedge\nonumber\\&\wedge\Theta_s^{*}\pg{\alpha_{i+1},\dots,\alpha_k,-(\alpha_{i+1}+\dots+\alpha_k)}{t_{i+1}:\dots:t_k:t_0}+\text{\rm lower depth terms}\biggr),\label{eqn:dtheta}
\end{align}
where the lower-depth terms are correlators of elements independent of $s$, and the cyclic sum is over indices modulo $k+1$. On the other hand, we have
\begin{align}
\delta\biggl(\sum_{n_1,n_1',\dots,n_k,n_k'}&\pq{(\alpha_1,\dots,\alpha_k)\otimes t_1^{n_1}\ol t_1^{n_1'}\dots t_k^{n_k}\ol t_k^{n_k'}}\otimes\bq{v_1,\dots,v_k}\biggr)\nonumber\\
&=\biggl(\sum_{n_1,n_1',\dots,n_k,n_k'}\pq{(\alpha_1,\dots,\alpha_k)\otimes t_1^{n_1}\ol t_1^{n_1'}\dots t_k^{n_k}\ol t_k^{n_k'}}\otimes\sum_{\rm cyc}\sum_{i=0}^k-[v_1,\dots,v_i]\wedge[v_{i+1},\dots,v_k]\biggr).\label{eqn:justd}
\end{align}
The cyclic shift in $\mathrm{GL}_k(\mathcal{O})$, which maps $v_0\mapsto v_1\mapsto v_2\mapsto\dots\mapsto v_k\mapsto v_0$, acts by the transpose action on the $t_i$ by $t_i\mapsto t_{i+1}-t_1$ (indices modulo $k+1$; recall $t_{k+1}=t_0=0$).
Thus the image of (\ref{eqn:justd}) under $\theta$ agrees with (\ref{eqn:dtheta}) in each summand of the cyclic sum, except with an additive shift of the arguments. It remains to apply the homogeneity of the $\Theta_s^{*}$.
\end{proof}
\subsubsection*{Remark} Why do we define the map $\theta$ using averaged base point correlators? It would have been possible to define the map to $\text{\rm Lie}_\text{\rm Hod}^\vee(E,E[\mathfrak{p}])$ using the correlators with a fixed base point, $\text{\rm Cor}_s$. However, this map would be zero: any correlator with base point $s$ vanishes modulo the depth filtration on $\text{\rm Lie}_\text{\rm Hod}^\vee(E,E[\mathfrak{p}])$ induced by $\text{\rm Cor}_s$, since by the change of base point formula (\ref{eqn:change_base_point}) any correlator can be rewritten modulo correlators of lower depth. Those lower-depth correlators depend on $s$, so the same argument does not imply that the image of $\text{\rm Cor}_{\rm av}$ is zero.
On the other hand, the map $\theta$ can be modified, replacing $E[\mathfrak{p}]$ by its subgroup of order $p$ (if $\aq{E[\mathfrak{p}]}=p^2$). We will use this when we specialize $\theta$ to the nodal projective line.
\subsubsection{Bianchi complexes and Hodge correlators in depth 2}
Let $k=2$ and $E$ one of the CM elliptic curves with endomorphism ring $\mathcal{O}=\mathbb{Z}[i]$ or $\mathbb{Z}[\rho]$. According to Lemma~\ref{lma:modular_bianchi}, there is an isomorphism $\psi:M_2\chain\to B\chain$ from the modular complex to the Bianchi complex. The relaxed modular complex $\widetilde M_2\chain$ is canonically isomorphic to the modular complex $M_2\chain$, since the second shuffle relations are equivalent to the dihedral symmetry relations. Thus we have a map
\[\theta\circ\psi^{-1}:T_2\otimes_{\Gamma_1(\mathfrak{p})} B\chain\to{\rm CE}\chain\pq{\text{\rm gr}^D\text{\rm Lie}_{\rm sym}^\vee(E,E[\mathfrak{p}])}_{D=2}.\]
The complex of the left side is the chain complex with coefficients in the local system $T_2$ on the orbifold $\Gamma_1(\mathfrak{p})\setminus\mathbb{H}^3$. We arrive at the following important result:
\begin{thm}
Let $E$ be one of the CM elliptic curves $E=\mathbb{C}/\mathbb{Z}[i]$ or $E=\mathbb{C}/\mathbb{Z}[\rho]$. Then \[\theta\circ\psi^{-1}:T_2\otimes_{\Gamma_1(\mathfrak{p})}B\chain\to{\rm CE}\chain\pq{\text{\rm gr}^D\text{\rm Lie}^\vee_{\rm sym}(E,E[\mathfrak{p}])}_{D=2}\] is a surjective morphism of complexes.
\label{thm:bianchi_to_lie}
\end{thm}
It is tempting to extend Theorem~\ref{thm:relating} to higher depth by showing the map $\theta$ descends to the modular complex $M_k\chain$. This requires showing the second shuffle relations for the averaged base point Hodge correlators modulo the depth filtration. The following would follow from Conjecture~\ref{cnj:second_shuffle}:
\begin{conj}
The map $\theta$ descends to a morphism of complexes
\[\theta:T_k\otimes_{\Gamma_1(\mathfrak{p})}M_k\chain\to{\rm CE}\chain\pq{\text{\rm gr}^D\text{\rm Lie}_{\rm sym}^\vee(E,E[\mathfrak{p}])}_{D=k}.\]
\end{conj}
\section{Applications}
\label{sec:application}
\subsection{The weight 4 case: Euler complexes}
\label{sec:euler}
Let us show how the map in Theorem~\ref{thm:bianchi_to_lie} generalizes those constructed by \cite{goncharov-levin,goncharov-euler}. To be consistent with those sources, we use the motivic language in this section, but the same results hold in the Hodge realization as well.
\subsubsection{The elements $\theta_E$}
For torsion points $a,b,c\in E$ with $a+b+c=0$, elements $\theta_E(a,b,c)$ are constructed by \cite{goncharov-euler} as follows.
For $E$ an elliptic curve over a field $k$, $\mathfrak{p}\subset\text{\rm End} E$ a prime over $p$, and $z$ a nonzero $\mathfrak{p}$-torsion point on $E$, there are elements $\theta_E(z)$, which are $p$-torsion elements in $\ol k_z^{*}\otimes\mathbb{Z}\bq{\f1p}$, where $k_z$ is the extension generated by the coordinates of $z$. They are identified with weight-2 elements in the mixed Tate Lie coalgebra $\text{\rm Lie}_{\text{\rm MT}/\ol k}^\vee$. The real period of the motive $\theta_E(z)$ is $-\log\aq{\theta_E(z)}$.
The elements $\theta_E(a:b:c)$ lie in the Bloch group of $\ol k$, which is identified with the weight-4 part of $\text{\rm Lie}_{\text{\rm MT}/\ol k}^\vee$. We also use the notation $\theta_E(a,b,c)=\theta_E(a:a+b:a+b+c)$, which is unambiguous when $a+b+c=0$ because the $\theta_E(a:b:c)$ are invariant under translation. They are characterized by the following properties:
\begin{enumerate}[(1)]
\item The coproduct is given by \[\delta\theta_E(a,b,c)=\theta_E(a)\wedge\theta_E(b)+\theta_E(b)\wedge\theta_E(c)+\theta_E(c)\wedge\theta_E(a).\]
\item The real period of $\theta_E(a:b:c)$ is given up to a constant multiple by the averaged Chow dilogarithm (\cite{goncharov-arakelov}). The latter can be rewritten as
\begin{equation}
\f{1}{p^5}\sum_{x\in E[p]}\int_{E(\mathbb{C})}\log\aq{f_{a,x}}\,d^\mathbb{C}\log\aq{f_{b,x}}\wedge d^\mathbb{C}\log\aq{f_{c,x}},
\label{eqn:theta_period}
\end{equation}
where $f_{a,b}$ is a function on $E$ with $\div f_{a,b}=p(\cq{a}-\cq{b})$.
\end{enumerate}
\subsubsection{The elements $\theta_E$ and motivic correlators}
According to \cite{goncharov-hodge-correlators}, \S10.5.5, for $a,b\in E[p]$, the elements $\theta_E(a-b)$ are equal up to a constant multiple to $\text{\rm Cor}_{\rm av}\pq{a,b}$. There is a version for the depth 2 elements.
\begin{lma}
Let $E$ be an elliptic curve over a number field. Then, for $a,b,c\in E[p]\setminus\cq0$ with $a+b+c=0$, the elements $\theta_E(a:b:c)$ are equal up to a constant multiple to $\text{\rm Cor}_{\rm av}(a,b,c)$.
\label{lma:avg_are_theta_e}
\end{lma}
\begin{proof}
The coproduct formulas for the $\theta_E$ and the $\text{\rm Cor}_{\rm av}$ coincide (\cite{goncharov-hodge-correlators}, Lemma 10.9). It remains to see the periods are equal. Indeed, we take $f_{a,x}$ such that $\log\aq{f_{a,x}(z)}=pG_x(a,z)$, and likewise for $b$ and $c$. Then the formula (\ref{eqn:theta_period}) is evidently a constant multiple of the Hodge correlator
\[\f{1}{\aq{E[p]}}\sum_{x\in E[p]}\text{\rm Cor}_{\mathcal{H},x}(a,b,c),\] as desired.
\end{proof}
The map constructed by \cite{goncharov-euler} for $\mathcal{O}=\mathbb{Z}[i]$ or $\mathbb{Z}[\rho]$ is:
\begin{align*}
\theta':\mathbb{Z}[\Gamma_1(\mathfrak{p})\setminus\mathrm{GL}_2(\mathcal{O})]\otimes_{\Gamma_1(\mathfrak{p})} M\chain_2&\to\text{\rm Lie}_\text{\rm Mot}^\vee(E,E[p]),\\
(\alpha_1,\alpha_2)\otimes[v_1,v_2]&\mapsto\theta_E(\alpha_1,\alpha_2,-(\alpha_1+\alpha_2)),\\
(\alpha_1,\alpha_2)\otimes\pq{[v_1]\wedge[v_2]}&\mapsto\theta_E(\alpha_1)\wedge\theta_E(\alpha_2).
\end{align*}
\begin{thm}
The map $\theta'$ is a constant multiple of the component of $\theta\circ\psi^{-1}$ corresponding to the constant term of the local system $T$.
\end{thm}
\begin{proof}
After unraveling the definitions, in degree 1, this is exactly Lemma~\ref{lma:avg_are_theta_e}, while in degree 2 it amounts to showing that
\[\text{\rm Cor}_{\rm av}(0,a)\wedge\text{\rm Cor}_{\rm av}(0,b)=\f{1}{\aq{E[p]}}\sum_s\text{\rm Cor}_s(0,a)\wedge\text{\rm Cor}_s(0,b).\]
Expanding the sums and using that $\text{\rm Cor}_s(x,y)\sim\theta_E(x-y)-\theta_E(x-s)-\theta_E(y-s)$ (where we set $\theta_E(0)=0$), this simplifies to
\[\sum_s\theta_E(s)\wedge\theta_E(b-s)+\sum_s\theta_E(a-s)\wedge\theta_E(s)+\sum_s\theta_E(a-s)\wedge\theta_E(b-s)=0.\]
The three sums are each both symmetric and antisymmetric under the involutions $s\mapsto b-s$, $s\mapsto a-s$, and $s\mapsto a+b-s$, respectively, so each sum is 0.
\end{proof}
A slight abuse of notation has taken place: $\theta$ maps to $\text{\rm gr}^D\text{\rm Lie}_{\rm sym}^\vee(E,E[\mathfrak{p}])$ and $\theta'$ to $\text{\rm Lie}_\text{\rm MT}^\vee(E,E[p])$. However, there is no discrepancy, as the second shuffle relation in weight 4 and depth 2 holds without the lower-depth correction terms, and so the constant term of $\theta$ can be viewed as a map to $\text{\rm Lie}_{\rm sym}^\vee(E,E[p])$. Precisely:
\begin{lma}
For $E$ any elliptic curve and $a,b\in E[\mathfrak{p}]$,
\begin{align*}
\text{\rm Cor}_{\rm av}(a,b,S_{0,0}*S_{0,0})&=0,\\\text{\rm Cor}_{\rm av}(a,S_{0,0}*S_{0,0}*S_{0,0})&=0.
\end{align*}
\end{lma}
\begin{proof}
For the first equality, recall that \[S_{0,0}*S_{0,0}=\f12(\omega\otimes\ol\omega-\ol\omega\otimes\omega).\] It is easily verified that the coproduct is 0. For the periods, there are two trees contributing to the integral expansion of the correlator. For the tree where $\omega,\ol\omega$ are not incident to a common interior vertex, the terms with $\omega\otimes\ol\omega$ and with $\ol\omega\otimes\omega$ sum to 0. The other tree contributes a constant multiple of
\begin{align*}
&\sum_{s\in E[\mathfrak{p}]}\int_EG_s(z,w)\,d^\mathbb{C} G_s(w,a)\wedge d^\mathbb{C} G_s(w,b)\wedge \omega(z)\wedge\ol\omega(z)\\
&=\sum_{s\in E[\mathfrak{p}]}\int_EG_{\rm Ar}(s-w)\, d^\mathbb{C} G_s(w,a)\wedge d^\mathbb{C} G_s(w,b)\\
&=\sum_{s\in E[\mathfrak{p}]}\int_EG_{\rm Ar}(s-w)\, d^\mathbb{C}(G_{\rm Ar}(a-w)-G_{\rm Ar}(s-w))\wedge d^\mathbb{C}(G_{\rm Ar}(b-w)-G_{\rm Ar}(s-w))\\&=0.
\end{align*}
This follows from the distribution relations for the function $G_{\rm Ar}$, which state that
\[\sum_{s\in E[\mathfrak{p}]-0}G_{\rm Ar}(s)=0.\]
The second equality follows simply from dihedral symmetry.
\end{proof}
\subsection{Degeneration to rational curves: Voronoi complexes and multiple $\zeta$-values}
\label{sec:degen}
In this section we study the behavior of the motivic correlators at the boundary of the moduli space $\mathcal{M}_{1,n}'$ of elliptic curves with $n$ marked points and a distinguished tangent vector. The results here are a new case of the specialization theorem for correlators on rational curves (\cite{malkin-shuffle}, \S4), and the definitions and proof are analogous. (There is a similar picture for other boundary strata and for higher-genus curves, which can be regarded as the higher-weight version of the results of R.~Wentworth \cite{wentworth} about degeneration of Green's functions. We do not expand on this subject here.)
\subsubsection{Setup}
It will be enough for us to consider the top boundary stratum in $\ol\mathcal{M}_{1,n}'$ in which the elliptic curve $E$ degenerates to a nodal $\mathbb{P}^1$. On an open subset of this stratum, all marked points remain distinct. Furthermore, we will consider degeneration along the direction $\tau=it$, $t\to\infty$ on the modular curve.
Consider an elliptic curve $E$ over $\ol B\to\ol\mathcal{M}_{1,n}'$, with an open subset $B\to\mathcal{M}_{1,n}'$, whose complement $D=\ol B\setminus B$ is a normal crossings divisor. A Hodge correlator on $E$ determines a variation of mixed Hodge structures over $B$, which has a canonical extension along every normal vector to $D$. We will describe this canonical extension in the aforementioned case.
The curve $E_\tau\cong\mathbb{C}/\pq{\mathbb{Z}+\mathbb{Z}\tau}$ has canonical coordinate $z$, and the nonsingular locus of nodal $\mathbb{P}^1$ has coordinate $z$ (with the node at $z=0,\infty$) such that
\[s_a = \begin{cases}z=a&\tau\neq0\\z=e^{2\pi ia}&\tau=0\end{cases},\]
for $a\in\mathbb{C}$, is a smooth section over $t\in(0,\infty]$. Also fix the relative 1-forms $\omega=\f{1}{\sqrt{\Im \tau}}dz,\ol\omega=\f{1}{\sqrt{\Im\tau}}\ol{dz}$ on $E$, which have limit 0 on $\mathbb{P}^1$.
Let $v_0$ be a relative tangent vector at $s_0$, and let $Z_S\subset\mathbb{C}^{*}$ be a finite set; put $S=\cq{s_a:e^{2\pi ia}\in Z_S}$. Let $\mathcal{D}\subset\mathcal{CL}_{E/B,S,v_0}^\vee$ be the subcoalgebra generated by the sections $s_a$, where all the $s_a$ factors are distinct, and the relative 1-forms $\omega,\ol\omega$. Also fix the tangent vector $v_0=\f{\partial}{\partial z}$ at $1\in\mathbb{P}^1$.
Let us define a degeneration map
\[\pi_D:\mathcal{D}\to\CLie_{\mathbb{P}^1,Z_S\cup\cq{0},v_0}^\vee\oplus\CLie_{\mathbb{P}^1,Z_S\cup\cq{\infty},v_0}^\vee.\]
Let $\mathcal{D}_T$ be the subspace of $\mathcal{D}$ generated by the $s_a$ and elements $\omega\otimes\ol\omega$ and $\ol\omega\otimes\omega$, which we call the elements of Tate type. If $x\in\mathcal{D}$ is not of Tate type, we set $\pi_D(x)=0$. Otherwise, we set
\begin{align*}
\pi_D(s_a)&=e^{2\pi i a},\\
\pi_D(\omega\otimes\ol\omega)=-\pi_D(\ol\omega\otimes\omega)&=\f12\pq{(0)+(\infty)},
\end{align*}
extended to preserve the tensor product.
(One can verify, using straightforward but cumbersome combinatorics, that this map is well-defined, i.e., respects the first shuffle relations. This is not required for the results below, since we only use the composition of $\pi_D$ with the Hodge correlator map, which is well-defined in any case.)
\begin{lma}
The map $\pi_D$ is a morphism of coalgebras.
\end{lma}
\begin{proof}
Each term in the coproduct of a generator not of Tate type clearly has a factor that is not of Tate type, because the coproduct preserves the weight. So it is enough to see the map respects the coproduct on the generators of Tate type, considering only the terms of the coproduct where both generators are of Tate type.
Let us do this for the first component of the map, to $\mathcal{CL}^\vee_{\mathbb{P}^1,Z_S\cup\cq0,v_0}$; the other is analogous. Let $x$ be a generator in $\mathcal{D}_T$. The coproduct of $x$ has two types of terms:
\begin{enumerate}[(1)]
\item the cuts with vertex at some $s_a$ (the terms $\delta_S$);
\item the cuts that give the terms $\delta_{\rm Cas}$.
\end{enumerate}
The coproduct of $\pi_D(x)$ has two types of terms:
\begin{enumerate}[(1$'$)]
\item the cuts with vertex at some $a\neq0$;
\item the cuts with vertex at $0$.
\end{enumerate}
The terms (1) that have both factors of Tate type are in obvious bijection with the terms (1$'$): observe that the segment that is cut must have the same number of $\omega$ and $\ol\omega$ factors on each side -- see Figure~\ref{fig:degeneration_coproduct}, left. Similarly, the terms (2) that have both factors of Tate type are in bijection with the terms (2$'$) -- see Figure~\ref{fig:degeneration_coproduct}, right.
\end{proof}
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\textwidth]{figures/degen-bijection1.pdf}
\hspace{0.2\textwidth}
\includegraphics[width=0.3\textwidth]{figures/degen-bijection3.pdf}
\\
\includegraphics[width=0.3\textwidth]{figures/degen-bijection2.pdf}
\hspace{0.2\textwidth}
\includegraphics[width=0.3\textwidth]{figures/degen-bijection4.pdf}
\simplecap{fig:degeneration_coproduct}{Left: the bijection between the cuts (1) and (1$'$). Right: the bijection between the cuts (2) and (2$'$).}
\end{figure}
Suppose now that $\ol B,D$ are as above, and that $D$ maps to the boundary stratum in $\ol\mathcal{M}_{1,n}'$. Let $\Spec_D\text{\rm Cor}_\text{\rm Hod}(x)$ denote the canonical extension of the variation $\text{\rm Cor}_\text{\rm Hod}(x)$ on $B$ to a normal vector to $D$. We then have the following result.
\begin{thm}
Suppose $x\in\mathcal{D}$ is of weight $w>2$.
\begin{enumerate}[(a)]
\item The specialization of the Hodge correlator $\text{\rm Cor}_\text{\rm Hod}(x)$ coincides with the Hodge correlator of the degeneration map:
\[
\xymatrix{
\mathcal{D}_{w>2}\ar[r]^{\pi_D\quad\quad\quad\quad\quad\quad}\ar[d]_{\text{\rm Cor}_\text{\rm Hod}}
&\pq{\mathcal{CL}_{\mathbb{P}^1,Z_S\cup\cq{0},v_0}^\vee\oplus\mathcal{CL}_{\mathbb{P}^1,Z_S\cup\cq{\infty},v_0}^\vee}_{w>2}\ar[d]^{\text{\rm Cor}_\text{\rm Hod}}\\
\text{\rm Lie}_{\text{\rm Hod}/B}^\vee\ar[r]^{\Spec_D}&\text{\rm Lie}_{\text{\rm Hod}}^\vee
}.
\]
\item The Hodge correlator functions on $E$ specialize to the Hodge correlators on $\mathbb{P}^1$. That is, if $x\in\mathcal{D}$ and $\tau=it$, then
\[
\lim_{t\to\infty}\text{\rm Cor}_\mathcal{H}^{(E_\tau)}(x)\sim\text{\rm Cor}_\mathcal{H}^{(\mathbb{P}^1)}(\pi_D(x)).
\]
With an appropriate choice of tangent vector on $E_\tau$, this also holds in weight 2.
\end{enumerate}
\end{thm}
\begin{proof}
We may let $\mathcal{M}$ be the moduli space of sets $Z_S$ of $n$ ordered points in $\mathbb{C}^{*}$ and $\ol B=(0,\infty]\times\mathcal{M}$. We then simultaneously show the following:
\begin{enumerate}[(1)]
\item The periods of $\text{\rm Cor}_\text{\rm Hod}(x)$ extend continuously to $D$.
\item The periods of the specialization of $\text{\rm Cor}_\text{\rm Hod}(x)$ (i.e., the limits of the periods at $D$) coincide with the periods of the degenerations $\pi_D(x)$.
\end{enumerate}
The proof is by induction on the weight $w$. Let us first see how (1) and (2) imply the result.
Because the coproduct commutes with specialization, $\Spec_D\text{\rm Cor}_\text{\rm Hod}(x)-\text{\rm Cor}_\text{\rm Hod}(\pi_D(x))$ lies in $\Ext^1_D(\mathbb{R}(0),\mathbb{R}(p,q))$, which is one-dimensional and controlled by the period. By (2) the period of the difference is 0, so the difference vanishes. This implies (a) and (b) in weight $w$.
To show (1), let $q=e^{2\pi i\tau}=e^{-2\pi t}$ be a parameter at the cusp. We show that for $x\in\mathcal{D}$, $\text{\rm Cor}_\mathcal{H}(x)$ can be represented as a polynomial in $\log q$ whose coefficients in positive degrees vanish at $q=0$ (\emph{tame logarithmic singularities}). This is shown by induction: if $x$ is of weight $w>2$, then $d^\mathbb{C}\text{\rm Cor}_\mathcal{H}(x)$ is expressed in terms of periods of $\delta x$. The latter has logarithmic singularities, by the inductive hypothesis and the fact that the Hodge correlators in weight 1 have logarithmic singularities (see the lemma that follows). Therefore, $\text{\rm Cor}_\mathcal{H}(x)$ has tame logarithmic singularities.
By rigidity of $\Ext^1$, we conclude that the difference $\lim_{t\to\infty}\text{\rm Cor}_\mathcal{H}^{(E_\tau)}(x)-\text{\rm Cor}_\mathcal{H}^{(\mathbb{P}^1)}(\pi_D(x))$ is independent of the point on $D$, that is, of the choice of $Z_S$. The following lemma then implies (2).
\end{proof}
This lemma comprises the analytic ingredients in the preceding proof:
\begin{lma}
\begin{enumerate}[(a)]
\item Let $x$ be an element of $\mathcal{D}$ of weight 2. Then $\text{\rm Cor}_\mathcal{H}(x)$ has a logarithmic singularity at $q=0$, and there are constants $c,C$ such that $\lim_{t\to\infty}\pq{\text{\rm Cor}_\mathcal{H}^{(E_{it})}(x)-\f Ct}=c\,\text{\rm Cor}_\mathcal{H}^{(\mathbb{P}^1)}(\pi_D(x))$.
\item Let $x=x_0\otimes\dots\otimes x_n$, where each $x_i\in\cq{s_a,\omega,\ol\omega}$, be a generator in $\mathcal{D}$ of weight $w>2$. Suppose $\lim_{t\to\infty}\text{\rm Cor}_\mathcal{H}^{(E_\tau)}(x)-\text{\rm Cor}_\mathcal{H}^{(\mathbb{P}^1)}(\pi_D(x))$ is independent of the choice of $Z_S$. Then this difference is 0.
\end{enumerate}
\end{lma}
\begin{proof}
\begin{enumerate}[(a)]
\item There are three main cases to consider (the remaining ones are symmetric): $x=(s_a)\otimes(s_b)$, $x=(s_a)\otimes\omega\otimes\ol\omega$, $x=(s_a)\otimes\omega\otimes\omega$. The last of those is trivial. For the first two, we use the fact that there is a constant $C$ such that
\[\lim_{t\to\infty}\pq{G_{\rm Ar}^{(E_{it})}(a)-\f Ct}=\log\aq{\pq{1-e^{2\pi ia}}\pq{1-e^{-2\pi ia}}}.\]
Therefore, for an appropriate choice of tangent vector $v_t$ at $0\in E_{it}$, we have
\begin{align*}
\lim_{t\to\infty}G_{v_t}^{(E_{it})}(a,b)
&=\log\aq{\f{\pq{1-e^{2\pi i(a-b)}}\pq{1-e^{-2\pi i(a-b)}}}{{\pq{1-e^{2\pi ia}}\pq{1-e^{-2\pi ia}}}\pq{1-e^{2\pi ib}}\pq{1-e^{-2\pi ib}}}}\\
&=2\log\aq{\f{e^{2\pi ia}-e^{2\pi ib}}{\pq{1-e^{2\pi ia}}\pq{1-e^{2\pi ib}}}}\\
&=c\,G_{v_0}^{(\mathbb{P}^1)}(e^{2\pi ia},e^{2\pi ib}),
\end{align*}
where $v_0=\f{\partial}{\partial z}$ is a tangent vector at $1\in\mathbb{P}^1$. This completes the case $x=(s_a)\otimes(s_b)$.
For the case $x=(s_a)\otimes\omega\otimes\ol\omega$, notice that $\text{\rm Cor}_\mathcal{H}^{(E_{it})}(x)=-G_{\rm Ar}^{(E_{it})}(a)$, so \[\lim_{t\to\infty}\text{\rm Cor}_\mathcal{H}^{(E_{it})}(x)=-\log\aq{(1-e^{2\pi ia})(1-e^{-2\pi ia})}.\]
On the other hand, we also have
\begin{align*}
G^{(\mathbb{P}^1)}_{v_0}(z,0)&=\log\aq{\f{z}{1-z}},\\
G^{(\mathbb{P}^1)}_{v_0}(z,\infty)&=\log\aq{\f{1}{1-z}},\\
G^{(\mathbb{P}^1)}_{v_0}(z,0)+G^{(\mathbb{P}^1)}_{v_0}(z,\infty)
&=\log\aq{\f{z}{(1-z)^2}}\\
&=-\log\aq{\pq{1-z}\pq{1-1/z}}.
\end{align*}
So we have shown that \[\lim_{t\to\infty}\text{\rm Cor}_\mathcal{H}^{(E_{it})}(x)=\f c2\pq{G_{v_0}^{(\mathbb{P}^1)}(e^{2\pi ia},0)+G_{v_0}^{(\mathbb{P}^1)}(e^{2\pi ia},\infty)}=c\text{\rm Cor}_\mathcal{H}^{(\mathbb{P}^1)}(\pi_D(x)).\]
\item Let $s_a$ be one of the factors in $x$ (without loss of generality, $x_0=s_a$). We will integrate over $a$ on the segment $[0,1]$. For arbitrary $\tau$, we have
\[\int_{a=0}^1\text{\rm Cor}_\mathcal{H}^{(E_\tau)}(x)\,da=0,\]
since $\int_{a=0}^1 G^{(E_\tau)}(a,z)\,da=0$ for all $z$ by the properties of the Arakelov Green's function, and
\[\int_{a=0}^1\text{\rm Cor}_\mathcal{H}^{(\mathbb{P}^1)}(\pi_D(x))\,da=0,\]
since $\int_{\aq z=1}\text{\rm Cor}_\mathcal{H}^{(\mathbb{P}^1)}(z,b,c)=\int_{\aq z=1}\mathcal{L}_2\pq{\f{z-b}{z-c}}=0$ by the properties of the dilogarithm.
Therefore, \[\int_{a=0}^1\pq{\lim_{t\to\infty}\text{\rm Cor}_\mathcal{H}^{(E_\tau)}(x)-\text{\rm Cor}_\mathcal{H}^{(\mathbb{P}^1)}(\pi_D(x))}\,da=0.\] The integrand is independent of $a$, so it is 0.
\end{enumerate}
\end{proof}
\subsubsection{Shuffle relations in depth 2}
Let $x_0,\dots,x_k\in E$ and $m_0,\dots,m_k\geq0$. Define
\[C_{m_0,\dots,m_k}(x_0,\dots,x_k):=\underbrace{S_{0,0}*\dots* S_{0,0}}_{m_0}\otimes(x_0)
\otimes\dots\otimes
\underbrace{S_{0,0}*\dots* S_{0,0}}_{m_k}\otimes(x_k).\]
There is a version of the corrected correlator for this element, where subsets of $\cq{x_0,\dots,x_k}$ are replaced by ``$*$''. We write it in depth 2:
\begin{align*}
\overline{C}_{m_0,m_1,m_2}(x_0,x_1,x_2):=
C_{m_0,m_1,m_2}(x_0,x_1,x_2)
&-C_{m_0,m_1+m_2}(x_0,x_2)
-C_{m_1,m_2+m_0}(x_1,x_0)
-C_{m_2,m_0+m_1}(x_2,x_1)\\
&+C_{m_0+m_1+m_2}(x_0)
+C_{m_0+m_1+m_2}(x_1)
+C_{m_0+m_1+m_2}(x_2).
\end{align*}
We have the following variant of Theorem~\ref{thm:dihedral_depth2}:
\begin{lma}
For $m_0,m_1,m_2\geq0$ and $a,b,c\in E$ with $a+b+c=0$,
\[
\text{\rm Cor}\pq{\ol C_{m_0,m_1,m_2}(a,a+b,a+b+c)+\ol C_{m_0,m_2,m_1}(a,a+c,a+b+c)}=0.
\]
\end{lma}
This is a different version of a second shuffle relation. The proof is identical to that of Theorem~\ref{thm:dihedral_depth2} (we may suppose $a=0$).
Now suppose $a,a+b,a+c,a+b+c$ are distinct and take the correlator with base point 0 over a family with varying $\tau$:
\begin{equation}
\text{\rm Cor}\pq{\ol C_{m_0,m_1,m_2}(s_a,s_{a+b},s_{a+b+c})+\ol C_{m_0,m_2,m_1}(s_a,s_{a+c},s_{a+b+c})}=0.
\label{eqn:variation_of_shuffle}
\end{equation}
An abuse of notation has taken place: the definition of $\ol C$ is assumed to use the relative 1-forms $\omega,\ol\omega$ as in the previous section.
The specialization of the correlator of $\ol C_{m_0,m_1,m_2}(s_a,s_{a+b},s_{a+b+c})$ as $\tau\to i\infty$ is easily seen to be
\begin{align*}
\text{\rm Cor}(\pi_D(\ol C_{m_0,m_1,m_2}(s_{x_0},s_{x_1},s_{x_2})))
=&\;
\text{\rm Cor}_{m_0,m_1,m_2}^{(v_0)}(e^{2\pi ix_0},e^{2\pi ix_1},e^{2\pi ix_2})
\\&-\f12\biggl(
\text{\rm Cor}_{m_0,m_1+m_2}^{(v_0)}(e^{2\pi ix_0},e^{2\pi ix_2})
\\&+\text{\rm Cor}_{m_1,m_2+m_0}^{(v_0)}(e^{2\pi ix_1},e^{2\pi ix_0})
\\&+\text{\rm Cor}_{m_2,m_0+m_1}^{(v_0)}(e^{2\pi ix_2},e^{2\pi ix_1})\biggr)
\\&+\text{\rm Cor}^{(v_0)}\text{(terms with $\infty$)}.
\end{align*}
(recall $v_0$ is the tangent vector at $1\in\mathbb{P}^1$). In particular, by varying $a$ and applying an automorphism of $\mathbb{P}^1$, we find that the specialization of the relation (\ref{eqn:variation_of_shuffle}) holds for any choice of base point in $\mathbb{P}^1\setminus\cq{0,\infty}$. When it is specialized to $\infty$, the terms with $\infty$ in the specialized correlator vanish. We obtain the relation:
\begin{align*}
\text{\rm Cor}_{m_0,m_1,m_2}(e^{2\pi ia},e^{2\pi i(a+b)},e^{2\pi i(a+b+c)})
&-\f12\biggl(
\text{\rm Cor}_{m_0,m_1+m_2}(e^{2\pi ia},e^{2\pi i(a+b+c)})
\\&+\text{\rm Cor}_{m_1,m_2+m_0}(e^{2\pi i(a+b)},e^{2\pi ia})
\\&+\text{\rm Cor}_{m_2,m_0+m_1}(e^{2\pi i(a+b+c)},e^{2\pi i(a+b)})\biggr)\\
+\;\text{\rm Cor}_{m_0,m_1,m_2}(e^{2\pi ia},e^{2\pi i(a+c)},e^{2\pi i(a+b+c)})
&-\f12\biggl(
\text{\rm Cor}_{m_0,m_1+m_2}(e^{2\pi ia},e^{2\pi i(a+b+c)})
\\&+\text{\rm Cor}_{m_1,m_2+m_0}(e^{2\pi i(a+c)},e^{2\pi ia})
\\&+\text{\rm Cor}_{m_2,m_0+m_1}(e^{2\pi i(a+b+c)},e^{2\pi i(a+c)})
\biggr)=0,
\end{align*}
with the correlators now taken with base point at $\infty$. Finally, set $e^{2\pi ia}=1$ and let $\alpha=e^{2\pi ib}$, $\beta=e^{2\pi ic}$. Rescaling, we arrive at
\[
\text{\rm Cor}_{m_0,m_1,m_2}(1,\alpha,\alpha\beta)
+\text{\rm Cor}_{m_0,m_2,m_1}(1,\beta,\alpha\beta)
-\text{\rm Cor}_{m_0,m_1+m_2}(1,\alpha\beta)
-\text{\rm Cor}_{m_2+m_0,m_1}(1,\alpha)
-\text{\rm Cor}_{m_1+m_0,m_2}(1,\beta)=0.
\]
This is precisely \cite{malkin-shuffle}'s relation (\ref{eqn:p1_second_shuffle}) in depth 2.
\subsubsection*{Remark} Let $\mu_p\subset\mathbb{G}_m$ denote the $p$-th roots of unity. In \cite{goncharov-motivic-modular} (\S2.7), a map from the modular complex for $\mathrm{GL}_2(\mathbb{Z})$ to the standard cochain complex of $\text{\rm gr}^D\text{\rm Lie}_\text{\rm Hod}^\vee(\mathbb{G}_m,\mathbb{G}_m-\mu_p)$ is defined using motivic correlators, by a formula similar to (\ref{eqn:theta}):
\[\gamma_2\chain:M_2\chain\otimes_{\Gamma_1(\mathfrak{p})}\mathbb{Q}\to{\rm CE}\chain\pq{\text{\rm gr}^D\text{\rm Lie}_\text{\rm Hod}^\vee}_2.\]
Alternatively, we can obtain such a map by specializing the map $\theta$. For a generic elliptic curve $E=\mathbb{C}/(\mathbb{Z}+\mathbb{Z}\tau)$, we have $\mathcal{O}=\mathbb{Z}$. We use the variant of the map $\theta$ (\ref{eqn:theta}) defined using an order-$p$ subgroup of $E[\mathfrak{p}]$ (see the remark at the end of \S\ref{sec:modular_motivic}). For a family of elliptic curves $E_\tau$ degenerating to nodal $\mathbb{P}^1$ as $\tau\to+i\infty$, make a continuous choice of such an identification with the subgroup of $E[\mathfrak{p}]$ with real coordinates: $z=0,\f1p,\f2p,\dots,\f{p-1}{p}\in E_\tau[p]$. Thus we have a family of sections $s_i$ ($i\in\mathbb{F}_\mathfrak{p}$), which specialize to the $p$-th roots of unity on $\mathbb{P}^1$. We recover the map $\gamma_2\chain$ by specializing the formula (\ref{eqn:theta}).
\bibliographystyle{alpham}
\section{Introduction}
A density matrix is a Hermitian matrix satisfying two conditions: unit trace and non-negativity of the eigenvalues,
\begin{equation}
\text{Tr}(\rho)=1,~~~ \lambda_i\geq 0,
\end{equation}
where $\rho$ is a density matrix and $\lambda_i$ are its eigenvalues.
These conditions, along with hermiticity, specify relations among the elements of the matrix, i.e., a density matrix needs to be parametrized. Moreover, some specific physical situations may impose additional conditions. This presentation is especially interested in cases where some eigenvalues of the matrix are degenerate. In this case a diagonalized form of the matrix (the spectral representation) needs to be considered first.
\begin{equation}
\rho=UDU^\dagger.
\end{equation}
When a diagonal eigenvalue matrix $D$ is degenerate, the number of independent parameters, not only of the eigenvalue matrix but also of a unitary matrix $U$, must be reduced relative to the non-degenerate case. A commutant is a mathematical object expressing the symmetries that occur due to degeneracy; it is defined as a unitary matrix commuting with the eigenvalue matrix. In general, a diagonal phase matrix is a commutant for any Hermitian matrix, regardless of degeneracy. However, when some of the eigenvalues are equal, more matrices in addition to a diagonal phase matrix can commute with a diagonal eigenvalue matrix. Therefore, if a unitary matrix can be parametrized so that a commutant can be factored out of it, that part of the unitary matrix can be eliminated.
The purpose of this paper is to find a systematic way to remove redundant parameters from a general $n$-dimensional degenerate density matrix. For a density matrix with two equal eigenvalues, $\lambda_i=\lambda_j$, a possible commutant other than a diagonal phase matrix is a rotation matrix with an $(i,j)$ rotation block. This example suggests that degeneracies could be identified, in one-to-one correspondence, with separable and factorizable matrix units. If so, such a unit would be some combination of a rotation matrix and phases. Finally, a unitary matrix or a commutant can be constructed as a product of such units and a general phase matrix.\\
The main questions of this paper are the following. First, can the degrees of degeneracy be counted in practice? Second, if so, how are they related to the unitary matrix? Lastly, what could be a separable matrix unit corresponding to one degree of degeneracy? For practical purposes related to these issues, a simple diagram will be introduced that shows how to transform one phase representation into another.
\section{Independent degrees of freedom in a unitary matrix}\label{Sec:Independent degrees of freedom in a unitary matrix}
This section investigates the independent degrees of freedom in a unitary matrix for a degenerate density matrix, without using a specific representation. Without loss of generality an example for $n=4$ is considered: first the case $\lambda_1=\lambda_2$ with $\lambda_3\neq\lambda_4$, and then the case $\lambda_1=\lambda_2$ and $\lambda_3=\lambda_4$ (note that the convention $\lambda_1\geq\lambda_2\geq\lambda_3\geq\lambda_4$ is used here). In the end it will be seen from this example that a convenient choice is to write a unitary matrix as a product of rotation matrices, each carrying one phase, and a general phase matrix on the right or the left. The following splitting of the eigenvalue matrix is convenient for this purpose.
\begin{equation}\label{eq:D}
D=\lambda_1\left(\begin{array}{cccc}
1&0&0&0\\
0&1&0&0\\
0&0&1&0\\
0&0&0&1
\end{array}\right)
+
\left(\begin{array}{cccc}
0&0&0&0\\
0&0&0&0\\
0&0&\Delta\lambda_{31}&0\\
0&0&0&\Delta\lambda_{41}
\end{array}\right)
=\lambda_1I+D^\prime,
\end{equation}
where $\Delta\lambda_{31}\equiv\lambda_3-\lambda_1$, $\Delta\lambda_{41}\equiv\lambda_4-\lambda_1$.
The first term $\lambda_1I$ is separated from a unitary matrix and the density matrix can be written,
\begin{equation}\label{eq:density matrix splitting}
\rho=\lambda_1I+
UD^\prime U^\dagger,
\end{equation}
where
\begin{equation}\label{eq:UD1}
U D^\prime=\left(\begin{array}{cccc}
\otimes&\otimes&a_1&b_1\\
\otimes&\otimes&a_2&b_2\\
\otimes&\otimes&a_3&b_3\\
\otimes&\otimes&a_4&b_4
\end{array}\right)
\left(\begin{array}{cccc}
0&0&0&0\\
0&0&0&0\\
0&0&\Delta\lambda_{31}&0\\
0&0&0&\Delta\lambda_{41}
\end{array}\right).
\end{equation}
Irrelevant elements, corresponding to the zero entries of the first and second rows of $D^\prime$, are denoted by $\otimes$; they do not contribute to the density matrix. Eq.(\ref{eq:UD1}) can be written as
\begin{equation}\label{eq:UD2}
UD^\prime=P_R\left(\begin{array}{cccc}
\otimes&\otimes&|a_1|&e^{i\delta_1}|b_1|\\
\otimes&\otimes&|a_2|&e^{i\delta_2}|b_2|\\
\otimes&\otimes&|a_3|&e^{i\delta_3}|b_3|\\
\otimes&\otimes&|a_4|&|b_4|
\end{array}\right)P_L
\left(\begin{array}{cccc}
0&0&0&0\\
0&0&0&0\\
0&0&\Delta\lambda_{31}&0\\
0&0&0&\Delta\lambda_{41}
\end{array}\right),
\end{equation}
where $P_R$ and $P_L$ are external diagonal phase matrices. $A=(a_1,a_2,a_3,a_4)$ and $B=(b_1,b_2,b_3,b_4)$, where $a_i$ and $b_i$ are complex, are vectors satisfying unitarity conditions. The focus here is to count the change in the number of independent internal degrees of freedom due to degeneracy. It is well known that the number of internal degrees of freedom of a general $n$-dimensional unitary matrix is $(n-1)^2$, so there are nine internal degrees of freedom in this unitary matrix. After the possible phases are absorbed into external phases, the total number of degrees of freedom in $A$ and $B$ is eleven. Unitarity conditions, i.e. orthonormality conditions, eliminate four degrees of freedom. Thus, seven degrees of freedom remain; two degrees of freedom have been removed relative to the non-degenerate case. It can be concluded that one degree of degeneracy corresponds to these two degrees of freedom. It is clear that a two-dimensional rotation matrix is a commutant in this case, and it can be guessed that the other redundant parameter comes from a phase. This example can be easily generalized to the $n$-dimensional case.\\\\
1. Degrees of freedom are contained only in the $n-2$ vectors.\\
2. Each of the $n-2$ vectors has $n$ moduli, so there are $n(n-2)$ such real parameters in total.\\
3. All the phases of one of the $n-2$ vectors can be absorbed into external phases, so $(n-1)(n-3)$ internal phases remain.\\
4. There are $2\times\frac{(n-2)(n-3)}{2}+n-2$ orthonormality conditions among the $n-2$ vectors.\\
The total number of independent internal parameters is
\begin{equation}
n(n-2)+(n-1)(n-3)-(n-2)(n-3)-(n-2)=(n-1)^2-2.
\end{equation}
It can be seen that degeneracy is countable and can be identified with redundancy, i.e. one degree of degeneracy can be assigned to $\lambda_1=\lambda_2$ and corresponds to two degrees of redundancy (two redundant parameters) in a unitary matrix. Next, when one more eigenvalue equals $\lambda_1$ or $\lambda_2$, i.e. $\lambda_1=\lambda_2=\lambda_3$, one more column vector becomes redundant. For arbitrary degrees of degeneracy of a single eigenvalue, the number of independent parameters can be calculated. If $\Delta$ is defined as the number of equal eigenvalues of one kind,
\begin{equation}\label{eq:internal degrees of freedom}
\begin{array}{ccl}
&&n(n-\Delta)+(n-1)(n-\Delta)-(n-1)-2\times\frac{(n-\Delta)(n-\Delta-1)}{2}-(n-\Delta)\\
&=&(n-1)^2-\Delta(\Delta-1).
\end{array}
\end{equation}
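This count can also be checked numerically. The following is a minimal sketch (Python; it assumes NumPy and SciPy are available, and all numerical choices are illustrative) that measures the number of parameters of $U$ acting nontrivially on $\rho=UDU^\dagger$ as the rank of a finite-difference Jacobian over the $n^2$ real directions of the anti-hermitian generator.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def orbit_rank(eigvals, eps=1e-6):
    """Rank of the Jacobian of X -> e^X D e^{-X} at a generic point."""
    n = len(eigvals)
    D = np.diag(np.asarray(eigvals, dtype=complex))
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X0 = (A - A.conj().T) / 2            # generic anti-Hermitian base point

    def rho(X):
        U = expm(X0 + X)
        return U @ D @ U.conj().T

    rho0 = rho(np.zeros((n, n), dtype=complex))
    cols = []
    for i in range(n):                   # n^2 real directions in u(n)
        for j in range(i, n):
            H = np.zeros((n, n), dtype=complex)
            if i == j:
                H[i, i] = 1j
                dirs = [H]
            else:
                H[i, j], H[j, i] = 1, -1
                K = np.zeros((n, n), dtype=complex)
                K[i, j], K[j, i] = 1j, 1j
                dirs = [H, K]
            for G in dirs:
                d = (rho(eps * G) - rho0) / eps
                cols.append(np.concatenate([d.real.ravel(), d.imag.ravel()]))
    return np.linalg.matrix_rank(np.array(cols).T, tol=1e-4)

print(orbit_rank([0.4, 0.3, 0.2, 0.1]))    # 12: non-degenerate reference
print(orbit_rank([0.35, 0.35, 0.2, 0.1]))  # 10 = 12 - 2, Delta = 2
print(orbit_rank([0.3, 0.3, 0.3, 0.1]))    # 6  = 12 - 6, Delta = 3
\end{verbatim}
The absolute ranks differ from the internal counts above only by the bookkeeping of external phases (the $n$ diagonal phases always drop out of $\rho$); the decreases of the rank, $2$ and $6$, are exactly $\Delta(\Delta-1)$.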
It can be noticed that the change $\Delta(\Delta-1)$ is the number of redundant parameters and is twice the number of possible pairs among the equal eigenvalues. Thus, it can be concluded that two redundant parameters occur per pair of equal eigenvalues. This idea can now be generalized. It is convenient to define the degrees of degeneracy as the number of possible pairs of equal eigenvalues:
\begin{equation}
\text{Degrees of degeneracy}=\sum_i\frac{1}{2}\Delta_i(\Delta_i-1),
\end{equation}
where $\Delta_i$ is the multiplicity of the $i$-th distinct eigenvalue. Correspondingly,
\begin{equation}
\text{The number of redundant parameters}=\sum_i\Delta_i(\Delta_i-1).
\end{equation}
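In practice both counts follow directly from the eigenvalue multiplicities; a small helper (plain Python, written here purely for illustration) makes this concrete.
\begin{verbatim}
def degeneracy_counts(eigvals, tol=1e-12):
    """Return (degrees of degeneracy, number of redundant parameters)."""
    mults = []                           # multiplicities Delta_i
    for lam in sorted(eigvals, reverse=True):
        if mults and abs(lam - mults[-1][0]) < tol:
            mults[-1][1] += 1
        else:
            mults.append([lam, 1])
    pairs = sum(d * (d - 1) // 2 for _, d in mults)
    redundant = sum(d * (d - 1) for _, d in mults)
    return pairs, redundant

print(degeneracy_counts([0.35, 0.35, 0.2, 0.1]))  # (1, 2)
print(degeneracy_counts([0.3, 0.3, 0.3, 0.1]))    # (3, 6)
print(degeneracy_counts([0.3, 0.3, 0.2, 0.2]))    # (2, 4)
\end{verbatim}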
If an additional distinct degeneracy is added, i.e. $\lambda_3=\lambda_4$, then
\begin{equation}
U D^\prime=\Delta\lambda_{31}\left(\begin{array}{cccc}
\otimes&\otimes&a_1&b_1\\
\otimes&\otimes&a_2&b_2\\
\otimes&\otimes&a_3&b_3\\
\otimes&\otimes&a_4&b_4
\end{array}\right)
\left(\begin{array}{cccc}
0&0&0&0\\
0&0&0&0\\
0&0&1&0\\
0&0&0&1
\end{array}\right)
\end{equation}
and a density matrix is
\begin{equation}
\begin{array}{ccc}
\rho&=&\Delta\lambda_{31}\left(\begin{array}{cccc}
0&0&a_1&b_1\\
0&0&a_2&b_2\\
0&0&a_3&b_3\\
0&0&a_4&b_4
\end{array}\right)
\left(\begin{array}{cccc}
0&0&0&0\\
0&0&0&0\\
a^*_1&a^*_2&a^*_3&a^*_4\\
b^*_1&b^*_2&b^*_3&b^*_4
\end{array}\right)\\
&=&\Delta\lambda_{31}\left(\begin{array}{cccc}
|x_1|^2&x_1\cdot x^*_2&x_1\cdot x^*_3&x_1\cdot x^*_4\\
x_2\cdot x^*_1&|x_2|^2&x_2\cdot x^*_3&x_2\cdot x^*_4\\
x_3\cdot x^*_1&x_3\cdot x^*_2&|x_3|^2&x_3\cdot x^*_4\\
x_4\cdot x^*_1&x_4\cdot x^*_2&x_4\cdot x^*_3&|x_4|^2
\end{array}\right),
\end{array}
\end{equation}
where $x_i=(a_i,b_i)$. The last form has symmetries in a two-dimensional rotation and a common phase transformation,
\begin{equation}
\begin{array}{ccl}
(a_i,b_i)&\rightarrow & (e^{i\delta}a_i\cos\theta + b_i\sin\theta,-e^{i\delta}a_i\sin\theta +b_i\cos\theta)\\
&=&(a_i\cos\theta + e^{i\eta}b_i\sin\theta,-a_i\sin\theta + e^{i\eta}b_i\cos\theta).
\end{array}
\end{equation}
These symmetries eliminate two degrees of freedom, one rotation angle and one phase, in a unitary matrix. A more degenerate case, for instance $\lambda_1=\lambda_2=\lambda_3$, assigned three degrees of degeneracy (three possible pairs), has six redundancies, i.e. a three-dimensional rotation and three phases. It can be seen in Eq.(\ref{eq:UD2}) that only one vector $B$ and one eigenvalue $\Delta\lambda_{41}$ then remain. All the phases of $B$ can be taken out to the external phases $P_L$, and the remaining degrees of freedom become three, which matches Eq.(\ref{eq:internal degrees of freedom}), $(4-1)^2-3(3-1)=3$.
\section{Commutant of an eigenvalue matrix}\label{Sec:Commutant of an eigenvalue matrix}
A commutant is a mathematical realization of the symmetries between an eigenvalue matrix and a unitary matrix. Redundancy in a unitary matrix is simply a consequence of these symmetries and must be expressible in terms of a commutant. That is, once a complete commutant for a density matrix is found, the redundant parameters of a unitary matrix are expected to be factored out in commutant form. As seen in the last section, the degeneracies are countable, identified with redundant parameters, and thus should be expressed in terms of a commutant. As the degrees of degeneracy increase one by one, the previous commutant should be modified by multiplication with another commutant. Thus, a unitary matrix itself should be formed as a product of the smallest possible commutant units, which appear to be rotation blocks associated with phases.\\
A commutant for a non-degenerate eigenvalue matrix is a diagonal phase matrix, so the phase-matrix part of a unitary matrix adjacent to the eigenvalue matrix is redundant regardless of degeneracy and must be removed. If eigenvalues are degenerate, the corresponding commutant has more degrees of freedom. For one degree of degeneracy, for example $\lambda_i=\lambda_j$, the corresponding commutant should include a product of a diagonal phase matrix and a two-dimensional rotation matrix. The most general possible commutant in this case is
\begin{equation}
D_n=\left(\begin{array}{cccc}
e^{i\delta_1}&0&\cdots&0\\
0&e^{i\delta_2}&\cdots&0\\
0&0&e^{i\delta_3}&0\\
0&0&\cdots&0\\
\vdots&\vdots&\vdots&\vdots\\
0&0&0&\cdots
\end{array}\right)
\left(\begin{array}{cccc}
1&0&\cdots&0\\
0&\ddots&\cdots&0\\
0&\ddots&\left(\begin{array}{cc}
c&s\\
-s&c\end{array}\right)&0\\
0&0&0&0\\
0&0&0&\cdots
\end{array}\right)
\left(\begin{array}{cccc}
e^{i\eta_1}&0&\cdots&0\\
0&e^{i\eta_2}&\cdots&0\\
0&0&e^{i\eta_3}&0\\
0&0&\cdots&0\\
\vdots&\vdots&\vdots&\vdots\\
0&0&0&\cdots
\end{array}\right).
\end{equation}
This is a possible unit for the smallest degeneracy. However, when a unitary matrix or a complete commutant is built as a product of such matrices, the number of independent phases has to be considered. In the above representation all the phases except one can be moved either to the left or to the right of the rotation matrix,
\begin{equation}
D_n=\left(\begin{array}{cccc}
1&0&\cdots&0\\
0&\ddots&\cdots&0\\
0&\ddots&\left(\begin{array}{cc}
e^{i\delta^\prime_i}&0\\
0&1\end{array}\right)&0\\
0&0&0&0\\
0&0&0&\cdots
\end{array}\right)
\left(\begin{array}{cccc}
1&0&\cdots&0\\
0&\ddots&\cdots&0\\
0&\ddots&\left(\begin{array}{cc}
c&s\\
-s&c\end{array}\right)&0\\
0&0&0&0\\
0&0&0&\cdots
\end{array}\right)
\left(\begin{array}{cccc}
e^{i\eta^\prime_1}&0&\cdots&0\\
0&e^{i\eta^\prime_2}&\cdots&0\\
0&0&e^{i\eta^\prime_3}&0\\
0&0&\cdots&0\\
\vdots&\vdots&\vdots&\vdots\\
0&0&0&\cdots
\end{array}\right).
\end{equation}
The phase matrix to the right of the rotation matrix can again be moved further, leaving only one phase, when another unit is multiplied on the right. A unitary matrix can thus be made by multiplying all the possible blocks, each consisting of one left phase and a two-dimensional rotation, together with a general phase matrix on the right. A commutant can be made in the same way. Define a convenient block $W$:
\begin{equation}
W=\left(\begin{array}{cc}
e^{i\delta}&0\\
0&1\end{array}\right)
\left(\begin{array}{cc}
c&s\\
-s&c\end{array}\right).
\end{equation}
A unitary matrix can be written as a product of these $n(n-1)/2$ units $W_i$ and a general phase matrix $Q_n$.
\begin{equation}
U_n=W_1W_2\cdots W_{n(n-1)/2}Q_n,
\end{equation}
where
\begin{equation}
W_i=\left(\begin{array}{cccc}
1&0&\cdots&0\\
0&\ddots&\cdots&0\\
0&\ddots&W
&0\\
\vdots&\vdots&\vdots&\vdots\\
0&0&0&\cdots
\end{array}\right),~~~
Q_n=\left(\begin{array}{cccc}
e^{i\eta_1}&0&\cdots&0\\
0&e^{i\eta_2}&\cdots&0\\
0&0&e^{i\eta_3}&0\\
0&0&\cdots&0\\
\vdots&\vdots&\vdots&\vdots\\
0&0&0&\cdots
\end{array}\right).
\end{equation}
Let this representation be called the ``one phase-one rotation'' representation. It is known that the total number of independent parameters of an $n$-dimensional unitary matrix is $n^2$. It can be verified that the number of parameters in this representation matches this count:
\begin{equation}
\frac{n(n-1)}{2}\times 2+n=n^2.
\end{equation}
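As a concrete illustration, this representation is straightforward to assemble numerically. The sketch below (assuming NumPy; the ordering of the $(i,j)$ planes is one arbitrary but fixed choice) builds a $U_4$ from $2\times6+4=16$ real parameters and checks unitarity.
\begin{verbatim}
import numpy as np

def W_block(n, i, j, delta, theta):
    """Embed W = diag(e^{i delta}, 1) R(theta) into the (i, j) plane."""
    M = np.eye(n, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    M[i, i] = np.exp(1j * delta) * c
    M[i, j] = np.exp(1j * delta) * s
    M[j, i], M[j, j] = -s, c
    return M

def U_one_phase_one_rotation(n, deltas, thetas, etas):
    U = np.eye(n, dtype=complex)
    k = 0
    for i in range(n):
        for j in range(i + 1, n):        # one W per pair (i, j)
            U = U @ W_block(n, i, j, deltas[k], thetas[k])
            k += 1
    return U @ np.diag(np.exp(1j * np.asarray(etas)))   # right phases Q_n

n, m = 4, 6                              # m = n(n-1)/2
rng = np.random.default_rng(1)
U = U_one_phase_one_rotation(n, rng.uniform(0, 2*np.pi, m),
                             rng.uniform(0, np.pi/2, m),
                             rng.uniform(0, 2*np.pi, n))
print(np.allclose(U.conj().T @ U, np.eye(n)))           # True
\end{verbatim}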
\section{Equivalent angular representations}\label{Sec:Equivalent angular representations}
\subsection{Surjectivity}
It is practically important to know which angular representations of unitary matrices in terms of rotations and phases are equivalent to a general description. Before considering this problem it is worthwhile to clarify whether an angular representation is onto, i.e. surjective onto the canonical coordinates of the first kind, i.e. the exponential map $U=e^X$, where $X$ is a general $n$-dimensional anti-hermitian matrix. By the Baker\--Campbell\--Hausdorff formula it is obvious that the image of the representation $U=e^X$ contains that of the representation $e^{X_1}\cdots e^{X_r}$, where $X=X_1+\cdots+X_r$ is composed of $X_i(\theta_\alpha,\cdots)$ parametrized with $(\theta_\alpha, \cdots)$.\\
In the same way it can be proved that all the group elements of the form $U=e^X$ can be written in the form $e^{X_1}\cdots e^{X_r}$, that is, the two representations are equivalent. Assume that there is an element $e^X$, where $X=tX_1+\cdots+tX_r$, which could not be written in some decomposition form $e^{tX^\prime_1}\cdots e^{tX^\prime_r}$, where $X^\prime_i=X_i(\theta^\prime_\alpha,\cdots)$. By the Baker\--Campbell\--Hausdorff formula,
\begin{equation}
e^{tX_1+t^2\delta X_1}\cdots e^{tX_r+t^2\delta X_r}=e^{tX_1+\cdots +tX_r +t^2(\delta X_1+\delta X_2+\cdots+\delta X_r)+O(t^2)+O(t^3)},
\end{equation}
where $\delta X_i$ has not been specified yet. To eliminate the $t^2$-order terms in the exponential on the right hand side, choose $t^2(\delta X_1+\cdots+\delta X_r)$ as $-O(t^2)$, where the $O(t^2)$ term comes from the commutators among the $tX_i$ and is independent of $\delta X_i$. This implies that $e^{tX_1+\cdots+tX_r}$ equals $e^{tX_1+t^2\delta X_1}\cdots e^{tX_r+t^2\delta X_r}$ up to $O(t^3)$. Again, $t^3\delta X^\prime_i$ can be included in the exponentials on the left hand side to eliminate the $t^3$ order. In this way, all the higher-order differences can be removed up to any order. It means that $e^{tX_1+\cdots+tX_r}$ can be expressed in the form $e^{tX^\prime_1}\cdots e^{tX^\prime_r}$.
\begin{equation}
e^{tX_1+t^2\delta X_1+t^3\delta X^\prime_1}\cdots e^{tX_r+t^2\delta X_r+t^3\delta X^\prime_r}=e^{tX_1+\cdots +tX_r +t^3(\delta X^\prime_1+\delta X^\prime_2+\cdots+\delta X^\prime_r)+O(t^3)}.
\end{equation}
Thus, the previous assumption is not correct. Therefore, if $X=X_1+\cdots+X_r$ is a general anti-hermitian matrix, the exponential map $e^{X_1}\cdots e^{X_r}$ is onto the representation in the canonical coordinates of the first kind $e^X$.
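The order-by-order cancellation in this argument can also be observed numerically at the lowest order. In the sketch below (assuming NumPy and SciPy), adding $\delta X=-\frac{t^2}{2}[X_1,X_2]$ to the second exponent cancels the leading Baker\--Campbell\--Hausdorff commutator, so the error of the product drops from $O(t^2)$ to $O(t^3)$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
def anti_hermitian(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A - A.conj().T) / 2

X1, X2 = anti_hermitian(4), anti_hermitian(4)
comm = X1 @ X2 - X2 @ X1
for t in (0.1, 0.05, 0.025):
    target = expm(t * (X1 + X2))
    plain = expm(t * X1) @ expm(t * X2)
    fixed = expm(t * X1) @ expm(t * X2 - 0.5 * t**2 * comm)
    print(t, np.linalg.norm(plain - target), np.linalg.norm(fixed - target))
# The first error column scales like t^2, the corrected one like t^3.
\end{verbatim}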
\subsection{Phase transformed rotation matrix}\label{Sec:Phase transformed rotation matrix}
As has just been proven, any decomposition of a general anti-hermitian matrix $X$ is equivalent to the representation in the canonical coordinates of the first kind. It is well known that a three-dimensional unitary matrix can be represented with three rotations, one internal and five external phases. This is called the KM parametrization in particle physics \cite{Kobayashi:1973fv}, and it can also be generalized to $n$ dimensions. This representation is suitable when external phases are not physical and are absorbed into fermion fields. Here, starting with a simple decomposition obtained from the representation in the canonical coordinates of the first kind, it will be shown that it can be reduced to the one phase-one rotation representation defined in Section \ref{Sec:Commutant of an eigenvalue matrix}, which is a convenient representation for a degenerate density matrix. It will be shown how phases can be moved to other places using diagrams in the last subsection \ref{Sec:Phase manipulations}. A general anti-hermitian matrix $X$ in $n$ dimensions is
\begin{equation}
X=\left(\begin{array}{ccccc}
i\alpha_1&z_{12}&\cdots&z_{1,n-1}&z_{1,n}\\
-z^*_{12}&i\alpha_2&z_{23}&\cdots&z_{2,n}\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
-z^*_{1,n-1}&-z^*_{1,n-1}&\cdots&i\alpha_{n-1}&z_{n-1,n}\\
-z^*_{1,n}&-z^*_{2,n}&\cdots&-z^*_{n-1,n}&i\alpha_n
\end{array}\right).
\end{equation}
Let $X$ be decomposed as follows.
\begin{equation}
X_{1,1}=\left(\begin{array}{ccccc}
i\alpha_1&0&\cdots&0&0\\
0&0&0&0&0\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&0&0\\
0&0&0&0&0
\end{array}\right),~~
X_{2,2}=\left(\begin{array}{ccccc}
0&0&\cdots&0&0\\
0&i\alpha_2&0&0&0\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&0&0\\
0&0&0&0&0
\end{array}\right),\cdots
X_{n,n}=\left(\begin{array}{ccccc}
0&0&\cdots&0&0\\
0&0&0&0&0\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&0&0\\
0&0&0&0&i\alpha_{n}
\end{array}\right),
\end{equation}
\begin{equation}
X_{12}=\left(\begin{array}{ccccc}
0&z_{12}&\cdots&0&0\\
-z^*_{12}&0&0&0&0\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&0&0\\
0&0&0&0&0
\end{array}\right),~~
X_{13}=\left(\begin{array}{ccccc}
0&0&z_{13}&\cdots&0\\
0&0&0&0&0\\
-z^*_{13}&0&\ddots&\vdots&\vdots\\
0&0&0&0&0\\
0&0&0&0&0
\end{array}\right),\cdots
X_{n-1, n}=\left(\begin{array}{ccccc}
0&0&\cdots&0&0\\
0&0&0&0&0\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&0&z_{n-1,n}\\
0&0&0&-z^*_{n-1,n}&0
\end{array}\right).\nonumber
\end{equation}
A complex entry $z_{ij}$ can be written as $e^{i\theta_{ij}}|z_{ij}|$, so the matrix $e^{X_{ij}}$ can be expressed as
\begin{equation}
e^{X_{ij}}=e^{X_{i,i}(\theta_{ij})}e^{X_{ij}(|z_{ij}|)}e^{X_{i,i}^\dagger(\theta_{ij})}
=e^{X_{j,j}(-\theta_{ij})}e^{X_{ij}(|z_{ij}|)}e^{X_{j,j}^\dagger(-\theta_{ij})}.
\end{equation}
In other words, $e^{X_{ij}}=P_{i}(\theta_{ij})R_{ij}(|z_{ij}|)P^\dagger_i(\theta_{ij})=P_{j}(-\theta_{ij})R_{ij}(|z_{ij}|)P^\dagger_j(-\theta_{ij})$. The matrix $X_{ij}$ carries one rotation and one associated phase as degrees of freedom. A unitary matrix in this decomposition is given by
\begin{equation}
U_n=e^{X_{12}}e^{X_{23}}\cdots e^{X_{n-1, n}}Q_n
=P_1R_{12}P^\dagger_1 P_2R_{23}P^\dagger_2\cdots P_{n-1}R_{n-1,n}P^\dagger_{n-1}Q_n.
\end{equation}
where $Q_n$ is a general phase matrix in $n$ dimensions. Note that $Q_n$ can be placed anywhere, but for the purpose of dealing with a degeneracy it is more convenient to put it on the right side. Let us call this representation the ``phase adjoint rotation representation''.
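The identity $e^{X_{ij}}=P_{i}(\theta_{ij})R_{ij}(|z_{ij}|)P^\dagger_i(\theta_{ij})$ is also easy to verify directly; a short numerical check (assuming NumPy and SciPy) on a single $2\times2$ block reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

z = 0.7 * np.exp(1j * 0.3)                  # z_ij = e^{i theta}|z_ij|
X = np.array([[0, z], [-np.conj(z), 0]])    # the X_ij block
P = np.diag([np.exp(1j * np.angle(z)), 1.0])
R = np.array([[np.cos(abs(z)), np.sin(abs(z))],
              [-np.sin(abs(z)), np.cos(abs(z))]])
print(np.allclose(expm(X), P @ R @ P.conj().T))   # True
\end{verbatim}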
In an angular representation the number of internal and external phases is
\begin{equation}
\frac{(n-1)(n-2)}{2}+2n-1=\frac{1}{2}n(n+1).
\end{equation}
The number of independent phases in a usual angular representation is exactly the same as the number of phases in this representation: the number of phases from the $X_{i,i}$ generators is $n$, and the number of phases carried by the rotations, one per rotation, is $\frac{1}{2}n(n-1)$.
\begin{equation}
\frac{1}{2}n(n-1)+n=\frac{1}{2}n(n+1).
\end{equation}
The following subsection shows how this representation is transformed into the one phase-one rotation representation, with an example for $n=3$.
\subsection{Degenerate density matrix for $n=3$}\label{Sec:Degenerate density matrix for n=3}
This section presents an example of a degenerate density matrix for $n=3$, parametrized with a certain choice of phase representation for the unitary matrix, the phase adjoint rotation representation defined in the last section \ref{Sec:Phase transformed rotation matrix}, and spherical polar coordinates for the eigenvalue matrix in the spectral representation, $\rho_3=U_3D_3U^\dagger_3$.
The unitary matrix representation here consists of $n(n-1)/2=3$ adjoint rotation matrices $W_{ij}=P_iR_{ij}P^\dagger_i$, each $P_i$ having one independent phase, together with a general phase matrix $Q_n$. All the representations with different orderings of the matrices are equivalent.
To satisfy the non-negativity of the eigenvalues and the normalized trace condition, spherical polar coordinates are chosen. For convenience the general phase matrix $Q_n$ is placed on the right in the unitary matrix.
\begin{equation}\label{U3}
U_3=W_{31}W_{23}W_{12}Q_3.
\end{equation}
The phase matrices associated with the rotation matrices are
\begin{equation}
P_1=\left(\begin{array}{ccc}
e^{i\delta_{1}}&0&0\\
0&1&0\\
0&0&1
\end{array}\right),~~
P_2=\left(\begin{array}{ccc}
1&0&0\\
0&e^{i\delta_{2}}&0\\
0&0&1
\end{array}\right),~~
P_3=\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{i\delta_{3}}
\end{array}\right),
\end{equation}
where the parameter set for $\delta_i$ is
\begin{equation}
S_p=\{\delta_i\in R: 0\leq\delta_i< 2\pi, (i=1,2,3) \}.
\end{equation}
In addition to those there are $n$ independent phases necessary for completion.
\begin{equation}
Q_3=\left(\begin{array}{ccc}
e^{i\eta_{1}}&0&0\\
0&e^{i\eta_{2}}&0\\
0&0&e^{i\eta_{3}}
\end{array}\right),
\end{equation}
where the parameter set for $\eta_i$ is
\begin{equation}
S_Q=\{\eta_i\in R: 0\leq\eta_i< 2\pi, (i=1,2,3) \}.
\end{equation}
The $n(n-1)/2$ possible rotations are
\begin{equation}
R_{12}=\left(\begin{array}{ccc}
c_{12}&s_{12}&0\\
-s_{12}&c_{12}&0\\
0&0&1
\end{array}\right),~~
R_{23}=\left(\begin{array}{ccc}
1&0&0\\
0&c_{23}&s_{23}\\
0&-s_{23}&c_{23}
\end{array}\right),~~ \text{and}~
R_{31}=\left(\begin{array}{ccc}
c_{31}&0&s_{31}\\
0&1&0\\
-s_{31}&0&c_{31}
\end{array}\right),
\end{equation}
where the parameter set for $(\theta_{12}, \theta_{23}, \theta_{31})$ is
\begin{equation}
S_R=\{(\theta_{12}, \theta_{23}, \theta_{31})\in R^3:0\leq \theta_{12}, \theta_{23},\theta_{31}\leq \pi/2 \}.
\end{equation}
One can check that the ranges of rotation angles $0\leq \theta_{ij}\leq \pi/2$ guarantee that the mapping is one-to-one. The phase $e^{i\delta_i}$ at $\delta_i=\pi$ amounts to a change of sign of some elements of the rotation matrices. $W_{ij}$ has a phase to the left of the rotation matrix $R_{ij}$. This phase, together with phases from $W_{ik}$ and $W_{jk}$ that can be brought to the right side of $R_{ij}$, can change the sign of either the cosine or the sine in $R_{ij}$. With one phase associated to a rotation matrix, the sign of only one column or row can change, but together with another phase from the relevant $W$ matrices on the other side, the sign of either the cosine or the sine can be changed. Therefore, the range of rotation angles $0\leq\theta_{ij}\leq \pi/2$ is necessary and sufficient to cover all possible rotations.
A unitary matrix can be explicitly expressed from Eq.(\ref{U3}),
\begin{equation}
\begin{array}{ccl}
U_3=&&\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
c_{31}&0&s_{31}\\
0&1&0\\
-s_{31}&0&c_{31}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&e^{i\delta_{2}}&0\\
0&0&e^{-i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&c_{23}&s_{23}\\
0&-s_{23}&c_{23}
\end{array}\right)\\
&\times &\left(\begin{array}{ccc}
e^{i\delta_{1}}&0&0\\
0&e^{-i\delta_{2}}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
c_{12}&s_{12}&0\\
-s_{12}&c_{12}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
e^{-i\delta_{1}}&0&0\\
0&1&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
e^{i\eta_{1}}&0&0\\
0&e^{i\eta_{2}}&0\\
0&0&e^{i\eta_{3}}
\end{array}\right).
\end{array}
\end{equation}
By the following observation, a phase can pass through a rotation matrix.
\begin{equation}
\left(\begin{array}{cc}
e^{i\delta_{1}}&0\\
0&e^{i\delta_{2}}
\end{array}\right)
\left(\begin{array}{cc}
c&s\\
-s&c
\end{array}\right)
=\left(\begin{array}{cc}
e^{i(\delta_{1}-\delta_{2})}&0\\
0&1
\end{array}\right)
\left(\begin{array}{cc}
c&s\\
-s&c
\end{array}\right)
\left(\begin{array}{cc}
e^{i\delta_{2}}&0\\
0&e^{i\delta_{2}}
\end{array}\right).
\end{equation}
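This observation is easily confirmed numerically (assuming NumPy): the common phase commutes with the rotation, so the pair of phases on the left can be traded for one relative phase on the left and a common phase on the right.
\begin{verbatim}
import numpy as np

d1, d2, t = 0.7, 1.9, 0.5
R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
lhs = np.diag([np.exp(1j*d1), np.exp(1j*d2)]) @ R
rhs = np.diag([np.exp(1j*(d1-d2)), 1]) @ R @ (np.exp(1j*d2) * np.eye(2))
print(np.allclose(lhs, rhs))   # True
\end{verbatim}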
A unitary matrix can thus be rearranged as a product of one phase-one rotation blocks and $n$ phases.
\begin{equation}
\begin{array}{ccl}
U_3=&&\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
c_{31}&0&s_{31}\\
0&1&0\\
-s_{31}&0&c_{31}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&e^{i(\delta_{2}+\delta_{3})}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&c_{23}&s_{23}\\
0&-s_{23}&c_{23}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&e^{-i(\delta_{2}+\delta_{3})}&0\\
0&0&1
\end{array}\right)\\
&\times &
\left(\begin{array}{ccc}
1&0&0\\
0&e^{i\delta_{2}}&0\\
0&0&e^{i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
e^{i\delta_{1}}&0&0\\
0&e^{-i\delta_{2}}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
c_{12}&s_{12}&0\\
-s_{12}&c_{12}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
e^{-i\delta_{1}}&0&0\\
0&1&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
e^{i\eta_{1}}&0&0\\
0&e^{i\eta_{2}}&0\\
0&0&e^{i\eta_{3}}
\end{array}\right)\\
=&&\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
c_{31}&0&s_{31}\\
0&1&0\\
-s_{31}&0&c_{31}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&e^{i(\delta_{2}+\delta_{3})}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&c_{23}&s_{23}\\
0&-s_{23}&c_{23}
\end{array}\right)\\
&\times &
\left(\begin{array}{ccc}
e^{i(\delta_{1}+\delta_{2}+\delta_{3})}&0&0\\
0&1&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
c_{12}&s_{12}&0\\
-s_{12}&c_{12}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
e^{-i(\delta_{1}+\delta_{2}+\delta_{3})}&0&0\\
0&e^{i(\delta_{2}+\delta_{3})}&0\\
0&0&e^{i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
e^{i\eta_{1}}&0&0\\
0&e^{i\eta_{2}}&0\\
0&0&e^{i\eta_{3}}
\end{array}\right).
\end{array}
\end{equation}
It can be seen that the unitary matrix is just a product of such blocks $Y_i$ and a phase matrix $Q_3$.
\begin{equation}
U_3=Y_3Y_2Y_1Q_3,
\end{equation}
where
\begin{equation}
Y_3=P_3R_{31}=\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
c_{31}&0&s_{31}\\
0&1&0\\
-s_{31}&0&c_{31}
\end{array}\right),
\end{equation}
\begin{equation}
Y_2=P_2R_{23}
=\left(\begin{array}{ccc}
1&0&0\\
0&e^{i\delta_{2}}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&c_{23}&s_{23}\\
0&-s_{23}&c_{23}
\end{array}\right)
\end{equation}
and
\begin{equation}
Y_1=P_1R_{12}
=\left(\begin{array}{ccc}
e^{i\delta_{1}}&0&0\\
0&1&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
c_{12}&s_{12}&0\\
-s_{12}&c_{12}&0\\
0&0&1
\end{array}\right).
\end{equation}
A density matrix in the spectral representation is
\begin{equation}
\rho_3=U_3D_3U^\dagger_3.
\end{equation}
Certain parametrizations of a diagonal eigenvalue matrix automatically satisfy the trace and non-negativity conditions. Such parametrizations are not essential for the $n=3$ case, but they may be useful when general cases are considered. Let us use spherical polar coordinates \cite{Boya:1998nq, Tilma:2002kf}.
\begin{equation}
D_3=\left(\begin{array}{ccc}
\sin^2\theta\sin^2\phi&0&0\\
0&\sin^2\theta\cos^2\phi&0\\
0&0&\cos^2\theta
\end{array}\right),
\end{equation}
where the parameter set for $(\theta, \phi)$ is
\begin{equation}
S_D=\{(\theta, \phi)\in R^2:0\leq \theta\leq \pi/2,0\leq \phi\leq \pi/2 \}.
\end{equation}
The total parameter space for a density matrix for $n=3$ is
\begin{equation}
S=\{(\delta_i,\theta_{12},\theta_{23}, \theta_{31}, \theta, \phi)\in R^8:0\leq\delta_i\leq 2\pi, 0\leq \theta_{12},\theta_{23}, \theta_{31}, \theta, \phi \leq \pi/2, (i=1,2,3) \}.
\end{equation}
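A short numerical sketch (assuming NumPy; the parameter values are arbitrary) of this parametrization builds $\rho_3$ from the eight parameters in $S$ and confirms the unit trace and the non-negativity of the spectrum; the commuting phase matrix $Q_3$ is omitted since it cancels in $\rho_3=U_3D_3U^\dagger_3$.
\begin{verbatim}
import numpy as np

def P(n, i, delta):                    # one-phase matrix P_i
    M = np.eye(n, dtype=complex)
    M[i, i] = np.exp(1j * delta)
    return M

def R(n, i, j, theta):                 # rotation R_{ij} (0-based indices)
    M = np.eye(n, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    M[i, i], M[i, j], M[j, i], M[j, j] = c, s, -s, c
    return M

def rho3(d1, d2, d3, t12, t23, t31, theta, phi):
    # U_3 = Y_3 Y_2 Y_1 with Y_3 = P_3 R_31, Y_2 = P_2 R_23, Y_1 = P_1 R_12
    U = (P(3, 2, d3) @ R(3, 0, 2, t31)) \
        @ (P(3, 1, d2) @ R(3, 1, 2, t23)) \
        @ (P(3, 0, d1) @ R(3, 0, 1, t12))
    D = np.diag([np.sin(theta)**2 * np.sin(phi)**2,
                 np.sin(theta)**2 * np.cos(phi)**2,
                 np.cos(theta)**2]).astype(complex)
    return U @ D @ U.conj().T

rho = rho3(0.3, 1.1, 2.0, 0.4, 0.7, 1.0, 0.9, 0.6)
print(np.isclose(np.trace(rho).real, 1.0))          # unit trace
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))    # non-negative spectrum
\end{verbatim}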
This parametrization of an eigenvalue matrix can be extended to the $n$-dimensional case using $n$-sphere polar coordinates. If only $k$ eigenvalues are distinct due to degeneracy, $k$-sphere polar coordinates are used instead of $n$-sphere polar coordinates. The same averaged squared component is assigned to equal eigenvalues. For example, in the examples below two-dimensional polar coordinates are used: for $\lambda_1=\lambda_2$ the averaged squared component is $\lambda_1=\lambda_2=\frac{1}{2}\sin^2\theta$, and similarly, for $\lambda_2=\lambda_3$, $\lambda_2=\lambda_3=\frac{1}{2}\cos^2\theta$.\\
i) Case $\lambda_1=\lambda_2$.
\begin{equation}
\rho_{12}=Y_3Y_2Y_1QD_3Q^\dagger Y^\dagger_1Y^\dagger_2Y^\dagger_3
=Y_3Y_2D_3Y^\dagger_2Y^\dagger_3,
\end{equation}
\begin{equation}
\begin{array}{ccl}
\rho_{12}=&&\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
c_{31}&0&s_{31}\\
0&1&0\\
-s_{31}&0&c_{31}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&e^{i\delta_{2}}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&c_{23}&s_{23}\\
0&-s_{23}&c_{23}
\end{array}\right)
\left(\begin{array}{ccc}
s^2_\theta /2&0&0\\
0&s^2_\theta /2&0\\
0&0&c^2_\theta
\end{array}\right)\\
&&\times
\left(\begin{array}{ccc}
1&0&0\\
0&c_{23}&-s_{23}\\
0&s_{23}&c_{23}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&e^{-i\delta_{2}}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
c_{31}&0&-s_{31}\\
0&1&0\\
s_{31}&0&c_{31}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{-i\delta_{3}}
\end{array}\right).
\end{array}
\end{equation}
ii) Case $\lambda_2=\lambda_3$.
\begin{equation}
\rho_{23}=Y_3Y_1Y_2QD_3Q^\dagger Y^\dagger_2Y^\dagger_1Y^\dagger_3
=Y_3Y_1D_3Y^\dagger_1Y^\dagger_3,
\end{equation}
\begin{equation}
\begin{array}{ccl}
\rho_{23}=&&\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{i\delta_{3}}
\end{array}\right)
\left(\begin{array}{ccc}
c_{31}&0&s_{31}\\
0&1&0\\
-s_{31}&0&c_{31}
\end{array}\right)
\left(\begin{array}{ccc}
e^{i\delta_{1}}&0&0\\
0&1&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
c_{12}&s_{12}&0\\
-s_{12}&c_{12}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
s^2_\theta&0&0\\
0&c^2_\theta/2 &0\\
0&0& c^2_\theta/2
\end{array}\right)\\
&&\times
\left(\begin{array}{ccc}
c_{12}&-s_{12}&0\\
s_{12}&c_{12}&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
e^{-i\delta_{1}}&0&0\\
0&1&0\\
0&0&1
\end{array}\right)
\left(\begin{array}{ccc}
c_{31}&0&-s_{31}\\
0&1&0\\
s_{31}&0&c_{31}
\end{array}\right)
\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&e^{-i\delta_{3}}
\end{array}\right).
\end{array}
\end{equation}
This example can be easily extended to an $n$-dimensional unitary matrix with $n$-dimensional spherical coordinates for the parametrization of the eigenvalue matrix.
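A corresponding numerical sanity check (assuming NumPy, with the same conventions as the sketch above) confirms case i): when $\lambda_1=\lambda_2$, the parameters of $Y_1$ drop out of the density matrix.
\begin{verbatim}
import numpy as np

def P(n, i, delta):
    M = np.eye(n, dtype=complex)
    M[i, i] = np.exp(1j * delta)
    return M

def R(n, i, j, theta):
    M = np.eye(n, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    M[i, i], M[i, j], M[j, i], M[j, j] = c, s, -s, c
    return M

def rho_deg(d1, d2, d3, t12, t23, t31, theta):
    # lambda_1 = lambda_2 = sin^2(theta)/2, lambda_3 = cos^2(theta)
    U = (P(3, 2, d3) @ R(3, 0, 2, t31)) \
        @ (P(3, 1, d2) @ R(3, 1, 2, t23)) \
        @ (P(3, 0, d1) @ R(3, 0, 1, t12))
    D = np.diag([np.sin(theta)**2 / 2, np.sin(theta)**2 / 2,
                 np.cos(theta)**2]).astype(complex)
    return U @ D @ U.conj().T

a = rho_deg(0.3, 1.1, 2.0, 0.4, 0.7, 1.0, 0.9)
b = rho_deg(2.9, 1.1, 2.0, 1.2, 0.7, 1.0, 0.9)  # new delta_1 and theta_12
print(np.allclose(a, b))   # True: Y_1 acts as a commutant factor
\end{verbatim}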
\subsection{Phase manipulations}\label{Sec:Phase manipulations}
For practical purposes it is often useful to have an easy way to check whether a certain phase representation is a general description of a unitary matrix. As seen above, the one phase-one rotation representation is a general description of a unitary matrix. Here a simple diagram is introduced to show that a product of all possible rotation matrices, with general phase matrices inserted between them, is also equivalent, up to a change of the phase configuration.
\begin{equation}\label{eq:general phase representation}
U=P_LR_{12}P_1R_{23}P_2\cdots R_{n,n-1}P_R=Y_1Y_2\cdots Y_{n(n-1)/2}P^\prime_R,
\end{equation}
where $Y_i=\tilde{P}_iR_{ij}$, $\tilde{P}_i$ is a phase matrix with one phase in the $(i,i)$ place, $P_i$, $P_L$ and $P_R$ are general diagonal phase matrices, and $R_{ij}$ are rotation matrices. The following simplified diagrams, in Tables [\ref{tb:1}, \ref{tb:2}], are convenient for transforming one phase representation into another.
\begin{table}[!ht]
\centering
$\left(\begin{array}{cc}
\cos\theta_{ij}&\sin\theta_{ij}\\
-\sin\theta_{ij}&\cos\theta_{ij}\\
\end{array}\right)=~$\begin{tabular}{|l|}
\hline
$i$\\ \hline
$j$\\ \hline
\end{tabular}~,~~
$\left(\begin{array}{cc}
e^{i\delta_i}&0\\
0&e^{i\delta_j}\\
\end{array}\right)=~$\begin{tabular}{|l|}
\hline
$\textcircled{i}$\\ \hline
$\textcircled{j}$\\ \hline
\end{tabular}
\caption{Rotation blocks and phase matrices}\label{tb:1}
\end{table}
\begin{table}[!ht]
\centering
$\left(\begin{array}{cc}
e^{i\delta_i}&0\\
0&e^{i\delta_j}\\
\end{array}\right)\left(\begin{array}{cc}
\cos\theta_{ij}&\sin\theta_{ij}\\
-\sin\theta_{ij}&\cos\theta_{ij}\\
\end{array}\right)
\left(\begin{array}{cc}
e^{i\delta_k}&0\\
0&e^{i\delta_l}\\
\end{array}\right)=~$\begin{tabular}{|l|l|l|}
\hline
$\textcircled{i}$ &$i$&$\textcircled{k}$\\ \hline
$\textcircled{j}$ &$j$&$\textcircled{l}$\\ \hline
\end{tabular}
\caption{a phase-rotation-phase product}\label{tb:2}
\end{table}
The first example shows that the above representation can be reduced to the KM parametrization \cite{Kobayashi:1973fv}. The key point in this manipulation is to realize that a general $U(2)$ matrix must be parametrized with not four but three independent phases and one rotation. The four phases in Eq.(\ref{3phaseU(2)}) can be reduced to three independent phases because a $U(1)$ phase commutes with any matrix. Also, the places of the three independent phases can be chosen freely out of the four available places.
\begin{equation}\label{3phaseU(2)}
\left(\begin{array}{cc}
e^{i\delta_i} &0\\
0&e^{i\delta_j}\\
\end{array}\right)
\left(\begin{array}{cc}
c&s\\
-s&c\\
\end{array}\right)
\left(\begin{array}{cc}
e^{i\eta_i} &0\\
0&e^{i\eta_j}
\end{array}\right)
\Rightarrow
\left(\begin{array}{cc}
e^{i(\delta_i-\delta_j)} &0\\
0&1\\
\end{array}\right)
\left(\begin{array}{cc}
c&s\\
-s&c\\
\end{array}\right)
\left(\begin{array}{cc}
e^{i(\eta_i+\delta_j)} &0\\
0&e^{i(\eta_j+\delta_j)}
\end{array}\right)
\end{equation}
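A quick numerical confirmation (assuming NumPy) of this reduction:
\begin{verbatim}
import numpy as np

def ph(a, b):
    return np.diag([np.exp(1j * a), np.exp(1j * b)])

def rot(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

d_i, d_j, e_i, e_j, t = 0.4, 1.3, 2.1, 0.8, 0.6
lhs = ph(d_i, d_j) @ rot(t) @ ph(e_i, e_j)
rhs = ph(d_i - d_j, 0) @ rot(t) @ ph(e_i + d_j, e_j + d_j)
print(np.allclose(lhs, rhs))   # True: only three phases are independent
\end{verbatim}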
Tables [\ref{tb:3}, \ref{tb:4}, \ref{tb:5}] show how to transform the phases of the general phase representation, Eq.(\ref{eq:general phase representation}), into the KM parametrization \cite{Kobayashi:1973fv}.
\begin{table}[!ht]
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\textcircled{L}_1$& & $\textcircled{1}_L$&1&$\textcircled{1}_C$ &1 &$\textcircled{R}_1$\\ \hline
$\textcircled{L}_2$ &2 &$\textcircled{2}_C$&2 &$\textcircled{2}_R$&&$\textcircled{R}_2$\\ \hline
$\textcircled{L}_3$ & 3 &$\textcircled{3}_R$&&$\textcircled{3}_L$&3&$\textcircled{R}_3$\\
\hline
\end{tabular}~~$\rightarrow$
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& & $\textcircled{L}_1\textcircled{1}_L$&1&$\textcircled{1}_C$&1 &$\textcircled{R}_1$\\ \hline
$\textcircled{L}_2$ &2 &$\textcircled{2}_C$&2 &$\textcircled{2}_R\textcircled{R}_2$&&\\ \hline
$\textcircled{L}_3$ & 3 &&&$\textcircled{3}_R\textcircled{3}_L$&3&$\textcircled{R}_3$\\
\hline
\end{tabular}
\caption{Combining commuting phases}\label{tb:3}
\end{table}
%
\begin{table}[!ht]
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& & $\textcircled{L}_1\textcircled{1}_L$&1&$\textcircled{1}_C$&1 &$\textcircled{R}_1$\\ \hline
$\textcircled{L}_2$ &2 &$\textcircled{2}_C$&2 &$\textcircled{2}_R\textcircled{R}_2$&&\\ \hline
$\textcircled{L}_3$ & 3 &&&$\textcircled{3}_R\textcircled{3}_L$&3&$\textcircled{R}_3$\\
\hline
\end{tabular}~~$\rightarrow$
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& & $\textcircled{L}_1\textcircled{1}_L$&1&$\textcircled{1}_C/\textcircled{3}_R\textcircled{3}_L$&1 &$\textcircled{R}_1\textcircled{3}_R\textcircled{3}_L$\\ \hline
$\textcircled{L}_2$ &2 &$\textcircled{2}_C$&2 &$\textcircled{2}_R\textcircled{R}_2/\textcircled{3}_R\textcircled{3}_L$&&$\textcircled{3}_R\textcircled{3}_L$\\ \hline
$\textcircled{L}_3$ & 3 &&&&3&$\textcircled{R}_3\textcircled{3}_R\textcircled{3}_L$\\
\hline
\end{tabular}
\caption{One phase can be removed from four phases around (3,1) rotation.}\label{tb:4}
\end{table}
\begin{table}[!ht]
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& & $\textcircled{1}^\prime_L$&1&$\textcircled{1}^\prime_C$&1 &$\textcircled{R}_1^\prime$\\ \hline
$\textcircled{L}_2$ &2 &$\textcircled{2}_C$&2 &$\textcircled{2}_R^\prime$&&\\ \hline
$\textcircled{L}_3$ & 3 &&&&3&$\textcircled{R}_3^\prime$\\
\hline
\end{tabular}
$\rightarrow$
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& & $\textcircled{1}^\prime_L/\textcircled{2}_R$&1&$\textcircled{1}^\prime_C/\textcircled{2}_R$&1 &$\textcircled{R}_1^\prime$\\ \hline
$\textcircled{L}_2$ &2 &&2 &$\textcircled{2}_R^\prime/\textcircled{2}_C$&&\\ \hline
$\textcircled{L}_3$ & 3 &&&&3&$\textcircled{R}_3^\prime$\\
\hline
\end{tabular}
\begin{center}
$\rightarrow$
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\textcircled{1}^\prime_L/\textcircled{2}_R$ & & &1&$\textcircled{1}^\prime_C/\textcircled{2}_R$&1 &$\textcircled{R}_1^\prime$\\ \hline
$\textcircled{L}_2$ &2 &&2 &&&$\textcircled{2}_R^\prime/\textcircled{2}_C$\\ \hline
$\textcircled{L}_3$ & 3 &&&&3&$\textcircled{R}_3^\prime$\\
\hline
\end{tabular}
\caption{One phase can be removed from four phases around (1,2) rotation.}\label{tb:5}
\end{center}
\end{table}
In a similar way the phase adjoint rotation representation defined in subsection \ref{Sec:Phase transformed rotation matrix} can be shown to be equivalent to other representations: the KM parametrization, the one phase-one rotation representation defined in Section \ref{Sec:Commutant of an eigenvalue matrix}, and so on.
\section{Conclusion}
It has been shown that degeneracies are quantified as the number of pairs of equal eigenvalues and identified with redundant parameters in a unitary matrix. Redundant parameters due to degenerate eigenvalues in an $n$-dimensional density matrix can be conveniently eliminated after a commutant is factored out of the density matrix. It has been found that the one phase-one rotation representation of Section \ref{Sec:Commutant of an eigenvalue matrix} is a convenient representation for factoring a unitary matrix into commutant units, and hence a suitable way to factor out redundancies in a unitary matrix. In practice, many different angular representations are possible just by changing phase configurations, and it is often useful to determine whether a given representation is general. A simple manipulation for transforming one representation into another has been demonstrated.
\begin{acknowledgments}
I would like to express my appreciation to Prof. Erwin Bruning, who guided me with a lot of essential comments and advice in this research.
\end{acknowledgments}
\section{Introduction}
Let $(\Omega,\mathcal{F},P)$ be a probability space and $(w(t),\mathcal{F
_{t}^{w})=((w_{1}(t),\ldots,w_{q}(t))^{\top},\mathcal{F}_{t}^{w})$ be a
$q$-dimensional standard Wiener process, where $\mathcal{F}_{t}^{w},\ 0\leq
t\leq T,$ is an increasing family of $\sigma$-subalgebras of $\mathcal{F}$
induced by $w(t).$ We consider the system of stochastic Navier-Stokes
equations (SNSE) with additive noise for velocity $v$ and pressure $p$ in a
viscous incompressible flow
\begin{gather}
dv(t)=\left[ \frac{\sigma^{2}}{2}\Delta v-(v,\nabla)v-\nabla p+f(t,x)\right]
dt+\sum_{r=1}^{q}\gamma_{r}(t,x)dw_{r}(t),\label{NS1}\\
\ \ 0\leq t\leq T,\ x\in \mathbf{R}^{n}, \nonumber \\
\operatorname{div}\ v=0, \label{NS2}
\end{gather}
with spatial periodic condition
\begin{align}
v(t,x+Le_{i}) & =v(t,x),\ p(t,x+Le_{i})=p(t,x),\label{NS4}\\
0 & \leq t\leq T,\ \ i=1,\ldots,n,\nonumber
\end{align}
and the initial condition
\begin{equation}
v(0,x)=\varphi(x). \label{NS3}
\end{equation}
In (\ref{NS1})-(\ref{NS2}) we have $v\in\mathbf{R}^{n}$,$\ p$ is a
scalar,\ $f$ $\in\mathbf{R}^{n},$\ $\gamma_{r}\in\mathbf{R}^{n}$; $\{e_{i}\}$
is the canonical basis in $\mathbf{R}^{n}$ and $L>0$ is the period (for
simplicity in writing, the periods in all the directions are taken the same).
The functions $f=f(t,x)$ and $\gamma_{r}(t,x)$ are supposed to be spatial
periodic as well. Further, we require that $\gamma_{r}(t,x)$ are divergence
free:
\begin{equation}
\operatorname{div}\gamma_{r}(t,x)=0,\ r=1,\ldots,q. \label{NS03}
\end{equation}
SNSE can be useful for explaining the turbulence phenomenon (see
\cite{turb1,Flandoli,RozNS04} and references therein). They have complicated
dynamics and some interesting properties (e.g., ergodicity of solutions
\cite{HaMa06,HasNS,DaPr,MaPa06}). At the same time, rather little has been
done in numerics for SNSE. Let us cite \cite{HRoz07}, where algorithms based
on Wiener Chaos expansion are considered, and quite recent works
\cite{BCP10,CP,D}, where splitting schemes with finite element or Galerkin
approximations are applied. Here we suggest exploiting some probabilistic
representations of solutions to SNSE for constructing numerical methods of the
layer type. The proposed methods promise to be effective and reliable
numerical methods for studying SNSE. Layer methods for deterministic
semilinear and quasilinear partial differential equations of parabolic type
were proposed in \cite{M1,quasic} (see also \cite{MT1,french}), and for
deterministic NSEs they were first considered in \cite{BM} and further
developed in \cite{NS5,NSB}. Layer methods for linear and semilinear
stochastic partial differential equations (SPDE) were constructed and analyzed
in \cite{spde}.
The rest of the paper is organized as follows. In Section~\ref{Secpre} we
introduce additional notation and write down probabilistic representations for
linearized SNSE (i.e., stochastic Oseen-Stokes equations) and for the SNSE
(\ref{NS1})-(\ref{NS3}) which we use in Section~\ref{secLayer} for
constructing layer methods for the SNSE. Three layer methods are given in
Section~\ref{secLayer} together with discussion of their implementation.
Numerical error analysis is done in Section~\ref{secER}. Results of numerical
experiments on two test models are presented in Section~\ref{secnum}.
\section{Preliminaries\label{Secpre}}
In this section we recall the required function spaces
\cite{CM,RT,T,RozNS04,RozNS05} and write probabilistic representations of
solutions to linearized SNSE and to SNSE resting on results from
\cite{KryR,Kun,Pard,R}.
\subsection{Function spaces, the Helmholtz-Hodge-Leray decomposition, and
notation}
Let $\{e_{i}\}$ be the canonical basis in $\mathbf{R}^{n}.$ We shall consider
spatial periodic $n$-vector functions $u(x)=(u^{1}(x),\ldots,u^{n}(x))^{\top}$
in $\mathbf{R}^{n}:$ $u(x+Le_{i})=u(x),\ i=1,\ldots,n,$ where $L>0$ is the
period in the $i$th direction. Denote by $Q=(0,L)^{n}$ the cube of the period (of
course, one may consider different periods $L_{1},\ldots,L_{n}$ in the
different directions).\ We denote by $\mathbf{L}^{2}(Q)$ the Hilbert space of
functions on $Q$ with the scalar product and the norm
\[
(u,v)=\int_{Q}\sum_{i=1}^{n}u^{i}(x)v^{i}(x)dx,\ \Vert u\Vert=(u,u)^{1/2}.
\]
We keep the notation $|\cdot|$ for the absolute value of numbers and for the
length of $n$-dimensional vectors, for example
\[
|u(x)|=[(u^{1}(x))^{2}+\cdots+(u^{n}(x))^{2}]^{1/2}.
\]
We denote by $\mathbf{H}_{p}^{m}(Q),\ m=0,1,\ldots,$ the Sobolev space of
functions which are in $\mathbf{L}^{2}(Q),$ together with all their
derivatives of order less than or equal to $m,$ and which are periodic
functions with the period $Q.$ The space $\mathbf{H}_{p}^{m}(Q)$ is a Hilbert
space with the scalar product and the norm
\[
(u,v)_{m}=\int_{Q}\sum_{i=1}^{n}\sum_{[\alpha^{i}]\leq m}D^{\alpha^{i}}u^{i}(x)D^{\alpha^{i}}v^{i}(x)dx,\ \Vert u\Vert_{m}=[(u,u)_{m}]^{1/2},
\]
where $\alpha^{i}=(\alpha_{1}^{i},\ldots,\alpha_{n}^{i}),\ \alpha_{j}^{i}\in\{0,\ldots,m\},\ [\alpha^{i}]=\alpha_{1}^{i}+\cdots+\alpha_{n}^{i},$ and
\[
D^{\alpha^{i}}=D_{1}^{\alpha_{1}^{i}}\cdots D_{n}^{\alpha_{n}^{i}}=\frac{\partial^{[\alpha^{i}]}}{(\partial x^{1})^{\alpha_{1}^{i}}\cdots(\partial x^{n})^{\alpha_{n}^{i}}}\ ,\ i=1,\ldots,n.
\]
Note that $\mathbf{H}_{p}^{0}(Q)=\mathbf{L}^{2}(Q).$
Introduce the Hilbert subspaces of $\mathbf{H}_{p}^{m}(Q)$:
\begin{align*}
\mathbf{V}_{p}^{m} & =\{v:\ v\in\mathbf{H}_{p}^{m}(Q),\ \operatorname{div}v=0\},\ m>0,\\
\mathbf{V}_{p}^{0} & =\text{the closure of }\mathbf{V}_{p}^{m},\ m>0\text{
in }\mathbf{L}^{2}(Q).
\end{align*}
Clearly
\[
\mathbf{V}_{p}^{m_{1}}=\text{the closure of }\mathbf{V}_{p}^{m_{2}}\text{ in
}\mathbf{H}_{p}^{m_{1}}(Q)\text{ for any}\ m_{2}\geq m_{1}.
\]
Denote by $P$ the orthogonal projection in $\mathbf{H}_{p}^{m}(Q)$ onto
$\mathbf{V}_{p}^{m}$ (we omit $m$ in the notation $P$ here). The operator $P$
is often called the Leray projection. Due to the Helmholtz-Hodge-Leray
decomposition, any function $u\in\mathbf{H}_{p}^{m}(Q)$ can be represented as
\[
u=Pu+\nabla g,\ \operatorname{div}Pu=0,
\]
where $g=g(x)$ is a scalar $Q$-periodic function such that $\nabla
g\in\mathbf{H}_{p}^{m}(Q).$ It is natural to introduce the notation $P^{\bot
}u:=\nabla g$ and hence write
\[
u=Pu+P^{\bot}u
\]
with
\[
P^{\bot}u\in(\mathbf{V}_{p}^{m})^{\bot}=\{v:\ v\in\mathbf{H}_{p}^{m}(Q),\ v=\nabla g\}.
\]
Let
\begin{gather}
u(x)=\sum_{\mathbf{n}\in\mathbf{Z}^{n}}u_{\mathbf{n}}e^{i(2\pi/L)(\mathbf{n},x)},\ g(x)=\sum_{\mathbf{n}\in\mathbf{Z}^{n}}g_{\mathbf{n}}e^{i(2\pi/L)(\mathbf{n},x)},\ g_{\mathbf{0}}=0,\label{N00}\\
Pu(x)=\sum_{\mathbf{n}\in\mathbf{Z}^{n}}(Pu)_{\mathbf{n}}e^{i(2\pi/L)(\mathbf{n},x)},\ P^{\bot}u(x)=\nabla g(x)=\sum_{\mathbf{n}\in\mathbf{Z}^{n}}(P^{\bot}u)_{\mathbf{n}}e^{i(2\pi/L)(\mathbf{n},x)}\nonumber
\end{gather}
be the Fourier expansions of $u,$\ $g,$\ $Pu,$ and $P^{\bot}u=\nabla g.$ Here
$u_{\mathbf{n}},$\ $(Pu)_{\mathbf{n}},\ $and $(P^{\bot}u)_{\mathbf{n}}=(\nabla
g)_{\mathbf{n}}$ are $n$-dimensional vectors and $g_{\mathbf{n}}$ are scalars.
We note that $g_{\mathbf{0}}$ can be any real number but for definiteness we
set $g_{\mathbf{0}}=0.$ The coefficients $(Pu)_{\mathbf{n}},\ (P^{\bot
}u)_{\mathbf{n}}$, and $g_{\mathbf{n}}$ can be easily expressed in terms of
$u_{\mathbf{n}}:$
\begin{align}
(Pu)_{\mathbf{n}} & =u_{\mathbf{n}}-\frac{u_{\mathbf{n}}^{\top}\mathbf{n}}{|\mathbf{n}|^{2}}\mathbf{n},\ (P^{\bot}u)_{\mathbf{n}}=i\frac{2\pi}{L}g_{\mathbf{n}}\mathbf{n}=\frac{u_{\mathbf{n}}^{\top}\mathbf{n}}{|\mathbf{n}|^{2}}\mathbf{n},\label{N01}\\
g_{\mathbf{n}} & =-i\frac{L}{2\pi}\frac{u_{\mathbf{n}}^{\top}\mathbf{n}}{|\mathbf{n}|^{2}},\ \mathbf{n}\neq\mathbf{0},\ g_{\mathbf{0}}=0.\nonumber
\end{align}
We have
\[
\nabla e^{i(2\pi/L)(\mathbf{n},x)}=\mathbf{n}e^{i(2\pi/L)(\mathbf{n},x)}\cdot
i\frac{2\pi}{L},
\]
hence $u_{\mathbf{n}}e^{i(2\pi/L)(\mathbf{n},x)}\in\mathbf{V}_{p}^{m}$ if and
only if $(u_{\mathbf{n}},\mathbf{n)}=0.$ We obtain from here that the
orthogonal basis of the subspace $(\mathbf{V}_{p}^{m})^{\bot}$ consists of
$\mathbf{n}e^{i(2\pi/L)(\mathbf{n},x)},\ \mathbf{n}\in\mathbf{Z}^{n},\ \mathbf{n\neq0}$; and an orthogonal basis of $\mathbf{V}_{p}^{m}$
consists of $_{k}u_{\mathbf{n}}e^{i(2\pi/L)(\mathbf{n},x)},$\ $k=1,\ldots
,n-1,\ \mathbf{n}\in\mathbf{Z}^{n},\ $where under $\mathbf{n\neq0}$ the
vectors $_{k}u_{\mathbf{n}}$ are orthogonal to $\mathbf{n:}$\textbf{\
$\mathbf{(}_{k}u_{\mathbf{n}},\mathbf{n)}=0,\ k=1,\ldots,n-1,$ and they are
orthogonal among themselves: $\mathbf{(}_{k}u_{\mathbf{n}},\ _{m
u_{\mathbf{n}}\mathbf{)}=0,$\ $k,m=1,\ldots,n-1,$\ $m\neq k,$ and finally, for
$\mathbf{n=0,}$ the vectors $_{k}u_{\mathbf{0}},\ k=1,\ldots,n,$ are orthogonal.
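For later reference we note that the projection formulas (\ref{N01}) translate directly into a discrete Leray projection acting mode-by-mode on FFT coefficients. The following is an illustrative sketch only (Python, assuming NumPy; the two-dimensional grid and the random test field are arbitrary choices, not part of the methods developed below).
\begin{verbatim}
import numpy as np

def leray_project(u):
    """u: (2, N, N) periodic samples of a 2-D vector field."""
    N = u.shape[-1]
    uh = np.fft.fft2(u)                      # coefficients u_n
    k = np.fft.fftfreq(N, d=1.0 / N)         # integer wave numbers n
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                           # avoid 0/0; mean mode kept
    dot = uh[0] * kx + uh[1] * ky            # u_n^T n
    Pu = np.stack([uh[0] - dot * kx / k2,    # (Pu)_n per formula (N01)
                   uh[1] - dot * ky / k2])
    return np.real(np.fft.ifft2(Pu))

rng = np.random.default_rng(3)
v = leray_project(rng.standard_normal((2, 32, 32)))
vh = np.fft.fft2(v)
k = np.fft.fftfreq(32, d=1.0 / 32)
kx, ky = np.meshgrid(k, k, indexing='ij')
print(np.max(np.abs(kx * vh[0] + ky * vh[1])))  # divergence: round-off level
\end{verbatim}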
In what follows we suppose that the below assumptions hold.$\medskip$
\noindent\textbf{Assumptions 2.1.} \textit{We assume that the coefficients
}$f(t,x)$ \textit{and} $\gamma_{r}(s,x),$ $r=1,\ldots,q,$ \textit{are
sufficiently smooth} \textit{and} \textit{the problem }(\ref{NS1})-(\ref{NS3})\textit{ has a unique classical solution} $v(t,x),\ p(t,x),$
$(t,x)\in\lbrack0,T]\times\mathbf{R}^{n},$ \textit{which} \textit{has
continuous derivatives in the space variable }$x$\ \textit{up to some order,
and the solution and the derivatives have uniformly in }$(t,x)$
\textit{bounded moments of a sufficiently high order }$m,$ $2\leq m<m_{0},$
\textit{where} $m_{0}>2$ \textit{is a positive number or }$m_{0}=\infty
$\textit{. }$\medskip$
The solution $v(t,x),\ p(t,x),$ $(t,x)\in\lbrack0,T]\times\mathbf{R}^{n},$ to
(\ref{NS1})-(\ref{NS3}) is $\mathcal{F}_{t}^{w}$-adaptive, $v(t,\cdot
)\in\mathbf{V}_{p}^{m}$ and $\nabla p(t,\cdot)\in(\mathbf{V}_{p}^{m})^{\bot}$
for every $t\in\lbrack0,T]$ and $\omega\in\Omega.$
Assumptions of this kind are rather usual for works dedicated to numerics.
They rest on results concerning regularity of solutions (see, e.g., the
corresponding theory for deterministic NSE in \cite{RT,T}). Unfortunately, we
could not find explicit results on the classical solution for SNSE in the
literature. At the same time, the question about existence of a unique,
sufficiently regular (with respect to $x$) solution of the SNSE (\ref{NS1})-(\ref{NS3}) on a time interval $[0,T]$ is analogous to the one in the
deterministic case. Indeed, the following remark reduces this problem of
regularity for the SNSE to regularity of solutions to NSE with random
coefficients which is close to the theory of deterministic NSE treated in
\cite{RT,T}.
\begin{remark}
Let $\Gamma(t,x)=\sum_{r=1}^{q}\int_{0}^{t}\gamma_{r}(s,x)dw_{r}(s).$ Then
$V(t,x)=v(t,x)+\Gamma(t,x)$ together with $p(t,x)$ solves the following
`usual' NSE with random coefficients:
\begin{gather*}
\frac{\partial}{\partial t}V=\frac{\sigma^{2}}{2}\Delta V-(V-\Gamma
(t,x),\nabla)(V-\Gamma(t,x))-\nabla p+f(t,x)-\frac{\sigma^{2}}{2}\Delta
\Gamma(t,x),\\
0\leq t\leq T,\ x\in\mathbf{R}^{n},\\
\operatorname{div}\ V=0,
\end{gather*}
with spatial periodic condition
\begin{align*}
V(t,x+Le_{i}) & =V(t,x),\ p(t,x+Le_{i})=p(t,x),\ \\
0 & \leq t\leq T,\ \ i=1,\ldots,n,
\end{align*}
and the initial condition
\[
V(0,x)=\varphi(x).
\]
\end{remark}
\subsection{Probabilistic representations of solutions to linearized
SNSE\label{prepOseen}}
We start by considering a linearized version of the SNSE (\ref{NS1})-(\ref{NS3}), i.e., the stochastic Oseen-Stokes equations (see
\cite{MikStokes}):
\begin{gather}
dv_{a}(t)=\left[ \frac{\sigma^{2}}{2}\Delta v_{a}-(a,\nabla)v_{a}-\nabla
p_{a}+f(t,x)\right] dt+\sum_{r=1}^{q}\gamma_{r}(t,x)dw_{r}(t), \label{os1}\\
\ \ 0\leq t\leq T,\ x\in\mathbf{R}^{n}, \nonumber \\
\operatorname{div}\ v_{a}=0, \label{os2}
\end{gather}
with spatial periodic condition
\begin{align}
v_{a}(t,x+Le_{i}) & =v_{a}(t,x),\ p_{a}(t,x+Le_{i})=p_{a}(t,x),\ \label{0s3}\\
0 & \leq t\leq T,\ i=1,\ldots,n,\nonumber
\end{align}
and the initial condition
\begin{equation}
v_{a}(0,x)=\varphi(x). \label{os4}
\end{equation}
where $a=a(t,x)$ is an $n$-dimensional vector $a=(a^{1},\ldots,a^{n})^{\intercal}$ with $a^{i}$ being $Q$-periodic deterministic functions which
have continuous derivatives with respect to $x$ up to some order; and the rest
of the notation is the same as in (\ref{NS1})-(\ref{NS3}).
We re-write the problem (\ref{os1})-(\ref{os4}) with positive direction of
time into the problem with negative direction of time which is more convenient
for making use of probabilistic representations. To this end, introduce the
new time variable $s=T-t$ and the functions $u_{a}(s,x):=v_{a}(T-s,x),$
$\tilde{a}(s,x):=a(T-s,x),$ $\tilde{f}(s,x):=f(T-s,x),$ $\tilde{\gamma}_{r}(s,x):=\gamma_{r}(T-s,x),$ and $\tilde{p}_{a}(s,x):=p_{a}(T-s,x).$
Further, we recall the definition of a backward Ito integral \cite{R}.
Introduce the \textquotedblleft backward\textquotedblright\ Wiener processes
\begin{equation}
\tilde{w}_{r}(t):=w_{r}(T)-w_{r}(T-t),\ \ r=1,\ldots,q,\ \ 0\leq t\leq T,
\label{bs4}
\end{equation}
and a decreasing family of $\sigma$-subalgebras $\mathcal{F}_{t,T}^{w},$
$0\leq t\leq T,$ induced by the increments $w_{r}(T)-w_{r}(t^{\prime}),$
$r=1,\ldots,q,$ $t^{\prime}\geq t$. The increasing family of $\sigma$-subalgebras
$\mathcal{F}_{t}^{\tilde{w}}$ induced by $\tilde{w}_{r}(s^{\prime}),$
$s^{\prime}\leq t,$ coincides with $\mathcal{F}_{T-t,T}^{w},$ while
$\mathcal{F}_{t,T}^{\tilde{w}}$ is induced by the increments $\tilde{w}_{r}(T)-\tilde{w}_{r}(t^{\prime}),$ $r=1,\ldots,q,$ $t^{\prime}\geq t$, and
coincides with $\mathcal{F}_{T-t}^{w}.$ The backward Ito integral with respect
to $\tilde{w}_{r}(s)$ is defined as the Ito integral with respect to
$w_{r}(s)$:
\begin{equation}
\int_{t}^{t^{\prime}}\psi(t^{\prime\prime})\ast d\tilde{w}_{r}(t^{\prime
\prime}):=\int_{T-t^{\prime}}^{T-t}\psi(T-t^{\prime\prime})dw_{r}(t^{\prime\prime}),\ \ 0\leq t\leq t^{\prime}\leq T, \label{bs5}
\end{equation}
where $\psi(T-t),$ $t\leq T,$ is an $\mathcal{F}_{t}^{w}$-adapted
square-integrable function and $\psi(t)$ is $\mathcal{F}_{t}^{\tilde{w}}$-adapted. Note that $w_{r}(t)=\tilde{w}_{r}(T)-\tilde{w}_{r}(T-t),$
$r=1,\ldots,q,$\ $0\leq t\leq T.$
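The definition (\ref{bs5}) is conveniently illustrated by a simple discretization (Python, assuming NumPy; the integrand below is deterministic and smooth purely for simplicity): the backward Riemann--Ito sum against $\tilde{w}$ reproduces the forward Ito sum against $w$ on the reflected time interval, up to a discretization error that vanishes with the step size.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
T, N = 1.0, 100000
dt = T / N
t = np.linspace(0.0, T, N + 1)
w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N) * np.sqrt(dt))])
w_tilde = w[-1] - w[::-1]            # tilde-w(t) = w(T) - w(T - t)

psi = np.sin                         # deterministic integrand, for simplicity
i1, i2 = N // 4, 3 * N // 4          # integrate over [T/4, 3T/4]
lhs = np.sum(psi(t[i1:i2]) * np.diff(w_tilde[i1:i2 + 1]))
j1, j2 = N - i2, N - i1              # reflected interval [T/4, 3T/4]
rhs = np.sum(psi(T - t[j1:j2]) * np.diff(w[j1:j2 + 1]))
print(abs(lhs - rhs))                # small; vanishes as dt -> 0
\end{verbatim}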
The backward stochastic Oseen-Stokes equations can be written as
\begin{gather}
-du_{a}(s)=\left[ \frac{\sigma^{2}}{2}\Delta u_{a}-(\tilde{a},\nabla)u_{a}-\nabla\tilde{p}_{a}+\tilde{f}(s,x)\right] ds+\sum_{r=1}^{q}\tilde{\gamma}_{r}(s,x)\ast d\tilde{w}_{r}(s),\ \ \label{os11}\\
0\leq s\leq T,\ x\in\mathbf{R}^{n},\nonumber\\
\operatorname{div}\ u_{a}=0, \label{os12}
\end{gather}
with spatial periodic condition
\begin{align}
u_{a}(s,x+Le_{i}) & =u_{a}(s,x),\ \tilde{p}_{a}(s,x+Le_{i})=\tilde{p}_{a}(s,x),\ \label{os13}\\
0 & \leq s\leq T,\ i=1,\ldots,n,\nonumber
\end{align}
and the terminal condition
\begin{equation}
u_{a}(T,x)=\varphi(x). \label{os14}
\end{equation}
We note that (\ref{bs5}) implies
\[
\int_{s}^{T}\tilde{\gamma}_{r}(s^{\prime},x)\ast d\tilde{w}_{r}(s^{\prime
})=\int_{0}^{T-s}\gamma_{r}(s^{\prime},x)dw_{r}(s^{\prime}).
\]
The processes $u_{a}(s,x),$ $\tilde{p}_{a}(s,x)$ are $\mathcal{F}_{s,T}^{\tilde{w}}$-adapted (and $\mathcal{F}_{T-s}^{w}$-adapted), they depend
on $\tilde{w}_{r}(T)-\tilde{w}_{r}(s^{\prime})=w_{r}(T-s^{\prime}),$ $s\leq
s^{\prime}\leq T.$
Let $u_{a}(s,x),$\ $\tilde{p}_{a}(s,x)$ be a solution of the problem
(\ref{os11})-(\ref{os14}). For the function $u_{a}(s,x)$, one can use the
following probabilistic representation of solutions to the Cauchy problem for
linear SPDE of parabolic type (the conditional Feynman-Kac formula or the
averaging over characteristics formula, see, e.g., \cite{R} and \cite{spde})
\begin{equation}
u_{a}(s,x)=E^{\tilde{w}}\left[ \varphi(X_{s,x}(T))Y_{s,x,1}(T)+Z_{s,x,1,0}(T)\right] ,\ 0\leq s\leq T, \label{FBD5}
\end{equation}
where $X_{s,x}(s^{\prime}),\ Y_{s,x,y}(s^{\prime}),\ Z_{s,x,y,z}(s^{\prime}),\ s^{\prime}\geq s,$ solve the system of Ito stochastic differential
equations
\begin{gather}
dX=(-\tilde{a}(s^{\prime},X)-\sigma\mu(s^{\prime},X))ds^{\prime}+\sigma
dW(s^{\prime}),\ X(s)=x,\label{BDF0}\\
dY=\mu^{\intercal}(s^{\prime},X)YdW(s^{\prime}),\ Y(s)=y,\label{BDF1}\\
dZ=(-\nabla\tilde{p}_{a}(s^{\prime},X)+\tilde{f}(s^{\prime},X))Yds^{\prime
}+F(s^{\prime},X)YdW(s^{\prime})\label{BDF2}\\
+\sum_{r=1}^{q}\tilde{\gamma}_{r}(s^{\prime},X)Yd\tilde{w}_{r}(s^{\prime
}),\ Z(s)=z.\nonumber
\end{gather}
In (\ref{FBD5})-(\ref{BDF2}), $W(s)$ is a standard $n$-dimensional Wiener
process independent of $\tilde{w}_{r}(s)$ on the probability space
$(\Omega,\mathcal{F},P)$; $Y$ is a scalar, and $Z$ is an $n$-dimensional
column-vector;$\ \mu(s,x)$ is an arbitrary $n$-dimensional spatial periodic
vector function and $F(s,x)$ is an arbitrary $n\times n$-dimensional spatial
periodic matrix function, which are sufficiently smooth in $s,x$; the
expectation $E^{\tilde{w}}$ in (\ref{FBD5}) is taken over the realizations of
$W(s^{\prime}),$ $s\leq s^{\prime}\leq T,$ for a fixed $\tilde{w}_{r}(s^{\prime}),$
$r=1,\ldots,q,$ $s\leq s^{\prime}\leq T,$ in other words, $E^{\tilde{w}}\left( \cdot\right) $ means the conditional expectation:
\[
E\left( \cdot|\tilde{w}_{r}(s^{\prime})-\tilde{w}_{r}(s),\text{ }r=1,\ldots,q,\text{ }s\leq s^{\prime}\leq T\right) .
\]
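To fix ideas, the representation (\ref{FBD5})-(\ref{BDF2}) admits a straightforward Monte Carlo discretization. The sketch below (Python, assuming NumPy) uses the simplest admissible choices $\mu=0$ and $F=0$ (so that $Y\equiv1$) and a weak Euler step for (\ref{BDF0}) and (\ref{BDF2}); all concrete fields (\texttt{a\_tilde}, \texttt{grad\_p}, \texttt{f\_tilde}, \texttt{gammas}, \texttt{phi}) are placeholders supplied by the user -- in particular $\nabla\tilde{p}_{a}$ is part of the unknown solution, and in a layer method it would be supplied from the preceding time layer.
\begin{verbatim}
import numpy as np

def u_a_estimate(s, x, T, sigma, a_tilde, grad_p, f_tilde, gammas, phi,
                 dtw_increments, n_steps=50, n_paths=2000, seed=0):
    """Estimate u_a(s, x) for one fixed realization of tilde-w, whose
    increments over the time grid are given in dtw_increments[r][k].
    Field arguments are vectorized: a_tilde(t, X) maps (n_paths, n)
    arrays to (n_paths, n) arrays, and phi(X) likewise."""
    rng = np.random.default_rng(seed)
    h = (T - s) / n_steps
    X = np.tile(np.asarray(x, dtype=float), (n_paths, 1))  # X(s) = x
    Z = np.zeros_like(X)                # Z(s) = 0; Y = 1 since mu = F = 0
    t = s
    for k in range(n_steps):
        Z += (f_tilde(t, X) - grad_p(t, X)) * h
        for r, gamma in enumerate(gammas):    # common noise for all paths
            Z += gamma(t, X) * dtw_increments[r][k]
        dW = rng.standard_normal(X.shape) * np.sqrt(h)
        X += -a_tilde(t, X) * h + sigma * dW  # Euler step of (BDF0)
        t += h
    return (phi(X) + Z).mean(axis=0)          # Monte Carlo mean of (FBD5)
\end{verbatim}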
The probabilistic representation like (\ref{FBD5})-(\ref{BDF2}) for the Cauchy
problem (\ref{os11}), (\ref{os14}) is obtained (see, e.g., \cite{R}) for
linear SPDEs with deterministic coefficients. However, here $\tilde{p}_{a}(s,x)$ is a part of the solution of problem (\ref{os11})-(\ref{os14}) and it
is random (more precisely, it is $\mathcal{F}_{s,T}^{\tilde{w}}$-adapted). In
this case the representation (\ref{FBD5})-(\ref{BDF2}) can be rigorously
justified in the following way. The solution $u_{a}$ of (\ref{os11}),
(\ref{os14}) can be represented in the form of the su
\[
u_{a}=u_{a}^{(0)}+u_{a}^{(1)},
\]
where $u_{a}^{(0)}$ satisfies the Cauchy problem for the backward
deterministic linear parabolic PDE with random parameters
\begin{align}
-\frac{\partial u_{a}^{(0)}}{\partial s} & =\frac{\sigma^{2}}{2}\Delta
u_{a}^{(0)}-(\tilde{a},\nabla)u_{a}^{(0)}-\nabla\tilde{p}_{a},\label{Fr1}\\
u_{a}^{(0)}(T,x) & =0,\nonumber
\end{align}
and $u_{a}^{(1)}$ satisfies the Cauchy problem for the backward stochastic
linear parabolic PDE with deterministic parameters
\begin{align}
-du_{a}^{(1)}(s) & =\left[ \frac{\sigma^{2}}{2}\Delta u_{a}^{(1)}-(\tilde{a},\nabla)u_{a}^{(1)}+\tilde{f}(s,x)\right] ds+\sum_{r=1}^{q}\tilde{\gamma}_{r}(s,x)\ast d\tilde{w}_{r}(s),\ \label{Fr2}\\
u_{a}^{(1)}(T,x) & =\varphi(x).\ \nonumber
\end{align}
Clearly
\[
u_{a}^{(0)}(s,x)=E^{\tilde{w}}\left[ Z_{s,x,1,0}^{(0)}(T)\right]
=-E^{\tilde{w}}\int_{s}^{T}\nabla\tilde{p}_{a}(s^{\prime},X_{s,x}(s^{\prime
}))\ Y_{s,x,1}(s^{\prime})ds^{\prime}.
\]
The Feynman-Kac formula for $u_{a}^{(1)}$ coincides with (\ref{FBD5})-(\ref{BDF2}) under $\nabla\tilde{p}_{a}(s,x)$ $=$ $0$.
Let $\mathcal{F}_{s,t}^{W}$ be a $\sigma$-algebra\ induced by $W_{r}(s^{\prime})-W_{r}(s),$ $r=1,\ldots,n,\ s\leq s^{\prime}\leq t.$ We note that
$\nabla\tilde{p}_{a}(s^{\prime},X_{s,x}(s^{\prime}))$ in (\ref{BDF2}) is
$\mathcal{F}_{s,s^{\prime}}^{W}\vee\mathcal{F}_{s^{\prime},T}^{\tilde{w}}$-adapted, where the family of $\sigma$-algebras $\mathcal{F}_{s,s^{\prime}}^{W}\vee\mathcal{F}_{s^{\prime},T}^{\tilde{w}}$ is neither increasing nor
decreasing in $s^{\prime}$. Consequently, $Z_{s,x,y,z}(s^{\prime})$ is
measurable with respect to $\mathcal{F}_{s,s^{\prime}}^{W}\vee\mathcal{F}_{s^{\prime},T}^{\tilde{w}}$ for every $s^{\prime}\in\lbrack s,T].$ Since
$\tilde{\gamma}_{r}(s^{\prime},X_{s,x}(s^{\prime}))Y(s^{\prime})$ are
independent of $\tilde{w}_{r},$ the Ito integral in (\ref{BDF2}) is well defined.
\begin{remark}
\label{Rem_ant}We remark that within the non-anticipating stochastic calculus
the probabilistic representation $(\ref{FBD5})$-$(\ref{BDF2})$ for the linear
problem $(\ref{os11})$-$(\ref{os14})$ cannot be carried over to the backward
SNSE problem by changing the coefficient $\tilde{a}(s,x)$ to $u(s,x)$ since
then the integrand $\tilde{\gamma}_{r}(s^{\prime},X_{s,x}(s^{\prime
}))Y(s^{\prime})$ would be $\mathcal{F}_{s,s^{\prime}}^{W}\vee\mathcal{F}_{s^{\prime},T}^{\tilde{w}}$-measurable. Nevertheless, the representation
$(\ref{FBD5})$-$(\ref{BDF2})$ allows us to derive layer methods for the
stochastic Oseen-Stokes equations $(\ref{os11})$-$(\ref{os14})$, and then,
using them as a guidance, one can obtain layer methods for the SNSE
$(\ref{NS1})$-$(\ref{NS3})$ as well $($see Sections~3.1 and~3.2$)$.
\end{remark}
For deriving layer methods, we also use some direct probabilistic
representations for solutions of the SNSE. In Sections~\ref{prepDirect}
and~\ref{prepDoub} we give two such representations. The first one follows
from a specific probabilistic representation for a linear SPDE which differs
from $(\ref{FBD5})$-$(\ref{BDF2})$ and the second one uses backward doubly
stochastic differential equations \cite{PP}.
\subsection{A direct probabilistic representation for solutions of
SNSE\label{prepDirect}}
As in the case of the stochastic Oseen-Stokes equations, we re-write the SNSE
problem (\ref{NS1})-(\ref{NS3}) with positive direction of time into the
problem with negative direction of time. Again introduce the new time variable
$s=T-t$ and the functions $u(s,x):=v(T-s,x)$, $\tilde{f}(s,x):=f(T-s,x),$
$\tilde{\gamma}_{r}(s,x):=\gamma_{r}(T-s,x),$ and $\tilde{p}(s,x):=p(T-s,x).$
The corresponding backward SNSE takes the form:
\begin{gather}
-du=(\frac{\sigma^{2}}{2}\Delta u-(u,\nabla)u-\nabla\tilde{p}+\tilde{f})ds+\sum_{r=1}^{q}\tilde{\gamma}_{r}(s,x)\ast d\tilde{w}_{r}(s),\ u(T,x)=\varphi(x),\label{D1}\\
\operatorname{div}u=0\ , \label{D01}
\end{gather}
with spatial periodic conditions for $u$ and $\tilde{p}$.
Introduce $F(s,x,u,\nabla u):=-(u,\nabla)u-\nabla\tilde{p}+\tilde{f}$ and
write (\ref{D1}) as
\begin{equation}
-du=\left( \frac{\sigma^{2}}{2}\Delta u+F(s,x,u,\nabla u)\right)
ds+\sum_{r=1}^{q}\tilde{\gamma}_{r}(s,x)\ast d\tilde{w}_{r}(s),\ u(T,x)=\varphi(x). \label{D03}
\end{equation}
Let us assume that the solution $u(s,x)=u(s,x,\omega)$ to (\ref{D1})-(\ref{D01}) is known. We substitute it in $F(s,x,u,\nabla u)$, which then
becomes a function $\tilde{F}(s,x,\omega)$ depending on $\omega$ as a
parameter. Hence (\ref{D03}) can be considered as a linear parabolic SPDE. For
solutions of this linear SPDE, we can write the following probabilistic
representation analogously to (\ref{FBD5})-(\ref{BDF2}) (we take $Y\equiv1$):
\begin{gather}
u(s,x)=E^{\tilde{w}}\varphi(X_{s,x}(T))\label{NS11n}\\
-E^{\tilde{w}}\left[ \int_{s}^{T}\{\nabla\tilde{p}(s^{\prime},X_{s,x}(s^{\prime}))-\tilde{f}(s^{\prime},X_{s,x}(s^{\prime})) \right. \nonumber \\
\Bigg. +(u(s^{\prime},X_{s,x}(s^{\prime})),\nabla)u(s^{\prime},X_{s,x}(s^{\prime}))\}ds^{\prime}\Bigg] \nonumber\\
+\sum_{r=1}^{q}E^{\tilde{w}}\left[ \int_{s}^{T}\tilde{\gamma}_{r}(s^{\prime},X_{s,x}(s^{\prime}))d\tilde{w}_{r}(s^{\prime})\right] ,\nonumber
\end{gather}
where $X_{s,x}(s^{\prime}),$ $s^{\prime}\geq s,$ solves the system of
stochastic differential equations
\begin{equation}
dX=\sigma dW(s^{\prime}),\ X(s)=x, \label{NS12n}
\end{equation}
$W$ is a standard $n$-dimensional Wiener process independent of $\tilde{w}_{r}$ on the probability space $(\Omega,\mathcal{F},P).$
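For the simplest characteristics (\ref{NS12n}), the conditional expectation $E^{\tilde{w}}$ in (\ref{NS11n}) is an average over the Wiener process $W$ only and can therefore be sampled directly. The following minimal NumPy sketch (for illustration only; the callable \texttt{phi} playing the role of $\varphi$ is an assumption) estimates the first term $E^{\tilde{w}}\varphi(X_{s,x}(T))$:
\begin{verbatim}
import numpy as np

def cond_expect_phi(phi, x, s, T, sigma, n_samples, rng):
    # Monte Carlo estimate of E^w[phi(X_{s,x}(T))] with
    # X_{s,x}(T) = x + sigma (W(T) - W(s)), for a frozen
    # realization of the driving noises w-tilde.
    increments = rng.standard_normal((n_samples, len(x))) * np.sqrt(T - s)
    return np.mean([phi(np.asarray(x) + sigma * w) for w in increments],
                   axis=0)
\end{verbatim}
The remaining terms in (\ref{NS11n}) can be sampled along the same trajectories.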
\subsection{A probabilistic representation for solution of SNSE using backward
doubly stochastic differential equations\label{prepDoub}}
In connection with the backward SNSE (\ref{D1})-(\ref{D01}), we introduce the
system of backward doubly stochastic differential equations \cite{PP}:
\begin{align}
dX & =\sigma dW(s^{\prime}),\ \ X(s)=x,\label{D2} \displaybreak[0]\\
dU & =(\nabla\tilde{p}(s^{\prime},X)-\tilde{f}(s^{\prime},X)+\frac{1}{\sigma}\mathbb{Z}U)ds^{\prime}+\mathbb{Z}dW(s^{\prime})-\sum_{r=1}^{q}\tilde{\gamma}_{r}(s^{\prime},X)\ast d\tilde{w}_{r}(s^{\prime}),\label{D3} \displaybreak[0]\\
&U(T) =\varphi(X_{s,x}(T)). \label{D4}
\end{align}
In (\ref{D2})-(\ref{D4}), $X,$\ $U,$\ $W$ are column vectors of dimension $n$
and $\mathbb{Z}$ is a matrix of dimension $n\times n$; $W(s)$ and $\tilde{w}(s),\ 0\leq s\leq T,$ are mutually independent standard Wiener processes on
the probability space $(\Omega,\mathcal{F},P)$. We recall that the triple
$\{X_{s,x}(s^{\prime}),U_{s,x}(s^{\prime}),\mathbb{Z}_{s,x}(s^{\prime}),s\leq
s^{\prime}\leq T\}$ is a solution of (\ref{D2})-(\ref{D4}) if $X_{s,x}(s^{\prime})$ satisfies (\ref{D2}), $(U_{s,x}(s^{\prime}),\mathbb{Z}_{s,x}(s^{\prime}))$ for each $s^{\prime}$ is $\mathcal{F}_{s,s^{\prime}}^{W}\vee\mathcal{F}_{s^{\prime},T}^{\tilde{w}}$-measurable, and
\begin{gather}
U_{s,x}(s^{\prime})=\varphi(X_{s,x}(T))-\int_{s^{\prime}}^{T}(\nabla\tilde
{p}(s^{\prime\prime},X_{s,x}(s^{\prime\prime}))-\tilde{f}(s^{\prime\prime
},X_{s,x}(s^{\prime\prime})) \nonumber \\
+\frac{1}{\sigma}\mathbb{Z}_{s,x}(s^{\prime\prime
})U_{s,x}(s^{\prime\prime}))ds^{\prime\prime}\label{D45}\\
-\int_{s^{\prime}}^{T}\mathbb{Z}_{s,x}(s^{\prime\prime})dW(s^{\prime\prime
})+\int_{s^{\prime}}^{T}\sum_{r=1}^{q}\tilde{\gamma}_{r}(s^{\prime\prime
},X_{s,x}(s^{\prime\prime}))\ast d\tilde{w}_{r}(s^{\prime\prime}),\ s\leq
s^{\prime}\leq T.\nonumber
\end{gather}
Let $u(s,x)$ be a solution of the problem (\ref{D1}), i.e.,
\begin{align}
u(s,x) & =\varphi(x)+\int_{s}^{T}(\frac{\sigma^{2}}{2}\Delta u(s^{\prime
},x)-(u,\nabla)u(s^{\prime},x)-\nabla\tilde{p}(s^{\prime},x)+\tilde
{f}(s^{\prime},x))ds^{\prime}\label{D5}\\
& +\sum_{r=1}^{q}\int_{s}^{T}\tilde{\gamma}_{r}(s^{\prime},x)\ast d\tilde
{w}_{r}(s^{\prime}).\nonumber
\end{align}
It is known (see \cite{PP}) that the triple
\begin{align}
X(s^{\prime}) & =X_{s,x}(s^{\prime}),\ U(s^{\prime})=U_{s,x}(s^{\prime})=u(s^{\prime},X_{s,x}(s^{\prime})),\label{D55}\\
\mathbb{Z}(s^{\prime}) & =\mathbb{Z}_{s,x}(s^{\prime})=\{\mathbb{Z}^{k,j}(s^{\prime})\}=\sigma\cdot\left\{ \frac{\partial u^{k}}{\partial x^{j}}(s^{\prime},X_{s,x}(s^{\prime}))\right\} ,\ k,j=1,\ldots,n,\nonumber
\end{align}
is a solution of (\ref{D2})-(\ref{D4}).
Conversely, if $X_{s,x}(s^{\prime}),$\ $U_{s,x}(s^{\prime}),$\ $\mathbb{Z}_{s,x}(s^{\prime})$ is a solution of the system of backward doubly stochastic
differential equations (\ref{D2})-(\ref{D4}), then it can be verified that
\begin{equation}
u(s,x)=U_{s,x}(s) \label{D9}
\end{equation}
is the solution of (\ref{D1}) (see \cite{PP}). The condition (\ref{D01}) is
satisfied by choosing an appropriate pressure $\tilde{p}$.
We note that $u(s,x)$ is $\mathcal{F}_{s,T}^{\tilde{w}}$-measurable and then
using (\ref{D45}) we get
\begin{align}
u(s,x) & =U_{s,x}(s)=E[U_{s,x}(s)|\mathcal{F}_{s,T}^{\tilde{w}}]=E^{\tilde{w}}U_{s,x}(s)\label{D10} \displaybreak[0]\\
& =E^{\tilde{w}}\varphi(X_{s,x}(T))\nonumber \displaybreak[0]\\
& -E^{\tilde{w}}\int_{s}^{T}(\nabla\tilde{p}(s^{\prime},X_{s,x}(s^{\prime}))-\tilde{f}(s^{\prime},X_{s,x}(s^{\prime}))+\frac{1}{\sigma}\mathbb{Z}_{s,x}(s^{\prime})U_{s,x}(s^{\prime}))ds^{\prime}\nonumber \displaybreak[0]\\
& +\sum_{r=1}^{q}E^{\tilde{w}}\int_{s}^{T}\tilde{\gamma}_{r}(s^{\prime},X_{s,x}(s^{\prime}))\ast d\tilde{w}_{r}(s^{\prime}).\nonumber
\end{align}
Due to the smoothness of $\tilde{\gamma}_{r}(s,x)$ in $s$ and the independence of $X$
and $\tilde{w},$ the equality
\[
\int_{s}^{T}\tilde{\gamma}_{r}(s^{\prime},X_{s,x}(s^{\prime}))\ast d\tilde{w}_{r}(s^{\prime})=\int_{s}^{T}\tilde{\gamma}_{r}(s^{\prime},X_{s,x}(s^{\prime}))d\tilde{w}_{r}(s^{\prime})
\]
holds. Hence the right-hand side of (\ref{D10}) coincides with the right-hand
side of the probabilistic representation (\ref{NS11n}).
\section{Layer methods\label{secLayer}}
In this section we construct three layer methods based on the probabilistic
representations from Sections~\ref{prepOseen} and~\ref{prepDirect}. In the
case of deterministic NSE (i.e., when $\gamma_{r}=0$ in the SNSE
(\ref{NS1})-(\ref{NS3})) these methods coincide with the ones presented in
\cite{NS5}.
On the basis of the probabilistic representation (\ref{FBD5})-(\ref{BDF2}) we,
first, construct layer methods for the stochastic Oseen-Stokes equations and,
second, using the obtained methods as a guidance, we construct the
corresponding methods for the SNSE (this way of deriving numerical methods for
nonlinear SPDEs was proposed in \cite{spde}). This is done in
Sections~\ref{secSL} and~\ref{secSimL}. We underline that the derivation of these
methods does not rely on direct probabilistic representations for the SNSE
themselves, which would require the anticipating stochastic calculus (see
Remark~\ref{Rem_ant}) that is not developed satisfactorily from the numerical
point of view. That is why we prefer to use the mimicry approach here.
In Section~\ref{secDiL}\ we derive a layer method based on the direct
probabilistic representation for the SNSE from Section~\ref{prepDirect}.
In Sections~\ref{secSL}, \ref{secSimL} and~\ref{secDiL} we deal with
approximation of velocity $v(t,x)$ (i.e., a part of the solution $v(t,x),$
$p(t,x)$ to the SNSE) only. Since we consider here the spatial-periodic
problem (\ref{NS1})-(\ref{NS3}), we can separate approximation of velocity
$v(t,x)$ and pressure $p(t,x)$ in a constructive way. Approximation of
pressure is considered in Section~\ref{secPres}.
Let us introduce a uniform partition of the time interval $[0,T]:$
$0=t_{0}<t_{1}<\cdots<t_{N}=T$ and the time step $h=T/N$ (we restrict
ourselves to the uniform partition for simplicity only).
\subsection{A layer method based on the standard probabilistic
representation\label{secSL}}
Each choice of $\mu(s,x)$ and $F(s,x)$ in (\ref{FBD5})-(\ref{BDF2}) gives us a
particular probabilistic representation for the solution of the stochastic
Oseen-Stokes equations (\ref{os11})-(\ref{os14}) which can be used for
deriving the corresponding layer method. In this and the next section we
derive layer methods based on two of such probabilistic representations which
can be, in a sense, viewed as limiting cases of (\ref{FBD5})-(\ref{BDF2}). If
we put $\mu(s,x)=0$ and $F(s,x)=0$ in (\ref{FBD5})-(\ref{BDF2}), we obtain the
standard probabilistic representation for the solution to the backward linear
SPDE (\ref{os11})-(\ref{os14}) \cite{R}. This case is considered in this
section. The case of $F(s,x)=0$ and $\mu(s,x)$ turning the equation
(\ref{BDF0}) for $X(s)$ into pure diffusion is treated in the next section.
Analogously to (\ref{FBD5})-(\ref{BDF2}) with $\mu(s,x)=0$ and $F(s,x)=0,$ we
get the following local probabilistic representation of the solution to
(\ref{os11})-(\ref{os14}):
\begin{align}
u_{a}(t_{k},x) & =E^{\tilde{w}}\left[ u_{a}(t_{k+1},X_{t_{k},x}(t_{k+1}))-\int_{t_{k}}^{t_{k+1}}\nabla\tilde{p}_{a}(s,X_{t_{k},x}(s))ds\right. \label{spr1}\\
& \left. +\int_{t_{k}}^{t_{k+1}}\tilde{f}(s,X_{t_{k},x}(s))ds+\sum_{r=1}^{q}\int_{t_{k}}^{t_{k+1}}\tilde{\gamma}_{r}(s,X_{t_{k},x}(s))d\tilde{w}_{r}(s)\right] ,\nonumber
\end{align}
where
\begin{equation}
dX=-\tilde{a}(s,X)ds+\sigma dW(s),\ X(t_{k})=x. \label{spr2}
\end{equation}
A slightly modified explicit Euler scheme with the simplest noise simulation
applied to (\ref{spr2}) gives
\begin{equation}
X_{t_{k},x}(t_{k+1})\simeq\bar{X}_{t_{k},x}(t_{k+1})=x-\tilde{a}(t_{k+1},x)h+\sigma\sqrt{h}\xi, \label{NS15}
\end{equation}
where $\xi=(\xi^{1},\ldots,\xi^{n})^{\top}$ and$\ \xi^{1},\ldots,\xi^{n}$ are
i.i.d. random variables with the law $P(\xi^{i}=\pm1)=1/2.$ We substitute
$\bar{X}_{t_{k},x}(t_{k+1})$ from (\ref{NS15}) in (\ref{spr1}) instead of
$X_{t_{k},x}(t_{k+1})$, evaluate the expectation exactly, and thus obtain
(recall that $\operatorname{div}\tilde{\gamma}_{r}=0$ and $\nabla\tilde{p}_{a}(s,x)\in(\mathbf{V}_{p}^{m})^{\bot}$):
\begin{gather}
u_{a}(t_{k},x)=\breve{u}_{a}(t_{k+1},x)-\nabla\tilde{p}_{a}(t_{k+1},x)h+\tilde{f}(t_{k+1},x)h\label{NS16}\\
+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\left( \tilde{w}_{r}(t_{k+1})-\tilde{w}_{r}(t_{k})\right) +\rho\nonumber\\
=P\breve{u}_{a}(t_{k+1},x)+P\tilde{f}(t_{k+1},x)h+P^{\bot}\breve{u}_{a}(t_{k+1},x)+P^{\bot}\tilde{f}(t_{k+1},x)h\nonumber\\
-\nabla\tilde{p}_{a}(t_{k+1},x)h
+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+\rho,\nonumber
\end{gather}
where $\Delta_{k}\tilde{w}_{r}=\tilde{w}_{r}(t_{k+1})-\tilde{w}_{r}(t_{k}),$
$r=1,\ldots,q;$ $\rho=\rho(t_{k},x)$ is a remainder, and
\begin{equation}
\breve{u}_{a}(t_{k+1},x)=E^{\tilde{w}}u_{a}(t_{k+1},\bar{X}_{k+1})=2^{-n}\sum_{j=1}^{2^{n}}u_{a}(t_{k+1},x-\tilde{a}(t_{k+1},x)h+\sigma\sqrt{h}\xi_{j})
\label{NS17}
\end{equation}
with $\xi_{1}=(1,1,\ldots,1)^{\top},\ \ldots,\ \xi_{2^{n}}=(-1,-1,\ldots
,-1)^{\top}.$ Taking into account that $u_{a}(t_{k},x)$ in (\ref{NS16}) is
divergence free, we get
\begin{equation}
u_{a}(t_{k},x)=P\breve{u}_{a}(t_{k+1},x)+P\tilde{f}(t_{k+1},x)h+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+P\rho. \label{NS165}
\end{equation}
Neglecting the remainder, we get the one-step approximation for $u_{a}(t_{k},x)$:
\begin{equation}
\hat{u}_{a}(t_{k},x)=P\breve{u}_{a}(t_{k+1},x)+P\tilde{f}(t_{k+1},x)h+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}.
\label{os15n}
\end{equation}
Re-writing $\hat{u}_{a}(t_{k},x)$ of (\ref{os15n}) in the positive direction
of time, we obtain the one-step approximation for the velocity $v_{a}(t_{k},x)$ of the forward-time stochastic Oseen-Stokes equations
(\ref{os1})-(\ref{os4}):
\begin{equation}
\hat{v}_{a}(t_{k+1},x)=P\breve{v}_{a}(t_{k},x)+Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}, \label{os15}
\end{equation}
where $\Delta_{k}w_{r}=w_{r}(t_{k+1})-w_{r}(t_{k}),$ $r=1,\ldots,q,$ and
\begin{equation}
\breve{v}_{a}(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}v_{a}(t_{k},x-a(t_{k},x)h+\sigma\sqrt{h}\xi_{j}). \label{os17}
\end{equation}
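For illustration, the average (\ref{os17}) is immediate to evaluate once $v_{a}$ and $a$ are available as functions; a minimal NumPy sketch (the callables \texttt{v} and \texttt{a}, returning $n$-vectors, are assumptions for illustration) reads:
\begin{verbatim}
import itertools
import numpy as np

def breve_v_a(v, a, t_k, x, h, sigma):
    # Average of v over the 2^n points x - a(t_k,x) h + sigma sqrt(h) xi_j,
    # with xi_j running over all sign vectors (+-1,...,+-1); cf. (os17).
    n = len(x)
    shift = np.asarray(x, dtype=float) - a(t_k, x) * h
    total = np.zeros(n)
    for xi in itertools.product((-1.0, 1.0), repeat=n):
        total += v(t_k, shift + sigma * np.sqrt(h) * np.asarray(xi))
    return total / 2.0 ** n
\end{verbatim}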
Now let us turn our attention from the stochastic Oseen-Stokes equation to the
\textit{stochastic NSE} (\ref{NS1})-(\ref{NS3}).
Using the one-step approximation (\ref{os15})-(\ref{os17}) for the stochastic
Oseen-Stokes equations (\ref{os1})-(\ref{os4}) as a guidance, we construct the
one-step approximation for the SNSE (\ref{NS1})-(\ref{NS3}) by substituting
$a(t_{k},x)$ with $v(t_{k},x):$
\begin{equation}
\hat{v}(t_{k+1},x)=P\breve{v}(t_{k},x)+Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}, \label{NSA1}
\end{equation}
where
\begin{equation}
\breve{v}(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}v(t_{k},x-v(t_{k},x)h+\sigma\sqrt{h}\xi_{j}). \label{NSA3}
\end{equation}
It is easy to see that under Assumptions~2.1 $\operatorname{div}\hat
{v}(t_{k+1},x)=0.$
The corresponding layer method for the SNSE (\ref{NS1})-(\ref{NS3}) has the
form
\begin{gather}
\bar{v}(0,x)=\varphi(x),\ \bar{v}(t_{k+1},x)=P\breve{v}(t_{k},x)+Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r},\label{NS18}\\
k=0,\ldots,N-1,\nonumber
\end{gather}
where
\begin{equation}
\breve{v}(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}\bar{v}(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j}). \label{NS19}
\end{equation}
We note that we use the same notation $\breve{v}(t_{k},x)$ for the functions
appearing in the one-step approximation (\ref{NSA3}) and in the layer method
(\ref{NS19}) but this does not cause any confusion.
Knowing the expansions
\begin{align}
\breve{v}(t_{k},x) & =\sum_{\mathbf{n}\in\mathbf{Z}^{n}}\breve{v}_{\mathbf{n}}(t_{k})e^{i(2\pi/L)(\mathbf{n},x)},\ \ \ f(t_{k},x)=\sum_{\mathbf{n}\in\mathbf{Z}^{n}}f_{\mathbf{n}}(t_{k})e^{i(2\pi/L)(\mathbf{n},x)},\label{NS20}\\
\gamma_{r}(t_{k},x) & =\sum_{\mathbf{n}\in\mathbf{Z}^{n}}\gamma_{r,\mathbf{n}}(t_{k})e^{i(2\pi/L)(\mathbf{n},x)},\nonumber
\end{align}
it is not difficult to find $\bar{v}(t_{k+1},x)$. Indeed, using (\ref{N00})
and (\ref{N01}), we obtain from (\ref{NS18})-(\ref{NS19}):
\begin{gather}
\bar{v}(t_{k+1},x)=\sum_{\mathbf{n}\in\mathbf{Z}^{n}}\bar{v}_{\mathbf{n}}(t_{k+1})e^{i(2\pi/L)(\mathbf{n},x)},\ \ \label{NS21}\\
\bar{v}_{\mathbf{n}}(t_{k+1})=\breve{v}_{\mathbf{n}}(t_{k})+f_{\mathbf{n}}(t_{k})h-\frac{\breve{v}_{\mathbf{n}}^{\top}(t_{k})\mathbf{n}}{|\mathbf{n}|^{2}}\mathbf{n}-h\frac{f_{\mathbf{n}}^{\top}(t_{k})\mathbf{n}}{|\mathbf{n}|^{2}}\mathbf{n}+\sum_{r=1}^{q}\gamma_{r,\mathbf{n}}(t_{k})\ \Delta_{k}w_{r}.\nonumber
\end{gather}
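As the formula for $\bar{v}_{\mathbf{n}}(t_{k+1})$ shows, the projector $P$ acts diagonally in Fourier space. A minimal sketch of this projection (assuming the coefficients are stored in the \texttt{numpy.fft} ordering, with integer wavenumber grids \texttt{n1}, \texttt{n2} built from \texttt{np.fft.fftfreq(M, d=1.0/M)}):
\begin{verbatim}
import numpy as np

def leray_project(v_hat, n1, n2):
    # Remove from each Fourier coefficient its component along the
    # wavenumber n = (n1, n2): v_n -> v_n - (v_n . n / |n|^2) n.
    # v_hat has shape (2, M, M); the n = 0 mode is left unchanged.
    n_sq = np.where(n1 ** 2 + n2 ** 2 == 0, 1, n1 ** 2 + n2 ** 2)
    dot = (v_hat[0] * n1 + v_hat[1] * n2) / n_sq
    out = v_hat.copy()
    out[0] -= dot * n1
    out[1] -= dot * n2
    return out
\end{verbatim}
The projected field is then divergence free up to round-off.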
We note that turning the layer method (\ref{NS18})-(\ref{NS19}) into a
numerical algorithm requires complementing it with an interpolation in order
to compute the terms $\bar{v}(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j})$ in (\ref{NS19}) used for finding $\breve{v}_{\mathbf{n}}(t_{k})$ from
(\ref{NS20}), see the corresponding discussion in the case of the deterministic
NSE in \cite{NS5}.
\subsection{Layer methods based on the probabilistic representation with
simplest characteristics\label{secSimL}}
If we put $\mu(s,x)=-\tilde{a}(s,x)/\sigma$ and $F(s,x)=0$ in (\ref{FBD5})-(\ref{BDF2}), we can obtain the following local probabilistic representation
for the solution to the backward stochastic Oseen-Stokes equations
(\ref{os11})-(\ref{os14}):
\begin{gather}
u_{a}(t_{k},x)=E^{\tilde{w}}[u_{a}(t_{k+1},X_{t_{k},x}(t_{k+1}))Y_{t_{k},x,1}(t_{k+1})]\label{NS14}\\
+E^{\tilde{w}}\left[ -\int_{t_{k}}^{t_{k+1}}\nabla\tilde{p}_{a}(s,X_{t_{k},x}(s))Y_{t_{k},x,1}(s)ds+\int_{t_{k}}^{t_{k+1}}\tilde{f}(s,X_{t_{k},x}(s))Y_{t_{k},x,1}(s)ds\right. \nonumber\\
\left. +\sum_{r=1}^{q}\int_{t_{k}}^{t_{k+1}}\tilde{\gamma}_{r}(s,X_{t_{k},x}(s))Y_{t_{k},x,1}(s)d\tilde{w}_{r}(s)\right] \ ,\nonumber
\end{gather}
where $X_{t,x}(s),$\ $Y_{t,x,1}(s),$ $s\geq t,$ solve the system of stochastic
differential equations
\begin{align}
dX & =\sigma dW(s),\ X(t)=x,\label{NS12}\\
dY & =-\frac{1}{\sigma}Y\tilde{a}^{\top}(s,X)dW(s),\ Y(t)=1. \label{NS13}
\end{align}
We apply a slightly modified explicit Euler scheme with the simplest noise
simulation to (\ref{NS12})-(\ref{NS13}):
\begin{equation}
\bar{X}_{t_{k},x}(t_{k+1})=x+\sigma\sqrt{h}\xi,\ \bar{Y}_{t_{k},x,1}(t_{k+1})=1-\frac{1}{\sigma}\tilde{a}^{\top}(t_{k+1},x)\sqrt{h}\xi,
\label{NS30}
\end{equation}
where $\xi$ is the same as in (\ref{NS15}). Approximating $X_{t_{k},x}(t_{k+1})$ and $Y_{t_{k},x,1}(t_{k+1})$ in (\ref{NS14}) by $\bar{X}_{t_{k},x}(t_{k+1})$ and $\bar{Y}_{t_{k},x,1}(t_{k+1})$ from (\ref{NS30}), we obtain
\begin{gather}
u_{a}(t_{k},x)=E^{\tilde{w}}[u_{a}(t_{k+1},x+\sigma\sqrt{h}\xi)(1-\frac{1}{\sigma}\tilde{a}^{\top}(t_{k+1},x)\sqrt{h}\xi)]-\nabla\tilde{p}_{a}(t_{k+1},x)h\label{NS31}\\
+\tilde{f}(t_{k+1},x)h+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+\rho\nonumber\\
=2^{-n}\sum_{j=1}^{2^{n}}u_{a}(t_{k+1},x+\sigma\sqrt{h}\xi_{j})-\frac{\sqrt{h}}{\sigma}\breve{u}_{a}(t_{k+1},x)-\nabla\tilde{p}_{a}(t_{k+1},x)h \nonumber\\
+\tilde{f}(t_{k+1},x)h
+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+\rho,\nonumber
\end{gather}
where
\begin{align}
\breve{u}_{a}(t_{k+1},x) & =E^{\tilde{w}}[u_{a}(t_{k+1},x+\sigma\sqrt{h}\xi)\xi^{\top}]\tilde{a}(t_{k+1},x)\label{NS32}\\
& =2^{-n}\sum_{j=1}^{2^{n}}u_{a}(t_{k+1},x+\sigma\sqrt{h}\xi_{j})\xi_{j}^{\top}\tilde{a}(t_{k+1},x)\nonumber
\end{align}
and $\rho=\rho(t_{k},x)$ is a remainder.
Using the Helmholtz-Hodge-Leray decomposition and taking into account that
\[
\operatorname{div}u_{a}(t_{k+1},x+\sigma\sqrt{h}\xi_{j})=0,\text{ \ }\operatorname{div}\gamma_{r}=0,
\]
\ we get from (\ref{NS31})-(\ref{NS32}):
\begin{gather*}
u_{a}(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}u_{a}(t_{k+1},x+\sigma\sqrt{h}\xi_{j})-\frac{\sqrt{h}}{\sigma}P\breve{u}_{a}(t_{k+1},x)+P\tilde{f}(t_{k+1},x)h\\
-\frac{\sqrt{h}}{\sigma}P^{\bot}\breve{u}_{a}(t_{k+1},x)+P^{\bot}\tilde{f}(t_{k+1},x)h-\nabla\tilde{p}_{a}(t_{k+1},x)h\\
+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+\rho,
\end{gather*}
whence we obtain after applying the operator $P$:
\begin{gather}
u_{a}(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}u_{a}(t_{k+1},x+\sigma\sqrt{h}\xi_{j})-\frac{\sqrt{h}}{\sigma}P\breve{u}_{a}(t_{k+1},x)+P\tilde{f}(t_{k+1},x)h\label{NS33} \displaybreak[0]\\
+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+P\rho.\nonumber
\end{gather}
Dropping the remainder in (\ref{NS33}) and re-writing the obtained
approximation in the positive direction of time, we obtain the
one-step approximation for the forward-time stochastic Oseen-Stokes equations
(\ref{os1})-(\ref{os4}):
\begin{gather}
\hat{v}_{a}(t_{k+1},x)=2^{-n}\sum_{j=1}^{2^{n}}v_{a}(t_{k},x+\sigma\sqrt{h}\xi_{j})-\frac{\sqrt{h}}{\sigma}P\breve{v}_{a}(t_{k},x)+Pf(t_{k},x)h\label{os18}\\
+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r},\nonumber
\end{gather}
where
\begin{equation}
\breve{v}_{a}(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}v_{a}(t_{k},x+\sigma\sqrt{h}\xi_{j})\xi_{j}^{\top}a(t_{k},x). \label{os20}
\end{equation}
Using (\ref{os18})-(\ref{os20}) as a guidance, we arrive at the one-step
approximation for the SNSE (\ref{NS1})-(\ref{NS3}):
\begin{gather}
\hat{v}(t_{k+1},x)=2^{-n}\sum_{j=1}^{2^{n}}v(t_{k},x+\sigma\sqrt{h}\xi_{j})-\frac{\sqrt{h}}{\sigma}P\breve{v}(t_{k},x) \label{NSA4} \\
+Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}, \nonumber
\end{gather}
where
\begin{equation}
\breve{v}(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}v(t_{k},x+\sigma\sqrt{h}\xi_{j})\xi_{j}^{\top}v(t_{k},x). \label{NSA6}
\end{equation}
It is easy to see that under Assumptions~2.1 $\operatorname{div}\hat{v}(t_{k+1},x)=0.$ The corresponding layer method for the SNSE (\ref{NS1})-(\ref{NS3}) has the form
\begin{gather}
\bar{v}(0,x)=\varphi(x),\ \bar{v}(t_{k+1},x)=2^{-n}\sum_{j=1}^{2^{n}}\bar{v}(t_{k},x+\sigma\sqrt{h}\xi_{j})-\frac{\sqrt{h}}{\sigma}P\breve{v}(t_{k},x)\label{NSM21} \displaybreak[0]\\
+Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r},\ \ k=0,\ldots,N-1,\nonumber
\end{gather}
where
\begin{equation}
\breve{v}(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}\bar{v}(t_{k},x+\sigma\sqrt{h}\xi_{j})\xi_{j}^{\top}\bar{v}(t_{k},x). \label{NSM23}
\end{equation}
Practical implementation of the layer method (\ref{NSM21})-(\ref{NSM23}) is
straightforward and efficient. Let us write the corresponding numerical
algorithm for simplicity in the two-dimensional ($n=2)$ case. We choose a
positive integer $M$ as a cut-off frequency and write the approximate velocity
at the time $t_{k+1}$ as the partial sum:
\begin{equation}
\bar{v}(t_{k+1},x)=\sum_{n_{1}=-M}^{M-1}\sum_{n_{2}=-M}^{M-1}\bar{v}_{\mathbf{n}}(t_{k+1})e^{i(2\pi/L)(\mathbf{n},x)}, \label{al1}
\end{equation}
where $\mathbf{n}=(n_{1},n_{2})^{\top}.$
We note that we use the same notation $\bar{v}(t_{k+1},x)$ for the partial sum
in (\ref{al1}) instead of writing $\bar{v}_{M}(t_{k+1},x)$, while in
(\ref{NSM21}) $\bar{v}(t_{k+1},x)$ denotes the approximate velocity containing
all frequencies, but this should not lead to any confusion.
Further, we have
\begin{equation}
\frac{1}{4}\sum_{j=1}^{4}\bar{v}(t_{k},x+\sigma\sqrt{h}\xi_{j})=\sum_{n_{1}=-M}^{M-1}\sum_{n_{2}=-M}^{M-1}\bar{v}_{\mathbf{n}}(t_{k})e^{i(2\pi/L)(\mathbf{n},x)}\frac{1}{4}\sum_{j=1}^{4}e^{i(2\pi\sigma\sqrt{h}/L)(\mathbf{n},\xi_{j})}. \label{al11}
\end{equation}
Then
\begin{align*}
\breve{v}(t_{k},x) & =\frac{1}{4}\sum_{j=1}^{4}\bar{v}(t_{k},x+\sigma\sqrt{h}\xi_{j})\xi_{j}^{\top}\bar{v}(t_{k},x)\\
& =\sum_{n_{1}=-M}^{M-1}\sum_{n_{2}=-M}^{M-1}\bar{v}_{\mathbf{n}}(t_{k})e^{i(2\pi/L)(\mathbf{n},x)}\frac{1}{4}\sum_{j=1}^{4}e^{i(2\pi\sigma\sqrt{h}/L)(\mathbf{n},\xi_{j})}\xi_{j}^{\top}\bar{v}(t_{k},x)\\
& =\sum_{n_{1}=-M}^{M-1}\sum_{n_{2}=-M}^{M-1}V_{\mathbf{n}}(t_{k})e^{i(2\pi/L)(\mathbf{n},x)}\bar{v}(t_{k},x),
\end{align*}
where
\[
V_{\mathbf{n}}(t_{k})=\bar{v}_{\mathbf{n}}(t_{k})\cdot\frac{1}{4}\sum
_{j=1}^{4}e^{i(2\pi\sigma\sqrt{h}/L)(\mathbf{n},\xi_{j})}\xi_{j}^{\top}.
\]
Note that $V_{\mathbf{n}}(t_{k})$ is a $2\times2$-matrix. Let
\begin{equation}
V(t_{k},x):=\sum_{n_{1}=-M}^{M-1}\sum_{n_{2}=-M}^{M-1}V_{\mathbf{n}}(t_{k})e^{i(2\pi/L)(\mathbf{n},x)} \label{ext}
\end{equation}
then
\[
\breve{v}(t_{k},x)=V(t_{k},x)\bar{v}(t_{k},x).
\]
We obtain the algorithm:
\begin{align}
\bar{v}_{\mathbf{n}}(0) & =\varphi_{\mathbf{n}},\ \label{alg2} \displaybreak[0]\\
\bar{v}_{\mathbf{n}}(t_{k+1}) & =\bar{v}_{\mathbf{n}}(t_{k})\frac{1}{4}\sum_{j=1}^{4}e^{i(2\pi\sigma\sqrt{h}/L)(\mathbf{n},\xi_{j})}-\frac{\sqrt{h}}{\sigma}\left( \breve{v}_{\mathbf{n}}(t_{k})-\frac{\breve{v}_{\mathbf{n}}^{\top}(t_{k})\mathbf{n}}{|\mathbf{n}|^{2}}\mathbf{n}\right) +f_{\mathbf{n}}(t_{k})h-h\frac{f_{\mathbf{n}}^{\top}(t_{k})\mathbf{n}}{|\mathbf{n}|^{2}}\mathbf{n} \nonumber \displaybreak[0]\\
&+\sum_{r=1}^{q}\gamma_{r,\mathbf{n}}(t_{k})\ \Delta_{k}w_{r},\nonumber
\end{align}
where
\begin{equation}
\breve{v}_{\mathbf{n}}(t_{k})=(\breve{v}(t_{k},x))_{\mathbf{n}}=\left(
V(t_{k},x)\bar{v}(t_{k},x)\right) _{\mathbf{n}}. \label{alg3}
\end{equation}
To find $\breve{v}_{\mathbf{n}}(t_{k})$ one can either multiply two partial
sums of the form (\ref{al1}) and (\ref{ext}) or exploit fast Fourier transform
in the usual fashion (see, e.g. \cite{Can98}) to speed up the algorithm. The
algorithm (\ref{alg2}) can be viewed as analogous to spectral methods. It is
interesting that the layer method (\ref{NSM21})-(\ref{NSM23}) is, on the one
hand, related to a finite difference scheme (see below) and on the other hand,
to spectral methods.
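To make the above concrete, here is a minimal NumPy sketch of one step of the algorithm (\ref{alg2})-(\ref{alg3}) in two dimensions (a sketch under the same storage conventions as above; dealiasing and other implementation details from \cite{Can98,NS5} are omitted):
\begin{verbatim}
import numpy as np

def layer_step_alg2(v_hat, f_hat, gamma_hats, dw, h, sigma, L, n1, n2):
    # v_hat, f_hat: arrays of shape (2, M, M) with Fourier coefficients in
    # numpy.fft ordering; gamma_hats: list of such arrays; dw: the Wiener
    # increments w_r(t_{k+1}) - w_r(t_k).
    n_sq = np.where(n1 ** 2 + n2 ** 2 == 0, 1, n1 ** 2 + n2 ** 2)

    def project(u_hat):  # Leray projection, diagonal in Fourier space
        dot = (u_hat[0] * n1 + u_hat[1] * n2) / n_sq
        return u_hat - dot * np.stack((n1, n2))

    xis = [np.array(s) for s in
           [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]]
    theta = 2.0 * np.pi * sigma * np.sqrt(h) / L
    v = np.fft.ifft2(v_hat, axes=(-2, -1))   # v_bar(t_k, x) on the grid
    avg_hat = np.zeros_like(v_hat)           # averaged shifted field
    breve = np.zeros_like(v)                 # breve_v(t_k, x) on the grid
    for xi in xis:
        shift = np.exp(1j * theta * (n1 * xi[0] + n2 * xi[1]))
        w = np.fft.ifft2(v_hat * shift, axes=(-2, -1))
        avg_hat += v_hat * shift / 4.0
        breve += w * (xi[0] * v[0] + xi[1] * v[1]) / 4.0  # w (xi^T v_bar)
    breve_hat = np.fft.fft2(breve, axes=(-2, -1))
    new_hat = avg_hat - np.sqrt(h) / sigma * project(breve_hat) \
              + h * project(f_hat)
    for g_hat, dw_r in zip(gamma_hats, dw):
        new_hat = new_hat + g_hat * dw_r
    return new_hat
\end{verbatim}
The increments can be sampled as \texttt{dw = rng.standard\_normal(q) * np.sqrt(h)} for a NumPy generator \texttt{rng}.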
\label{remFD}Let us discuss a relationship between the layer method
$(\ref{NSM21})$-$(\ref{NSM23})$ and finite difference methods. For simplicity
in writing, we give this illustration in the two-dimensional case. It is not
difficult to notice that the two-dimensional analog of the layer approximation
$(\ref{NSM21})$ can be re-written as the following finite difference scheme
for the SNSE $(\ref{NS1})$-$(\ref{NS3})$:
\begin{align}
& \frac{\bar{v}(t_{k+1},x)-\bar{v}(t_{k},x)}{h}\label{fd1}\\
& =\frac{\bar{v}(t_{k},x^{1}+\sigma\sqrt{h},x^{2}+\sigma\sqrt{h})+\bar{v}(t_{k},x^{1}-\sigma\sqrt{h},x^{2}+\sigma\sqrt{h})-4\bar{v}(t_{k},x^{1},x^{2})}{4h}\nonumber\\
& +\frac{\bar{v}(t_{k},x^{1}+\sigma\sqrt{h},x^{2}-\sigma\sqrt{h})+\bar{v}(t_{k},x^{1}-\sigma\sqrt{h},x^{2}-\sigma\sqrt{h})}{4h}\nonumber\\
& -\frac{1}{\sigma\sqrt{h}}P\breve{v}(t_{k},x)+Pf(t_{k},x)+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\frac{\Delta w_{r}(t_{k+1})}{h}\nonumber
\end{align}
with
\begin{align}
\frac{\breve{v}(t_{k},x)}{\sigma\sqrt{h}} & =\bar{v}^{1}(t_{k},x)\frac{\bar{v}(t_{k},x^{1}+\sigma\sqrt{h},x^{2}+\sigma\sqrt{h})-\bar{v}(t_{k},x^{1}-\sigma\sqrt{h},x^{2}+\sigma\sqrt{h})}{4\sigma\sqrt{h}}\label{fd2}\\
& +\bar{v}^{1}(t_{k},x)\frac{\bar{v}(t_{k},x^{1}+\sigma\sqrt{h},x^{2}-\sigma\sqrt{h})-\bar{v}(t_{k},x^{1}-\sigma\sqrt{h},x^{2}-\sigma\sqrt{h})}{4\sigma\sqrt{h}}\nonumber\\
& +\bar{v}^{2}(t_{k},x)\frac{\bar{v}(t_{k},x^{1}+\sigma\sqrt{h},x^{2}+\sigma\sqrt{h})-\bar{v}(t_{k},x^{1}+\sigma\sqrt{h},x^{2}-\sigma\sqrt{h})}{4\sigma\sqrt{h}}\nonumber\\
& +\bar{v}^{2}(t_{k},x)\frac{\bar{v}(t_{k},x^{1}-\sigma\sqrt{h},x^{2}+\sigma\sqrt{h})-\bar{v}(t_{k},x^{1}-\sigma\sqrt{h},x^{2}-\sigma\sqrt{h})}{4\sigma\sqrt{h}}\ \ .\nonumber
\end{align}
As one can see, $\bar{v}(t_{k},\cdot)$ in the right-hand side of $(\ref{fd1})$
is evaluated at the nodes $(x^{1},x^{2}),$ $(x^{1}\pm\sigma\sqrt{h},x^{2}\pm\sigma\sqrt{h})$, which is typical for a standard explicit finite
difference scheme with the space discretization step $h_{x}$ taken equal to
$\sigma\sqrt{h}$ and $h$ being the time-discretization step. We also note that
if in the approximation $(\ref{NS30})$ we choose a different random vector
$\xi$ than in $(\ref{NS15})$ then we can obtain another layer method for the
SNSE which can be again re-written as a finite difference scheme (see such a
discussion in the case of the deterministic NSE in \cite{NS5}).
We recall \cite{M1,MT1,spde} that convergence theorems for layer methods (in
comparison with the theory of finite difference methods) do not contain any
conditions on stability of their approximations. In layer methods we do not
need to a priori prescribe space nodes: they are obtained automatically
depending on the choice of a probabilistic representation and a numerical scheme.
We note that our error analysis for the layer methods (see Section~\ref{secER}) immediately implies the same error estimates for the corresponding finite
difference scheme $(\ref{fd1})$.
\begin{remark}
It is not difficult to see from $(\ref{fd2})$ that
\begin{equation}
(\bar{v}(t_{k},x),\nabla)\bar{v}(t_{k},x)\approx\frac{\breve{v}(t_{k},x)}{\sigma\sqrt{h}}\ . \label{fd3}
\end{equation}
If we put the exact $v(t_{k},x)$ in $(\ref{fd3})$ (both in its left and
right-hand sides) instead of the approximate $\bar{v}(t_{k},x)$ then the
accuracy of the approximation in $(\ref{fd3})$ is of order $O(h).$ This
observation is helpful for understanding a relationship between the layer
methods from this and the next section (see Remark~\ref{Remfd2} at the end of
the next section).
\end{remark}
\subsection{A layer method based on the direct probabilistic
representation\label{secDiL}}
The local version of probabilistic representation (\ref{NS11n})-(\ref{NS12n}) for the solution to the
backward SNSE (\ref{D1})-(\ref{D01}) has the form:
\begin{gather}
u(t_{k},x)=E^{\tilde{w}}u(t_{k+1},X_{t_{k},x}(t_{k+1}))\label{NS13n}\\
-E^{\tilde{w}}\left[ \int_{t_{k}}^{t_{k+1}}\{\nabla\tilde{p}(s^{\prime},X_{t_{k},x}(s^{\prime}))-\tilde{f}(s^{\prime},X_{t_{k},x}(s^{\prime})) \right. \nonumber \\
\Bigg.+(u(s^{\prime},X_{t_{k},x}(s^{\prime})),\nabla)u(s^{\prime},X_{t_{k},x}(s^{\prime}))\}ds^{\prime}\Bigg] \nonumber\\
+\sum_{r=1}^{q}E^{\tilde{w}}\left[ \int_{t_{k}}^{t_{k+1}}\tilde{\gamma}_{r}(s^{\prime},X_{t_{k},x}(s^{\prime}))d\tilde{w}_{r}(s^{\prime})\right] .\nonumber
\end{gather}
Using (\ref{NS13n}), we construct the one-step approximation of the solution to the
backward SNSE (\ref{D1})-(\ref{D01}):
\begin{align}
u(t_{k},x) & =E^{\tilde{w}}u(t_{k+1},X_{t_{k},x}(t_{k+1}))-h\{\nabla\tilde{p}(t_{k+1},x)-\tilde{f}(t_{k+1},x)\label{DL2} \displaybreak[0] \\
& +(u(t_{k+1},x),\nabla)u(t_{k+1},x)\}+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+\rho\nonumber \displaybreak[0]\\
& =2^{-n}\sum_{j=1}^{2^{n}}u(t_{k+1},x+\sigma\sqrt{h}\xi_{j})\nonumber \displaybreak[0]\\
& -h\{\nabla\tilde{p}(t_{k+1},x)-\tilde{f}(t_{k+1},x)+(u(t_{k+1},x),\nabla)u(t_{k+1},x)\}\nonumber \displaybreak[0]\\
& +\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+\rho,\nonumber
\end{align}
where $\rho=\rho(t_{k},x)$ is a remainder.
Using the Helmholtz-Hodge-Leray decomposition and taking into account that
$\operatorname{div}u(t_{k+1},x+\sigma\sqrt{h}\xi_{j})=0$ and
$\operatorname{div}\gamma_{r}=0,$\ we get from (\ref{DL2}):
\begin{gather}
u(t_{k},x)=2^{-n}\sum_{j=1}^{2^{n}}u(t_{k+1},x+\sigma\sqrt{h}\xi_{j})-P[(u(t_{k+1},x),\nabla)u(t_{k+1},x)]h
\label{DL30}\\
+P\tilde{f}(t_{k+1},x)h-P^{\bot}[(u(t_{k+1},x),\nabla)u(t_{k+1},x)]h+P^{\bot}\tilde{f}(t_{k+1},x)h\nonumber\\
-\nabla\tilde{p}(t_{k+1},x)h+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+\rho,\nonumber
\end{gather}
whence we obtain after applying the operator $P$:
\begin{align}
u(t_{k},x) & =2^{-n}\sum_{j=1}^{2^{n}}u(t_{k+1},x+\sigma\sqrt{h}\xi_{j})-P[(u(t_{k+1},x),\nabla)u(t_{k+1},x)]h
\label{DL3}\\
& +P\tilde{f}(t_{k+1},x)h+\sum_{r=1}^{q}\tilde{\gamma}_{r}(t_{k+1},x)\Delta_{k}\tilde{w}_{r}+P\rho.\nonumber
\end{align}
We re-write (\ref{DL30})-(\ref{DL3}) for the forward-time SNSE (\ref{NS1})-(\ref{NS3}):
\begin{gather}
v(t_{k+1},x)=2^{-n}\sum_{j=1}^{2^{n}}v(t_{k},x+\sigma\sqrt{h}\xi
_{j})-P[(v(t_{k},x),\nabla)v(t_{k},x)]h\label{DL30n}\\
+Pf(t_{k},x)h-P^{\bot}[(v(t_{k},x),\nabla)v(t_{k},x)]h+P^{\bot}f(t_{k},x)h\nonumber\\
-\nabla
p(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}+\rho\nonumber
\end{gather}
and
\begin{align}
v(t_{k+1},x) & =2^{-n}\sum_{j=1}^{2^{n}}v(t_{k},x+\sigma\sqrt{h}\xi_{j})-P\left[ (v(t_{k},x),\nabla)v(t_{k},x)\right] h\label{DL3n}\\
& +Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}+P\rho.\nonumber
\end{align}
Dropping the remainder in (\ref{DL3n}), we obtain the one-step approximation
for the velocity $v(t_{k+1},x)$ in (\ref{NS1})-(\ref{NS3}):
\begin{align}
\hat{v}(t_{k+1},x) & =2^{-n}\sum_{j=1}^{2^{n}}v(t_{k},x+\sigma\sqrt{h}\xi_{j})-P\left[ (v(t_{k},x),\nabla)v(t_{k},x)\right] h\label{DL5}\\
& +Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}.\nonumber
\end{align}
It is easy to see that under Assumptions~2.1 $\operatorname{div}\hat{v}(t_{k+1},x)=0.$ The corresponding layer method for the velocity of the SNSE
(\ref{NS1})-(\ref{NS3}) has the form
\begin{gather}
\bar{v}(0,x)=\varphi(x), \label{DL7} \displaybreak[0]\\
\bar{v}(t_{k+1},x)=2^{-n}\sum_{j=1}^{2^{n}}\bar
{v}(t_{k},x+\sigma\sqrt{h}\xi_{j})-P\left[ (\bar{v}(t_{k},x),\nabla)\bar
{v}(t_{k},x)\right] h \nonumber \displaybreak[0]\\
+Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r},\ \ k=0,\ldots
,N-1.\nonumber
\end{gather}
This method can be turned into a numerical algorithm analogously to how we
constructed the numerical algorithm (\ref{alg2}) based on the layer method
(\ref{NSM21}) in Section~\ref{secSimL}.
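In particular, the term $P[(\bar{v}(t_{k},x),\nabla)\bar{v}(t_{k},x)]$ appearing in (\ref{DL7}) can be evaluated pseudospectrally. A minimal two-dimensional sketch (same storage conventions and caveats as in the earlier sketches; dealiasing is again omitted):
\begin{verbatim}
import numpy as np

def advection_hat(v_hat, n1, n2, L):
    # Fourier coefficients of (v, grad) v: differentiate in Fourier
    # space, multiply on the physical grid, transform back.
    ik1 = 1j * 2.0 * np.pi / L * n1
    ik2 = 1j * 2.0 * np.pi / L * n2
    v = np.real(np.fft.ifft2(v_hat, axes=(-2, -1)))
    dv_dx1 = np.real(np.fft.ifft2(ik1 * v_hat, axes=(-2, -1)))
    dv_dx2 = np.real(np.fft.ifft2(ik2 * v_hat, axes=(-2, -1)))
    adv = v[0] * dv_dx1 + v[1] * dv_dx2   # component-wise (v . grad) v
    return np.fft.fft2(adv, axes=(-2, -1))
\end{verbatim}
Applying the Fourier-space projection to the output and combining with the averaged shifts yields one step of (\ref{DL7}).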
\begin{remark}
\label{Remfd2}It is interesting to note (see also $(\ref{fd2})$ and
$(\ref{fd3})$) the relationship between the methods $(\ref{NSM21})$ and
$(\ref{DL7})$: $\sqrt{h}\breve{v}(t_{k},x)/\sigma$ from $(\ref{NSA4})$-$(\ref{NSA6})$ is a finite-difference approximation of the term $(\bar{v}(t_{k},x),\nabla)\bar{v}(t_{k},x)h$ in $(\ref{DL7})$. We remark that this
finite difference naturally arises via the probabilistic approach. It is
useful to have both methods in the arsenal of layer methods for the SNSE: while
the method $(\ref{DL7})$ has a smaller one-step error than $(\ref{NSM21}),$ it
requires evaluation of spatial derivatives of $\bar{v}(t_{k},x)$.
\subsection{Approximation of pressure\label{secPres}}
In the previous sections we constructed numerical methods for the velocity
$v(t,x)$; in this section we propose approximations for the pressure $p(t,x)$.
Applying the projection operator $P^{\bot}$ to the SNSE (\ref{NS1})-(\ref{NS3}),
we get (see also (\ref{NS03})):
\begin{equation}
\nabla p(t,x)=-P^{\bot}\left[ (v(t,x),\nabla)v(t,x)\right] +P^{\bot}f(t,x).
\label{pre1}
\end{equation}
Based on (\ref{pre1}), we complement the layer method (\ref{DL7}) for the
velocity by the approximation of pressure as follows:
\begin{equation}
\nabla\bar{p}(t_{k+1},x)=-P^{\bot}\left[ (\bar{v}(t_{k+1},x),\nabla)\bar{v}(t_{k+1},x)\right] +P^{\bot}f(t_{k+1},x). \label{DL7p}
\end{equation}
As a result, we obtain \textit{the layer method} (\ref{DL7}), (\ref{DL7p}) for
the solution of SNSE (\ref{NS1})-(\ref{NS3}).
It is clear that the numerical error $\nabla\bar{p}(t_{k+1},x)-\nabla p(t_{k+1},x)$
is of the same order as the global errors of $\bar{v}(t_{k+1},x)$ and
$\nabla\bar{v}(t_{k+1},x).$ We note that in (\ref{DL7p}), to evaluate the pressure
at time $t_{k+1}$, we use the velocity at time $t_{k+1},$ i.e., the updated velocity.
\begin{remark}
We observe that $\rho$ in $(\ref{DL30n})$ is such that $P^{\bot}\rho=0.$
Indeed, it follows from $(\ref{DL30n})$-$(\ref{DL3n})$ (with $t_{k+1}$ instead
of $t_{k}$) that
\begin{equation}
\nabla p(t_{k+1},x)=-P^{\bot}\left[ (v(t_{k+1},x),\nabla)v(t_{k+1},x)\right]
+P^{\bot}f(t_{k+1},x)+P^{\bot}\rho. \label{pre2}
\end{equation}
Comparing $(\ref{pre1})$ and $(\ref{pre2})$, we get $P^{\bot}\rho=0.$
\end{remark}
Let us now return to the layer method (\ref{NSM21}) for velocity. We have to
complement it with an approximation of pressure. To this end, we approximate
(see Remark~\ref{Remfd2} and (\ref{fd3})) the term $(\bar{v}(t_{k+1},x),\nabla)\allowbreak\bar{v}(t_{k+1},x)$ in (\ref{DL7p}) by $\breve{v}(t_{k+1},x)/\sigma\sqrt{h}$ with $\breve{v}(t_{k+1},x)$ from (\ref{NSM23})
(with $t_{k+1}$ instead of $t_{k})$. We obtain
\begin{equation}
\nabla\bar{p}(t_{k+1},x)=-\frac{1}{\sigma\sqrt{h}}P^{\bot}\breve{v}(t_{k+1},x)+P^{\bot}f(t_{k+1},x), \label{NSMp}
\end{equation}
where $\breve{v}(t_{k+1},x)$ is from (\ref{NSM23}). Note that in the velocity
approximation (\ref{NSM21}) we use $\breve{v}(t_{k},x)$ while in the pressure
approximation (\ref{NSMp}) we use $\breve{v}(t_{k+1},x).$
As a result, we obtain \textit{the layer method} (\ref{NSM21})-(\ref{NSM23}),
(\ref{NSMp}) for the solution of SNSE (\ref{NS1})-(\ref{NS3}).
We remark that the layer method (\ref{NS18}) for velocity can be completed by
approximating the pressure either as in (\ref{DL7p}) with $\bar{v}(t_{k+1},x)$ found from (\ref{NS18}), or as in (\ref{NSMp}) but with
$\breve{v}(t_{k+1},x)$ from (\ref{NSM23}) using $\bar{v}(t_{k+1},x)$ found
from (\ref{NS18}).
To provide an example of an algorithm involving an approximation of pressure,
let us return to the algorithm (\ref{alg2}) for velocity. Based on
(\ref{NSMp}) (see also (\ref{N01})), we obtain
\begin{equation}
\bar{p}_{\mathbf{n}}(t_{k+1})=i\frac{L}{2\pi}\left( \frac{\breve{v}_{\mathbf{n}}^{\top}(t_{k+1})\mathbf{n}}{\sigma\sqrt{h}|\mathbf{n}|^{2}}-\frac{f_{\mathbf{n}}^{\top}(t_{k+1})\mathbf{n}}{|\mathbf{n}|^{2}}\right)
,\ \ \mathbf{n\neq0,\ }\bar{p}_{\mathbf{0}}(t_{k+1})=0, \label{algp}
\end{equation}
where $\breve{v}_{\mathbf{n}}^{\top}(t_{k+1})$ are as in (\ref{alg3}) with
$t_{k+1}$ instead of $t_{k}$.
As a result, we obtain \textit{the algorithm} (\ref{alg2})-(\ref{alg3}),
(\ref{algp}) for the solution of SNSE (\ref{NS1})-(\ref{NS3}) which
corresponds to the layer method (\ref{NSM21})-(\ref{NSM23}), (\ref{NSMp}).
Analogously, one can obtain algorithms corresponding to the other two layer
methods considered in the paper.
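For completeness, the pressure recovery (\ref{algp}) also takes only a few lines under the same conventions as the earlier sketches; the $\mathbf{n}=0$ coefficient is set to zero since the pressure is defined up to an additive constant:
\begin{verbatim}
import numpy as np

def pressure_hat(breve_hat, f_hat, n1, n2, h, sigma, L):
    # Pressure Fourier coefficients from (algp).
    n_sq = np.where(n1 ** 2 + n2 ** 2 == 0, 1, n1 ** 2 + n2 ** 2)
    dot_b = breve_hat[0] * n1 + breve_hat[1] * n2
    dot_f = f_hat[0] * n1 + f_hat[1] * n2
    p_hat = 1j * L / (2.0 * np.pi) \
            * (dot_b / (sigma * np.sqrt(h)) - dot_f) / n_sq
    return np.where((n1 == 0) & (n2 == 0), 0.0, p_hat)
\end{verbatim}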
\section{Error analysis\label{secER}}
In this section we provide theoretical support for the numerical methods from
the previous section. For definiteness, we consider the layer method
(\ref{NS18}). Analogous results can be obtained for the other two layer
methods proposed in Sections~\ref{secSimL}\ and~\ref{secDiL}.
As before, $||u(\cdot)||=||u(x)||$ denotes the $\mathbf{L}^{2}$-norm of a
function $u(x),$ $x\in Q.$ In this section we use the same letter $K$ for
various deterministic constants and $C=C(\omega)$ for various positive random variables.
We start with analysis of the local mean-square error.
\begin{theorem}
\label{lemonest}Let Assumptions~2.1 hold with $m_{0}>6$. The one-step error
\begin{equation}
\rho(t_{k+1},x)=\hat{v}(t_{k+1},x)-v(t_{k+1},x) \label{onesterr}
\end{equation}
of the one-step approximation $(\ref{NSA1})$-$(\ref{NSA3})$ for the SNSE
$(\ref{NS1})$-$(\ref{NS3})$ is estimated as
\begin{equation}
||E(\rho(t_{k+1},x)|\mathcal{F}_{t_{k}}^{w})||\leq C(\omega)h^{2},\ \label{lm1}
\end{equation}
and for $1\leq p<p_{0}$
\begin{equation}
\left( E||\rho(t_{k+1},\cdot)||^{2p}\right) ^{1/2p}\leq Kh^{3/2},\ \label{lm22}
\end{equation}
where a random constant $C(\omega)>0$ with $EC^{2}<\infty$ does not depend on
$h$ and $k,$ a deterministic constant $K>0$ does not depend on $h$ and $k$ but
depends on $p,$ and $p_{0}=p_{0}(m_{0})>1$ is a positive number or
$p_{0}=\infty.$
\end{theorem}
\noindent\textbf{Proof. } Using Assumptions~2.1, we expand the right-hand side
of (\ref{NSA3}), substitute the outcome in (\ref{NSA1}), and obtain
\begin{align}
\hat{v}(t_{k+1},x) & =v(t_{k},x)-hP\left[ (v(t_{k},x),\nabla)v(t_{k},x)\right] +\frac{\sigma^{2}}{2}h\Delta v(t_{k},x)\label{lm3}\\
& +Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}+r_{1}(t_{k},x),\nonumber
\end{align}
where the remainder $r_{1}(t_{k},x)$ has the form
\begin{align*}
r_{1}(t_{k},x) & =\frac{h^{2}}{2}\sum_{i,j=1}^{n}P\left[ v^{i}(t_{k},x)v^{j}(t_{k},x)\frac{\partial^{2}}{\partial x^{i}\partial x^{j}}v(t_{k},\Theta)\right] \\
& +\frac{\sigma^{2}h^{2}}{2}\sum_{i,j=1}^{n}P\left[ v^{j}(t_{k},x)\frac{\partial^{3}}{\left( \partial x^{i}\right) ^{2}\partial x^{j}}v(t_{k},\tilde{\Theta})\right] \\
& +\frac{\sigma^{4}h^{2}}{24}2^{-n}\sum_{j=1}^{2^{n}}\sum_{i_{1},\ldots,i_{4}=1}^{n}P\left[
\frac{\partial^{4}}{\partial x^{i_{1}}\partial x^{i_{2}}\partial x^{i_{3}}\partial x^{i_{4}}}v(t_{k},\Xi_{j})\xi_{j}^{i_{1}}\xi_{j}^{i_{2}}\xi_{j}^{i_{3}}\xi_{j}^{i_{4}}\right] ,
\end{align*}
and $\Theta$ and $\tilde{\Theta}$ are some intermediate points between $x$ and
$x-v(t_{k},x)h,$ and $\Xi_{j}$ are some intermediate points between
$x-v(t_{k},x)h$ and $x-v(t_{k},x)h+\sigma\sqrt{h}\xi_{j}$ (we note that
$r_{1}$ is a vector and the intermediate points depend on the component of
$r_{1}$ but we do not reflect this in the notation). It is not difficult to
estimate that this remainder satisfies the inequalities
\begin{equation}
||E\left( r_{1}(t_{k},x)|\mathcal{F}_{t_{k}}^{w}\right) ||\leq
C(\omega)h^{2},\ \ \left( E||r_{1}(t_{k},\cdot)||^{2p}\right) ^{1/2p}\leq
Kh^{2}. \label{lm4}
\end{equation}
We write the solution $v(s,x),\ s\geq t_{k},$ of (\ref{NS1})-(\ref{NS3}) as
\begin{align}
v(s,x) & =v(t_{k},x)+\int_{t_{k}}^{s}\left[ \frac{\sigma^{2}}{2}\Delta
v(s^{\prime},x)-(v(s^{\prime},x),\nabla)v(s^{\prime},x)+f(s^{\prime},x)\right] ds^{\prime}\label{lm5}\\
& -\int_{t_{k}}^{s}\nabla p(s^{\prime},x)ds^{\prime}+\sum_{r=1}^{q}\int_{t_{k}}^{s}\gamma_{r}(s^{\prime},x)dw_{r}(s^{\prime})\nonumber
\end{align}
and, in particular,
\begin{align}
v(t_{k+1},x) & =v(t_{k},x)+\int_{t_{k}}^{t_{k+1}}\left[ \frac{\sigma^{2
}{2}\Delta v(s,x)-(v(s,x),\nabla)v(s,x)+f(s,x)\right] ds\label{lm6}\\
& -\int_{t_{k}}^{t_{k+1}}\nabla p(s,x)ds+\sum_{r=1}^{q}\int_{t_{k}}^{t_{k+1
}\gamma_{r}(s,x)dw_{r}(s).\nonumber
\end{align}
Substituting $v(s,x)$ from (\ref{lm5}) in the integrand of the first integral
in (\ref{lm6}) and expanding $\gamma_{r}(s,x)$ at $(t_{k},x),$ we obtain
\begin{align}
v(t_{k+1},x) & =v(t_{k},x)+h\frac{\sigma^{2}}{2}\Delta v(t_{k},x)-h(v(t_{k},x),\nabla)v(t_{k},x)+hf(t_{k},x)\label{lm7}\\
& -\int_{t_{k}}^{t_{k+1}}\nabla p(s,x)ds+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}+r_{2}(t_{k},x),\nonumber
\end{align}
where
\[
r_{2}(t_{k},x)=r_{2}^{(1)}(t_{k},x)+r_{2}^{(2)}(t_{k},x)
\]
and
\begin{align*}
r_{2}^{(1)}(t_{k},x) & =\frac{\sigma^{2}}{2}\int_{t_{k}}^{t_{k+1}}\left[
\int_{t_{k}}^{s}\Delta\left( \frac{\sigma^{2}}{2}\Delta v(s^{\prime
},x)-(v(s^{\prime},x),\nabla)v(s^{\prime},x) \right. \right . \displaybreak[0]\\
& \bigg. \bigg. +f(s^{\prime},x)\bigg)ds^{\prime}\bigg] ds
-\frac{\sigma^{2}}{2}\int_{t_{k}}^{t_{k+1}}\int_{t_{k}}^{s}\Delta\nabla
p(s^{\prime},x)ds^{\prime}ds \displaybreak[0]\\
&-\int_{t_{k}}^{t_{k+1}}(v(s,x),\nabla)
\left[ \int_{t_{k}}^{s}\left(
\frac{\sigma^{2}}{2}\Delta v(s^{\prime},x)
-(v(s^{\prime},x),\nabla)v(s^{\prime},x)
\right. \right. \displaybreak[0]\\
&\bigg. \bigg. +f(s^{\prime},x)\bigg) ds^{\prime}\bigg] ds \displaybreak[0]\\
& +\int_{t_{k}}^{t_{k+1}}(v(s,x),\nabla)\int_{t_{k}}^{s}\nabla p(s^{\prime
},x)ds^{\prime}ds \displaybreak[0]\\
& -\int_{t_{k}}^{t_{k+1}}\left( \int_{t_{k}}^{s}\left( \frac{\sigma^{2}}{2}\Delta v(s^{\prime},x)-(v(s^{\prime},x),\nabla)v(s^{\prime},x) \right. \right. \displaybreak[0]\\
& \bigg. \bigg. +f(s^{\prime},x)\bigg) ds^{\prime},\nabla\bigg) v(s,x)ds \displaybreak[0]\\
&+\int_{t_{k}}^{t_{k+1}}\left( \int_{t_{k}}^{s}\nabla p(s^{\prime
},x)ds^{\prime},\nabla\right) v(s,x)ds \displaybreak[0]\\
& +\int_{t_{k}}^{t_{k+1}}(t_{k+1}-s)\frac{\partial}{\partial s}f(s,x)ds,
\end{align*}
\begin{align*}
r_{2}^{(2)}(t_{k},x) & =\frac{\sigma^{2}}{2}\sum_{r=1}^{q}\int_{t_{k
}^{t_{k+1}}\int_{t_{k}}^{s}\Delta\gamma_{r}(s^{\prime},x)dw_{r}(s^{\prime
})ds \displaybreak[0]\\
& -\sum_{r=1}^{q}\int_{t_{k}}^{t_{k+1}}\left[ (v(s,x),\nabla)\int_{t_{k
}^{s}\gamma_{r}(s^{\prime},x)dw_{r}(s^{\prime})\right] ds \displaybreak[0]\\
& -\sum_{r=1}^{q}\int_{t_{k}}^{t_{k+1}}\left( \int_{t_{k}}^{s}\gamma
_{r}(s^{\prime},x)dw_{r}(s^{\prime}),\nabla\right) v(s,x)ds \displaybreak[0]\\
& +\sum_{r=1}^{q}\int_{t_{k}}^{t_{k+1}}\left( w_{r}(t_{k+1})-w_{r}(s)\right) \frac{\partial}{\partial s}\gamma_{r}(s,x)ds.
\end{align*}
We see that the remainder $r_{2}(t_{k},x)$ consists of 1) $r_{2}^{(1)}(t_{k},x)$ with terms of mean-square order $h^{2}$ and 2) $r_{2}^{(2)}(t_{k},x)$ with terms containing $\mathcal{F}_{t_{k+1}}^{w}$-measurable Ito
integrals of mean-square order $h^{3/2}$ whose expectations with respect to
$\mathcal{F}_{t_{k}}^{w}$ equal zero.\ Further, using Assumptions~2.1, one can
show that
\begin{equation}
|E\left( r_{2}(t_{k},x)|\mathcal{F}_{t_{k}}^{w}\right) |\leq C(\omega)h^{2},\ \ \left( E\left\vert r_{2}(t_{k},x)\right\vert ^{2p}\right)
^{1/2p}\leq Kh^{3/2},\ \label{lm8}
\end{equation}
where $C(\omega)>0$ and $K>0$ do not depend on $k,$ $x,$ and $h.$ Based on the
second inequality in (\ref{lm8}), we obtain
\begin{align}
E||r_{2}(t_{k},\cdot)||^{2p} & =E\left( \int_{Q}\left[ r_{2}(t_{k},x)\right] ^{2}dx\right) ^{p}\leq KE\int_{Q}\left\vert r_{2}(t_{k},x)\right\vert ^{2p}dx\label{lm82}\\
& \leq K\int_{Q}E\left\vert r_{2}(t_{k},x)\right\vert ^{2p}dx\leq
Kh^{2p\times3/2}\ .\nonumber
\end{align}
Applying the projection operator $P$ to the left- and right-hand sides of
(\ref{lm7}), we arrive at
\begin{align}
v(t_{k+1},x) & =v(t_{k},x)+h\frac{\sigma^{2}}{2}\Delta v(t_{k},x)-hP[(v(t_{k},x),\nabla)v(t_{k},x)]+hPf(t_{k},x)\label{lm9}\\
& +\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}+r_{3}(t_{k},x),\ \nonumber
\end{align}
where the new remainder $r_{3}(t_{k},x)=Pr_{2}(t_{k},x).$ Using (\ref{lm82}),
we get
\begin{equation}
E||r_{3}(t_{k},\cdot)||^{2p}=E||Pr_{2}(t_{k},\cdot)||^{2p}\leq E||r_{2}(t_{k},\cdot)||^{2p}\leq Kh^{2p\times3/2}. \label{lm100}
\end{equation}
Hence from here, (\ref{lm4}) and (\ref{lm3}), (\ref{lm9}), we obtain
(\ref{lm22}).
Observing that the expectation of the projection $P$ of Ito integrals remains equal to
zero, we get $E\left( Pr_{2}^{(2)}(t_{k},x)|\mathcal{F}_{t_{k}}^{w}\right)
=0.$ Since $r_{2}^{(1)}(t_{k},x)$ consists of terms of mean-square order
$h^{2}$, we obtain
\begin{align*}
||E\left( r_{3}(t_{k},x)|\mathcal{F}_{t_{k}}^{w}\right) ||^{2} &
=||E\left( Pr_{2}^{(1)}(t_{k},x)|\mathcal{F}_{t_{k}}^{w}\right) ||^{2} \displaybreak[0]\\
&=\int_{Q}\left[ E\left( Pr_{2}^{(1)}(t_{k},x)|\mathcal{F}_{t_{k}}^{w}\right) \right] ^{2}dx \displaybreak[0]\\
& \leq\int_{Q}E\left( \left[ Pr_{2}^{(1)}(t_{k},x)\right] ^{2}|\mathcal{F}_{t_{k}}^{w}\right) dx \displaybreak[0]\\
&=E\left( \int_{Q}\left[ Pr_{2}^{(1)}(t_{k},x)\right] ^{2}dx|\mathcal{F}_{t_{k}}^{w}\right) \displaybreak[0]\\
& \leq E\left( \int_{Q}\left[ r_{2}^{(1)}(t_{k},x)\right] ^{2}dx|\mathcal{F}_{t_{k}}^{w}\right) \leq C(\omega)h^{4}
\end{align*}
whence
\begin{equation}
||E\left( r_{3}(t_{k},x)|\mathcal{F}_{t_{k}}^{w}\right) ||\leq
C(\omega)h^{2}\ . \label{lm11}
\end{equation}
Then the estimate (\ref{lm1}) follows from (\ref{lm4}), (\ref{lm11}) and
(\ref{lm3}), (\ref{lm9}). \ $\square$
\begin{remark}
We recall that in Assumptions~2.1 we require existence of moments of order
$m,$ $2\leq m<m_{0},$ of the solution and its spatial derivatives. The higher
the $m_{0},$ the higher $p$, $1\leq p<p_{0},$ can be taken in $(\ref{lm22})$.
In particular, to guarantee $(\ref{lm22})$ with $p=1,$ we need existence of
moments of up to the order $m=6,$ while if the moments of any order $m$ (i.e.,
$m_{0}=\infty)$ are finite then $(\ref{lm22})$ is valid for any $p.$ We also
note that the smoothness conditions on the SNSE solution (see Assumptions~2.1)
required for proving Theorem~\ref{lemonest} are such that $v(t,x)$ should have
continuous spatial derivatives up to order four and $p(t,x)$ -- up to order three.
\end{remark}
\begin{corollary}
\label{coras}Let Assumptions~2.1 hold with the bounded moments of any order
$m\geq2.$ Then for almost every trajectory $w(\cdot)$ and any $0<\varepsilon
<3/2$ there exists a constant $C(\omega)>0$ such that the one-step error from
$(\ref{onesterr})$ is estimated as
\begin{equation}
||\rho(t_{k+1},\cdot)||\leq C(\omega)h^{3/2-\varepsilon}, \label{coroe}
\end{equation>
i.e., the layer method $(\ref{NS18})$ has the one-step error of order
$3/2-\varepsilon$ a.s.
\end{corollary}
\noindent\textbf{Proof.} Here we follow the recipe used in
\cite{Gyo98,filter,spde}. The Markov inequality together with (\ref{lm22})
implies
\[
P(||\rho(t_{k+1},\cdot)||>h^{\gamma})\leq\frac{E||\rho(t_{k+1},\cdot)||^{2p}}{h^{2p\gamma}}\leq Kh^{2p(3/2-\gamma)}.
\]
Then for any $\gamma=3/2-\varepsilon$ there is a sufficiently large $p\geq1$
such that (recall that $h=T/N)$
\[
\sum_{N=1}^{\infty}P\left( ||\rho(t_{k+1},\cdot)||>\frac{T^{\gamma}}{N^{\gamma}}\right) \leq KT^{2p(3/2-\gamma)}\sum_{N=1}^{\infty}\frac{1}{N^{2p(3/2-\gamma)}}<\infty.
\]
Hence, due to the Borel-Cantelli lemma, the random variable
\[
\varsigma:=\sup_{h>0}h^{-\gamma}||\rho(t_{k+1},\cdot)||
\]
is a.s. finite, which implies
(\ref{coroe}). \ $\square$
\begin{remark}
Since it is desirable for the order of the one-step error $||\rho
(t_{k+1},\cdot)||$ to be greater than one, we should impose the restriction on
$\varepsilon$ in $(\ref{coroe})$ to be in $(0,0.5).$ If we restrict ourselves
to fulfilment of the inequality $(\ref{coroe})$ with $\varepsilon
_{0}<\varepsilon<1/2,$ where $\varepsilon_{0}$ is some positive number, then
the conditions of Corollary~\ref{coras}\ can be weakened since for such
$\varepsilon$ it is sufficient to take $p_{0}=1/(2\varepsilon_{0}).\ $
\end{remark}
The intuition built on numerics for ordinary stochastic differential equations
(see, e.g. \cite{MT1}) and also based on layer methods for SPDEs
\cite{filter,spde} together with convergence results for layer methods for
deterministic NSE \cite{BM,NS5} suggests that the one-step error properties
proved in Theorem~\ref{lemonest} should lead to mean-square convergence of the
layer method (\ref{NS18}) with order one, i.e.,
\begin{equation}
(E||\bar{v}(t_{k},\cdot)-v(t_{k},\cdot)||^{2p})^{1/2p}\leq Kh. \label{msqone}
\end{equation}
However, we have not succeeded in proving such a result. Below we prove
almost sure (a.s.) convergence of the method (\ref{NS18}) with the lower order
$1/2-\varepsilon$, for arbitrary $\varepsilon>0$, instead of the $1-\varepsilon$ a.s.
order which would follow from (\ref{msqone}) and Borel-Cantelli-type
arguments (see, e.g. \cite{filter,spde} and also the proof of
Corollary~\ref{coras} above). In our numerical experiments (see
Section~\ref{secnum}) we observed the first order (both mean-square and a.s.)
convergence of a layer method on test examples.
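The observed order in such experiments is estimated in the standard way, e.g. by a least-squares fit of $\log$-errors against $\log h$; a small generic sketch:
\begin{verbatim}
import numpy as np

def observed_order(steps, errors):
    # Least-squares slope of log(error) versus log(h); for a method of
    # order p the slope should approach p as h decreases.
    slope, _ = np.polyfit(np.log(steps), np.log(errors), 1)
    return slope
\end{verbatim}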
Since we assumed in Assumptions~2.1 that the problem (\ref{NS1})-(\ref{NS3}) has a unique classical solution
$v(t,x),$\ $p(t,x)$ which has continuous derivatives in the space variable
$x$\ up to some order, and since we are considering the periodic case,
$v(t,x),\ p(t,x)$ and their derivatives are a.s. finite on $[0,T]\times Q$.
To prove the a.s. convergence result below (Theorem~\ref{tmhconuadd}), we make the
following assumptions on the approximate solution $\bar{v}(t_{k},x)$ from
(\ref{NS18}). \medskip
\noindent\textbf{Assumptions 4.1. }\textit{Let }$\bar{v}(t_{k},x),$
$k=0,\ldots,N,$ have continuous first-order spatial derivatives and
\begin{align}
|\bar{v}(t_{k},x)| & \leq C(\omega),\ \label{NS201}\\
|\partial\bar{v}(t_{k},x)/\partial x^{i}| & \leq C(\omega),\ \ i=1,\ldots
,n,\nonumber
\end{align}
\textit{where} $C(\omega)>0$\ \textit{is an a.s. finite constant independent
of} $x,\ h,\ k.$ \medskip
The first inequality in (\ref{NS201}) is necessary for a.s. convergence of the
layer method (\ref{NS18}). The second inequality is also necessary if one
expects convergence of spatial derivatives of $\bar{v}(t,x).$ We note that
even in the case of deterministic NSE \cite{BM,NS5} it turns out to be
problematic to derive the inequalities (\ref{NS201}) for the approximate
solutions. At the same time, verifying Assumptions~4.1 in numerical
experiments is straightforward. We also note that in the case of Oseen-Stokes
equations we succeeded in deriving such estimates for approximate solutions
and their spatial derivatives.
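For instance, with the spectral representation of $\bar{v}(t_{k},x)$ used in Section~\ref{secLayer}, the quantities bounded in (\ref{NS201}) can be monitored on the grid at every layer (a sketch under the same conventions as the earlier sketches; the constant $C(\omega)$ is then checked a posteriori):
\begin{verbatim}
import numpy as np

def sup_norms(v_hat, n1, n2, L):
    # Grid sup-norms of v_bar and of its first spatial derivatives,
    # i.e. the quantities appearing in Assumptions 4.1.
    v = np.real(np.fft.ifft2(v_hat, axes=(-2, -1)))
    d1 = np.real(np.fft.ifft2(1j * 2.0 * np.pi / L * n1 * v_hat,
                              axes=(-2, -1)))
    d2 = np.real(np.fft.ifft2(1j * 2.0 * np.pi / L * n2 * v_hat,
                              axes=(-2, -1)))
    return np.abs(v).max(), max(np.abs(d1).max(), np.abs(d2).max())
\end{verbatim}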
\begin{theorem}
\label{tmhconuadd}Let Assumptions~2.1 hold with the bounded moments of any
order $m\geq2$ and Assumptions~4.1 also hold. For almost every trajectory
$w(\cdot)$ and any $0<\varepsilon<1/2$ there exists a constant $C(\omega)>0$
such that
\begin{equation}
||\bar{v}(t_{k},\cdot)-v(t_{k},\cdot)||\leq C(\omega)h^{1/2-\varepsilon},
\label{thm1}
\end{equation}
i.e., the layer method $(\ref{NS18})$ for the SNSE $(\ref{NS1})$-$(\ref{NS3})$
converges with order $1/2-\varepsilon$ a.s.
\end{theorem}
\noindent\textbf{Proof. } First, we note that it is easy to see that under
Assumptions~2.1 and ~4.1:
\begin{equation}
\operatorname{div}\bar{v}(t_{k},x)=0. \label{bvdiv}
\end{equation}
Denote the error of the method (\ref{NS18})-(\ref{NS19}) on the $k$th layer
by
\[
\varepsilon(t_{k},x)=\bar{v}(t_{k},x)-v(t_{k},x).
\]
Due to (\ref{NS18}) and (\ref{NS19}), we obtain
\begin{align*}
\varepsilon(t_{k+1},x)+v(t_{k+1},x) & =\bar{v}(t_{k+1},x) \displaybreak[0]\\
& =2^{-n}\sum_{j=1}^{2^{n}}P\bar{v}(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt
{h}\xi_{j})+Pf(t_{k},x)h \displaybreak[0]\\
& +\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}\\
& =2^{-n}\sum_{j=1}^{2^{n}}Pv(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi
_{j}) \displaybreak[0]\\
& +2^{-n}\sum_{j=1}^{2^{n}}P\varepsilon(t_{k},x-\bar{v}(t_{k},x)h+\sigma
\sqrt{h}\xi_{j}) \displaybreak[0]\\
& +Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}.
\end{align*}
Using Assumptions~2.1, we obtain
\begin{equation}
v(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j})=v(t_{k},x-v(t_{k},x)h+\sigma\sqrt{h}\xi_{j})+r_{j}(t_{k},x), \label{thm2}
\end{equation}
where
\begin{equation}
|r_{j}(t_{k},x)|\leq C(\omega)|\varepsilon(t_{k},x)|h \label{thm3}
\end{equation>
and $C(\omega)$ is an a.s. finite random variable. Hence
\begin{align*}
\varepsilon(t_{k+1},x)+v(t_{k+1},x) =2^{-n}\sum_{j=1}^{2^{n}}Pv(t_{k},x-v(t_{k},x)h+\sigma\sqrt{h}\xi_{j}) \displaybreak[0]\\
+2^{-n}\sum_{j=1}^{2^{n}} Pr_{j}(t_{k},x)
+2^{-n}\sum_{j=1}^{2^{n}}P\varepsilon(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j}) \displaybreak[0]\\
+Pf(t_{k},x)h+\sum_{r=1}^{q}\gamma_{r}(t_{k},x)\Delta_{k}w_{r}.
\end{align*}
Then we get
\begin{align}
\varepsilon(t_{k+1},x)&=2^{-n}\sum_{j=1}^{2^{n}}P\varepsilon(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j})+2^{-n}\sum_{j=1}^{2^{n}}Pr_{j}(t_{k},x) \label{thm5} \displaybreak[0]\\
&+\rho(t_{k+1},x), \nonumber
\end{align}
where $\rho(t_{k+1},x)$ is the error (see (\ref{onesterr})) of the one-step
approximation (\ref{NSA1})-(\ref{NSA3}) and this one-step error satisfies the
inequality (\ref{coroe}) from Corollary~\ref{coras}. It follows from
(\ref{thm5}), (\ref{thm3}) and (\ref{coroe}) that
\begin{align}
||\varepsilon(t_{k+1},\cdot)|| \leq & 2^{-n}\sum_{j=1}^{2^{n}}||P\varepsilon(t_{k},\cdot-\bar{v}(t_{k},\cdot)h+\sigma\sqrt{h}\xi_{j})||+2^{-n}\sum_{j=1}^{2^{n}}||Pr_{j}(t_{k},\cdot)||\label{thm7} \displaybreak[0]\\
& +||\rho(t_{k+1},\cdot)||\nonumber \displaybreak[0]\\
\leq &2^{-n}\sum_{j=1}^{2^{n}}||\varepsilon(t_{k},\cdot-\bar{v}(t_{k},\cdot)h+\sigma\sqrt{h}\xi_{j})||+2^{-n}\sum_{j=1}^{2^{n}}||r_{j}(t_{k},\cdot)|| \nonumber \displaybreak[0] \\
&+||\rho(t_{k+1},\cdot)||\nonumber \displaybreak[0]\\
\leq & 2^{-n}\sum_{j=1}^{2^{n}}||\varepsilon(t_{k},\cdot-\bar{v}(t_{k},\cdot)h+\sigma\sqrt{h}\xi_{j})||+C(\omega)||\varepsilon(t_{k},\cdot)||h \nonumber \displaybreak[0] \\
&+C(\omega)h^{3/2-\varepsilon}.\nonumber
\end{align}
Consider $\delta(x)=\varepsilon(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j}).$ Due to Assumptions~4.1, the function $y(x)=x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j}$ is a differentiable function with continuous
partial derivatives. Furthermore, using Assumptions~4.1, one can show that for
sufficiently small $h>0$ the function $y(x)=x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j}$ is injective. Then, taking into account the $Q$-periodicity of
$\bar{v}(t_{k},x)$ and $\varepsilon^{i}(t_{k},x),$ we obtain
\begin{align*}
||\delta(\cdot)||^{2} & =\int_{Q}\sum_{i=1}^{n}\left[ \varepsilon^{i}(t_{k},x-\bar{v}(t_{k},x)h+\sigma\sqrt{h}\xi_{j})\right] ^{2}dx\\
& =\int_{Q}\sum_{i=1}^{n}\left[ \varepsilon^{i}(t_{k},y)\right] ^{2}\frac{D(x^{1}\ldots x^{n})}{D(y^{1}\ldots y^{n})}dy.
\end{align*}
Due to Assumptions~4.1 and due to (\ref{bvdiv}), we get
\begin{align}
\frac{D(y^{1}\ldots y^{n})}{D(x^{1}\ldots x^{n})} & =\left\vert
\begin{array}
[c]{cccc}
1-h\frac{\partial\bar{v}^{1}(t_{k},x)}{\partial x^{1}} & -h\frac{\partial\bar{v}^{1}(t_{k},x)}{\partial x^{2}} & \cdots & -h\frac{\partial\bar{v}^{1}(t_{k},x)}{\partial x^{n}} \\
-h\frac{\partial\bar{v}^{2}(t_{k},x)}{\partial x^{1}} & 1-h\frac{\partial\bar{v}^{2}(t_{k},x)}{\partial x^{2}} & \cdots & -h\frac{\partial\bar{v}^{2}(t_{k},x)}{\partial x^{n}} \\
\cdots & \cdots & \cdots & \cdots \\
-h\frac{\partial\bar{v}^{n}(t_{k},x)}{\partial x^{1}} & -h\frac{\partial\bar{v}^{n}(t_{k},x)}{\partial x^{2}} & \cdots & 1-h\frac{\partial\bar{v}^{n}(t_{k},x)}{\partial x^{n}}
\end{array}
\right\vert \label{thm8} \displaybreak[0]\\
& =1+C(\omega)h^{2},\nonumber
\end{align}
where $C(\omega)$ is an a.s. finite random variable. Then, we also have
\[
\dfrac{D(x^{1}\ldots x^{n})}{D(y^{1}\ldots y^{n})}=1+C(\omega)h^{2}.
\]
We obtain from (\ref{thm7}) and (\ref{thm8}):
\begin{equation}
||\varepsilon(t_{k+1},\cdot)||\leq||\varepsilon(t_{k},\cdot)||+C(\omega)||\varepsilon(t_{k},\cdot)||h+C(\omega)h^{3/2-\varepsilon}, \label{thm9}
\end{equation}
whence (\ref{thm1}) follows: since the layer method is started from the exact
initial condition, we have $\varepsilon(t_{0},x)=0,$ and iterating
(\ref{thm9}) yields by the discrete Gronwall lemma
\[
||\varepsilon(t_{k},\cdot)||\leq C(\omega)h^{3/2-\varepsilon}\sum_{l=0}^{k-1}(1+C(\omega)h)^{l}\leq C(\omega)h^{1/2-\varepsilon}.\ \ \square
\]
\begin{remark}
We recall that we have proved in Theorem~\ref{lemonest} that the mean and
mean-square one-step errors of the layer method $(\ref{NS18})$ (and
analogously of the other two layer methods from Section~\ref{secLayer}) are of
orders $O(h^{2})$ and $O(h^{3/2}),$ respectively. This has given us the basis
to argue that the methods from Section~\ref{secLayer} are of global
mean-square order one (see $(\ref{msqone})$). The same intuition implies that
if we incorporate terms of mean-square order $O(h^{3/2})$ and of mean order
$O(h^{2})$ in these first order methods (and thus make the mean-square
one-step errors of order $O(h^{2})$ and the mean errors of order
$O(h^{3})$), then they become of global mean-square order $3/2.$ The required
Ito integrals of mean-square order $O(h^{3/2})$ can be simulated in a
constructive way (and hence these methods of order $3/2$ are constructive). In
the case of deterministic NSE (i.e., when $\gamma_{r}=0)$ such a method of
global mean-square order $3/2$ becomes of order two and coincides with the
corresponding layer method derived in \cite{NS5}.
\end{remark}
Let us now consider the error of the approximations of pressure considered in
Section~\ref{secPres}. In the next proposition we prove convergence of
pressure evaluated by (\ref{DL7p}),\ (\ref{NS18}). Analogously, one can prove
convergence of the other approximations of pressure derived in
Section~\ref{secPres}.
\begin{proposition}
\label{prp32}Let the assumptions of Theorem~\ref{tmhconuadd} hold. In addition,
assume that second-order spatial derivatives of the approximate solution are
a.s. finite: $|\partial^{2}\bar{v}(t_{k},x)/\partial x^{i}\partial
x^{j}|\allowbreak\leq C(\omega).$ Then for almost every trajectory $w(\cdot)$
and any $0<\epsilon<1/3$ there exists a constant $C(\omega)>0$ such that
the approximate pressure $\bar{p}(t_{k},x)$ from $(\ref{DL7p})$,
$(\ref{NS18})$ satisfies the following inequality
\begin{equation}
\Vert\bar{p}(t_{k},\cdot)-p(t_{k},\cdot)\Vert\leq C(\omega)h^{1/3-\epsilon}.
\label{NS204}
\end{equation}
\end{proposition}
\noindent\textbf{Proof}. We have
\begin{align}
\frac{\partial v^{i}}{\partial x^{j}}(t_{k},x) & =\frac{v^{i}(t_{k},x+\delta
e_{j})-v^{i}(t_{k},x-\delta e_{j})}{2\delta}+O(\delta^{2}),\label{NS25} \displaybreak[0]\\
\frac{\partial\bar{v}^{i}}{\partial x^{j}}(t_{k},x) & =\frac{\bar{v}^{i}(t_{k},x+\delta e_{j})-\bar{v}^{i}(t_{k},x-\delta e_{j})}{2\delta}+O(\delta^{2}),\nonumber
\end{align}
where $\delta$ is a positive sufficiently small number and $|O(\delta
^{2})|\leq C(\omega)\delta^{2}$. Due to Theorem~\ref{tmhconuadd},
\begin{gather}
\left\Vert \frac{v(t_{k},x+\delta e_{j})-v(t_{k},x-\delta e_{j})}{2\delta
}-\frac{\bar{v}(t_{k},x+\delta e_{j})-\bar{v}(t_{k},x-\delta e_{j})}{2\delta
}\right\Vert \label{NS255} \\
\leq C(\omega)\frac{h^{1/2-\epsilon/2}}{\delta}\ \text{a.s.} \nonumber
\end{gather}
Choosing $\delta=ch^{1/6+\epsilon/2}$ with some $c>0,$ we obtain from
(\ref{NS25}) and (\ref{NS255}) that
\begin{equation}
\left\Vert \frac{\partial v}{\partial x^{j}}(t_{k},\cdot)-\frac{\partial
\bar{v}}{\partial x^{j}}(t_{k},\cdot)\right\Vert \leq C(\omega)h^{1/3-\epsilon
}\ \ \ \text{a.s.\ .} \label{NS26}
\end{equation}
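To see why this choice of $\delta$ balances the two error contributions, note
that with $\delta=ch^{1/6+\epsilon/2}$ the bound in (\ref{NS255}) and the
$O(\delta^{2})$-terms in (\ref{NS25}) become
\[
C(\omega)\frac{h^{1/2-\epsilon/2}}{\delta}=\frac{C(\omega)}{c}\,h^{1/3-\epsilon}\ \ \text{and}\ \ C(\omega)\delta^{2}=C(\omega)c^{2}h^{1/3+\epsilon}\leq C(\omega)c^{2}h^{1/3-\epsilon},
\]
so that both contributions are of order $h^{1/3-\epsilon}$.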
Subtracting (\ref{pre1}) with $t=t_{k}$ from (\ref{DL7p}) with $t_{k}$ instead
of $t_{k+1}$, we get
\begin{gather}
\left\Vert \nabla\bar{p}(t_{k},\cdot)-\nabla p(t_{k},\cdot)\right\Vert
=\left\Vert P^{\bot}\left[ (v(t_{k},\cdot),\nabla)v(t_{k},\cdot)\right]
-P^{\bot}\left[ (\bar{v}(t_{k},\cdot),\nabla)\bar{v}(t_{k},\cdot)\right]
\right\Vert \label{NS27} \displaybreak[0]\\
\leq\left\Vert P^{\bot}\left[ (v(t_{k},\cdot),\nabla)(v(t_{k},\cdot)-\bar
{v}(t_{k},\cdot))\right] \right\Vert +\left\Vert P^{\bot}\left[
(v(t_{k},\cdot)-\bar{v}(t_{k},\cdot),\nabla)\bar{v}(t_{k},\cdot)\right]
\right\Vert \nonumber \displaybreak[0]\\
\leq\left\Vert (v(t_{k},\cdot),\nabla)(v(t_{k},\cdot)-\bar{v}(t_{k},\cdot))\right\Vert +\left\Vert (v(t_{k},\cdot)-\bar{v}(t_{k},\cdot),\nabla)\bar{v}(t_{k},\cdot)\right\Vert \ .\nonumber
\end{gather}
Due to Assumptions~2.1 and (\ref{NS26}),
\begin{equation}
\left\Vert (v(t_{k},\cdot),\nabla)(v(t_{k},\cdot)-\bar{v}(t_{k},\cdot))\right\Vert \leq C(\omega)h^{1/3-\epsilon}\ \ \text{a.s.\ .} \label{NS28}
\end{equation}
Due to Assumptions~4.1 and Theorem~\ref{tmhconuadd},
\begin{equation}
\left\Vert (v(t_{k},\cdot)-\bar{v}(t_{k},\cdot),\nabla)\bar{v}(t_{k},\cdot)\right\Vert \leq C(\omega)h^{1/2-\epsilon}\ \ \text{a.s.\ .}
\label{NS29}
\end{equation}
Thus, (\ref{NS27})-(\ref{NS29}) give $\left\Vert \nabla\bar{p}(t_{k},\cdot)-\nabla p(t_{k},\cdot)\right\Vert\leq C(\omega)h^{1/3-\epsilon}$ a.s., which implies (\ref{NS204}) via the Poincar\'{e} inequality, the pressure being normalized to have zero mean over $Q$. \ $\ \square$
\begin{remark}
To prove the estimate
\begin{equation}
\left\Vert \frac{\partial v}{\partial x^{j}}(t_{k},x)-\frac{\bar{v}(t_{k},x+\delta e_{j})-\bar{v}(t_{k},x-\delta e_{j})}{2\delta}\right\Vert \leq
C(\omega)h^{1/3-\epsilon}\ \ \ \text{a.s.\ ,} \label{eremp1}
\end{equation}
we do not need the assumption on boundedness of second-order spatial
derivatives of the approximate solution. Then, under the conditions of
Theorem~\ref{tmhconuadd} (without the additional assumption on second-order
spatial derivatives of the approximate solution), we can analogously prove
convergence with a.s. order $1/3-\epsilon$ of the approximate pressure
$\bar{p}(t_{k},x)$\ from $(\ref{NSMp})$ with $\breve{v}(t_{k+1},x)$ from
$(\ref{NSM23})$ in which we substitute $\bar{v}(t_{k+1},x)$ found due to
$(\ref{NS18})$.
\end{remark}
\begin{remark}
\label{remp2}As we discussed earlier in this section, though we proved
$1/2-\varepsilon$ a.s. convergence order for the velocity approximation in
Theorem~\ref{tmhconuadd}, we expect that the actual a.s. convergence
order is $1-\varepsilon$, as was observed in our numerical experiments in
Section~\ref{secnum}. Analogously, we expect that spatial derivatives of the
approximate velocity converge with a.s. order $1-\varepsilon$ instead of
$1/3-\epsilon$ shown in $(\ref{NS26})$. It is not difficult to see from the
proof of Proposition~\ref{prp32} that a.s. convergence of both velocity and
its first-order spatial derivatives with order $1-\varepsilon$ implies a.s.
convergence of pressure with order $1-\varepsilon.$ In our numerical
experiments (see Section~\ref{secnum}) we observed convergence (both
mean-square and a.s.) of pressure with order one.
\end{remark}
\section{Numerical examples\label{secnum}}
In this section we test the numerical algorithm (\ref{alg2}) from
Section~\ref{secSimL} on two model problems. The experiments indicate that the
algorithm has first-order mean-square convergence.
\subsection{Model problems}
We introduce two model examples of SNSE (\ref{NS1})-(\ref{NS3}) whose
solutions can be written in analytic form. Both examples are
generalizations of the deterministic model of laminar flow from \cite{Taylor}
to the stochastic case.\medskip
\textbf{First model problem. }Let
\begin{equation}
f(t,x)=0,\text{\ \ }\varphi(x)=0, \label{nl1}
\end{equation}
\begin{align}
q & =1,\label{nl11} \displaybreak[0]\\
\gamma_{1}^{1}(t,x) & =A\sin\frac{2\pi\kappa\ x^{1}}{L}\cos\frac{2\pi
\kappa\ x^{2}}{L}\exp\left( -\sigma^{2}\left( \frac{2\pi\kappa}{L}\right)
^{2}t\right) \ ,\nonumber \displaybreak[0]\\
\gamma_{1}^{2}(t,x) & =-A\cos\frac{2\pi\kappa\ x^{1}}{L}\sin\frac{2\pi
\kappa\ x^{2}}{L}\exp\left( -\sigma^{2}\left( \frac{2\pi\kappa}{L}\right)
^{2}t\right) \ ,\ \kappa\in\mathbf{Z},\ \ A\in\mathbf{R},\nonumber
\end{align}
then it is easy to check that the problem (\ref{NS1})-(\ref{NS3}),
(\ref{nl1})-(\ref{nl11}) has the following solution
\begin{align}
v^{1}(t,x) =&A\sin\frac{2\pi\kappa\ x^{1}}{L}\cos\frac{2\pi\kappa\ x^{2}}{L} \label{nl2} \displaybreak[0] \\
&\times \exp\left( -\sigma^{2}\left( \frac{2\pi\kappa}{L}\right) ^{2}t\right)
w(t)\ , \nonumber \displaybreak[0] \\
v^{2}(t,x) = &-A\cos\frac{2\pi\kappa\ x^{1}}{L}\sin\frac{2\pi\kappa\ x^{2}}{L} \nonumber \displaybreak[0] \\
&\times \exp\left( -\sigma^{2}\left( \frac{2\pi\kappa}{L}\right) ^{2}t\right)
w(t)\ ,\nonumber \displaybreak[0]\\
p(t,x) =&\frac{A^{2}}{4}\left( \cos\frac{4\pi\kappa\ x^{1}}{L}+\cos
\frac{4\pi\kappa\ x^{2}}{L}\right) \nonumber \displaybreak[0] \\
& \times \exp\left( -2\sigma^{2}\left( \frac
{2\pi\kappa}{L}\right) ^{2}t\right) (w(t))^{2}\ .\nonumber
\end{align}
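For concreteness, the exact solution (\ref{nl2}) is straightforward to
evaluate numerically, e.g.\ for cross-checking an implementation of the
algorithm. The following Python sketch (our illustration only; the function
and variable names are not from any existing code) does this on a periodic
grid:
\begin{verbatim}
import numpy as np

# Illustrative parameter choices (any kappa in Z and A in R are admissible)
A, kappa, L, sigma = 1.0, 1, 1.0, 0.1
rng = np.random.default_rng()

def exact_solution_nl2(t, w_t, x1, x2):
    """Exact solution (nl2) of the first model problem at time t,
    given the sampled value w_t = w(t) of the Wiener process."""
    c = 2.0 * np.pi * kappa / L
    decay = np.exp(-sigma**2 * c**2 * t)
    v1 = A * np.sin(c * x1) * np.cos(c * x2) * decay * w_t
    v2 = -A * np.cos(c * x1) * np.sin(c * x2) * decay * w_t
    p = 0.25 * A**2 * (np.cos(2 * c * x1) + np.cos(2 * c * x2)) \
        * decay**2 * w_t**2
    return v1, v2, p

# Example: evaluate on a periodic grid at t = 1 for one sample w(1) ~ N(0,1)
x = np.linspace(0.0, L, 64, endpoint=False)
x1, x2 = np.meshgrid(x, x, indexing="ij")
v1, v2, p = exact_solution_nl2(1.0, rng.standard_normal(), x1, x2)
\end{verbatim}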
\textbf{Second model problem. }To construct this example, we recall the
following proposition from \cite{HRoz07}.
\begin{proposition}
\label{PropRoz}Let $V(t,x),$ $P(t,x)$ be a solution of the deterministic NSE
with zero forcing $($i.e., of $(\ref{NS1})$-$(\ref{NS3})$ with all $\gamma
_{r}=0$ and $f(t,x)=0).$ Then the solution $v(t,x),$ $p(t,x)$ of $(\ref{NS1})$-$(\ref{NS3})$ with constant $\gamma_{r}(t,x)=\gamma_{r}$ and $f(t,x)=0$ is
equal to
equal to
\begin{align}
v(t,x) & =V\left( t,x-\int_{0}^{t}\sum_{r=1}^{q}\gamma_{r}w_{r}(s)ds\right) +\sum_{r=1}^{q}\gamma_{r}w_{r}(t),\label{sgt}\\
p(t,x) & =P\left( t,x-\int_{0}^{t}\sum_{r=1}^{q}\gamma_{r}w_{r}(s)ds\right) . \label{sgt1}
\end{align}
\end{proposition}
Combining this proposition with the deterministic model of laminar flow from
\cite{Taylor}, we obtain that if
\begin{gather}
f(t,x)=0,\text{\ \ }\varphi(x)=\left( A\sin\frac{2\pi\kappa\ x^{1}}{L}\cos\frac{2\pi\kappa\ x^{2}}{L},-A\cos\frac{2\pi\kappa\ x^{1}}{L}\sin\frac{2\pi\kappa\ x^{2}}{L}\right) ^{\top},\label{nl3} \displaybreak[0]\\
\kappa\in\mathbf{Z},\ \ A\in\mathbf{R},\nonumber
\end{gather}
and
\begin{equation}
q=1,\ \gamma_{1}^{1}(t,x)=\gamma^{1},\ \ \gamma_{1}^{2}(t,x)=\gamma^{2},
\label{nl31}
\end{equation}
then the problem (\ref{NS1})-(\ref{NS3}), (\ref{nl3})-(\ref{nl31}) has the
following solution
\begin{align}
v^{1}(t,x) & =A\sin\frac{2\pi\kappa\ \left( x^{1}-\gamma^{1}I(t)\right)
}{L}\cos\frac{2\pi\kappa\ \left( x^{2}-\gamma^{2}I(t)\right) }{L} \label{nl4} \displaybreak[0]\\
& \times \exp\left(
-\sigma^{2}\left( \frac{2\pi\kappa}{L}\right) ^{2}t\right)
+\gamma^{1}w(t),\nonumber \displaybreak[0]\\
v^{2}(t,x) & =-A\cos\frac{2\pi\kappa\ \left( x^{1}-\gamma^{1}I(t)\right)
}{L}\sin\frac{2\pi\kappa\ \left( x^{2}-\gamma^{2}I(t)\right) }{L} \nonumber \displaybreak[0]\\
& \times \exp\left(
-\sigma^{2}\left( \frac{2\pi\kappa}{L}\right) ^{2}t\right)
+\gamma^{2}w(t),\nonumber \displaybreak[0]\\
p(t,x) & =\frac{A^{2}}{4}\left( \cos\frac{4\pi\kappa\ \left( x^{1}-\gamma^{1}I(t)\right) }{L}+\cos\frac{4\pi\kappa\ \left( x^{2}-\gamma^{2}I(t)\right) }{L}\right) \nonumber \displaybreak[0] \\
& \times \exp\left( -2\sigma^{2}\left( \frac{2\pi
\kappa}{L}\right) ^{2}t\right) ,\nonumber
\end{align}
where
\[
I(t)=\int_{0}^{t}w(s)ds,\ w(s)=w_{1}(s).
\]
\subsection{Results of numerical experiments}
In our numerical experiments we test the algorithm (\ref{alg2})-(\ref{alg3}),
(\ref{algp}) which is a realization of the layer method (\ref{NSM21})-(\ref{NSM23}), (\ref{NSMp}). This algorithm possesses the following properties.
\begin{proposition}
\label{Propcons}\textbf{1.} The approximate solution of the problem
$(\ref{NS1})$-$(\ref{NS3})$, $(\ref{nl1})$-$(\ref{nl11})$ obtained by the
algorithm $(\ref{alg2})$-$(\ref{alg3})$, $(\ref{algp})$ contains only those
modes which are present in the coefficient $\gamma_{1}(t,x)$ from
$(\ref{nl11})$, i.e., which are present in the exact solution $(\ref{nl2})$.
\textbf{2.} The approximate solution of the problem $(\ref{NS1})$-$(\ref{NS3})$, $(\ref{nl3})$-$(\ref{nl31})$ obtained by the algorithm
$(\ref{alg2})$-$(\ref{alg3})$, $(\ref{algp})$ contains only those modes which
are present in the initial condition $\varphi(x)$ from $(\ref{nl3})$ and the
zero mode, i.e., which are present in the exact solution $(\ref{nl4})$.
\end{proposition}
The proof of this proposition is analogous to the proof of a similar result in
the deterministic case \cite{NS5} and it is omitted here. \medskip
We measure the numerical error in the experiments as follows. First, we
consider the relative mean-square error defined as
\begin{equation}
err_{msq}^{v}=\frac{\sqrt{E\sum_{\mathbf{n}}|\bar{v}_{\mathbf{n}}(T)-v_{\mathbf{n}}(T)|^{2}}}{\sqrt{E\sum_{\mathbf{n}}|v_{\mathbf{n}}(T)|^{2}}}\ ,\ \ \ err_{msq}^{p}=\frac{\sqrt{E\sum_{\mathbf{n}}|\bar{p}_{\mathbf{n}}(T)-p_{\mathbf{n}}(T)|^{2}}}{\sqrt{E\sum_{\mathbf{n}}|p_{\mathbf{n}}(T)|^{2}}}\ . \label{msqerr}
\end{equation}
Analysis of this error provides us with information about mean-square
convergence of the numerical algorithm considered. To evaluate this error in
the experiments, we use the Monte Carlo technique for finding the expectations
in (\ref{msqerr}) by running $K$ independent (with respect to realizations of
the Wiener process $w(t)$) realizations of $\bar{v}_{\mathbf{n}}(T),$
$v_{\mathbf{n}}(T),$ $\bar{p}_{\mathbf{n}}(T),\ p_{\mathbf{n}}(T).$ Second, we
consider the relative $L_{2}$-error for a fixed trajectory of $w(t):$
\begin{equation}
err^{v}=\frac{\sqrt{\sum_{\mathbf{n}}|\bar{v}_{\mathbf{n}}(T)-v_{\mathbf{n}}(T)|^{2}}}{\sqrt{\sum_{\mathbf{n}}|v_{\mathbf{n}}(T)|^{2}}}\ ,\ \ \ err^{p}=\frac{\sqrt{\sum_{\mathbf{n}}|\bar{p}_{\mathbf{n}}(T)-p_{\mathbf{n}}(T)|^{2}}}{\sqrt{\sum_{\mathbf{n}}|p_{\mathbf{n}}(T)|^{2}}}\ . \label{err}
\end{equation}
Analysis of this error provides us with information about a.s. convergence of
the numerical algorithm. To evaluate this error in the tests, we fix a
trajectory $w(t),$ $0\leq t\leq T,$ which is obtained with a small time step.
We note that in the case of the considered examples and the tested algorithm
(see Proposition~\ref{Propcons}) $v_{\mathbf{n}}(T)$ are nonzero only for
$|\mathbf{n}^{1}|=|\mathbf{n}^{2}|=|\kappa|$ and $p_{\mathbf{n}}(T)$ are
nonzero only for $|\mathbf{n}^{1}|=2|\kappa|,$\ $\mathbf{n}^{2}=0$ and
$\mathbf{n}^{1}=0,$\ $|\mathbf{n}^{2}|=2|\kappa|$. Hence, the sums in
(\ref{msqerr}) and (\ref{err}) are finite here. This also implies that it is
sufficient here to take the cut-off parameter $M$ in the algorithm
(\ref{alg2})-(\ref{alg3}), (\ref{algp}) to be equal to $2|\kappa|.$
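Before turning to the results, we remark that both (\ref{msqerr}) and
(\ref{err}) are straightforward to evaluate from the arrays of Fourier
coefficients at $t=T$. The following Python sketch (our illustration only;
the variable names are hypothetical) computes the single-trajectory error
(\ref{err}):
\begin{verbatim}
import numpy as np

def relative_l2_error(approx_coeffs, exact_coeffs):
    """Relative L2-error (err) over the Fourier modes n at the final
    time T, given complex coefficient arrays of equal shape."""
    diff = np.sum(np.abs(approx_coeffs - exact_coeffs) ** 2)
    norm = np.sum(np.abs(exact_coeffs) ** 2)
    return np.sqrt(diff / norm)

# err^v and err^p for one fixed trajectory of w(t), e.g.:
# err_v = relative_l2_error(v_bar_coeffs_T, v_coeffs_T)
# err_p = relative_l2_error(p_bar_coeffs_T, p_coeffs_T)
# For (msqerr), average the squared numerator and denominator over the
# K independent Monte Carlo runs before taking the square roots.
\end{verbatim}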
The test results for the algorithm (\ref{alg2})-(\ref{alg3}), (\ref{algp})
applied to the first model problem (\ref{NS1})-(\ref{NS3}), (\ref{nl1})-(\ref{nl11}) are presented in Tables~\ref{tab1} and~\ref{tab2}. In
Table~\ref{tab1} the \textquotedblleft$\pm$\textquotedblright\ reflects the
Monte Carlo errors in evaluating $err_{msq}^{v}$ and $err_{msq}^{p}$; they
give the confidence intervals for the corresponding values with probability
$0.95$.
\begin{table}[htb] \centering
\caption{Mean-square relative errors $err_{msq}^v$ and $err_{msq}^p$ from (\ref{msqerr}) at $T=3$ in simulation of the problem
(\ref{NS1})-(\ref{NS3}), (\ref{nl1})-(\ref{nl11})
with $\sigma =0.1$, $A=1$, $\kappa =1$, $L=1$
by the algorithm (\ref{alg2})-(\ref{alg3}), (\ref{algp}) with $M=2$ and various time steps $h$.
The \textquotedblleft $\pm $\textquotedblright\ reflects
the Monte Carlo error in evaluating $err_{msq}^v$ and $err_{msq}^p$ via the Monte Carlo technique
with $K=4000$ independent runs.
The exact values (up to 5 d.p.) of the denominators in (\ref{msqerr}) are $0.37470$ and $0.12159$, respectively.
\label{tab1}} \setlength{\tabcolsep}{3pt}
\begin{tabular}{lll}\hline
$h$ & velocity & pressure \\\hline
$0.2$ & $0.0537\ \pm\ 0.0012$ & $0.0710\ \pm\ 0.0038$ \\
$0.1$ & $0.0263\ \pm\ 0.0006$ & $0.0337\ \pm\ 0.0016$ \\
$0.05$ & $0.0130\ \pm\ 0.0003$ & $0.0170\ \pm\ 0.0009$ \\
$0.02$ & $0.0052\ \pm\ 0.0001$ & $0.0066\ \pm\ 0.0003$ \\
$0.01$ & $0.0025\ \pm\ 0.00006$ & $0.0031\ \pm\ 0.0001$ \\\hline
\end{tabular}
\end{table}
\begin{table}[htb] \centering
\caption{Relative errors $err^v$ and $err^p$ from (\ref{err}) at $T=3$ in simulation of the problem
(\ref{NS1})-(\ref{NS3}), (\ref{nl1})-(\ref{nl11})
with $\sigma =0.1$, $A=1$, $\kappa =1$, $L=1$ for a fixed trajectory of the Wiener process $w(t)$
by the algorithm (\ref{alg2})-(\ref{alg3}), (\ref{algp}) with $M=2$ and various time steps $h$.
The exact values (up to 5 d.p.) of the denominators in (\ref{err}) are $0.43950$ and $0.09658$, respectively.
\label{tab2}} \setlength{\tabcolsep}{3pt}
\begin{tabular}{lll}\hline
$h$ & velocity & pressure \\\hline
$0.2$ & $0.0485$ & $0.0585$ \\
$0.1$ & $0.0237$ & $0.0284$ \\
$0.05$ & $0.0117$ & $0.0141$ \\
$0.02$ & $0.0047$ & $0.0056$ \\
$0.01$ & $0.0023$ & $0.0028$ \\\hline
\end{tabular}
\end{table}
We can conclude from Table~\ref{tab1} that both velocity and pressure found
by the algorithm (\ref{alg2})-(\ref{alg3}), (\ref{algp}) demonstrate
mean-square convergence of order $1.$ We also see from Table~\ref{tab2}
that both velocity and pressure converge with order $1$ for a particular,
fixed trajectory of $w(t).$ We note that we repeated the experiment for other
realizations of $w(t)$ and observed the same behavior. The observed first
order convergence of the algorithm is consistent with our prediction (see
(\ref{msqone}), the discussion after it, and Remark~\ref{remp2}).
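As a simple sanity check (our illustration, not part of the original
experiments), the observed convergence order can be extracted from the data
of Table~\ref{tab1} by a log-log fit:
\begin{verbatim}
import numpy as np

# Velocity errors err_msq^v from Table 1 and corresponding time steps h
h = np.array([0.2, 0.1, 0.05, 0.02, 0.01])
err = np.array([0.0537, 0.0263, 0.0130, 0.0052, 0.0025])

# Observed order = slope of the least-squares fit of log(err) vs log(h)
order, _ = np.polyfit(np.log(h), np.log(err), 1)
print(f"observed mean-square order: {order:.2f}")  # approximately 1
\end{verbatim}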
The test results for the algorithm (\ref{alg2})-(\ref{alg3}), (\ref{algp})
applied to the second model problem (\ref{NS1})-(\ref{NS3}), (\ref{nl3})-(\ref{nl31}) are presented in Table~\ref{tab3}. In these tests we limit
ourselves to simulation for a particular, fixed trajectory of $w(t)$ and
observation of a.s. convergence. We note that evaluation of the exact solution
(\ref{nl4}) requires simulation of the integral $I(t).$ This was done in the
following way. At each time step $k+1,$ $k=0,\ldots,N-1,$ we simulate the Wiener
increments $\Delta_{k}w$ as i.i.d. Gaussian $\mathcal{N}(0,h)$ random variables
(and we set $w(t_{k+1})=w(t_{k})+\Delta_{k}w$) and i.i.d. Gaussian
$\mathcal{N}(0,1)$ random variables $\eta_{k}.$ Then (see \cite[Chapter
1]{MT1}):
\[
I(t_{k+1})=I(t_{k})+hw(t_{k})+\frac{h}{2}\Delta_{k}w+\frac{h^{3/2}}{\sqrt{12}}\eta_{k}\ .
\]
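In code, one step of this joint simulation of $w(t)$ and $I(t)$ can be
sketched as follows (a Python illustration of the above recursion; the
function and variable names are our own):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def step_w_and_I(w, I, h):
    """One step of the joint simulation of the Wiener process w(t) and
    its integral I(t) = int_0^t w(s) ds, following the recursion above."""
    dw = np.sqrt(h) * rng.standard_normal()   # Delta_k w ~ N(0, h)
    eta = rng.standard_normal()               # independent N(0, 1)
    I_next = I + h * w + 0.5 * h * dw + h**1.5 / np.sqrt(12.0) * eta
    w_next = w + dw
    return w_next, I_next

# Example: one fixed trajectory of (w, I) on [0, T] with T = 3, h = 0.01
w, I, h = 0.0, 0.0, 0.01
for _ in range(300):
    w, I = step_w_and_I(w, I, h)
\end{verbatim}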
\begin{table}[htb] \centering
\caption{Relative errors $err^v$ and $err^p$ from (\ref{err}) at $T=3$ in simulation of the problem
(\ref{NS1})-(\ref{NS3}), (\ref{nl3})-(\ref{nl31})
with $\sigma =0.1$, $A=1$, $\kappa =1$, $L=1$, $\gamma^1 =0.5$, $\gamma^2 =0.2$
for a fixed trajectory of the Wiener process $w(t)$
by the algorithm (\ref{alg2})-(\ref{alg3}), (\ref{algp}) with $M=2$ and various time steps $h$.
The exact values (up to 6 d.p.) of the denominators in (\ref{err}) are $0.505620$ and $0.000548$, respectively.
\label{tab3}} \setlength{\tabcolsep}{3pt}
\begin{tabular}{lll}\hline
$h$ & velocity & pressure \\\hline
$0.01$ & $0.166$ & $0.973$ \\
$0.005$ & $0.068$ & $0.384$ \\
$0.002$ & $0.024$ & $0.134$ \\
$0.001$ & $0.0118$ & $0.0645$ \\
$0.0005$ & $0.0058$ & $0.0313$ \\\hline
\end{tabular}
\end{table}
Again, the observed first order convergence of the algorithm in
Table~\ref{tab3} is consistent with our prediction (see the discussion after
(\ref{msqone}) and Remark~\ref{remp2}).
\section*{Acknowledgments}
The work was partially supported by the Royal Society International Joint
Project grant JP091142.
\section*{Introduction}
\renewcommand*{\thetheoremintro}{\Alph{theoremintro}}
\noindent
The intimate connections between operator algebras and dynamical systems go all the way back to the classical formulation of quantum mechanics; they are a pivotal element in Connes' noncommutative geometry and have spurred beautiful results in Popa's rigidity theory. Dynamical systems are an inexhaustible source of inspiring examples, and the links to operator algebras are strong enough to carry back and forth fundamental concepts and ideas as well as very concrete statements and technology.
On the ergodic side, striking examples are the classical Rokhlin lemma, the version of Ornstein and Weiss for probability measure preserving actions, and its implications for the Connes-Feldman-Weiss theorem; see \cite{KerrLi:Book} for an overview. On the topological side, the Rokhlin lemma works best for actions on the Cantor set and has been the point of departure for important developments in $C^{*}$-dynamics, notably Giordano-Putnam-Skau's rigidity up to strong orbit equivalence and Kishimoto's Rokhlin properties; see \cite{GPS:orbit,Ks1,Kishimoto-flows}. Even though the latter carry over the Rokhlin lemma to a $C^{*}$-algebra context in a very elegant manner, their scope is limited by (sometimes obvious, sometimes hidden) topological obstructions. In \cite{HWZ} this problem was circumvented by the notion of Rokhlin dimension, which may be regarded as a higher dimensional version of the Rokhlin lemma and is much more prevalent than the Rokhlin property. In fact, it is often generic or equivalent to outerness; cf.\ \cite{HWZ, BEMSW}. The concept makes direct contact with the striking recent developments in the structure and classification theory of nuclear $C^{*}$-algebras. First, it allows one to derive in many cases that transformation group $C^{*}$-algebras are classifiable in the sense of Elliott; the reason is that the dynamical notion of Rokhlin dimension interacts nicely with $C^{*}$-algebraic concepts like finite nuclear dimension and $\mathcal{Z}$-stability. Moreover, it provides a way to observe the latter phenomena
at the level of the underlying dynamical systems themselves. This aspect is particularly fruitful, since it makes it possible to link $C^{*}$-algebraic regularity with well-studied dynamical notions such as the mean dimension of Lindenstrauss and Weiss \cite{LinWei:meandimension} or Gutman's marker property \cite{Gutman-marker, Gut:ETDS}; cf.\ \cite{Szabo}.
In \cite{HWZ} Rokhlin dimension was defined for actions of the integers and of finite groups; in \cite{Szabo} the notion was generalized to $\mathbb{Z}^{d}$-actions, and in \cite{SWZ} to residually finite groups. All of these work nicely, and yield very convincing results. They are, however, restricted to discrete groups and therefore miss out on many applications from geometry or physics, which are typically related to one parameter actions, or time evolutions. The present paper addresses this issue by introducing Rokhlin dimension for flows (i.e.\ continuous $\mathbb{R}$-actions) on $C^{*}$-algebras. Just as for $\mathbb{Z}$-actions, our definition is a higher dimensional version of the corresponding Rokhlin property introduced by Kishimoto in \cite{Kishimoto-flows}.
The basic idea behind the aforementioned Rokhlin properties is to find an exhaustive set of finite (or compact) approximate representations of the group $G$ in the $C^{*}$-algebra $A$, which are at the same time approximately central. For discrete $G$, this can be elegantly expressed using the central sequence algebra $A_{\infty} \cap A'$ (i.e., bounded sequences commuting with constant sequences, identified up to null sequences). One could then define a $\mathbb{Z}$-action $\alpha$ on $A$ to have the Rokhlin property, if for any natural number $m$ one finds a unitary $u \in A_{\infty} \cap A'$ of order $m$ such that for each $k \in \mathbb{Z}$ one has $\alpha_{k}(u) = e^{2\pi i k/m} u$ --- or equivalently, if there are pairwise orthogonal projections (which we think of as a Rokhlin tower) $e_{0},\ldots,e_{m-1} \in A_{\infty} \cap A'$ such that $\alpha_{k}(e_{i}) = e_{i+k}$ (with indices taken modulo $m$) and $\sum_{i} e_{i} = 1$ (let us not worry about the nonunital case for the moment). For $\mathbb{Z}$-actions this definition works, but has limited scope since it runs straight into $K$-theoretic obstructions. The way out is to allow two Rokhlin towers instead of just one, of lengths $m$ and $m+1$, respectively, such that the action is the shift within each tower individually, and maps the sum of the top levels to the sum of the bottom levels (possibly mixing the two towers at this point). This variant is much more common, since its $K$-theoretic obstructions are minimized; of course it still requires the existence of nontrivial projections. The higher dimensional, or colored, version of \cite{HWZ} bypasses even this latter restriction, and indeed it was shown in \cite{HWZ} and in \cite{Szabo} that it occurs in stunning generality. The idea is to replace projections by positive elements (and thus, intuitively speaking, generalize from a clopen partition to an open cover), and group them into finitely many sets of pairwise orthogonal towers, with the action still shifting within each one. Again the definition leaves some amount of flexibility, as to whether the action shifts cyclically or not within each tower, or whether the elements of different towers commute.
For flows Kishimoto has given an analogous definition, but the situation is more subtle. First, since one is interested in point-norm continuous actions, the sequence algebra needs to be restricted to the subalgebra $A^{(\alpha)}_{\infty}$ consisting of those elements for which the induced action remains continuous. Kishimoto then defines $\alpha$ to have the Rokhlin property if for any (typically small) $p \in \mathbb{R}$ there is a unitary $v \in A^{(\alpha)}_{\infty} \cap A'$ such that for any $t \in \mathbb{R}$ we have $\alpha_{\infty,t}(v) = e^{ipt} v$. The obstructions to having the Rokhlin property for flows are more refined than for integer actions (see Proposition~\ref{prop:obstruction} below) --- but again it does appear in important examples; see \cite{Kishimoto-flows, Kishimoto-shift, Kishimoto-flows-O2, BratelliKishimotoRobinson}. Just as for $\mathbb{Z}$-actions one can define a higher dimensional version:\ a positive contraction can generally be interpreted as a cone over a projection, and in the same way a cone over a unitary corresponds to a normal contraction; Rokhlin dimension can then be modeled using finite sums of such normal contractions, with each one witnessing the $\mathbb{R}$-action periodically (again there is some flexibility, e.g.\ whether the normal elements are required to commute or not). In the unital case our definition reads as follows (cf.\ Definitions~\ref{Def:dimrok} and \ref{Def:dimrok-comm} below):
\begin{defnintro}
Let $A$ be a separable unital $C^*$-algebra with a flow $\alpha: {\mathbb R}\to\mathrm{Aut}(A)$, i.e., a point-norm continuous action by automorphisms. We say $\alpha$ has Rokhlin dimension $d$, if $d$ is the smallest natural number such that the following holds:\ for every $p\in {\mathbb R}$, there exist normal contractions $x^{(0)},x^{(1)},\ldots,x^{(d)}\in A_\infty^{(\alpha)} \cap A'$ with $x^{(0)*}x^{(0)}+\dots+x^{(d)*}x^{(d)}=1$ and $\alpha_{\infty,t}(x^{(j)})=e^{ipt}x^{(j)}$ for all $t\in{\mathbb R}$ and for $j=0,\dots,d$.
We say the Rokhlin dimension with commuting towers is $d$, if for any $p$ there are pairwise commuting $x^{(j)}$ as above.
\end{defnintro}
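Observe that Rokhlin dimension zero recovers Kishimoto's Rokhlin property: for $d=0$ the condition asks for a single normal contraction $x^{(0)}$ with $x^{(0)*}x^{(0)}=1$, and normality then gives
\[
x^{(0)}x^{(0)*}=x^{(0)*}x^{(0)}=1,
\]
so that $x^{(0)}$ is a unitary $v$ satisfying $\alpha_{\infty,t}(v)=e^{ipt}v$ for all $t\in{\mathbb R}$.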
It is clear that the definition can be generalized to other groups without too much effort. However, this would require a certain amount of choices, and since at this point we do not have a sufficiently large stock of guiding examples, we stick to ${\mathbb R}$-actions for most of our paper. The main exception is Section~\ref{Section:reduction}, in which we consider closed cocompact subgroups of locally compact groups and define their relative Rokhlin dimension. In this setting there is not much room for choice and it seems reasonable to state the definitions and results in a general context.
In view of the advances in the structure and classification theory of nuclear $C^{*}$-algebras---cf.\ \cite{GLN}, \cite{EGLN:arXiv}, \cite{Win:AJM} and \cite{TWW}---it is particularly relevant how Rokhlin dimension behaves in connection with nuclear dimension in the sense of \cite{winter-zacharias}, and with $D$-absorption, where $D$ is a strongly self-absorbing $C^{*}$-algebra; cf.\ \cite{TomsWinter07}. These two concepts are very closely related via the Toms-Winter conjecture; cf.\ \cite{ElliottToms08, Winter:dimnuc-Z-stable, SWW:Invent}. Finite nuclear dimension is preserved under forming crossed products by flows with finite Rokhlin dimension (and one can give concrete upper bounds; see Theorem~\ref{Thm:dimnuc-bound}).
\begin{thmintro}
Let $A$ be a separable $C^*$-algebra and let $\alpha: {\mathbb R} \to \mathrm{Aut}(A)$ be a flow. If the nuclear dimension of $A$ and the Rokhlin dimension of $\alpha$ are finite, then so is the nuclear dimension of the crossed product $A \rtimes_{\alpha}{\mathbb R}$.
\end{thmintro}
The situation for $D$-absorption ($D$ strongly self-absorbing) is similar, just as in \cite{HWZ}:\ $D$-absorption is preserved under forming crossed products by flows with finite Rokhlin dimension with commuting towers; see Theorem~\ref{Thm:Z-absorption}, which generalizes \cite[Theorem 5.2]{HW} to the case of finite Rokhlin dimension.
\begin{thmintro}
Let $D$ be a strongly self-absorbing $C^*$-algebra. Let $A$ be a separable, $D$-absorbing $C^*$-algebra and let $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ be a flow with finite Rokhlin dimension with commuting towers. Then $A \rtimes_{\alpha} {\mathbb R}$ is $D$-absorbing.
\end{thmintro}
For a non-unital $C^*$-algebra, it is also meaningful to ask whether it tensorially absorbs the algebra of compact operators, i.e., whether it is stable. We show that, for crossed products by flows, this is always true under the assumption of finite Rokhlin dimension (see Corollary~\ref{cor:stability}).
\begin{thmintro}
Let $A$ be a separable $C^*$-algebra with a flow $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ of finite Rokhlin dimension. Then $A \rtimes_\alpha {\mathbb R}$ is stable.
\end{thmintro}
We already pointed out that the concept of finite Rokhlin dimension generalizes Kishimoto's Rokhlin property, which is particularly interesting and relevant for noncommutative $C^{*}$-dynamical systems. However, it turns out that finite Rokhlin dimension also plays an important role in the classical setting of flows on locally compact spaces. It is easy to see that for such flows finite Rokhlin dimension implies freeness (i.e., every point is only left fixed by the neutral element). What is arguably our main result says that for finite dimensional spaces the converse holds (cf.\ Corollary~\ref{cor:top-flow-Rokhlin-estimate} and \ref{cor:top-flow-dimnuc-estimate}):
\begin{thmintro}
Let $Y$ be a locally compact and metrizable space with finite covering dimension and let $\Phi$ be a free flow on $Y$. Then the induced flow on $C_{0}(Y)$ has finite Rokhlin dimension. As a consequence, the crossed product $C^{*}$-algebra $C_{0}(Y) \rtimes \mathbb{R}$ has finite nuclear dimension.
\end{thmintro}
The proof is intricate, and is based on high-powered technology (called the flow space construction) developed by Bartels, L\"uck and Reich \cite{BarLRei081465306017991882} in the context of their work on the Farrell-Jones conjecture (we actually use the version given by Kasprowski and R\"uping in \cite{Kasprowski-Rueping}). The statement of the result (free flows on finite dimensional spaces have finite Rokhlin dimension) is completely analogous to Szab{\'o}'s \cite{Szabo}. However, the proofs are quite different, and we find it most remarkable that both arguments import high-end machinery that was developed for entirely different purposes---the Farrell-Jones conjecture for hyperbolic groups
in our case, and Gutman's marker property from \cite{Gutman-marker, Gut:ETDS}, which is based on \cite{Lind:IHES},
in Szab{\'o}'s case.
Upon returning to the associated $C^{*}$-algebras, we then combine our results with recent progress in Elliott's classification program for nuclear $C^{*}$-algebras. The outcome is a far-reaching classification theorem for crossed product $C^{*}$-algebras associated to free and minimal flows on finite dimensional spaces, analogous to that of \cite{TomsWinter:minhom} (see also \cite{TomsWinter:PNAS}):\ whenever the crossed products are Morita equivalent to unital $C^{*}$-algebras, they are classified by their Elliott invariant (cf.\ Corollary~\ref{cor:classification}).
\begin{thmintro}
\label{intro:classification}
Let $Y$ be a locally compact and metrizable space and $\Phi$ a flow on $Y$. Suppose that $Y$ has finite covering dimension and $\Phi$ is free and minimal. Then the crossed product $C_{0}(Y) \rtimes \mathbb{R}$ is classifiable in the sense of Elliott, provided that it contains a nonzero projection.
\end{thmintro}
When the space is a compact manifold, the invariant boils down to topological $K$-theory together with the Ruelle-Sullivan map, a natural pairing with the space of invariant probability measures. Thanks to \cite{connes}, the invariant is very computable, and known in many situations.
The existence of some nonzero projection is a minimum requirement for the current state of the art of the classification program, which is not yet developed far enough to yield a result of similar strength for stably projectionless $C^{*}$-algebras. The condition is in particular satisfied for uniquely ergodic smooth flows on compact manifolds, provided the Ruelle-Sullivan current associated to the measure yields a nontrivial class in the first cohomology group; cf.\ \cite{connes, KelPut}. We also provide another condition in terms of compact transversal subsets of the flow, which naturally produces nontrivial projections in the crossed product.
\bigskip
\noindent
{\bf Acknowledgements.} We would like to thank Sel{\c c}uk Barlak, Arthur Bartels, Hiroki Matui, Joav Orovitz, N. Christopher Phillips, Mikael R{\o}rdam, Yasuhiko Sato, Claude Schochet, and Joachim Zacharias for various inspiring conversations. We also thank the participants of the kleines seminar in 2013/14 in M\"unster, which helped us a great deal to better understand the flow space construction of \cite{BarLRei081465306017991882}. Finally, we would like to thank the referee for their impressively quick and careful proofreading, and for some helpful and detailed comments on the first submitted version.
\bigskip
\tableofcontents
\section{Preliminaries}
\label{preliminaries}
\noindent
Let us start by fixing some notations and conventions that we use in this paper, and by recalling some definitions.
\begin{Notation}
Let $A$ be a $C^*$-algebra. We denote by $\ell^{\infty}({\mathbb N},A)$ the $C^*$-algebra of all norm-bounded sequences with values in $A$, and by $c_0({\mathbb N},A)$ the $C^*$-subalgebra of all null sequences. The sequence algebra of $A$ is defined as the quotient $A_\infty=\ell^\infty({\mathbb N},A)/c_0({\mathbb N},A)$. One views $A$ as embedded into $A_{\infty}$ as (equivalence classes of) constant sequences in $\ell^{\infty}({\mathbb N},A)$. We shall refer to this as the standard embedding of $A$ and sometimes write $\iota_A: A\to A_\infty$.
Every automorphism $\phi\in\operatorname{Aut}(A)$ naturally induces an automorphism $\ell^\infty(\phi)$ on $\ell^{\infty}({\mathbb N},A)$ by componentwise application of $\phi$. This automorphism leaves the ideal $c_0({\mathbb N},A)$ invariant, and therefore induces an automorphism $\phi_{\infty}$ of $A_{\infty}$.
Now let $G$ be a locally compact group. For a point-norm continuous action $\alpha: G\to\mathrm{Aut}(A)$ of $G$ on $A$, the map $\ell^\infty(\alpha): G\to\mathrm{Aut}(\ell^\infty({\mathbb N},A))$ given by $g\mapsto\ell^\infty(\alpha_g)$ again yields an action. Likewise, the map $\alpha_\infty: G\to\mathrm{Aut}(A_\infty)$ given by $g\mapsto(\alpha_{g})_{\infty}=\alpha_{\infty,g}$ yields an action.
However, given $x \in \ell^{\infty}({\mathbb N},A)$, the map $g \mapsto \ell^\infty(\alpha_g)(x)$ need not be continuous in general. We thus consider the $C^*$-subalgebra $\ell^{\infty, (\alpha)}({\mathbb N},A)$ of elements $x$ for which this map is continuous. With an elementary $\varepsilon/2$-argument, one can see that $c_0({\mathbb N},A)\subset\ell^{\infty,(\alpha)}({\mathbb N}, A)$. We thus define $A_{\infty}^{(\alpha)} = \ell^{\infty, (\alpha)}({\mathbb N},A) / c_0({\mathbb N},A) \subset A_\infty$. Then the restriction of $\alpha_\infty$ yields a point-norm continuous action of $G$ on $A_\infty^{(\alpha)}$. Clearly the image of the standard embedding of $A$ is in $A_\infty^{(\alpha)}$, so we can view $\iota_A$ also as a map into $A_\infty^{(\alpha)}$.
\end{Notation}
\begin{Rmk} \label{continuous seq}
As an alternative to the above, we might also have defined $A_{\infty}^{(\alpha)}\subset A_\infty$ as the $C^*$-algebra consisting of those elements $x\in A_{\infty}$ for which the assignment $g\mapsto\alpha_{\infty,g}(x)$ yields a norm-continuous map from $G$ to $A_{\infty}$. In fact, these two definitions coincide as long as we assume $\alpha$ to be point-norm continuous.
This follows from \cite[Theorem 2]{LarryBrown98}.
\end{Rmk}
The following two observations will be useful throughout the paper; the crossed products in question are the full ones.
\begin{Lemma}
\label{Rmk:cont-part-crossed-product-embeds}
Let $A$ be a $C^*$-algebra, let $G$ be a locally compact group and let $\alpha: G\to\operatorname{Aut}(A)$ be a point-norm continuous action. Then there exists a natural $^{*}$-homomorphism $\Phi: A_\infty^{(\alpha)}\rtimes_{\alpha_\infty} G \to (A\rtimes_\alpha G)_\infty$ that is compatible with the standard embeddings in the sense that $\Phi\circ (\iota_A\rtimes G) = \iota_{A\rtimes_\alpha G}$.
\end{Lemma}
\begin{proof}
Despite slight abuse of notation, let us denote the constant sequence embedding of $A$ into either $\ell^{\infty}({\mathbb N}, A)$ or $\ell^{\infty,(\alpha)}({\mathbb N},A)$ by $\iota_A$.
Consider the evaluation maps $\operatorname{ev}_n: \ell^\infty({\mathbb N},A)\to A$ given by $\operatorname{ev}_n\big( (a_k)_k \big)=a_n$ for $n\in{\mathbb N}$. These are clearly $\ell^\infty(\alpha) - \alpha$ equivariant. In particular, restricting to the continuous part $\operatorname{ev}_n: \ell^{\infty,(\alpha)}({\mathbb N},A)\to A$ yields an equivariant ${}^*$-homomorphism. Consider the induced ${}^*$-homomorphism between the crossed products $\operatorname{ev}_n\rtimes G: \ell^{\infty,(\alpha)}({\mathbb N},A)\rtimes_{\ell^\infty(\alpha)} G \to A\rtimes_\alpha G$. Writing these into a sequence, we obtain a ${}^*$-homomorphism
$$
\Psi = (\operatorname{ev}_n\rtimes G)_n: \ell^{\infty,(\alpha)}({\mathbb N},A)\rtimes_{\ell^\infty(\alpha)} G\to\ell^\infty({\mathbb N},A\rtimes_\alpha G)
\, ,
$$
which is clearly compatible with the embeddings $A\subset\ell^{\infty,(\alpha)}({\mathbb N},A)$ and $A\rtimes_\alpha G\subset\ell^\infty({\mathbb N},A\rtimes_\alpha G)$ in the sense that $\Psi\circ (\iota_A\rtimes G)=\iota_{A\rtimes_\alpha G}$.
Now by definition, we have an equivariant short exact sequence
\[
\xymatrix{
0 \ar[r] & c_0({\mathbb N},A) \ar[r] & \ell^{\infty,(\alpha)}({\mathbb N},A)\ar[r] & A_\infty^{(\alpha)} \ar[r] & 0.
}
\]
Applying the ${}^*$-homomorphism $\Psi$ constructed above and observing that it restricts to a canonical isomorphism between $c_0({\mathbb N},A)\rtimes_{\ell^\infty(\alpha)} G$ and $c_0({\mathbb N},A\rtimes_\alpha G)$, we get an induced ${}^*$-homomorphism, as illustrated by the following diagram:
\[
\xymatrix@C-2mm{
0 \ar[r] & c_0({\mathbb N},A)\rtimes_{\ell^\infty(\alpha)} G \ar[r] \ar[d]_\Psi^\cong & \ell^{\infty,(\alpha)}({\mathbb N},A)\rtimes_{\ell^\infty(\alpha)} G\ar[r] \ar[d]_\Psi & A_\infty^{(\alpha)}\rtimes_{\alpha_\infty} G \ar[r]\ar@{-->}[d]_\Phi & 0 \\
0 \ar[r] & c_0({\mathbb N},A\rtimes_\alpha G) \ar[r] & \ell^{\infty}({\mathbb N},A\rtimes_\alpha G)\ar[r] & (A\rtimes_\alpha G)_\infty \ar[r] & 0.
}
\]
The identity $\Phi\circ (\iota_A\rtimes G) = \iota_{A\rtimes_\alpha G}$ then follows from the analogous one which holds for $\Psi$.
\end{proof}
\begin{Lemma}
\label{Rmk:equivariant-order-zero-maps}
Let $A$ and $B$ be $C^*$-algebras, let $G$ be a locally compact group and let $\alpha: G\to\operatorname{Aut}(A)$ and $\beta: G\to\operatorname{Aut}(B)$ be point-norm continuous actions. Let $\varphi: A\to B$ be an $\alpha - \beta$ equivariant c.p.c.~order zero map. Then there is an induced c.p.c.~order zero map $\varphi\rtimes G: A\rtimes_\alpha G\to B\rtimes_\beta G$.
In fact, for every $k\in{\mathbb N}$, the full crossed product construction is functorial with respect to sums of $k$ c.p.\ order zero maps via the assignment $\sum_{j=1}^k\varphi^{(j)}\mapsto \sum_{j=1}^k \varphi^{(j)}\rtimes G$.
\end{Lemma}
\begin{proof}
If $(A,\alpha,G)$ is any $C^*$-dynamical system, denote by
\[
\iota^\alpha: A\to\mathcal{M}(A\rtimes_\alpha G) \quad\text{and}\quad \lambda^\alpha: C^*(G)\to \mathcal{M} (A\rtimes_\alpha G)
\]
the two $^*$-homomorphisms coming from the canonical covariant representation on $A\rtimes_\alpha G$. If $S\subset A$ is some generating set for $A$, then the elements $\iota^\alpha(a)\lambda^\alpha(f)\in A\rtimes_\alpha G$, for $a\in S$ and $f\in C_c(G)$, generate $A\rtimes_\alpha G$ as a $C^*$-algebra.
Now let us prove the assertion. By the structure theorem for order zero maps \cite[Corollary 4.1]{winter-zacharias-order-zero}, there is a (unique) ${}^*$-homomorphism $\psi: C_0( (0,1], A)\to B$ such that $\psi(\mathrm{id}_{[0,1]}\otimes a)=\varphi(a)$ for all $a\in A$. Equipping the cone over $A$ with the action $C\alpha=(\mathrm{id}_{C_0(0,1]}\otimes\alpha): G\to\operatorname{Aut}(C_0( (0,1], A))$, we see that $\psi$ is equivariant, when restricted to the subset $\mathrm{id}_{[0,1]}\otimes A$, because $\varphi$ was assumed to be $\alpha - \beta$ equivariant. But since $\mathrm{id}_{[0,1]}\otimes A$ generates $C_0( (0,1], A)$ as a $C^*$-algebra, it follows that in fact $\psi$ must be $C\alpha - \beta$ equivariant. This induces a ${}^*$-homomorphism
$$
\psi\rtimes G: C_0( (0,1], A)\rtimes_{C\alpha} G \to B\rtimes_\beta G
$$
by functoriality of the full crossed product. Recall that on the aforementioned generators, we have
\[
(\psi\rtimes G)(\iota^{C\alpha}(x)\lambda^{C\alpha}(f))=\iota^\beta(\psi(x))\lambda^\beta(f)\; \text{for all}~x\in C_0( (0,1], A), f\in C_c(G).
\]
Keeping in mind the definition of the action $C\alpha$, we have a natural isomorphism $\mu: C_0\big( (0,1],(A\rtimes_\alpha G) \big)\to C_0( (0,1], A)\rtimes_{C\alpha} G$ via
\[
\mu\bigl( \mathrm{id}_{[0,1]}\otimes(\iota^\alpha(a)\lambda^\alpha(f)) \bigr)=\iota^{C\alpha}(\mathrm{id}_{[0,1]}\otimes a)\lambda^{C\alpha}(f)\quad\text{for all}~a\in A, f\in C_c(G).
\]
Now set $(\varphi\rtimes G)(x)=(\psi\rtimes G)\circ\mu(\mathrm{id}_{[0,1]}\otimes x)$ for all $x\in A\rtimes_\alpha G$. On the generators, it is simply given by
\[
(\varphi\rtimes G)(\iota^\alpha(a)\lambda^\alpha(f)) = \iota^\beta(\varphi(a))\lambda^\beta(f)\quad\text{for all}~ a\in A, f\in C_c(G).
\]
It now follows that the assignment $\sum_{j=1}^k\varphi^{(j)}\mapsto \sum_{j=1}^k \varphi^{(j)}\rtimes G$ is also well-defined and shows that the full crossed product is functorial with respect to sums of $k$ c.p.\ order zero maps for any $k\in{\mathbb N}$.
\end{proof}
For both technical and conceptual reasons, it will be useful in this paper to make use of Kirchberg's variant of the central sequence algebra \cite{Kirchberg-Abel} instead of the ordinary one. One crucial advantage of using this algebra is that it is unital, even if the underlying $C^*$-algebra is not. Kirchberg's central sequence algebra models the approximate behavior of bounded sequences with respect to the strict topology rather than the norm topology.
\begin{Def}[following {\cite[Definition 1.1]{Kirchberg-Abel}}]
Let $A$ be a $C^*$-algebra. Denote
\[
\operatorname{Ann}(A,A_\infty) = \{x \in A_{\infty} \mid xA = Ax = \{0\} \}.
\]
Since any $x\in\operatorname{Ann}(A,A_\infty)$ is in the commutant of $A$, this $C^*$-algebra is a closed two sided ideal in
\[
A_\infty\cap A' = \{x \in A_{\infty} \mid xa = ax~\text{for all}~a\in A \}.
\]
The \emph{(corrected) central sequence algebra} of $A$ is defined as the quotient
\[
F_\infty(A) = (A_\infty\cap A') / \operatorname{Ann}(A,A_\infty).
\]
\end{Def}
\begin{Rmk} \label{Rmk:F(A)-unital}
Note that if $A$ is $\sigma$-unital, then this is a unital $C^*$-algebra, the unit coming from any countable approximate unit of $A$; see \cite[Proposition 1.9(3)]{Kirchberg-Abel}. Moreover, we have $F_\infty(A)=A_\infty\cap A'$, if $A$ is unital.
\end{Rmk}
\begin{Notation}
Let $A$ be a $C^*$-algebra and $\phi\in\mathrm{Aut}(A)$ an automorphism. Then $\phi_\infty(A_\infty\cap A')=A_\infty\cap A'$ and $\phi_\infty(\operatorname{Ann}(A,A_\infty))=\operatorname{Ann}(A,A_\infty)$. This gives rise to an automorphism $\tilde{\phi}_\infty$ on $F_\infty(A)$.
Let $G$ be a locally compact group. Given a point-norm continuous action $\alpha\colon G\to\operatorname{Aut}(A)$, the assignment $g\mapsto\tilde{\alpha}_{\infty,g}$ gives rise to an action $\tilde{\alpha}_\infty$ on $F_\infty(A)$. As before, this action need not be point-norm continuous in general.
We thus define the \emph{continuous central sequence algebra} of $A$ with respect to $\alpha$ as
\[
F_\infty^{(\alpha)}(A) = \{ x\in F_\infty(A) \mid
g\mapsto\tilde{\alpha}_{\infty,g}(x)~\text{is continuous} \}.
\]
\end{Notation}
\begin{Rmk}
Given a point-norm continuous action $\alpha\colon G\to\mathrm{Aut}(A)$ of a locally compact group on a unital $C^*$-algebra, it follows from Remarks~\ref{continuous seq} and \ref{Rmk:F(A)-unital} that $F_\infty^{(\alpha)}(A)=A_\infty^{(\alpha)}\cap A'$ and that $\tilde{\alpha}_{\infty}$ agrees with $\alpha_{\infty}$.
\end{Rmk}
\begin{Rmk}[cf.~{\cite[Definition 1.1]{Kirchberg-Abel}}]
\label{F(A)}
Let $A$ be a $C^*$-algebra. One has a canonical $^{*}$-homo\-morphism
\[
F_\infty(A)\otimes_{\max} A \to A_\infty\quad\text{via}\quad (x+\operatorname{Ann}(A,A_\infty))\otimes a \mapsto x\cdot a.
\]
If $A$ is $\sigma$-unital, this map sends $1\otimes a$ to $a$ for all $a\in A$. Now if $\alpha\colon G\to\operatorname{Aut}(A)$ is a point-norm continuous action of a locally compact group, then the above $^{*}$-homomorphism is $(\tilde{\alpha}_\infty\otimes\alpha) - \alpha_\infty$ equivariant. In view of Remark~\ref{continuous seq}, the map restricts to an equivariant $^{*}$-homomorphism
\[
F_\infty^{(\alpha)}(A) \otimes_{\max} A \to A_\infty^{(\alpha)}.
\]
\end{Rmk}
The following lemma by Kasparov, which asserts that every point-norm continuous action on a $C^*$-algebra admits approximately invariant approximate units, will be useful on several occasions in this paper:
\begin{Lemma}[{\cite[Lemma 1.4]{Kasparov88}}]
\label{Lemma:invariant-approx-unit}
Let $A$ be a $\sigma$-unital $C^*$-algebra, $G$ a $\sigma$-compact, locally compact group and $\alpha\colon G\to\operatorname{Aut}(A)$ a point-norm continuous action. Then there exists an approximate unit $(e_n)_{n \in {\mathbb N}}$ for $A$ such that $\|\alpha_g(e_n)-e_n\|\to 0$ uniformly on compact subsets of $G$.
\end{Lemma}
We fix some terminology and notation for actions of ${\mathbb R}$.
\begin{Notation}
A flow on a $C^*$-algebra $A$ is a point-norm continuous action $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$, that is, we have $\alpha_{t+s}=\alpha_t\circ\alpha_s$ for all $s,t\in{\mathbb R}$, and for every $a \in A$, the map $t \mapsto \alpha_t(a)$ is a norm-continuous function from ${\mathbb R}$ to $A$.
A topological flow on a locally compact Hausdorff space is a group homomorphism $\Phi$ from ${\mathbb R}$ to the homeomorphism group of $X$ that is continuous in the sense that $(t,x) \mapsto \Phi_t(x)$ is a continuous function from ${\mathbb R} \times X$ to $X$. The associated flow $\alpha$ on $C_0(X)$ is given by $\alpha_t (f) = f \circ \Phi_{-t}$, and the continuity requirement on $\Phi$ ensures that $\alpha$ is point-norm continuous. We say that a topological flow $\Phi$ is free if it has no periodic points, that is, the map $t\mapsto\Phi_t(x)$ is injective for every $x\in X$.
\end{Notation}
\begin{Notation} \label{Not:flow-conventions}
Let $A$ be a $C^*$-algebra, and let $\alpha:{\mathbb R} \to \mathrm{Aut}(A)$ be a flow.
The twisted convolution algebra is the space $L^1({\mathbb R},A)$ of equivalence classes of weakly measurable functions (that is, functions $f \colon {\mathbb R} \to A$ such that $\varphi \circ f$ is Borel for any $\varphi \in A^*$, such that $\|f\|_{L^1} = \int_{{\mathbb R}}\|f(t)\|dt < \infty$, and where we identify functions which agree except for a set of measure zero), with the convolution product given by the weak integral $f * g (t) = \int_{{\mathbb R}}f(s)\alpha_s(g(t-s))ds$ and involution $\widetilde{f}(t) = \alpha_t(f(-t)^*)$.
The convolution algebra acts on the left regular representation Hilbert $A$-module $L^2({\mathbb R},A)$ by twisted convolution, as in the previous formula. The closure of those convolution operators is isomorphic to the crossed product. The compactly supported continuous $A$-valued functions $C_c({\mathbb R},A)$ form a dense subalgebra of $L^1({\mathbb R},A)$, and therefore its image is dense in $A \rtimes_{\alpha} {\mathbb R}$. (It is sufficient to consider $C_c({\mathbb R},A)$ for the definition and some computations, however in order to discuss spectra of actions, one requires functions which are not compactly supported.)
Let $\widehat{\alpha}$ be the dual action of $\widehat{{\mathbb R}} \cong {\mathbb R}$ on the crossed product $A \rtimes_{\alpha} {\mathbb R}$. Takai's duality theorem states that $A \rtimes_{\alpha} {\mathbb R} \rtimes_{\widehat{\alpha}} {\mathbb R} \cong A \otimes\mathcal{K}(L^2({\mathbb R}))$ and we will sometimes write $\widehat{{\mathbb R}}$ for the second copy of ${\mathbb R}$ to make it clear which copy we mean. We denote by $\sigma$ the shift flow on $C_0({\mathbb R})$ given by $\sigma_t(f)(s) = f(s-t)$ for all $f\in C_0({\mathbb R})$ and for all $s,t\in{\mathbb R}$. For the induced action $\sigma \otimes \alpha \colon {\mathbb R} \to \mathrm{Aut}(C_0({\mathbb R}) \otimes A)$, there is a natural isomorphism
$$
(C_0({\mathbb R}) \otimes A) \rtimes_{\sigma \otimes \alpha} {\mathbb R} \cong A \rtimes_{\alpha} {\mathbb R} \rtimes_{\widehat{\alpha}} \widehat{{\mathbb R}}.
$$ See \cite[Lemma 7.9.2]{pedersen-book}.
\end{Notation}
\begin{Notation}
To simplify some of the formulas in this paper, we write $\dimnucone(A) = \mathrm{dim}_{\mathrm{nuc}}(A)+1$ and use the analogous notation for the other various notions of dimension that appear in the paper.
\end{Notation}
The following is a technical characterization of nuclear dimension that we will use later on; the statement is well-known and we include a proof only for convenience.
\begin{Lemma}
\label{Lemma:dimnuc-central-sequence}
Let $A$ be a $C^*$-algebra, and let $d,n>0$.
Denote by $\iota:A \to A_{\infty}$ the canonical inclusion as constant sequences.
Suppose that for every finite set $\mathcal{F} \subseteq A$ and for any $\varepsilon>0$, there exists a $C^*$-algebra $B = B_{\mathcal{F},\varepsilon}$ with $\mathrm{dim}_{\mathrm{nuc}}(B) \leq d$, a c.p.c.~map $\varphi:A \to B$ and a family of c.p.c.~order zero maps
$\psi^{(0)},\psi^{(1)},\ldots,\psi^{(n)} : B \to A_{\infty}$ such that
\[
\Bigg \|\iota(x) - \sum_{j=0}^n \psi^{(j)}(\varphi(x)) \Bigg\| \leq \varepsilon
\]
for all $x \in \mathcal{F}$. Then
\[
\dimnucone(A) \leq (d+1)(n+1).
\]
The analogous statement with decomposition rank instead of nuclear dimension holds, if we require that furthermore \[ \Biggl\| \sum_{j=0}^n \psi^{(j)} \Biggl\| \leq 1.\]
\end{Lemma}
\begin{proof}
Let $\mathcal{F}\subset A$ and $\varepsilon>0$ be given. Choose a $C^*$-algebra $B$ with $\mathrm{dim}_{\mathrm{nuc}}(B)\leq d$ and maps $\varphi: A\to B$, $\psi^{(0)},\dots,\psi^{(n)}: B\to A_\infty$ as in the statement with $\|\iota(x) - \sum_{j=0}^n \psi^{(j)}(\varphi(x)) \| \leq \varepsilon$ for all $x\in \mathcal{F}$. Now find a finite dimensional $C^*$-algebra $F$, a c.p.c.~map $\kappa: B\to F$ and c.p.c.~order zero maps $\mu^{(0)},\dots,\mu^{(d)}: F \to B$ with $\|\varphi(x)-\sum_{l=0}^d \mu^{(l)}(\kappa(\varphi(x)))\|\leq\varepsilon$ for all $x\in \mathcal{F}$.
This implies that
\[
\renewcommand{\arraystretch}{2}
\begin{array}{ll}
\multicolumn{2}{l}{ \displaystyle \left\|\iota(x)-\sum_{j=0}^n\sum_{l=0}^d \psi^{(j)}\circ\mu^{(l)}\circ\kappa\circ\varphi(x) \right\| } \\
\leq& \displaystyle \left\|\iota(x)- \sum_{j=0}^n \psi^{(j)}(\varphi(x)) \right\| \\
&\displaystyle +\left\| \sum_{j=0}^n \psi^{(j)}\Bigl(\varphi(x)-\sum_{l=0}^d \mu^{(l)}(\kappa(\varphi(x))) \Bigl) \right\| \\
\leq& \varepsilon+(n+1)\varepsilon \\
= & (n+2)\varepsilon.
\end{array}
\]
Notice that the maps $\psi^{(j)}\circ\mu^{(l)}: F \to A_\infty$ for $j=0,\dots,n$ and for $l=0,\dots,d$ are c.p.c.~order zero. Recall that, by \cite[Remark 2.4]{kirchberg-winter}, sums of $k$ c.p.\ order zero maps from finite dimensional $C^*$-algebras can be lifted to sums of $k$ c.p.\ order zero maps for all $k\in{\mathbb N}$. In particular, we can find an $(n+1)(d+1)$-decomposable, completely positive lift $\Psi=(\Psi_m)_m: F \to \ell^\infty({\mathbb N}, A)$ for $\sum_{j=0}^n\sum_{l=0}^d \psi^{(j)}\circ\mu^{(l)}: F\to A_\infty$. Because it is a lift, we have
\[
\limsup_{m\to\infty}\|x-\Psi_m\circ\kappa\circ\varphi(x)\| \leq (n+2)\varepsilon
\]
for all $x\in \mathcal{F}$. In particular, we can choose some $m\in{\mathbb N}$ with
\[
\|x-\Psi_m\circ\kappa\circ\varphi(x)\| \leq (n+3)\varepsilon
\]
for all $x\in \mathcal{F}$. As $\Psi_m$ is decomposable into a sum of $(n+1)(d+1)$ c.p.\ order zero maps and $\mathcal{F}$ and $\varepsilon$ were arbitrary, this now shows $\dimnucone(A)\leq (n+1)(d+1)$.
For the second statement, assume that the sum $\sum_{j=0}^n \psi^{(j)}$ can always be chosen to be a contraction. If $\mathcal{F}$ and $\varepsilon$ are arbitrary, choose $B$ as above with $\mathrm{dr}(B)\leq d$. Then the sum $\sum_{l=0}^d \mu^{(l)}$ from above can be chosen to be a contraction, and then the map $\sum_{j=0}^n\sum_{l=0}^d \psi^{(j)}\circ\mu^{(l)}$ is also a contraction. In this case, the lift $\Psi=(\Psi_m)_m$ can also be chosen to consist of contractions, thus leading to $\drone(A)\leq(n+1)(d+1)$ with the same argument.
\end{proof}
\section{Rokhlin flows and Rokhlin dimension}
\noindent
We recall the definition of a Rokhlin flow from \cite{Kishimoto-flows}.
\begin{Def}
Let $A$ be a separable, unital $C^*$-algebra, and let $\alpha :{\mathbb R} \to \mathrm{Aut}(A)$ be a flow. We say that $\alpha$ has the \emph{Rokhlin property}, or is a \emph{Rokhlin flow}, if for any $p \in {\mathbb R}$, there exists a unitary $v \in A_{\infty}^{(\alpha)} \cap A'$ such that $\alpha_{\infty,t}(v) = e^{ipt}v$ for all $t \in {\mathbb R}$.
\end{Def}
\begin{Rmk} \label{Rmk:rp-reform}
Fix $M>0$, and let $\lambda_t$ be the ${\mathbb R}$-shift on $C({\mathbb R}/M{\mathbb Z})$ given by $\lambda_t(f)(x) = f(x-t)$. Of course, for different $M$, the notation $\lambda$ means something else. We will suppress the $M$ to lighten notation, as most of the arguments will involve a fixed $M$. Note that the Rokhlin property can be phrased as follows. The flow $\alpha$ has the Rokhlin property if and only if for any $M>0$, there exists a unital $\lambda - \alpha_\infty$ equivariant $^{*}$-homomorphism $C({\mathbb R}/M{\mathbb Z}) \to A_{\infty}^{(\alpha)} \cap A'$.
\end{Rmk}
There are interesting and important examples of Rokhlin flows; see \cite{Kishimoto-flows} for flows on noncommutative tori, and \cite{Kishimoto-flows-O2, Kishimoto-shift, BratelliKishimotoRobinson} for flows on Cuntz algebras. At the same time, there are $K$-theoretic obstructions to having Rokhlin flows on $C^*$-algebras, and thus they are less common than single automorphisms with the Rokhlin property. This may not be so surprising because the definition of the Rokhlin property for flows can be thought of as an analogue of the definition of the cyclic Rokhlin property for a single automorphism, in which the Rokhlin projections consist of one cyclic tower of any prescribed height. This restricted form of the Rokhlin property for single automorphisms does come with severe $K$-theoretic obstructions, e.g.~the $K_0$-group has to be divisible in certain cases. For the case of Rokhlin flows, there is a more subtle obstruction. This was observed in a remark on the top of page 600 of \cite{Kishimoto-flows}. We establish an obstruction of this type here. We first require a lemma concerning the existence of an unbounded trace on crossed products. We recall that an unbounded, densely defined trace $\tau$ on $A$ is said to be faithful if $\tau(a)>0$ for any positive nonzero element $a$ in the domain of $\tau$.
\begin{Lemma}
\label{Lemma:unbounded-trace}
Let $A$ be a unital $C^*$-algebra that admits a tracial state. Let $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ be a flow on $A$. If $A \rtimes_{\alpha} {\mathbb R}$ is simple, then $A\rtimes_{\alpha} {\mathbb R}$ has a faithful, densely defined unbounded trace.
\end{Lemma}
\begin{proof}
For any $\tau \in T(A)$, and any $a \in A$, the map $t \mapsto \tau(\alpha_t(a))$ is continuous. For every $M>0$, we can thus make sense of a Riemann integral
\[
\tau_M(a) = \frac{1}{2M} \int_{-M}^M \tau(\alpha_t(a))~dt.
\]
Being an average of tracial states, $\tau_M$ is a tracial state as well.
Starting with any given tracial state and writing $\tau_n=\tau_M$ for $M=n$, since $A$ is unital, the set $T(A)$ is weak-$*$ compact, and any weak-$*$ cluster point of the family $(\tau_n)_{n \in {\mathbb N}}$ is an $\alpha$-invariant tracial state.
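Explicitly, invariance of such a cluster point follows from the estimate, valid for all $a\in A$ and $s\in{\mathbb R}$,
\[
|\tau_n(\alpha_s(a))-\tau_n(a)| \;=\; \frac{1}{2n}\left| \int_{n}^{n+s} \tau(\alpha_t(a))~dt - \int_{-n}^{-n+s} \tau(\alpha_t(a))~dt \right| \;\leq\; \frac{|s|\cdot\|a\|}{n} \;\stackrel{n\to\infty}{\longrightarrow}\; 0,
\]
which one obtains from a change of variables in the defining integral.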
Now, assume that $\tau$ is an $\alpha$-invariant tracial state on $A$. Consider the twisted convolution algebra $C_c({\mathbb R},A)$, with product
$$
f * g (t) = \int_{{\mathbb R}}f(s)\alpha_s(g(t-s))ds .
$$
By definition, this is a dense subalgebra of the crossed product $A \rtimes_{\alpha} {\mathbb R}$. We define $\zeta \colon C_c({\mathbb R},A) \to {\mathbb C}$ by
$$
\zeta(f) = \tau(f(0)) .
$$
We check that $\zeta$ is an unbounded trace.
Using the fact that $\tau$ is a trace in the third row, and that $\tau$ is $\alpha$-invariant in the fifth, we see that, for $f,g \in C_c({\mathbb R},A)$,
\begin{align*}
\zeta(f * g) & = \tau \left ( \int_{{\mathbb R}} f(s) \alpha_s(g(-s)) ds \right ) \\
& = \int_{{\mathbb R}} \tau \left ( f(s) \alpha_s(g(-s)) \right ) ds \\
& = \int_{{\mathbb R}} \tau \left ( \alpha_s(g(-s)) f(s) \right ) ds \\
& = \int_{{\mathbb R}} \tau \left ( \alpha_{-x}(g(x)) f(-x) \right ) dx \\
& = \int_{{\mathbb R}} \tau \left ( (g(x)) \alpha_x(f(-x)) \right ) dx = \zeta(g * f).
\end{align*}
Furthermore, noting that the adjoint of $f$ is given by $\widetilde{f}(x) = \alpha_{x}(f(-x)^*)$, we have
\begin{align*}
\zeta(f * \widetilde{f}) & =\int_{{\mathbb R}} \tau \left ( f(s) \alpha_s(\widetilde{f}(-s)) \right ) ds \\
& = \int_{{\mathbb R}} \tau \left ( f(s) f(s)^* \right ) ds \geq 0
\end{align*}
and thus $\zeta$ is positive. That $\zeta$ is unbounded follows from the fact that $\|f\|_{A \rtimes_{\alpha}{\mathbb R}} \leq \|f\|_{L^1}$, and the restriction of $\zeta$ to $C_c({\mathbb R},{\mathbb C})\cdot 1_A \subseteq C_c({\mathbb R},A)$ is just a point evaluation, which is well-known to be unbounded in the $L^1$-norm.
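For instance, the functions $f_n(t)=\max\{0,\,1-n|t|\}\cdot 1_A$ satisfy $\zeta(f_n)=\tau(1_A)=1$ for all $n\in{\mathbb N}$, while $\|f_n\|_{L^1}=\tfrac1n\to 0$.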
Since $A\rtimes_{\alpha} {\mathbb R}$ is simple, the trace $\zeta$ is faithful. To see that, pick a nonzero positive element $a \in A \rtimes_{\alpha}{\mathbb R}$ such that $\infty>d_{\zeta}(a)>0$, recalling that $d_{\zeta}(a) = \lim_{n \to \infty} \zeta(a^{1/n})$. Then the restriction of $\zeta$ to the hereditary subalgebra generated by $a$ is bounded and hence faithful. Any nonzero positive element $b$ in the domain of $\zeta$ is dominated by a positive element $a$ in the domain with $\zeta(a)>0$. (Pick any positive element $c$ such that $\zeta(c)>0$ and set $a = b+c$.) Therefore $\zeta(b)>0$ as well.
\end{proof}
\begin{Prop}
\label{prop:obstruction}
Let $A$ be a separable, unital $C^*$-algebra. Suppose that $\alpha$ is a flow on $A$ such that $A \rtimes_{\alpha} {\mathbb R}$ is simple.
If $\alpha$ is a Rokhlin flow, then $A\rtimes_{\alpha} {\mathbb R}$ has a nontrivial projection. If $A$ furthermore admits a tracial state, then $K_1(A)$ is nontrivial.
\end{Prop}
\begin{proof}
Let $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ be a flow as given in the statement. Since $\alpha$ is a Rokhlin flow, we may choose a unitary $v\in A_{\infty}^{(\alpha)} \cap A'$ with $\alpha_{\infty,t}(v)=e^{2\pi it}v$ for all $t\in{\mathbb R}$.
Then there exists a $\lambda - \alpha_\infty$ equivariant unital $^{*}$-homomorphism $\varphi \colon C({\mathbb R}/{\mathbb Z}) \to A_{\infty}^{(\alpha)} \cap A'$, which identifies $C({\mathbb R}/{\mathbb Z})$ with $C^*(v)$. (Note that since $\alpha_{\infty , t}(v) = e^{2\pi i t}v$ for all $t$, and automorphisms preserve spectra, the spectrum of $v$ is necessarily the full unit circle.) By Lemma~\ref{Rmk:cont-part-crossed-product-embeds}, this in turn induces a nonzero $^{*}$-homomorphism $\pi \colon C({\mathbb R}/{\mathbb Z}) \rtimes_{\lambda} {\mathbb R} \to A_{\infty}^{(\alpha)} \rtimes_{\alpha_\infty} {\mathbb R} \to (A \rtimes_{\alpha} {\mathbb R})_{\infty}$.
By Green's Imprimitivity Theorem, we have an isomorphism $C({\mathbb R}/{\mathbb Z}) \rtimes_{\lambda} {\mathbb R} \cong C({\mathbb T}) \otimes\mathcal{K}$ (one can compute this directly as well).
Thus, there exists a nonzero projection in
$\pi(C({\mathbb R}/{\mathbb Z}) \rtimes_{\lambda} {\mathbb R}) \subseteq (A \rtimes_{\alpha} {\mathbb R})_{\infty}$. Since being a projection is a stable relation, it follows that there exists a nonzero projection $p \in A \rtimes_{\alpha} {\mathbb R}$.
If $A$ is furthermore assumed to admit a tracial state, then by Lemma~\ref{Lemma:unbounded-trace}, $A \rtimes_{\alpha} {\mathbb R}$ admits a densely defined, faithful, unbounded trace. By using the pairing between traces and the $K_0$-group, we therefore get $K_0(A \rtimes_{\alpha}{\mathbb R}) \neq 0$. By Connes' analogue of the Thom isomorphism, we have $K_1(A) \cong K_0(A \rtimes_{\alpha}{\mathbb R})$, and in particular, we have $K_1(A) \neq 0$.
\end{proof}
\begin{Exl}
\label{Example:two-spheres}
Suppose that $\Phi$ is a smooth, minimal flow on a smooth compact manifold $M$ with $H^1(M ; {\mathbb Z}) = 0$. Let $\alpha$ be the induced flow on $C(M)$ and $\tau$ a densely defined, unbounded trace on $C(M)\rtimes_{\alpha} {\mathbb R}$ induced by an invariant probability measure on $M$. By \cite[Corollary 2]{connes}, the image of the pairing of $\tau$ and $K_0(C(M)\rtimes_{\alpha}{\mathbb R})$ is trivial. Since there exist invariant probability measures on $M$, and $C(M)\rtimes_{\alpha}{\mathbb R}$ is simple, any such trace $\tau$ is faithful. Therefore, $C(M)\rtimes_{\alpha}{\mathbb R}$ is stably projectionless. In particular, $\alpha$ is not a Rokhlin flow.
Examples of manifolds $M$ with those properties include products of two odd spheres $M = S^n \times S^m$ where $n,m$ are odd numbers greater than 1, as those admit free actions of ${\mathbb T}^2$; see \cite[Theorem 2]{fathi-herman}.
\end{Exl}
We now turn to our main definition.
\begin{Def}
\label{Def:dimrok}
Let $A$ be a separable $C^*$-algebra with a flow $\alpha: {\mathbb R}\to\mathrm{Aut}(A)$. The \emph{Rokhlin dimension} of $\alpha$ is the smallest natural number $d\in{\mathbb N}$, such that the following holds: for every $p\in {\mathbb R}$, there exist normal contractions $x^{(0)},x^{(1)},\ldots,x^{(d)}\in F_\infty^{(\alpha)}(A)$ with $x^{(0)*}x^{(0)}+\dots+x^{(d)*}x^{(d)}=1$ and $\tilde{\alpha}_{\infty,t}(x^{(j)})=e^{ipt}x^{(j)}$ for all $t\in{\mathbb R}$ and for $j=0,\dots,d$. In this case, we write $\dim_{\mathrm{Rok}}(\alpha)=d$. If no such number exists, we say that the Rokhlin dimension is infinite.
\end{Def}
\begin{Rmk}
In Definition~\ref{Def:dimrok}, one could just as well only require the given condition for $p>0$. For $p=0$ the condition always holds by taking $x^{(0)} = 1$ and $x^{(j)}=0$ for $j>0$. If $x^{(0)},\ldots,x^{(d)}$ satisfy the conditions for a given $p$, then $x^{(0)*},\ldots,x^{(d)*}$ satisfy the required conditions with $-p$ instead of $p$.
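Indeed, one computes $\tilde{\alpha}_{\infty,t}(x^{(j)*})=\big(\tilde{\alpha}_{\infty,t}(x^{(j)})\big)^*=e^{-ipt}x^{(j)*}$ for all $t\in{\mathbb R}$, and normality of the $x^{(j)}$ yields
\[
\sum_{j=0}^d x^{(j)}x^{(j)*} = \sum_{j=0}^d x^{(j)*}x^{(j)} = 1.
\]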
\end{Rmk}
We record some further elementary reformulations of Definition~\ref{Def:dimrok}.
\begin{Lemma}
\label{Lemma:def-dimrok-lift}
Let $A$ be a separable $C^*$-algebra with a flow $\alpha: {\mathbb R}\to\mathrm{Aut}(A)$ and let $d\in{\mathbb N}$. The following are equivalent:
\begin{enumerate}[leftmargin=*]
\item The action $\alpha$ has Rokhlin dimension at most $d$.
\label{Lemma:def-dimrok-lift-item-1}
\item
\label{Lemma:def-dimrok-lift-item-2}
For any $p \in {\mathbb R}$, there are contractions $x^{(0)},x^{(1)},\ldots,x^{(d)}\in A_\infty \cap A'$ such that
\begin{enumerate}
\item $x^{(j)}x^{(j)*}a = x^{(j)*}x^{(j)}a$ for all $j=0,1,\ldots,d$ and for all $a \in A$.
\item $\sum_{j=0}^d x^{(j)}x^{(j)*}a = a$ for all $a \in A$.
\item $\alpha_{\infty,t}(x^{(j)})a = e^{ipt}x^{(j)}a$ for all $j$, for all $t \in {\mathbb R}$ and for all $a \in A$.
\end{enumerate}
\item
\label{Lemma:def-dimrok-lift-item-3}
For any $p\in{\mathbb R}$ and for any separable $C^*$-subalgebra $E \subset A_\infty$, there are contractions $x^{(0)},\ldots,x^{(d)}\in A_ \infty \cap E'$ such that
\begin{enumerate}
\item $x^{(j)}x^{(j)*}a = x^{(j)*}x^{(j)}a$ for all $j=0,1,\ldots,d$ and for all $a \in E$.
\item $\sum_{j=0}^d x^{(j)}x^{(j)*}a = a$ for all $a \in E$.
\item $\alpha_{\infty,t}(x^{(j)})a = e^{ipt}x^{(j)}a$ for all $j$, for all $t \in {\mathbb R}$ and for all $a \in E$.
\end{enumerate}
\item
\label{Lemma:def-dimrok-lift-item-4}
Condition~(\ref{Lemma:def-dimrok-lift-item-2}) holds when we furthermore require that $x^{(0)},\dots,x^{(d)}$ are in the subalgebra $A_{\infty}^{(\alpha)}\cap A'$.
\item
\label{alternativedefinitionofRokhlindimensionforflows}
For any $ p, T, \delta > 0 $ and any finite set $\mathcal{F} \subset A$, there are contractions $ x^{(0)}, \dots, x^{(d)} \in A $ satisfying:
\begin{enumerate}
\item
\label{Lemma:def-dimrok-lift-item-5a}
$ \left\| a (\alpha_{t}( x^{(l)} ) - e^{ipt} \cdot x^{(l)}) \right\| \le \delta $ for all $ l = 0, \dots, d $, for all $t \in [ -T, T] $ and for $a \in \mathcal{F}$.
\item
\label{Lemma:def-dimrok-lift-item-5b}
$ \left\| a-a\cdot\sum_{l= 0 }^{d} x^{(l)} x^{(l)*} \right\| \le \delta $ for all $ a \in \mathcal{F}$.
\item
\label{Lemma:def-dimrok-lift-item-5c}
$ \left\| [ x^{(l)} , a ] \right\| \le \delta $ for all $ l = 0, \dots, d $ and $ a \in \mathcal{F}$.
\item
\label{Lemma:def-dimrok-lift-item-5d}
$ \left\| a [ x^{(l)} , x^{(l)*} ] \right\| \le \delta $ for all $ l = 0, \dots, d $ and $ a \in \mathcal{F}$.
\end{enumerate}
In fact, it suffices to consider $ T = 1 $ or any other positive number. Moreover, it is enough to verify this condition for finite sets $ \mathcal{F} $ from a prescribed dense subset of $ A_{\le 1} $.
\end{enumerate}
If $A$ is unital, one can simplify conditions~(\ref{Lemma:def-dimrok-lift-item-2}), (\ref{Lemma:def-dimrok-lift-item-3}) and (\ref{Lemma:def-dimrok-lift-item-5a}), (\ref{Lemma:def-dimrok-lift-item-5b}), (\ref{Lemma:def-dimrok-lift-item-5d}), as it suffices to consider $a=1$.
\end{Lemma}
\begin{proof}
$(\ref{Lemma:def-dimrok-lift-item-1}) \Longleftrightarrow (\ref{alternativedefinitionofRokhlindimensionforflows})$: This is straightforward, by unraveling the definition in terms of representing bounded sequences of elements in the central sequence algebra.
$(\ref{Lemma:def-dimrok-lift-item-1})\implies (\ref{Lemma:def-dimrok-lift-item-2})$: This follows directly by lifting the elements $x^{(0)},\ldots,x^{(d)}$ that appear in Definition~\ref{Def:dimrok} to elements in $A_\infty$.
$(\ref{Lemma:def-dimrok-lift-item-2})\implies (\ref{Lemma:def-dimrok-lift-item-3})$: This follows from a standard reindexation argument.
$(\ref{Lemma:def-dimrok-lift-item-3})\implies (\ref{Lemma:def-dimrok-lift-item-4})$: Apply Lemma~\ref{Lemma:invariant-approx-unit} to choose a positive contraction $e\in A_\infty$ that is fixed under $\alpha_\infty$ and satisfies $ea=ae=a$ for all $a\in A$. Let $p\in{\mathbb R}$. Apply (\ref{Lemma:def-dimrok-lift-item-3}) to $E=C^*(A\cup\{e\})$ and choose $x^{(0)},\dots,x^{(d)}\in A_\infty\cap E'$ accordingly. For all $j=0,\dots,d$, the products $y^{(j)}=x^{(j)}e$ yield elements in $A_\infty\cap A'$ satisfying all the properties of (\ref{Lemma:def-dimrok-lift-item-2}) for $p$. Moreover, we have
\[
\alpha_{\infty,t}(y^{(j)})=\alpha_{\infty,t}(x^{(j)}e) = \alpha_{\infty,t}(x^{(j)})e = e^{ipt}x^{(j)}e = e^{ipt}y^{(j)}.
\]
In particular, we have $y^{(j)}\in A_{\infty}^{(\alpha)}\cap A'$ by appealing to Remark~\ref{continuous seq}.
$(\ref{Lemma:def-dimrok-lift-item-4})\implies (\ref{Lemma:def-dimrok-lift-item-1})$: Having chosen $x^{(0)},\dots,x^{(d)}\in A_{\infty}^{(\alpha)}\cap A'$ with the given properties, consider the corresponding elements $y^{(j)}=x^{(j)}+\operatorname{Ann}(A,A_\infty)\in F_\infty^{(\alpha)}(A)$ for $j=0,\dots,d$. These will satisfy the properties required by Definition~\ref{Def:dimrok}.
\end{proof}
\begin{Lemma}
\label{Lemma:hereditary-subalg}
Let $A$ be a separable $C^*$-algebra. Let $\alpha$ be a flow on $A$. If $B \subseteq A$ is an $\alpha$-invariant, hereditary $C^*$-subalgebra, then $\dim_{\mathrm{Rok}}(\alpha|_{B}) \leq \dim_{\mathrm{Rok}}(\alpha)$.
\end{Lemma}
\begin{proof}
We may assume that $\alpha$ has finite Rokhlin dimension $d=\dim_{\mathrm{Rok}}(\alpha)$, since otherwise there is nothing to show.
By Lemma~\ref{Lemma:invariant-approx-unit}, there exists an approximate unit $(e_n)_{n \in {\mathbb N}}$ for $B$ with $\|\alpha_t(e_n)-e_n\|<1/n$ for every $t \in [-n,n]$ and for all $n$.
Let $e$ be the image of the sequence $(e_1,e_2,\ldots)$ in $B_{\infty} \subseteq A_{\infty}$. Notice that $e$ is fixed under $\alpha_\infty$. In particular, we have $e \in B_{\infty}^{(\alpha)}$. As $B$ is hereditary, it follows that $e A_\infty e \subseteq B_{\infty}$. Use Lemma~\ref{Lemma:def-dimrok-lift} to find elements $x^{(0)},\ldots,x^{(d)}$ as in the statement~(\ref{Lemma:def-dimrok-lift-item-3}), with $E = C^*(A \cup \{e\})$. Set $y^{(j)} = ex^{(j)}e$. Since $e$ was chosen to satisfy $eb=be=b$ for all $b\in B$, it follows that the elements $y^{(0)},\ldots,y^{(d)}$ satisfy the conditions of Lemma~\ref{Lemma:def-dimrok-lift}~(\ref{Lemma:def-dimrok-lift-item-2}) with $B$ in place of $A$ and with $y^{(j)}$ in place of $x^{(j)}$.
\end{proof}
\begin{Rmk}
\label{Rmk:Rokhlin-dim}
As we mentioned in Remark~\ref{Rmk:rp-reform} (in the case of unital $A$), the definition of the Rokhlin property for flows requires that for any $M>0$, there exists a unital $\lambda - \alpha_\infty$ equivariant $^{*}$-homomorphism from $C({\mathbb R}/M{\mathbb Z})$ to $A^{(\alpha)}_{\infty} \cap A'$; for unital $A$, this algebra agrees with $F_\infty^{(\alpha)}(A)$, and $\alpha_\infty$ corresponds to $\tilde{\alpha}_\infty$. Likewise, $\alpha$ has Rokhlin dimension $d$ if and only if $d$ is the smallest natural number such that, for any $M>0$, there are $(d+1)$-many $\lambda - \tilde{\alpha}_\infty$ equivariant c.p.c.~order zero maps $\mu^{(0)},\mu^{(1)},\ldots, \mu^{(d)} \colon C({\mathbb R}/M{\mathbb Z}) \to F_\infty^{(\alpha)}(A)$ with $\sum_{j=0}^d \mu^{(j)}(1) = 1$:
Let $x^{(0)},\dots,x^{(d)}\in F_\infty^{(\alpha)}(A)$ be normal contractions satisfying $\tilde{\alpha}_{\infty,t}(x^{(j)}) = e^{2\pi i t /M}x^{(j)}$ and $x^{(0)*}x^{(0)}+\dots+x^{(d)*}x^{(d)}=1$.
Notice that the universal $C^*$-algebra generated by a normal contraction is isomorphic to $C_{0}\big((0,1]\times{\mathbb T}\big)$, the algebra of continuous functions on the closed unit disk vanishing at the origin, and coincides with the universal $C^*$-algebra generated by the order zero image of a unitary.
Identifying ${\mathbb T} \cong {\mathbb R}/M{\mathbb Z}$, it follows that for each $j=0,\dots,d$, the normal contraction $x^{(j)}\in F_\infty^{(\alpha)}(A)$ induces a $\lambda-\tilde{\alpha}_\infty$ equivariant c.p.c.~order zero map
\[
\mu^{(j)}\colon C(\mathbb{R}/M \mathbb{Z})\to F_\infty^{(\alpha)}(A)\quad\text{via}\quad \mathrm{id}_{C(\mathbb{R}/M \mathbb{Z})}\mapsto x^{(j)}|x^{(j)}|.
\]
Under this correspondence, one has $\mu^{(j)}(1)=x^{(j)*}x^{(j)}$, so the relation $x^{(0)*}x^{(0)}+\dots+x^{(d)*}x^{(d)}=1$ translates precisely into $\sum_{j=0}^d \mu^{(j)}(1)=1$.
This viewpoint will be the one that we use for the proof of Theorem~\ref{Thm:dimnuc-bound}.
\end{Rmk}
We now consider the strong Connes spectrum of a flow with finite Rokhlin dimension. We refer the reader to \cite{kishimoto-strong-connes} for details concerning the strong Connes spectrum, and to \cite{pedersen-book} for a detailed discussion of spectral theory for actions (although it does not cover the strong Connes spectrum). We recall a few definitions. Let $\alpha$ be a flow on a $C^*$-algebra $A$. For $f \in L^1({\mathbb R})$ and $a \in A$, we let
\[
\alpha_f(a) = \int_{-\infty}^{\infty} f(t) \alpha_t(a)~dt
\, .
\]
For $f \in L^1({\mathbb R})$, we let $z(f)$ denote the zero set of the Fourier transform of $f$.
We set
\[
\mathrm{Sp}_{\alpha}(a) = \bigcap \{z(f) \mid \alpha_f(a) = 0\}
\, .
\]
For any closed subset $\Omega \subseteq \widehat{{\mathbb R}}$, we set
\[
A^{\alpha}(\Omega) = \{a \in A \mid \mathrm{Sp}_{\alpha}(a) \subseteq \Omega\}
\]
and
\[
A(\Omega) = \overline{\mathrm{span}\{x^*ay \mid a \in A \; \mathrm{and} \; x,y \in A^{\alpha}(\Omega) \} }
\, .
\]
The strong (Arveson) spectrum of $\alpha$, denoted $\widetilde{\mathrm{Sp}}(\alpha)$, is defined to be the set of all $\xi \in \widehat{{\mathbb R}}$ such that for any closed neighborhood $\Omega$ of $\xi$, one has $A(\Omega) = A$. Lastly, the strong Connes spectrum of $\alpha$ is defined to be
\[
\widetilde{\Gamma}(\alpha) = \bigcap \{\widetilde{\mathrm{Sp}}(\alpha|_{B}) \mid B \mbox{ is a nonzero invariant hereditary subalgebra}\}
\, .
\]
\begin{Prop}
\label{prop:full-connes-spectrum}
Let $A$ be a separable $C^*$-algebra, and let $\alpha$ be a flow on $A$ with finite Rokhlin dimension. Then $\widetilde{\Gamma}(\alpha) = \widehat{{\mathbb R}}$.
\end{Prop}
\begin{proof}
By Lemma~\ref{Lemma:hereditary-subalg}, the restriction of $\alpha$ to any invariant hereditary subalgebra also has finite Rokhlin dimension. Thus, it suffices to show that if $\alpha$ is a flow on a separable $C^*$-algebra $A$ with finite Rokhlin dimension, then $\widetilde{\mathrm{Sp}}(\alpha) = \widehat{{\mathbb R}}$.
Fix $p \in \widehat{{\mathbb R}}$. Use Lemma~\ref{Lemma:def-dimrok-lift}(\ref{Lemma:def-dimrok-lift-item-4}) to find $x^{(0)},\ldots,x^{(d)} \in F_\infty^{(\alpha)}(A)$ as in Definition~\ref{Def:dimrok}, but also admitting representatives in $A_\infty^{(\alpha)}$. These are eigenvectors for the action of $\tilde{\alpha}_\infty$ on $F_\infty^{(\alpha)}(A)$, so for every $f \in L^1({\mathbb R})$ and $j=0,1,\ldots,d$, we have
\[
\tilde{\alpha}_{\infty,f}(x^{(j)}) = \int_{-\infty}^{\infty}f(t)\tilde{\alpha}_{\infty,t}(x^{(j)})~dt =
x^{(j)}\int_{-\infty}^{\infty}f(t)e^{ipt}~dt =
\widehat{f}(-p)x^{(j)}
\, .
\]
Pick a closed neighborhood $\Omega$ of $-p$, and find a function $f \in L^1({\mathbb R})$ such that $\widehat{f}$ is supported in $\Omega$ and $\widehat{f}(-p) = 1$. Then $x^{(j)} = \tilde{\alpha}_{\infty,f}(x^{(j)})$ for all $j$.
For every $j$, pick a lift $\widetilde{x}^{(j)} = (x^{(j)}(1),x^{(j)}(2),\ldots) \in \ell^{\infty , (\alpha)}({\mathbb N},A)$. We can replace $\widetilde{x}^{(j)}$ by $\alpha_f(\widetilde{x}^{(j)})$, and it is still a lift, so we may assume without loss of generality that we have done so. If $g \in L^1({\mathbb R})$ satisfies $z(g) \supseteq \Omega$, then $g*f = 0$, because $\widehat{g*f} = \widehat{g}\cdot \widehat{f} = 0$. For all $j=0,1,\ldots,d$ and all $m$, we thus have $\alpha_g(x^{(j)}(m)) = \alpha_{g*f}(x^{(j)}(m)) = 0$. So each $x^{(j)}(m)$ is in $A^{\alpha}(\Omega)$. Now, for any $a \in A$, we have
\[
\lim_{m \to \infty} \sum_{j=0}^d x^{(j)}(m)^*x^{(j)}(m)a = a
\]
and $\lim_{m\to\infty} \|x^{(j)}(m)a - ax^{(j)}(m)\| = 0$. Hence $a$ is the limit of the elements $\sum_{j=0}^d x^{(j)}(m)^* a\, x^{(j)}(m) \in A(\Omega)$, and thus $a \in A(\Omega)$. Therefore $-p \in \widetilde{\mathrm{Sp}}(\alpha)$. As $p$ was arbitrary, we have $\widetilde{\mathrm{Sp}}(\alpha) = \widehat{{\mathbb R}}$, as required.
\end{proof}
We recall that by \cite[Theorem 3.5]{kishimoto-strong-connes}, if $\alpha: {\mathbb R}\to\operatorname{Aut}(A)$ has full strong Connes spectrum, then the crossed product $A\rtimes_\alpha {\mathbb R}$ is simple if and only if $A$ has no non-trivial $\alpha$-invariant ideals.
Thus, we have the following corollary.
\begin{Cor}
Let $A$ be a separable $C^*$-algebra, and let $\alpha$ be a flow on $A$ with finite Rokhlin dimension. If $A$ has no nontrivial $\alpha$-invariant ideals, then $A \rtimes_{\alpha} {\mathbb R}$ is simple.
\end{Cor}
\section{Nuclear dimension of crossed products}
\label{Section:Nuclear dimension}
\noindent
Let $A$ be a $C^*$-algebra, and let $\alpha:{\mathbb R} \to \mathrm{Aut}(A)$ be a flow.
As before, we let $\sigma: {\mathbb R} \to \mathrm{Aut}(C_0({\mathbb R}))$ be the shift flow. Fix $M>0$, and let $\lambda : {\mathbb R} \to \mathrm{Aut}(C({\mathbb R}/M{\mathbb Z}))$ be the shift modulo $M$.
We consider the actions $\sigma \otimes \alpha : {\mathbb R} \to \mathrm{Aut}(C_0({\mathbb R}) \otimes A)$ and $\lambda \otimes \alpha : {\mathbb R} \to \mathrm{Aut}(C({\mathbb R}/M{\mathbb Z}) \otimes A)$. We recall from Notation~\ref{Not:flow-conventions} that $(C_0({\mathbb R}) \otimes A) \rtimes_{\sigma \otimes \alpha} {\mathbb R} \cong A \otimes\mathcal{K}$.
Let
\begin{align*}
B_0 = & \{
f \in L^1({\mathbb R}, C_0((-M/2,M/2) , A)) \mid \\
& f(t)(x) = 0 \mbox{ whenever } x -t \not \in (-M/2,M/2)
\} \, .
\end{align*}
We can view $C_0((-M/2,M/2),A)$ as a subalgebra of $C_0({\mathbb R},A)$, as well as a subalgebra of $C_0({\mathbb R}/M{\mathbb Z},A)$, where the latter identification is obtained by identifying
$$
C({\mathbb R}/M{\mathbb Z},A) \cong \{f \in C([-M/2,M/2],A) \mid f(-M/2) = f(M/2)\} .
$$
With those identifications, we can view $B_0$ as a closed, self-adjoint, linear subspace of the twisted convolution algebras $L^1({\mathbb R},C_0({\mathbb R},A))$ and of $L^1({\mathbb R},C({\mathbb R}/M{\mathbb Z},A))$. Moreover, we have the following.
\begin{Claim}
$B_0$ is closed under the product operations of both of the algebras $L^1({\mathbb R},C_0({\mathbb R},A))$ and $L^1({\mathbb R},C({\mathbb R}/M{\mathbb Z},A))$, and the two restricted product operations in fact coincide.
\end{Claim}
\begin{proof}
Let us first consider the product in $L^1({\mathbb R},C_0({\mathbb R},A))$, twisted by $\sigma \otimes \alpha$,
\begin{align*}
f * g (t)(x) & = \int_{{\mathbb R}} f(s)(x)(\sigma_s \otimes \alpha_s) (g(t-s))(x)~ds\\
& = \int_{{\mathbb R}} f(s)(x)\alpha_s (g(t-s)(x-s))~ds.
\end{align*}
If $x-s \not \in (-M/2,M/2)$ then $f(s)(x) = 0$ for $f \in B_{0}$. Thus, the latter integral can be rewritten as
\[
\int_{x-M/2}^{x+M/2} f(s)(x)\alpha_s (g(t-s)(x-s))~ds.
\]
If $x-t \not \in (-M/2,M/2)$ then $(x-s) - (t-s) = x-t \not \in (-M/2,M/2)$, so $g(t-s)(x-s) = 0$ for $g \in B_{0}$, and thus it follows for such $x$ that $f*g(t)(x) = 0$, so $f*g \in B_0$.
If we consider instead the product twisted by $\lambda \otimes \alpha$, then the expression $x-s$ above is considered modulo $M$. However, in the way that we represent the integral, we choose $s \in (x-M/2,x+M/2)$. Thus we have $x-s \in (-M/2,M/2)$, and it does not need to be modified. Therefore, we get the exact same expression.
\end{proof}
We denote by $B_{\sigma}$ and $B_{\lambda}$ the completions of $B_0$ in $(C_0({\mathbb R}) \otimes A) \rtimes_{\sigma \otimes \alpha} {\mathbb R}$ and in $(C({\mathbb R}/M{\mathbb Z}) \otimes A) \rtimes_{\lambda \otimes \alpha} {\mathbb R}$, respectively. By representing those two crossed product $C^*$-algebras via the standard left regular representation, the claim above shows that the norm of any $f \in B_0$ is the same in either completion. Thus, the identity map $B_0 \to B_0$ extends to an
isomorphism
\begin{equation} \label{eq:xi}
\zeta \colon B_{\sigma} \to B_{\lambda}.
\end{equation}
Let $h \in C_0({\mathbb R}) \subseteq \mathcal{M}(C_0({\mathbb R},A))$ be the function defined as follows.
\begin{center}
\begin{picture}(230,65)
\put(0,10){\vector(1,0){200}}
\put(86,3){\vector(0,1){50}}
\put(86,7){\line(0,1){6}}
\put(167,7){\line(0,1){6}}
\thicklines
\put(5,10){\line(3,1){81}}
\put(86,37){\line(3,-1){81}}
\put(75,40){\makebox(0,0){$1$}}
\put(5,-4){\makebox(0,0)[b]{\footnotesize $-\frac{M}{2}$\normalsize}}
\put(86,-4){\makebox(0,0)[b]{\footnotesize $0$\normalsize}}
\put(167,-4){\makebox(0,0)[b]{\footnotesize $\frac{M}{2}$\normalsize}}
\put(25,47){\makebox(0,0){$h$}}
\end{picture}
\end{center}
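In formulas, the picture above defines the `tent' function $h(x)=\max\big\{0,\;1-\tfrac{2|x|}{M}\big\}$ for $x\in{\mathbb R}$, which is supported on $[-M/2,M/2]$ and satisfies $h(0)=1$.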
Viewing $h$ as an element of $\mathcal{M}((C_0({\mathbb R}) \otimes A) \rtimes_{\sigma \otimes \alpha} {\mathbb R})$, we can consider the hereditary subalgebra of the crossed product given by $h$, that is,
$$
B_h = \overline{ h \big( (C_0({\mathbb R}) \otimes A) \rtimes_{\sigma \otimes \alpha} {\mathbb R} \big) h} \, .
$$
Notice that $h$ and the $C^*$-algebras $B_{\sigma}$ and $B_{\lambda}$ depend on the choice of $M$. We will write $h(M), B_{\sigma}(M)$ and $B_{\lambda}(M)$ if there is room for confusion, but otherwise we suppress it so as to lighten notation.
\begin{Claim} \label{claim 2}
With the notation as in the discussion above, we have $B_h = B_{\sigma}$.
\end{Claim}
\begin{proof} We first show that if $f \in L^1({\mathbb R},C_0({\mathbb R},A))$, then $h \cdot f \cdot h \in B_0$. For $t \in {\mathbb R}$, denote $h_t(x) = h(x-t)$. Notice that we have $(f \cdot h) (t) = f (t) h_t$. So for every $t,x \in {\mathbb R}$, we have $(h \cdot f \cdot h) (t) (x) = f (t)(x) h(x-t) h(x)$, and $h(x-t)h(x) = 0$ whenever $x \not \in (-M/2,M/2)$ or $x-t \not \in (-M/2,M/2)$. Therefore, we have $B_h \subseteq B_{\sigma}$. It is furthermore easy to see that $h \cdot L^1({\mathbb R},C_0({\mathbb R},A)) \cdot h$ is in fact dense in $B_{\sigma}$, and therefore we have the reverse inclusion as well.
\end{proof}
\begin{Rmk}
\label{Rmk:not-hereditary}
Denote by $h_0 \in C({\mathbb R}/M{\mathbb Z})$ the function given by $h$ on the interval $(-M/2,M/2)$.
We caution the reader that, even if the notational similarity might suggest so, $B_{\lambda}$ is \emph{not} $\overline{h_0 \big( (C({\mathbb R}/M{\mathbb Z}) \otimes A)\rtimes_{\lambda \otimes \alpha} {\mathbb R} \big) h_0}$. In fact, $B_{\lambda}$ is generally not even a hereditary subalgebra of $(C({\mathbb R}/M{\mathbb Z}) \otimes A)\rtimes_{\lambda \otimes \alpha} {\mathbb R}$.
\end{Rmk}
We record the following immediate fact.
\begin{Claim}
\label{claim:h+h-shifted}
Let $h_0$ be as in Remark~\ref{Rmk:not-hereditary} above. Denoting $h_1=\lambda_{M/2}(h_0)$, we have $h_0 + h_1 = 1$.
\end{Claim}
We now come to the main result of this section, which asserts that taking crossed products by flows with finite Rokhlin dimension preserves the property of having finite nuclear dimension. This is the analogue for flows of \cite[Theorem 4.1]{HWZ}.
\begin{Thm}
\label{Thm:dimnuc-bound}
Let $A$ be a separable $C^*$-algebra and let $\alpha: {\mathbb R} \to \mathrm{Aut}(A)$ be a flow. Then we have
\[
\dimnucone(A \rtimes_{\alpha}{\mathbb R}) \leq 2\cdot \dimrokone(\alpha)\cdot \dimnucone(A).
\]
\end{Thm}
\begin{proof}
We may assume that both the Rokhlin dimension of $\alpha$ and the nuclear dimension of $A$ are finite, as there is otherwise nothing to show. Denote $d=\dim_{\mathrm{Rok}}(\alpha)$.
Fix a finite set $\mathcal{F} \subseteq A \rtimes_{\alpha}{\mathbb R}$, and fix $\varepsilon>0$.
We may assume without loss of generality that $\mathcal{F} \subseteq C_c({\mathbb R},A)$, and fix $m>0$ such that all elements of $\mathcal{F}$ are supported in the interval $(-m,m)$. Denote
\begin{equation} \label{eq:L}
L = \sup_{x \in \mathcal{F}}\|x\|_{L^1}.
\end{equation}
For the rest of this proof, we adopt the notations that we introduced in the beginning of this section. We pick $M>0$ large enough such that $h = h(M)$ satisfies
\begin{equation} \label{eq:h}
\|h^{1/2}-\sigma_t(h)^{1/2}\|< \frac{\varepsilon}{2L(d+1)}
\quad \textrm{for all } t \in [-m,m].
\end{equation}
Using Remark~\ref{Rmk:Rokhlin-dim}, find $\lambda-\tilde{\alpha}_\infty$ equivariant c.p.c.~order zero maps
$$
\mu^{(0)},\dots,\mu^{(d)}: C(\mathbb{R}/M \mathbb{Z})\to F_\infty^{(\alpha)}(A)
$$
with
$$
\mu^{(0)}(1)+\dots+\mu^{(d)}(1)=1.
$$
In view of Remark~\ref{F(A)}, each map $\mu^{(j)}$ induces a $\lambda\otimes\alpha - \alpha_\infty$ equivariant c.p.c.~order zero map $\eta^{(j)} \colon C(\mathbb{R}/M \mathbb{Z}) \otimes A \to A_{\infty}^{(\alpha)}$. Moreover, the identity $\mu^{(0)}(1)+\dots+\mu^{(d)}(1)=1$ implies
\begin{equation} \label{eq:eta-sum}
\eta^{(0)}(1\otimes a)+\dots+\eta^{(d)}(1\otimes a)=a
\end{equation}
for all $a\in A$.
Applying Lemma~\ref{Rmk:equivariant-order-zero-maps}, these maps give rise to c.p.c.~order zero maps between the crossed products
\[
\eta^{(j)}\rtimes {\mathbb R} \colon (C(\mathbb{R}/M \mathbb{Z}) \otimes A) \rtimes_{\lambda \otimes \alpha} \mathbb{R} \to A_{\infty}^{(\alpha)} \rtimes_{\alpha_{\infty}} \mathbb{R}
\]
that satisfy the equation
\begin{equation} \label{eq:eta-sum2}
\sum_{j=0}^d (\eta^{(j)}\rtimes {\mathbb R})\circ\big( (1\otimes \mathrm{id}_A)\rtimes {\mathbb R} \big) = \left( \sum_{j=0}^d \eta^{(j)}\circ (1\otimes\mathrm{id}_A) \right)\rtimes {\mathbb R} \stackrel{\eqref{eq:eta-sum}}{=} \iota,
\end{equation}
where $\iota \colon A\rtimes_\alpha{\mathbb R} \to A_{\infty}^{(\alpha)}\rtimes_{\alpha_\infty}{\mathbb R}$ denotes the natural inclusion induced by the equivariant embedding of $A$ into $A_{\infty}^{(\alpha)}$ as constant sequences.
Let $B_{\sigma}$ and $B_{\lambda}$ be as above, and let $h \in C_0({\mathbb R}) \cong C^*(\widehat{{\mathbb R}})$ be as above as well.
We define a completely positive contraction
\[
\varphi\colon A \rtimes_{\alpha}{\mathbb R} \to B_{\sigma} \subseteq A \rtimes_{\alpha} {\mathbb R} \rtimes_{\widehat{\alpha}} \widehat{{\mathbb R}} \quad\text{via}\quad \varphi(a) = h^{1/2} a h^{1/2}.
\]
(The inclusion $B_{\sigma} \subseteq A \rtimes_{\alpha} {\mathbb R} \rtimes_{\widehat{\alpha}} \widehat{{\mathbb R}}$ is as in Claim~\ref{claim 2} and the preceding discussion, using Notation \ref{Not:flow-conventions}.)
We now define c.p.c.~order zero maps
\[
\zeta_0,\zeta_1\colon B_{\sigma} \to (C({\mathbb R}/M{\mathbb Z}) \otimes A) \rtimes_{\lambda \otimes \alpha} {\mathbb R}
\]
as follows. We first define $\zeta_0$ to be the isomorphism $\zeta$ given in \eqref{eq:xi} above:
\[
\zeta_0 = \zeta \colon B_{\sigma} \stackrel{\cong}{\longrightarrow} B_{\lambda} \subseteq ( C({\mathbb R}/M{\mathbb Z}) \otimes A) \rtimes_{\lambda \otimes \alpha} {\mathbb R}
\, .
\]
For $\zeta_1$, we first note that the automorphism $\lambda_{M/2} \otimes \mathrm{id} \in \mathrm{Aut}(C({\mathbb R}/M{\mathbb Z})\otimes A)$ commutes with $\lambda_t \otimes \alpha_t$ for all $t$. This gives rise to an automorphism $\widetilde{\lambda}_{M/2}=(\lambda_{M/2}\otimes\mathrm{id})\rtimes{\mathbb R}$ of $( C({\mathbb R}/M{\mathbb Z}) \otimes A) \rtimes_{\lambda \otimes \alpha} {\mathbb R}$. Now, we define
\[
\zeta_1 =\widetilde{\lambda}_{M/2}\circ \zeta_0.
\]
Consider the c.p.c.~map $\kappa$ given by the following diagram:
\[
\xymatrix{
A \rtimes_{\alpha} {\mathbb R} \ar[dr]_{\varphi} \ar@{.>}[rrrr]^{\kappa} & & & & A_{\infty}^{(\alpha)} \rtimes_{\alpha_{\infty}} {\mathbb R} \\
& B_{\sigma} \ar[rr]^{\!\!\!\!\!\! x \mapsto x \oplus x} &&
B_{\sigma} \oplus B_{\sigma} \ar[ur]_{\qquad \quad \, (x,y) \mapsto \sum_{j=0}^d (\eta^{(j)}\rtimes {\mathbb R}) \circ \zeta_0 (x) + (\eta^{(j)}\rtimes {\mathbb R})\circ \zeta_1 (y)} &
}
\]
Note that the upwards map is a sum of $2(d+1)$ c.p.c.~order zero maps.
Recall that $\iota \colon A\rtimes_\alpha{\mathbb R} \to A_{\infty}^{(\alpha)}\rtimes_{\alpha_\infty}{\mathbb R}$ denotes the natural inclusion from above. We wish to show that
\[
\|\kappa(x) - \iota(x)\|\le\varepsilon
\]
for all $x \in \mathcal{F}$.
Recall that $h_0 \in C({\mathbb R}/M{\mathbb Z})$ is induced by $h$ as in Remark~\ref{Rmk:not-hereditary} and that by Claim~\ref{claim:h+h-shifted}, we have $h_0 + h_1 = 1 \in C({\mathbb R}/M{\mathbb Z})$. Moreover, we would like to apply Claim~\ref{claim 2} to an expression of the form $h^{1/2} \cdot x \cdot h^{1/2}$, $x \in \mathcal{F}$. For this to make sense, we may regard such an $x$ as an element in $L^1({\mathbb R},C_0({\mathbb R},A))$ by setting
\[
x(t)(s) = \left\{ \begin{matrix} x(t) & \mid & s \in (-M/2,M/2) \\
0 & \mid & \mbox{ else } \end{matrix} \right.
\, .
\]
From Claim~\ref{claim 2} we then see that $h^{1/2} \cdot x \cdot h^{1/2} \in B_0 \subset B_\sigma$ and
\[
(h^{1/2} \cdot x \cdot h^{1/2}) (t) (s) = x (t)(s) h^{1/2}(s-t) h^{1/2}(s)
\, .
\]
The map $\zeta_0$ sends this element to the function in $L^1({\mathbb R},C({\mathbb R}/M{\mathbb Z})\otimes A)$ given by
\[
\zeta_0 (h^{1/2} \cdot x \cdot h^{1/2})(t)(s) = x (t)(s) h_0^{1/2}(s-t) h_0^{1/2}(s),
\]
which we abbreviate as $\zeta_0(x)$ in the estimates below; here $s \in (-M/2,M/2) \subseteq {\mathbb R}/M{\mathbb Z}$, and we recall that the range of applicable $t \in {\mathbb R}$ for which the expression is not zero is taken such that $s-t \in (-M/2,M/2)$.
Thus
\[
\begin{array}{rcl}
\|\zeta_0 (x) (t)(s)- (x \cdot h_0)(t)(s)\| & = &
\|x(t)(s) h_0^{1/2}(s) (h_0^{1/2}(s-t) - h_0^{1/2}(s))\| \\
& \stackrel{\eqref{eq:h}}{\leq} &
\displaystyle \|x(t)(s)\|\cdot \frac{\varepsilon}{2L(d+1)}
\end{array}
\]
for all $t\in{\mathbb R}$ and $s\in (-M/2, M/2)$.
Therefore, we have
\begin{samepage}
\[
\begin{array}{rcl}
\|(\eta^{(j)}\rtimes {\mathbb R}) \circ \zeta_0 (x)- (\eta^{(j)}\rtimes {\mathbb R})(x \cdot h_0)\| & \leq & \|\zeta_0 (x)-x \cdot h_0\|_{L^1} \\
& \leq & \displaystyle \frac{\varepsilon}{2L(d+1)}\cdot \|x\|_{L^1} \\
& \stackrel{\eqref{eq:L}}{\leq} & \displaystyle \frac{\varepsilon}{2(d+1)}
\end{array}
\]
for all $x\in \mathcal{F}$.\end{samepage}
Likewise,
\[
\zeta_1 (x)(t)(s) = x (t)(s) h_1^{1/2}(s-t) h_1^{1/2}(s)
\]
and a similar computation shows that
\[
\|(\eta^{(j)}\rtimes {\mathbb R}) \circ \zeta_1 (x)- (\eta^{(j)}\rtimes {\mathbb R})(x \cdot h_1)\| \leq
\frac{\varepsilon}{2(d+1)}
\, .
\]
Thus, for any $x \in \mathcal{F}$ we have
\[
\begin{array}{ccl}
\|\kappa(x) - \iota(x)\| &=& \displaystyle \Bigg\| \sum_{j=0}^d (\eta^{(j)}\rtimes {\mathbb R}) \circ \zeta_0 (x) + (\eta^{(j)}\rtimes {\mathbb R}) \circ \zeta_1 (x) - \iota(x) \Bigg\| \\
&\leq& \displaystyle 2(d+1)\cdot\frac{\varepsilon}{2(d+1)} \\
& & \displaystyle + \Bigg\| \sum_{j=0}^d (\eta^{(j)}\rtimes {\mathbb R})(x \cdot h_0) + (\eta^{(j)}\rtimes {\mathbb R})(x \cdot h_1)-\iota(x) \Bigg\| \\
&=& \displaystyle\varepsilon + \Bigg\| \sum_{j=0}^d (\eta^{(j)}\rtimes {\mathbb R})(1\otimes x)-\iota(x) \Bigg\| \stackrel{\eqref{eq:eta-sum2}}{=} \varepsilon .
\end{array}
\]
The $C^*$-algebra $B_{\sigma}$ is a hereditary $C^*$-subalgebra of $A \otimes\mathcal{K}$ by Claim~\ref{claim 2}, so
\[
\mathrm{dim}_{\mathrm{nuc}}(B_{\sigma}) \leq \mathrm{dim}_{\mathrm{nuc}}(A),
\]
and $\mathrm{dim}_{\mathrm{nuc}}(B_{\sigma} \oplus B_{\sigma}) \leq \mathrm{dim}_{\mathrm{nuc}}(A)$ as well. We can now consider the upward maps that we have constructed, and compose them with the natural $^{*}$-homomorphism $A_{\infty}^{(\alpha)} \rtimes_{\alpha_\infty} {\mathbb R}\to(A\rtimes_{\alpha}{\mathbb R})_{\infty}$ from Lemma~\ref{Rmk:cont-part-crossed-product-embeds}. Since this $^{*}$-homomorphism is compatible with the standard embeddings, we can apply Lemma~\ref{Lemma:dimnuc-central-sequence} and see that indeed
\[
\dimnucone(A \rtimes_{\alpha}{\mathbb R}) \leq 2(d+1)\dimnucone(A) \, ,
\]
as required.
\end{proof}
\begin{Rmk}
\label{Rn-dimnuc}
To keep notation simple, we restricted ourselves here to actions of ${\mathbb R}$. However, it seems clear that the argument generalizes in a straightforward manner to actions of ${\mathbb R}^n$ with the analogous notion of Rokhlin dimension for ${\mathbb R}^n$-actions (using equivariant order zero maps from $C(\mathbb{R}^n/p \mathbb{Z}^n)$ in place of normal elements, which correspond to equivariant order zero maps from $C(\mathbb{T}) \cong C(\mathbb{R}/p \mathbb{Z})$), where the bound is given by
\[
\dimnucone(A \rtimes_{\alpha}{\mathbb R}^n) \leq 2^n\cdot \dimrokone(\alpha)\cdot \dimnucone(A)\, .
\]
\end{Rmk}
\section{Reduction to cocompact subgroups}
\label{Section:reduction}
\noindent
In this section, we define and study the Rokhlin dimension of an action of a second-countable, locally compact group relative to a closed, cocompact subgroup. Finite Rokhlin dimension in this sense allows us to study permanence properties of the nature that we discussed in the previous section, but with minimal restrictions on the acting group. In this context, one cannot expect to obtain a direct connection between the nuclear dimension of the crossed product $C^*$-algebra $A\rtimes_\alpha G$ and that of the coefficient $C^*$-algebra $A$. We can, however, establish a connection between the nuclear dimension of $A\rtimes_\alpha G$ and that of $A\rtimes_{\alpha|_H} H$, where $H$ is a closed, cocompact subgroup. If one has sufficient information about the restricted action $\alpha|_H : H\to\operatorname{Aut}(A)$ or its crossed product, the method discussed in this section can have some advantages that \emph{global} Rokhlin dimension, in the sense of the previous section or \cite{HWZ, SWZ}, does not have in the non-compact case.
For example, obtaining bounds concerning decomposition rank of crossed products by non-compact groups appears to be difficult, and would require different techniques and more severe constraints on the action than just finite Rokhlin dimension. The methods developed in this section yield the following conditional statement: Let $A$ be a separable $C^*$-algebra, and let $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ be a flow with finite Rokhlin dimension. If there exists $t>0$ such that $A \rtimes_{\alpha_t}{\mathbb Z}$ has finite decomposition rank, then $A \rtimes_{\alpha}{\mathbb R}$ has finite decomposition rank.
\begin{Def} \label{Def:dimrok-H}
Let $G$ be a second-countable, locally compact group, let $A$ be a separable $C^*$-algebra and let $\alpha: G \to \mathrm{Aut}(A)$ be a point-norm continuous action. Let $H < G$ be a closed and cocompact subgroup, i.e.~ a closed subgroup such that $G/H$ is compact. We use $\lambda$ to denote the action of $G$ on $C(G/H)$ by left translation, i.e.~$\lambda_g(f)(hH) = f(g^{-1}hH)$. The \emph{Rokhlin dimension of $\alpha$ relative to $H$}, denoted $\dim_{\mathrm{Rok}}(\alpha, H)$, is the smallest natural number $d$ such that there exist $\lambda - \tilde{\alpha}_\infty$ equivariant c.p.c.~order zero maps
\[
\mu^{(0)},\ldots,\mu^{(d)}: C(G/H) \to F_\infty^{(\alpha)}(A)
\]
with $\mu^{(0)}(1)+\dots+\mu^{(d)}(1)=1$.
\end{Def}
\begin{Rmk}
\label{Rmk:comparison-global-local}
The connection between Rokhlin dimension of a flow and Rokhlin dimension relative to cocompact subgroups of ${\mathbb R}$ is as follows. If $\alpha: {\mathbb R}\to\operatorname{Aut}(A)$ is a flow on a separable $C^*$-algebra, then
$$\dim_{\mathrm{Rok}}(\alpha)=\sup_{t>0}~\dim_{\mathrm{Rok}}(\alpha, t{\mathbb Z}) \, .
$$
This follows immediately from Definition~\ref{Def:dimrok} and Remark~\ref{Rmk:Rokhlin-dim}.
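Indeed, for $H=t{\mathbb Z}$ one has ${\mathbb R}/H={\mathbb R}/t{\mathbb Z}$, so the equivariant order zero maps required in Definition~\ref{Def:dimrok-H} are exactly those appearing in the reformulation of Remark~\ref{Rmk:Rokhlin-dim} for $M=t$.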
\end{Rmk}
\begin{Thm}
\label{Thm:Green}
Let $G$ be a second-countable, locally compact group, let $A$ be a separable $C^*$-algebra and let $\alpha: G \to \mathrm{Aut}(A)$ be a point-norm continuous action. Let $H < G$ be a closed and cocompact subgroup.
Then
\[
\dimnucone(A \rtimes_{\alpha}G) \leq \dimrokone(\alpha, H)\cdot\dimnucone(A \rtimes_{\alpha|_H}H)
\]
and
\[
\drone(A \rtimes_{\alpha}G) \leq \dimrokone(\alpha, H)\cdot\drone(A \rtimes_{\alpha|_H}H).
\]
\end{Thm}
\begin{proof}
Green's Imprimitivity Theorem (\cite[Theorem 4.22]{williams-book}) states that the crossed product $(C(G/H)\otimes A)\rtimes_{\lambda \otimes \alpha} G$ is Morita equivalent to $A \rtimes_{\alpha|_H}H$. In particular, those two $C^*$-algebras have the same decomposition rank and the same nuclear dimension.
The embedding $A \to C(G/H) \otimes A$ given by $a \mapsto 1 \otimes a$ is $\alpha - (\lambda\otimes \alpha)$ equivariant, and therefore induces a $^{*}$-homomorphism
\[
\varphi = (1\otimes\mathrm{id}_A)\rtimes G \colon A\rtimes_{\alpha} G \to (C(G/H)\otimes A)\rtimes_{\lambda \otimes \alpha} G.
\]
Let $\iota \colon A \to A_{\infty}^{(\alpha)}$ be the standard embedding as constant sequences. Denote $d=\dim_{\mathrm{Rok}}(\alpha, H)$ and let
\[
\mu^{(0)},\ldots,\mu^{(d)}: C(G/H) \to F^{(\alpha)}_{\infty}(A)
\]
be as in Definition~\ref{Def:dimrok-H}. In view of Remark~\ref{F(A)}, those maps induce $(\lambda\otimes\alpha) - \alpha_\infty$ equivariant c.p.c.~order zero maps
\[
\eta^{(j)} \colon C(G/H)\otimes A \to A_{\infty}^{(\alpha)}
\]
given by $\eta^{(j)}(f \otimes a) = \mu^{(j)}(f)\iota(a)$ for all $j=0,\dots,d$. One then has
\[
\sum_{j=0}^d \eta^{(j)}(1\otimes a) = \Bigg( \sum_{j=0}^d \mu^{(j)}(1) \Bigg) a = a
\]
for all $a\in A$.
By Lemma~\ref{Rmk:equivariant-order-zero-maps}, these maps induce c.p.c.~order zero maps
\[
\psi^{(j)} = \eta^{(j)}\rtimes G \colon (C(G/H)\otimes A)\rtimes_{\lambda \otimes \alpha} G ~\to~ A_{\infty}^{(\alpha)} \rtimes_{\alpha_\infty} G ~\stackrel{(\mathrm{Lemma} \, \ref{Rmk:cont-part-crossed-product-embeds})}{\longrightarrow}~ (A \rtimes_{\alpha} G)_{\infty}
\]
satisfying
\[
\sum_{j=0}^d \psi^{(j)}\circ\varphi = \Bigg( \sum_{j=0}^d (\eta^{(j)}\circ (1\otimes\mathrm{id}_A))\Bigg)\rtimes G = \iota\rtimes G.
\]
Moreover, this equation shows that the sum $\sum_{j=0}^d \psi^{(j)}$ is contractive because $\varphi(A\rtimes G)\subset (C(G/H)\otimes A)\rtimes_{\lambda \otimes \alpha} G$ is non-degenerate.
Applying Lemma~\ref{Lemma:dimnuc-central-sequence}, we thus obtain
\[
\dimnucone(A \rtimes_{\alpha}G) \leq (d+1)\dimnucone(A \rtimes_{\alpha|_H}H)
\]
and
\[
\drone(A \rtimes_{\alpha}G) \leq (d+1)\drone(A \rtimes_{\alpha|_H}H).
\]
\end{proof}
\begin{Rmk}
In Definition~\ref{Def:dimrok-H}, one can restrict to the special case in which $G$ is compact and $H$ is the trivial subgroup. In that case, one recovers the Rokhlin dimension of $\alpha$ as an action of a compact group, which has been introduced in \cite{Gardella1}. In this way, Theorem~\ref{Thm:Green} can be viewed as a generalization of the main result of \cite{Gardella2}.
\end{Rmk}
Specializing now to the case of flows, we show that finite Rokhlin dimension passes from the action of ${\mathbb R}$ to the restriction to $t{\mathbb Z}$ for any $t > 0$.
\begin{Prop}
\label{Prop:dimrok-restricted-flow}
Let $A$ be a separable $C^*$-algebra with a flow $\alpha: {\mathbb R}\to\operatorname{Aut}(A)$. For any $t>0$, one has
$$
\dimrokone(\alpha_t)\leq 2\cdot\dimrokone(\alpha) \, .
$$
\end{Prop}
\begin{proof}
Let $t>0$ be given. Choose $M>0$ such that $M$ and $t$ are rationally independent. It follows that the shift $\lambda_t$ on $C({\mathbb R}/M{\mathbb Z})$ is an irrational rotation. In particular, it has Rokhlin dimension 1 by \cite[Theorem 6.2]{HWZ}, and in fact, by the proof of \cite[Theorem 6.2]{HWZ}, this can be witnessed with single towers whenever the height of the towers is a prime number. Let $\varepsilon>0$, $n\in{\mathbb N}$ and a finite set $\mathcal{F}\subset A$ be given. By \cite[Remark 2.4(v)]{HWZ}, it suffices to consider towers with arbitrarily large height, so we may assume without loss of generality that $n$ is prime.
Then there exist positive contractions $b_0^{(0)},\dots,b_{n-1}^{(0)},b_0^{(1)},\dots,b_{n-1}^{(1)}\in C({\mathbb R}/M{\mathbb Z})$ which satisfy,
for $i=0,1$ and for all $j,j_1,j_2=0,\dots,n-1$ with $j_1\neq j_2$,
\begin{itemize}
\item $1 = \sum_{i=0,1} \sum_{j=0}^{n-1} b_j^{(i)}$
\item $\|b_{j+1}^{(i)}-\lambda_t(b_j^{(i)})\|\leq\varepsilon$
\item $\|b_{0}^{(i)}-\lambda_t(b_{n-1}^{(i)})\|\leq\varepsilon$
\item $\|b_{j_1}^{(i)}b_{j_2}^{(i)}\|\leq\varepsilon .$
\end{itemize}
Denoting $d=\dim_{\mathrm{Rok}}(\alpha)$, choose $\lambda-\tilde{\alpha}_\infty$ equivariant c.p.c.~order zero maps
$$
\mu^{(0)},\mu^{(1)},\dots,\mu^{(d)}: C({\mathbb R}/M{\mathbb Z}) \to F_\infty^{(\alpha)}(A)
$$
with $\mu^{(0)}(1)+\mu^{(1)}(1)+\dots+\mu^{(d)}(1)=1$. In particular, these maps are $\lambda_t - \tilde{\alpha}_{\infty,t}$ equivariant. So defining positive contractions $\{f_j^{(i,l)}\}_{j=0,\dots,n-1}^{i=0,1;~ l=0,\dots,d}$ in $F_\infty^{(\alpha)}(A)$ via $f_j^{(i,l)}=\mu^{(l)}(b_j^{(i)})$ yields the following relations,
for $i=0,1$, for $l=0,\dots,d$ and for $j,j_1,j_2=0,\dots,n-1$ with $j_1\neq j_2$:
\begin{itemize}
\item $\displaystyle 1 = \sum_{i=0,1}\sum_{l=0}^d \sum_{j=0}^{n-1} f_j^{(i,l)}$
\item $\|f_{j+1}^{(i,l)}-\tilde{\alpha}_{\infty,t}(f_j^{(i,l)})\|\leq\varepsilon$
\item $\|f_{0}^{(i,l)}-\tilde{\alpha}_{\infty,t}(f_{n-1}^{(i,l)})\|\leq\varepsilon$
\item $\|f_{j_1}^{(i,l)}f_{j_2}^{(i,l)}\|\leq\varepsilon \, .$
\end{itemize}
If we represent the elements $f_j^{(i,l)}$ by positive bounded sequences, say $(h_j^{(i,l)}(m))_m$ in $\ell^\infty({\mathbb N},A)$, then these satisfy
\begin{itemize}
\item $\displaystyle\lim_{m\to\infty} \Big( \sum_{i=0,1}\sum_{l=0}^d\sum_{j=0}^{n-1} h_j^{(i,l)}(m) \Big) a = a$
\item $\displaystyle\limsup_{m\to\infty}\| \big( h_{j+1}^{(i,l)}(m)-\alpha_{t}(h_j^{(i,l)}(m)) \big) a\|\leq\varepsilon$
\item $\displaystyle\limsup_{m\to\infty}\| \big( h_{0}^{(i,l)}(m)-\alpha_{t}(h_{n-1}^{(i,l)}(m)) \big) a\|\leq\varepsilon$
\item $\displaystyle\limsup_{m\to\infty}\|h_{j_1}^{(i,l)}(m)h_{j_2}^{(i,l)}(m)a\|\leq\varepsilon$
\item $\displaystyle\lim_{m\to\infty} \|[h_j^{(i,l)}(m),a]\|=0$
\end{itemize}
for all $a\in A$, for $i=0,1$, for $l=0,\dots,d$ and for $j,j_1,j_2=0,\dots,n-1$ with $j_1\neq j_2$. In particular, if we choose $m$ sufficiently large, we find positive contractions $h_j^{(i,l)}$ in $A$ satisfying
\begin{itemize}
\item $\displaystyle \Big\| \Big( \sum_{i=0,1}\sum_{l=0}^d\sum_{j=0}^{n-1} h_j^{(i,l)} \Big)\cdot a - a \Big\|\leq\varepsilon$
\item $\| \big( h_{j+1}^{(i,l)}-\alpha_{t}(h_j^{(i,l)}) \big)a\|\leq 2\varepsilon$
\item $\| \big( h_{0}^{(i,l)}-\alpha_{t}(h_{n-1}^{(i,l)}) \big)a\|\leq 2\varepsilon$
\item $\| h_{j_1}^{(i,l)}h_{j_2}^{(i,l)}a \|\leq 2\varepsilon$
\item $\|[h_j^{(i,l)},a]\|\leq\varepsilon$
\end{itemize}
for all $a\in \mathcal{F}$, for $i=0,1$, for $l=0,\dots,d$ and for $j,j_1,j_2=0,\dots,n-1$ with $j_1\neq j_2$. Thus, $\alpha_t$ satisfies \cite[Definition 1.21]{hirshberg-phillips} (cf.~\cite[Proposition 4.5]{SWZ}) and the claim follows.
\end{proof}
We conclude the section by showing how the above results can give a short, alternative proof of Theorem~\ref{Thm:dimnuc-bound}, although with the following weaker bound on the nuclear dimension:
\[
\dimnucone(A \rtimes_{\alpha} {\mathbb R}) \leq 4\cdot \dimrokone(\alpha)^2\cdot \dimnucone(A).
\]
\begin{proof}[Second proof of Theorem~\ref{Thm:dimnuc-bound} with a weaker bound]
Denote $d=\dim_{\mathrm{Rok}}(\alpha)$. Let $t>0$. By Proposition~\ref{Prop:dimrok-restricted-flow}, we get $\dim_{\mathrm{Rok}}(\alpha_t)\leq 2d+1$.
Now, by the straightforward generalization of \cite[Theorem 4.1]{HWZ} to the non-unital setting (\cite[Theorem 3.1]{hirshberg-phillips} and \cite[Theorem 5.2]{SWZ}), we have
\[
\dimnucone(A \rtimes_{\alpha_t} {\mathbb Z}) \leq 4(d+1)\dimnucone(A)
\, .
\]
By Remark~\ref{Rmk:comparison-global-local}, we have $\dim_{\mathrm{Rok}}(\alpha, t{\mathbb Z})\leq d$. Thus, by Theorem~\ref{Thm:Green}, we obtain
\[
\dimnucone(A \rtimes_{\alpha}{\mathbb R}) \leq (d+1)\dimnucone(A \rtimes_{\alpha_t} {\mathbb Z}) \leq 4(d+1)^2\dimnucone(A)
\]
as required.
\end{proof}
\section{Rokhlin dimension with commuting towers and $D$-absorption}
\label{Section:D-absorption}
\noindent
In this section, we study permanence with respect to $D$-absorption for a strongly self-absorbing $C^*$-algebra $D$. Recall from \cite{TomsWinter07} that a separable, unital $C^*$-algebra $D$ is called strongly self-absorbing, if the first-factor embedding $d\mapsto d\otimes 1$ from $D$ to $D\otimes D$ is approximately unitarily equivalent to an isomorphism. Given such $D$ and a $C^*$-algebra $A$, it is interesting to know when $A$ is $D$-absorbing, that is, when $A\cong A\otimes D$. (See, for instance, \cite{Winter14}).
As in the case of discrete groups (cf.\ \cite{HWZ,SWZ,hirshberg-phillips}), finite Rokhlin dimension does not appear sufficient for the purpose of proving that $D$-absorption passes to the crossed product. Therefore, we consider a stronger variant of finite Rokhlin dimension, namely with commuting towers. The main result of this section is a generalization of \cite[Theorem 5.2]{HW}.
\begin{Def}
\label{Def:dimrok-comm}
Let $A$ be a separable $C^*$-algebra with a flow $\alpha: {\mathbb R}\to\mathrm{Aut}(A)$. The \emph{Rokhlin dimension of $\alpha$ with commuting towers} is the smallest natural number $d\in{\mathbb N}$, such that the following holds. For every $p\in {\mathbb R}$ there exist pairwise commuting normal contractions $x^{(0)},x^{(1)},\ldots,x^{(d)}\in F_\infty^{(\alpha)}(A)$ with $x^{(0)*}x^{(0)}+\dots+x^{(d)*}x^{(d)}=1$ and $\tilde{\alpha}_{\infty,t}(x^{(j)})=e^{ipt}x^{(j)}$ for all $t\in{\mathbb R}$ and $j=0,\dots,d$. In this case, we write $\dim^{\mathrm{c}}_{\mathrm{Rok}}(\alpha)=d$.
\end{Def}
\begin{Rmk}
\label{Rmk:cRokhlin-dim}
As in the case of Rokhlin dimension without commuting towers, the above definition has a useful equivalent reformulation: $\dim^{\mathrm{c}}_{\mathrm{Rok}}(\alpha)\leq d$ if and only if for all $M>0$, there exist $\lambda - \tilde{\alpha}_\infty$ equivariant c.p.c.~order zero maps $\mu^{(0)},\ldots, \mu^{(d)} \colon C({\mathbb R}/M{\mathbb Z}) \to F_\infty^{(\alpha)}(A)$ with pairwise commuting images such that $\sum_{j=0}^d \mu^{(j)}(1) = 1$.
\end{Rmk}
For the rest of this section, we sometimes use the expression $\mathrm{id}_A$ both for the identity map on a $C^*$-algebra $A$ and for the trivial action of a locally compact group $G$ on $A$.
Our main theorem for this section is the following generalization and strengthening of \cite[Theorem 5.2]{HW} from the case of Kishimoto's Rokhlin property (that is, Rokhlin dimension zero) to finite Rokhlin dimension with commuting towers. Recall that two flows $\alpha: {\mathbb R} \to \mathrm{Aut}(A)$ and $\beta: {\mathbb R} \to \mathrm{Aut}(B)$ are called cocycle conjugate, if there exists a $^{*}$-isomorphism $\psi: A\to B$ and a strictly continuous map $u: {\mathbb R} \to \mathcal{U}(M(A))$ satisfying $u_{t+s}=u_t\alpha_t(u_s)$ and $\psi^{-1}\circ\beta_t\circ\psi = \operatorname{Ad}(u_t)\circ\alpha_t$ for all $t,s\in{\mathbb R}$. It is well-known that cocycle conjugate flows yield naturally isomorphic crossed products; cf.\ \cite{PackerRaeburn}.
\begin{Thm}
\label{Thm:Z-absorption}
Let $D$ be a strongly self-absorbing $C^*$-algebra. Let $A$ be a separable, $D$-absorbing $C^*$-algebra. Let $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ be a flow with $\dim^{\mathrm{c}}_{\mathrm{Rok}}(\alpha)<\infty$. Then $\alpha$ is cocycle conjugate to $\alpha\otimes\mathrm{id}_D$, and in particular, $A \rtimes_{\alpha} {\mathbb R}$ is $D$-absorbing.
\end{Thm}
The main criterion for the conclusion of Theorem~\ref{Thm:Z-absorption} is given by the following theorem from \cite{Szabo15ssa}:
\begin{Thm}[cf.~{\cite[Corollary 3.8]{Szabo15ssa}}]
\label{Thm:equ-D-absorption}
Let $D$ be a strongly self-absorbing $C^*$-algebra. Let $A$ be a separable $C^*$-algebra. Let $G$ be a second-countable, locally compact group and $\alpha \colon G\to\mathrm{Aut}(A)$ a point-norm continuous action. Then $\alpha$ is cocycle conjugate to $\alpha\otimes\mathrm{id}_D$ if and only if there exists a unital $^{*}$-homomorphism from $D$ to the fixed point algebra $F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}$.
\end{Thm}
For the reader's convenience, we provide a short argument proving that if there exists a unital $^{*}$-homomorphism from $D$ to $F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}$, then the crossed product $A\rtimes_\alpha G$ is $D$-absorbing. For unital $C^*$-algebras $A$, this statement appeared in \cite[Lemma 2.3]{HW}.
We recall that any strongly self-absorbing $C^*$-algebra is $K_1$-injective; see \cite{winter-ssa-Z-stable}. Thus the $K_1$-injectivity condition that appears in the relevant part of \cite{TomsWinter07} used below holds automatically, and we can omit it from the statement.
\begin{Lemma}
\label{Lemma:D-absorption-condition-3}
Let $A$ be a separable $C^*$-algebra and let $D$ be a strongly self-absorbing $C^*$-algebra. Suppose that $\alpha: G\to\mathrm{Aut}(A)$ is a point-norm continuous action of a second-countable, locally compact group. If there exists a unital $^{*}$-homomorphism from $D$ to the fixed point algebra $F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}$, then $A\rtimes_\alpha G$ is $D$-absorbing.
\end{Lemma}
\begin{proof}
Consider the embedding $1\otimes\mathrm{id}_A : A\to D\otimes A$ as the second factor. This map is clearly $\alpha - \mathrm{id}_D\otimes\alpha$ equivariant.
A unital $^{*}$-homomorphism from $D$ to $F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}$ is the same as an $\mathrm{id}_D - \tilde{\alpha}_\infty$ equivariant unital $^{*}$-homomorphism from $D$ to $F_\infty^{(\alpha)}(A)$.
In view of Remark~\ref{F(A)}, we obtain an $\mathrm{id}_D\otimes\alpha - \alpha_\infty$ equivariant $^{*}$-homomorphism $\psi: D\otimes A\to A_\infty^{(\alpha)}$ such that $\psi\circ(1\otimes\mathrm{id}_A) $ coincides with the standard embedding of $A$ into $A_\infty^{(\alpha)}$. Forming the crossed product everywhere, we get induced $^{*}$-homomorphisms
\[
(1\otimes\mathrm{id}_A)\rtimes G: A\rtimes_\alpha G \to (D\otimes A)\rtimes_{\mathrm{id}_D\otimes\alpha} G
\]
and
\[
\psi\rtimes G: (D\otimes A)\rtimes_{\mathrm{id}_D\otimes\alpha} G \to A_\infty^{(\alpha)}\rtimes_{\alpha_\infty} G \stackrel{(\mathrm{Lemma} \, \ref{Rmk:cont-part-crossed-product-embeds})}{\longrightarrow} (A\rtimes_\alpha G)_\infty
\]
such that $(\psi\rtimes G)\circ \big( (1\otimes\mathrm{id}_A)\rtimes G \big)$ coincides with the standard embedding of $A\rtimes_\alpha G$ into $(A\rtimes_\alpha G)_\infty$. We have a natural isomorphism
\[
\mu: (D\otimes A)\rtimes_{\mathrm{id}_D\otimes\alpha} G \to D\otimes (A\rtimes_\alpha G)
\, .
\]
Using the notation from the proof of Lemma~\ref{Rmk:equivariant-order-zero-maps}, this map is given on the generators by
\[
\mu\big( \iota^{\mathrm{id}_D\otimes\alpha}(d\otimes a)\lambda^{\mathrm{id}_D\otimes\alpha}(f) \big) = d\otimes \big( \iota^{\alpha}(a)\lambda^\alpha(f) \big)
\]
for all $d\in D$, $a\in A$ and $f\in C_c(G)$.
In particular, we can see that $\mu\circ \big( (1\otimes\mathrm{id}_A)\rtimes G \big) = 1_D\otimes\mathrm{id}_{A\rtimes_\alpha G}$. This implies that the $^{*}$-homomorphism $\varphi=(\psi\rtimes G)\circ\mu^{-1}: D\otimes(A\rtimes_\alpha G)\to (A\rtimes_\alpha G)_\infty$ satisfies $\varphi(1_D\otimes x)=x$ for all $x\in A\rtimes_\alpha G$.
It now follows from \cite[Theorem 2.3]{TomsWinter07} that $A\rtimes_\alpha G$ is $D$-absorbing.
\end{proof}
The following lemma follows directly by repeated application of \cite[Lemma 5.2]{HWZ}, using the correspondence between order zero maps and $^{*}$-homomorphisms from cones given in \cite[Corollary 4.1]{winter-zacharias-order-zero}. This can be thought of as a multivariable generalization of the characterization of order zero maps as $^{*}$-homomorphisms from cones: a single c.p.c.\ order zero map corresponds to a $^{*}$-homomorphism from the cone, and $n$ c.p.c.\ order zero maps with commuting images correspond to a $^{*}$-homomorphism from a section algebra $E$ of a bundle over $[0,1]^n$ with suitable boundary conditions on the edges.
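In the simplest case $n=1$, the algebra $E$ constructed below is just the cone $C_0((0,1])\otimes D_1$, and the universal property stated below reduces precisely to the correspondence of \cite[Corollary 4.1]{winter-zacharias-order-zero}.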
\begin{Lemma}
\label{Lemma:universal-n-cones}
Let $D_1,D_2,\ldots,D_n$ be unital $C^*$-algebras. For $k=1,2,\ldots,n$ and $t \in [0,1]$, denote
\[
D_k^{(t)} = \left \{ \begin{matrix} D_k & \mid & t > 0 \\
{\mathbb C} \cdot 1_{D_k} & \mid & t=0 \end{matrix} \right .
\, .
\]
For $\vec{t} = (t_1,t_2,\ldots t_n) \in [0,1]^n$, write
$$
D^{(\vec{t})} = D_1^{(t_1)} \otimes_{\max} D_2^{(t_2)} \otimes_{\max} \cdots \otimes_{\max} D_n^{(t_n)} .
$$
Denote $\vec{0}=(0,0,\ldots,0) \in [0,1]^n$ and let
\[
E = \big\{f \in C_0\big([0,1]^n \setminus \{\vec{0}\},D_1 \otimes_{\max} D_2 \otimes_{\max} \cdots \otimes_{\max} D_n\big) \mid f(\vec{t}) \in D^{(\vec{t})} \big\} \, .
\]
For $k=1,2,\ldots,n$, define c.p.c.~order zero maps $\eta^{(k)} \colon D_k \to E$ by
\[
\eta^{(k)}(a)(\vec{t}) = t_k (1_{D_{1}} \otimes 1_{D_{2}} \otimes \cdots \otimes a \otimes \cdots \otimes 1_{D_{n}})
\, ,
\]
where $a$ is inserted in the $k$-th factor.
Then $E$ has the following universal property: For any $C^*$-algebra $B$ and any $n$ c.p.c.~order zero maps $\psi^{(k)} \colon D_k \to B$ for $k=1,2,\ldots,n$ with pairwise commuting images, there exists a $^{*}$-homomorphism $\mu \colon E \to B$ such that $\psi^{(k)} = \mu \circ \eta^{(k)}$ for all $k$.
\end{Lemma}
We can view $E$ from the previous lemma as a $C([0,1]^n)$-algebra in a natural way (the fiber over $\vec{0}$ is $0$).
The following is an immediate corollary.
\begin{Cor}
\label{Cor:universal-n-cones-quotient}
Let $D_1,D_2,\ldots,D_n$ and $E$ be as in Lemma~\ref{Lemma:universal-n-cones}. Let
\[
\Delta = \{\vec{t} \in [0,1]^n \mid t_1+t_2+\cdots+t_n = 1\} \, .
\]
Let $E|_{\Delta}$ denote the restriction of $E$ to $\Delta$, that is,
$$
E|_{\Delta} = E / (C_0([0,1]^n \setminus \Delta)\cdot E) .
$$
Then $E|_{\Delta}$ has the following universal property. For any unital $C^*$-algebra $B$ and any $n$ c.p.c.~order zero maps $\psi^{(k)} \colon D_k \to B$ for $k=1,2,\ldots,n$ whose images pairwise commute and furthermore satisfy
$$
\psi^{(1)}(1_{D_{1}}) + \cdots + \psi^{(n)}(1_{D_{n}}) = 1_{B} \, ,
$$
there exists a unital $^{*}$-homomorphism
\[
\mu \colon E|_{\Delta} \to B
\]
such that $\psi^{(k)} = \mu \circ \eta^{(k)}$ for all $k$.
\end{Cor}
\begin{Lemma}
\label{Lemma:universal-n-cones-D}
Let $D$ be a strongly self-absorbing $C^*$-algebra. Let $B$ be a unital $C^*$-algebra. Suppose that $\psi^{(1)},\psi^{(2)},\ldots,\psi^{(n)} \colon D \to B$ are c.p.c.~order zero maps with pairwise commuting images such that $\psi^{(1)}(1_{D}) + \cdots + \psi^{(n)}(1_{D}) = 1_{B}$. Then there exists a unital $^{*}$-homomorphism from $D$ to $B$.
\end{Lemma}
\begin{proof}
Let $E|_{\Delta}$ be as in Corollary~\ref{Cor:universal-n-cones-quotient}, with $D_k = D$ for $k=1,2,\ldots,n$. The set $\Delta$ is an $(n-1)$-dimensional simplex, and $E|_{\Delta}$ is a $C(\Delta)$-algebra such that the fiber over each point is isomorphic to $D$. By \cite[Theorem 4.6]{HRW}, $E|_{\Delta}$ is $D$-absorbing (in fact, by the main theorem of \cite{dadarlat-winter-trivialization}, it follows that $E|_{\Delta} \cong C(\Delta) \otimes D$). In particular, there exists some unital $^{*}$-homomorphism $\gamma \colon D \to E|_{\Delta}$.
Let $\mu \colon E|_{\Delta} \to B$ be as in Corollary~\ref{Cor:universal-n-cones-quotient}, and define $\psi \colon D \to B$ by $\psi = \mu \circ \gamma$; then $\psi$ is a unital $^{*}$-homomorphism from $D$ to $B$, as required.
\end{proof}
We record some further technical lemmas. The first one follows from \cite[Theorem 3.3]{winter-zacharias-order-zero}.
\begin{Lemma}
\label{lemma:xy-x'y'}
Let $A$ and $B$ be $C^*$-algebras, and let $\nu \colon A \to B$ be a c.p.c.~order zero map. Then for every $x,y,x',y' \in A$, we have
\[
\|\nu(x)\nu(y) - \nu(x')\nu(y')\| \leq \|xy-x'y'\|.
\]
\end{Lemma}
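Let us sketch how this follows. By \cite[Theorem 3.3]{winter-zacharias-order-zero} we may write $\nu(x) = h\pi(x) = \pi(x)h$ for a suitable $^{*}$-homomorphism $\pi$ and a positive contraction $h$ commuting with the image of $\pi$, whence
\[
\nu(x)\nu(y) - \nu(x')\nu(y') = h^2 \, \pi(xy - x'y') \, ,
\]
and the right-hand side has norm at most $\|xy - x'y'\|$.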
\begin{Lemma}
\label{lemma:commutators}
Let $Y$ be a locally compact Hausdorff space, and $A, B$ two $C^*$-algebras. Let $\mu_1,\mu_2: C_0(Y)\to B$ be two c.p.c.~order zero maps with commuting images. Then for every two functions $f_1, f_2\in C_0(Y,A)\cong C_0(Y)\otimes A$, we have
\[
\| [(\mu_1\otimes\mathrm{id}_A)(f_1),(\mu_2\otimes\mathrm{id}_A)(f_2)] \|_{B \otimes_{\max}A} \leq \max_{y_1,y_2\in Y} \| [f_1(y_1), f_2(y_2)] \|.
\]
\end{Lemma}
\begin{proof}
Let us first assume that $\mu_1$ and $\mu_2$ are $^{*}$-homomorphisms. As the ranges of these maps commute, the $C^*$-algebra generated by them is commutative. We define $Z$ as the Gelfand-Naimark spectrum
\[
Z = \operatorname{Spec}\Big( C^*\big( \mu_1(C_0(Y)), \mu_2(C_0(Y)) \big) \Big).
\]
We thus have $^{*}$-homomorphisms $\mu_1, \mu_2: C_0(Y)\to C_0(Z)\subset B$. It suffices to show the assertion for $C_0(Z)$ instead of $B$. To this end, we set
\[
Z_i = \operatorname{Spec}\Big( \mu_i(C_0(Y))C_0(Z) \Big),\quad i=1,2.
\]
Both these sets are open subsets in $Z$. The $^{*}$-homomorphisms $\mu_i$, viewed as having image in $C_0(Z_i)$, are non-degenerate and thus come from some proper continuous maps $\kappa_i: Z_i\to Y$. Embedding each algebra $C_0(Z_i)$ into $C_0(Z)$ by extending trivially, we get that the $^{*}$-homomorphisms $\mu_i: C_0(Y)\to C_0(Z)$ have the form
\[
\mu_i(f)(z) = \begin{cases} f(\kappa_i(z)) &\mid z\in Z_i \\ 0 &\mid z\notin Z_i. \end{cases}
\]
The $^{*}$-homomorphisms $\mu_i\otimes\mathrm{id}_A: C_0(Y,A)\to C_0(Z,A)$ are thus given by
\[
(\mu_i\otimes\mathrm{id}_A)(f)(z) = \begin{cases} f(\kappa_i(z)) &\mid z\in Z_i \\ 0 &\mid z\notin Z_i \end{cases}
\]
for all $f\in C_0(Y,A)$ and $z\in Z$. Hence for every $f_1, f_2\in C_0(Y,A)$ and $z\in Z$, it follows that
\[
\def2.2{1.2}
\begin{array}{cl}
\multicolumn{2}{l} {\| [(\mu_1\otimes\mathrm{id}_A)(f_1),(\mu_2\otimes\mathrm{id}_A)(f_2)](z) \| }\\
\leq& \displaystyle \max_{z_1\in Z_1, z_2\in Z_2} \| [f_1(\kappa_1(z_1)), f_2(\kappa_2(z_2))] \|\\
\leq & \displaystyle \max_{y_1,y_2\in Y} \| [f_1(y_1), f_2(y_2)] \|.
\end{array}
\]
This indeed shows our claim under the assumption that $\mu_1$ and $\mu_2$ are $^{*}$-homomorphisms.
Let us now turn to the general case. Let $\iota: C_0(Y)\to C_0\big( (0,1]\times Y \big)$ be the canonical order zero embedding given by $\iota(f)(t,y)=t\cdot f(y)$ for $0 < t\leq 1$ and $y\in Y$.
By identifying $C_0\big( (0,1]\times Y \big)\cong C_0\big( (0,1], C_0(Y) \big)$, the structure theorem for c.p.c.~order zero maps from \cite[Corollary 4.1]{winter-zacharias-order-zero} implies that there exist $^{*}$-homomorphisms $\tilde{\mu}_i: C_0\big( (0,1], C_0(Y) \big) \to B$ making the following diagram commutative for $i=1,2$:
\[
\xymatrix{
C_0(Y) \ar[rr]^\iota \ar[rrd]_{\mu_i} && C_0\big( (0,1]\times Y \big) \ar[d]^{\tilde{\mu}_i} \\
&& B
}
\]
It is clear that $\tilde{\mu}_1$ and $\tilde{\mu}_2$ have commuting images as well. Thus the above applies to the maps $\tilde{\mu}_i$ and we compute for all $f_1,f_2\in C_0(Y)$ that
\[
\def2.2{1.2}
\begin{array}{cl}
\multicolumn{2}{l}{ \| [ (\mu_1\otimes\mathrm{id}_A)(f_1),(\mu_2\otimes\mathrm{id}_A)(f_2) ] \| } \\
=& \| [ ( (\tilde{\mu}_1\circ\iota)\otimes\mathrm{id}_A)(f_1) , ( (\tilde{\mu}_2\circ\iota)\otimes\mathrm{id}_A)(f_2)] \| \\
\leq & \displaystyle \max_{0<t_1,t_2\leq 1} \max_{y_1,y_2\in Y} \| [ (\iota\otimes\mathrm{id}_A)(f_1)(t_1,y_1), (\iota\otimes\mathrm{id}_A)(f_2)(t_2,y_2) ] \| \\
=& \displaystyle \max_{0<t_1,t_2\leq 1} \max_{y_1,y_2\in Y} t_1 t_2 \| [ f_1(y_1), f_2(y_2) ] \| \\
=& \displaystyle \max_{y_1,y_2\in Y} \| [ f_1(y_1), f_2(y_2) ] \| .
\end{array}
\]
This finishes the proof.
\end{proof}
The following is the main technical result of this section.
\begin{Lemma}
\label{technical dimrokc}
Let $A$ be a separable $C^*$-algebra and let $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ be a flow with $d=\dim^{\mathrm{c}}_{\mathrm{Rok}}(\alpha)<\infty$. Let $D$ be a separable, unital and nuclear $C^*$-algebra such that there is a unital $^{*}$-homomorphism from $D$ to $F_\infty(A)$. Then for $l=1,\dots,2(d+1)$ there exist c.p.c.~order zero maps
\[
\psi^{(l)}\colon D\to F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}
\]
with pairwise commuting images such that $\displaystyle \sum_{l=1}^{2(d+1)}\psi^{(l)}(1)=1$.\end{Lemma}
We first indicate how Theorem~\ref{Thm:Z-absorption} follows from what we have so far, and then prove Lemma~\ref{technical dimrokc}.
\begin{proof}[Proof of Theorem~\ref{Thm:Z-absorption}]
It follows from combining Lemma~\ref{Lemma:universal-n-cones-D} and Lemma~\ref{technical dimrokc} that there exists a unital $^{*}$-homomorphism from $D$ to $F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}$. By Theorem~\ref{Thm:equ-D-absorption}, $\alpha$ is cocycle conjugate to $\alpha\otimes\mathrm{id}_D$. By Lemma~\ref{Lemma:D-absorption-condition-3}, $A \rtimes_{\alpha} {\mathbb R}$ is $D$-absorbing.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{technical dimrokc}]
Let $T>0$ and $\varepsilon>0$ be given. Fix some $M>2T/\varepsilon$. Recall the notation $\lambda$ for the periodic shift flow on $C({\mathbb R}/M{\mathbb Z})$.
Let $h_0 \in C_0(-M/2,M/2)\subseteq C({\mathbb R}/M{\mathbb Z})$ be the function defined as follows on the interval $[-M/2,M/2]$ (viewed as a periodic function on ${\mathbb R}$).
\begin{center}
\begin{picture}(230,65)
\put(5,10){\line(1,0){162}}
\put(86,3){\vector(0,1){50}}
\put(86,7){\line(0,1){6}}
\put(167,7){\line(0,1){6}}
\put(5,7){\line(0,1){6}}
\thicklines
\put(5,10){\line(3,1){81}}
\put(86,37){\line(3,-1){81}}
\put(75,40){\makebox(0,0){$1$}}
\put(5,-4){\makebox(0,0)[b]{\footnotesize $-\frac{M}{2}$\normalsize}}
\put(86,-4){\makebox(0,0)[b]{\footnotesize $0$\normalsize}}
\put(167,-4){\makebox(0,0)[b]{\footnotesize $\frac{M}{2}$\normalsize}}
\put(25,47){\makebox(0,0){$h_0$}}
\end{picture}
\end{center}
Setting $h_1=\lambda_{M/2}(h_0)$, we have $h_0+h_1=1$ in $C({\mathbb R}/M{\mathbb Z})$. Consider the two c.p.c.~order zero maps
\[
\varphi^{(i)}: A\to C({\mathbb R}/M{\mathbb Z})\otimes A \cong C({\mathbb R}/M{\mathbb Z}, A),\quad i=0,1
\]
given by
\begin{equation} \label{eq:phi}
\varphi^{(i)}(a)(t+M{\mathbb Z}) = \begin{cases} h_0(t+M{\mathbb Z})\alpha_t(a) &\mid i=0 \\
h_1(t+M{\mathbb Z})\alpha_{t-M/2}(a) &\mid i=1
\end{cases}
\end{equation}
for $t\in [-M/2,M/2]$.
Since $M> \frac{2T}{\varepsilon}$, for all $t \in [-T,T]$ and for $ i=0,1$ we have
\[
\|h_i-\lambda_t(h_i)\|\leq\varepsilon \, .
\]
Therefore, for $i=0,1$ and for all $t \in [-T,T]$ we have
\begin{equation} \label{eq:phii}
\|(\lambda_t\otimes\alpha_t)\circ\varphi^{(i)} - \varphi^{(i)}\|\leq\varepsilon \, .
\end{equation}
To see this for $i=0$, note that (applying \eqref{eq:phi} with $t_0-t$ in place of $t$) for every contraction $a\in A$ and $t_0\in{\mathbb R}$ we have
\[
\def2.2{1.5}
\begin{array}{ccl}
\Big((\lambda_t\otimes\alpha_t)(\varphi^{(0)}(a))\Big)(t_0+M{\mathbb Z}) &=& \alpha_t\Bigl( \varphi^{(0)}(a)(t_0-t+M{\mathbb Z}) \Bigr) \\
&=& \alpha_t\Bigl( h_0(t_0-t+M{\mathbb Z})\alpha_{t_0-t}(a) \Bigr) \\
&=& h_0(t_0-t+M{\mathbb Z})\alpha_{t_0}(a),
\end{array}
\]
which is equal to $\varphi^{(0)}(a)(t_0+M{\mathbb Z})$ up to $\varepsilon$. An analogous calculation shows this for $i=1$.
By assumption, there is a unital $^{*}$-homomorphism $\tilde{\kappa}: D\to F_\infty(A)$.
As $D$ is nuclear, we can apply the Choi-Effros lifting theorem \cite{Choi-Effros} and find a c.p.c.~lift of this map to $\ell^\infty({\mathbb N}, A)$, and represent it by a sequence of c.p.c.~maps $\kappa_n: D\to A$ such that $\tilde{\kappa}(b)$ is the image of $(\kappa_1(b),\kappa_2(b),\ldots)$ for all $b\in D$.
Let $\mathcal{F}_D\subset D$ and $\mathcal{F}_A\subset A$ be finite subsets, with $1_D \in \mathcal{F}_D$. We may assume that they consist of elements of norm at most $1$. Applying Lemma~\ref{Lemma:invariant-approx-unit}, we may furthermore assume that $\mathcal{F}_A$ contains a positive element $e$ of norm $1$, such that $\|ea - a\|<\varepsilon$ and $\|ae-a\|<\varepsilon$ for all $a \in \mathcal{F}_A\setminus\{e\}$ and such that $\|\alpha_t(e) - e\|<\varepsilon$ for all $t \in [-M,M]$.
By picking $\kappa=\kappa_n$ for some sufficiently large $n$, we have a c.p.c.~map satisfying the following for all $t\in[-M,M]$, for all $a\in \mathcal{F}_A$ and for all $d_1,d_2\in \mathcal{F}_D$:
\begin{enumerate}[label=\textup{({a}\arabic*)}]
\item $\|\kappa(1)\alpha_t(a)-\alpha_t(a)\|\leq\varepsilon$; \label{eqa1}
\item $\|\bigl( \kappa(d_1)\kappa(d_2)-\kappa(d_1d_2) \bigr) \alpha_t(a)\|\leq\varepsilon$; \label{eqa2}
\item $\|[\alpha_t(a),\kappa(d_1)]\|\leq\varepsilon$. \label{eqa3}
\end{enumerate}
We choose inductively c.p.c.~maps $\kappa^{(i,l)}: D\to A$ for $i=0,1$ and for $l=0,1,\dots,d$ satisfying the above conditions and such that we also have
\begin{enumerate}[label=\textup{({a}\arabic*)}, resume]
\item $\|[\alpha_t\circ\kappa^{(i,l)}(d_1),\kappa^{(i',l')}(d_2)]\|\leq\varepsilon$ \label{eqa4}
\end{enumerate}
for all $t\in [-2M,2M]$, for all $d_1,d_2\in \mathcal{F}_D$ and whenever $(i,l)\neq (i',l')$.
Combining the properties of the maps $\kappa^{(i,l)}$ and $\varphi^{(i)}$, we see that the following hold for all $i,i'=0,1$ and for all $l,l'=0,\dots,d$ with $(i',l')\neq (i,l)$, for all $t\in [-T,T]$, for all $a\in \mathcal{F}_A$, and for all $d_1,d_2\in \mathcal{F}_D$:
\begin{enumerate}[label=\textup{({b}\arabic*)}]
\item $\displaystyle \Big\| \Big(1- (\varphi^{(0)}\circ\kappa^{(0,l)}(1) + \varphi^{(1)}\circ\kappa^{(1,l)}(1) ) \Big) \cdot (1_{C({\mathbb R}/M{\mathbb Z})}\otimes a ) \Big\|\leq 2\varepsilon$; \label{eqb1}
\item $\| (\lambda_t\otimes\alpha_t)\circ\varphi^{(i)}\circ\kappa^{(i,l)}-\varphi^{(i)}\circ\kappa^{(i,l)}\|\leq\varepsilon$. \label{eqb2}
\end{enumerate}
Here, \ref{eqb1} follows from \ref{eqa1} together with the identity $h_0+h_1=1$, and condition \ref{eqb2} follows from \eqref{eq:phii}.
Consider the canonical $\tilde{\alpha}_\infty\otimes\alpha - \alpha_\infty$ equivariant $^{*}$-homomorphism
\[
\theta: F_\infty^{(\alpha)}(A)\otimes_{\max} A\to A_\infty^{(\alpha)}
\]
from Remark~\ref{F(A)}. Since $d=\dim^{\mathrm{c}}_{\mathrm{Rok}}(\alpha)<\infty$, we can choose $\lambda-\tilde{\alpha}_\infty$ equivariant c.p.c.~order zero maps
$$
\mu^{(0)},\mu^{(1)},\dots,\mu^{(d)}\colon C({\mathbb R}/M{\mathbb Z})\to F_\infty^{(\alpha)}(A)
$$
with pairwise commuting images and such that $\mu^{(0)}(1)+\dots+\mu^{(d)}(1)=1$.
For $i=0,1$ and for $l=0,\dots,d$, we define c.p.c.~maps $\psi^{(i,l)}: D\to A_\infty^{(\alpha)}$ as the composition
\[
\xymatrix@C+3mm{
D \ar[r]^{\kappa^{(i,l)}} \ar@/_2pc/[rrrr]_{\psi^{(i,l)}}
& A \ar[r]^(0.27){\varphi^{(i)}}
& C({\mathbb R}/M{\mathbb Z}) \otimes A \ar[r]^(0.5){\mu^{(l)}\otimes\mathrm{id}_A}
& F_\infty^{(\alpha)}(A)\otimes_{\max} A \ar[r]^(0.65){\theta}
& A_\infty^{(\alpha)}.
}
\]
By construction, for all $a\in A$ we have
\begin{equation} \label{eq:sum-one}
\sum_{l=0}^d \theta\circ(\mu^{(l)}\otimes\mathrm{id}_A)(1_{C({\mathbb R}/M{\mathbb Z})}\otimes a)=\sum_{l=0}^d \mu^{(l)}(1)\cdot a = a \, .
\end{equation}
We claim now that the maps $\psi^{(i,l)}$ satisfy the following properties, for $i,i'=0,1$ and for $l,l'=0,1,\dots,d$ with $(i,l)\neq (i',l')$, for all $t\in [-T,T]$, for all $a\in \mathcal{F}_A$, and for all $d_1,d_2\in \mathcal{F}_D$:
\begin{enumerate}[label=\textup{({c}\arabic*)}]
\item $\displaystyle \Big\| \Big( 1-\sum_{l=0}^d \big( \psi^{(0,l)}(1) + \psi^{(1,l)}(1) \big) \Big) a \Big\|\leq 2(d+1)\varepsilon$; \label{eqc1}
\item $\|\alpha_{\infty,t}\circ\psi^{(i,l)}-\psi^{(i,l)}\|\leq\varepsilon$; \label{eqc2}
\item $\| \bigl( \psi^{(i,l)}(d_1)\psi^{(i,l)}(d_2)-\psi^{(i,l)}(d_1d_2)\psi^{(i,l)}(1) \bigr) a\|\leq 6\varepsilon$; \label{eqc3}
\item $\|[a,\psi^{(i,l)}(d_1)]\|\leq \varepsilon$; \label{eqc4}
\item $\|[\psi^{(i,l)}(d_1),\psi^{(i',l')}(d_2)]\|\leq \varepsilon$. \label{eqc5}
\end{enumerate}
Property \ref{eqc1} follows from \ref{eqb1} and \eqref{eq:sum-one} and \ref{eqc2} follows from \ref{eqb2}.
As for \ref{eqc3}, notice first that since $\|ea - a\|<\varepsilon$, it suffices to prove the claim for $a = e$ and $4\varepsilon$ in place of $6\varepsilon$. Note that since $e$ is almost $\alpha$-invariant, we have $\|\varphi^{(i)}(e) - h_i \otimes e\|<\varepsilon$ for $i=0,1$. Furthermore, for any $x \in A$ and for $i=0,1$, we have
\begin{equation}
\label{eqn-xe}
\|\varphi^{(i)}(xe) - \varphi^{(i)}(x) \cdot (1 \otimes e)\|\leq \varepsilon\|x\| \, .
\end{equation}
Note that if $y \in C({\mathbb R}/M{\mathbb Z}) \otimes A$ and $a \in A$ then
\begin{equation}
\label{eqn-eta-semi-multiplicative}
(\mu^{(l)}\otimes\mathrm{id}_A)(y \cdot (1_{C({\mathbb R}/M{\mathbb Z})} \otimes a)) = (\mu^{(l)}\otimes\mathrm{id}_A)(y) \cdot (1_{F_\infty^{(\alpha)}(A)} \otimes a) \, .
\end{equation}
This follows from the fact that $\mu^{(l)}\otimes\mathrm{id}_A$ is a bounded linear map, and is seen by first verifying the formula on elementary tensors.
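Indeed, for an elementary tensor $y = f \otimes b$ with $f \in C({\mathbb R}/M{\mathbb Z})$ and $b \in A$, both sides of \eqref{eqn-eta-semi-multiplicative} equal
\[
\mu^{(l)}(f) \otimes ba \, ,
\]
and the general case follows by linearity, continuity and density of the span of elementary tensors.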
In particular, for any $d \in D$ we have
\[
\psi^{(i,l)}(d)e = \theta \circ (\mu^{(l)}\otimes\mathrm{id}_A)(\varphi^{(i)} \circ \kappa^{(i,l)}(d) \cdot (1 \otimes e)) \, .
\]
We thus have
\[
\arraycolsep=0.7mm
\def2.2{1.5}
\begin{array}{cl}
\multicolumn{2}{l}{ \| \big( \psi^{(i,l)}(d_1)\psi^{(i,l)}(d_2)-\psi^{(i,l)}(d_1d_2)\psi^{(i,l)}(1) \big) e\| }
\\
\stackrel{\eqref{eqn-eta-semi-multiplicative}}{=} & \big\| \theta \left ( (\mu^{(l)}\otimes\mathrm{id}_A) (\varphi^{(i)} ( \kappa^{(i,l)}(d_1) )) \cdot (\mu^{(l)}\otimes\mathrm{id}_A) (\varphi^{(i)} ( \kappa^{(i,l)}(d_2)) \cdot (1\otimes e)) \right )
\\
& \quad -\theta \left ((\mu^{(l)}\otimes\mathrm{id}_A) (\varphi^{(i)} ( \kappa^{(i,l)}(d_1d_2))) \right.
\\
& \left. \quad \quad \cdot (\mu^{(l)}\otimes\mathrm{id}_A)(\varphi^{(i)} ( \kappa^{(i,l)}(1)) \cdot (1 \otimes e)) \right ) \big\|
\\
\leq & \| (\mu^{(l)}\otimes\mathrm{id}_A) (\varphi^{(i)} ( \kappa^{(i,l)}(d_1) )) \cdot(\mu^{(l)}\otimes\mathrm{id}_A) (\varphi^{(i)} ( \kappa^{(i,l)}(d_2)) \cdot (1\otimes e))
\\
& \quad -(\mu^{(l)}\otimes\mathrm{id}_A) (\varphi^{(i)} ( \kappa^{(i,l)}(d_1d_2))) \\
& \quad \quad \cdot(\mu^{(l)}\otimes\mathrm{id}_A)(\varphi^{(i)} ( \kappa^{(i,l)}(1)) \cdot (1 \otimes e)) \|
\\
\stackrel{(\text{L}\ref{lemma:xy-x'y'})}{\leq} & \| \bigl( \varphi^{(i)} ( \kappa^{(i,l)}(d_1) ) \cdot \varphi^{(i)} ( \kappa^{(i,l)}(d_2)) \\
& \quad -\varphi^{(i)} ( \kappa^{(i,l)}(d_1d_2)) \cdot \varphi^{(i)} ( \kappa^{(i,l)}(1)) \bigr) \cdot (1 \otimes e)\|
\\
\stackrel{\eqref{eqn-xe}}{\leq} &
\| \varphi^{(i)}( \kappa^{(i,l)}(d_1)) \cdot \varphi^{(i)}(\kappa^{(i,l)}(d_2)e)-\varphi^{(i)} ( \kappa^{(i,l)}(d_1d_2)) \cdot \varphi^{(i)} ( \kappa^{(i,l)}(1)e) \| \\
& + 2\varepsilon
\\
\stackrel{\ref{eqa1}}{\leq} &
\| \varphi^{(i)} ( \kappa^{(i,l)}(d_1)) \cdot \varphi^{(i)}(\kappa^{(i,l)}(d_2)e)-\varphi^{(i)} ( \kappa^{(i,l)}(d_1d_2)) \cdot \varphi^{(i)} ( e) \| + 3\varepsilon
\\
\stackrel{(\text{L}\ref{lemma:xy-x'y'})}{\leq} &
\| \kappa^{(i,l)}(d_1) \kappa^{(i,l)}(d_2)e - \kappa^{(i,l)}(d_1d_2) e \| + 3\varepsilon \\
\stackrel{\ref{eqa2}}{\leq} & 4\varepsilon \, .
\end{array}
\]
This shows \ref{eqc3}. As for \ref{eqc4}, we can use \eqref{eqn-eta-semi-multiplicative} and get
\[
[a,\psi^{(i,l)}(d)] = \theta ( (\mu^{(l)}\otimes\mathrm{id}_A) ([1 \otimes a,\varphi^{(i)} (\kappa^{(i,l)}(d))])) \, .
\]
Thus, it suffices to show that for all $a \in \mathcal{F}_A$ and all $d \in \mathcal{F}_D$, we have $\|[1 \otimes a,\varphi^{(i)} (\kappa^{(i,l)}(d))]\| \leq\varepsilon$. But this follows directly from \ref{eqa3} and the definition of the map $\varphi^{(i)}$.
Lastly, for \ref{eqc5}, we check the following:
\[
\def2.2{1.5}
\begin{array}{cl}
\multicolumn{2}{l}{ \|[\psi^{(i,l)}(d_1),\psi^{(i',l')}(d_2)]\| } \\
\leq & \|[(\mu^{(l)}\otimes\mathrm{id}_A)(\varphi^{(i)}(\kappa^{(i,l)}(d_1))),
(\mu^{(l')}\otimes\mathrm{id}_A)(\varphi^{(i')}(\kappa^{(i',l')}(d_2)))]\|
\\
\stackrel{(\text{L}\ref{lemma:commutators})}{\leq} &
\displaystyle \max_{t_1, t_2\in{\mathbb R}} \|[\varphi^{(i)}(\kappa^{(i,l)}(d_1))(t_1+M{\mathbb Z}),
\varphi^{(i')}(\kappa^{(i',l')}(d_2))(t_2+M{\mathbb Z})]\|
\\
\stackrel{\eqref{eq:phi}}{\leq} &
\displaystyle \max_{-M\leq t_1,t_2\leq M} \big\| \big[ \alpha_{t_1} \big( \kappa^{(i,l)}(d_1) \big) ,\alpha_{t_2} \big( \kappa^{(i',l')}(d_2) \big) \big] \big\| \stackrel{\ref{eqa4}}{\leq} \varepsilon \, .
\end{array}
\]
Since $D$ and $A$ are separable, we can apply a standard reindexation argument as follows.
By choosing an increasing sequence of finite subsets $\mathcal{F}_D$ with dense union in the unit ball of $D$, finite subsets $\mathcal{F}_A$ with dense union in the unit ball of $A$, a decreasing sequence of positive numbers $\varepsilon_n$ tending to $0$ and an increasing sequence of positive numbers $T_n$ tending to infinity,
we successively choose c.p.c.~maps from $D$ to $A$ satisfying the conditions~\ref{eqc1} to \ref{eqc5}, and thus obtain c.p.c.~maps $\widehat{\psi}^{(i,l)}: D\to A_\infty^{(\alpha)}$ satisfying
\begin{itemize}
\item $\displaystyle a = \sum_{l=0}^d \left (\widehat{\psi}^{(0,l)}(1) +\widehat{\psi}^{(1,l)}(1) \right )a$;
\item $\alpha_{\infty,t}\circ\widehat{\psi}^{(i,l)} = \widehat{\psi}^{(i,l)}$;
\item $\widehat{\psi}^{(i,l)}(d_1)\widehat{\psi}^{(i,l)}(d_2)a = \widehat{\psi}^{(i,l)}(d_1d_2)\widehat{\psi}^{(i,l)}(1)a$;
\item $[a,\widehat{\psi}^{(i,l)}(d_1)]=0$;
\item $[\widehat{\psi}^{(i,l)}(d_1),\widehat{\psi}^{(i',l')}(d_2)]=0$
\end{itemize}
for $i,i'=0,1$, and for $l,l'=0,1,\dots,d$ with $(i,l)\neq (i',l')$, for all $t\in{\mathbb R}$, for all $a\in A$ and for all $d_1,d_2\in D$.
For $i=0,1$ and $l=0,\dots,d$, consider the maps $\widetilde{\psi}^{(i,l)}: D\to F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}$ given by $\widetilde{\psi}^{(i,l)}(d)= \widehat{\psi}^{(i,l)}(d)+\operatorname{Ann}(A,A_\infty)$ for all $d\in D$. Because of the properties of the maps $\widehat{\psi}^{(i,l)}$ listed above, these yield well-defined c.p.c.~order zero maps from $D$ to $F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}$ with pairwise commuting images and satisfying the equation $1=\sum_{i=0,1} \sum_{l=0}^d \widetilde{\psi}^{(i,l)}(1)$. This concludes the proof.
\end{proof}
\begin{Exl}
Suppose that $X$ is a locally compact, metrizable space with finite covering dimension. Let $D$ be a strongly self-absorbing $C^*$-algebra. Suppose that $A$ is a separable, unital and $D$-absorbing $C^*$-algebra with primitive ideal space $X$. If $\alpha$ is a flow on $A$, then it induces a topological flow $\Phi$ on $X$. If $\Phi$ is free, then the restriction of $\alpha$ to the center of $A$, $Z(A) \cong C(X)$, has finite Rokhlin dimension by Corollary~\ref{cor:top-flow-Rokhlin-estimate} below. Since the resulting Rokhlin elements can be chosen in the (central) image of $C(X)$, they automatically commute, so $\alpha$ has finite Rokhlin dimension with commuting towers, and by Theorem~\ref{Thm:Z-absorption}, $A \rtimes_{\alpha} {\mathbb R}$ is $D$-absorbing as well.
\end{Exl}
We conclude this section with some further remarks.
\begin{Rmk}
The results of this section generalize to ${\mathbb R}^n$-actions with the analogous notion of Rokhlin dimension with commuting towers, in a straightforward manner (see also Remark~\ref{Rn-dimnuc}). One would need to generalize Lemma~\ref{technical dimrokc}. This works in a similar way with $2^n(d+1)$ maps instead of $2(d+1)$. Specifically, the functions $h_0=h$ and $h_1=\lambda_{M/2}(h)$ from the proof have to be replaced by $2^n$ functions of the form $h_{i_1}\otimes\dots\otimes h_{i_n} \in C({\mathbb R}/M{\mathbb Z})^{\otimes n} \cong C({\mathbb R}^n/M{\mathbb Z}^n)$ for $(i_1,\dots,i_n)\in\{0,1\}^n$. The rest of the argument is, for the most part, identical.
\end{Rmk}
\begin{Rmk}
This same method of proof can be used to obtain the analogous result for actions of ${\mathbb Z}$: if $D$ is strongly self-absorbing, $A$ is a separable $D$-absorbing $C^*$-algebra and $\alpha$ is an automorphism of $A$ that has finite Rokhlin dimension with commuting towers, then $A \rtimes_{\alpha} {\mathbb Z}$ is $D$-absorbing as well. This generalizes \cite[Theorem 5.8]{HWZ} and \cite[Theorem 3.2]{hirshberg-phillips}, where this was proven for the special case of $D = \mathcal{Z}$. This is made rigorous in \cite[Section 10]{SWZ} more generally for actions of residually finite groups with finite dimensional box spaces.
\end{Rmk}
\begin{Rmk}
In Lemma~\ref{technical dimrokc} and its proof, the commuting tower assumption is only needed in order to get pairwise commuting images of the c.p.c.~order zero maps in the statement. Repeating the same argument as in the proof of Lemma~\ref{technical dimrokc} without the commuting tower assumption yields the following potentially useful observation.
Let $A$ be a separable $C^*$-algebra and let $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ be a flow with $d=\dim_{\mathrm{Rok}}(\alpha)<\infty$. Let $D$ be a separable, nuclear and unital $C^*$-algebra such that there is a unital $^{*}$-homomorphism from $D$ to $F_\infty(A)$. Then there exist c.p.c.~order zero maps $\psi^{(l)}: D\to F_\infty^{(\alpha)}(A)^{\tilde{\alpha}_\infty}$ for $l=1,\dots,2(d+1)$ such that $1=\sum_{l=1}^{2(d+1)}\psi^{(l)}(1)$.
For example, if in the situation described above, $D$ is strongly self-absor\-bing and $A$ is $D$-absorbing, then one can use this to show that $F_\infty(A\rtimes_\alpha{\mathbb R})$ has no characters. This is sufficient for deducing weaker structural properties for $A\rtimes_\alpha{\mathbb R}$, such as the strong corona factorization property; see \cite{KirchbergRordam14}.
\end{Rmk}
\section{Stability of crossed products}
\label{Section:stability}
In this section we show that for every flow with finite Rokhlin dimension, the associated crossed product $C^*$-algebra is stable, i.e., it tensorially absorbs the algebra of compact operators $K(H)$ on a separable infinite-dimensional Hilbert space. In fact, we will see that a much weaker consequence of finite Rokhlin dimension, which makes sense for flows on non-separable $C^*$-algebras as well, is sufficient for this purpose. We will use a local characterization of stability developed in \cite{HjelmborgRordam1998stability} for $\sigma$-unital $C^*$-algebras:
\begin{lem}\label{lem:local-stability}
A $\sigma$-unital $C^*$-algebra $B$ is stable if and only if for any $b \in B_+$ and any $\varepsilon > 0$, there exists $y \in B$ such that $\| yy^* - b \| <\varepsilon $ and $\| y^2 \| < \varepsilon$.
\end{lem}
\begin{proof}
The ``only if'' part is straightforward to verify while the ``if'' part follows directly from \cite[Theorem~2.1 and Proposition~2.2]{HjelmborgRordam1998stability}.
\end{proof}
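To see what the criterion asks for in a concrete case (purely as an illustration): in $B = K(H)$ with matrix units $(e_{ij})$ and $b = e_{11}$, the element $y = e_{12}$ satisfies $yy^* = e_{11} = b$ and $y^2 = 0$ exactly; stability of $B$ provides such ``shift-like'' elements approximately for arbitrary positive $b$.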
Let us now consider a consequence of finite Rokhlin dimension for flows on separable $C^*$-algebras. The only difference between the following and condition~(\ref{Lemma:def-dimrok-lift-item-5a}) in Lemma~\ref{Lemma:def-dimrok-lift}(\ref{alternativedefinitionofRokhlindimensionforflows}) is the extra exponential factor.
\begin{lem}\label{lem:alternativedefinitionofRokhlindimensionforflows-variant}
Let $A$ be a separable $C^*$-algebra with a flow $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$ of finite Rokhlin dimension $d$. Then for any $ p, T, \delta > 0 $ and any finite set $\mathcal{F} \subset A$, there are contractions $ x^{(0)}, \dots, x^{(d)} \in A $ satisfying conditions~(\ref{Lemma:def-dimrok-lift-item-5b}) - (\ref{Lemma:def-dimrok-lift-item-5d}) in Lemma~\ref{Lemma:def-dimrok-lift}(\ref{alternativedefinitionofRokhlindimensionforflows}) together with
\begin{enumerate}[label={(\ref*{Lemma:def-dimrok-lift-item-5a}')}]
\item
\label{Lemma:def-dimrok-lift-item-5a'} \quad
$ \left\| a (\alpha_{t}( x^{(l)} ) - e^{ip (l+1) t} \cdot x^{(l)}) \right\| \le \delta $
\end{enumerate}
for $ l = 0, \dots, d $, for all $t \in [ -T, T] $ and for all $a \in \mathcal{F}$.
\end{lem}
\begin{proof}
For any $s \in {\mathbb R}$, we consider the action $\lambda^{(s)} \colon {\mathbb R} \curvearrowright C({\mathbb R}/M{\mathbb Z})$ given by $\lambda^{(s)}_t (f) (x) = f(x - st)$ for $f \in C({\mathbb R}/M{\mathbb Z})$. Recall that $\lambda^{(1)} = \lambda$ in our previous notation. Setting $M = 2\pi / p$, we apply Remark~\ref{Rmk:Rokhlin-dim} and find $\lambda - \tilde{\alpha}_\infty$ equivariant c.p.c.~order zero maps
\[
\mu^{(0)}, \ldots, \mu^{(d)} \colon C({\mathbb R}/M{\mathbb Z}) \to F_\infty^{(\alpha)}(A)
\]
with $\sum_{j=0}^d \mu^{(j)}(1) = 1$. By composing each c.p.c.~order zero map $\mu^{(l)}$ with the $\lambda^{(l+1)} - \lambda$ equivariant unital endomorphism $C({\mathbb R}/M{\mathbb Z}) \to C({\mathbb R}/M{\mathbb Z})$ mapping $f$ to $f( (l+1) \cdot -)$, we obtain c.p.c.~order zero maps
\[
\widetilde{\mu}^{(0)}, \ldots, \widetilde{\mu}^{(d)} \colon C({\mathbb R}/M{\mathbb Z}) \to F_\infty^{(\alpha)}(A)
\]
with $\sum_{j=0}^d \widetilde{\mu}^{(j)}(1) = 1$ and each map $\widetilde{\mu}^{(l)}$ being $\lambda^{(l+1)} - \tilde{\alpha}_\infty$ equivariant. For $l = 0, \ldots, d$, let $\widetilde{x}^{(l)}$ be the image of the standard generator $z = [ x \mapsto e^{2\pi i \frac{x}{M}}]$ of $C({\mathbb R}/M{\mathbb Z})$ under $\widetilde{\mu}^{(l)}$. This yields normal contractions $\widetilde{x}^{(0)},\dots,\widetilde{x}^{(d)}\in F_\infty^{(\alpha)}(A)$ satisfying the equations $\tilde{\alpha}_{\infty,t}(\widetilde{x}^{(l)}) = e^{2\pi i t (l+1)/ M} \cdot \widetilde{x}^{(l)} = e^{ip(l+1)t} \cdot \widetilde{x}^{(l)}$ and $\widetilde{x}^{(0)*}\widetilde{x}^{(0)}+\dots+\widetilde{x}^{(d)*}\widetilde{x}^{(d)}=1$, where the second equality uses $M = 2\pi/p$. Directly unraveling the definition of $F_\infty^{(\alpha)}(A)$ as in Lemma~\ref{Lemma:def-dimrok-lift} leads to the desired conclusion.
\end{proof}
For the main result of this section concerning the stability of crossed products, we will in fact only need to require conditions \ref{Lemma:def-dimrok-lift-item-5a'} and \eqref{Lemma:def-dimrok-lift-item-5b}. This gives rise to the following ad-hoc definition:
\begin{Def} \label{def:adhoc-dimrok}
Let $\alpha: {\mathbb R}\to\mathrm{Aut}(A)$ be a flow on a $C^*$-algebra and let $d$ be a natural number. We say that $\alpha$ \emph{admits a $d$-dimensional eigenframe} if the following holds:
For every $p,T,\delta>0$ and every finite set $\mathcal{F}\subset A$, there exist contractions $x^{(0)},\dots,x^{(d)}\in A$ such that
\[
\left\| a \big( \alpha_{t}( x^{(l)} ) - e^{ip (l+1) t} \cdot x^{(l)} \big) \right\| \le \delta
\]
and
\[
\Bigg\| a - a\cdot \sum_{l=0}^d x^{(l)} x^{(l)*} \Bigg\| \leq \delta
\]
for all $ l = 0, \dots, d $, for all $t \in [ -T, T] $ and for $a \in \mathcal{F}$.
\end{Def}
\begin{Rmk} \label{Rmk:dimrok-implies-adhoc-dimrok}
Lemma \ref{lem:alternativedefinitionofRokhlindimensionforflows-variant}
in particular shows that any flow with Rokhlin dimension at most $d$ on a separable $C^*$-algebra admits a $d$-dimensional eigenframe. We note, however, that one can easily construct examples of flows $\alpha$ admitting a $d$-dimensional eigenframe such that $\alpha_t$ is inner for every $t\in{\mathbb R}$. By Proposition \ref{Prop:dimrok-restricted-flow}, such an example is far from having finite Rokhlin dimension.
\end{Rmk}
In what follows, we make use of the canonical embeddings of $A$ and $C^*({\mathbb R})$ into the multiplier algebra of $A \rtimes_\alpha {\mathbb R}$, so that for any $h,g \in C_c({\mathbb R}, A) \subset A \rtimes_\alpha {\mathbb R}$, any $a \in A$, any $f \in C_c({\mathbb R}) \subset C^*({\mathbb R})$ and any $t \in {\mathbb R}$, we have
\begin{align*}
& (a h) (t) = a h(t) \;, && (h a) (t) = h(t) \alpha_t(a) \;, \\
& (h g) (t) = \int_{{\mathbb R}} h(s) \alpha_s (g (t-s) ) \, ds \;, && (f h) (t) = \int_{{\mathbb R}} f(s) h(t-s) \, ds \;, \\
& (a f) (t) = a f(t) \;, && (f a) (t) = f(t) \alpha_t(a) \;.
\end{align*}
The proof of the next lemma is standard and we omit it.
\begin{lem}\label{lem:approx-unit}
Let $A$ be a $\sigma$-unital $C^*$-algebra with a flow $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$. Then $A \rtimes_\alpha {\mathbb R}$ is $\sigma$-unital and there is a countable approximate unit of the form $(\sqrt{a_j} g_j \sqrt{a_j} )_{j = 1}^\infty$, where $(g_j )_{j = 1}^\infty$ is a countable approximate unit of $C^*({\mathbb R})$
consisting of functions in the convolution subalgebra $L^1({\mathbb R})$ whose Fourier transform has compact support,
$(a_j)_{j = 1}^\infty$ is a countable approximately invariant approximate unit of $A$, and they satisfy $[a_j, g_j] \to 0$ as $j \to \infty$.
\qed
\end{lem}
\begin{thm}\label{thm:stability}
Let $A$ be a $\sigma$-unital $C^*$-algebra with a flow $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$. If $\alpha$ admits a $d$-dimensional eigenframe, then $A \rtimes_\alpha {\mathbb R}$ is stable.
\end{thm}
\begin{proof}
Throughout this proof, when we refer to $L^1({\mathbb R})$, we think of it as the dense convolution algebra inside of $C^*({\mathbb R})$, which in turn is identified with its canonical copy inside of the multiplier algebra of $A \rtimes_{\alpha} {\mathbb R}$.
Let $d \in {\mathbb N}$ be a natural number as required by Definition \ref{def:adhoc-dimrok}.
Given $b \in (A \rtimes_\alpha {\mathbb R})_+$ with $\|b\|=1$ and $\varepsilon > 0$, we shall find $y \in A \rtimes_\alpha {\mathbb R}$ satisfying the conditions in Lemma~\ref{lem:local-stability} with $\varepsilon$ replaced by $(6d + 10) \varepsilon $. For convenience, let us assume from now on that $\varepsilon$ was chosen sufficiently small so that $(6d+10)\varepsilon \leq 35$.
We let $\lambda \colon \widehat{{\mathbb R}} \to \mathrm{Aut}(C_0(\widehat{{\mathbb R}}))$ denote the shift flow, $\lambda_t(f)(x) = f(x-t)$. Under the identification of $C_0(\widehat{{\mathbb R}})$ with $C^*({\mathbb R})$ via the Fourier transform (with a suitable choice of normalization), the action $\lambda$ corresponds to the modulation action $\mu \colon {\mathbb R} \to \mathrm{Aut}(C^*({\mathbb R}))$ given, for elements $h \in L^1({\mathbb R}) \subset C^*({\mathbb R})$, by $\mu_t(h)(s) = e^{2\pi i ts}h(s)$.
We now make the following successive choices.
\begin{enumerate}[label=\textup{({D}\arabic*)}]
\item\label{proof:them:stability-D1} Using Lemma~\ref{lem:approx-unit}, we choose
$g \in L^1({\mathbb R})$ such that $\widehat{g} \in C_c(\widehat{{\mathbb R}})_{+, \leq 1}$ and $a \in A_{+, \leq 1}$ such that $g a \sqrt{b}$, $\sqrt{b} g a$ and $a \sqrt{b} g$ are all no more than $\varepsilon$ away from $\sqrt{b}$.
Moreover, we may assume $\|[a,g]\|\leq\varepsilon$. We can furthermore assume that $\|g\|_{C^*({\mathbb R})} = 1$, and therefore in particular, the
$L^1$ norm of $g$, $\|g\|_1$, is at least 1.
\item\label{proof:them:stability-D2} Let $p>0$ be such that $\widehat{g}$ is supported within $\left(-\frac{p}{2}, \frac{p}{2}\right) \subset \widehat{{\mathbb R}}$. Note that ${\lambda}_{(l+1)p} (\widehat{g}) \in C_c(\widehat{{\mathbb R}})$ is supported within $\left(\frac{2l+1}{2} p, \frac{2l + 3}{2} p \right)$ for each $l \in \{0, \ldots, d\}$, and thus $\left\{ \mu_{(l+1)p} (g) \right\}_{l=0,\dots,d}$ are mutually orthogonal, and each of them is orthogonal to $g$.
\item\label{proof:them:stability-D3}
Choose a compactly supported positive definite function $\widetilde{g} \in L^1({\mathbb R})$ such that $\|\widetilde{g} - g\|_1 < \varepsilon$ and $\|\widetilde{g}\|_{C^*({\mathbb R})} = 1$, and in particular, $\|\widetilde{g}\|_1\geq 1$ as well. Note that $\|\widetilde{g} - g\|_{C^*({\mathbb R})} \leq \|\widetilde{g} - g\|_1 < \varepsilon$.
\item\label{proof:them:stability-D4} Let $T > 0$ be such that $\widetilde{g}$ is supported within $[-T, T] \subset {\mathbb R}$. Set $\delta = \varepsilon / \big\| \widetilde{g} \big\|_1$.
\item\label{proof:them:stability-D6} Choose contractions $ x^{(0)}, \dots, x^{(d)} \in A $ satisfying the conditions in Definition \ref{def:adhoc-dimrok} with respect to the parameters $2\pi p, T, \delta$ and $\mathcal{F} = \{a\}$; in particular, $\left\| a \big( \alpha_{t}( x^{(l)} ) - e^{2\pi i p (l+1) t} \cdot x^{(l)} \big) \right\| \le \delta$ for all $l$ and all $t \in [-T,T]$.
\end{enumerate}
We claim that for any $l \in \{0, \ldots, d\}$, we have
\begin{equation}\label{eq:thm:stability}
\left\| a \left( g \, x^{(l)} - x^{(l)} \, \mu_{(l+1)p} (g) \right) \right\| \leq 3 \varepsilon \; .
\end{equation}
Indeed, since $\| g - \widetilde{g}\|_{C^*({\mathbb R})} \leq \varepsilon$ by \ref{proof:them:stability-D3} and $\| x^{(l)}\| \leq 1$, replacing $g$ by $\widetilde{g}$ in both of its occurrences costs at most $2\varepsilon$, so it suffices to show
\[
\left\| a \left( \widetilde{g} \, x^{(l)} - x^{(l)} \, \mu_{(l+1)p} (\widetilde{g}) \right) \right\| \leq \varepsilon \; .
\]
To this end, we note that for any $s \in {\mathbb R}$, we have
\[
\left( a \widetilde{g} \, x^{(l)} \right) (s) = \widetilde{g} (s) a \cdot \alpha_{s} (x^{(l)}) \in A
\]
and
\begin{align*}
\left( a \, x^{(l)} \, \mu_{(l+1)p} (\widetilde{g}) \right) (s) = a \, \mu_{(l+1)p} (\widetilde{g}) (s) \cdot x^{(l)} = e^{2 \pi i (l+1)p s} \widetilde{g} (s) a x^{(l)} \; ,
\end{align*}
and thus
\[
\def2.2{2}
\begin{array}{cl}
\multicolumn{2}{l} { \displaystyle\left\| a \left( \widetilde{g} \, x^{(l)} - x^{(l)} \, \mu_{(l+1)p} (\widetilde{g}) \right) \right\| }\\
\leq &\displaystyle \left\| \left[ s \mapsto \left( a \left( \widetilde{g} \, x^{(l)} - x^{(l)} \, \mu_{(l+1)p} (\widetilde{g}) \right)\right) (s) \right] \right\|_{L^1} \\
\stackrel{\ref{proof:them:stability-D4}}{ \leq} &\displaystyle \int_{-T}^T \left\| \widetilde{g} (s) \, a \cdot \left( \alpha_{s} (x^{(l)}) - e^{2 \pi i (l+1)p s} \cdot x^{(l)} \right) \right\| \, d s \\
\stackrel{\ref{proof:them:stability-D6}}{ \leq} & \displaystyle \delta \int_{-T}^T \left| \widetilde{g} (s) \right| \, d s ~ \stackrel{\ref{proof:them:stability-D4}}{\leq}~ \varepsilon \; ,
\end{array}
\]
which proves \eqref{eq:thm:stability}.
Now define
\[
y = \sqrt {b} a \left( \sum_{l=0}^d x^{(l)} \cdot \mu_{(l+1)p} (g) \right) \; .
\]
We have
\[
\begin{array}{cl}
\multicolumn{2}{l}{\left\| y y^* - b \right\| }\\
= & \displaystyle \left\| \sqrt {b} a \left( \sum_{l=0}^d x^{(l)} \cdot \mu_{(l+1)p} (g) \right) \left( \sum_{k=0}^d \mu_{(k+1)p} (g) \cdot x^{(k)*} \right) a \sqrt {b} - b \right\| \\
\stackrel{\ref{proof:them:stability-D2}}{=} & \displaystyle \left\| \sqrt {b} a \left( \sum_{l=0}^d x^{(l)} \cdot \mu_{(l+1)p} (g) \cdot \mu_{(l+1)p} (g) \cdot x^{(l)*} \right) a \sqrt {b} - b \right\| \\
\stackrel{(\ref{eq:thm:stability})}{\leq} & \displaystyle 6(d+1) \varepsilon + \left\| \sqrt {b} a \left( \sum_{l=0}^d \left(g \, x^{(l)} \right) \cdot \left(g \, x^{(l)} \right)^* \right) a \sqrt {b} - b \right\| \\
= & \displaystyle 6(d+1)\varepsilon+ \left\| \sqrt {b} a g \left( \sum_{l=0}^d x^{(l)} x^{(l)*} \right) g a \sqrt {b} - b \right\| \\
\stackrel{\ref{proof:them:stability-D1}}{\leq} & \displaystyle (6d+7)\varepsilon + \left\| \sqrt {b} g a \left( \sum_{l=0}^d x^{(l)} x^{(l)*} \right) g a \sqrt {b} - b \right\| \\
\stackrel{\ref{proof:them:stability-D6}}{\leq} & \displaystyle (6d+8)\varepsilon + \left\| (\sqrt {b} g a ) ( g a \sqrt {b} ) - b \right\| \\
\stackrel{\ref{proof:them:stability-D1}}{\leq} & (6d + 10) \varepsilon\; .
\end{array}
\]
In particular, we have $ \|y\|^2 \leq 1 + (6d + 10) \varepsilon $. We further have
\[
\begin{array}{cl}
\multicolumn{2}{l} {\displaystyle \left\| y^2 \right\|} \\
\leq & \displaystyle \left\| \sqrt {b} a \left( \sum_{l=0}^d x^{(l)} \cdot \mu_{(l+1)p} (g) \right) \sqrt {b} a \left( \sum_{l=0}^d x^{(l)} \cdot \mu_{(l+1)p} (g) \right) \right\| \\
\stackrel{\ref{proof:them:stability-D1}}{ \leq} & \displaystyle \left\| \sqrt {b} a \left( \sum_{l=0}^d x^{(l)} \cdot \mu_{(l+1)p} (g) \right) g a \sqrt {b} a \left( \sum_{l=0}^d x^{(l)} \cdot \mu_{(l+1)p} (g) \right) \right\| \\
&\displaystyle + \varepsilon (d+1) \sqrt{ 1 + (6d + 10) \varepsilon } \\
\stackrel{\ref{proof:them:stability-D2}}{ \leq} & 0 + \varepsilon (d+1) \sqrt{ 1 + 35 } = 6(d+1) \varepsilon \leq (6d+10)\varepsilon \; .
\end{array}
\]
Therefore we have verified the condition in Lemma~\ref{lem:local-stability} with $\varepsilon$ replaced by $(6d + 10) \varepsilon $, and the proof is complete.
\end{proof}
\begin{Cor} \label{cor:stability}
Let $A$ be a separable $C^*$-algebra with a flow $\alpha \colon {\mathbb R} \to \mathrm{Aut}(A)$. If $\alpha$ has finite Rokhlin dimension, then $A \rtimes_\alpha {\mathbb R}$ is stable.
\end{Cor}
\begin{proof}
This follows directly from Remark \ref{Rmk:dimrok-implies-adhoc-dimrok} and Theorem \ref{thm:stability}.
\end{proof}
\begin{Rmk}
Related results concerning stability of crossed products by compact groups with finite Rokhlin dimension will be explored in \cite{GHS}.
\end{Rmk}
\section{The tube dimension}
\label{Section:Tube}
\noindent
In the next two sections, we study topological flows and their induced ${C}^*$-dynamical systems. We shall show that any free flow on a finite dimensional, locally compact and metrizable space induces a one-parameter automorphism group with finite Rokhlin dimension. For this purpose, we will need a few (special cases of) technical results of Bartels, L\"{u}ck and Reich \cite{BarLRei081465306017991882} and later improvements by Kasprowski and R{\"u}ping \cite{Kasprowski-Rueping}.
We will define the notion of tube dimension for a topological flow, whose main purpose is to serve as a purely topological counterpart to Rokhlin dimension. This plays an analogous role to the purely topological variant of Rokhlin dimension for ${\mathbb Z}^d$-actions in \cite{Szabo}. We begin with recalling the definition of a box due to Bartels, L{\"u}ck and Reich.
\begin{defn}[cf.~{\cite[Definition 2.2]{BarLRei081465306017991882}}]
\label{definitionofboxes}
Let $Y$ be a locally compact and metrizable space with a flow $\Phi$.
A \emph{box} (or a \emph{tube}) for $(Y,\Phi)$ is a compact subset $B \subset Y$ such that there exists a real number $ l = l_B $ with the property that for every $y \in B$, there exist real numbers $ a_-(y) \le 0 \le a_+(y)$ and $\varepsilon(y) > 0 $ satisfying
\begin{enumerate}[label=(\roman*)]
\item $l = a_+(y) - a_-(y)$ ;
\item $\Phi_t(y) \in B$ for $t \in [a_-(y), a_+(y) ]$ ;
\item $\Phi_t(y) \not\in B$ for $t \in ( a_-(y) - \varepsilon(y), a_-(y) ) \cup (a_+(y), a_+(y) + \varepsilon(y) )$.
\end{enumerate}
Given such a box $B$, we will implicitly keep track of the following data that it comes with:
\begin{enumerate}
\item the \emph{length} $ l_B $;
\item the maps $a_\pm: B \to {\mathbb{R}^{}}, \ y \mapsto a_\pm(y)$;
\item the topological interior $B^o$, called the \emph{open box};
\item the subset $ S_B = \{ y \in B \ |\ a_-(y) + a_+(y) =0 \}$, called the \emph{central slice} of $B$;
\item the subsets $ \partial_+ B $ and $ \partial_- B $, respectively called the \emph{top} and the \emph{bottom} of $B$, defined by
\[
\partial_\pm B = \{ y \in B \ | \ a_\pm (y) =0 \} = \{ \Phi_{a_\pm(y)} (y) \ |\ y \in S_B \} ;
\]
\item similarly, the \emph{open top} $ \partial_+ B ^o $ and the \emph{open bottom} $ \partial_- B ^o $, defined by
\[
\partial_\pm B^o = \{ \Phi_{a_\pm(y)} (y) \ |\ y \in S_B \cap B^o \} .
\]
\end{enumerate}
\end{defn}
Intuitively speaking, what a box is to a topological flow is like what a Rokhlin tower is to a single homeomorphism, in that they facilitate local trivialization of the action. We use this concept to define a notion of dimension, which is closely related to Rokhlin dimension. A natural term for such a dimension might have been \emph{box dimension}, but this term has been used in the literature for something else. From the perspective of the intuition behind the term, these boxes could have legitimately been named \emph{tubes} as well. Thus, we opted for the term \emph{tube dimension} in what follows.
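As a toy example (not needed in the sequel), consider the horizontal translation flow $\Phi_t(x,s) = (x+t, s)$ on $Y = {\mathbb R}^2$. For any compact subset $S \subset {\mathbb R}$ and any $l > 0$, the set
\[
B = \left[-\tfrac{l}{2}, \tfrac{l}{2}\right] \times S
\]
is a box of length $l$: for $y = (x,s) \in B$ one has $a_\pm(y) = \pm\tfrac{l}{2} - x$, the central slice is $S_B = \{0\} \times S$, and the top and bottom are $\partial_\pm B = \{\pm\tfrac{l}{2}\} \times S$.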
\begin{lem}[cf.~{\cite[Lemma 2.6]{BarLRei081465306017991882}}]
\label{basicpropertiesofboxes}
Let $B \subset Y$ be a box of length $l = l_B$. Then:
\begin{enumerate}
\item the maps
\[
a_\pm: B \to {\mathbb{R}^{}}, \ y \mapsto a_\pm(y)
\]
are continuous;
\item there exists $\varepsilon_B > 0 $ depending only on $B$ such that the numbers $\varepsilon(y)$ appearing in the definition of a box can be chosen so that $ \varepsilon(y) \ge \varepsilon_B$ holds for all $y \in B$;
\item the map
\[
S_B \times \left[ - \frac{l}{2}, \frac{l}{2} \right] \to B , \ (y, t) \mapsto \Phi_t(y)
\]
is a homeomorphism.
\end{enumerate}
\end{lem}
\begin{rmk}
The last statement in the previous lemma can be turned into an alternative definition for boxes: a box is a pair $(S, l)$ for a compact subset $S \subset Y$ and a positive number $l >0$ such that the map
\[
S \times \left[ - \frac{l}{2}, \frac{l}{2} \right] \to Y , \ (y, t) \mapsto \Phi_t(y)
\]
is an embedding.
\end{rmk}
Occasionally we will need to \emph{stretch} a box, as formalized in the following lemma, the proof of which is but a simple exercise based on the definition.
\begin{lem}\label{lemaboutstretchingabox}
Let $B$ be a box and $\displaystyle 0 < L < \frac{\varepsilon_B}{2} $. Then the set $ \Phi_{[-L, L]} (B) $ is also a box with the same central slice and a new length $ l_B + 2 L $.
\end{lem}
\begin{Notation}
\label{def:mult}
Given a topological space $X$ with a collection $\mathcal{U}$ of open subsets of $X$, its \emph{multiplicity} $d=\operatorname{mult}(\mathcal{U})$ is the smallest natural number so that the intersection of any $d+1$ pairwise distinct elements in $\mathcal{U}$ is empty.
\end{Notation}
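For instance, the cover of ${\mathbb R}$ by the intervals $U_n = (n-1, n+1)$ for $n \in {\mathbb Z}$ has multiplicity $2$: consecutive intervals overlap, while any three pairwise distinct intervals have empty intersection.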
In order to study the nuclear dimension of the crossed product ${C}_{0}(Y) \rtimes {\mathbb{R}^{}} $ of a topological flow, we would like to decompose $Y$ in a dimensionally controlled fashion into open subsets such that the flow is trivialized when restricted to each open subset. This is encapsulated in the following definition of the so-called tube dimension.
\begin{defn}\label{def:tube-dimension}
The \emph{tube dimension} of a topological flow $ (Y, \Phi) $, denoted by $ \mathrm{dim}_\mathrm{tube} (\Phi) $, is the smallest natural number $ d $ such that for any $L > 0$ and compact set $ K \subset Y $, there is a collection $ \mathcal{U} $ of open subsets of $Y$ satisfying:
\begin{enumerate}
\item\label{def:tube-dimension-1} for any $ y\in K $, there is $U \in \mathcal{U}$ such that $ \Phi_{[-L, L]}(y) \subset U $; \label{def:boxdim-item1}
\item\label{def:tube-dimension-2} each $ U \in \mathcal{U} $ is contained in a box $B_U$; \label{def:boxdim-item2}
\item\label{def:tube-dimension-3} the multiplicity of $\mathcal{U}$ is at most $d+1$. \label{def:boxdim-item3}
\end{enumerate}
If no such $ d $ exists, we define $ \mathrm{dim}_\mathrm{tube} (\Phi) = \infty $.
\end{defn}
\begin{rmk}\label{rmk:tube-dim-basics}
It is not hard to see that as $L$ gets larger, so must the lengths of the boxes $B_U$: by condition~(\ref{def:tube-dimension-1}), every member of $\mathcal{U}$ that is actually used contains an orbit segment of length $2L$, and hence so does the box containing it. Since a periodic orbit of period $\ell$ cannot meet a box of length greater than $\ell$, it follows that $\Phi$ cannot have periodic points. In particular, every topological flow with finite tube dimension must be free.
\end{rmk}
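On the other hand, the simplest free flow has tube dimension zero (we include the verification as an illustration): let $\Phi_t(x) = x + t$ be the translation flow on $Y = {\mathbb R}$. Given $L > 0$ and a compact set $K \subset [-R, R]$, the one-element collection
\[
\mathcal{U} = \big\{ (-R - L - 1,\ R + L + 1) \big\}
\]
satisfies all three conditions of Definition~\ref{def:tube-dimension} with $d = 0$: its single member contains $\Phi_{[-L,L]}(y) = [y - L, y + L]$ for every $y \in K$, and it is contained in the box $[-R - L - 1, R + L + 1]$. Note how the compactness of $K$ enters: the boxes are allowed to grow with $L$ and $K$.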
It turns out that any free flow $\Phi$ on a locally compact and metrizable space $ Y $ with finite covering dimension has finite tube dimension. For this, we will need to invoke a recent result by Kasprowski and R\"{u}ping (\cite[Theorem~5.3]{Kasprowski-Rueping}), which itself is an improvement of a pioneering construction of the so-called ``long thin covers" by Bartels, L\"{u}ck and Reich (\cite[Theorem~1.2, Proposition~4.1]{BarLRei081465306017991882}), a crucial step in their solution of the Farrell-Jones conjecture for Gromov's hyperbolic groups. Since we are dealing with a simplified situation, we shall give a somewhat different presentation of their theorem that is sufficient for our purposes, cf. Remark~\ref{rmk:KRcoverbyboxes-differences}.
\begin{thm}[cf.~{\cite[Theorem~5.3]{Kasprowski-Rueping}}]
\label{thm:KRcoverbyboxes}
Let $Y$ be a locally compact and metrizable space with a free continuous action $\Phi$ by $\mathbb{R}$. Let $L$ be a positive number. Then there is a cover of $Y$ of multiplicity at most $5 (\mathrm{dim}(Y) + 1)$ consisting of open boxes and satisfying the property that for any point $x \in Y$, there is an open set in this cover containing $\Phi_{[-L, L]}(x)$.
\qed
\end{thm}
This immediately gives us a bound for the tube dimension of $ \Phi$.
\begin{cor}\label{cor:estimate-tube-dim}
Let $Y$ be a locally compact and metrizable space and $\Phi$ a flow on $Y$. Suppose that $Y$ has finite covering dimension and that $\Phi$ is free. Then
\[
\mathrm{dim}_\mathrm{tube}^{\!+1} (\Phi) \le 5 \cdot \mathrm{dim}^{\!+1}(Y)
\, .
\]
\qed
\end{cor}
\begin{rmk}\label{rmk:KRcoverbyboxes-differences}
We explain some deviations from the original presentation of the above theorem in \cite{Kasprowski-Rueping}:
\begin{enumerate}
\item In the original version, the authors consider not only a flow $\Phi$ on $Y$, but also a proper action of a discrete group $G$ that commutes with $\Phi$, and the cover they produce is required to be a so-called \emph{$\mathcal{F}in$}-cover with respect to the second action: it is invariant, and for each open set $U$ in the cover, only finitely many elements of $G$ fix $U$, while any other element carries $U$ to a set disjoint from $U$ ({\cite[Notation~1.3(4)]{Kasprowski-Rueping}}). Since this is not needed for proving our main result, we drop this assumption, or equivalently, we assume this extra group $G$ that appears in their theorem to be the trivial group, in which case the {$\mathcal{F}in$}-cover condition is automatic.
\item After we ignore the extra group $G$, a second difference appears: the flow $\Phi$ is originally not assumed to be free. With this generality, one cannot hope to cover the entire space $Y$ with open boxes, as explained in Remark~\ref{rmk:tube-dim-basics}. Consequently, the cover constructed in the original version is only for the subspace $Y_{>20 L}$, which consists of all points whose orbits have length more than $20L$. Since we are only interested in a free action of $\mathbb{R}$, the space $Y_{>20 L}$ is equal to $Y$.
\item The dimension estimate in the original version is in terms of the \emph{small inductive dimension} $\mathrm{ind}(Y)$, but as they remarked in \cite[Theorem~3.5]{Kasprowski-Rueping}, in the context their Theorem~5.3 applies to, where $Y$ is locally compact and metrizable, the small inductive dimension is equal to the covering dimension $\mathrm{dim}(Y)$.
\item It is not made explicit in the original statement of their theorem that the cover consists of open boxes, but this is evident from their proof: the cover is made up of the sets $\Phi_{(-4L, 4L)} (B_i^k)$ for $i \in \mathbb{N}$ and $k \in \{0, \ldots, \mathrm{dim}(Y)\}$, and each of them is the interior of a box $\displaystyle \Phi_{[-4L, 4L]} (\overline{B_i^k} )$, which is restricted from the larger box $\Phi_{[-10L, 10L]} (S_i)$ constructed in \cite[Lemma~4.6]{Kasprowski-Rueping}.
\item Although in the original statement of the theorem it is only claimed that the cover has dimension at most $5 \mathrm{dim}^{\!+1}(Y)$, it is also clear from their proof that the cover they obtain has multiplicity at most $5 \mathrm{dim}^{\!+1}(Y)$.
\end{enumerate}
\end{rmk}
The rest of this section is devoted to exhibiting a number of seemingly stronger but equivalent characterizations of finite tube dimension for a flow. In view of the application to Rokhlin dimension and nuclear dimension, the characterizations using certain partitions of unity are of particular interest. The intuitive idea is that covers having large overlaps along flow lines, as in Definition~\ref{def:tube-dimension}, give rise to partitions of unity that are, in a sense, almost flat along the flow, and vice versa. First, we need to collect a few technical tools needed later for the proof of the main proposition.\\
\paragraph{\textbf{Flow-wise Lipschitz partitions of unity.} }Let us make precise what we mean by almost flat partitions of unity. Although there is some freedom in picking the exact condition expressing this intuitive idea, we find the following Lipschitz-type characterization most convenient for our purposes.
\begin{defn}\label{definitionofflowwiseLipschitz}
Let $ \Phi: {\mathbb{R}^{}} \curvearrowright Y $ be a flow and $ F : Y \to X $ a map to a metric space $(X, d)$. The map $F$ is called \emph{$\Phi$-Lipschitz with constant $\delta$}, if for every $y \in Y$, the map $ t \mapsto F(\Phi_t (y) ) $ is Lipschitz with constant $\delta$. In other words, we have
\[
d( F(\Phi_t (y) ), F(y) ) \le \delta \cdot |t|
\]
for all $y \in Y$ and $t \in {\mathbb{R}^{}}$.
\end{defn}
\begin{Rmk}
One way to produce flow-wise Lipschitz functions from any given function is to \emph{smear} it along the flow. More precisely, for any bounded Borel function $f$ on $Y$ and any $\lambda_+ , \lambda_- \in {\mathbb{R}^{}} $ such that $ \lambda_+ > \lambda_- $, we define $\mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) : Y \to {\mathbb{C}^{}}$ by
\begin{equation}\label{definitionofsmearing}
\mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) (y) = \frac{1}{\lambda_+ - \lambda_-} \int_{\lambda_-} ^{\lambda_+} f (\Phi_{-t} (y) ) \: d t .
\end{equation}
\end{Rmk}
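For the translation flow $\Phi_t(x) = x + t$ on $Y = {\mathbb R}$, for example, smearing is just a running average: substituting $u = y - t$ in \eqref{definitionofsmearing} gives
\[
\mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) (y) = \frac{1}{\lambda_+ - \lambda_-} \int_{y - \lambda_+}^{y - \lambda_-} f(u) \: d u \, ,
\]
which makes the Lipschitz estimate of the following lemma plausible.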
\begin{lem}\label{lemaboutsmearingandflowLipschitz}
For every bounded Borel function $f$ on $Y$ and any $\lambda_+ , \lambda_- \in {\mathbb{R}^{}} $ such that $ \lambda_+ > \lambda_- $, the function $ \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) $ is bounded by $\| f \|_\infty$ and $\Phi$-Lipschitz with constant $ \displaystyle \frac{2 \| f \|_\infty }{ \lambda_+ - \lambda_- } $. If $f$ is continuous and has compact support, then so does $ \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) $.
If $f$ is continuous and vanishes at infinity, then so does $ \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) $.
\end{lem}
\begin{proof}
For all $y \in Y$ and $t \in {\mathbb{R}^{}}$, we have
\[
\left| \left( \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) \right) (y) \right| \leq \frac{1}{\lambda_+ - \lambda_-} \int_{\lambda_-} ^{\lambda_+} \left| f (\Phi_{-t} (y) ) \right| \: d t \leq \| f \|_\infty
\]
and
\[
\def2.2{2}
\begin{array}{cl}
\multicolumn{2}{l} {\left| \left( \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) \right) (\Phi_t (y) ) - \left( \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) \right) (y) \right| }\\
= & \displaystyle \frac{1}{\lambda_+ - \lambda_-} \left| \int_{\lambda_- - t} ^{\lambda_+ - t} f (\Phi_{-s} (y) ) \: d s - \int_{\lambda_-} ^{\lambda_+} f (\Phi_{-s} (y) ) \: d s \right| \\
\le & \displaystyle \frac{1}{\lambda_+ - \lambda_-} \left( \left| \int_{\lambda_- - t} ^{\lambda_-} f (\Phi_{-s} (y) ) \ d s \right| + \left| \int_{\lambda_+ - t} ^{\lambda_+} f (\Phi_{-s} (y) ) \: d s \right| \right) \\
\le & \displaystyle \frac{2 \| f \|_\infty }{ \lambda_+ - \lambda_- } \cdot | t | .
\end{array}
\]
This proves the first statement.
For the second statement, it is easy to see that $ \mathrm{supp}( \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) ) \subset \Phi_{[\lambda_-, \lambda_+]} (\mathrm{supp} (f)) $ and is thus compact. The continuity of $ \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) $ is proved by considering a similar estimate
\[
\def2.2{2}
\begin{array}{cl}
\multicolumn{2}{l} {\left| \left( \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) \right) ( y ) - \left( \mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} (f) \right) ( y' ) \right| }\\
\le\ & \displaystyle \frac{1}{\lambda_+ - \lambda_-} \int_{\lambda_-} ^{\lambda_+} \left|f (\Phi_{-t} (y) ) - f (\Phi_{-t} (y') ) \right| \: d t \; .
\end{array}
\]
Let us fix $y$ in the formula, together with an arbitrary $\varepsilon > 0$. To prove continuity, it suffices to find a neighborhood $U$ of $y$ such that the above expression is no more than $\varepsilon$ whenever $y' \in U$. Consider the continuous function $\psi_y \colon [\lambda_-, \lambda_+] \times Y \to [0, \infty)$ defined by
\[
\psi_y (t , y') = \left|f (\Phi_{-t} (y) ) - f (\Phi_{-t} (y') ) \right| \; .
\]
Since $\psi_y([\lambda_-, \lambda_+] \times \{y\}) = \{0\}$, there is an open neighborhood $V$ of $[\lambda_-, \lambda_+] \times \{y\}$ such that $\psi_y(V) \subset [0, \varepsilon)$. Since $[\lambda_-, \lambda_+]$ is compact, by the tube lemma in basic topology, there is an open neighborhood $U$ of $y$ such that $[\lambda_-, \lambda_+] \times U \subset V$. This choice of $U$ is clearly what we need.
Finally, since the linear map $\mathbb{E} (\Phi_*)_{[\lambda_-, \lambda_+]} $ is contractive in the uniform norm $\| \cdot \|_\infty$, the statement for $ f \in {C}_{0}(Y) $ follows from the compactly supported continuous case by approximation.
\end{proof}
This smearing technique has the advantage that it preserves, if applied to a family of functions, the property of being a partition of unity. For our purpose, it is necessary to introduce a relative version of partitions of unity.
\begin{defn}\label{def:relative-pou}
Let $ X $ be a topological space and $ A \subset X $ a subset. Let $ \mathcal{U} = \{ U_i \}_{i\in I} $ be a locally finite collection of open sets in $X$ such that $A \subset \bigcup \mathcal{U}$. Then a \emph{partition of unity for $ A \subset X $ subordinate to $ \mathcal{U} $} is a collection of continuous functions $ \{ f_i : X \to [0,\infty) \}_{i\in I} $ such that
\begin{enumerate}
\item for each $i\in I$, the support of $f_i$ is contained in $U_i$;
\item for all $x \in A$, one has $\displaystyle \sum_{i\in I} f_i (x) =1 $.
\end{enumerate}
\end{defn}
\begin{lem}\label{lem:relative-pou}
Let $ X $ be a locally compact Hausdorff space and $ A \subset X $ a compact subset. Let $ \mathcal{U} = \{ U_i \}_{i\in I} $ be a finite collection of open sets in $X$ such that $A \subset \bigcup \mathcal{U}$. Then there exists a partition of unity for $ A \subset X $ subordinate to $ \mathcal{U} $.
\end{lem}
\begin{proof}
Define $ U_{\infty} = X \setminus A $ and $ I^+ = I \sqcup \{\infty\} $. Then the collection $ \mathcal{U}^+ = \{ U_i \}_{i\in I^+} $ is a finite open cover of $ X $. Pick a partition of unity $ \{ f_i \}_{i \in I^+} $ subordinate to $\mathcal{U}^+$. Then the subcollection $ \{ f_i \}_{i \in I} $ is a partition of unity for $ A \subset X $ subordinate to $ \mathcal{U} $, where the second condition is proved by observing that $ f_{\infty} (x) = 0 $ for any $x \in A$.
\end{proof}
\begin{lem}\label{lemaboutsmearingofpartitionofunity}
Let $ (X, \Phi) $ be a topological flow, $ A \subset X $ a subset and $\lambda > 0 $. Let $ \mathcal{U} = \{ U_i \}_{i\in I} $ be a finite collection of open sets in $X$ that covers $ \Phi_{ \left[ - \lambda, \lambda \right] } (A )$ and $ \{ f_i \}_{i\in I} $ a partition of unity for $ \Phi_{ \left[ - \lambda, \lambda \right] } (A ) \subset X $ subordinate to $ \mathcal{U} $. Then the collection $\big\{ \mathbb{E} (\Phi_*)_{ \left[ - \lambda, \lambda \right] } (f_i) \big\} _{i\in I}$ is a partition of unity for $ A \subset X$ subordinate to $ \{ \Phi_{ \left[ - \lambda, \lambda \right] } (U_i ) \}_{i\in I} $, the members of which are $\Phi$-Lipschitz with constant $ \frac{1 }{\lambda} $. Moreover, the function $ \displaystyle \mathrm{1}_X - \sum_{i\in I} \mathbb{E} (\Phi_*)_{ \left[ - \lambda, \lambda \right] } (f_i) $ is also $\Phi$-Lipschitz with constant $ \frac{1 }{\lambda} $.
\end{lem}
\begin{proof}
For any $x \in A$, we have
\begin{align*}
\sum_{i\in I} \mathbb{E} (\Phi_*)_{ \left[ - \lambda, \lambda \right] } (f_i) (x) =\ & \frac{1 }{2 \lambda} \int_{-\lambda } ^{ \lambda } \left( \sum_{i\in I} f_i (\Phi_{-t} (x) ) \right) \ d t \\
=\ & \frac{1 }{2 \lambda} \int_{-\lambda } ^{ \lambda } 1 \: d t \ = 1 .
\end{align*}
The non-negativity of each $\mathbb{E} (\Phi_*)_{ \left[ - \lambda, \lambda \right] } (f_i) $ and the stretching of their supports are immediate from the definition, and the flow-Lipschitz constant is derived from Lemma~\ref{lemaboutsmearingandflowLipschitz}. The last statement uses the fact that
$$ \mathrm{1}_X - \sum_{i\in I} \mathbb{E} (\Phi_*)_{ \left[ - \lambda, \lambda \right] } (f_i) = \mathbb{E} (\Phi_*)_{ \left[ - \lambda, \lambda \right] } \left( \mathrm{1}_X - \sum_{i\in I} f_i \right) $$
together with Lemma~\ref{lemaboutsmearingandflowLipschitz}.
\end{proof}
\paragraph{\textbf{Simplicial techniques.} }Another aspect of the flexibility of the tube dimension is the conversion between the two most common ways of defining a notion of dimension. Consider a collection $\mathcal{U}$ of subsets of a space. The dimension of $\mathcal{U}$ is usually defined to be either one of the following two numbers minus $1$:
\begin{enumerate}
\item the multiplicity (colloquially known as the covering number), the maximal cardinality of subcollections with nonempty intersection (cf.~Definition~\ref{def:mult});
\item the coloring number, the minimal number of subfamilies needed to partition $\mathcal{U}$ into, so that elements within the same subfamily are disjoint.
\end{enumerate}
It is clear that the latter bounds the former from above. Definition~\ref{def:tube-dimension} of the tube dimension uses the covering number, while in order to establish connection with the Rokhlin dimension of the associated $C^*$-flow, we will need to make use of the coloring number, as will be made precise in Proposition~\ref{prop:characterizations-tube-dim}(\ref{prop:characterizations-tube-dim-3})-(\ref{prop:characterizations-tube-dim-5}). A standard way to show that they are equivalent is to apply certain simplicial techniques.
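To illustrate the two notions in a simple case: the cover of ${\mathbb R}$ by the intervals $(k, k+2)$ for $k \in {\mathbb Z}$ has multiplicity $2$, and it can be partitioned into the subfamilies of even-indexed and odd-indexed intervals, each consisting of pairwise disjoint sets, so its coloring number is $2$ as well.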
\begin{defn}\label{def:simplicial-complex}
For us, an \emph{abstract simplicial complex} $Z$ consists of:
\begin{itemize}
\item a set $Z_0$, called the set of \emph{vertices}, and
\item a collection of its finite subsets closed under taking subsets, called the collection of \emph{simplices}.
\end{itemize}
We often write $\sigma \in Z$ to denote that $\sigma$ is a simplex of $Z$. We also associate the following structures to $Z$:
\begin{enumerate}
\item\label{def:simplicial-complex:dimension} The dimension of a simplex is the cardinality of the corresponding finite subset minus $1$, and the (simplicial) dimension of the abstract simplicial complex is the supremum of the dimensions of its simplices.
\item\label{def:simplicial-complex:realization} The \emph{geometric realization} of an abstract simplicial complex $Z$, denoted as $|Z|$, is the set of tuples
\[
\bigcup_{\sigma \in Z} \left\{ (z_v)_{v} \in [0,1]^{Z_0} ~\Bigl|~ \sum_{v\in\sigma} z_v = 1 \,, ~\text{and}~ z_v = 0 ~\text{for~any}~ v \in Z_0 \setminus \sigma \right\}.
\]
Similarly for a simplex $\sigma$ of $Z$, we define its \emph{closed} (respectively, \emph{open}) \emph{geometric realization} $\overline{|\sigma|}$ (respectively, $|\sigma|$) as follows:
\begin{align*}
\overline{|\sigma|} & = \left\{ (z_v)_v \in |Z| ~\Bigl|~ \sum_{v\in\sigma} z_v = 1 \right\} \\
|\sigma| & = \left\{ (z_v)_v \in |Z| ~\Bigl|~ \sum_{v\in\sigma} z_v = 1 ~ \text{with}~ z_v >0 ~\text{for~any~} v\in\sigma \right\}\; .
\end{align*}
\item\label{def:simplicial-complex:metric} Although usually $|Z|$ is equipped with the weak topology, for our purposes we consider the $\ell^1$-topology, induced by the $\ell^1$-metric $d^1: |Z| \times |Z| \to [0, 2]$ defined by
\[
d^1\Bigl( (z_v)_v , (z'_v)_v \Bigr) = \sum_{v \in Z_0} |z_v - z'_v | \; .
\]
\item\label{def:simplicial-complex:star} For any vertex $v_0 \in Z_0$, the \emph{(simplicial) star} around $v_0$ is the set of simplices of $Z$ that contain $v_0$, and the \emph{open star} around $v_0$ is the union of the open geometric realizations of such simplices in $|Z|$, that is, the set
\[
\left\{ (z_v)_v \in |Z| ~\Bigl|~ z_{v_0} > 0 \right\} \; .
\]
\item\label{def:simplicial-complex:cone} The \emph{simplicial cone} $CZ$ is the abstract simplicial complex
\[
\left\{ \sigma, \sigma \sqcup \{\infty\} \ \big| \ \sigma \in Z \right\} \; ,
\]
where $\infty$ is an additional vertex. More concretely, we have $(CZ)_0 = Z_0 \sqcup \{\infty\}$, each simplex $\sigma$ in $Z$ spawns two simplices $\sigma$ and $\sigma \sqcup \{\infty\}$ in $CZ$, and all simplices of $CZ$ arise this way.
\item\label{def:simplicial-complex:subcomplex} A \emph{subcomplex} of $Z$ is an abstract simplicial complex $Z'$ with $Z'_0 \subset Z_0$ and $Z' \subset Z$. It is clear that there is a canonical embedding $|Z'| \subset |Z|$ preserving the $\ell^1$-metric.
\end{enumerate}
\end{defn}
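As a quick illustration of these notions (not needed in the sequel), let $Z$ be the full complex on the two vertices $\{a, b\}$, whose simplices are $\varnothing$, $\{a\}$, $\{b\}$ and $\{a,b\}$. Then $ |Z| = \{ (z_a, z_b) \in [0,1]^2 \mid z_a + z_b = 1 \} $ is a segment of $\ell^1$-diameter $2$, the open star around $a$ is $ \{ (z_a, z_b) \in |Z| \mid z_a > 0 \} $, and the simplicial cone $CZ$ is the full complex on $\{a, b, \infty\}$, of simplicial dimension $ \mathrm{dim}(Z) + 1 = 2 $.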
A major advantage of simplicial complexes for us is that many ways of defining dimension agree for them. The following lemma gives a canonical open cover on an abstract simplicial complex with nice properties.
\begin{lem}\label{lem:canonical-cover}
Let $Z$ be an abstract simplicial complex. For each simplex $\sigma \in Z$, the set $V_\sigma$ given by
\[
V_\sigma = \left\{ (z_v)_v \in | Z | \ \bigg|\ z_v > z_{v'},\ \text{for\ all\ } v \in \sigma \ \text{and}\ v' \in Z_0 \setminus \sigma \right\}
\]
is an open neighborhood of $|\sigma|$ in $|Z|$.
Furthermore, if $Z$ has finite dimension $d$, then the following are true:
\begin{enumerate}
\item\label{lem:canonical-cover:disjoint} For $l = 0, \ldots, d$, the collection $ {\mathcal{V}}^{(l)} = \{ V_\sigma \ | \ \sigma \in Z , \ \mathrm{dim}(\sigma) = l \} $ consists of disjoint open sets.
\item\label{lem:canonical-cover:cover} The collection $ \mathcal{V} = {\mathcal{V}}^{(0)} \cup \cdots \cup {\mathcal{V}}^{(d)} $ is an open cover of $|Z|$ with Lebesgue number at least $\frac{1}{(d+1)(d+2)}$.
\item\label{lem:canonical-cover:pou} For each simplex $\sigma \in Z$, the formula
\begin{equation*}
\nu_\sigma (z) = \frac{d^1(z, | Z | \setminus V_\sigma)}{ \displaystyle \sum_{ \sigma' \in Z} d^1(z, | Z | \setminus V_{\sigma'}) }
\end{equation*}
defines a function $\nu_\sigma \colon |Z| \to [0,1]$ that is $2(d+1) (d+2) (2d+3)$-Lipschitz, and the collection $ \{ \nu_\sigma \}_{\sigma \in Z } $ is a partition of unity for $|Z|$ subordinate to the open cover $\mathcal{V}$.
\end{enumerate}
\end{lem}
\begin{proof}
It is clear that $|\sigma| \subset V_\sigma$. Now to show $V_\sigma$ is open, we produce, for any $(z_v)_v \in V_\sigma$, an open set $U$ such that $(z_v)_v \in U \subset V_\sigma$. Define
\begin{equation*}
\delta = \frac{1}{2} \min \left\{ z_v - z_{v'} ~\bigl|~ v \in \sigma ~\text{and}~ v' \in Z_0 \setminus \sigma \right\} \; ,
\end{equation*}
which is a positive number. Then we can take $U = B_{\delta} \big((z_v)_v \big)$, the open ball around $(z_v)_v$ with radius $\delta$, since
\[
B_{\delta} \big((z_v)_v \big) \subset \left\{ (z'_v)_v \in |Z| ~\Bigl|~ \sup_{v \in Z_0} |z_v - z'_v| < \delta \right\} \subset V_\sigma \; .
\]
Let us now prove statements (\ref{lem:canonical-cover:disjoint})-(\ref{lem:canonical-cover:pou}):
\begin{enumerate}
\item It suffices to show that for any $l \in \{0, \ldots, d\}$ and any two simplices $\sigma$ and $\sigma'$ of dimension $l$, if there is $(z_v)_v \in V_\sigma \cap V_{\sigma'}$, then $\sigma = \sigma'$, but this is obvious, since $\sigma$ is uniquely determined as the collection of indices of the $(l+1)$ largest coordinates of $(z_v)_v$.
\item It suffices to show that for any $z \in |Z|$, there is $\sigma \in Z$ such that $B_{\delta} (z) \subset V_\sigma$, for $\delta = \frac{1}{(d+1)(d+2)}$. To this end, we let $z^{l}$ be the value of the $l$-th greatest coordinate(s) of $z$, for $l \geq 1$. (We count with multiplicities and set $z^{d+2}=0$.) We know
\[
1=\sum_{l=1}^{d+1} z^{l} = \sum_{l=1}^{d+1} l \cdot (z^l - z^{l+1}) \le \sum_{l=1}^{d+1} l \cdot \max_k(z^k-z^{k+1}),
\]
from which it follows that there is $l_0 \in \{1, \ldots, d+1 \}$ such that $z^{l_0} - z^{l_0+1} \geq \frac{2}{(d+1)(d+2)}$. Now let $\sigma \in Z$ consist of the indices of the $l_0 $ greatest coordinates of $z$. Then as we argued above (with $\delta \le \frac{1}{2} ( z^{l_0} - z^{l_0 + 1} )$), we have $B_{\delta} (z) \subset V_\sigma$.
\item This follows directly from \cite[Lemma~4.3.5]{NowakYu2012Large}.
\end{enumerate}
\end{proof}
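As a sanity check of Lemma~\ref{lem:canonical-cover} in the smallest nontrivial case (again not needed later), let $Z$ be the full complex on $\{a, b\}$ as above, so that $d = 1$. Then
\[
V_{\{a\}} = \{ z_a > z_b \} \, , \qquad V_{\{b\}} = \{ z_b > z_a \} \, , \qquad V_{\{a,b\}} = |Z| \; ,
\]
the last equality holding because the defining condition is vacuous for $\sigma = Z_0$. The collections $ \mathcal{V}^{(0)} = \{ V_{\{a\}}, V_{\{b\}} \} $ and $ \mathcal{V}^{(1)} = \{ V_{\{a,b\}} \} $ each consist of disjoint sets, and every point of $|Z|$, including the midpoint $(\frac{1}{2}, \frac{1}{2})$, which lies in neither $V_{\{a\}}$ nor $V_{\{b\}}$, is covered by $V_{\{a,b\}}$ with room to spare, consistent with the Lebesgue number bound $\frac{1}{(d+1)(d+2)} = \frac{1}{6}$.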
The following construction makes a link between simplicial complexes and open covers.
\begin{defn}\label{def:nerve-complex}
Let $ X $ be a topological space, and let $ \mathcal{U} $ be a locally finite collection of open subsets of $ X $. Then the \emph{nerve complex} of $ \mathcal{U} $, denoted as $\mathcal{N}(\mathcal{U})$, is the abstract simplicial complex with ${\mathcal{U}}$ as its vertex set and the simplices corresponding to subcollections of ${\mathcal{U}}$ with nonempty intersections.
\end{defn}
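For instance (a sanity check only), the nerve complex of the cover $ \{ (n-1, n+1) \}_{n \in \mathbb{Z}} $ of $ \mathbb{R} $ is an infinite path: consecutive intervals overlap, producing the edges, while no three of the intervals share a point, so there are no higher-dimensional simplices. Its simplicial dimension is thus $ 1 = \mathrm{mult} - 1 $, consistent with Lemma~\ref{lem:nerve-complex} below.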
The following lemmas are immediate from the definitions.
\begin{lem}\label{lem:nerve-complex}
Let $ X $ be a topological space, and let $ \mathcal{U} $ be a locally finite collection of open subsets of $ X $. Then the simplicial dimension of $\mathcal{N}(\mathcal{U})$ is equal to $\mathrm{mult}(\mathcal{U}) - 1$. Moreover, if $\mathcal{V}$ is another locally finite collection of open subsets of $ X $ such that $\mathcal{U} \subset \mathcal{V}$, then $\mathcal{N}(\mathcal{U})$ embeds as a subcomplex into $\mathcal{N}(\mathcal{V})$.
\qed
\end{lem}
\begin{lem}\label{lem:map-to-nerve}
Let $ X $ be a locally compact, metrizable space, let $ K \subset X $ be a compact subset, let $ \mathcal{U} $ be a locally finite collection of open subsets of $ X $ that covers $ K $, and let $ \{ \mu_U \}_{U \in \mathcal{U}} $ be a partition of unity of $ K \subset X $ subordinate to and indexed by $ \mathcal{U} $ in the sense of Definition~\ref{def:relative-pou}. Consider the open cover $ \mathcal{U}^+ = \mathcal{U} \cup \{ X \setminus K \} $ and put $ \mu_{X \setminus K} = \mathrm{1}_X - \sum_{U \in \mathcal{U}} \mu_U $. Then there is a continuous map $\mu_{\mathcal{U}^+} \colon X \to | \mathcal{N}({\mathcal{U}^+}) |$ given by
\begin{equation}\label{eq:map-to-nerve}
\mu_{\mathcal{U}^+} (x) = \big( \mu_U (x) \big)_{U \in {\mathcal{U}^+} } .
\end{equation}
Furthermore $\mu_{\mathcal{U}^+}$ maps $ K $ into $ | \mathcal{N}({\mathcal{U}}) |$ (as a subspace of $| \mathcal{N}({\mathcal{U}^+}) | $), and for each $U \in {\mathcal{U}^+}$, the preimage of the open star in $| \mathcal{N}({\mathcal{U}^+}) | $ around the vertex $U$ is contained in $U$ (as a subset of $X$).
\qed
\end{lem}
\paragraph{\textbf{Equivalent characterizations.} }Now we can put together the tools we have gathered above and prove a key proposition regarding various equivalent ways to define the tube dimension. We first record a general lemma, which we assume is well known.
\begin{Lemma}
\label{Lemma:compact-open-topology}
Let $X,K,Y$ be metrizable spaces, with $K$ compact. Let $f \colon X \times K \to Y$ be a continuous function, and let $U \subseteq Y$ be open. Then $\{x \in X \mid f(x,t) \in U \; \mathrm{ for } \, \mathrm{ all } \; t \in K\}$ is an open subset of $X$.
\end{Lemma}
\begin{proof}
The space $C(K,Y)$ with the compact-open topology is metrizable, hence compactly generated. Therefore the map $\widetilde{f} \colon X \to C(K,Y)$ given by $\widetilde{f}(x)(t) = f(x,t)$ is continuous, $C(K,U)$ is an open subset of $C(K,Y)$, and $\{x \in X \mid f(x,t) \in U \; \mathrm{ for } \, \mathrm{ all } \; t \in K\} = \widetilde{f}^{-1} (C(K,U))$.
\end{proof}
In particular, it follows that if $\Phi$ is a flow on $Y$, and $U \subseteq Y$ is open, then for any $L>0$, the set $\{y \in Y \mid \Phi_{[-L,L]}(y) \subseteq U\}$ is open.
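For a concrete instance of this observation (a sketch only), take the translation flow $ \Phi_t(y) = y + t $ on $ Y = \mathbb{R} $ and $ U = (0, 1) $: then
\[
\{ y \in Y \mid \Phi_{[-L,L]}(y) \subseteq U \} = ( L, 1 - L ) \; ,
\]
which is indeed open (and empty as soon as $ L \ge \frac{1}{2} $).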
\begin{prop}\label{prop:characterizations-tube-dim}
Let $(Y, \Phi)$ be a topological flow. Let $d\in{\mathbb N}$. Then the following are equivalent:
\begin{enumerate}
\item The tube dimension of $\Phi$ is at most $d$;
\label{prop:characterizations-tube-dim-1}
\item
\label{prop:characterizations-tube-dim-2}
For any $\eta > 0$ and compact set $ K \subset Y $, there is a finite abstract simplicial complex $ Z $ of dimension at most $d$ and a continuous map $ F: Y \to | CZ | $ satisfying:
\begin{enumerate}
\item
\label{prop:characterizations-tube-dim-2-a}
$ F $ is $\Phi$-Lipschitz with constant $\eta$;
\item
\label{prop:characterizations-tube-dim-2-b}
for any vertex $ v \in Z_0 $, the preimage of the open star around $v$ is contained in a box $B_v$;
\item
\label{prop:characterizations-tube-dim-2-c}
$ F(K) \subset |Z| $.
\end{enumerate}
\item
\label{prop:characterizations-tube-dim-3}
For any $\eta > 0$ and any compact set $ K \subset Y $, there is a finite partition of unity $ \{ \varphi_i \}_{i \in I} $ of $ K \subset Y$ satisfying:
\begin{enumerate}
\item
\label{prop:characterizations-tube-dim-3-a}
for any $ i\in I $, $ \varphi_i $ is $\Phi$-Lipschitz with constant $\eta$;
\item
\label{prop:characterizations-tube-dim-3-b}
for any $ i\in I $, $ \varphi_i $ is supported in a box $B_i$;
\item
\label{prop:characterizations-tube-dim-3-c}
there is a decomposition $I = I^{(0)} \dot{\cup} \cdots \dot{\cup} I^{(d)} $ such that for any $ l \in \{ 0, \dots, d \} $ and any two distinct $i, j \in I^{(l)} $, we have $ \varphi_i \cdot \varphi_j =0 $.
\end{enumerate}
\item
\label{prop:characterizations-tube-dim-4}
For any $\eta > 0$, $L >0$ and compact set $ K \subset Y $, there is a finite partition of unity $ \{ \varphi_i \}_{i \in I} $ of $ K \subset Y$ satisfying:
\begin{enumerate}
\item
\label{prop:characterizations-tube-dim-4-a}
for any $ i\in I $, $ \varphi_i $ is $\Phi$-Lipschitz with constant $\eta$;
\item
\label{prop:characterizations-tube-dim-4-b}
for any $ i\in I $, $ \Phi_{[-L, L]} \big( \mathrm{supp}( \varphi_i ) \big) $ is contained in a box $B_i$ (or, equivalently, $ \varphi_i $ is supported in a box $B_i$ with $\varepsilon_{B_i} \ge 2L $);
\item
\label{prop:characterizations-tube-dim-4-c}
there is a decomposition $I = I^{(0)} \dot{\cup} \cdots \dot{\cup} I^{(d)} $ such that for any $ l \in \{ 0, \dots, d \} $ and any two distinct $i, j \in I^{(l)} $, we have
$$ \Phi_{[-L, L]} \big( \mathrm{supp}( \varphi_i ) \big) \cap \Phi_{[-L, L]} \big( \mathrm{supp}( \varphi_j ) \big) = \varnothing . $$
\end{enumerate}
\item
\label{prop:characterizations-tube-dim-5}
For any $L > 0$ and any compact set $ K \subset Y $, there is a finite collection $ \mathcal{U} $ of open subsets of $Y$ that covers $K$ and satisfies:
\begin{enumerate}
\item
\label{prop:characterizations-tube-dim-5-a}
each $ U \in \mathcal{U} $ is contained in a box $B_U$ with $\varepsilon_{B_U} \ge 2L $;
\item
\label{prop:characterizations-tube-dim-5-b}
there is a decomposition $\mathcal{U} = \mathcal{U}^{(0)} \dot{\cup} \cdots \dot{\cup}\, \mathcal{U}^{(d)} $ such that for any $ l \in \{ 0, \dots, d \} $ and any two distinct $U, U' \in \mathcal{U}^{(l)} $, we have
\[
\Phi_{[-L, L]} (\overline{U}) \cap \Phi_{[-L, L]} (\overline{U'}) = \varnothing .
\]
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}
We proceed in the order (\ref{prop:characterizations-tube-dim-1})$\Rightarrow$(\ref{prop:characterizations-tube-dim-2})$\Rightarrow$(\ref{prop:characterizations-tube-dim-3})$\Rightarrow$(\ref{prop:characterizations-tube-dim-4})$\Rightarrow$(\ref{prop:characterizations-tube-dim-5})$\Rightarrow$(\ref{prop:characterizations-tube-dim-1}).
\paragraph{(\ref{prop:characterizations-tube-dim-1})$\Rightarrow$(\ref{prop:characterizations-tube-dim-2}):}Given any $\eta > 0$ and compact set $ K \subset Y $, set $ L = \frac{ 2( d + 2 ) }{\eta} $ and $ \widehat{K} = \Phi_{ \left[ - L, L \right] } (K) $, and obtain a collection $ \mathcal{U} $ of open subsets of $Y$ as in Definition~\ref{def:tube-dimension} with regard to $ L $ and $ \Phi_{ \left[ - L, L \right] } (\widehat{K}) $. For each $U \in \mathcal{U}$, define $ U' = \{ y \in Y \ |\ \Phi_{[-L, L]}(y) \subset U \} $, which is open by Lemma~\ref{Lemma:compact-open-topology}. By construction, we have $ \Phi_{[-L, L]}(U') \subset U $.
Because of condition \eqref{def:boxdim-item1} in Definition~\ref{def:tube-dimension}, the collection $ \mathcal{U}' = \{ U' \}_{U\in\mathcal{U}} $ covers $ \widehat{K} $. Since $\widehat{K}$ is compact, we may find a finite subcover $ \mathcal{U}'_0$ in $\mathcal{U}'$, and define the finite collection $\mathcal{U}_0 = \{U \in \mathcal{U} \ | \ U' \in \mathcal{U}'_0 \}$. Now fix a partition of unity $ \{ f_U \}_{U\in\mathcal{U}_0} $ for $ \widehat{K} \subset Y $ subordinate to $ \mathcal{U}'_0 $. Then by Lemma~\ref{lemaboutsmearingofpartitionofunity}, the collection $ \big\{ \widehat{f}_U = \mathbb{E} (\Phi_*)_{ \left[ - L, L \right] } (f_U) \big\}_{U \in \mathcal{U}_0} $ is a finite partition of unity for $ K \subset Y$ subordinate to $ \mathcal{U}_0 $, the members of which are $\Phi$-Lipschitz with constant $ \frac{1 }{L} $, and so is the function $ \displaystyle \mathrm{1}_Y - \sum_{U \in \mathcal{U}_0} \mathbb{E} (\Phi_*)_{ \left[ - L, L \right] } (f_U) $.
Set $ Z = \mathcal{N}(\mathcal{U}_0) $. Since $\mathrm{mult}(\mathcal{U}_0) \leq \mathrm{mult}(\mathcal{U}) \leq d+1$, we can apply Lemma~\ref{lem:nerve-complex} to deduce that $Z$ is a finite abstract simplicial complex of dimension at most $d$. By Lemma~\ref{lem:map-to-nerve}, we have a continuous map
\[
F = \widehat{f}_{\mathcal{U}_0^+} = \left( \bigoplus_{U\in\mathcal{U}_0} \widehat{f}_U \right) \oplus \left( \mathrm{1}_Y - \sum_{V \in \mathcal{U}_0} \widehat{f}_{V} \right) : Y \to |\mathcal{N}(\mathcal{U}_0^+) | \subset | C Z |,
\]
which is $\Phi$-Lipschitz with regard to the $ \ell^{1} $-metric with constant $ \frac{ 2(d + 2) }{L} = \eta $, as at most $ (d+2) $ summands of $F$ are non-zero at each of the two points being compared, so that at most $ 2(d+2) $ summands contribute to the difference. It also maps $ K $ into $ | \mathcal{N}(\mathcal{U}_0) | $ because $ \left( \mathrm{1}_Y - \sum_{U \in \mathcal{U}_0} \mathbb{E} (\Phi_*)_{ \left[ - L, L \right] } (f_U) \right) $ vanishes on $K$. Finally, the preimage of the open star around each vertex $ U \in \mathcal{U}_0 $ is contained in $ \mathrm{supp}(\widehat{f}_U) \subset U $, which is in turn contained in a box $B_U$.
\paragraph{(\ref{prop:characterizations-tube-dim-2})$\Rightarrow$(\ref{prop:characterizations-tube-dim-3}):}Given any $\eta > 0$ and compact set $ K \subset Y $, set $ \eta' = \frac{\eta}{2(d+2)(d+3)(2d+5)} $ and obtain a finite abstract simplicial complex $Z$ and a map $ F: Y \to |CZ| $ satisfying the conditions in (\ref{prop:characterizations-tube-dim-2}) with regard to $ \eta' $ and $K$. For $ l = 0,\dots,d $, define $I^{(l)}$ to be the collection of all $l$-dimensional simplices in $Z$, and set $ I = \bigcup_{l=0}^{d} I^{(l)} = Z $. For any $\sigma \in I$, let $\nu_\sigma: |CZ| \to [0,1]$ be a function as in Lemma~\ref{lem:canonical-cover}(\ref{lem:canonical-cover:pou}), and define $ \varphi_\sigma = \nu_\sigma \circ F : Y \to [0,1] $. We claim that $ \{\varphi_\sigma\}_{\sigma\in I} $ together with the decomposition $ I = \bigcup_{l=0}^{d} I^{(l)} $ is a finite partition of unity for $ K \subset Y $ satisfying the required conditions.
Since $ F $ maps $ K $ into $ | Z | \subset | CZ | $ and $ \sum_{\sigma \in I } \nu_\sigma (z) = 1 $ for any $ z \in | Z | $, it follows that $ \sum_{\sigma \in I} \varphi_\sigma (y) = 1 $ for any $ y \in K $.
Since by Lemma~\ref{lem:canonical-cover}~(\ref{lem:canonical-cover:pou}), applied to $CZ$, which has dimension at most $d+1$, the map $ \nu_\sigma $ is Lipschitz with constant $ 2 \big((d+1) +1 \big) \big((d+1) + 2 \big) \big(2(d+1) + 3 \big) $ and $F$ is $\Phi$-Lipschitz with constant $ \eta' $, it follows for any $ \sigma \in I $ that the composition $ \varphi_\sigma = \nu_\sigma \circ F $ is $\Phi$-Lipschitz with constant $ \eta' \cdot 2(d+2)(d+3)(2d+5) = \eta $, which proves condition~(\ref{prop:characterizations-tube-dim-3-a}).
As for condition~(\ref{prop:characterizations-tube-dim-3-b}), we observe that each of the open sets $ V_\sigma = \{ z \in |CZ| \ |\ \nu_\sigma (z) \not= 0 \} $ is contained in the open star around any vertex $ v \in \sigma $, whose preimage under $F$ is by assumption contained in the box $ B_v $; hence $ \mathrm{supp} (\varphi_\sigma) \subset B_v $ as well.
Lastly, for each $ l \in \{ 0, \dots, d \} $, $ \{ \varphi_\sigma \}_{\sigma \in I^{(l)} } $ is a family of orthogonal functions because it is a pullback of the orthogonal family $ \{ \nu_\sigma \}_{\sigma \in I^{(l)} } $ by $F$.
\paragraph{(\ref{prop:characterizations-tube-dim-3})$\Rightarrow$(\ref{prop:characterizations-tube-dim-4}):}Given $ \eta >0 $, $L > 0 $ and compact set $ K \subset Y $, choose $ \eta' > 0 $ small enough that $ (d+1) \cdot 2 \eta' L < 1 $ and
$$ 0 < \frac{ \eta' \left( 1+ (d + 1) (2 - 2 \eta' L)\right) }{\left( 1- (d+1) \cdot 2 \eta' L \right)^2} < \eta $$
(this is possible since the left-hand side tends to $0$ as $\eta' \to 0$), and obtain a finite partition of unity $ \{ \psi_i \}_{i\in I} $ satisfying the conditions in (\ref{prop:characterizations-tube-dim-3}) with regard to $ \eta' $ and $K$. Now for each $i\in I$ define
$$ \psi_i ' = \left( \psi_i - 2 \eta' L \right)_+ \; :\ Y \to \left[0, 1 - 2 \eta' L \right] . $$
Since each $ \psi_i $ is $\Phi$-Lipschitz with constant $ \eta' $ and since $ \mathrm{supp}(\psi_i ') \subset \{ y \in Y \ \big| \ \psi_i (y) \ge 2 \eta' L \} $, we have
$$ \Phi_{[-L, L]} \big( \mathrm{supp}(\psi_i ') \big) \subset \left\{ y \in Y \ \left| \ \max_{t\in [-L, L]} \psi_i (\Phi_{t}(y)) \ge 2 \eta' L \right. \right\} \subset \psi_i ^{-1} \left(\left[ \eta' L , 1 \right]\right) . $$
Moreover, if we put
$$ \psi' = \left( \sum_{i\in I} \psi_i' \right) + \left( \mathrm{1}_Y - \sum_{i\in I} \psi_i \right) , $$
then for any $ y \in Y $ we have
$$ (\mathrm{1}_Y - \psi' ) (y) = \sum_{i\in I} ( \psi_i - \psi_i' ) (y) \in \left[0, (d+1) \cdot 2 \eta' L \right] , $$
where the estimate for the upper bound uses the fact that the family $ \{ ( \psi_i - \psi_i' ) \}_{i\in I^{(l)}} $ is orthogonal for each $ l \in \{0,\dots, d \} $. By putting
$$ \varphi_i = \psi_i' / \psi' , $$
we obtain a finite partition of unity $\{\varphi_i\}_{i\in I}$ for $K \subset Y$, which satisfies conditions~(\ref{prop:characterizations-tube-dim-4-b}) and (\ref{prop:characterizations-tube-dim-4-c}) thanks to the displayed inclusion above together with conditions~(\ref{prop:characterizations-tube-dim-3-b}) and (\ref{prop:characterizations-tube-dim-3-c}). As for condition~(\ref{prop:characterizations-tube-dim-4-a}), we observe that each $ \psi_i' $ is also $\Phi$-Lipschitz with constant $ \eta' $, so that, by the orthogonality within each class $I^{(l)}$, the sum $\sum_{i\in I} \psi_i'$ is $\Phi$-Lipschitz with constant $ \eta' (d + 1) $, and $\left( \mathrm{1}_Y - \sum_{i\in I} \psi_i \right) $ is $\Phi$-Lipschitz with constant $ \eta' (d + 1) $ for the same reason. Thus, we deduce that $ \psi' $ is $\Phi$-Lipschitz with constant $ 2 \eta' (d + 1) $. So for any $y \in Y$ and any $t \in {\mathbb{R}^{}}$, we have
\[
\def\arraystretch{2.2}
\begin{array}{cl}
\multicolumn{2}{l} {\displaystyle \left| \varphi_i(\Phi_t (y) ) - \varphi_i(y) \right| } \\
= &\displaystyle \left| \frac{\psi_i'(\Phi_t (y) )}{\psi' (\Phi_t (y) )} - \frac{\psi_i'(y)}{\psi' (y )} \right| \\
= & \displaystyle \left| \frac{\psi_i'(\Phi_t (y) )}{\psi' (\Phi_t (y) )} - \frac{\psi_i'(y)}{\psi' (\Phi_t (y) )} + \frac{\psi_i'(y)}{\psi' (\Phi_t (y) )} - \frac{\psi_i'(y)}{\psi' (y )} \right| \\
\le & \displaystyle \frac{\left| \psi_i'(\Phi_t (y) ) - \psi_i'(y) \right|}{ \left|\psi' (\Phi_t (y) ) \right|} + \left|\psi_i' ( y ) \right| \cdot \frac{\left| \psi'(y) - \psi'(\Phi_t (y) ) \right|}{ \left|\psi' (\Phi_t (y) ) \right| \cdot \left|\psi' ( y ) \right|} \\
\le & \displaystyle \frac{\eta' \, |t| }{1- (d+1) \cdot 2 \eta' L } + \frac{ 2 \eta' (d + 1) \, |t| }{\left( 1- (d+1) \cdot 2 \eta' L \right)^2} \\
\le & \displaystyle \frac{ \eta' \left( 1+ (d + 1) (2 - 2 \eta' L)\right) }{\left( 1- (d+1) \cdot 2 \eta' L \right)^2} \cdot |t| \ < \ \eta \cdot |t| \, .
\end{array}
\]
This shows that each $ \varphi_i $ is $\Phi$-Lipschitz with constant $\eta$.
\paragraph{(\ref{prop:characterizations-tube-dim-4})$\Rightarrow$(\ref{prop:characterizations-tube-dim-5}):} Set $ \mathcal{U}^{(l)} = \left\{ \mathrm{supp}( \varphi_i )^\mathrm{o} \right\}_{i \in I^{(l)} } $ for $ l = 0, \dots, d $ and shrink each $ B_i $ by $ L $ on both ends.
\paragraph{(\ref{prop:characterizations-tube-dim-5})$\Rightarrow$(\ref{prop:characterizations-tube-dim-1}):} This is done by replacing $ \mathcal{U} $ by the collection $ \{ \Phi_{[-L, L]} (U) \}_{U\in\mathcal{U}} $ and applying Lemma~\ref{lemaboutstretchingabox}.
\end{proof}
\section{Tube dimension vs.~Rokhlin dimension}
\label{Section:Tube2}
\noindent
Using some of the more analytic characterizations of the tube dimension provided by Proposition~\ref{prop:characterizations-tube-dim}, we can demonstrate a close relation between the tube dimension of a topological flow and the Rokhlin dimension of the induced ${C}^*$-dynamical system.
\begin{thm}\label{thmaboutrelationbetweenboxdimensionandRokhlindimension}
Let $Y$ be a locally compact Hausdorff space and $\Phi$ a topological flow on $Y$. Let $\alpha \colon {\mathbb R} \to \mathrm{Aut}(C_{0}(Y))$ be the associated flow. Then
$$ \dimrokone(\alpha) \le \mathrm{dim}_\mathrm{tube}^{\!+1}(\Phi) \le 2 \: \dimrokone(\alpha) .$$
\end{thm}
\begin{proof}[Proof of the left hand inequality]
Let us assume $ \mathrm{dim}_\mathrm{tube}(\Phi) \le d $ for some non-negative integer $ d$, and show that $ \mathrm{dim}_\mathrm{Rok}(\alpha) \le d $. For simplicity we make use of the auxiliary function $ \rho: \mathbb{C} \to \mathbb{C} $ that maps $ a e^{i \theta} $ to $ \sqrt{a} e^{i \theta} $, where $ a \ge 0 $ and $ \theta \in \mathbb{R} $. It is easy to see that $ \rho $ is well defined and uniformly continuous. We also remark that $ {C}_{c}(Y)_{\le 1} $ is dense in $ {C}_{0}(Y)_{\le 1} $.
Given any $ M, T, \delta > 0 $ and any finite set $\mathcal{F} \subset {C}_{c}(Y)_{\le 1}$, we pick a compact set $ K \subset Y $ so large that it contains the supports of all the functions in $\mathcal{F}$, and also pick $ \eta > 0 $ so small that $ | w - w' | \le \eta T $ implies $ | \rho(w) - \rho(w') | \le \delta $ for all $ w, w' \in \mathbb{C} $. Then we apply Proposition~\ref{prop:characterizations-tube-dim}\eqref{prop:characterizations-tube-dim-3} to obtain a partition of unity $ \{ \varphi_i \}_{i \in I} $ for $ K \subset Y $ and a decomposition $ I = \bigcup_{l =0} ^d I^{(l)} $ satisfying the three conditions with regard to $\eta$ and $ K $. Thus each $ \varphi_i $ is supported in some box $B_i$, which, by definition, yields continuous functions (cf.~Lemma~\ref{basicpropertiesofboxes}) $ a_{B_i, \pm} : B_i \to {\mathbb{R}^{}} $ that satisfy
\[
a_{B_i, \pm} ( \Phi_t (y) ) = a_{B_i, \pm} ( y ) - t
\]
for all $y \in B_i$ and $t \in [ a_{B_i, -} ( y ), a_{B_i, +} ( y ) ] $. Set
\[
\psi_i = \varphi_i ^{\frac{1}{2}} \cdot \mathrm{exp} \left( \frac{ 2 \pi i }{M} \cdot a_{B_i, +} \right) : B_i \to {\mathbb{C}^{}} ,
\]
which we continuously extend to all of $Y$ by setting $ \psi_i (y) = 0$ for $y \in Y \setminus B_i$. We then define
\[
x^{(l)} = \sum_{i \in I^{(l)} } \psi_i \in {C}_{c}(Y)
\]
for $l = 0, \dots, d$. Note that for any $y \in Y$, at most one of the functions $\psi_i$ in the sum is nonzero, because the functions $\{ \varphi_i \}_{i \in I^{(l)}}$ are pairwise orthogonal, so that no two of them are nonzero at the same point. We check that $ \{ x^{(l)} \}_{l=0, \dots, d} $ satisfies the conditions in Lemma~\ref{Lemma:def-dimrok-lift}(\ref{alternativedefinitionofRokhlindimensionforflows}).
Conditions~(\ref{Lemma:def-dimrok-lift-item-5c}) and (\ref{Lemma:def-dimrok-lift-item-5d}) are trivially satisfied since ${C}_{0}(Y)$ is commutative. To check condition~(\ref{Lemma:def-dimrok-lift-item-5b}), we compute for $ l = 0, \dots, d $ that
\[
x^{(l)} x^{(l)*} = \sum_{i, j \in I^{(l)} } \psi_i \psi_j^* = \sum_{i \in I^{(l)} } | \psi_i |^2,
\]
using that $ \psi_i $ and $ \psi_j $ are orthogonal when $i \not= j$. Thus
\begin{align*}
\sum_{l=0}^{d} x^{(l)} x^{(l)*} & = \sum_{i\in I} | \psi_i |^2 = \sum_{i\in I} \varphi_i ,
\end{align*}
which is equal to $1$ on $ K $. This implies that for any $ f \in \mathcal{F} $, we have
\[
f = \sum_{l= 0 }^{d} x^{(l)} x^{(l)*} \cdot f .
\]
This shows condition~(\ref{Lemma:def-dimrok-lift-item-5b}).
As for condition~(\ref{Lemma:def-dimrok-lift-item-5a}), it suffices to check that for any $ l \in \{ 0, \dots, d \} $, for any $t \in [ -T, T] $ and for any $y \in Y$, we have $ \left| x^{(l)} ( y ) - \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot x^{(l)} ( \Phi_{t} (y) ) \right| \le \delta $. For $ i\in I $, let us define the function
$$ \widetilde{\psi}_i = \varphi_i \cdot \mathrm{exp} \left( \frac{ 2 \pi i }{M} \cdot a_{B_i, +} \right) : B_i \to {\mathbb{C}^{}} , $$
which we continuously extend to all of $Y$ by setting $ \widetilde{\psi}_i (y) = 0$ for $y \in Y \setminus B_i$. We then define
$$ \widetilde{x}^{(l)} = \sum_{i \in I^{(l)} } \widetilde{\psi}_i \in {C}_{c}(Y) $$
for $l = 0, \dots, d$. Observe that $ \psi_i = \rho \circ \widetilde{\psi}_i $ for all $ i\in I $, and using the orthogonality of $ \{ \varphi_i \}_{i\in I^{(l)}} $, we also have $ x^{(l)} = \rho \circ \widetilde{x}^{(l)} $. Also notice that $ \rho $ preserves multiplication by complex numbers with norm $1$. Hence by our choice of $ \eta $, it suffices to check that for any $ l \in \{ 0, \dots, d \} $ and $y \in Y$, the function
$$ {\mathbb R}\ni t \mapsto \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot \widetilde{x}^{(l)} ( \Phi_{t} (y) ) \in \mathbb{C} $$
is Lipschitz with constant $\eta$. Since the Lipschitz condition on a geodesic space can be checked locally, it suffices to show for any $ l \in \{ 0, \dots, d \} $ and $y \in Y$ that there is $ t_y > 0 $ such that for any $ t \in ( - t_y, t_y) $, we have
$$ \left| \widetilde{x}^{(l)} ( y ) - \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot \widetilde{x}^{(l)} ( \Phi_{t} (y) ) \right| \le \eta \cdot | t | . $$
This breaks down into two cases:
\begin{enumerate}[label={Case {\arabic*}:~},leftmargin=*]
\item Suppose $ \varphi_i (y) = 0 $ for every $ i\in I^{(l)} $.
Since $ \varphi_i (\Phi_{t} (y)) \not= 0 $ for at most one function in $ \{ \varphi_i \}_{i\in I^{(l)}} $, we have for every $t \in \mathbb{R}$ that
\[
\begin{array}{cl}
\multicolumn{2}{l}{\displaystyle \left| \widetilde{x}^{(l)} ( y ) - \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot \widetilde{x}^{(l)} ( \Phi_{t} (y) ) \right| }\\
= & \displaystyle \left| \widetilde{x}^{(l)} ( \Phi_{t} (y) ) \right| \\
= &\displaystyle \max_{i\in I^{(l)}} \left\{ \left| \varphi_i ( \Phi_{t} (y) ) \right| \right\} \\
= &\displaystyle \max_{i\in I^{(l)}} \left\{ \left| \varphi_i ( y ) - \varphi_i ( \Phi_{t} (y) ) \right| \right\} \\
\le & \eta \cdot |t| .
\end{array}
\]
\item Suppose $ \varphi_{i_0} (y) \not= 0 $ for some $ i_0 \in I^{(l)} $. Then we may pick $ t_y $ small enough so that $ \varphi_{i_0} ( \Phi_{t} (y) ) \not= 0 $ for all $ t \in ( - t_y, t_y ) $. By orthogonality, we know that $ \varphi_{i} ( \Phi_{t} (y) ) = 0 $ for all $ t \in ( - t_y, t_y ) $ and $ i\in I^{(l)} \setminus \{i_0\} $. Observe that the segment $ \Phi_{( - t_y, t_y)} (y) $ of a flow line falls entirely in the box $B_{i_0}$. Hence for any $ t \in ( - t_y, t_y ) $, we have
\[
\def\arraystretch{2}
\begin{array}{cl}
\multicolumn{2}{l} { \displaystyle \left| \widetilde{x}^{(l)} ( y ) - \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot \widetilde{x}^{(l)} ( \Phi_{t} (y) ) \right| }\\
= & \displaystyle \left| \widetilde{\psi}_{i_0} ( y ) - \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot \widetilde{\psi}_{i_0} ( \Phi_{t} (y) ) \right| \\
= & \displaystyle \bigg| \varphi_{i_0} ( y ) \cdot \mathrm{exp} \left( \frac{ 2 \pi i }{M} \cdot a_{B_{i_0}, +} ( y ) \right) \\
& \displaystyle - \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot \varphi_{i_0} ( \Phi_{t} (y) ) \cdot \mathrm{exp} \left( \frac{ 2 \pi i }{M} \cdot a_{B_{i_0}, +} ( \Phi_{t} (y) ) \right) \bigg| \\
= & \displaystyle \bigg| \varphi_{i_0} ( y ) \cdot \mathrm{exp} \left( \frac{ 2 \pi i }{M} \cdot a_{B_{i_0}, +} ( y ) \right) \\
& \displaystyle - \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot \varphi_{i_0} ( \Phi_{t} (y) ) \cdot \mathrm{exp} \left( \frac{ 2 \pi i }{M} \cdot \big( a_{B_{i_0}, +} ( y ) - t \big) \right) \bigg| \\
= & \displaystyle \left| \Big( \varphi_{i_0} ( y ) - \varphi_{i_0} ( \Phi_{t} (y) ) \Big) \cdot \mathrm{exp} \left( \frac{ 2 \pi i }{M} \cdot a_{B_{i_0}, +} ( y ) \right) \right| \\
= & \displaystyle \left| \varphi_{i_0} ( y ) - \varphi_{i_0} ( \Phi_{t} (y) ) \right| \le \eta \cdot |t| .
\end{array}
\]
\end{enumerate}
We conclude that $ \left\| f \cdot \left( \alpha_{t}( x^{(l)} ) - \mathrm{exp}{ \left( \frac{2\pi i t }{M} \right) } \cdot x^{(l)} \right) \right\| \le \delta $ for any $ l \in \{ 0, \dots, d \} $, $t \in [ -T, T] $ and $f \in \mathcal{F}$. This completes the proof of the first inequality.
\end{proof}
\begin{proof}[Proof of the second inequality]
Let us assume that $ \mathrm{dim}_\mathrm{Rok}(\alpha) \le d $ for some non-negative integer $d$, and show that $ \mathrm{dim}_\mathrm{tube}(\Phi) \le 2 (d + 1 ) - 1 $ by verifying the conditions in Definition~\ref{def:tube-dimension}. Before we start, let us make some local definitions:
\begin{enumerate}[label=\textup{({D}\arabic*)}]\setcounter{enumi}{\value{proofenumi}}
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-disk} We present the closed unit disk $ \overline{\mathbb{D}} \subset \mathbb{C} $ as
$$ \overline{\mathbb{D}} \cong ( \mathbb{R} \times [0,1] ) / \sim $$
where $ ( \theta_1, r ) \sim (\theta_2, r ) $ if and only if $ r = 0 $ or $ \theta_1 - \theta_2 \in \mathbb{Z} $, for any $ \theta_1, \theta_2 \in \mathbb{R} $; the class of $ ( \theta, r ) $ corresponds to the point $ r \, e^{2 \pi i \theta} \in \overline{\mathbb{D}} $.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-varepsilon} Set $ \varepsilon = \frac{1}{96} $.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-Dj} For $j= 0,1,2$, define the following annular sections
\begin{align*}
D_j = & \left( [-j \varepsilon, j \varepsilon] \times \left[ \frac{3 - j}{3\sqrt{d+2}} , 1 \right] \right) / \sim
\end{align*}
as subsets of $\overline{\mathbb{D}}$ (note that $D_0$ is a line segment).
\begin{center}
\begin{tikzpicture}
\filldraw[fill=gray] (-20:3cm) arc [radius=3, start angle=-20, delta angle=40]
-- (20:1cm) arc [radius=1, start angle=20, delta angle=-40]
-- cycle;
\filldraw[fill=gray!30] (-10:3cm) arc [radius=3, start angle=-10, delta angle=20]
-- (10:2cm) arc [radius=2, start angle=10, delta angle=-20]
-- cycle;
\draw (0,0) circle (3cm);
\node at (0:1.5) {$D_2$};
\node at (0:2.5) {$D_1$};
\node at (-0.5,0.5) {$\overline{\mathbb{D}}$};
\end{tikzpicture}
\end{center}
Observe that $ D_j \subset D_{j+1}^o $ for $j= 0,1$, where $ D_{j}^o $ is the interior of $ D_j $ relative to $ \overline{\mathbb{D}} $.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-nbhd} Let $ N_\delta (D_j) = \{ z \in \overline{\mathbb{D}} \ | \ d(z, D_j) \le \delta \} $ denote the $\delta$-neighborhood of $ D_j $ with regard to the Euclidean metric on $ \overline{\mathbb{D}} \subset \mathbb{C} $. Note that the Euclidean metric is invariant under rotation.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-delta} Pick $ 0< \delta \le \frac{1}{d + 2} $ so small that $ N_\delta(D_j) \subset D_{j+1}^o $ for $j= 0,1$.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-g} Pick a continuous function $ g : \overline{\mathbb{D}} \to [0,1] $ such that $ g |_{D_1} = 1 $ and $ \mathrm{supp}(g) \subset D_2 $. In particular, we have $ g |_{\overline{\mathbb{D}} \setminus D_2^o} = 0 $.
\end{enumerate}\setcounter{proofenumi}{\value{enumi}}
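As a sanity check on the choice $ \varepsilon = \frac{1}{96} $ in \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-varepsilon} (this paragraph only records numerical values of expressions appearing below), note that $ 4\varepsilon \cdot 8L = \frac{L}{3} $, $ \left( \frac{1}{2} - 4\varepsilon \right) 8L = \frac{11L}{3} $ and $ (1 - 16 \varepsilon) \, 8L = \frac{20L}{3} $; in particular, the boxes produced in Claim~\ref{claim1withintheproofaboutrelationbetweenboxdimandRokdimthatasetisabox} below have length well above the $2L$ needed to accommodate the flow segments $ \Phi_{[-L, L]}(y) $ of Claim~\ref{claim2withintheproofaboutrelationbetweenboxdimandRokdimthatasetisabox}.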
Now given $ L >0 $ and a compact subset $K\subset Y$, we would like to find a collection $\mathcal{U}$ of open subsets of $Y$ satisfying conditions~(\ref{def:tube-dimension-1})-(\ref{def:tube-dimension-3}) in Definition~\ref{def:tube-dimension}.
For this we also make the following local definitions:
\begin{enumerate}[label=\textup{({D}\arabic*)}]\setcounter{enumi}{\value{proofenumi}}
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-K} By local compactness, there is a compact set $ K' \subset Y $ such that $ \Phi_{ [-8L, 8L] } (K) \subset (K')^o $.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-f} By Urysohn's lemma, we can find $ f \in {C}_{0}(Y)_{\le1,+} $ such that $ f (y) = 1 $ for any $y \in K'$.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x} Since we have assumed $ \mathrm{dim}_\mathrm{Rok}(\alpha) \le d $, we can apply Lemma~\ref{Lemma:def-dimrok-lift}(\ref{alternativedefinitionofRokhlindimensionforflows}) and find contractions $ x^{(0)}, \dots, x^{(d)} \in {C}_{0}(Y) $ satisfying conditions~(\ref{Lemma:def-dimrok-lift-item-5a}) - (\ref{Lemma:def-dimrok-lift-item-5d}) with $ p, T, \delta $ and $F$ replaced by $\frac{\pi}{4L}$, $8L$, $\delta$ and $\{f\}$. Written out, this means:
\begin{enumerate}[label=(\alph*)]
\item
\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-a}
$ \left\| f \cdot (\alpha_{t}( x^{(l)} ) - e^{\frac{2 \pi i t}{8L}} \cdot x^{(l)}) \right\| \le \delta $ for all $ l = 0, \dots, d $, for all $t \in [ -8L, 8L] $;
\item
\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-b}
$ \left\| f-f\cdot\sum_{l= 0 }^{d} x^{(l)} x^{(l)*} \right\| \le \delta $;
\item
\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-c}
$ \left\| [ x^{(l)} , f ] \right\| \le \delta $ for all $ l = 0, \dots, d $;
\item
\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-d}
$ \left\| f \cdot [ x^{(l)} , x^{(l)*} ] \right\| \le \delta $ for all $ l = 0, \dots, d $.
\end{enumerate}
\end{enumerate}\setcounter{proofenumi}{\value{enumi}}
Note that the spectra of $ x^{(0)}, \dots, x^{(d)} $ (which are nothing but the closures of their ranges as functions on $Y$) are contained in $ \overline{\mathbb{D}} $. Also observe that replacing any number of the elements $ x^{(l)} $ by $ - x^{(l)} $ (or, more generally, by $ \lambda \, x^{(l)} $ for any $ \lambda \in \mathbb{C} $ with $ |\lambda| = 1 $) does not violate any of the four conditions~\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-a}-\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-d} in \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x}.
Let us fix an index $ l \in \{0,\dots,d\} $ in the following definitions:
\begin{enumerate}[label=\textup{({D}\arabic*)}] \setcounter{enumi}{\value{proofenumi}}
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-xpm} Set $ x^{(l, \pm)} = \pm x^{(l)} $.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-Delta} For $ j = 0,1,2 $, define $ \Delta^{(l, \pm)}_j = \left( x^{(l, \pm)} \right) ^{-1} (D_j) , $ which are compact subsets of $ Y $, as $ 0 \not\in D_j $.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-gamma} Define continuous functions
$$ \gamma^{(l, \pm)} = g \circ x^{(l, \pm)} : Y \to [0,1] , $$
which are supported in $ \Delta^{(l, \pm)}_2 $ and equal to $1$ on $ \Delta^{(l, \pm)}_1 $.
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-xi} Define $\xi^{(l, \pm)} \colon Y \to \overline{\mathbb{D}}$ by
$$ \xi^{(l, \pm)} (y) = \frac{1}{8L} \int_{-4L}^{4L} \gamma^{(l, \pm)}(\Phi_s(y)) \cdot \mathrm{exp}{ \left( \frac{2\pi i s }{8L} \right) } \: d s , $$
which are continuous because the integrand is jointly continuous in $ (s, y) $ and the domain of integration is compact.
\end{enumerate}\setcounter{proofenumi}{\value{enumi}}
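As a quick check that $ \xi^{(l, \pm)} $ indeed takes values in $ \overline{\mathbb{D}} $, observe that
\[
\left| \xi^{(l, \pm)} (y) \right| \le \frac{1}{8L} \int_{-4L}^{4L} \left| \gamma^{(l, \pm)}(\Phi_s(y)) \right| \: d s \le \frac{1}{8L} \cdot 8L = 1 \; .
\]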
Intuitively speaking, the integral in \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-xi} implements an averaging process that turns the approximate equivariance of $x^{(l, \pm)}$ as expressed by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x}\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-a} into exact equivariance of $\xi^{(l, \pm)}$, albeit restricted to a local scale. This will be made precise later in \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant}.
We first claim that for any $\sigma \in \{+, -\}$, $ y \in \Delta^{(l, \sigma)}_1 \cap K' $ and
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-t-1}
t \in [ - (1 - 4\varepsilon ) 8L, - 4\varepsilon \cdot 8L ] \cup [ 4\varepsilon \cdot 8L, (1 - 4\varepsilon) 8L ] \, ,
\end{equation}
we have
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-gamma0}
\gamma^{(l, \sigma)} (\Phi_t(y)) = 0 \; .
\end{equation}
Indeed, since $ f(y) = 1 $ by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-f}, we have, by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x}\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-a},
$$ \left| x^{(l, \sigma)} (\Phi_t(y) ) - \mathrm{exp}{ \left(- \frac{2\pi i t }{8L} \right) } \cdot x^{(l, \sigma)} (y ) \right| \le \delta $$
and thus
\[
\def\arraystretch{2}
\begin{array}{rcl}
x^{(l, \sigma)} (\Phi_t(y) ) & \in & \displaystyle N_\delta \left( \mathrm{exp}{ \left(- \frac{2\pi i t }{8L} \right) } \cdot x^{(l, \sigma)} (y ) \right) \\
& \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-Delta}}{\subset} & \displaystyle N_\delta \left( \mathrm{exp}{ \left(- \frac{2\pi i t }{8L} \right) } \cdot D_1 \right) \\
& \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-nbhd}}{ =} & \displaystyle \mathrm{exp}{ \left(- \frac{2\pi i t }{8L} \right) } \cdot N_\delta \left( D_1 \right) \\
& \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-delta}}{ \subset} & \displaystyle \mathrm{exp}{ \left(- \frac{2\pi i t }{8L} \right) } \cdot D_2 \\
& \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-Dj}}{ =} & \displaystyle \left( \left[ - \frac{t}{8L} - 2\varepsilon, - \frac{t}{8L} + 2\varepsilon \right] \times \left[ \frac{1}{3 \sqrt{d+2}} , 1 \right] \right) / \sim \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-t-1}}{ \subset} & \displaystyle \left( \left( [ -1 + 2\varepsilon, - 2\varepsilon ] \cup [ 2\varepsilon, 1 - 2\varepsilon ] \right) \times \left[ \frac{1}{3 \sqrt{d+2}} , 1 \right] \right) / \sim \\
& \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-Dj}}{ \subset} & \displaystyle \overline{\mathbb{D}} \setminus D_2^o .
\end{array}
\]
But this implies that $ \gamma^{(l, \sigma)} (\Phi_t(y)) \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-gamma}}{=} g \left( x^{(l, \sigma)} (\Phi_t(y) ) \right) \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-g}}{=} 0 $, and the claim is proved.
This has several consequences regarding $ \xi^{(l, \pm)} $. To derive them, we compute, for any $ y \in \Delta^{(l, \pm)}_1 \cap K' $ and any $ t \in \left[ - \left( \frac{1}{2} - 4\varepsilon \right) 8L , \left( \frac{1}{2} - 4\varepsilon \right) 8L \right] $,
\begin{equation}
\stepcounter{equation}\tag{\theequation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-integral}
\def\arraystretch{2}
\begin{array}{cl}
\multicolumn{2}{l} { \displaystyle \mathrm{exp}{ \left( \frac{ 2\pi i t }{8L} \right)} \cdot \xi^{(l, \pm)} (\Phi_t(y)) } \\
\stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-xi}}{ =} & \displaystyle \frac{1}{8L} \int_{-4L}^{4L} \gamma^{(l, \pm)}(\Phi_{s + t}(y)) \cdot \mathrm{exp}{ \left( \frac{2\pi i (s+t) }{8L} \right) } \: d s \\
\stackrel{s+t \to s'}{=} & \displaystyle \frac{1}{8L} \int_{-4L + t }^{4L + t } \gamma^{(l, \pm)}(\Phi_{s' }(y)) \cdot \mathrm{exp}{ \left( \frac{2\pi i s' }{8L} \right) } \: d s' \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-gamma0}}{ =} & \displaystyle \frac{1}{8L} \int_{-32 \varepsilon L }^{ 32 \varepsilon L } \gamma^{(l, \pm)}(\Phi_{s' }(y)) \cdot \mathrm{exp}{ \left( \frac{2\pi i s' }{8L} \right) } \: d s' \; ,
\end{array}
\end{equation}
where in the last step, we also used the fact
\begin{align*}
& (-32 \varepsilon L , 32 \varepsilon L) = \\
& [-4L+t, 4L+t] \setminus \left( [ - (1 - 4\varepsilon ) 8L, - 4\varepsilon \cdot 8L ] \cup [ 4\varepsilon \cdot 8L, (1 - 4\varepsilon) 8L ] \right) \; .
\end{align*}
Since the last integral in \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-integral} is independent of $t$, we obtain
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant}
\mathrm{exp}{ \left( \frac{ 2\pi i t }{8L} \right)} \cdot \xi^{(l, \pm)} (\Phi_t(y)) = \mathrm{exp}{ \left( \frac{ 2\pi i 0 }{8L} \right)} \cdot \xi^{(l, \pm)} (\Phi_0(y)) = \xi^{(l, \pm)} (y)
\end{equation}
for any $ y \in \Delta^{(l, \pm)}_1 \cap K' $ and any $ t \in \left[ - \left( \frac{1}{2} - 4\varepsilon \right) 8L , \left( \frac{1}{2} - 4\varepsilon \right) 8L \right] $.
Moreover, setting $t = 0$ in \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-integral}, we obtain
\[
\def\arraystretch{2}
\begin{array}{cl}
\multicolumn{2}{l} { \xi^{(l, \pm)} (y) }\\ \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-integral}}{ =} & \displaystyle \frac{1}{64 \varepsilon L} \int_{-32 \varepsilon L }^{ 32 \varepsilon L } 8 \varepsilon \cdot \gamma^{(l, \pm)}(\Phi_{s }(y)) \cdot \mathrm{exp} \left( \frac{2\pi i s }{8L} \right) \: d s \\
\stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-gamma}}{ \in} & \displaystyle \mathrm{conv} \left( \left\{ 8 \varepsilon \cdot \lambda \cdot \mathrm{exp} \left( \frac{2\pi i s }{8L} \right) \ \bigg| \ s \in [ -32 \varepsilon L , 32 \varepsilon L ], \ \lambda \in [0, 1 ] \right\} \right)\\
= &\displaystyle \left( [ -4\varepsilon, 4\varepsilon ] \times \left[ 0 , 8 \varepsilon \right] \right) / \sim .
\end{array}
\]
When $ s = 0 $, the integrand above is equal to
$$ 8 \varepsilon \cdot \gamma^{(l, \pm)}(\Phi_{0}(y)) \cdot \mathrm{exp}{ \left( \frac{2\pi i \cdot 0 }{8L} \right) } = 8 \varepsilon $$
(using that $ \gamma^{(l, \pm)} = 1 $ on $ \Delta^{(l, \pm)}_1 $ by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-gamma}), and this value falls outside of the faces $\left( \{ 4\varepsilon \} \times \left[ 0 , 8 \varepsilon \right] \right) / \sim$ and $\left( \{ - 4\varepsilon \} \times \left[ 0 , 8 \varepsilon \right] \right) / \sim$ of the convex set $\left( [ -4\varepsilon, 4\varepsilon ] \times \left[ 0 , 8 \varepsilon \right] \right) / \sim $. Since an average of points in a compact convex set lies in a face only if all the averaged points lie in that face, it follows that
\begin{align*}
\stepcounter{equation}\tag{\theequation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-range}
\xi^{(l, \pm)} (y) \in & \big( \left( [ -4\varepsilon, 4\varepsilon ] \times \left[ 0 , 8 \varepsilon \right] \right) / \sim \big) \setminus \big( \left( \{ - 4\varepsilon, 4\varepsilon \} \times \left[ 0 , 8 \varepsilon \right] \right) / \sim \big) \\
= & \left( ( -4\varepsilon, 4\varepsilon ) \times \left( 0 , 8 \varepsilon \right] \right) / \sim \; .
\end{align*}
We define:
\begin{enumerate}[label=\textup{({D}\arabic*)}] \setcounter{enumi}{\value{proofenumi}}
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-V} the subset $\displaystyle V = \left( \left[ - \left( \frac{1}{2} - 8\varepsilon \right) , \frac{1}{2} - 8\varepsilon \right] \times \left( 0 , 8 \varepsilon \right] \right) / \sim $ in $\overline{\mathbb{D}}$, and
\item\label{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-B} the subset
\[
B^{(l, \sigma)} = \Phi_{\left[ - \left( \frac{1}{2} - 4\varepsilon \right) 8L , \left( \frac{1}{2} - 4\varepsilon \right) 8L \right]} ( \Delta^{(l, \sigma)}_1 \cap K' ) \cap \left( \xi^{(l, \sigma)} \right)^{-1} ( V )
\]
in $Y$, for any $l \in \{0, \ldots, d\}$ and any $\sigma \in \{+, -\}$.
\end{enumerate}\setcounter{proofenumi}{\value{enumi}}
\begin{clm}\label{claim1withintheproofaboutrelationbetweenboxdimandRokdimthatasetisabox}
For any $l \in \{0, \ldots, d\}$ and any $\sigma \in \{+, -\}$, $ B^{(l, \sigma)} $ is a box with length $ l_{B^{(l, \sigma)} } = (1 - 16 \varepsilon) 8L $.
\end{clm}
\begin{clm}\label{claim2withintheproofaboutrelationbetweenboxdimandRokdimthatasetisabox}
For any $l \in \{0, \ldots, d\}$, any $\sigma \in \{+, -\}$ and any $ y \in K $ with
$$ x^{(l, \sigma)}(y) \in \left( \left[-\frac{1}{4}, \frac{1}{4}\right] \times \left[ \frac{1}{\sqrt{d+2}} , 1 \right] \right) / \sim , $$
we have
$$ \Phi_{\left[ - L, L \right]} (y) \subset \left( B^{(l, \sigma)} \right)^o . $$
\end{clm}
We postpone the proofs of these claims until after the proof of this theorem, and first show how to complete the proof using them.
Let us show that $ \mathcal{U} = \left\{ \left( B^{(l, \sigma)} \right)^o \right\}_{ l \in \{0, \dots, d \}, \sigma \in \{+,-\} } $ is a collection of open subsets of $ Y $ that satisfies the conditions in Definition~\ref{def:tube-dimension} for $ \mathrm{dim}_\mathrm{tube} ( \Phi ) \le 2(d +1 ) -1 $ with regard to $ L $ and $ K $. By design, each $ \left( B^{(l, \sigma)} \right)^o $ is contained in a box, and the multiplicity is at most $ 2 (d +1) $, which is the cardinality of the collection. Since $ f (y) = 1 $ for all $y\in K$ by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-f} and
$$ \left\| \sum_{l= 0 }^{d} x^{(l)} x^{(l)*} \cdot f - f \right\| \le \delta $$
by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x}\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-b}, we get for all $y\in K$ that
$$ \sum_{l= 0 }^{d} \left| x^{(l)} (y) \right| ^2 \ge 1 - \delta \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-delta}}{\ge} \frac{ d + 1}{ d + 2} \; . $$
Thus there is a number $ l = l(y) \in \{0, \dots, d\} $ such that
$$ \left| x^{(l)} (y) \right| ^2 \ge \frac{1}{ d + 2} . $$
It follows that at least one of $ x^{(l, +)}(y) $ and $ x^{(l, -)}(y) $ is contained in
$$ \left( \left[-\frac{1}{4}, \frac{1}{4}\right] \times \left[ \frac{1}{\sqrt{d+2}} , 1 \right] \right) / \sim \; .$$
Hence by Claim~\ref{claim2withintheproofaboutrelationbetweenboxdimandRokdimthatasetisabox}, we know that $ \Phi_{\left[ - L, L \right]} (y) $ is contained in $\left( B^{(l, +)} \right)^o $ or $\left( B^{(l, -)} \right)^o $. This shows $ \mathrm{dim}_\mathrm{tube} ( \Phi ) \le 2(d +1 ) -1 $.
\end{proof}
\begin{proof}[Proof of Claim~\ref{claim1withintheproofaboutrelationbetweenboxdimandRokdimthatasetisabox}]
Fix $l \in \{0, \ldots, d\}$ and $\sigma \in \{+, -\}$. We first show $B^{(l, \sigma)}$ is compact. To this end, for any $ z \in \Delta^{(l, \sigma)}_1 \cap K' $, and any
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-t-2}
t \in \left[ - \left( \frac{1}{2} - 4\varepsilon \right) 8L , \left( \frac{1}{2} - 4\varepsilon \right) 8L \right] ,
\end{equation}
we have
\begin{align*}
\xi^{(l, \sigma)} \left( \Phi_t (z) \right) \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant}}{=} & \mathrm{exp}{ \left( - \frac{2\pi i t }{8L} \right) } \cdot \xi^{(l, \sigma)} (z) \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-range}}{\in} & \left( ( -4\varepsilon - \frac{t}{8L}, 4\varepsilon - \frac{t}{8L} ) \times \left( 0 , 8 \varepsilon \right] \right) / \sim \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-t-2}}{\subset} & \left( \left( - \frac{1}{2} , \frac{1}{2} \right) \times \left( 0 , 8 \varepsilon \right] \right) / \sim \; .
\end{align*}
Hence the image of the compact set $\Phi_{\left[ - \left( \frac{1}{2} - 4\varepsilon \right) 8L , \left( \frac{1}{2} - 4\varepsilon \right) 8L \right]} ( \Delta^{(l, \sigma)}_1 \cap K' ) $ under $ \xi^{(l, \sigma)} $ does not contain $0$, and thus its intersection with $ \left( \xi^{(l, \sigma)} \right)^{-1} (V) $ coincides with its intersection with $ \left( \xi^{(l, \sigma)} \right)^{-1} (V \cup \{0\}) $. By \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-V}, the set $ V \cup \{0\} $ equals the compact set
\begin{align*}
\left( \left[ - \left( \frac{1}{2} - 8\varepsilon \right) , \frac{1}{2} - 8\varepsilon \right] \times \left[ 0 , 8 \varepsilon \right] \right) / \sim \; ,
\end{align*}
so $ \left( \xi^{(l, \sigma)} \right)^{-1} (V \cup \{0\}) $ is closed. Therefore $B^{(l, \sigma)}$, being by its definition in \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-B} the intersection of a compact set with a closed set, is compact.
Now let
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-log}
\log : \left( \left( - \frac{1}{2} , \frac{1}{2} \right) \times \left( 0 , 1 \right] \right) / \sim \ \to \{ a + b i \in \mathbb{C} \ | \ a \le 0, \ b \in ( - \pi, \pi ) \}
\end{equation}
be a continuous branch of the log function, and define a continuous function
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda}
\Lambda = \frac{1}{2\pi} \mathrm{Im} \log : \left( \left( - \frac{1}{2} , \frac{1}{2} \right) \times \left( 0 , 1 \right] \right) / \sim \ \to \left( - \frac{1}{2}, \frac{1}{2} \right) ,
\end{equation}
where $ \mathrm{Im} \log $ denotes the imaginary part of the log function.
Since $\xi^{(l, \sigma)} (B^{(l, \sigma)}) \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-B}}{\subset} V \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-V}}{\subset} \left( \left( - \frac{1}{2} , \frac{1}{2} \right) \times \left( 0 , 1 \right] \right) / \sim $, we may define functions $a^{(l, \sigma)}_\pm \colon B^{(l, \sigma)} \to {\mathbb C}$ by the formulas
\begin{align}
a^{(l, \sigma)}_-(y) = & \left( \Lambda \left( \xi^{(l, \sigma)} (y) \right) - \left( \frac{1}{2} - 8\varepsilon \right) \right) \cdot 8L \;, \label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aminus} \\
a^{(l, \sigma)}_+(y) = & \left( \Lambda \left( \xi^{(l, \sigma)} (y) \right) + \left( \frac{1}{2} - 8\varepsilon \right) \right) \cdot 8L \label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aplus} \;.
\end{align}
Fix $y \in B^{(l, \sigma)}$. An immediate consequence of the above definition is that
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-diff}
a^{(l, \sigma)}_+(y) - a^{(l, \sigma)}_-(y) = (1 - 16 \varepsilon) 8L = l_{B^{(l, \sigma)} } \; .
\end{equation}
By \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-B}, there are $z \in \Delta^{(l, \sigma)}_1 \cap K' $ and
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-t-3}
t \in \left[ - \left( \frac{1}{2} - 4\varepsilon \right) 8L , \left( \frac{1}{2} - 4\varepsilon \right) 8L \right]
\end{equation}
such that $y = \Phi_t (z)$. It follows from \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-range} and \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda} that
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-range}
\Lambda \left( \xi^{(l, \sigma)} (z) \right) \in ( - 4 \varepsilon, 4 \varepsilon )
\end{equation}
and from \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant} and \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda} that
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-yz}
\Lambda \left( \xi^{(l, \sigma)} (y) \right) = \Lambda \left( \xi^{(l, \sigma)} (z) \right) - \frac{t}{8L} \; .
\end{equation}
Thus we have
\[
\stepcounter{equation}\tag{\theequation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aminus-lowerbound}
\def\arraystretch{2}
\begin{array}{rcl}
a^{(l, \sigma)}_-(y) &
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aminus}}{ =} & \displaystyle \left( \Lambda \left( \xi^{(l, \sigma)} (y) \right) - \left( \frac{1}{2} - 8\varepsilon \right) \right) \cdot 8L \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-yz}}{ =} &\displaystyle \left( \Lambda \left( \xi^{(l, \sigma)} (z) \right) - \frac{t}{8L} - \left( \frac{1}{2} - 8\varepsilon \right) \right) \cdot 8L \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-range}}{ >} & \displaystyle - 4 \varepsilon \cdot 8L + \left( - \frac{t}{8L} - \left( \frac{1}{2} - 8\varepsilon \right) \right) \cdot 8L \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-t-3}}{ =} & \displaystyle - t - \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L \; .
\end{array}
\]
On the other hand, since $\xi^{(l, \sigma)} (y) \in V$ by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-B}, we have $\Lambda \left( \xi^{(l, \sigma)} (y) \right) \in \left[ - \left( \frac{1}{2} - 8\varepsilon \right) , \frac{1}{2} - 8\varepsilon \right]$ and thus
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aminus-upperbound}
a^{(l, \sigma)}_-(y) \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aminus}}{ =} \left( \Lambda \left( \xi^{(l, \sigma)} (y) \right) - \left( \frac{1}{2} - 8\varepsilon \right) \right) \cdot 8L \, \leq 0 \; . \\
\end{equation}
Similarly we have
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aplus-range}
0\leq a^{(l, \sigma)}_+(y) < - t + \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L \; .
\end{equation}
Now for any $ s \in \Big[ - t - \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L , - t + \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L \Big] $, we have
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-t-4}
(s + t) \in \Big[ - \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L , \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L \Big]
\end{equation}
and thus
\[
\stepcounter{equation}\tag{\theequation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant-s}
\def\arraystretch{2}
\begin{array}{rcl}
\xi^{(l, \sigma)} \left( \Phi_s(y) \right) & = &\displaystyle \xi^{(l, \sigma)} \left( \Phi_{ s + t } (z) \right) \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant}}{ =} &\displaystyle \mathrm{exp}{ \left( - \frac{2\pi i (s+t) }{8L} \right) } \cdot \xi^{(l, \sigma)} (z) \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant}}{ =} & \displaystyle \mathrm{exp}{ \left( - \frac{2\pi i s }{8L} \right) } \cdot \xi^{(l, \sigma)} (y) \; .
\end{array}
\]
In particular, we have
\begin{align}
\Lambda \left( \xi^{(l, \sigma)} \left( \Phi_s(y) \right) \right) & \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant-s}}{ =} \displaystyle \Lambda \left( \xi^{(l, \sigma)} \left( z \right) \right) - \frac{ s + t }{8L} \label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-syz} \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant-s}}{ =} \displaystyle \Lambda \left( \xi^{(l, \sigma)} \left( y \right) \right) - \frac{ s }{8L} \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aminus}}{=}\hspace{1mm} \displaystyle \frac{ a^{(l, \sigma)}_-(y) - s }{8L} + \left( \frac{1}{2} - 8\varepsilon \right) \label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-upperbound} \\
& \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aplus}}{ =} \displaystyle \frac{ a^{(l, \sigma)}_+(y) - s }{8L} - \left( \frac{1}{2} - 8\varepsilon \right) \; . \label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-lowerbound}
\end{align}
We conclude that if
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-s-1}
s \in \left[ a^{(l, \sigma)}_-(y), a^{(l, \sigma)}_+(y) \right] \; ,
\end{equation}
then
\begin{align*}
\Phi_s(y) = \Phi_{ s + t } (z) \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-t-4}}{ \in} \Phi_{\left[ - \left( \frac{1}{2} - 4\varepsilon \right) 8L , \left( \frac{1}{2} - 4\varepsilon \right) 8L \right]} ( \Delta^{(l, \sigma)}_1 \cap K' ) \; ,
\end{align*}
and combining
\[
\def\arraystretch{2}
\begin{array}{cl}
\multicolumn{2}{l}{ \displaystyle \Lambda \left( \xi^{(l, \sigma)} \left( \Phi_s(y) \right) \right)} \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-upperbound}, \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-lowerbound}}{ \in} &\displaystyle \left[ \frac{ a^{(l, \sigma)}_+(y) - s }{8L} - \left( \frac{1}{2} - 8\varepsilon \right) , \frac{ a^{(l, \sigma)}_-(y) - s }{8L} + \left( \frac{1}{2} - 8\varepsilon \right) \right] \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-s-1}}{\subset} &\displaystyle \left[ - \left( \frac{1}{2} - 8\varepsilon \right) , \left( \frac{1}{2} - 8\varepsilon \right) \right]
\end{array}
\]
with
\begin{align*}
\left| \xi^{(l, \sigma)} \left( \Phi_s(y) \right) \right| \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant-s}}{ =} \left| \xi^{(l, \sigma)} \left( y \right) \right| \in (0, 8\varepsilon] \; ,
\end{align*}
we also see that $\xi^{(l, \sigma)} \left( \Phi_s(y) \right) \in V$. Thus $\Phi_s(y) \in B^{(l, \sigma)}$.
On the other hand, if
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-s-2a}
s \in \left[- t - \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L , a^{(l, \sigma)}_-(y) \right)
\end{equation}
(note that this is nonempty), then
\[
\def\arraystretch{2.2}
\begin{array}{cl}
\multicolumn{2}{l} { \displaystyle \Lambda \left( \xi^{(l, \sigma)} \left( \Phi_s(y) \right) \right)} \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-upperbound}}{ =} &\displaystyle \frac{ a^{(l, \sigma)}_-(y) - s }{8L} + \left( \frac{1}{2} - 8\varepsilon \right) \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-s-2a}}{ \in} &\displaystyle \left( \left( \frac{1}{2} - 8\varepsilon \right) , \frac{ a^{(l, \sigma)}_-(y) + t + \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L }{8L} + \left( \frac{1}{2} - 8\varepsilon \right) \right] \\
\stackrel{ \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aminus}}{ =} &\displaystyle \left( \left( \frac{1}{2} - 8\varepsilon \right) , \Lambda \left( \xi^{(l, \sigma)} (y) \right) + \frac{ t }{8L} + \left( \frac{1}{2} - 4\varepsilon \right) \right] \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-yz}}{ =} &\displaystyle \left( \left( \frac{1}{2} - 8\varepsilon \right) , \Lambda \left( \xi^{(l, \sigma)} (z) \right) + \left( \frac{1}{2} - 4\varepsilon \right) \right] \\
\stackrel{ \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-range}}{\subset} & \displaystyle \Bigg(\left( \frac{1}{2} - 8\varepsilon \right) , \frac{1}{2} \Bigg) \; .
\end{array}
\]
Similarly, if
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-s-2b}
s \in \left(a^{(l, \sigma)}_+(y), - t + \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L \right]
\end{equation}
(this is also nonempty), then the same argument as above, yet with \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-upperbound} replaced by \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-Lambda-lowerbound} and \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aminus} replaced by \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-aplus}, shows that
\begin{equation}
\Lambda \left( \xi^{(l, \sigma)} \left( \Phi_s(y) \right) \right) \in \Bigg( - \frac{1}{2} , - \left( \frac{1}{2} - 8\varepsilon \right) \Bigg) \; .
\end{equation}
As a consequence, whenever
\begin{equation*}
s \in \left[- t - \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L , a^{(l, \sigma)}_-(y) \right)
\cup \left(a^{(l, \sigma)}_+(y), - t + \left( \frac{1}{2} - 4\varepsilon \right) \cdot 8L \right] \; ,
\end{equation*}
we have $\xi^{(l, \sigma)} \left( \Phi_s(y) \right) \not\in V$ by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-V} and thus $\Phi_s(y) \not\in B^{(l, \sigma)}$ by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-B}.
Therefore we have verified the conditions in Definition~\ref{definitionofboxes} for $ B^{(l, \sigma)} $.
\end{proof}
\begin{proof}[Proof of Claim~\ref{claim2withintheproofaboutrelationbetweenboxdimandRokdimthatasetisabox}]
Given $y \in K$, $l \in \{0, \ldots, d\}$ and $\sigma \in \{+,-\}$ satisfying
\begin{equation*}
x^{(l, \sigma)}(y) \in \left( \left[-\frac{1}{4}, \frac{1}{4}\right] \times \left[ \frac{1}{\sqrt{d+2}} , 1 \right] \right) / \sim \; ,
\end{equation*}
we set $ t = - 8L \cdot \Lambda( x^{(l, \sigma)}(y) ) \in \left[ - 2L, 2L \right] $ and observe that this choice of $t$ guarantees
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-yD0}
\mathrm{exp}{ \left( \frac{2\pi i t }{8L} \right) } \cdot x^{(l, \sigma)} (y ) \in D_0 \; .
\end{equation}
Set $ z = \Phi_{ -t} (y) $. Since $ f(y) = 1 $ by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-f}, we have, by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x}\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-x-a},
$$ \left| x^{(l, \sigma)} (z ) - \mathrm{exp}{ \left( \frac{2\pi i t }{8L} \right) } \cdot x^{(l, \sigma)} (y ) \right| \le \delta $$
and thus
\begin{align*}
x^{(l, \sigma)} (z ) \in & N_\delta \left( \mathrm{exp}{ \left( \frac{2\pi i t }{8L} \right) } \cdot x^{(l, \sigma)} (y ) \right) \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-yD0}}{\subset} N_\delta \left( D_0 \right) \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-delta}}{\subset} D_1^o .
\end{align*}
We also have $ z \in \Phi_{-t} (K) \stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-K}}{\subset} (K')^o $. Hence $z$ is included in the open set
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-U}
U = (K')^o \cap \left( x^{(l, \sigma)} \right) ^{-1} (D_1^o) \; ,
\end{equation}
which itself is contained in $ \Delta^{(l, \sigma)}_1 \cap K' $ by \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-Delta}. This latter fact has two immediate consequences:
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-UK}
\Phi_{\left[ - \left( \frac{1}{2} - 12\varepsilon \right) 8L , \left( \frac{1}{2} - 12\varepsilon \right) 8L \right]} ( U ) \subset \Phi_{\left[ - \left( \frac{1}{2} - 4\varepsilon \right) 8L , \left( \frac{1}{2} - 4\varepsilon \right) 8L \right]} ( \Delta^{(l, \sigma)}_1 \cap K' )
\end{equation}
and
$$ \xi^{(l, \sigma)} (U) \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-range}}{\subset} \left( ( -4\varepsilon, 4\varepsilon ) \times \left( 0 , 8 \varepsilon \right] \right) / \sim \; , $$
which, in turn, leads to
\begin{equation}
\stepcounter{equation}\tag{\theequation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-U-range}
\def\arraystretch{2.2}
\begin{array}{cl}
\multicolumn{2}{l} {\displaystyle\xi^{(l, \sigma)} \left( \Phi_{\left[ - \left( \frac{1}{2} - 12\varepsilon \right) 8L , \left( \frac{1}{2} - 12\varepsilon \right) 8L \right]} ( U ) \right)} \\
\stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-equivariant}}{ \subset} & \displaystyle \left( \Bigg( ( -4\varepsilon, 4\varepsilon ) + \left[ - \left( \frac{1}{2} - 12\varepsilon \right) , \left( \frac{1}{2} - 12\varepsilon \right) \right] \Bigg) \times \left( 0 , 8 \varepsilon \right] \right) / \sim \\
= & \displaystyle \left( \left( - \left( \frac{1}{2} - 8\varepsilon \right) , \left( \frac{1}{2} - 8\varepsilon \right) \right) \times \left( 0 , 8 \varepsilon \right] \right) / \sim \\
\stackrel{\ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-V}}{ \subset} & V \; .
\end{array}
\end{equation}
By the definition of $B^{(l, \sigma)}$ in \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-B}, we infer from \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-UK} and \eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-U-range} that
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-UB}
\Phi_{\left[ - \left( \frac{1}{2} - 12\varepsilon \right) 8L , \left( \frac{1}{2} - 12\varepsilon \right) 8L \right]} ( U ) \subset B^{(l, \sigma)} \; .
\end{equation}
Observe that by our choice of $\varepsilon = \frac{1}{96}$ in \ref{proof:thmaboutrelationbetweenboxdimensionandRokhlindimension-varepsilon}, we have the identity
\begin{equation}\label{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-varepsilon-intervals}
\left[ - 2L, 2L \right] + \left[ - L, L \right] = \left[ - 3L, 3L \right] = \left[ - \left( \frac{1}{2} - 12\varepsilon \right) 8L , \left( \frac{1}{2} - 12\varepsilon \right) 8L \right] \; .
\end{equation}
Now since $ t \in \left[ - 2L, 2L \right] $ and $ y = \Phi_t (z) \in \Phi_t (U) $, we have
$$ \Phi_{\left[ - L, L \right]} (y) \subset \Phi_{\left[ t - L, t + L \right]} (U) \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-varepsilon-intervals}}{\subset} \Phi_{\left[ - \left( \frac{1}{2} - 12\varepsilon \right) 8L , \left( \frac{1}{2} - 12\varepsilon \right) 8L \right]} ( U ) \stackrel{\eqref{eq:thmaboutrelationbetweenboxdimensionandRokhlindimension-UB}}{\subset} B^{(l, \sigma)} $$
and thus $ \Phi_{\left[ - L, L \right]} (y) \subset \left( B^{(l, \sigma)} \right)^o $.
\end{proof}
\begin{rmk}
In the proof of the second inequality, we have in fact proved something stronger than a bound on the tube dimension. Namely, for an arbitrary compact subset of $ Y $ and an arbitrarily large required Lebesgue number along flows, we can find a cover consisting of $ 2 (\mathrm{dim}_\mathrm{Rok}(\alpha) + 1 ) $ boxes of the same length, which can be as short as $ \frac{28}{9} $ times the required Lebesgue number. In analogy with the asymptotic Assouad-Nagata dimension, also known as the asymptotic dimension of linear type or the asymptotic dimension with the Higson property (\cite{DranishnikovSmith2007asymptotic, Higson1999Counterexamples}), we may define the \emph{tube dimension of linear type} of the flow space $(Y, \mathbb{R}, \Phi)$ as the infimum of the natural numbers $d$ for which there is a linear function $\lambda \colon [0,\infty) \to [0,\infty)$ such that for any $L > 0$ and any compact set $K \subset Y$, we can find a finite collection $\mathcal{U}$ of open subsets of $Y$ satisfying the conditions in Definition~\ref{def:tube-dimension} together with the extra requirement that the lengths of the boxes $B_U$ be less than $\lambda(L)$ for all $U \in \mathcal{U}$. Combining the first inequality of the theorem with the above stronger statement extracted from the second inequality, we see that the tube dimension of linear type is controlled by $2 \mathrm{dim}_\mathrm{tube}(\Phi) + 1 $.
\end{rmk}
\section{Some consequences}
\begin{cor}\label{cor:top-flow-Rokhlin-estimate}
Let $Y$ be a locally compact and metrizable space and $\Phi$ a flow on $Y$. Suppose that $Y$ has finite covering dimension and $\Phi$ is free. Let $\alpha$ be the flow on ${C}_{0}(Y)$ induced by $\Phi$. Then
\[
\dimrokone(\alpha)\leq 5 \cdot\mathrm{dim}^{\!+1}(Y) \; .
\]
\end{cor}
\begin{proof}
This is a direct consequence of Theorem~\ref{thmaboutrelationbetweenboxdimensionandRokhlindimension} and Corollary~\ref{cor:estimate-tube-dim}.
\end{proof}
\begin{cor}\label{cor:top-flow-dimnuc-estimate}
Let $Y$ be a locally compact and metrizable space and $\Phi$ a flow on $Y$. Suppose that $Y$ has finite covering dimension and $\Phi$ is free. Then the nuclear dimension of $ {C}_{0}(Y) \rtimes {\mathbb{R}^{}} $ is finite, and in fact
\[
\dimnucone(C_0(Y)\rtimes\mathbb{R})\leq 10 \cdot \left( \mathrm{dim}^{\!+1}(Y) \right) ^2 \; .
\]
\end{cor}
\begin{proof}
This is a direct consequence of Corollary~\ref{cor:top-flow-Rokhlin-estimate} and Theorem~\ref{Thm:dimnuc-bound}.
\end{proof}
\begin{cor}\label{cor:classification}
Let $Y$ be a locally compact and metrizable space and $\Phi$ a flow on $Y$. Suppose that $Y$ has finite covering dimension and $\Phi$ is free and minimal. Then the crossed product $ {C}_{0}(Y) \rtimes {\mathbb{R}} $ is classifiable in the sense of Elliott, provided that it contains a nonzero projection.
When $Y$ is a compact manifold and $\Phi$ is smooth and uniquely ergodic, the classifying invariant consists of the topological $K$-groups $(K^{0}(Y),K^{1}(Y))$, with the order given by the Ruelle-Sullivan map $K^{1}(Y) \to \mathbb{R}$ induced by the unique invariant probability measure.
\end{cor}
\begin{proof}
A free and minimal flow yields a simple crossed product $C^{*}$-algebra by \cite[Corollary~3.3]{GooRos:EH}. $C_{0}(Y) \rtimes \mathbb{R}$ satisfies the Universal Coefficient Theorem of Rosenberg and Schochet by \cite{connes}; see also \cite[Theorem~19.3.6]{Bla:k-theory}. By Theorem~\ref{thm:stability}, ${C}_{0}(Y) \rtimes {\mathbb{R}} $ is stable. By the main result of \cite{TWW}, every trace on $C_{0}(Y) \rtimes \mathbb{R}$ is quasidiagonal, so if the crossed product is the stabilization of a unital $C^{*}$-algebra, the main result of \cite{EGLN:arXiv} applies and yields classification. The remaining statements follow from Connes' Thom isomorphism and Corollary~2 of \cite{connes} (and its proof); see also \cite{KelPut} for a more detailed account of the Ruelle-Sullivan map associated to an invariant measure.
\end{proof}
\begin{Rmks}
In the preceding corollary we need the existence of a projection since the classification program is not yet developed well enough for the stably projectionless case.
At least in the smooth situation, non-vanishing of the first cohomology is a necessary requirement:
For a free and minimal smooth flow on a compact manifold, it follows from \cite[Corollary~2]{connes} that the crossed product will be stably projectionless whenever the first cohomology is trivial. But even when the latter is nonzero, it is still not clear whether there is a projection in the crossed product.
In the uniquely ergodic case one has such a nonzero projection provided the Ruelle-Sullivan current associated to the unique invariant measure yields a nontrivial element of $H^{1}(Y;\mathbb{R})$, since then there is an element of $K_0({C}_{0}(Y) \rtimes {\mathbb{R}})$ which has strictly positive trace. One can employ the comparison and $\mathcal{Z}$-stability results of \cite{Rob:projectionless} and \cite{Tik:projectionless} to conclude that the crossed product itself (and not just a matrix algebra over its unitization) contains a nontrivial projection.
Let us also give a condition that explicitly produces a nontrivial projection in the crossed product, as follows:
\end{Rmks}
\begin{Rmk}
We call a subset $X \subset Y$ \emph{transversal} if for any $y \in X$, there is a box $B$ such that $y \in B^o$ and $X \cap B$ is equal to the central slice of $B$. (For example, this is the case if $Y$ is a smooth manifold, $\Phi$ is generated by a vector field $V$ over $Y$, and $X$ is a codimension-$1$ submanifold of $Y$ such that for any $y \in X$, the vector $V_y \in T_y Y$ lies outside of the subspace $T_y X$.)
We claim that any compact transversal subset $X \subset Y$ gives rise to a nontrivial projection in ${C}_{0}(Y) \rtimes {\mathbb{R}}$. By the compactness of $X$, there is $r>0$ such that for any $y \in X$ and any $0\neq t \in [-4r, 4r]$, we have $\Phi_t(y) \not\in X$. From the definition of transversality we see that $X \subset \left( \Phi_{(-r,r)}(X) \right)^o$, which enables us to find a nonnegative continuous function $f \in C_0(Y)$ supported within $\Phi_{(-r,r)}(X)$ such that $f(X) = \{1\}$. These assumptions on $f$ allow us to define a cut-off function $g \in C_0(Y)_+$ by
\[
g(y) = \begin{cases}
\left( \frac{ f(y) } { \int_{-2r}^{2r} f(\Phi_t(y)) \, dt } \right)^{\frac{1}{2}} &\mid y \in \Phi_{[-r,r]}(X) \\
0 &\mid y \not\in \Phi_{[-r,r]}(X) .
\end{cases}
\]
Observe that $g$ is also supported within $\Phi_{(-r,r)}(X)$. For any $y \in Y$, our choice of $r$ implies that the points $y$ and $\Phi_{2r}(y)$ cannot both be in $\Phi_{(-r,r)}(X)$; thus $ {g} \cdot \alpha_{2r}(g) = {g} \cdot \alpha_{-2r}(g) = 0$. This allows us to define an element $p \in C_c(\mathbb{R}, C_0(Y) ) \subset C_0(Y) \rtimes \mathbb{R}$ by
\[
p(t) = \begin{cases}
{g} \cdot \alpha_t(g) &\mid t \in [-2r, 2r] \\
0 &\mid t \not\in [-2r, 2r] .
\end{cases}
\]
A simple computation shows $p^* = p$ and $p \ast p = p$, i.e., $p$ is a (nontrivial) projection in $C_0(Y) \rtimes \mathbb{R}$. Therefore we have proved our claim.
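For the reader's convenience, here is a sketch of this computation. It is only a sketch, assuming the conventions $\alpha_s(h) = h \circ \Phi_{-s}$, $(p_1 \ast p_2)(t) = \int_{\mathbb{R}} p_1(s) \, \alpha_s \left( p_2(t-s) \right) ds$ and $p^*(t) = \alpha_t \left( p(-t) \right)^*$ on $C_c(\mathbb{R}, C_0(Y))$; other sign conventions only change the bookkeeping. Since $g$ is real-valued and $C_0(Y)$ is commutative, we have
\[
p^*(t) = \alpha_t \left( g \cdot \alpha_{-t}(g) \right) = \alpha_t(g) \cdot g = p(t)
\]
and
\[
(p \ast p)(t) = \int_{-2r}^{2r} g \cdot \alpha_s(g) \cdot \alpha_s \left( g \cdot \alpha_{t-s}(g) \right) ds = g \cdot \alpha_t(g) \cdot \int_{-2r}^{2r} \alpha_s(g)^2 \, ds \; ,
\]
where on the support of $g \cdot \alpha_t(g)$ the constraint $|t-s| \leq 2r$ coming from the definition of $p$ is automatic. At any point $y$ where $g \cdot \alpha_t(g)$ does not vanish, the separation condition on $r$ guarantees that only the crossing of $X$ closest to $y$ contributes to the integral, and the normalization built into $g$ then yields $\int_{-2r}^{2r} g \left( \Phi_{-s}(y) \right)^2 ds = 1$, whence $p \ast p = p$.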
The existence of a nonzero projection can be seen more easily if the flow on $Y$ factors onto the rotation system on a circle (this happens for example if $Y$ is compact and the flow has a nontrivial maximal equicontinuous factor), because then $C(\mathbb{T}) \rtimes \mathbb{R} \cong C(\mathbb{T}) \otimes \mathcal{K}$ embeds in $C(Y) \rtimes \mathbb{R}$, so the latter crossed product contains a nonzero projection. Of course, this situation is still a special case of the above, as the preimage of any point on the circle is a compact transversal set of $Y$.
\end{Rmk}
\begin{eg} \label{ex:tilings}
An interesting construction of free flows comes from the theory of aperiodic tilings, the study of non-periodic ways to partition the Euclidean space into bounded pieces, which has provided mathematical models for aperiodic media in solid state physics. For an introduction to the subject, we refer the reader to \cite{Sadun2008Topology}. Below we are going to describe, in the special case of $1$-dimensional tilings, a pivotal construction in the theory called the \emph{continuous hull}. For this, we follow a $C^*$-algebraic approach found in \cite[Section 2.3]{Kellendonk13}. By a \emph{tiling of the real line} we mean a partition of the real line into intervals with uniform upper and lower bounds on their lengths. Equivalently, we may also define a tiling of the real line by considering a discrete subset $\Lambda$ in $\mathbb{R}$ \textemdash~ intended to be the midpoints of the intervals \textemdash~ satisfying that there are $r, R > 0$ such that for each point $x \in \mathbb{R}$, there is at most one point in $\Lambda \cap B_r(x)$ and at least one point in $\Lambda \cap B_R(x)$. (One may as well take the boundary points of the intervals. The resulting discrete set will be mutually locally derivable from the one constructed from the midpoints, which means that the two discrete sets amount to essentially the same thing as far as constructions in the theory of aperiodic tilings are concerned; cf.\ \cite[Section 1.3]{Sadun2008Topology}.) A tiling $\Lambda$ is said to be a \emph{perfect tiling} if all of the following conditions hold:
\begin{enumerate}
\item $\Lambda$ is \emph{aperiodic}: there is no $p > 0$ such that $\Lambda + p = \Lambda$;
\item $\Lambda$ has \emph{finite local complexity}: for any $r > 0$, the set
\[
\left\{ (\Lambda - x) \cap B_r(0) \ \big| \ x \in \Lambda \, \right\} \;,
\]
called the collection of \emph{$r$-patches}, is finite;
\item $\Lambda$ is \emph{repetitive}: for any $r > 0$, there is $R > 0$, such that for any $x \in \mathbb{R}$, there is $y \in \mathbb{R}$ with $|x - y| \leq R$ and $\Lambda \cap B_r(0) = (\Lambda - y) \cap B_r(0) $, that is, each $r$-patch repeats inside $\Lambda$ with bounded gaps.
\end{enumerate}
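A standard example satisfying all three conditions is the Fibonacci tiling, obtained by iterating the substitution $a \mapsto ab$, $b \mapsto a$ and realizing the letters $a$ and $b$ as intervals of lengths $\varphi = \frac{1+\sqrt{5}}{2}$ and $1$, respectively (cf.\ \cite{Sadun2008Topology}). The following minimal sketch \textemdash~ an illustration only, not part of the construction above \textemdash~ generates the midpoints of a large patch and exhibits finite local complexity at the level of $2$-tile words:
\begin{verbatim}
# Illustration only: a patch of the Fibonacci tiling via the
# substitution a -> ab, b -> a, with tile lengths phi and 1.
PHI = (1 + 5 ** 0.5) / 2

def substitute(word, n):
    for _ in range(n):
        word = "".join("ab" if c == "a" else "a" for c in word)
    return word

word = substitute("a", 12)
midpoints, x = [], 0.0
for c in word:
    length = PHI if c == "a" else 1.0
    midpoints.append(x + length / 2)
    x += length

# finite local complexity: only the 2-tile words 'aa', 'ab'
# and 'ba' ever occur ('bb' never does)
print(sorted({word[i:i + 2] for i in range(len(word) - 1)}))
\end{verbatim}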
Given a perfect tiling $\Lambda$ of the real line, we call a bounded continuous complex function $f$ on $\mathbb{R}$ \emph{pattern equivariant} for $\Lambda$ if for any $\varepsilon > 0$, there is $r > 0$ such that for any $x , y \in \mathbb{R}$ satisfying
\[
(\Lambda - x) \cap B_r(0) = (\Lambda - y) \cap B_r(0) \; ,
\]
we have $|f(x) - f(y)| < \varepsilon$. It follows from the repetitivity of the tiling that such a function must be uniformly continuous. The set of all pattern equivariant functions for $\Lambda$ forms a unital $C^*$-subalgebra of the $C^*$-algebra $C_\mathrm{u}(\mathbb{R})$ of all bounded uniformly continuous complex functions on $\mathbb{R}$. The Gelfand spectrum of this subalgebra is called the \emph{continuous hull} associated to the tiling $\Lambda$ and will be denoted by $\Omega_\Lambda$. It is easy to see that if $f \in C_\mathrm{u}(\mathbb{R})$ is {pattern equivariant}, then so is every translation of $f$. Hence $C(\Omega_\Lambda)$ is an invariant $C^*$-subalgebra of $C_\mathrm{u}(\mathbb{R})$ under the (continuous) action of $\mathbb{R}$ by translation, and this induces a flow on $\Omega_\Lambda$. The associated crossed product $C(\Omega_\Lambda) \rtimes \mathbb{R}$, called the \emph{noncommutative Brillouin zone}, plays an important role in the study of analytic and geometric properties of the tiling; cf.\ \cite{Bellissard2006}.
It is well-known that for a perfect tiling $\Lambda$ of the real line, the flow $\mathbb{R} \times \Omega_\Lambda \to \Omega_\Lambda$ is free and minimal (cf.\ \cite[Section 2]{KellendonkPutnam2000Tilings}), while $\Omega_\Lambda$ is the inverse limit of $1$-dimensional spaces called \emph{G\"{a}hler approximants} (cf.\ \cite[Section 2.4]{Sadun2008Topology}) and thus has covering dimension equal to $1$. Applying Corollaries~\ref{cor:estimate-tube-dim}, \ref{cor:top-flow-Rokhlin-estimate} and \ref{cor:top-flow-dimnuc-estimate}, we see that this flow has finite tube dimension, the induced $C^*$-flow $\mathbb{R} \to \mathrm{Aut} (C(\Omega_\Lambda) )$ has finite Rokhlin dimension, and the associated crossed product $C(\Omega_\Lambda) \rtimes \mathbb{R}$ has finite nuclear dimension.
\end{eg}
\begin{Rmk}
Let us also give a direct argument for why a crossed product $C(\Omega_\Lambda) \rtimes \mathbb{R}$ from Example \ref{ex:tilings} always contains a nontrivial projection. Indeed, fix $r > 0$ such that there is at most one point in $\Lambda \cap B_{3r}(x)$ for each point $x \in \mathbb{R}$. Fix a continuous function $f \in C_0(\mathbb{R})$ supported in $B_{r}(0)$ such that $\int_{\mathbb{R}} |f(x)|^2 dx = 1$. Define $g \in C_\mathrm{u}(\mathbb{R})$ by
\[
g (x) = \sum_{y \in \Lambda } f(x - y) \; ,
\]
for every $x \in \mathbb{R}$. For each $x$, observe that at most one summand is nonzero because of the support condition above. It is also clear from the definition that $g$ is pattern equivariant. Moreover, note that for every $x \in \mathbb{R}$, one has
\[
\left( \overline{g} \cdot \alpha_{2r}(g) \right) (x) = \sum_{y \in \Lambda } \sum_{y' \in \Lambda } \overline{ f(x - y) } f(x-2r - y') = 0
\]
because our assumptions prevent $x$ and $x - 2r$ from being within distance $r$ to $\Lambda$ at the same time. Thus we have $\overline{g} \cdot \alpha_{2r}(g) = \overline{g} \cdot \alpha_{-2r}(g) = 0$. This allows us to define an element $p \in C_c(\mathbb{R}, C(\Omega_\Lambda) ) \subset C(\Omega_\Lambda) \rtimes \mathbb{R}$ by
\[
p(t) = \begin{cases}
\overline{g} \cdot \alpha_t(g) &\mid t \in [-2r, 2r] \\
0 &\mid t \not\in [-2r, 2r].
\end{cases}
\]
A simple computation shows $p^* = p$ and $p \ast p = p$, i.e., $p$ is a (nontrivial) projection in $C(\Omega_\Lambda) \rtimes \mathbb{R}$.
It follows that $C(\Omega_\Lambda) \rtimes \mathbb{R}$ is classifiable in the sense of Elliott. When the topological flow $\mathbb{R} \curvearrowright \Omega_\Lambda$ is uniquely ergodic, or, equivalently, when $\Lambda$ has uniform cluster frequencies in the sense of \cite{LeeMoodySolomyak2002Pure} (see also \cite[Section 3.3]{FrankSadun2014Fusion}), the classifying invariant consists of the topological $K$-groups $K^{0}(\Omega_\Lambda)$ and $K^{1}(\Omega_\Lambda)$, which are direct limits of the often easily computable topological $K$-groups of the G\"{a}hler approximants, together with the order of the latter group given by the Ruelle-Sullivan map $K^{1}(\Omega_\Lambda) \to \mathbb{R}$ induced by the unique invariant probability measure.
\qed
\end{Rmk}
\bibliographystyle{alpha}
The bulk of the baryonic mass in galaxy clusters exists in the form of a low-density, diffuse ionized gas filling the space between cluster galaxies. This hot intra-cluster medium (ICM, $T \sim 2-10$ keV) emits in the X-rays through thermal Bremsstrahlung emission, and is also observable in the millimeter/sub-millimeter wavelengths through the Sunyaev-Zel'dovich (SZ) effect, which is a distortion in the intensity of the Cosmic Microwave Background (CMB) radiation caused by the same thermal component (Sunyaev \& Zel'dovich 1980). Together, these two observables are central to the use of galaxy clusters as cosmological probes.
The ICM is also host to a large population of ultra-relativistic particles (cosmic rays) and magnetic fields, seen primarily through radio observations. The most spectacular evidence for this non-thermal population comes from observations of giant radio haloes, which are diffuse sources of radio synchrotron emission extending over $\sim 1$ Mpc scales. The haloes are not associated with any particular cluster galaxy, and are morphologically distinct from radio mini-haloes (residing in cluster cool cores), radio relics (formed at the edge of a merger shock) and radio lobes associated with active galactic nuclei. The similarity of their morphology with the ICM suggests a correspondence between their powering mechanism and the total cluster mass (e.g. Liang et al. 2000). They are relatively rare and are found mostly in clusters showing evidence of ongoing mergers. As such, they can prove to be essential in understanding cluster merger dynamics and associated heating processes in the ICM (see e.g. review by Ferrari et al. 2008).
Despite their importance, the powering mechanism of these giant radio haloes remains uncertain. There are two models for particle acceleration in a radio halo volume: the hadronic model which uses collisions between cosmic-ray protons and thermal protons for generating relativistic electrons (Dennison 1980), and the turbulence models where the electrons are re-accelerated through MHD turbulence in the ICM caused by cluster mergers (Brunetti et al. 2001, Petrosian 2001). The distinction between these two models is partly based upon the observed scaling between radio and X-ray power (the latter indicating the total cluster mass), and the fact that X-ray selection seems to indicate two distinct populations of clusters: the radio halo and ``radio quiet" ones (e.g Brunetti et al. 2007). However, recent discoveries of radio haloes in clusters with very low X-ray luminosities (Giovannini et al. 2011), and the lack of radio haloes in some mergers (e.g. Russell et al. 2011) show that the X-ray selection may not be as clean as expected. These new observations and the underlying large scatter in the $L_{X}-P_{\mathrm{radio}}$ correlation suggest that a new observational window on the selection and mass estimation of clusters harboring radio haloes can bring some much needed clarity.
One further reason for expecting a robust correlation between radio power and SZ is the timescale argument: the boost in the X-ray luminosity during mergers happens in a relatively short timescale, compared to the gas thermalization in a modified potential well producing a more gradual and moderate increase in the SZ signal (Poole et al. 2007, Wik et al. 2008). This should correspond better with the radio halo time scale ($\sim 1$ Gyr), derived from the spatial extent of the haloes. The integrated SZ signal is also a more robust indicator of cluster mass than the X-ray luminosity, irrespective of cluster dynamical state (e.g. Motl et al. 2005, Nagai 2006). Thus SZ-selection might be able to find radio haloes in late mergers and other massive systems which are left out in X-ray selection.
\medskip
This letter presents the first radio-SZ correlation for clusters with radio haloes. The radio data is a compilation of published results, and the SZ measurements are taken from the {\it Planck~} all-sky cluster catalog (Planck collaboration 2011). All results are derived using the $\Lambda$CDM concordance cosmology with $\Omega_M = 0.26$, $\Omega_{\Lambda} = 0.74$ and $H_0 = 71$ km s$^{-1}$ Mpc$^{-1}$. The quantity $Y_{\mathrm{SZ}}$ is used throughout to denote the {\it intrinsic} Compton $Y$-parameter for a cluster, $Y d_A^2$, where $d_A$ is its angular diameter distance.
\vspace*{-2mm}
\section{Radio \& SZ data sets}
We do not attempt to define a new comprehensive sample for this work; rather, we use a set of available cluster catalogs with radio halo detections and non-detections to probe the robustness of the radio-SZ scaling.
Published radio error estimates often ignore systematic effects like flux loss in interferometric imaging and the contribution from unresolved point sources, which in turn leads to an over-estimation of the intrinsic scatter.
Since the present work is mainly concerned with the mean slope of the radio-SZ scaling and not its dispersion, error underestimation in the literature will not affect the results as long as the measurements are unbiased.
The radio catalogs can be divided into two groups: those with and without a listing of non-detections. A comprehensive sample in the former category is by Giovannini et al. (2009, hereafter G09), presenting results at 1.4 GHz for $z<0.4$ clusters. Potentially problematic are its mixing of radio haloes and mini-haloes, and the fact that contributions from radio relics are not separated. More critically, the sizes of the radio haloes are approximated by the observed largest linear sizes (LLS), which is not a good approximation for radio halo diameter. To address the latter issue we use a smaller subsample by Cassano et al. (2007, hereafter C07), which provides a better measurement of radio halo sizes by averaging their minimum and maximum extensions.
The most systematic study of radio halo non-detections in an X-ray selected sample is by Venturi et al. (2008), using GMRT observations at 610 MHz. However, this sample contains too few clusters which have {\it Planck~} SZ measurements to obtain any robust correlation. We therefore use the compilation by Brunetti et al. (2009, hereafter B09), which lists GMRT results scaled to 1.4 GHz with other unambiguous halo detections. The non-detection upper limits were obtained by simulating fake radio haloes in the GMRT data and scaling to 1.4 GHz by using $\alpha=1.3$, a typical spectral index for radio haloes. Our final sample is from Rudnick \& Lemmerman (2009, hereafter R09) who re-analyzed WENSS survey data at 327 MHz for an X-ray selected sample. The shallowness of WENSS data makes R09 ineffective in testing bi-modality, as the $3\sigma$ upper limits are not sufficiently below the detection level. Its use is mainly limited to testing possible changes in the scaling law at lower frequencies.
The Sunyaev-Zel'dovich effect measurements are taken from the Planck ESZ catalog (Planck collaboration 2011). This all-sky cluster catalog provides a list of 189 objects in the highest mass range out to a redshift $z\sim 0.6$, selected at $S/N > 6$ from the first year survey data. Out of these, 22 are either new cluster candidates or have no X-ray data. The remaining 167 clusters, spanning a redshift range $0.01 <z <0.55$, are cross-correlated against the radio catalogs. All radio halo clusters therefore have an $R_{500}$ estimate in the {\it Planck~} catalog obtained from the $L_X-M_{500}$ relation. This is used to model the pressure profile for each cluster when scaling their integrated SZ signal, $Y_{\mathrm{SZ}}$, between different radii.
Proper regression analysis between the radio and SZ observables is fundamental to this work. We must allow for measurement errors and intrinsic scatter in both observables, which makes the regression analysis non-trivial (see Kelly 2007). We use the publicly available \textsc{idl} code by Kelly to perform the regression analysis using a Bayesian approach. An important advantage of this method is the provision for including non-detections.
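As a purely illustrative stand-in for that machinery \textemdash~ this is {\it not} the Kelly sampler used for the quoted results, since it ignores the measurement errors and the censored points \textemdash~ a least-squares fit in log space with bootstrap uncertainties looks as follows:
\begin{verbatim}
# Illustration only (not the Kelly 2007 Bayesian method):
# fit log P = A + B log Y on a mock sample; bootstrap the errors.
import numpy as np

rng = np.random.default_rng(1)
# mock sample: slope 2, normalization 32, 0.4 dex intrinsic scatter
logY = rng.uniform(-5.0, -3.5, 24)
logP = 32.0 + 2.0 * logY + rng.normal(0.0, 0.4, 24)

B, A = np.polyfit(logY, logP, 1)
idx = rng.integers(0, logY.size, (2000, logY.size))
boot = np.array([np.polyfit(logY[i], logP[i], 1) for i in idx])
print(f"B = {B:.2f} +/- {boot[:, 0].std():.2f}")
print(f"A = {A:.1f} +/- {boot[:, 1].std():.1f}")
\end{verbatim}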
\begin{table}
\caption{Regression coefficients for the scaling relation $\log(P_{\nu}) = A + B ~\log(Y_{\mathrm{SZ}})$.
The term {\it global} implies correlation with the total SZ signal, as opposed to that scaled inside the halo radius. }
\label{regtable}
\centering
\begin{tabular}{l l c c}
\hline
Sample & sub-category & ~~B (slope)~~ & ~~A (norm.)~~ \\
\hline\hline
\small
G09 & global & $1.84\pm 0.38$ & $31.3\pm1.4$ \\
& inside LLS & $0.95\pm 0.14$ & $28.8\pm 0.5$ \\
C07 & global & $1.88\pm 0.24$ & $31.4\pm0.8$ \\
& inside $R_{H}$ & $1.17\pm 0.18$ & $29.7\pm0.8$ \\
B09 & global, haloes only & $2.03\pm 0.28$ & $32.1\pm 1.0$ \\
& $+$ non-detections & $2.41\pm 0.44$ & $33.4\pm 1.6$ \\
R09 & global, haloes only & $0.81\pm 0.36$ & $28.1\pm 1.4$ \\
& $+$ non-detections & $1.38\pm 0.43$ & $29.8\pm 1.8$ \\
\hline
\end{tabular}
\begin{minipage}[b]{0.92\columnwidth}
\centering
\vspace{1mm}
Samples: G09=Giovannini et al. 2009; C07=Cassano et al. 2007; B09=Brunetti et al. 2009;
R09=Rudnick \& Lemmerman 2009.
\end{minipage}
\end{table}
\begin{figure*}
\includegraphics[width=0.92\columnwidth]{giovannini_unscaled_z.eps}
\hspace{4mm}
\includegraphics[width=0.92\columnwidth]{giovLLS_cas_combi_z.eps}
\caption{Radio-SZ correlation for {\it Planck} detected clusters with radio haloes. {\it Left --} Result from the G09 sample, correlating radio halo power against the total $Y_{\mathrm{SZ}}$ signal inside $5R_{500}$. Filled symbols correspond to clusters at $z>0.2$, and open symbols are for lower redshifts. The shaded area marks the $2\sigma$ or 95\% confidence region (some name labels are omitted for clarity). {\it Right --} The same G09 clusters after scaling the total SZ signal to inside the haloes' largest linear dimensions, resulting in a much flatter slope with reduced scatter. ({\it Inset}) Result from the C07 sample, with scaled SZ signal inside their quoted radio halo radius, $R_H$. Mean slope is $1.17\pm 0.18$, which is used for comparison with theoretical models.}
\label{GCsamp}
\end{figure*}
\vspace*{-2mm}
\section{Results}
\subsection{Radio-SZ scaling}
The radio-SZ scaling relation is obtained by performing linear regression in log-space: $\log(P_{\nu}) = A + B\log(Y_{\mathrm{SZ}})$.
The normalization $A$ and slope $B$ are obtained from the Markov chains, as well as the intrinsic scatter $\sigma_{\log P|\log Y}$. A summary of the results is given in Table 1. The first correlation example is from the G09 sample, comparing the radio power with the {\it global} SZ signal (Fig. \ref{GCsamp} {\it left}). Out of 32 objects in this sample 24 have {\it Planck} counterparts. The mean slope is $1.84\pm 0.38$. There is a lot of scatter in this correlation, with mean scatter 0.45 dex, i.e. roughly a factor $\sim 2.8$. Much of this scatter is driven by the low-redshift objects which are under-luminous (e.g. A401 and A754). This potentially indicates a systematic bias in their total flux and size measurements with interferometers. Only A1351 stands out as overly radio luminous for its mass, although later revisions of its radio power (Giacintucci et al. 2009) move it closer to the mean value.
The {\it Planck~} catalog provides the integrated $Y$ parameter within radius $5 R_{500}$, obtained from a matched filtering algorithm assuming a universal gas pressure profile (see Planck collaboration 2011). At this radius, $Y_{5R_{500}}^{\mathrm{cyl}} \approx Y_{5R_{500}}^{\mathrm{sph}}$. This is nearly 3 times the cluster virial radius, and much larger than the extent of the radio emitting regions. Therefore, a tighter correlation can be expected if this {\it global} SZ signal is scaled down to that inside the radio halo volume. We do this conversion by assuming the universal pressure profile of Arnaud et al. (2010), as also used by the {\it Planck} team. In particular, the best fit profile for mergers/disturbed clusters from the appendix of Arnaud et al. (2010) is used, but the difference is negligible if the mean profile is used instead.
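Schematically, the conversion factor is a ratio of volume integrals of the pressure profile. The sketch below assumes the {\it mean} GNFW parameters of Arnaud et al. (2010) \textemdash~ which, as noted above, give nearly the same answer as the disturbed-cluster fit \textemdash~ and an illustrative halo radius:
\begin{verbatim}
# Sketch: scale the spherical Y(<5 R500) down to Y(<R_H) by
# integrating a GNFW pressure profile (Arnaud et al. 2010, mean fit).
from scipy.integrate import quad

P0, c500, gam, alp, bet = 8.403, 1.177, 0.3081, 1.0510, 5.4905

def pressure(x):  # dimensionless P(r / R500)
    cx = c500 * x
    return P0 / (cx**gam * (1.0 + cx**alp) ** ((bet - gam) / alp))

def Y(R):  # spherical Y within R (in units of R500), up to a constant
    return quad(lambda x: pressure(x) * x * x, 0.0, R)[0]

R_H = 0.6  # an illustrative halo radius of 0.6 R500
print(Y(R_H) / Y(5.0))  # fraction of the total signal inside the halo
\end{verbatim}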
This scaling of the SZ signal to within the LLS changes the results significantly. The correlation between radio and SZ powers inside the halo volume becomes consistent with a linear relation, with mean slope $0.95\pm 0.14$ (Fig.\ref{GCsamp} {\it right}) and a reduced mean intrinsic scatter in radio (0.35 dex).
The largest linear sizes are in general not a good approximation for radio halo diameters, so the above analysis is repeated with the C07 sample using their revised measurement of halo radius, $R_H$. The slope for the global correlation with this sample is $1.88\pm 0.24$, whereas after scaling the SZ signal inside $R_H$ it becomes $1.17\pm 0.18$, with mean intrinsic scatter 0.28 dex (Fig.\ref{GCsamp} {\it inset}). Although this is statistically fully consistent with the scaled result of the G09 sample inside the LLS, we use this slightly super-linear correlation when making comparisons with theoretical models, due to the better definition of halo radius. We emphasize that from the current analysis using radio data from the literature, a linear correspondence between radio and SZ power is a fully valid result.
The R09 sample at 327 MHz indicates a flattening of the correlation slope at lower frequencies: the best fit value is $0.81\pm 0.36$ when considering the halo sample, with a scatter of only 0.21 dex. The large flux uncertainties (and correspondingly low intrinsic scatter) reflect the shallowness of the WENSS data, which is more than an order of magnitude less sensitive compared to typical VLA measurements scaled to its frequency. However, the method used by R09 to detect haloes (and place upper limits), based on simulating sources in control regions, safeguards against the flux underestimation bias which would otherwise occur in a visual inspection.
There can be a residual bias from fluxes associated with small scale structures that are not recovered.
If non-detection upper limits in R09 are included in the correlation, then the slope becomes $1.38\pm 0.43$, which is consistent at $1\sigma$ with the scaling result at 1.4 GHz.
\begin{figure*}
\includegraphics[width=0.92\columnwidth]{brunetti_twoz_PnuYsz.eps}
\hspace{4mm}
\includegraphics[width=0.92\columnwidth]{Mgas_Pradio_B09samp.eps}
\caption{ Test of bi-modality with non-detections from the B09 sample.
{\it Left --} Correlation of radio power against the total SZ signal: filled symbols are for $z>0.2$ cluster haloes, and open symbols are for lower redshifts. All non-detections are at $z>0.2$ and are extrapolated from 610 MHz data. The only mini-halo (A2390) in the sample is marked by the orange square. The short-dashed line corresponds to the fit for haloes only, and the long-dashed line when non-detections are included. Filled regions mark the 95\% confidence intervals. {\it Right --} Correlation of radio halo power with the gas mass. Symbols and lines have the same meaning as in the left panel.
}
\label{Bsamp}
\end{figure*}
\vspace*{-2mm}
\subsection{Lack of strong bi-modality}
To test whether there are two distinct populations of clusters: those hosting powerful radio haloes and those without, we use the B09 sample. Actual flux measurements at 1.4 GHz are used for the clusters A209 and RXCJ1314 (Giovannini et al. 2009), instead of the extrapolated values from 610 MHz as given by B09. All non-detection upper limits are extrapolation of simulation results at 610 MHz using halo spectral index $\alpha=1.3$.
Regression analysis for the two cases (with and without halo non-detections) yields slopes which are statistically consistent with each other, even though a bi-modal division appears to be emerging (Fig. \ref{Bsamp} {\it left}). Significantly, we do not find high-$Y_{\mathrm{SZ}}$ objects with radio non-detections, as in the case of highly X-ray luminous ``radio quiet'' cool core clusters. But the small number of non-detections makes it difficult to conclude whether the bi-modality is weaker or non-existent. All non-detection upper limits lie below the 95\% confidence interval of the halo-only correlation, suggesting clusters with upper limits are generally radio under-luminous. This can be partly redshift-driven, as the samples are not SZ-complete.
An alternative, although not independent, way to visualize this result is to correlate the radio power directly against cluster gas mass. Rough estimates of $M_{\mathrm{gas}}$ inside $R_{500}$ were obtained by dividing the Planck $Y_{\mathrm{SZ}}$ values by the mean X-ray temperatures taken from the literature. The global $Y_{\mathrm{SZ}}$ measurements of {\it Planck~} are scaled to that inside $R_{500}$ using the universal pressure profile, as this is the radius within which X-ray temperatures are typically obtained. The result is similar to the $P_{1.4}-Y_{\mathrm{SZ}}$ correlation, lacking a strong bi-modal division (Fig.\ref{Bsamp}, right panel). The scatter is increased by roughly 30\% (from 0.5 dex to 0.6 dex), in line with the expectation that $Y_{\mathrm{SZ}}$ is a lower-scatter mass proxy. The four non-detection clusters have generally lower $T_X$ values than the halo detections (median $T_X$ is $7.3$ keV compared to $8.7$ keV). It should be made clear that by taking $T_X$ estimates from the literature we ignore potential errors due to the non-uniform radii used for extracting $T_X$, systematic differences between XMM-Newton and Chandra measurements, etc. However, the mean slope for the $P_{1.4}-M_{\mathrm{gas}}$ scaling relation, $3.2\pm 0.7$ with the full sample, is consistent with the global mass scaling derived from the $Y-M$ relation (see \S\ref{Msection}), indicating that no additional biases are incurred while using this non-uniform selection of X-ray temperatures.
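For concreteness, this conversion is simply $M_{\mathrm{gas}} = \mu_e m_p \left( m_e c^2 / k T_X \right) Y_{\mathrm{SZ}} / \sigma_T$. A minimal sketch, assuming $\mu_e \simeq 1.17$ and that the $Y_{\mathrm{SZ}}$ value has already been scaled to $R_{500}$:
\begin{verbatim}
# Sketch: gas mass from the intrinsic Y parameter and mean X-ray
# temperature, M_gas = mu_e m_p (m_e c^2 / kT) Y / sigma_T.
import astropy.units as u
from astropy.constants import sigma_T, m_e, m_p, c

def gas_mass(Y_Mpc2, kT_keV, mu_e=1.17):
    Y = Y_Mpc2 * u.Mpc ** 2
    kT = kT_keV * u.keV
    return (mu_e * m_p * (m_e * c ** 2 / kT) * Y / sigma_T).to(u.Msun)

print(gas_mass(1e-4, 8.0))  # ~1e14 Msun, typical of a massive cluster
\end{verbatim}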
A tentative argument for a selection bias in X-ray complete samples and the ensuing bi-modality can be given by comparing the relative frequency with which radio haloes and non-detections occur in the {\it Planck~} catalog. In Venturi et al. (2008), GMRT data were obtained for a complete X-ray selected sample, with 6 detections of radio haloes
plus 20 non-detections. The {\it Planck~} catalog contains 5 out of these 6 halo clusters, but only 4 out of 20 non-detection clusters. For the B09 sample this ratio is 16 out of 21 radio halo clusters and 4 out of 20 non-detections (the same non-detections as in the Venturi et al. sample). Since the {\it Planck~} catalog should not have a significant bias towards mergers, this provides indirect evidence for our hypothesis that being SZ-bright (hence massive) is a better indicator for clusters hosting radio haloes, as opposed to being X-ray luminous. Even though the R09 sample is too shallow to directly test bi-modality, it interestingly follows this same trend: {\it Planck~} reports measurements of 12 out of 14 counterparts for haloes and other diffuse emissions, as opposed to only 15 out of 58 counterparts for non-detections.
\vspace*{-1mm}
\section{Theoretical considerations}
\subsection{Mass scaling of the radio halo power}
\label{Msection}
The independent variable, $Y_{\mathrm{SZ}}$, is defined as the integral of the total pressure in a spherical volume, and hence is proportional to the total gas mass:
\begin{equation}
Y_{\mathrm{SZ}} \equiv Y d_A^2 \propto \int n_e T_e~ dV \propto M_{\mathrm{gas}} T_e = f_{\mathrm{gas}} M_{\mathrm{tot}} T_e.
\label{eq:szdef}
\end{equation}
Here $T_e$ is the mean gas temperature within the integration radius, and $f_{\mathrm{gas}}$ is the gas mass fraction.
Assuming hydrostatic equilibrium and isothermality, the temperature scales with the total mass as $T_e \propto M_{\mathrm{tot}}^{2/3} E(z)^{2/3}$ (e.g. Bryan \& Norman 1998), where $E(z)$ is the ratio of the Hubble parameter at redshift $z$ to its present value. Therefore, the scaling between the SZ observable and total mass is $Y_{\mathrm{SZ}} E(z)^{-2/3} \propto f_{\mathrm{gas}} M_{\mathrm{tot}}^{5/3}$. Numerical simulations, analytical models and SZ observations indicate that this mass scaling is extremely robust, with little scatter over a large range of cluster mass, dynamical state or other details of cluster physics (e.g. Motl et al. 2005, Reid \& Spergel 2006, Andersson et al. 2011).
We thus assume this scaling to be valid also {\it inside} a cluster at different radii, provided that the radius is sufficiently large to exclude the complex physics at cluster cores. The large halo sizes measured by C07 ($\bar{R}_H \sim 600$ kpc, typically of the same order as $R_{2500}$) ensure that they encompass a representative cluster volume.
The $E(z)^{-2/3}$ factor for self-similar evolution changes the scaling results only marginally, well within the statistical errors. The gas mass fraction, $f_{\mathrm{gas}}$, has a weak dependence on cluster mass: $f_{\mathrm{gas}} \propto M_{\mathrm{500}}^{~0.14}$ (Bonamente et al. 2008, Sun et al. 2009). Assuming the same mass dependence of $f_{\mathrm{gas}}$ for all radii, we therefore obtain
\begin{equation}
P_{1.4} \propto M_H^{~~2.1\pm 0.3} \propto M_{\mathrm{vir}}^{~~3.4 \pm 0.4}.
\label{eq:mscale}
\end{equation}
In the above, $M_H$ is the total mass inside radio haloes, and $M_{\mathrm{vir}}$ is the cluster virial mass which scales linearly with $M_{\mathrm{tot}}(<5R_{500})$. The scaling index inside haloes is in good agreement with previous X-ray hydrostatic mass estimates (e.g. Cassano et al. 2007). The global scaling with total cluster mass can be a useful parameter for estimating radio halo statistics, particularly in simulations.
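The arithmetic behind Eq.(\ref{eq:mscale}) is compact enough to spell out; the following sketch propagates only the dominant uncertainty of the fitted slope:
\begin{verbatim}
# Sketch of the exponent propagation in Eq. (2): P ~ Y_H^B with
# B = 1.17 +/- 0.18, and Y ~ f_gas M^(5/3) with f_gas ~ M^0.14.
B, dB = 1.17, 0.18
dlogY_dlogM = 5.0 / 3.0 + 0.14            # ~1.81
exp_M, dexp_M = B * dlogY_dlogM, dB * dlogY_dlogM
print(f"P ~ M_H^({exp_M:.1f} +/- {dexp_M:.1f})")  # -> 2.1 +/- 0.3
\end{verbatim}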
The radio halo sizes are known to scale non-linearly with cluster radius, in a break from self-similarity (Kempner \& Sarazin 2001, Cassano et al. 2007).
Indeed, using the X-ray derived $R_{500}$ measurements from the {\it Planck~} catalog, we obtain the empirical relation $R_H \propto R_{500}^{\ 3.1 \pm 0.2}$ with the C07 sample, consistent with the estimate by C07 using $L_X-M_{\mathrm{vir}}$ scaling relation ($R_H \propto R_{\mathrm{vir}}^{2.6\pm 0.5}$). A consequence of this rapid increase in radius is a drop of the mean gas density inside haloes with increasing halo mass. Our observed scaling between the halo radius and scaled SZ signal, $R_H \propto Y_H^{\ 0.31\pm 0.03}$, implies that the mean gas density ($\bar{n}_H$) scales down as roughly $\bar{n}_H \propto T_e^{-0.9}$, or assuming thermal equilibrium inside haloes, as $\bar{n}_H \propto M_H^{\ -0.6}$. This brings the observed non self-similar scaling between $R_H$ and $R_{\mathrm{vir}}$ in conformity with the mass scaling in Eq.(\ref{eq:mscale}). It is worth mentioning at this point that radio halo size measurements with insufficient S/N will tend to show a steeper dependence on luminosity than the true scaling, since only the bright central regions will be picked up.
\vspace*{-2mm}
\subsection{Comparison with radio/X-ray scaling}
There is some confusion in the literature about the exact power in the X-ray/radio scaling: reported values using the luminosity in the soft X-ray band span the range $P_{\nu} \propto L_{X[0.1-2.4]}^{1.6-2}$ (Brunetti et al. 2007, Kushnir et al. 2009). Using the regression method adopted in this work, we find a scaling index in the middle of this range, e.g. from the B09 sample the mean slope for the $\log P_{\nu} - \log L_{X[0.1-2.4]}$ correlation is $1.80\pm 0.21$, with mean intrinsic scatter 0.3 dex. The mass-luminosity relation for the X-ray soft band is well-established observationally. We use the result given by Zhang et al. (2011) for disturbed clusters in the HIFLUGCS sample: $L_{X[0.5-2]} \propto [M_{\mathrm{gas}, R_{500}} E(z)]^{1.16\pm 0.04}$, where the luminosities are core corrected. This combined with the weak mass dependence of $f_{\mathrm{gas}}$ produces a mass scaling of radio power as $P_{1.4} \propto M_{500}^{\ 2.4}$, which is much shallower than the virial mass scaling obtained in Eq.(\ref{eq:mscale}) but roughly consistent with the halo mass power law. This indicates that the global X-ray emission acts as a relatively good proxy for radio halo masses due to its peaked profile, as most of the X-ray flux comes from within a radius that is $\lesssim R_H$.
\vspace*{-2mm}
\subsection{Expectations from theoretical models}
The hadronic model for radio synchrotron emission postulates that electrons at ultra-relativistic energies are produced in the ICM by $p$-$p$ collisions between cosmic ray protons and thermal protons (see review by Ensslin et al. 2011 and references therein). For estimating the scaling relation between radio halo power and cluster mass, we follow the formulation by Kushnir et al. (2009). In this model, the total radio power is the volume integral of the cosmic ray energy density ($\epsilon_{\mathrm{CR}} = X_{\mathrm{CR}} ~n ~k T_e$) and the hadronic interaction rate ($\tau_{\mathrm{pp}}^{-1} \sim n ~\sigma_{\mathrm{pp}}$):
\begin{equation}
P_{\nu} = \int \tau_{\mathrm{pp}}^{-1} ~\epsilon_{\mathrm{CR}} ~dV
\sim X_{\mathrm{CR}} ~n^2 ~k T_e ~\sigma_{\mathrm{pp}} ~f_B ~R_H^3.
\end{equation}
Here $X_{\mathrm{CR}}$ is the ratio between cosmic ray pressure and thermal pressure, $n$ is the gas density, $\sigma_{\mathrm{pp}}$ is the $p$-$p$ collision cross-section, $f_B$ is the volume filling factor for magnetic fields, and $R_H$ is the halo radius. In the second step we have assumed the magnetic field energy density to be much larger than the CMB energy density, $B \gg B_{\mathrm{CMB}} \approx 3.2(1+z)^2 \mu$G.
Considering that $Y_{\mathrm{SZ}}(<R_H) \sim n ~k T_e ~R_H^3$, we thus obtain:
$P_{\nu} / Y_{\mathrm{SZ}} \propto X_{\mathrm{CR}} ~f_B ~n ~\sigma_{\mathrm{pp}}$.
Therefore, if the cosmic ray fraction and mean density do not depend on the halo mass, we recover the observed linear dependence between radio power and the SZ signal inside haloes. The latter assumption, however, is in conflict with our observed scaling of the mean gas density (\S\ref{Msection}), which actually drops with increasing halo mass. Another potential problem is the assumption of strong magnetic fields, $B_H \gg B_{\mathrm{CMB}}$, over the entire halo volume. A likely scenario within the hadronic model would therefore be to assume a clumpier radio emission, where the regions contributing most of the radio power have constant densities and strong magnetic fields.
In the turbulent re-acceleration model a pre-existing population of electrons at lower energies is re-accelerated by merger-induced turbulence (see review by Ferrari et al. 2008 and references therein). The powering mechanism of radio haloes is complex, but for a simple estimate we can follow the formulation by Cassano \& Brunetti (2005) and Cassano et al. (2007). In their model, the energy injection rate from turbulence depends on the mean density and velocity dispersion inside the radio haloes; $\dot{\varepsilon}_t \propto n ~\sigma_H^2$, where $\sigma_H^2 \equiv GM_H/R_H$. The total power of a radio halo is then
\begin{equation}
P_{\nu} \sim \int \dot{\varepsilon}_t ~(\Gamma_{\mathrm{rel}}/\Gamma_{\mathrm{th}})
~dV \propto \dfrac{M_H~ \sigma_H^3}{{\cal F}(z, M_H, B_H)} ,
\label{eq:turb}
\end{equation}
where $\Gamma$ is the turbulence damping rate transferring energy to the particles,
and the function ${\cal F}(z, M_H, B_H)$ is constant in the asymptotic limit of strong magnetic fields. Thus, to the first approximation, since $\sigma_H^2 \propto T_e$ and $Y_H \propto M_H T_e$ inside the halo, Eq.(\ref{eq:turb}) implies $P_{\nu} \propto M_H T_e^{3/2} \propto Y_H T_e^{1/2}$, in good agreement with the slightly super-linear slope inside haloes seen from the C07 sample ($P_{1.4} \propto Y_H^{\ 1.17 \pm 0.18}$). However, using the definition of $\sigma_H$ and the observed scaling between halo mass and radius, we find a mass dependence slightly shallower than observed: $P_{\nu} \propto M_H^{1.7-1.8}$, which is still consistent with Eq.(\ref{eq:mscale}). This may indicate a preference for more realistic field strengths,
e.g. figure 2 in C07 suggests approximately ${\cal F} \propto M_H^{-0.3}$ if the mean field strength is of the order $5-6 ~\mu$G inside a radio halo of mass $M_H \sim 10^{14.5}$ M$_{\odot}$, assuming $B_H \propto M_H^{0.5}$. A shallower $B_H-M_H$ relation will correspondingly imply a weaker field to explain the observed $P_{1.4}-M_H$ scaling.
\vspace*{-2mm}
\section{Conclusions}
In this letter we presented the first radio-SZ correlation results for clusters hosting radio haloes, using published radio data and the {\it Planck~} SZ catalog. There is a clear correspondence between the thermal and non-thermal components, as expected from the well-established radio/X-ray correlation.
On the other hand, we found no strong bi-modal division in the cluster population split between radio halo and ``radio quiet'' objects. The halo non-detection clusters are generally radio under-luminous, but their occurrence in the {\it Planck~} catalog is much less frequent than in the X-ray complete samples, and as such we cannot conclude whether the bi-modality is weaker or non-existent when measured against SZ. A likely explanation for this difference may be that the bi-modality seen in the $L_X$ selection comes from a bias towards lower mass cool core systems (which are radio quiet), whereas SZ selection picks up the most massive systems irrespective of their dynamical states.
A forthcoming work will aim to test this hypothesis using a complete SZ-selected sample.
The radio-SZ correlation results were compared with simplified theoretical predictions from the hadronic and turbulent re-acceleration models. Even though the observed scaling can be explained by both of these models under certain assumptions, the turbulent re-acceleration model can be considered a better fit to the data, given the simple formulations used in this letter.
The difference between the global radio-SZ scaling and the one within the halo volume is explained by the non-linear scaling between radio halo mass and total cluster mass. An indicative flattening of the correlation slope was observed when considering a cluster sample at 327 MHz, but the result became consistent with the 1.4 GHz observations when the haloes and non-detections were considered together as one population.
\vspace*{-2mm}
\section*{Acknowledgments}
I am grateful to Klaus Dolag and Christoph Pfrommer for their help in clarifying the theory and observations of radio haloes. I thank Arif Babul, Luigi Iapichino, Silvano Molendi, Florian Pacaud and Martin Sommer
for helpful discussions, Mariachiara Rossetti for providing a temperature estimate for the cluster AS780, and in particular the anonymous referee for a thorough reading of the manuscript and suggesting numerous improvements. I acknowledge the invitation to participate in the KITP Program ``Galaxy clusters: the crossroads of astrophysics and cosmology'', supported by the NSF Grant No. PHY05-51164, where this research was initiated.
\vspace*{-2mm}
Broad emission lines, one of the most prominent features of active galactic nuclei (AGNs), are emitted from the broad-line region (BLR). The BLR is one of the main components of the inner part of an AGN, located between the accretion disk and the dust structure \citep{Antonucci1993}. It is generally believed that the motions of BLR clouds are dominated by the gravity of the central supermassive black hole (SMBH), although some researchers have argued that radiation pressure from the accretion disk can also have non-negligible effects. Although broad emission lines allow us to study the geometry and kinematics of the BLR and to measure the mass of the SMBH, the BLR is still not fully understood. The BLR is too small to be directly spatially resolved by most telescopes. To date, only three AGNs have been resolved using infrared interferometry \citep[e.g.,][]{Gravity2018}; therefore, other approaches are urgently needed.
Among them, reverberation mapping (RM) is an effective way of investigating the BLR and SMBH, which subtly substitutes the time resolution of the telescope for its spatial resolution \citep[e.g.,][]{Blandford1982,Cackett2021}. In principle, the continuum radiated from the accretion disk provides the ionizing photons for the BLR gas, and this photoionization process generates emission lines that are broadened by the motion of the BLR clouds into the corresponding broad emission lines. The characteristic size of the BLR can be inferred by monitoring the responses of the broad emission lines to continuum variations and measuring the time delays between the broad emission-line and continuum light curves. Combining the size of the BLR with the broad emission-line velocity, and using the virial relationship, one can estimate the mass of the SMBH. Over the past four decades, the RM method has measured the black hole masses of $\sim$ 100 AGNs, and has become a standard tool for measuring the masses of SMBHs in AGNs \citep[e.g.,][]{kaspi2000, peterson2004, bentz2009, barth2013, du2014, Fausnaugh2017, derosa2018, lu2019, zhang2019, hu2021, bao2022}. Additionally, the application of RM to the accretion disk and the torus can measure the size of the disk \citep[e.g.,][]{edelson2017, Cackett2020, Fian2022} and determine the dust sublimation radius \citep[e.g.,][]{Koshida2014, Oknyansky2014, lyu2021, lyu2022}.
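As a minimal sketch of the virial estimate, $M_\bullet = f_{\mathrm{BLR}} \, c\tau \, (\Delta V)^2 / G$, with purely illustrative input values (the virial factor, time lag, and line width below are assumptions, not measurements from this work):
\begin{verbatim}
# Sketch: virial black hole mass M = f c tau dV^2 / G
# (illustrative numbers only, not the measurements of this paper).
import astropy.units as u
from astropy.constants import c, G

f_blr = 1.0                  # assumed virial factor
tau = 7.0 * u.day            # assumed broad emission-line time lag
dV = 4000.0 * u.km / u.s     # assumed line-width velocity
print((f_blr * c * tau * dV ** 2 / G).to(u.Msun))  # ~2e7 Msun
\end{verbatim}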
The geometry and kinematics of the BLR can be inferred from velocity-resolved RM analysis of high-quality spectral data, in which the time lag is measured as a function of line-of-sight velocity across the broad emission line. Currently, more than 30 AGNs have been measured with this method, revealing different kinematic characteristics of their BLRs, such as rotation, inflow, and outflow (e.g., \citealt{du2016, Williams2018, Brotherton2020, Horne2021, lu2021, U2022, Villafana2022}). A few well-studied AGNs have velocity-resolved measurements from multiple periods. For example, \citet{xiao2018} recovered velocity-delay maps of NGC 5548 from multi-year data and found that the kinematics of the BLR varied between inflow and virial motion. \citet{hu2020b} detected that the kinematics of the BLR in PG 2130+099 changed from virialized motion to inflow on a timescale of less than one year. However, some other AGNs, e.g., NGC 3516, showed a nearly constant velocity-resolved signature over a decade \citep{denney2010, feng2021a}, indicating complicated evolution in the BLR. Therefore, multiple velocity-resolved time delay measurements are necessary to investigate the kinematic evolution of the BLR.
To this end, NGC~4151 is selected for RM monitoring in this work. As one of the brightest ($m_{V} = 11.48$ mag), nearest ($z = 0.003326$), and earliest-discovered Seyfert galaxies \citep{Seyfert1943}, NGC~4151 is one of the best-studied AGNs. It has been monitored by multiple RM projects over the last three decades, including optical campaigns \citep{maoz1991, kaspi1996, peterson2004, bentz2006, derosa2018, bentz2022} and ultraviolet (UV) campaigns \citep{Clavel1990, Ulrich1996, Metzroth2006}. Among these projects, \citet{Ulrich1996} presented velocity-delay information for the UV emission lines, while \citet{derosa2018} and \citet{bentz2022} did so for the optical lines. These results provide a suitable baseline for investigating the kinematic evolution of the BLR. Furthermore, NGC~4151 is one of the few AGNs with accretion disk RM \citep{edelson1996, edelson2017}, which constrains the size and structure of the disk. It is also a changing-look (CL) AGN, a class that usually shows large variability amplitudes in both the continuum and the emission lines. NGC~4151 has repeatedly shown the CL phenomenon: in 1984 its spectral type changed from type 1 to type 2 \citep{Penston1984}, and \citet{shapovalova2008} found a change from type 1.5 to type 1.8 between 1996 and 2006. Historical data of NGC~4151 show an extreme outburst lasting more than ten years, and it has recently entered a second outburst stage. Such unusual continuum variability should lead to dramatic changes in the radiation pressure on the BLR, and hence in the geometry and kinematics of the BLR, providing a good opportunity to investigate the nature of the BLR during this period. NGC~4151 is also one of only two AGNs (the other being NGC 3227) whose black hole mass has been measured with several independent methods, including gas dynamical modeling \citep{hicks2008} and stellar dynamical modeling \citep{onken2014, roberts2021}. Thus, NGC~4151 is an ideal candidate for verifying the reliability of RM measurements against other results.
In this work, we report the results of an RM campaign on NGC~4151. We successfully measure the time lags of the broad \rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}, \ifmmode {\rm He\ I} \else He~{\sc i}\fi, and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ emission lines and obtain their velocity-resolved delay maps. In comparison with previous velocity-delay maps, the BLR kinematics of NGC~4151 have changed, implying an evolving BLR. Furthermore, we calculate the black hole mass and the dimensionless accretion rate of NGC~4151; the result indicates a sub-Eddington accretor. This paper is arranged as follows. Observations and data reduction are introduced in Section~\ref{sec:obs}. Section \ref{sec:measure} presents light-curve measurements, variability characteristics, and the intercalibration of multi-band continuum light curves. The time lag analysis and the measurements of the black hole mass and dimensionless accretion rate are given in Section \ref{sec:analysis}. In Section~\ref{sec:discuss}, we discuss the long-term variability trend, the kinematic characteristics of the BLR, and comparisons with previous measurements of the time lag and black hole mass. Section \ref{sec:conclusion} provides a brief summary.
Throughout the paper, we adopt a distance to NGC~4151 of 15.8 Mpc \citep{yuan2020}.
\section{Observations and Data Reduction} \label{sec:obs}
NGC~4151 was monitored between 2020 November and 2021 June using the 2.4 m telescope of Lijiang Observatory, Yunnan Observatories, Chinese Academy of Sciences. The telescope is located at $100^{\circ}01^{\prime}48^{\prime\prime}$ east longitude and $26^{\circ}41^{\prime}42^{\prime\prime}$ north latitude. The year at this site divides into a dry season and a rainy season: the rainy season, from July to September, has few clear nights, while the rest of the year is dry with mostly clear weather, allowing RM observations. The telescope is equipped with the Yunnan Faint Object Spectrograph and Camera (YFOSC), which can switch quickly between imaging and spectroscopy, so we carried out both photometric and spectroscopic observations each night. Detailed information about the telescope and the Observatory is given in \citet{wang2019} and \citet{xin2020}, respectively.
\subsection{Photometry} \label{sec:photo}
Before or after the spectroscopic observations each night, we took a 30 s exposure through the Johnson $B$ filter. The photometry serves two purposes: it allows us to check the quality of our spectral data, and to verify that the comparison star is non-variable \citep{hu2020a}. The data were reduced with standard IRAF procedures, including bias subtraction and flat-fielding correction. Since the field of view (FoV) is about $10^{\prime} \times 10^{\prime}$, we selected several stars in the FoV for differential photometry. We employed a circular aperture with a radius of 4\farcs24 (corresponding to 15 pixels) and a background annulus with inner and outer radii of 14\farcs15 and 19\farcs81 (corresponding to 50 and 70 pixels), respectively. The aperture size was chosen based on the FWHM of the stars. In total, we obtained 55 epochs of photometric observations; Figure~\ref{fig:objcomp} shows the light curves of the target and the comparison star. The standard deviation of the comparison-star light curve is $\sim$ 0.017 mag, confirming that it was stable and can be used to calibrate the spectral flux.
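For illustration, the aperture and annulus sizes quoted above map directly onto standard aperture-photometry code. The following is a minimal sketch using the \texttt{photutils} package; the pixel coordinates in the usage comment are placeholders rather than our actual detector positions.
\begin{verbatim}
import numpy as np
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                aperture_photometry)

def instrumental_mag(image, xy):
    # 15 px source aperture; 50--70 px background annulus
    src = CircularAperture([xy], r=15.0)
    bkg = CircularAnnulus([xy], r_in=50.0, r_out=70.0)
    phot = aperture_photometry(image, [src, bkg])
    # scale the annulus flux to the aperture area before subtracting
    bkg_per_px = phot['aperture_sum_1'][0] / bkg.area
    flux = phot['aperture_sum_0'][0] - bkg_per_px * src.area
    return -2.5 * np.log10(flux)

# differential magnitude of the target against the comparison star:
# dmag = instrumental_mag(img, (512, 512)) - instrumental_mag(img, (300, 400))
\end{verbatim}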
\begin{figure}[!ht]
\centerline{
\includegraphics[scale=0.50]{obj-comp.pdf}
}
\caption{The photometric light curves of Johnson $B$ filter of NGC~4151 (upper) and comparison star (lower).}
\label{fig:objcomp}
\end{figure}
\subsection{Spectroscopy} \label{sec:spec}
For the spectroscopic observations, we used a slit of 2\farcs5 width to mitigate light loss, given the local average seeing of $\sim$ 1\farcs5. The telescope provides various grisms with different resolutions; we chose Grism 3, which has a dispersion of 2.86 \AA\,pixel$^{-1}$. We also inserted a UV-blocking filter, which cuts off wavelengths below 4150 \AA\ and thereby eliminates second-order spectral contamination \citep{feng2021a,feng2021b}. Taking advantage of the long slit of the Lijiang 2.4~m telescope, we simultaneously placed NGC~4151 and a comparison star in the slit to calibrate the spectral fluxes \citep{maoz1990, kaspi2000}. Each spectrum was exposed for 600 s, and the spectra were processed with IRAF. The extraction aperture and background regions are the same as those used for the photometry.
The flux calibration proceeded in the following steps. First, we used spectra of a spectrophotometric standard star observed on clear nights to measure the absolute flux of the comparison star. Second, we combined the flux-calibrated spectra of the comparison star to generate a template spectrum. Finally, we compared each individual spectrum of the comparison star with the template to create a response function, which was applied to calibrate the corresponding NGC~4151 spectrum. As mentioned above (Section \ref{sec:photo}), the comparison star is non-variable and can be employed for this calibration. In total, we obtained spectra at 41 epochs, one of which was taken with a 5\farcs05 slit. The average signal-to-noise ratio at rest-frame 5100 \AA\ is $\sim$ 149 pixel$^{-1}$.
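The response-function step can be sketched as follows; this is a simplified \texttt{numpy} illustration that assumes the spectra are already resampled onto a common wavelength grid, with a low-order polynomial standing in for whatever smoothing is applied in practice.
\begin{verbatim}
import numpy as np

def response_function(wave, comp_obs, comp_template, deg=5):
    """Nightly response = template / observed comparison-star spectrum,
    smoothed with a low-order polynomial to suppress noise."""
    ratio = comp_template / comp_obs
    coeff = np.polyfit(wave, ratio, deg)
    return np.polyval(coeff, wave)

# calibrated AGN spectrum for the same night:
# agn_cal = agn_obs * response_function(wave, comp_obs, comp_template)
\end{verbatim}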
\subsection{Other Telescopes}
Nowadays, time-domain survey projects such as the All-Sky Automated Survey for Supernovae (ASAS-SN\footnote{\url{http://www.astronomy.ohio-state.edu/asassn/index.shtml}}) and the Zwicky Transient Facility (ZTF\footnote{\url{https://www.ztf.caltech.edu/}}) make it convenient to increase the cadence and length of the monitoring and to explore long-term variability. ASAS-SN images the whole sky, currently reaching a depth of 18 mag. It consists of multiple stations and provides 3 dithered 90~s exposures in the $V$ or $g$ band \citep{shappee2014, kocha2017}. Aperture photometry with an aperture radius of 16\farcs0 is applied to the images.
ZTF is another survey, using a 48-inch telescope at Palomar Observatory with a 47~deg$^{2}$ FoV to monitor transients and variable astronomical phenomena \citep{Bellm2019, Graham2019, Masci2019}. It scans the Northern Sky on a three-day cadence and the Galactic Plane every night. The depths of the $g$ and $r$ bands in 30~s exposures are 20.8 and 20.6 mag, and the image quality is 2\farcs1 and 2\farcs0, respectively. In this work, we adopt the Automatic Learning for the Rapid Classification of Events (ALeRCE\footnote{\url{https://alerce.online}}) API to retrieve the ZTF data. ALeRCE successfully manages the ZTF alert stream and is being developed toward the Legacy Survey of Space and Time \citep{forster2021, sanchez2021}.
\begin{deluxetable*}{lcccccccccc}[!htbp]
\tablecolumns{10}
\tablewidth{\textwidth}
\tabletypesize{\scriptsize}
\tablecaption{The Broad Emission-line and Photometric Light Curves}
\label{table:lc}
\tablehead{\multicolumn{6}{c}{Spectra} &
\colhead{} &
\multicolumn{3}{c}{Photometry} \\
\cline{1-6} \cline{8-10}
\colhead{JD - 2,450,000} &
\colhead{$F_{\rm H\alpha}$} &
\colhead{$F_{\rm H\beta}$} &
\colhead{$F_{\rm H\gamma}$} &
\colhead{$F_\ifmmode {\rm He\ I} \else He~{\sc i}\fi$} &
\colhead{$F_\ifmmode {\rm He\ II} \else He~{\sc ii}\fi$} &
\colhead{} &
\colhead{JD - 2,450,000} &
\colhead{mag} &
\colhead{$\rm Obs$}
}
\startdata
9162.44 & $147.136 \pm 1.821$ & $46.207 \pm 0.725$ & $19.227 \pm 0.626$ & $7.533 \pm 0.216$ & $8.290 \pm 1.395$ & & 9153.05 & $12.959 \pm 0.020$ & ZTF \\
9224.38 & $163.628 \pm 1.828$ & $51.484 \pm 0.735$ & $22.135 \pm 0.652$ & $8.387 \pm 0.244$ & $13.503 \pm 1.400$ & & 9156.02 & $12.959 \pm 0.020$ & ZTF \\
9225.37 & $164.933 \pm 1.826$ & $52.237 \pm 0.732$ & $21.685 \pm 0.641$ & $8.742 \pm 0.236$ & $14.133 \pm 1.399$ & & 9158.02 & $12.924 \pm 0.019$ & ZTF \\
9230.31 & $167.195 \pm 1.830$ & $53.336 \pm 0.738$ & $23.489 \pm 0.661$ & $8.966 \pm 0.255$ & $14.731 \pm 1.402$ & & 9162.43 & $12.903 \pm 0.017$ & LJ \\
9239.30 & $169.726 \pm 1.833$ & $52.926 \pm 0.742$ & $22.769 \pm 0.671$ & $9.259 \pm 0.265$ & $13.153 \pm 1.404$ & & 9166.01 & $12.867 \pm 0.019$ & ZTF \\
\enddata
\tablecomments{The emission-line fluxes are in units of $10^{-13}~\rm erg\,s^{-1}\,cm^{-2}$. The photometry includes the $B$-band data of Lijiang and the $g$-band data of ZTF, intercalibrated with PyCALI; the specific telescope is marked in the ``Obs'' column, where ``LJ'' refers to the Lijiang 2.4 m telescope and ``ZTF'' to ZTF.
\\
(This table is available in its entirety in a machine-readable form in the online journal.)}
\end{deluxetable*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.49\textwidth]{4151-fit-mean.pdf}
\includegraphics[width=0.49\textwidth]{4151-fit-individual.pdf}
\caption{The upper left panel shows the fit of the mean spectrum, where the broad Balmer lines (\rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}) are shown in magenta, the broad \ifmmode {\rm He\ I} \else He~{\sc i}\fi\ and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ are shown in cyan, and orange lines represent narrow emission lines. The blue, green, and grey lines correspond to AGN power-law, \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi\ template, and host galaxy template, respectively. Additionally, the red line symbolizes the best-fit model, and the original spectrum is shown in black. The lower left panel is fitting residuals. The right panels are the same as the left panels but for an individual spectrum.}
\label{fig:fit}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.9\textwidth]{4151_lc.pdf}
\caption{The left panels show photometry, \rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}, \ifmmode {\rm He\ I} \else He~{\sc i}\fi, and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ light curves. Panel (a) is the intercalibrated result of $B$ (Lijiang) and $g$ (ZTF). The blue and orange shaded regions represent light curves reconstructed by MICA and JAVELIN. The right panels show time-lag distributions. Panel (g) shows the photometric autocorrelation function (ACF). Furthermore, in panels (h)-(l), the black and green lines correspond to ICCF and CCCD, respectively, and the blue dotted and orange dashed lines represent time-lag probability distributions of MICA and JAVELIN, respectively. The vertical dashed lines represent a time lag of 0 days.
}
\label{fig:lc}
\end{figure*}
\section{Measurements} \label{sec:measure}
\subsection{Light Curves} \label{sec:lc}
There are two methods for measuring emission-line light curves: direct integration and spectral decomposition. In the integration method, two continuum windows on either side of the emission line are selected and fitted with a straight line to represent the underlying continuum. The continuum is then subtracted, and the continuum-subtracted emission-line profile is integrated to give the emission-line flux. This method is commonly used to extract emission-line fluxes in RM (e.g., \citealt{kaspi2000, peterson2004, bentz2009, hu2021}), but is not suited to blended and weak emission lines such as \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ and \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi. Alternatively, spectral decomposition can mitigate the contamination from other components, because it isolates the pure emission-line components and yields light curves for multiple lines (e.g., \citealt{barth2015, hu2015, li2021}). Here, we adopt multi-component spectral fitting to derive the emission-line light curves; the fitting scheme is introduced below.
Before the fitting, Galactic extinction and redshift need to be corrected. According to the dust map of \citet{schlafly2011}, $A_{\rm V}$ = 0.074 mag, as provided by NED\footnote{\url{http://ned.ipac.caltech.edu}}. We adopted an extinction law with $R_{\rm V}$ = 3.1 to correct for reddening \citep{cardelli1989, donnell1994}. Each spectrum was then shifted to the rest frame using \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi$\lambda$5007. After these corrections, we decomposed the spectra using the DASpec\footnote{\url{https://github.com/PuDu-Astro/DASpec}} software, a GUI tool based on the Levenberg-Marquardt technique. We first produce a mean spectrum from all spectra except the one taken with the 5\farcs05 slit, and fit it. The fitting model includes: (1) the host galaxy template from \citet{bc2003}; (2) the \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi\ template from \citet{boroson1992}; (3) a power law representing the AGN continuum; (4) a single Gaussian for each of the broad components of the Balmer lines (\rm{H$\alpha$}, \rm{H$\beta$}, and \rm{H$\gamma$}) and the Helium lines (\ifmmode {\rm He\ I} \else He~{\sc i}\fi\ and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi); and (5) a series of single Gaussians for the narrow components of the Balmer and Helium lines and for several narrow forbidden lines (see details in \citealt{feng2021a}). We estimate the instrumental broadening, about 1200~$\rm km\,s^{-1}$, by comparing the width of \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi$\lambda$5007 in the mean spectrum with that in a high-resolution HST spectrum. Moreover, because the host galaxy and \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi\ components are weak and degenerate, the host is difficult to fit, and we therefore restrict the broadening of the host template to between 1100 and 1300~$\rm km\,s^{-1}$. In the fitting, the widths and shifts of all narrow emission lines are tied to \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi$\lambda$5007, while the flux ratios of \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi$\lambda$5007 to \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi$\lambda$4959 and \ifmmode {\rm~[N\ II]} \else [N~{\sc ii}]\fi$\lambda$6583 to \ifmmode {\rm~[N\ II]} \else [N~{\sc ii}]\fi$\lambda$6548 are set to 3 and 2.96, respectively. We first fit the mean spectrum to obtain a set of parameters, including the spectral index of the power law, the FWHM and shift of \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi, and the flux ratios of all narrow lines relative to \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi$\lambda$5007. These parameters are then fixed in the fitting of each individual spectrum.
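To illustrate how the narrow-line constraints enter the model, the sketch below fits a single line complex with the narrow widths and shifts shared with \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi\ and the doublet ratio fixed to 3. This is a deliberately reduced toy model written with \texttt{scipy}; the actual fits are performed in DASpec and additionally include the host, \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi, and many more narrow lines.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def model(wave, a, slope, fb, cb, sb, fn, fo, dv, sn):
    cont = a * (wave / 5100.0) ** slope          # AGN power law
    z = 1.0 + dv / 2.998e5                       # common narrow-line shift
    broad  = gauss(wave, fb, cb, sb)             # broad H-beta
    narrow = gauss(wave, fn, 4861.3 * z, sn)     # narrow H-beta, width tied
    oiii = (gauss(wave, fo, 5006.8 * z, sn)      # [O III] doublet with the
            + gauss(wave, fo / 3.0, 4958.9 * z, sn))  # flux ratio fixed to 3
    return cont + broad + narrow + oiii

# popt, pcov = curve_fit(model, wave, flux, p0=p0, sigma=err)
\end{verbatim}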
Figure~\ref{fig:fit} shows the decomposition of the mean spectrum (left) and one example of fitting an individual spectrum (right). We also fit the spectrum with a slit width of 5\farcs05, using the same spectral index and flux ratios of narrow lines as the other spectra.
Finally, the broad emission-line fluxes are measured from the best-fit model of each individual spectrum and summarized in Table~\ref{table:lc}. The light-curve errors contain both Poisson noise and systematic uncertainty. The systematic errors arise from variable weather and instrument conditions, which add scatter to our measurements. We follow \citet{du2014} and apply the median smoothing method to assess such errors. The light curves are displayed in the left panels of Figure~\ref{fig:lc}. The \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi\ line, however, is too weak for its variability to be measured accurately and is not analyzed in the present work.
\subsection{Variability Characteristics}
We use the following quantities to describe the statistical characteristics of the data. The first is the sample mean flux, defined as
\begin{equation}
\langle F \rangle = \frac{1}{N}\sum\limits_{i=1}^N F_{i},
\end{equation}
where $F_{i}$ is the $i$-th flux measurement and $N$ is the total number of observations. The second is the sample standard deviation, written as
\begin{equation}
S = \bigg(\frac{1}{N-1}\sum\limits_{i=1}^N (F_{i} - \langle F \rangle)^2\bigg)^{1/2}.
\end{equation}
The third is the fractional variability amplitude $F_{\rm var}$ to describe AGN intrinsic variability \citep{Rodr1997},
\begin{equation}
F_{\rm var}=\frac{(S^2-\Delta^2)^{1/2}}{\langle F \rangle},
\end{equation}
where $\Delta^2 = \frac{1}{N}\sum\limits_{i=1}^N \Delta_{i}^2$, and $\Delta_{i}$ is the uncertainty of $F_{i}$. According to \citet{edelson2002}, the uncertainty $\sigma_{\rm var}$ of $F_{\rm var}$ is expressed as
\begin{equation}
\sigma_{\rm var}=\frac{1}{F_{\rm var}}\bigg(\frac{1}{2N}\bigg)^{1/2}\frac{S^2}{{\langle F \rangle}^2}.
\end{equation}
The last is $R_{\max}$, the ratio between the maximum and the minimum of the light curve. The relevant measurements are listed in Table~\ref{table:stas}.
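These statistics translate directly into code. The following is a minimal \texttt{numpy} sketch of the four quantities defined above; it assumes $S^2$ exceeds the mean square measurement error, which holds for all our light curves.
\begin{verbatim}
import numpy as np

def variability_stats(flux, err):
    """Mean flux, sample standard deviation, F_var with its
    uncertainty (Edelson et al. 2002), and R_max."""
    n = flux.size
    mean = flux.mean()
    s = flux.std(ddof=1)
    delta2 = np.mean(err ** 2)          # mean square measurement error
    fvar = np.sqrt(s ** 2 - delta2) / mean
    sig_fvar = np.sqrt(1.0 / (2 * n)) * s ** 2 / (fvar * mean ** 2)
    rmax = flux.max() / flux.min()
    return mean, s, fvar, sig_fvar, rmax
\end{verbatim}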
\begin{deluxetable}{lcccccccccc}[!ht]
\tablecolumns{10}
\tablewidth{\textwidth}
\tabletypesize{\scriptsize}
\tablecaption{Light Curve Statistics \label{table:stas}}
\tablewidth{\textwidth}
\tablehead{
\colhead{Light Curve} &
\colhead{Mean Flux} &
\colhead{Standard Deviation} &
\colhead{$F_{\rm var}(\%)$} &
\colhead{$R_{\rm max}$}
}
\startdata
\rm{H$\alpha$} & 167.73 & 6.06 & $3.49\pm0.42$ & 1.23\\
\rm{H$\beta$} & 52.78 & 2.86 & $5.30\pm0.62$ & 1.28\\
\rm{H$\gamma$} & 22.48 & 1.55 & $6.39\pm0.84$ & 1.34\\
\ifmmode {\rm He\ I} \else He~{\sc i}\fi & 8.98 & 1.00 & $10.95\pm1.27$ & 1.50\\
\ifmmode {\rm He\ II} \else He~{\sc ii}\fi & 12.95 & 2.92 & $20.12\pm2.86$ & 2.57\\
\enddata
\tablecomments{The emission-line fluxes are in units of $10^{-13}~\rm erg\,s^{-1}\,cm^{-2}$.}
\end{deluxetable}
\subsection{Intercalibration} \label{sec:intercali}
We merge only the $g$-band data of ZTF with the $B$-band data of Lijiang to form the continuum light curve for the time-lag analysis, because the ZTF $r$-band data within our monitoring period are sparse. Before merging, we confirm that the time lag between the $g$ and $B$ bands is less than one day. Because different apertures and filters are used, the datasets need to be intercalibrated.
We calibrate the multi-band data with PyCALI\footnote{\url{https://github.com/LiyrAstroph/PyCALI}}, which models the light curves as a damped random walk and applies multiplicative and additive factors to bring the datasets onto a common scale \citep{li2014L}. The calibration parameters are inferred in a Bayesian framework using the diffusive nested sampling algorithm \citep{Brewer2011}. The calibrated data, used as the continuum light curve in the subsequent analyses, are presented in panel (a) of Figure~\ref{fig:lc} and given in Table~\ref{table:lc}.
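Stripped of the damped-random-walk model and the Bayesian sampling that PyCALI actually uses, the conceptual core of the intercalibration is a scale-and-shift fit of each dataset onto the reference. The following minimal \texttt{numpy} sketch is illustrative only and is not the PyCALI algorithm.
\begin{verbatim}
import numpy as np

def scale_and_shift(t_ref, f_ref, t_new, f_new):
    """Least-squares multiplicative (a) and additive (b) factors
    mapping f_new onto the reference scale: f_ref ~ a * f_new + b.
    Assumes t_ref is sorted and the two datasets overlap in time."""
    f_at_new = np.interp(t_new, t_ref, f_ref)
    design = np.vstack([f_new, np.ones_like(f_new)]).T
    (a, b), *_ = np.linalg.lstsq(design, f_at_new, rcond=None)
    return a, b
\end{verbatim}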
In addition, we collect historical light curves of NGC~4151 from the literature to probe its long-term variations, including the 5100 \AA\ continuum \citep{bentz2006, shapovalova2008, derosa2018}, the $B$ band \citep{lyu2021}, the $b$ and $v$ bands \citep{edelson2017}, and sky survey data: the $V$ and $g$ bands of ASAS-SN and the $g$ and $r$ bands of ZTF. These data are likewise scaled and shifted to the Lijiang $B$ band with PyCALI (see Figure~\ref{fig:calilc}).
\begin{figure*}[!ht]
\centering
\includegraphics[width=1\textwidth]{4151-intercali5.pdf}
\caption{The long-term light curve of NGC~4151, generated by combining the collected continuum and photometric data with the $B$ band of Lijiang. Details of the intercalibration are given in Section~\ref{sec:intercali}. The horizontal dashed line marks the latest peak.}
\label{fig:calilc}
\end{figure*}
\begin{deluxetable}{lcccccccccc}[!ht]
\tablewidth{\textwidth}
\tabletypesize{\scriptsize}
\tablecaption{Rest-frame Time Lags \label{table:lag}}
\tablewidth{\textwidth}
\tablehead{
\colhead{$\rm Line$} &
\colhead{$r_{\rm max}$} &
\colhead{$\tau_{\rm cent}$} &
\colhead{$\tau_{\rm MICA}$} &
\colhead{$\tau_{\rm JAV}$}&
\colhead{$\tau_{\rm Mean}$}
}
\startdata
\rm{H$\alpha$} & 0.81 & $5.00_{-3.80}^{+0.84}$ & $9.28_{-2.50}^{+2.91}$ & $8.60_{-1.55}^{+1.80}$ & $7.63_{-2.62}^{+1.85}$ \\
\rm{H$\beta$} &0.88 & $4.95_{-1.20}^{+2.12}$ & $6.94_{-1.27}^{+1.21}$ & $6.73_{-0.91}^{+0.91}$ & $6.21_{-1.13}^{+1.41}$ \\
\rm{H$\gamma$} & 0.81 & $4.90_{-2.61}^{+1.99}$ & $5.95_{-1.88}^{+1.69}$ & $6.16_{-1.33}^{+1.26}$ & $5.67_{-1.94}^{+1.65}$ \\
\ifmmode {\rm He\ I} \else He~{\sc i}\fi & 0.91 & $1.46_{-1.52}^{+0.86}$ & $1.52_{-1.00}^{+1.02}$ & $1.78_{-0.83}^{+0.70}$ & $1.59_{-1.11}^{+0.86}$ \\
\ifmmode {\rm He\ II} \else He~{\sc ii}\fi & 0.90 & $0.23_{-1.39}^{+1.76}$ & $0.55_{-0.94}^{+0.91}$ & $0.59_{-0.84}^{+1.00}$ & $0.46_{-1.06}^{+1.22}$ \\
\enddata
\tablecomments{$\tau_{\rm Mean}$ is the mean value of $\tau_{\rm cent}$, $\tau_{\rm MICA}$, and $\tau_{\rm JAV}$ in units of days.}
\end{deluxetable}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.33\textwidth]{4151velocity-resolve-lag-ha.pdf}
\includegraphics[width=0.33\textwidth]{4151velocity-resolve-lag-hb.pdf}
\includegraphics[width=0.33\textwidth]{4151velocity-resolve-lag-hg.pdf}
\includegraphics[width=0.33\textwidth]{4151velocity-resolve-lag-he1.pdf}
\includegraphics[width=0.33\textwidth]{4151velocity-resolve-lag-he2.pdf}
\caption{The velocity-resolved time-delay results of \rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}, \ifmmode {\rm He\ I} \else He~{\sc i}\fi, and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi. The top panel in each subgraph shows the velocity-resolved structure for each emission line. In each panel, the horizontal dotted line and grey band represent the ICCF measured lag and CCCD estimated errors, respectively. The vertical dashed lines denote the edges of each velocity bin. The bottom panel in each subgraph shows the rms spectrum of each emission line, for which the continuum is subtracted.}
\label{fig:velocitylag}
\end{figure*}
\section{Analysis and Results} \label{sec:analysis}
\subsection{Mean Time Lags} \label{sec:lag}
To investigate the responses of the broad emission lines to continuum variations, we take the intercalibrated photometric light curve from Section~\ref{sec:intercali} as the continuum light curve and measure the time lags of the emission lines with respect to it. Three approaches are adopted to estimate the lags: the interpolated cross-correlation function \citep[ICCF;][]{Gaskell1986,Gaskell1987}, MICA\footnote{\url{https://github.com/LiyrAstroph/MICA2}} \citep{li2016}, and JAVELIN \citep{zu2011}.
The ICCF is traditionally the most commonly used method for measuring time lags; we take the centroid of the region with correlation coefficients $r \geq 0.8 r_{\rm max}$ as the time lag $\tau_{\rm cent}$, where $r_{\rm max}$ is the maximum correlation coefficient. Following \citet{peterson1998, peterson2004}, we use the ``flux randomization'' (FR) and ``random subset selection'' (RSS) procedures to determine the lag uncertainties. A cross-correlation centroid distribution (CCCD) is built from $10^4$ Monte Carlo realizations of the FR/RSS procedures, and we take the 15.87\% and 84.13\% quantiles of the CCCD as the lower and upper bounds of the lag.
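For concreteness, a simplified (one-sided) sketch of the ICCF and the FR/RSS Monte Carlo is given below; the standard implementation additionally averages the two interpolation directions. Time arrays are assumed to be sorted, as \texttt{np.interp} requires.
\begin{verbatim}
import numpy as np

def iccf_lag(tc, fc, tl, fl, lags):
    """Correlate the line flux with the continuum interpolated at
    (t_line - lag); return the centroid of r >= 0.8 r_max."""
    r = np.array([np.corrcoef(np.interp(tl - lag, tc, fc), fl)[0, 1]
                  for lag in lags])
    sel = r >= 0.8 * r.max()
    return np.sum(lags[sel] * r[sel]) / np.sum(r[sel])

def fr_rss_once(tc, fc, ec, tl, fl, el, lags, rng):
    """One FR/RSS realization: resample epochs with replacement
    (duplicates dropped) and perturb fluxes by their errors."""
    ic = np.unique(rng.integers(0, tc.size, tc.size))
    il = np.unique(rng.integers(0, tl.size, tl.size))
    return iccf_lag(tc[ic], fc[ic] + rng.normal(0.0, ec[ic]),
                    tl[il], fl[il] + rng.normal(0.0, el[il]), lags)

# rng = np.random.default_rng(0)
# cccd = [fr_rss_once(tc, fc, ec, tl, fl, el, lags, rng)
#         for _ in range(10000)]
# lag, lo, hi = np.percentile(cccd, [50.0, 15.87, 84.13])
\end{verbatim}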
MICA and JAVELIN both rely on the damped random walk model, but assume different shapes for the transfer function: MICA adopts a superposition of displaced Gaussian functions, while JAVELIN adopts a top-hat function. For simplicity, we use a single Gaussian as the transfer function in MICA. The centers of the Gaussian and of the top hat represent the time delays in MICA and JAVELIN, respectively. The time-lag posterior probability distributions are evaluated with the Markov chain Monte Carlo (MCMC) technique; the medians of the distributions are taken as the time delays, and the 15.87\% and 84.13\% quantiles give the lower and upper uncertainties. Both MICA and JAVELIN can also reconstruct the light curves; the reconstructions are shown as blue and orange bands in the left panels of Figure~\ref{fig:lc}.
The time lags from the three methods are listed in Table~\ref{table:lag}, where $\tau_{\rm MICA}$ and $\tau_{\rm JAV}$ denote the MICA and JAVELIN measurements, and the posterior probability distributions are shown in the right panels of Figure~\ref{fig:lc}. The MICA and JAVELIN results are consistent with each other, and also agree with the ICCF measurements except for \rm{H$\alpha$}\ and \rm{H$\beta$}. Overall, the ICCF lags are lower than those from the other two methods. This may be because MICA and JAVELIN model the light curve with the damped random walk, whereas the ICCF directly interpolates it linearly, so the measured lags are affected both by this methodological difference and by scattered points or gaps in the light curve. We also find that the lags from every method follow $\tau_{\rm H\alpha} > \tau_{\rm H\beta} > \tau_{\rm H\gamma} > \tau_{\rm He\ I} >\tau_{\rm He\ II}$, a radial stratification related to optical depth \citep{Korista2004} and ionization energy \citep{CS1988}. We adopt the mean of the three lags for each line in the subsequent analyses. The ratios of the mean lags of \rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}, \ifmmode {\rm He\ I} \else He~{\sc i}\fi, and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ relative to \rm{H$\beta$}\ are $1.23 : 1.00 : 0.91 : 0.26 : 0.07$.
\subsection{Velocity-resolved Lags}
The time lags between the continuum and the broad emission lines, measured in Section~\ref{sec:lag}, characterize the mean sizes of the BLRs. By analyzing the time series in different line-of-sight velocity bins, i.e., velocity-resolved time lags, one can obtain information about the kinematics and geometry of the BLR. Note that the results of this approach are not necessarily exact, as weak reverberation effects are ignored when a mean time lag is used for each bin \citep[see][]{derosa2018}. Three basic patterns describe the BLR kinematics: virialized, inflowing, and outflowing motion. If the lags at the line center are longer than those in the line wings, the BLR motion is virialized. If the lags of the blueshifted velocity bins are longer than those of the redshifted bins, the gas is inflowing; the opposite is the signature of outflow.
The detailed steps of the velocity-resolved RM are as follows. First, we select two windows on either side of the emission line in the rms spectrum and fit the underlying continuum with a straight line. Second, we divide each continuum-subtracted emission line into several bins of equal velocity width. We then apply these bins to each individual spectrum to obtain the bin light curves, which are shown in Appendix~\ref{sec:appendix}. Finally, for each bin we cross-correlate the line and continuum light curves with the ICCF to measure the lag, with uncertainties estimated from the CCCD. The velocity-dependent lags are shown in Figure~\ref{fig:velocitylag}. Note that the emission lines are isolated here with a method different from that of Section~\ref{sec:lc}; the method used here mitigates the degeneracy in the line wings introduced by the multi-component spectral fitting, and the measurements from the two methods are generally consistent.
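The binning step amounts to integrating the continuum-subtracted profile within fixed velocity windows, spectrum by spectrum. A minimal \texttt{numpy} sketch follows; the velocity range in the usage comment is illustrative rather than the exact bin edges adopted for each line.
\begin{verbatim}
import numpy as np

C_KMS = 299792.458

def bin_fluxes(wave, flux, line_center, edges_kms):
    """Fluxes of a continuum-subtracted line profile in fixed
    line-of-sight velocity bins (rest-frame wavelengths assumed)."""
    vel = (wave / line_center - 1.0) * C_KMS
    out = []
    for v_lo, v_hi in zip(edges_kms[:-1], edges_kms[1:]):
        m = (vel >= v_lo) & (vel < v_hi)
        out.append(np.trapz(flux[m], wave[m]))
    return np.array(out)

# e.g. eight equal-width bins across H-beta:
# edges = np.linspace(-5000.0, 5000.0, 9)
\end{verbatim}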
We note that the velocity-delay structures of \rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}, and \ifmmode {\rm He\ I} \else He~{\sc i}\fi\ are complicated. All four lines show double-peaked velocity-delay structures, but these are not identical to each other. The blue-side peaks are higher than the red-side ones in \rm{H$\alpha$}, \rm{H$\beta$}, and \ifmmode {\rm He\ I} \else He~{\sc i}\fi, whereas \rm{H$\gamma$}\ appears to have a higher red-side peak, possibly because it is affected by \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi\,$\lambda$4363. \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ shows an infalling signature, but it seems unreliable: in the individual spectra, \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ is weak and blended with \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi, making it hard to obtain a reliable velocity-delay structure. In short, since the velocity-resolved lags of \rm{H$\alpha$}, \rm{H$\beta$}, and \ifmmode {\rm He\ I} \else He~{\sc i}\fi\ are consistent, we regard this signature as representative of the BLR kinematics, which may reflect a combination of virialized and inflowing motions. Mrk~6 shows a similar velocity-delay structure \citep{doroshenko2012, grier2013b, du2018}. In 2012, however, the pattern of NGC~4151 was virialized motion \citep{derosa2018}, perhaps indicating a change in the BLR kinematics. In addition, \citet{bentz2022} reanalyzed the RM data of NGC~4151 from 2005 and modeled them to constrain the geometry and kinematics of the BLR. Their results indicate that the BLR can be characterized by a thick disk with an opening angle of $\sim 57^{\circ}$ and an inclination angle of $\sim 58^{\circ}$, and that the kinematics are dominated by eccentric bound orbits, with about 10\% of the orbits tending to be near-circular. These results further support the conclusion that the kinematics, as well as the geometry, of the BLR have changed.
\begin{deluxetable*}{lcccccccccc}[!ht]
\tablewidth{\textwidth}
\tabletypesize{\scriptsize}
\tablecaption{Time Lags, Line Widths, and Black Hole Masses \label{table:width}}
\tablewidth{\textwidth}
\tablehead{
\multirow{2}{*}{$\rm Line$} &
\multicolumn{1}{c}{$\tau_{\rm Mean}$} &
\multicolumn{1}{c}{$\tau_{\rm Tot}$} &
\multicolumn{1}{c}{FWHM} &
\multicolumn{1}{c}{$\sigma_{\rm line}$} &
\multicolumn{1}{c}{$M_{\rm vir}$}&
\multicolumn{1}{c}{$M_{\bullet}$}&
\multicolumn{1}{c}{$M_{\bullet}^{'}$}\\
\colhead{}&
\multicolumn{2}{c}{(days)} &
\multicolumn{2}{c}{($\rm km\,s^{-1}$)}&
\multicolumn{3}{c}{($\times 10^7 M_{\odot}$)}
}
\startdata
\rm{H$\alpha$} & $7.63_{-2.62}^{+1.85}$ & $8.45_{-2.96}^{+2.17}$ & $4897\pm 5$ & $ 2069\pm 2$ & $3.57_{-1.23}^{+0.87}$ &$ 4.64_{-1.59}^{+1.13}$ &$ 5.14_{-1.80}^{+1.32}$\\
\rm{H$\beta$} & $6.21_{-1.13}^{+1.41}$ & $7.03_{-1.47}^{+1.73}$ & $5003\pm 13$ & $ 2074\pm 5$ & $3.03_{-0.55}^{+0.69}$ &$ 3.94_{-0.72}^{+0.90}$ &$ 4.47_{-0.93}^{+1.10}$\\
\rm{H$\gamma$} & $5.67_{-1.94}^{+1.65}$ & $6.50_{-2.28}^{+1.96}$ & $4799\pm 20$ & $ 2011\pm 8$ & $2.55_{-0.87}^{+0.74}$ &$ 3.31_{-1.13}^{+0.96}$ &$ 3.80_{-1.33}^{+1.15}$\\
\ifmmode {\rm He\ I} \else He~{\sc i}\fi & $1.59_{-1.11}^{+0.86}$ & $2.41_{-1.45}^{+1.18}$ & $6031\pm 50$ & $ 2340\pm 11$ & $1.13_{-0.79}^{+0.61}$ &$ 1.46_{-1.03}^{+0.79}$ &$ 2.23_{-1.34}^{+1.09}$\\
\ifmmode {\rm He\ II} \else He~{\sc ii}\fi & $0.46_{-1.06}^{+1.22}$ & $1.28_{-1.40}^{+1.54}$ & $7099\pm 52$ & $ 2375\pm 6$ & $0.45_{-1.04}^{+1.20}$ &$ 0.58_{-1.35}^{+1.56}$ &$ 1.64_{-1.78}^{+1.97}$\\
\enddata
\tablecomments{$\tau_{\rm Mean}$ is the mean lag of three methods in Section~\ref{sec:lag}, $\tau_{\rm Tot}$ is the result of $\tau_{\rm Mean}$ plus $\tau_{\rm b-uv}$ (see details in Section~\ref{sec:uv/optical}). FWHM and $\sigma_{\rm line}$ represent line widths of mean spectrum. $M_{\rm vir}$ is the virial product, $M_{\bullet}$ is the black hole mass when adopting $f =1.3$ and not considering the error of $f$, and $M_{\bullet}^{'}$ is measurement with $\tau_{\rm Tot}$.}
\end{deluxetable*}
\subsection{Black Hole Mass and Accretion Rate} \label{sec:bhmass}
Assuming that the BLR clouds are bound by the gravity of the black hole and follow virialized motion, the black hole mass can be expressed as
\begin{equation}
M_{\bullet} = f \frac{R_{\rm BLR} (\Delta V)^2}{G} \equiv f M_{\rm vir},
\end{equation}
where $R_{\rm BLR} = c \tau$ represents the mean size of BLR, $c$ is the speed of light, $\tau$ is the mean time lag of the BLR, $G$ is the gravitational constant, $\Delta V$ is the broad emission-line velocity, usually characterized by the emission-line width, $f$ is the virial factor, depending on geometry, kinematics, and inclination of the BLR, and $M_{\rm vir}$ is virial product.
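As a consistency check, the \rm{H$\beta$}\ numbers in Table~\ref{table:width} reproduce the tabulated virial product and black hole mass; the short arithmetic sketch below uses cgs units.
\begin{verbatim}
G_CGS = 6.674e-8     # gravitational constant [cgs]
C_CGS = 2.998e10     # speed of light [cm/s]
M_SUN = 1.989e33     # solar mass [g]
DAY   = 86400.0      # seconds per day

tau  = 6.21 * DAY    # mean rest-frame H-beta lag
fwhm = 5003.0e5      # H-beta FWHM of the mean spectrum [cm/s]

m_vir = C_CGS * tau * fwhm**2 / G_CGS / M_SUN
print(m_vir / 1e7)         # ~3.03, the tabulated virial product
print(1.3 * m_vir / 1e7)   # ~3.94, the black hole mass for f = 1.3
\end{verbatim}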
The line width is generally characterized by the FWHM or the line dispersion $\sigma_{\rm line}$ of the mean or rms spectrum. $\sigma_{\rm line}$ is expressed as
\begin{equation}
\sigma_{\rm line}^{2}(\lambda) = \frac{\int \lambda^{2} F(\lambda)d\lambda}{\int F(\lambda)d\lambda} - \bigg\lbrack \frac{\int \lambda F(\lambda)d\lambda}{\int F(\lambda)d\lambda} \bigg\rbrack^{2},
\end{equation}
where $F(\lambda)$ is the flux density of the emission-line profile at wavelength $\lambda$. Here, we present line-width measurements from the mean spectrum and use the FWHM to estimate $M_{\bullet}$. Because the rms spectrum contains narrow-line residuals that yield unreliable line widths, we do not use rms widths here; they are introduced in Section~\ref{sec:vr}.
We use the bootstrap technique to assess the line widths and their uncertainties. Specifically, we randomly select $N$ spectra with replacement from the $N$ spectra and create a new mean spectrum after removing duplicates. This process is repeated 1000 times to generate 1000 new mean spectra. Each new mean spectrum is fitted with the spectral decomposition of Section~\ref{sec:lc}, using the same best-fitting model, and the FWHMs of the broad emission lines are obtained from the fitted Gaussian models. The FWHM of each line and its error are taken as the median and standard deviation of these measurements. We then subtract all fitted components except the broad emission lines from the above mean spectra, producing the corresponding residual spectra, from which the distribution of $\sigma_{\rm line}$ is measured for each broad line; the median and standard deviation of this distribution give $\sigma_{\rm line}$ and its error. Both the FWHM and $\sigma_{\rm line}$ are corrected for the instrumental broadening of about 1200 $\rm km\,s^{-1}$ (Section~\ref{sec:lc}) by subtracting it in quadrature, and the corrected line widths are summarized in Table~\ref{table:width}.
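The moment integrals defining $\sigma_{\rm line}$ and the quadrature correction can be written compactly as follows (a minimal \texttt{numpy} sketch; the dispersion is returned in wavelength units and must be multiplied by $c/\lambda_1$ for $\rm km\,s^{-1}$).
\begin{verbatim}
import numpy as np

def line_dispersion(wave, flux):
    """Flux-weighted first and second moments of a
    continuum-subtracted line profile."""
    norm = np.trapz(flux, wave)
    lam1 = np.trapz(wave * flux, wave) / norm
    lam2 = np.trapz(wave ** 2 * flux, wave) / norm
    return np.sqrt(lam2 - lam1 ** 2)

def correct_broadening(width_kms, inst_kms=1200.0):
    """Remove the instrumental broadening in quadrature."""
    return np.sqrt(width_kms ** 2 - inst_kms ** 2)
\end{verbatim}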
The $f$ factor can be estimated in several ways: from the black hole mass versus stellar velocity dispersion ($M_{\bullet}-\sigma_{*}$) relation \citep[e.g.,][]{onken2004, woo2010, grier2013a, ho2014}, from BLR dynamical modeling of individual AGNs \citep{pancoast2012, li2013, li2018}, from the widths and shifts of redward-shifted broad emission lines \citep{liu2017,liu2022,MJ20}, and from fitting spectral energy distributions with an accretion disk model \citep{mej2018}. \citet{Kormendy2013} showed that the $M_{\bullet}-\sigma_{*}$ relation depends on the bulge type, so we adopt the $f$ factor from \citet{ho2014}, who derived it separately for different bulge types. Since NGC~4151 hosts a classical bulge \citep{ho2014}, we use $f = 1.3$ together with $\tau_{\rm Mean}$ and the FWHM of the mean spectrum to measure $M_{\bullet}$; the virial products $M_{\rm vir}$ are also listed in Table~\ref{table:width}. The black hole masses measured from the different emission lines are consistent with each other within the errors, though the masses from the Balmer lines are larger than those from \ifmmode {\rm He\ I} \else He~{\sc i}\fi\ and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi.
The dimensionless accretion rate is defined as $\ifmmode {\dot{\mathscr{M}}} \else $\dot{\mathscr{M}}$\fi=\dot{M}_{\bullet}\,c^2/L_{\rm Edd}$, where $\dot{M}_{\bullet}$ is the mass accretion rate and $L_{\rm Edd}$ is the Eddington luminosity \citep{wang2014}. Based on the standard accretion disk model \citep{Shakura1973}, $\ifmmode {\dot{\mathscr{M}}} \else $\dot{\mathscr{M}}$\fi$ can be estimated by the following formula \citep{wang2014}
\begin{equation}\label{equ:accre}
\ifmmode {\dot{\mathscr{M}}} \else $\dot{\mathscr{M}}$\fi = 20.1\,\left(\frac{\ell_{44}}{\cos i}\right)^{3/2}m_7^{-2},
\end{equation}
where $\ell_{44}=L_{5100}/10^{44}\, \rm erg\,s^{-1}$, $m_7 = M_{\bullet}/10^7M_{\odot}$, and $i$ is the inclination of the accretion disk. Here, we adopt $i=45 \pm 5^{\circ}$, the inclination of the narrow-line region (NLR) of NGC 4151 \citep{Das2005}, which agrees with the inclination of the NLR bicone model of \citet{Fischer2013}. Note that the NLR inclination does not always coincide with that of the accretion disk, BLR, or torus; for instance, $i=45^{\circ}$ disagrees with the average disk inclination of $\sim 20^{\circ}$ derived from Fe K$\alpha$ modeling \citep{Nandra1997}. From the fits to all individual spectra, we extract the AGN continuum component and obtain a mean flux at 5100 \AA\ of $29.95 (\pm 5.99) \times 10^{-15}\, \rm erg\,s^{-1}\,cm^{-2}\,\AA^{-1}$, corresponding to a luminosity of $L_{5100} = 4.56 (\pm 0.91) \times 10^{42}\, \rm erg\,s^{-1}$. Combining this luminosity with $M_{\bullet}=3.94_{-0.72}^{+0.90} \times 10^7 M_{\odot}$ measured from \rm{H$\beta$}, we obtain $\ifmmode {\dot{\mathscr{M}}} \else $\dot{\mathscr{M}}$\fi = 0.02_{-0.01}^{+0.01}$ (not including the uncertainty of the inclination), implying that NGC~4151 is a sub-Eddington accretor.
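The quoted accretion rate can be verified directly from the formula above with plain Python arithmetic, using the numbers given in the text:
\begin{verbatim}
import math

ell44 = 4.56e42 / 1.0e44   # L5100 in units of 1e44 erg/s
m7    = 3.94               # H-beta black hole mass in 1e7 Msun
cos_i = math.cos(math.radians(45.0))

mdot = 20.1 * (ell44 / cos_i) ** 1.5 / m7 ** 2
print(round(mdot, 3))      # ~0.021, i.e. sub-Eddington
\end{verbatim}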
\section{Discussion} \label{sec:discuss}
\subsection{Long-term Variability Trend} \label{sec:long-term}
We collected historical light curves of NGC~4151 spanning a temporal baseline of $\sim$53 years and intercalibrated them in Section~\ref{sec:intercali}, as displayed in Figure~\ref{fig:calilc}. The light curve consists of a major outburst and a series of minor flares, and some of the peaks and valleys are accompanied by changes of spectral type, i.e., multiple CL phenomena. For example, the spectral type of NGC~4151 was intermediate in \citet{Osterbrock1976}, but became close to type 2 in 1984 as the flux faded to a minimum \citep{Penston1984}. When the flux reached its maximum in 1996, the spectral type was Seyfert 1.5; it changed to Seyfert 1.8 at the two minimum states in 2001 and 2005 \citep{shapovalova2008}. Interestingly, during $1996-1999$, NGC~4151 had a strong continuum, but the emission-line flux was saturated and did not respond to the continuum variations \citep{shapovalova2008}.
Previous optical RM programs \citep[e.g.,][]{bentz2006, derosa2018} were carried out around flux minima, while our campaign falls on the rising phase near the second outburst. \citet{kaspi1996} conducted an RM campaign in 1993, when NGC~4151 was in the rising phase of the first outburst. Section~\ref{sec:comparison} compares these RM results in detail.
\subsection{Changing Kinematics of the BLR}
NGC~4151 has been monitored by several past RM campaigns, some of which provided velocity-resolved results that allow us to study its BLR properties. The velocity-delay map of \ifmmode {\rm~C\ IV} \else C~{\sc iv}\fi\,$\lambda$1549 was first measured in 1991 and showed an asymmetric BLR structure: the time lags are 2 days in the strong redshifted line wing and less than 10 days in the weak blueshifted line wing, indicating infalling gas motion \citep{Ulrich1996}. \citet{bentz2022} obtained the BLR kinematics by modeling the RM data from 2005; the motions are dominated by eccentric bound orbits, with 10\% of the orbits being near-circular. In 2012, the \rm{H$\beta$}\ BLR structure was investigated and found to be in virialized motion \citep{derosa2018}. However, on inspecting their result, we find that at the same absolute velocity the redshifted bins show larger lags than the blueshifted bins, which is a signature of outflow. In the present campaign, we measure velocity-resolved delays for multiple emission lines and find coexisting virial and infall signatures. In general, the kinematics of the BLR have changed for unknown reasons, possibly reflecting an evolution of the BLR.
\subsection{Influence of Time Lags between UV and B Bands} \label{sec:uv/optical}
In 2016, a 69-day multi-band monitoring campaign of NGC~4151, covering the X-ray, UV, and optical bands, was carried out with {\it Swift} \citep{edelson2017}. The time delay between the $uvw2$ band (central wavelength 1928 \AA) and the $b$ band (central wavelength 4395 \AA) is $0.83_{-0.34}^{+0.32}$ days, which we denote $\tau_{\rm b-uv}$. This value can be regarded as the light-travel size of the $B$-band emitting region relative to the ionizing source, which should not be ignored when analyzing lags between the BLR and the ionizing source. To date, there is no clear evidence that the size of the $B$/optical emitting region changes significantly. Therefore, we add $\tau_{\rm b-uv}$, corrected to the rest frame, to our measured $\tau_{\rm Mean}$ and denote the sum $\tau_{\rm Tot}$. We then use $\tau_{\rm Tot}$, the FWHM, and $f=1.3$ to compute the black hole mass, denoted $M_{\bullet}^{'}$ in Table~\ref{table:width}.
The ratios of $\tau_{\rm Tot}$ to $\tau_{\rm Mean}$ exceed unity by 0.108, 0.133, 0.146, 0.521, and 1.815 for \rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}, \ifmmode {\rm He\ I} \else He~{\sc i}\fi, and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi, respectively, and the black hole masses increase by the same fractions. The UV/optical lag thus has a significant impact on the \ifmmode {\rm He\ I} \else He~{\sc i}\fi\ and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ results, but less impact on those of the Balmer lines. Moreover, accounting for the UV/optical lag reduces the discrepancy among the black hole masses measured from the different emission lines.
\subsection{Virial Test} \label{sec:vr}
To investigate the motion of the BLR in NGC~4151, we check whether it is governed by the gravity of the black hole and obeys the virial relation ($\Delta V \propto \tau^{-0.5}$). First, we test this relation with our own estimates of time lags and velocities. A regression of $\Delta V \propto \tau^{\alpha}$ is run for $\tau_{\rm Mean}$ and $\tau_{\rm Tot}$ against the FWHMs measured from the mean spectrum. The best fits give slopes of $\alpha=-0.15 \pm 0.04$ and $-0.21 \pm 0.06$ for the $\tau_{\rm Mean}$ and $\tau_{\rm Tot}$ data, respectively (see Figure~\ref{fig:vr}). Both deviate from the virial slope of $\alpha=-0.5$; using $\tau_{\rm Tot}$ brings the slope closer to the virial value, but a deviation remains, which might be caused by the small number of data points. We therefore collected time lags and line widths from UV emission-line measurements and other optical RM results; our data and the collected data are listed in Table~\ref{table:comparison}.
Because the lags of the UV emission lines are measured against the UV continuum and are therefore unaffected by the accretion disk size, we add $\tau_{\rm b-uv}$ only to the lags of the optical lines. However, most previous studies provide only the emission-line widths of the rms spectrum; for comparison with them, we also present the rms $\sigma_{\rm line}$. As for the mean spectrum in Section~\ref{sec:bhmass}, the line widths and their uncertainties are assessed by bootstrapping the rms spectrum 1000 times. For each individual rms spectrum, $\sigma_{\rm line}$ is measured directly after subtracting the underlying continuum, fitted as a straight line through two continuum windows on either side of the line. The median and standard deviation of the $\sigma_{\rm line}$ distribution, after subtracting the instrumental broadening, are taken as the final value and its error (see Table~\ref{table:comparison}). It should be stressed, however, that narrow-line residuals also appear in the rms spectrum, adding uncertainty to the $\sigma_{\rm line}$ estimates. Fitting all the $\sigma_{\rm line}$ and time-lag data in Table~\ref{table:comparison} gives a slope of $-0.36 \pm 0.10$ (see Figure~\ref{fig:vr}), marginally consistent with the expected $\alpha=-0.5$ within the uncertainty, although with sizable scatter. The data points are distributed around the $\Delta V \propto \tau^{-0.5}$ relation, implying that the BLR of NGC~4151 is basically in virialized motion.
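The regression used here amounts to a straight-line fit in log-log space; a minimal \texttt{numpy} sketch is given below. It is unweighted, whereas an error-weighted or orthogonal-distance regression would be preferable when both axes carry uncertainties.
\begin{verbatim}
import numpy as np

def virial_slope(tau, width):
    """Least-squares slope of log(width) vs. log(tau);
    -0.5 is expected for purely virialized motion."""
    slope, intercept = np.polyfit(np.log10(np.asarray(tau)),
                                  np.log10(np.asarray(width)), 1)
    return slope, intercept
\end{verbatim}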
\begin{deluxetable}{lccccccccc}[!ht]
\tablewidth{\textwidth}
\tabletypesize{\scriptsize}
\tablecaption{Rest-frame Lags, rms $\sigma_{\rm line}$ and Black Hole Masses \label{table:comparison}}
\tablewidth{\textwidth}
\tablehead{
\multirow{2}{*}{Method} &
\multicolumn{1}{c}{Lag}&
\multicolumn{1}{c}{rms $\sigma_{\rm line}$}&
\multicolumn{1}{c}{$M_{\bullet}~^d$}&
\multirow{2}{*}{Reference}\\
\colhead{}&
\multicolumn{1}{c}{(day)}&
\multicolumn{1}{c}{($\rm km\,s^{-1}$)}&
\multicolumn{1}{c}{($\times 10^7 M_{\odot}$)}
}
\startdata
\rm{H$\alpha$}\ RM & $7.63_{-2.62}^{+1.85}$ & $1772\pm 59~^b$ & $2.94_{-1.03}^{+0.74}$ & This work \\
\rm{H$\beta$}\ RM & $6.21_{-1.13}^{+1.41}$ & $2020\pm 76~^b$ & $3.11_{-0.61}^{+0.75}$ & This work \\
\rm{H$\gamma$}\ RM & $5.67_{-1.94}^{+1.64}$ & $2359\pm 73~^b$ & $3.88_{-1.35}^{+1.15}$ & This work \\
\ifmmode {\rm He\ I} \else He~{\sc i}\fi\ RM & $1.59_{-1.11}^{+0.86}$ & $2004\pm 48~^b$ & $0.78_{-0.55}^{+0.43}$ & This work \\
\ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ RM & $0.46_{-1.06}^{+1.22}$ & $3172\pm 82~^b$ & $0.56_{-1.31}^{+1.51}$ & This work \\
\rm{H$\alpha$}\ RM & $11.00_{-3.10}^{+4.10}$ & $1721\pm 47~^b$ & $4.01_{-1.15}^{+1.51}$ & 1, 3\\
\rm{H$\beta$}\ RM & $11.50_{-3.70}^{+3.70}$ & $1958\pm 56~^b$ & $5.42_{-1.77}^{+1.77}$ & 1, 3\\
\rm{H$\alpha$}\ RM & $3.20_{-1.70}^{+1.90}$ & $2422\pm 79$ & $2.31_{-1.24}^{+1.38}$ & 2, 3\\
\rm{H$\beta$}\ RM & $3.10_{-1.30}^{+1.30}$ & $1914\pm 42$ & $1.40_{-0.59}^{+0.59}$ & 2, 3\\
\rm{H$\beta$}\ RM & $6.57_{-0.72}^{+1.12}$ & $2680\pm 64$ & $5.80_{-0.69}^{+1.03}$ & 4\\
\rm{H$\beta$}\ RM & $6.59_{-0.21}^{+0.19}$ & $1940\pm 22$ & $3.05_{-0.12}^{+0.11}$ & 5\\
\ifmmode {\rm~C\ IV} \else C~{\sc iv}\fi\ RM & $3.43_{-1.24}^{+1.42}$ & $5698\pm 245$ & $13.69_{-5.09}^{+5.79}$ & 6, 8 \\
\ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ RM $^a$ & $3.46_{-1.60}^{+1.96}$ & $5013\pm 323$ & $10.69_{-5.13}^{+6.21}$ & 6, 8 \\
\ifmmode {\rm~C\ III]} \else C~{\sc iii}]\fi\ RM & $6.88_{-3.82}^{+4.56}$ & $2553\pm 307$ & $5.51_{-3.34}^{+3.89}$ & 6, 8 \\
\ifmmode {\rm~Mg\ II} \else Mg~{\sc ii}\fi\ RM & $6.81_{-2.09}^{+1.73}$ & $2581\pm 179$ & $5.58_{-1.88}^{+1.61}$ & 6, 8 \\
\ifmmode {\rm~C\ IV} \else C~{\sc iv}\fi\ RM & $3.27_{-0.91}^{+0.83}$ & $5140\pm 113~^c$ & $10.62_{-2.99}^{+2.74}$ & 7, 8 \\
\ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ RM $^a$ & $2.59_{-1.21}^{+1.10}$ & $4530\pm 92$ & $6.54_{-3.06}^{+2.79}$ & 7, 8 \\
\ifmmode {\rm~C\ III]} \else C~{\sc iii}]\fi\ RM & $3.44_{-1.22}^{+1.51}$ & $2817\pm 81$ & $3.36_{-1.21}^{+1.49}$ & 7, 8 \\
\ifmmode {\rm~Mg\ II} \else Mg~{\sc ii}\fi\ RM & $5.33_{-1.76}^{+1.86}$ & $2721\pm 141$ & $4.85_{-1.68}^{+1.77}$ & 7, 8 \\
Gas dynamics & --- & --- &$3.6_{-2.6}^{+0.9}$ & 9, 11 \\
Stellar dynamics & --- & --- & $4.27_{-1.31}^{+1.31}$ & 10, 11 \\
Stellar dynamics & --- & --- & $0.25-3.0$ & 11 \\
RM modeling & --- & --- & $1.66_{-0.34}^{+0.48}$ & 12 \\
\enddata
\tablecomments{References: (1) \citet{maoz1991}, (2) \citet{kaspi1996}, (3) \citet{peterson2004}, (4) \citet{bentz2006}, (5) \citet{derosa2018}, (6) \citet{Clavel1990}, (7) \citet{Ulrich1996}, (8) \citet{Metzroth2006}, (9) \citet{hicks2008}, (10) \citet{onken2014}, (11) \citet{roberts2021}, (12) \citet{bentz2022}. \\
$^a$ Here \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ refers to the \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\,$\lambda$1640 line.\\
$^b$ The value is uncertain, since the rms spectrum incorporates narrow-line residuals, which make the line-width measurement less reliable.\\
$^c$ This value is also uncertain, because the \ifmmode {\rm~C\ IV} \else C~{\sc iv}\fi\ line is strongly self-absorbed.\\
$^d$ The values of $M_{\bullet}$ measured by RM are corrected by $f=6.3$, which is much larger than $f=1.3$ because $f=6.3$ corresponds to $\sigma_{\rm line}$ rather than FWHM.}
\end{deluxetable}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.49\textwidth]{FWHM-lag.pdf}
\includegraphics[width=0.49\textwidth]{disper-all-lag.pdf}
\caption{Line width vs. time lag. Left panel: our measured FWHMs vs. time lags. The blue points use $\tau_{\rm Mean}$ and the red points use $\tau_{\rm Mean}$ plus $\tau_{\rm b-uv}$; the blue and red dashed lines are the best fits, with slopes of $-0.15 \pm 0.04$ and $-0.21 \pm 0.06$, respectively. Right panel: rms $\sigma_{\rm line}$ vs. time lag for all the data; the black dashed line is the best fit, with a slope of $-0.36\pm 0.10$. The black solid line in each panel shows the best fit with the slope fixed to the virial value of $-0.5$.}
\label{fig:vr}
\end{figure*}
\subsection{Comparison with Previous Results} \label{sec:comparison}
We compare our measurements of the broad-line lags and black hole mass of NGC~4151 with previously published results. Multiple RM campaigns for NGC~4151 have measured time lags of the optical and/or UV emission lines. In the 1988 monitoring, the time lag was $9\pm 2$ days for both \rm{H$\alpha$}\ and \rm{H$\beta$}\ \citep{maoz1991}. \citet{kaspi1996} measured lags of 0--3 days for \rm{H$\alpha$}\ and \rm{H$\beta$}\ with respect to the continuum variations in 1993. These two data sets were later reanalyzed by \citet{peterson2004}. \citet{bentz2006} and \citet{derosa2018} performed well-sampled RM campaigns in 2005 and 2012, obtaining rest-frame \rm{H$\beta$}\ lags of $6.57_{-0.76}^{+1.12}$ and $6.59_{-0.21}^{+0.19}$ days, respectively. Our \rm{H$\alpha$}\ and \rm{H$\beta$}\ lags are basically consistent with these previous results. Additionally, \citet{Clavel1990} and \citet{Ulrich1996} performed UV RM programs, and \citet{Metzroth2006} reanalyzed their data to obtain the corresponding lags and line widths.
On the other hand, the black hole mass of NGC~4151 has been measured with different methods: gas dynamical modeling \citep{hicks2008}, stellar dynamical modeling \citep{onken2014, roberts2021}, the RM method \citep{peterson2004, bentz2006, Metzroth2006, derosa2018}, and RM data modeling \citep{bentz2022}. We can therefore compare these measurements to check the reliability of the different methods; the resulting masses are listed in Table~\ref{table:comparison}. The dynamical modeling results were homogenized by \citet{roberts2021} using the Cepheid distance of $15.8 \pm 0.4$~Mpc \citep{yuan2020}. Because different RM projects used different virial coefficients, we uniformly adopt $f =6.3$ from \citet{ho2014} to rescale the RM results, and for consistency we also give the values of $M_{\bullet}$ estimated with the rms $\sigma_{\rm line}$ measured in Section~\ref{sec:vr} (see Table~\ref{table:comparison}). The black hole masses determined by the different methods are consistent within the errors, although the results from the UV lines are overall larger than the others, in some cases approaching an order of magnitude.
\section{Conclusion} \label{sec:conclusion}
Here, we report a new RM campaign on NGC~4151 and the detailed results are summarized below.
\begin{enumerate}
\item We measure the time delays between multiple broad emission lines and the continuum simultaneously: $7.63_{-2.62}^{+1.85}$, $6.21_{-1.13}^{+1.41}$, $5.67_{-1.94}^{+1.65}$, $1.59_{-1.11}^{+0.86}$, and $0.46_{-1.06}^{+1.22}$ days for the broad \rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}, \ifmmode {\rm He\ I} \else He~{\sc i}\fi, and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ lines, respectively. These lags satisfy $\tau_{\rm H\alpha} > \tau_{\rm H\beta} > \tau_{\rm H\gamma} > \tau_{\rm He\ I} >\tau_{\rm He\ II}$, a radial stratification likely set by optical depth and ionization energy. The ratios of the lags of \rm{H$\alpha$}, \rm{H$\beta$}, \rm{H$\gamma$}, \ifmmode {\rm He\ I} \else He~{\sc i}\fi, and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ relative to \rm{H$\beta$}\ are $1.23 : 1.00 : 0.91 : 0.26 : 0.07$. Taking into account the time lag of the optical band relative to the UV band changes the lags of \ifmmode {\rm He\ I} \else He~{\sc i}\fi\ and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ the most.
\item This is the first time that velocity-resolved time lags of multiple optical emission lines have been measured for NGC~4151. The velocity-resolved structures of \rm{H$\alpha$}, \rm{H$\beta$}, and \ifmmode {\rm He\ I} \else He~{\sc i}\fi\ are consistent with one another: the lags of the blue wing are overall larger than those of the red wing, and the lags of the line core are larger than those of both wings, indicating that virialized and infalling motions coexist. \rm{H$\gamma$}\ may be affected by \ifmmode {\rm~[O\ III]} \else [O~{\sc iii}]\fi\,$\lambda$4363, and \ifmmode {\rm He\ II} \else He~{\sc ii}\fi\ is weak and likely contaminated by \ifmmode {\rm Fe\ II} \else Fe~{\sc ii}\fi, making the results for these two lines less reliable. Compared with previous velocity-resolved structures, the kinematics of the BLR changed from inflow, to virialization with outflow, and then to the present coexistence of virial and inflow signatures, indicating evolving BLR kinematics.
\item Combining our measurements with the collected RM results, including time delays and line widths, we verify that all the observational lag--velocity data for NGC~4151 basically follow the virial relation $\Delta V \propto \tau^{-0.5}$.
\item We measure the black hole mass of NGC~4151 from the \rm{H$\beta$}\ time lag and the FWHM of the mean spectrum, adopting $f =1.3$, and obtain $ 3.94_{-0.72}^{+0.90} \times 10^7 M_{\odot}$ (see the sketch after this list for the underlying virial arithmetic). Comparing our measurement with previous results, we find that they are consistent within the errors. Using the black hole mass measured from \rm{H$\beta$}, we calculate the dimensionless accretion rate to be $0.02_{-0.01}^{+0.01}$, indicating that NGC~4151 is in a sub-Eddington accretion state.
\end{enumerate}
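The numbers above combine through the standard virial estimate $M_{\bullet} = f\,c\,\tau\,(\Delta V)^2/G$, which also underlies the virial relation $\Delta V \propto \tau^{-0.5}$ quoted above. The following minimal sketch reproduces the arithmetic; the FWHM value used here is an illustrative placeholder (roughly the value required to reproduce the quoted mass), not a measurement reported in this section.
\begin{verbatim}
# Minimal sketch of the virial black hole mass estimate
#   M_BH = f * c * tau * (Delta V)^2 / G,
# evaluated in SI units.  The FWHM below is an illustrative
# placeholder, not a value measured in this campaign.
G     = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c     = 2.998e8          # speed of light [m/s]
M_sun = 1.989e30         # solar mass [kg]

f    = 1.3               # virial factor adopted for the FWHM
tau  = 6.21 * 86400.0    # H-beta rest-frame lag [s] (6.21 days)
fwhm = 5.0e6             # assumed H-beta FWHM [m/s] (~5000 km/s)

m_bh = f * c * tau * fwhm**2 / G / M_sun
print("M_BH ~ %.2e M_sun" % m_bh)   # ~3.9e7 M_sun
\end{verbatim}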
NGC~4151 is one of the best-studied AGNs, with multiple RM observations. Our measurement of the \rm{H$\beta$}\ time lag is consistent with that of \citet{derosa2018}, but the velocity-resolved results are quite different, which hints at an evolution of the kinematics of the BLR. This campaign caught NGC~4151 in the rising phase of its second outburst, and it is worth continuing to monitor NGC~4151 to explore the origin of its variable kinematics.
\vspace{5mm}
We thank the referee for useful comments that improved the manuscript. We are grateful to Zi-Xu Yang, Jun-Rong Liu, and Zhu-Heng Yao for their time and effort in our observations. This work is supported by the National Key R\&D Program of China with No. 2021YFA1600404, the National Natural Science Foundation of China (NSFC; grants No. 11991051, 12103041, 12073068), the joint fund of Astronomy of the NSFC and the CAS (grant No. U1931131), and the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A06.
We acknowledge the support of the staff of the Lijiang 2.4m telescope. Funding for the telescope has been provided by the Chinese Academy of Sciences and the People's Government of Yunnan Province.
The Zwicky Transient Facility Collaboration is supported by the U.S. National Science Foundation through the Mid-Scale Innovations Program (MSIP). This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\vspace{5mm}
\facilities{Lijiang: 2.4 m}
\software{IRAF \citep{tody1986}, DASpec (\url{https://github.com/PuDu-Astro/DASpec}), PyCALI \citep{li2014L}, MICA \citep{li2016}, JAVELIN \citep{zu2011}
}
\section{Introduction}
The notion of ball spaces, that is, pairs consisting of a set and a family of some of its subsets, first appeared in \cite{KK1}, where it was used to prove fixed point theorems in metric, ultrametric and topological spaces. This line of research was continued in \cite{BCLS,CKK,KKP}. In the present paper we generalize some of the results obtained in the aforementioned articles to semimetric spaces.
Fixed point theorems are tools used in many areas of mathematics and beyond. Since semimetric spaces, that is, spaces whose distance function need not satisfy the triangle inequality, have turned out to be important in applications, for example in computer science (see the section on applications in \cite{LNT}), fixed point theorems in such spaces have also become important. This topic was investigated, for example, in \cite{ADKR,BP,CJT,Su,M}.
However, the use of ball space theory in this area is a novelty (except for metric and ultrametric spaces, which are obviously semimetric spaces as well).
The paper is organized as follows. In Section 2 we present notation and important definitions. In Section 3 we prove a new fixed point theorem for semimetric spaces using the theory of ball spaces. Continuing these ideas, in Section 4 we present some generalizations of the Caristi-Kirk fixed point theorem for b-metric spaces. Finally, in Section 5, we introduce the notion of ball convergence and show its connection to the classical notions of convergence in topological and semimetric spaces.
\section{Preliminaries}
Let $\mathbb R_+ := [0, \infty)$.
Let $X$ be a set and let $d \colon X \times X \to \mathbb R_+$ be a function satisfying the following conditions:
\begin{itemize}
\item[(S1)] $\forall_{x,y \in X} \, d(x,y) = 0 \Leftrightarrow x=y;$
\item[(S2)] $\forall_{x,y \in X} \, d(x,y) = d(y,x).$
\end{itemize}
Then $d$ is called a semimetric and $(X,d)$ a semimetric space.
In metric spaces we have the additional condition called the triangle inequality:
\begin{itemize}
\item[(M)] $\forall_{x,y,z \in X} \, d(x,z) \leq d(x,y) + d(y,z).$
\end{itemize}
In semimetric spaces we will consider some weaker \textit{``triangle-like''} conditions. Let us begin by introducing some necessary notions.
Let $g \colon \mathbb R_+ \times \mathbb R_+ \to \mathbb R_+$. We say that $g$ is nonreducing if for any $a,b \in \mathbb R_+$ we have $g(a,b) \geq \max\{a,b\}$. We say that $g$ is nondecreasing if for any $a,b,c,d \in \mathbb R_+$ such that $a \leq c, b \leq d$ we have $g(a,b) \leq g(c,d)$. We call $g$ amenable if $g(a,b) = 0$ if and only if $a=b=0$.
A function $h\colon X \to (-\infty,+\infty]$ is called proper if $\{x\in X\colon h(x) \in \mathbb R \} \neq \emptyset$.
A function $g \colon \mathbb R_+ \times \mathbb R_+ \to \mathbb R_+$ is called semitriangular if it is nonreducing, nondecreasing, continuous at $(0,0)$ and amenable. We define the semimetric triangle condition (G) generated by a semitriangular mapping $g$ as follows:
\begin{itemize}
\item[(G)] $\forall_{x,y,z \in X} \, d(x,z) \leq g(d(x,y), d(y,z)).$
\end{itemize}
If $d$ is a semimetric satisfying condition (G), then the space $(X,d)$ will be called a (G)-semimetric space (compare the definition of a regular semimetric in \cite{BP}). Whenever we write about a semimetric triangle condition (G), we assume that it is generated by a semitriangular function $g \colon \mathbb R_+ \times \mathbb R_+ \to \mathbb R_+$.
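For instance, $g(a,b)=a+b$ recovers the classical triangle inequality (M), $g(a,b)=K(a+b)$ with $K\geq 1$ leads to the b-metric spaces considered in Section 4, and $g(a,b)=\max\{a,b\}$ yields ultrametric spaces; in each case it is straightforward to check that $g$ is nonreducing, nondecreasing, continuous at $(0,0)$ and amenable.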
Let $(X,d)$ be a (G)-semimetric space, $x \in X$ and $r > 0$. Set
$$B_r(x):= \{y \in X\colon d(x,y) \leq r\}$$
and
$$\mathring B(x,r):= \{y \in X\colon d(x,y) < r\}.$$
We define limits, Cauchy sequences and completeness in semimetric spaces analogously to the metric case (although it is important to keep in mind that the properties of these concepts may not be the same as in the metric setting). We say that a semimetric space is semicomplete if every Cauchy sequence has a convergent subsequence (see \cite{Su}). A semimetric $d$ is said to be $1$-continuous (see \cite{GS}) if for any sequence $(x_n)$ in $X$ convergent to some $x \in X$ we have
$$\forall_{y \in X} \,\,\lim\limits_{n \to \infty} d(x_n,y) = d(x,y).$$
Lastly, a semimetric $d$ is uniformly $1$-continuous if for any sequence $(x_n)$ in $X$ convergent to some $x \in X$ we have
$$\forall_{\varepsilon >0} \exists_{N \in \mathbb N} \forall_{n\geq N} \forall_{y \in X} \,\, |d(x_n,y)-d(x,y)|<\varepsilon.$$
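For instance, every metric is uniformly $1$-continuous, since the triangle inequality yields $|d(x_n,y)-d(x,y)|\leq d(x_n,x)$ for all $y \in X$.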
For any set $X$ we will denote by $\mathcal{P}(X)$ its power set, i.e. the family of all subsets of $X$.
Now, we will define a ball space. Let $X$ be a nonempty set and let $\mathcal{B} \subset \mathcal{P}(X)$ be nonempty. A pair $(X, \mathcal{B})$ is called a ball space and the elements of $\mathcal{B}$ are called balls. Any chain (that is, nonempty totally ordered subset) of $(\mathcal{B} ,\subset)$ is called a nest of balls. We say that a ball space is spherically complete if every nest of balls has a nonempty intersection.
Let $(X,d)$ be a (G)-semimetric space. Let $S \subset (0,\infty)$. Let
$$\mathcal{B}_S := \{B_r(x) \colon x \in X, r \in S\}.$$
For any set $A \subset \mathbb R$ we denote by $A^d$ the set of all accumulation points of $A$.
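For instance, the set $S=\{2^{-n}\colon n\in\mathbb N\}$ satisfies $S^d=\{0\}$; the corresponding family $\mathcal{B}_S$ then consists of all closed balls whose radii belong to this fixed null sequence.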
\section{A new fixed point theorem}
In this section we will provide a fixed point theorem for spherically complete ball spaces based on semimetrics. A discussion of its potential range of applications follows. We start with a result which generalizes Theorem 5.6 of \cite{CKK}, characterizing complete $1$-continuous (G)-semimetric spaces. The proof is similar.
\begin{theorem} \label{zup}
Let $(X,d)$ be a (G)-semimetric space, where $d$ is $1$-continuous. Then the following conditions are equivalent:
\begin{itemize}
\item[(1)] $(X,d)$ is semicomplete;
\item[(2)] $(X,d)$ is complete;
\item[(3)] for any $S \subset (0, \infty)$ such that $S^d = \{0\}$ the ball space $(X,\mathcal{B}_S)$ is spherically complete;
\item[(4)] there exists $S \subset (0, \infty)$ such that $S^d = \{0\}$ and the ball space $(X,\mathcal{B}_S)$ is spherically complete.
\end{itemize}
\end{theorem}
\begin{proof}
(1) $\Rightarrow$ (2) \\
Let $(x_n)$ be a Cauchy sequence in $X$. Since $(X,d)$ is semicomplete, there is a subsequence $(x_{n_k})$ which is convergent to some $x \in X$. Take $\varepsilon > 0$. By the continuity of $g$ at $(0,0)$ and the fact that $g(0,0)=0$, there is $\delta >0$ such that for any $v \in (0,\delta]$ we have $g(v,v) < \varepsilon$. Let $N_1 \in \mathbb N$ be such that $d(x_{n_k},x) < \delta$ for all $k \geq N_1$. Let $N_2 \in \mathbb N$ be such that $d(x_n,x_m) < \delta$ for all $n,m \geq N_2$. Put $N = \max\{N_1,N_2\}.$ Then, using the fact that $g$ is nondecreasing, we have for $n \geq N$
$$d(x_n,x) \leq g(d(x_n,x_{n_N}),d(x_{n_N},x)) \leq g(\delta,\delta) < \varepsilon.$$
Hence $\lim\limits_{n\to \infty}x_n = x$, so $(X,d)$ is complete.
\\
(2) $\Rightarrow$ (3)
Assume that $(X,d)$ is complete and let $S\subset (0,\infty)$ be such that $S^d=\{0\}$ (note that a subset of $\mathbb R$ with a single accumulation point is at most countable). Now let $(T,<)$ be a totally ordered set of indices and let $\mathcal{L}:=\{B_t \, : \, t\in T \}$ be a nest of balls, where $B_t:=B_{r_t}(x_t)$ for some $r_t\in S$ and $x_t\in X$, $t\in T$. Consider the following two cases:
\begin{itemize}
\item There exists $t_0\in T$ such that for all $t\geq t_0$ we have $r_t\geq r$ for some fixed $r>0$.
Since $B_t\supset B_{t_0}$ for all $t<t_0$, we have $\bigcap_{t\in T} B_t = \bigcap_{t\geq t_0} B_t$.
Assume that $\bigcap_{t\geq t_0} B_t = \emptyset$. In particular, this implies the existence of $t_1>t_0$ such that $x_{t_0}\notin B_{t_1}$. Therefore, we have the following two inequalities
\[
r_{t_1}<d\left(x_{t_0},x_{t_1}\right)\leq r_{t_0}.
\]
The first inequality stems from the fact that $x_{t_0}\notin B_{r_{t_1}}(x_{t_1})$ and the latter comes from the inclusion $ B_{r_{t_1}}(x_{t_1}) \subset B_{r_{t_0}}(x_{t_0})$, which, in particular, implies $x_{t_1} \in B_{r_{t_0}}(x_{t_0})$. We can now proceed inductively: having defined $x_{t_n}$ as the center of the $t_n$-th ball, since $\bigcap_{t\geq t_{n}} B_{t}=\emptyset$, there exists $t_{n+1}>t_n$ such that $x_{t_n}\notin B_{r_{t_{n+1}}}(x_{t_{n+1}})$. The same argument as before then shows that
\[
r_{t_{n+1}}<d\left(x_{t_n},x_{t_{n+1}}\right)\leq r_{t_n}.
\]
As a result, we end up with a strictly decreasing sequence $(r_{t_n})_{n\in\mathbb N}$ bounded from below by $r$. As such, $(r_{t_n})$ converges to some $r'\geq r>0$. Since the sequence $(r_{t_n})_{n\in\mathbb N}$ is one-to-one, $r'$ is an accumulation point of $S$. Thus we arrive at a contradiction, as $S^d=\{0\}$. Hence $\bigcap_{t\in T} B_t \neq \emptyset$.
\item There exists a sequence $(t_n)_{n\in\mathbb N}$ such that $r_{t_n}\to 0$ and for every $t\in T$ there exists $n\in\mathbb N$ such that $t_n>t$. In this case the sequence $(x_{t_n})_{n\in\mathbb N}$ is a Cauchy sequence. Indeed, if we have natural indices $n<m$, then
\[
d(x_{t_n},x_{t_m})\leq r_{t_n} \xrightarrow{n,m\to \infty} 0,
\]
where the first inequality follows from inclusion $B_{r_{t_n}}(x_{t_n}) \supset B_{r_{t_m}}(x_{t_m})$. As $(X,d)$ is complete, the discussed sequence tends to some $x\in X$. From $1$-continuity of $d$ we then obtain that for any fixed $n_0\in \mathbb N$ we have
\[
d(x,x_{t_{n_0}})= \lim_{k\to\infty} d(x_{t_k},x_{t_{n_0}}) \leq r_{t_{n_0}},
\]
hence $x\in B_{t_n}$ for each $n$. One can easily see that $x\in \bigcap_{t\in T} B_t$, since for every $t\in T$ there exists $n\in\mathbb N$ such that
\[
B_t \supset B_{t_n} \ni x.
\]
Thus, the sought intersection is nonempty, as it contains the element $x$.
\end{itemize}
\noindent
(3) $\Rightarrow$ (4) Obvious. \\
(4) $\Rightarrow$ (1)
Let $S \subset (0, \infty)$ be such that $S^d = \{0\}$ and the ball space $(X,\mathcal{B}_S)$ is spherically complete. Fix a Cauchy sequence $(x_n)$ in $X$. We will inductively define a sequence $(s_n)$ in $S$ such that $s_n \geq g(s_{n+1},s_{n+1})$ for $n \in \mathbb N$. Let $s_1 \in S$. Assume that for some $n \in \mathbb N$ we have defined $s_i$ for $i \leq n$. Using continuity of $g$ at $(0,0)$ and the fact that $g(0,0)=0$ we can find $t >0$ such that for any $v \in (0,t]$ we have $g(v,v) < s_n$. Since $0$ is an accumulation point of $S$, we can find $s_{n+1} \in S$ such that $s_{n+1} < t$. Then $s_n \geq g(s_{n+1},s_{n+1})$.
Using induction, we will now define an increasing sequence of natural numbers $(n_i)$. Let $n_1 \in \mathbb N$ be such that $d(x_n,x_m) < s_2$ for all $n,m \geq n_1$. Assume that for some $j \in \mathbb N$ we have defined $n_i$ for $i \leq j$. Let $n_{j+1} > n_j$ be such that $d(x_n,x_m) < s_{j+2}$ for all $n,m \geq n_{j+1}$. For $i \in \mathbb N$ let
$$B_i:=B_{s_i}(x_{n_i}).$$
We will show that the balls $B_i$ form a nest. Let $i \in \mathbb N$ and $y \in B_{i+1}$. Then $d(y,x_{n_{i+1}}) \leq s_{i+1}$. We also have that $d(x_{n_i},x_{n_{i+1}}) < s_{i+1}$, because $n_i,n_{i+1} \geq n_i.$ Using the fact that $g$ is nondecreasing we have
$$d(x_{n_i},y) \leq g(d(x_{n_i},x_{n_{i+1}}),d(y,x_{n_{i+1}})) \leq g(s_{i+1},s_{i+1}) \leq s_i.$$
Hence $y \in B_{s_i}(x_{n_i}) = B_i$, so $B_{i+1} \subset B_i$. Since $(X,\mathcal{B}_S)$ is spherically complete, $\bigcap_{i\in \mathbb N} B_i \neq \emptyset$. Let $x \in \bigcap_{i\in \mathbb N} B_i$. Then $x \in B_i$ for all $i \in \mathbb N$, and thus $d(x_{n_i},x) \leq s_i$. Hence $\lim\limits_{i\to \infty}d(x_{n_i},x) = 0$, because $s_i \to 0$. Therefore, $(x_{n_i})$ is a convergent subsequence of $(x_n)$, which finally implies that $(X,d)$ is semicomplete.
\end{proof}
The next result is a theorem from \cite{CKK}, which we will use to obtain a fixed point theorem for semimetric spaces. Before proceeding, let us recall two useful definitions.
Let $f\colon X \to X$. We say that a subset $A \subset X$ is $f$-closed if $f(A) \subset A$.
We say that a subset $A \subset X$ is $f$-contracting if $f(A) \subsetneq A$.
\begin{theorem} \cite{CKK} \label{ps}
Let $(X,\mathcal{B})$ be a spherically complete ball space and let $f\colon X \to X$. Then
\begin{itemize}
\item[(i)] If every $f$-closed $A \subset X$ contains an $f$-contracting ball $B \in \mathcal{B}$ as a subset (i.e., $B\subseteq A$), then for every $f$-closed $T\subset X$ there exists $x \in T$ such that $f(x)=x.$
\item[(ii)] If every $f$-closed $A \subset X$ is an $f$-contracting ball, then there is a unique $x \in X$ such that $f(x)=x.$
\end{itemize}
\end{theorem}
Combining Theorem \ref{zup} and Theorem \ref{ps} immediately yields the following result:
\begin{corollary}\label{deskorollary}
Let $(X,d)$ be a semicomplete (G)-semimetric space, where $d$ is $1$-continuous and let $f\colon X \to X$.
\begin{itemize}
\item[(i)] If there is $S \subset (0, \infty)$ such that $S^d = \{0\}$ and for any $f$-closed $A \subset X$ there exist $r \in S$ and $y \in X$ such that $B_r(y)$ is $f$-contracting and $B_r(y)\subset A$, then for any $f$-closed $T\subset X$ there exists $x \in T$ such that $f(x)=x.$
\item[(ii)] If there is $S \subset (0, \infty)$ such that $S^d = \{0\}$ and every $f$-closed subset of $X$ is $f$-contracting and of the form $B_r(y)$, where $y \in X$ and $r \in S$, then there is a unique $x \in X$ such that $f(x)=x.$
\end{itemize}
\end{corollary}
As a closing comment to this section, let us discuss the problems which can possibly appear when trying to apply either Theorem \ref{ps} or Corollary \ref{deskorollary}.
Let us consider $f\colon X\to X$ and an $f$-closed set $A$. If $A$ contains a point $x$ which is not a fixed point of $f$, then it necessarily contains all iterates of the form $f^n(x)$, $n\in\mathbb N$ (here $f^n$ denotes the $n$-fold composition of $f$). Notice that for such a point $x$ the set $C$ given by
\[
C:=\left\{f^n(x) \, : \, n=0,1,2,\dots \right\}
\]
is $f$-closed. Indeed, $f[C]= \left\{f^n(x) \, : \, n = 1,2,3,\dots \right\}\subset C$. In order to apply either Theorem \ref{ps} or Corollary \ref{deskorollary} to $f$, we need to establish that every such set $C$ contains (or is) an $f$-contracting ball. Thus, we arrive at the following.
\begin{remark}
If every ball contains an element which is not an iterate of its center (unless the center is a fixed point), i.e., if for $f\colon X\to X$, $f\neq \operatorname{id}$, the following condition holds:
\begin{equation}\label{count}
\forall_{\stackrel{x\in X}{x\neq f(x)}} \, \forall_{r>0} \, \exists_{y\in B_r(x)} \, \forall_{n\in\mathbb N}\, \, f^n(x)\neq y \, \land \, f^n(y)\neq x,
\end{equation}
then neither the assumptions of Theorem \ref{ps} nor those of Corollary \ref{deskorollary} are satisfied.
\end{remark}
\begin{proof}
Suppose that \eqref{count} holds. Since $f$ is not the identity, there exists $x_0\neq f(x_0)$. Recall that the set $C:=\{f^n(x_0) \, : \, n =0,1,2,\dots\}$ is $f$-closed, hence if the assumptions of the discussed results are to be satisfied, $C$ at least has to contain an $f$-contracting ball. In such a case, let $y=f^k(x_0)\in C$ be the center of that ball and let $r>0$ be its radius. Then, due to assumption \eqref{count}, at least one element $z\in B_r(y)$ is not of the form $f^l(x_0)$ (if it were, then either $l\geq k$ and $z$ would be an iterate of $y$, or $l<k$ and $y$ would be an iterate of $z$, contradicting the choice of $z$). Such an element cannot be contained in $C$, hence $B_r(y)\not\subset C$ (in particular, since $y$ was arbitrary, $C$ is not a ball). Thus $C$ does not contain an $f$-contracting subset which is a ball.
\end{proof}
Despite this flaw, we consider both results elegant and worthy of discussion. We also hope that this explanation somewhat clarifies the reason behind the scarcity of nontrivial examples in this section.
\section{Generalizations of Caristi-Kirk Fixed Point Theorem}
In this part of the paper, we present several generalizations of the celebrated Caristi-Kirk fixed point theorem. We start by introducing some additional necessary notions and then move on to the new results. The proofs in this section are based on the methods used in \cite{KKP} and \cite{BCLS}.
In this section we consider one of the most popular classes of semimetric spaces, namely b-metric spaces. Let $K \geq 1.$ We say that a semimetric space $(X,d)$ is a b-metric space with a constant $K$ if
\begin{itemize}
\item[$(B_K)$] $\forall_{x,y,z \in X} \,\,d(x,z)\leq K(d(x,y)+d(y,z))$.
\end{itemize}
Thus, the function $g$ for b-metric spaces is of the form $g(a,b) = K(a+b)$. Of course, for $K=1$ we obtain a metric space.
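A standard example of a b-metric which is not a metric, recalled here for illustration, is $d(x,y)=(x-y)^2$ on $\mathbb R$: the inequality $(a+b)^2\leq 2a^2+2b^2$ gives condition $(B_K)$ with $K=2$, while $d(0,2)=4>2=d(0,1)+d(1,2)$ shows that (M) fails. The short script below is only a numerical sanity check of this fact, not part of the formal development.
\begin{verbatim}
# Numerical sanity check (illustration only): d(x,y) = (x-y)^2 on R
# satisfies the relaxed triangle inequality (B_K) with K = 2 but
# fails the ordinary triangle inequality (M).
import random

def d(x, y):
    return (x - y) ** 2

K = 2
for _ in range(100000):
    x, y, z = (random.uniform(-10.0, 10.0) for _ in range(3))
    # (x-z)^2 <= 2*(x-y)^2 + 2*(y-z)^2, up to floating-point slack
    assert d(x, z) <= K * (d(x, y) + d(y, z)) + 1e-9

print(d(0, 2), "vs", d(0, 1) + d(1, 2))  # 4 vs 2: (M) fails
\end{verbatim}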
Similarly to \cite{KKP} and \cite{BCLS}, we define the Caristi-Kirk and Oettli-Th\'era ball spaces, adjusted here to b-metric spaces.
Let $\phi \colon X \to \mathbb R$. We say that $\phi$ is sequentially lower semicontinuous if for every $y \in X$ and for any sequence $(y_n)$ convergent to $y$ we have
$$\liminf\limits_{n\to \infty} \phi(y_n) \geq \phi(y).$$
A function $\phi$ is called a Caristi-Kirk function if it is sequentially lower semicontinuous and bounded from below.
Given any Caristi-Kirk function $\phi$ and $x \in X$ we define the $K$-Caristi-Kirk balls as the sets of the form:
$$B^{\phi}_x:= \{y \in X \colon d(x,y) \leq K\phi(x)-K\phi(y)\}.$$
Since obviously $x \in B^{\phi}_x$, the sets $B^{\phi}_x$ are nonempty.
So, we can consider the $K$-Caristi-Kirk ball space $(X, \mathcal{B}^{\phi}),$ where
$$\mathcal{B}^{\phi}:=\{B^{\phi}_x\colon x \in X\}.$$
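To get a feeling for these balls, consider $X=\mathbb R$ with the usual metric (so that $K=1$) and the Caristi-Kirk function $\phi(x)=|x|$. A direct computation shows that $B^{\phi}_x=[0,x]$ for $x\geq 0$ and $B^{\phi}_x=[x,0]$ for $x\leq 0$, so the balls shrink toward the set of minima of $\phi$; this behaviour is consistent with Corollary \ref{wn lem dla CK} below.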
By $\overline{\mathbb R}$ we denote the set $\mathbb R\cup \{+\infty\}.$
We say that a function $\Phi\colon X \times X \to \overline{\mathbb R}$ is a $K$-Oettli-Th\'era function if:
\begin{itemize}
\item[(i)] for every $x \in X$ the function $\Phi(x,\cdot)\colon X \to \overline{\mathbb R}$ is sequentially lower semicontinuous;
\item[(ii)] $\forall_{x \in X} \,\,\Phi(x,x) = 0$;
\item[(iii)] $\forall_{x,y,z \in X} \,\,\Phi(x,z) \leq K(\Phi(x,y)+\Phi(y,z))$;
\item[(iv)] $\exists_{x_0 \in X} \,\, \inf\limits_{x\in X} \Phi(x_0,x) > - \infty.$
\end{itemize}
Every element $x_0$ satisfying (iv) will be called an Oettli-Th\'era element for $\Phi$.
Given any $K$-Oettli-Th\'era function $\Phi$ and $x \in X$ we define the $K$-Oettli-Th\'era balls as the sets of the form:
$$B^{\Phi}_x:= \{y \in X \colon d(x,y) \leq -\Phi(x,y)\}.$$
The sets $B^{\Phi}_x$ are nonempty (see Lemma \ref{lem1}).
So, we can consider the $K$-Oettli-Th\'era ball space $(X, \mathcal{B}^{\Phi}),$ where
$$\mathcal{B}^{\Phi}:=\{B^{\Phi}_x\colon x \in X\}.$$
For a fixed Oettli-Th\'era element $x_0$ for $\Phi$ we also define the ball space $(B^{\Phi}_{x_0},\mathcal{B}^{\Phi}_{x_0}),$ where
$$\mathcal{B}^{\Phi}_{x_0} = \{B^{\Phi}_y\colon y \in B^{\Phi}_{x_0}\}.$$
We will also need the notion of a strongly contractive ball space. We say that a ball space $(X,\mathcal{B})$ is strongly contractive if there exists a family of balls $\{B_x \in \mathcal{B}\colon x \in X\}$ such that for any $x,y \in X$ the following conditions hold:
\begin{itemize}
\item[(1)] $x \in B_x$;
\item[(2)] $y \in B_x \Rightarrow B_y \subset B_x$;
\item[(3)] $y \in B_x\setminus\{x\} \Rightarrow B_y \subsetneq B_x$.
\end{itemize}
We conclude this preliminary part with the following fixed point result for spherically complete, strongly contractive ball spaces.
\begin{theorem} \cite{BCLS} \label{twosc}
Let $(X,\mathcal{B})$ be a spherically complete, strongly contractive ball space. Then for any $x \in X$ there is $a \in B_x$ such that $B_a = \{a\}$.
\end{theorem}
The following lemma provides some insight into $K$-Oettli-Th\'{e}ra ball spaces.
\begin{lemma} \emph{(}cf. \cite[Lemma 10]{BCLS}\emph{)} \label{lem1}
Let $(X,d)$ be a b-metric space with a constant $K \geq 1$ and let $\Phi \colon X \times X \to \overline{\mathbb R}$ be such that $\Phi(x,x) =0$ for all $x \in X$ and
$$\forall_{x,y,z \in X} \,\,\Phi(x,z) \leq K(\Phi(x,y)+\Phi(y,z)).$$ For any $x \in X$ let $B_x:= \{y \in X\colon d(x,y) \leq - \Phi(x,y)\}.$ Then for every $x \in X$ we have:
\begin{itemize}
\item[(1)] $x \in B_x$;
\item[(2)] if $y \in B_x$, then $B_y \subset B_x$;
\item[(3)] if $y \in B_x$, $y \neq x$, then $B_y \subsetneq B_x$ and $\Phi(x,y)<\Phi(y,x)$.
\end{itemize}
\end{lemma}
\begin{proof}
Fix $x \in X$.
Ad (1) We have $d(x,x) =0 = -\Phi(x,x)$, so $x \in B_x$.
Ad (2) Let $y \in B_x$ and $z \in B_y$. Then
$$d(x,y) \leq - \Phi(x,y)$$
and
$$d(y,z) \leq - \Phi(y,z).$$
By the assumption on $\Phi$, we have
$$d(x,z) \leq K(d(x,y)+d(y,z)) \leq K(-\Phi(x,y)-\Phi(y,z)) = -K(\Phi(x,y)+\Phi(y,z)) \leq - \Phi(x,z).$$
Hence $z \in B_x$, and consequently $B_y \subset B_x$.
Ad (3) Let $y \in B_x$, $y \neq x$. We will prove that $x \notin B_y$. On the contrary, suppose that $x \in B_y$. Then $d(y,x) \leq -\Phi(y,x)$. We have
$$0 < K(d(x,y)+d(y,x)) \leq K(-\Phi(x,y)-\Phi(y,x)) = - K(\Phi(x,y)+\Phi(y,x)) \leq -\Phi(x,x) = 0,$$
a contradiction. Hence $x \notin B_y$; together with (2) this gives $B_y \subsetneq B_x$, since $x \in B_x \setminus B_y$. Moreover, we have proved that $$-\Phi(y,x) < d(y,x)=d(x,y) \leq -\Phi(x,y),$$ that is, $\Phi(x,y)<\Phi(y,x)$.
\end{proof}
From this result we obtain, almost immediately, the following
\begin{corollary} \label{wstrcon}
Let $(X,d)$ be a b-metric space with a constant $K \geq 1$ and $\Phi \colon X\times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function. Then the space $(X,\mathcal{B}^{\Phi})$ is strongly contractive. Moreover, for any Oettli-Th\'era element $x_0$ for $\Phi$, the ball space $(B^{\Phi}_{x_0},\mathcal{B}^{\Phi}_{x_0})$ is also strongly contractive and $\mathcal{B}^{\Phi}_{x_0}=\{B \in \mathcal{B}^{\Phi}\colon B \subset B^{\Phi}_{x_0}\}.$
\end{corollary}
\begin{proof}
The first assertion follows directly from Lemma \ref{lem1}, when we consider the family $\{B^{\Phi}_x\in \mathcal{B}^{\Phi}\colon x \in X\}$. By $(2)$ of Lemma \ref{lem1}, for any $x \in B^{\Phi}_{x_0}$ we have $B_x^{\Phi} \subset B_{x_0}^{\Phi}$, so $\mathcal{B}^{\Phi}_{x_0}=\{B \in \mathcal{B}^{\Phi}\colon B \subset B^{\Phi}_{x_0}\}.$ Using Lemma \ref{lem1} again, we see that the family $\{B^{\Phi}_x\in \mathcal{B}^{\Phi}_{x_0}\colon x \in B^{\Phi}_{x_0}\}$ witnesses that the ball space $(B^{\Phi}_{x_0},\mathcal{B}^{\Phi}_{x_0})$ is also strongly contractive.
\end{proof}
The subsequent result highlights the connection between Caristi-Kirk functions and $K$-Oettli-Th\'{e}ra mappings.
\begin{lemma} \label{CK a OT}
Let $(X,d)$ be a b-metric space with a constant $K \geq 1$ and let $\phi \colon X \to \mathbb R$ be a Caristi-Kirk function. Then the function $\Phi \colon X \times X \to \mathbb R$ given by the formula $\Phi(x,y) = K\phi(y)-K\phi(x)$ is a $K$-Oettli-Th\'era function and $B^{\phi}_x = B^{\Phi}_x$ for any $x \in X$. Moreover, every $x \in X$ is an Oettli-Th\'era element for $\Phi$.
\end{lemma}
\begin{proof}
The proofs of conditions (i) and (ii) from the definition of a $K$-Oettli-Th\'era function are immediate. Let $x,y,z \in X$. We have
$$\Phi(x,z) = K\phi(z)-K\phi(x) = (K\phi(z)-K\phi(y))+(K\phi(y) - K\phi(x)) = \Phi(y,z) + \Phi(x,y) \leq K(\Phi(x,y)+\Phi(y,z)).$$
Condition (iv) holds for any $x_0$, because the codomain of $\phi$ is $\mathbb R$ and $\phi$ is bounded from below.
Let $x \in X$. Then we have
$$B^{\Phi}_x = \{y \in X \colon d(x,y) \leq - \Phi(x,y)\} = \{y \in X \colon d(x,y) \leq K\phi(x) - K\phi(y)\} = B^{\phi}_x.$$
\end{proof}
\begin{corollary} \label{wn lem dla CK}
Let $(X,d)$ be a b-metric space with a constant $K \geq 1$ and let $\phi \colon X \to \mathbb R$ be a Caristi-Kirk function.
If $y \in B^{\phi}_x$, then $B^{\phi}_y \subset B^{\phi}_x$.
\end{corollary}
\begin{proof}
Let $\Phi(x,y) = K\phi(y) - K\phi(x)$ for $x,y \in X$. By Lemma \ref{CK a OT}, $\Phi$ is a $K$-Oettli-Th\'era function, so it satisfies the assumptions of Lemma \ref{lem1}, and moreover
$$B^{\Phi}_x =B^{\phi}_x.$$
Hence, by (2) of Lemma \ref{lem1}, we obtain the assertion.
\end{proof}
The following proposition is the natural counterpart of Theorem \ref{zup} for $K$-Caristi-Kirk ball spaces.
\begin{proposition} \emph{(}cf. \cite[Proposition 3]{KKP} \emph{)} \label{CKzup}
Let $(X,d)$ be a b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous. Then
\begin{itemize}
\item[(1)] If $(X,d)$ is semicomplete, then every $K$-Caristi-Kirk ball space is spherically complete.
\item[(2)] If $d$ is uniformly $1$-continuous and every $K$-Caristi-Kirk ball space is spherically complete, then $(X,d)$ is complete.
\end{itemize}
\end{proposition}
\begin{proof}
Ad (1) Assume that $(X,d)$ is semicomplete. By Theorem \ref{zup}, $(X,d)$ is complete. Let $\phi\colon X \to \mathbb R$ be a Caristi-Kirk function. Let $\mathcal{L}$ be a nest of balls in $\mathcal{B}^{\phi}$. By definition of $\mathcal{B}^{\phi}$, there exists $M \subset X$ such that $\mathcal{L} = \{B^{\phi}_x\colon x \in M\}$. For any $x,y \in M$ we have $B^{\phi}_x \subset B^{\phi}_y$ or $B^{\phi}_y \subset B^{\phi}_x$. Since $x \in B^{\phi}_x$ for all $x \in X$, either $x \in B^{\phi}_y$ or $y \in B^{\phi}_x.$ In both cases,
\begin{equation} \label{nier}
d(x,y) \leq K|\phi(x)-\phi(y)|.
\end{equation}
Put $r:= \inf\limits_{x\in M} \phi(x)$.
Since $\phi$ is bounded from below, $r \in \mathbb R$.
Take a sequence $(x_n)$ in $M$ such that $\lim\limits_{n \to \infty} \phi(x_n) =r.$ The sequence $(\phi(x_n))$ is a Cauchy sequence in $\mathbb R$, because it is convergent. Let $\varepsilon > 0$ and let $N \in \mathbb N$ be such that $$|\phi(x_n)-\phi(x_m)| \leq \frac{\varepsilon}{K}$$ for $n,m \geq N$. By (\ref{nier}), we have for $n,m \geq N$
$$d(x_n,x_m) \leq K|\phi(x_n)-\phi(x_m)| \leq \varepsilon.$$
So, $(x_n)$ is a Cauchy sequence in $X$. By the completeness of $(X,d)$, $(x_n)$ is convergent to some $z \in X$. We will show that $z \in \bigcap \mathcal{L}$.
From the sequential lower semicontinuity of $\phi$ we infer that
$$\phi(z) \leq \lim\limits_{n \to \infty} \phi(x_n) = r.$$
Let $x \in M$. We will show that $z \in B^{\phi}_x$. By (\ref{nier}) and $1$-continuity of $d$, we have
$$d(x,z) = \lim\limits_{n\to \infty} d(x,x_n) \leq \lim\limits_{n\to \infty} K|\phi(x)-\phi(x_n)|=K|\phi(x)-r| = K\phi(x)-Kr \leq K\phi(x) - K\phi(z),$$
so $z \in B^{\phi}_x$. By the arbitrariness of $x$, $z \in \bigcap \mathcal{L}$. Thus, $(X,\mathcal{B}^{\phi})$ is spherically complete.
Ad (2) Let $(x_n)$ be a Cauchy sequence in $(X,d)$. If one of its terms is a limit of this sequence, then the proof is finished, so assume otherwise. Let $\psi \colon X \to \mathbb R$ be given by the formula
$$\psi(x) = \limsup\limits_{n \to \infty} d(x,x_n).$$
We will show that $\psi$ is well defined, that is, that it is finite for every $x$. Let $x \in X$ and take $N \in \mathbb N$ such that $d(x_n,x_m) < 1$ for all $n,m \geq N$. Then for any $n \geq N$ we have
$$d(x,x_n) \leq Kd(x,x_N)+Kd(x_N,x_n) \leq Kd(x,x_N) + K.$$
Hence the sequence $(d(x,x_n))$ is bounded from above, and hence $$\limsup\limits_{n \to \infty} d(x,x_n)<\infty.$$
Now, using induction, we will choose a subsequence $(x_{n_k})$ of the sequence $(x_n)$ as follows. Put $n_1:=1$. Assume that we have defined $n_k$ for some $k \in \mathbb N$. Since $x_{n_k}$ is not a limit of $(x_n)$, we have $\psi(x_{n_k}) > 0.$ Moreover, $\lim\limits_{n\to \infty} \psi(x_n) =0$, because $(x_n)$ is a Cauchy sequence. Hence
$$\limsup\limits_{n\to \infty} \left(\frac{1}{2}d(x_{n_k},x_n) + \psi(x_n)\right)=\limsup\limits_{n\to \infty} \frac{1}{2}d(x_{n_k},x_n) + \lim\limits_{n \to \infty} \psi(x_n) = \frac{1}{2}\psi(x_{n_k}) < \psi(x_{n_k}).$$
Therefore, there is $m \in \mathbb N$ with $m > n_k$ such that
\begin{equation} \label{1/2d}
\frac{1}{2}d(x_{n_k},x_m) \leq \psi(x_{n_k})-\psi(x_m).
\end{equation}
Put $n_{k+1}:=m$. Define $\phi\colon X \to \mathbb R$ by the formula:
$$\phi(x)= 2\psi(x).$$
By definition, $\phi$ is bounded from below by $0$. We will prove that $\psi$ is sequentially lower semicontinuous. Let $y \in X$, let $(y_n)$ be a sequence convergent to $y$ and let $\varepsilon > 0$. Take an increasing sequence of natural numbers $(m_k)$ such that $\lim\limits_{k \to \infty}d(y,x_{m_k}) = \psi(y)$ and $$|d(y,x_{m_k})-\psi(y)|< \frac{\varepsilon}{2}$$ for all $k \in \mathbb N$. By the uniform $1$-continuity of $d$, there is $N \in \mathbb N$ such that
$$|d(y_n,x)-d(y,x)|< \frac{\varepsilon}{2}$$
for all $x \in X$ and $n \geq N$.
Then we have
$$|d(y_n,x_{m_k}) - \psi(y)| \leq |d(y_n,x_{m_k})- d(y,x_{m_k})| + |d(y,x_{m_k}) - \psi(y)| < \varepsilon$$
for all $n \geq N$ and $k \in \mathbb N$. Hence, for every $n \geq N$,
$$\psi(y_n) = \limsup\limits_{m \to \infty} d(y_n,x_m) \geq \limsup\limits_{k \to \infty} d(y_n,x_{m_k}) \geq \psi(y) - \varepsilon.$$
Since $\varepsilon > 0$ was arbitrary, $\liminf\limits_{n \to \infty} \psi(y_n) \geq \psi(y)$.
Therefore, $\psi$ is sequentially lower semicontinuous, and so is $\phi$. Thus, $\phi$ is a Caristi-Kirk function. By the assumption, the $K$-Caristi-Kirk ball space $(X, \mathcal{B}^{\phi})$ is spherically complete. Let $$\mathcal{L}:=\{B^{\phi}_{x_{n_k}}\colon k \in \mathbb N\}.$$
By (\ref{1/2d}), we have
$$d(x_{n_k},x_{n_{k+1}})\leq 2\psi(x_{n_k})-2\psi(x_{n_{k+1}}) = \phi(x_{n_k})-\phi(x_{n_{k+1}}) \leq K(\phi(x_{n_k}) - \phi(x_{n_{k+1}}))$$
for all $k\in \mathbb N$, because $\phi(x_{n_k})-\phi(x_{n_{k+1}}) \geq 0$. Hence $x_{n_{k+1}} \in B^{\phi}_{x_{n_{k}}}$, and by Corollary \ref{wn lem dla CK}, $B^{\phi}_{x_{n_{k+1}}} \subset B^{\phi}_{x_{n_k}}.$ In consequence, $\mathcal{L}$ is a nest of balls. From the spherical completeness of $(X, \mathcal{B}^{\phi})$ we deduce that there exists $x \in \bigcap \mathcal{L}.$ Thus,
$$d(x_{n_k},x) \leq K\phi(x_{n_k})-K\phi(x)\leq K\phi(x_{n_k})$$
for all $k\in \mathbb N$. Since $\lim\limits_{k\to \infty}\phi(x_{n_k}) =0$, the sequence $(x_{n_k})$ converges to $x$. Hence $(X,d)$ is semicomplete, and by Theorem \ref{zup}, $(X,d)$ is complete.
\end{proof}
We proceed with a slightly more technical lemma, which will be put to use in the subsequent part of the paper.
\begin{lemma} \emph{(}cf. \cite[ Lemma 13]{BCLS}\emph{)} \label{lem2}
Let $(X,d)$ be a b-metric space with a constant $K \geq 1$, $\Phi\colon X \times X \to \overline{\mathbb R}$ a $K$-Oettli-Th\'era function and $x_0$ an Oettli-Th\'era element for $\Phi$. Let $\mathcal{L} \subset \mathcal{B}^{\Phi}$ be a nest of balls of the form
$$\mathcal{L} = \{B_x \colon x \in A\},$$ where $B_x = B^{\Phi}_x,$ and $A \subset B^{\Phi}_{x_0}.$ Then for every $x, y \in A$ we have
\begin{equation}\label{nlem}
d(x,y) \leq |\Phi(x_0,x)-\Phi(x_0,y)|.
\end{equation}
Moreover, the following conditions are equivalent for any $x,y \in A$:
\begin{itemize}
\item[(i)]
$y \in B_x$;
\item[(ii)]
$\Phi(x,y)\leq \Phi(y,x)$;
\item[(iii)]
$\Phi(x_0,y) \leq \Phi(x_0,x)$.
\end{itemize}
\end{lemma}
\begin{proof}
First, observe that for any $x \in A$, $\Phi(x_0,x) \leq 0$. Indeed, since $A \subset B^{\Phi}_{x_0}$, we have $$0 \leq d(x_0,x) \leq -\Phi(x_0,x).$$ Let $x,y \in A$.
Since $\mathcal{L}$ is a nest, either $x \in B_y$ or $y \in B_x$. Hence $d(x,y) \leq -\Phi(x,y)$ or $d(x,y) \leq -\Phi(y,x).$
Without loss of generality assume that $d(x,y) \leq -\Phi(x,y)$.
By condition (iii) from the definition of a $K$-Oettli-Th\'era function, we have
\begin{equation} \label{1/k}
\frac{1}{K} \Phi(x_0,y) \leq \Phi(x_0,x) + \Phi(x,y).
\end{equation}
Hence, using the fact that $\Phi(x_0,y) \leq 0$, we obtain
$$d(x,y)\leq -\Phi(x,y) \leq \Phi(x_0,x)- \frac{1}{K} \Phi(x_0,y)\leq \Phi(x_0,x)- \Phi(x_0,y)\leq |\Phi(x_0,x)-\Phi(x_0,y)|.$$
(i) $\Leftrightarrow$ (ii) Assume that $y \in B_x$. If $y=x$, then (ii) obviously holds. If $y \neq x$, then, by Lemma \ref{lem1} (3), $-\Phi(y,x) < -\Phi(x,y)$, and so we have (ii).
If (i) is not satisfied, that is, $y \notin B_x$, then $x \in B_y$ and $x \neq y$. Using once more Lemma \ref{lem1} (3), we obtain $-\Phi(y,x) > -\Phi(x,y)$, so (ii) does not hold.
(i) $\Leftrightarrow$ (iii) Assume that $y \in B_x$.
Then, by (\ref{1/k}) and the fact that $\Phi(x_0,y)\leq 0$, we have
$$0 \leq d(x,y)\leq -\Phi(x,y) \leq \Phi(x_0,x)- \frac{1}{K}\Phi(x_0,y) \leq \Phi(x_0,x)- \Phi(x_0,y).$$
So, (iii) holds.
If (i) is not satisfied, that is, $y \notin B_x$, then $x \in B_y$ and $x \neq y$. Using (\ref{1/k}) once more (swapping $x$ and $y$) and the fact that $\Phi(x_0,x)\leq 0$, we obtain
$$0 < d(x,y)\leq -\Phi(y,x) \leq \Phi(x_0,y)- \frac{1}{K}\Phi(x_0,x) \leq \Phi(x_0,y)- \Phi(x_0,x),$$ so (iii) does not hold.
\end{proof}
In the sequel we will need yet another version of Theorem \ref{zup} and Proposition \ref{CKzup}. This time we approach the problem of completeness from the perspective of $K$-Oettli-Th\'era functions.
\begin{proposition} \label{propzup}\emph{(}cf. \cite[Proposition 14]{BCLS}\emph{)}
Let $(X,d)$ be a b-metric space with a constant $K\geq 1$, where $d$ is $1$-continuous. Then
\begin{itemize}
\item[(1)] if $d$ is uniformly $1$-continuous and the space $(B^{\Phi}_{x_0},\mathcal{B}^{\Phi}_{x_0})$ is spherically complete for every $K$-Oettli-Th\'era function $\Phi$ and every Oettli-Th\'era element $x_0$ for $\Phi$, then $(X,d)$ is complete;
\item[(2)] if $(X,d)$ is semicomplete, then the space $(B^{\Phi}_{x_0},\mathcal{B}^{\Phi}_{x_0})$ is spherically complete for every $K$-Oettli-Th\'era function $\Phi$ and every Oettli-Th\'era element $x_0$ for $\Phi$.
\end{itemize}
\end{proposition}
\begin{proof}
Ad (1) Let $\phi$ be a Caristi-Kirk function. Consider the ball space $(X,\mathcal{B}^{\phi})$ and take any nest of balls $\mathcal{L}$ in that space. Let $x_0 \in X$ be such that $B^{\phi}_{x_0} \in \mathcal{L}$. Let $\Phi \colon X \times X \to \mathbb R$ be given by the formula $\Phi(x,y) = K\phi(y)-K\phi(x).$ By Lemma \ref{CK a OT}, $\Phi$ is a $K$-Oettli-Th\'era function and $x_0$ is an Oettli-Th\'era element for $\Phi$. Consider the nest
$$\mathcal{L}_0 = \{B^{\phi}_{y} \in \mathcal{L} \colon B^{\phi}_{y} \subset B^{\phi}_{x_0}\}.$$ Of course, $\bigcap \mathcal{L} = \bigcap \mathcal{L}_0$. By Lemma \ref{CK a OT}, we have
$$\mathcal{L}_0 = \{B^{\Phi}_{y} \in \mathcal{L} \colon B^{\Phi}_{y} \subset B^{\Phi}_{x_0}\}.$$
Hence $\mathcal{L}_0$ is a nest in the ball space $(B^{\Phi}_{x_0},\mathcal{B}^{\Phi}_{x_0})$, which is spherically complete, by the assumption. Thus,
$$\emptyset \neq \bigcap \mathcal{L}_0 = \bigcap \mathcal{L},$$
and so $(X,\mathcal{B}^{\phi})$ is spherically complete. From the arbitrariness of $\phi$ and Proposition \ref{CKzup} we infer that $(X,d)$ is complete.
Ad (2)
Assume that $(X,d)$ is semicomplete. By Theorem \ref{zup}, $(X,d)$ is complete. Let $\Phi\colon X \times X\to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function and $x_0 \in X$ be an Oettli-Th\'era element for $\Phi$. Let $\mathcal{L}$ be a nest of balls in $\mathcal{B}^{\Phi}_{x_0}$. By definition of $\mathcal{B}^{\Phi}_{x_0}$, there exists $M \subset B^{\Phi}_{x_0}$ such that $\mathcal{L} = \{B^{\Phi}_x\colon x \in M\}$.
Put $r:= \inf\limits_{x\in M} \Phi(x_0,x)$.
Since $x_0$ is an Oettli-Th\'era element for $\Phi$, we know that $r \in \mathbb R$.
Take a sequence $(x_n)$ in $M$ such that $\lim\limits_{n \to \infty} \Phi(x_0,x_n) =r.$ The sequence $(\Phi(x_0,x_n))$ is a Cauchy sequence in $\mathbb R$, because it is convergent. By (\ref{nlem}) from Lemma \ref{lem2}, we have
$$d(x_n,x_m) \leq |\Phi(x_0,x_n)-\Phi(x_0,x_m)|$$
for any $n,m \in \mathbb N$. Hence $(x_n)$ is a Cauchy sequence in $X$.
By the completeness of $(X,d)$, $(x_n)$ is convergent to some $z \in X$. We will show that $z \in \bigcap \mathcal{L}$.
Let $x \in M$. We will show that $z \in B^{\Phi}_x$.
Consider two cases.
$1^{\mbox{o}}$ $\Phi(x_0,x) = r$. Using (\ref{nlem}) from Lemma \ref{lem2}, we obtain
$$d(x_n,x)\leq |\Phi(x_0,x_n)-\Phi(x_0,x)| = |\Phi(x_0,x_n)-r|\to 0.$$
By $1$-continuity of $d$, we have $d(x,z)=0$, so $x=z$. Therefore $z \in B^{\Phi}_x$.
$2^{\mbox{o}}$ $\Phi(x_0,x) > r$. Then there is $N \in \mathbb N$ such that
$$\Phi(x_0,x_n) \leq \Phi(x_0,x)$$
for all $n \geq N.$ From Lemma \ref{lem2} we deduce that $$\Phi(x,x_n) \leq \Phi(x_n,x)$$
for any $n \geq N$.
By the definition of $\mathcal{L}$ and the fact that $x,x_n \in M$, we have $x \in B^{\Phi}_{x_n}$ or $x_n \in B^{\Phi}_{x}$, so $d(x,x_n) \leq -\Phi(x,x_n)$ or $d(x,x_n) \leq -\Phi(x_n,x)$. Hence
$$d(x,x_n) \leq \max\{-\Phi(x,x_n),-\Phi(x_n,x)\} = -\Phi(x,x_n) \quad \text{for } n \geq N.$$
Using the above inequality, the $1$-continuity of $d$ and the sequential lower semicontinuity of $\Phi(x,\cdot)$, we obtain
$$d(x,z) = \lim\limits_{n\to \infty} d(x,x_n) = \limsup\limits_{n\to \infty} d(x,x_n) \leq \limsup\limits_{n\to \infty}\left(-\Phi(x,x_n)\right) = -\liminf\limits_{n\to \infty}\Phi(x,x_n) \leq - \Phi(x,z),$$
hence $z \in B^{\Phi}_x$. By the arbitrariness of $x$, $z \in \bigcap \mathcal{L}$. Thus, $(B^{\Phi}_{x_0},\mathcal{B}^{\Phi}_{x_0})$ is spherically complete.
\end{proof}
We can now present the first fixed point result in the introduced setting.
\begin{proposition} \label{Propklu} \emph{(}cf. \cite[Proposition 16]{BCLS}\emph{)}
Let $(X,d)$ be a semicomplete b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous.
\begin{itemize}
\item[(1)] If $\Phi \colon X \times X \to \overline{\mathbb R}$ is a $K$-Oettli-Th\'era function, then for every Oettli-Th\'era element $x_0$ for $\Phi$ there exists $a \in B^{\Phi}_{x_0}$ such that $B_a^{\Phi} = \{a\}.$
\item[(2)] If $\phi \colon X \to \mathbb R$ is a Caristi-Kirk function, then for every $x \in X$ there exists $a \in B^{\phi}_{x}$ such that $B_a^{\phi} = \{a\}.$
\end{itemize}
\end{proposition}
\begin{proof}
Ad (1) Let $\Phi \colon X \times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function and $x_0$ be an Oettli-Th\'era element for $\Phi$. By Proposition \ref{propzup}, $(B^{\Phi}_{x_0},\mathcal{B}^{\Phi}_{x_0})$ is spherically complete and by Corollary \ref{wstrcon}, it is strongly contractive. From Theorem \ref{twosc} we have the assertion.
Ad (2) Follows directly from (1) and Lemma \ref{CK a OT}.
\end{proof}
From the above proposition we can infer Caristi-Kirk-type fixed point theorems.
\begin{theorem}\emph{(}cf. \cite[Theorem 21]{BCLS}\emph{)} \label{CK th}
Let $(X,d)$ be a semicomplete b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous and $f \colon X \to X$. Let $\Phi \colon X \times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function. If
\begin{equation} \label{CKwar}
\forall_{x\in X} \,\, d(x,f(x)) \leq - \Phi(x,f(x)),
\end{equation}
then there is $a \in X$ such that $f(a)=a.$
\end{theorem}
\begin{proof}
From (\ref{CKwar}) we infer that $f(x) \in B^{\Phi}_x$ for any $x \in X$. By Proposition \ref{Propklu}, there is $a \in X$ such that $B^{\Phi}_a = \{a\}$. Since $f(a) \in B^{\Phi}_a$, we have $f(a)=a$.
\end{proof}
The result above should be compared with \cite[Corollary 12]{Su}, which presents a similar contribution, albeit in a slightly different direction. Before we present this result we need some additional definitions. We say that $(x_n) \in X^{\mathbb N}$ is a $\Sigma$-Cauchy sequence if $\sum_{n=1}^\infty d(x_n,x_{n+1}) < \infty$. A semimetric space $(X,d)$ is called $\Sigma$-semicomplete if every $\Sigma$-Cauchy sequence has a convergent subsequence.
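To see that the notions differ at the level of individual sequences, consider $X=\mathbb R$ with the usual metric and the partial sums $x_n=\sum_{k=1}^{n}(-1)^k/k$: this sequence converges (hence is Cauchy), but $\sum_{n=1}^{\infty} d(x_n,x_{n+1})=\sum_{n=1}^{\infty}\frac{1}{n+1}=\infty$, so it is not $\Sigma$-Cauchy. On the other hand, any sequence satisfying $d(x_n,x_{n+1})\leq 2^{-n}$ is clearly $\Sigma$-Cauchy.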
\begin{lemma} \cite{Su} \label{sigma}
Let $(X,d)$ be a $\Sigma$-semicomplete semimetric space. Then $X$ is semicomplete.
\end{lemma}
\begin{theorem}\cite[Theorem 13]{Su} \label{chara}
Let $(X,d)$ be a semimetric space. Then the following are equivalent:
\begin{itemize}
\item[(i)] $X$ is $\Sigma$-semicomplete;
\item[(ii)] every function $f \colon X\to X$ has a fixed point whenever there is a proper, sequentially lower semicontinuous function $h \colon X \to [0,\infty]$ such that
$$\forall_{x\in X} \,\, d(x,f(x)) \leq h(x)-h(f(x)).$$
\end{itemize}
\end{theorem}
Using Theorem \ref{CK th}, we can prove the following analogue to the previous result.
\begin{theorem}
Let $(X,d)$ be a b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous. Then the following are equivalent:
\begin{itemize}
\item[(i)] $X$ is semicomplete;
\item[(ii)] every function $f \colon X\to X$ has a fixed point whenever there is a proper, sequentially lower semicontinuous function $h \colon X \to [0,\infty]$ such that
$$\forall_{x\in X} \,\, d(x,f(x)) \leq h(x)-h(f(x));$$
\item[(iii)] $X$ is $\Sigma$-semicomplete.
\end{itemize}
\end{theorem}
\begin{proof}
(i) $\Rightarrow$ (ii). Let $f \colon X\to X$. Assume that there exists a proper, sequentially lower semicontinuous function $h \colon X \to [0,\infty]$ such that
$$\forall_{x\in X} \,\, d(x,f(x)) \leq h(x)-h(f(x)).$$ Let $\Phi \colon X\times X \to \overline{\mathbb R}$ be given by the formula $\Phi(x,y) = h(y)-h(x)$. We will show that $\Phi$ is a $K$-Oettli-Th\'era function. Conditions (i) and (ii) from the definition of a $K$-Oettli-Th\'era function are clearly satisfied. Let $x,y,z \in X$. We have
$$\Phi(x,z) = h(z)-h(x) = h(z)-h(y)+h(y) - h(x) = \Phi(y,z) + \Phi(x,y)\leq K(\Phi(y,z) + \Phi(x,y)).$$
Let $x_0 \in \{x \in X\colon h(x) \in \mathbb R\}$. Such $x_0$ exists, because $h$ is proper. Then for any $x \in X$, we have $\Phi(x_0,x) = h(x)-h(x_0) \geq -h(x_0).$ Hence $\Phi$ is a $K$-Oettli-Th\'era function. Since
$$\forall_{x\in X} \,\, d(x,f(x)) \leq h(x)-h(f(x))= -\Phi(x,f(x)),$$
by Theorem \ref{CK th}, $f$ has a fixed point.
(ii) $\Rightarrow$ (iii)
Follows from Theorem \ref{chara}.
(iii) $\Rightarrow$ (i)
Follows from Lemma \ref{sigma}.
\end{proof}
We now return from this slight detour to give a quick series of generalizations of the Caristi-Kirk theorem in various settings. The reasonings in the proofs are similar to those presented in \cite{BCLS}; however, the results obtained here are more general.
\begin{theorem}\emph{(}cf. \cite[Theorem 22]{BCLS}\emph{)}
Let $(X,d)$ be a semicomplete b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous and $F \colon X \to \mathcal{P}(X)$. Let $\Phi \colon X \times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function. If
\begin{equation} \label{CKwar2}
\forall_{x\in X} \,\exists_{y \in F(x)}\,\, d(x,y) \leq - \Phi(x,y),
\end{equation}
then there is $a \in X$ such that $a \in F(a).$
\end{theorem}
\begin{proof}
From (\ref{CKwar2}) we infer that for any $x\in X$ there is $y \in F(x)$ such that $y \in B^{\Phi}_x$. By Proposition \ref{Propklu}, there is $a \in X$ such that $B^{\Phi}_a = \{a\}$. Hence there is $y \in F(a)$ such that $y \in B^{\Phi}_a$, so $a \in F(a)$.
\end{proof}
\begin{theorem}\emph{(}cf. \cite[Theorem 23]{BCLS}\emph{)}
Let $(X,d)$ be a semicomplete b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous. Let $\Phi \colon X \times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function. There exists $a \in X$ such that
\begin{equation} \label{CKwar3}
\forall_{x\in X\setminus\{a\}} \,\, d(a,x) > - \Phi(a,x).
\end{equation}
\end{theorem}
\begin{proof}
Condition (\ref{CKwar3}) means that for any $x\in X\setminus\{a\}$ we have $x\notin B^{\Phi}_a$, that is, $B^{\Phi}_a = \{a\}.$ The existence of such an $a$ follows from Proposition \ref{Propklu}.
\end{proof}
\begin{theorem}\emph{(}cf. \cite[Theorem 24]{BCLS}\emph{)}
Let $(X,d)$ be a semicomplete b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous. Let $\Phi \colon X \times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function. For any $\gamma > 0$ and any Oettli-Th\'era element $x_0$ for $\Phi$ there exists $a \in X$ such that
\begin{equation} \label{CKwar4}
\forall_{x\in X\setminus\{a\}} \,\, \gamma d(a,x) > - \Phi(a,x).
\end{equation}
and
\begin{equation} \label{CKwar4'}
\gamma d(x_0,a) \leq - \Phi(x_0,a).
\end{equation}
\end{theorem}
\begin{proof}
It is easy to see that a function $\Psi \colon X \times X \to \overline{\mathbb R}$ given by the formula $\Psi(x,y) = \frac{1}{\gamma}\Phi(x,y)$ is also a $K$-Oettli-Th\'era function with the same Oettli-Th\'era elements as $\Phi$. Hence, by Proposition \ref{Propklu}, there is $a \in B_{x_0}^{\Psi}$ such that $B_a^{\Psi} = \{a\}$. We have
$$d(x_0,a) \leq -\Psi(x_0,a) = - \frac{1}{\gamma}\Phi(x_0,a),$$
because $a \in B_{x_0}^{\Psi}$. This gives us (\ref{CKwar4'}). Moreover, since $B_a^{\Psi} = \{a\}$,
$$\forall_{x\in X\setminus\{a\}} \,\, d(a,x) > - \Psi(a,x) = - \frac{1}{\gamma}\Phi(a,x),$$
which is equivalent to (\ref{CKwar4}).
\end{proof}
\begin{theorem}\emph{(}cf. \cite[Theorem 25]{BCLS}\emph{)}
Let $(X,d)$ be a semicomplete b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous. Let $\Phi \colon X \times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function. Fix $\varepsilon \geq 0$ and $x_0 \in X$ such that $-\varepsilon \leq \inf\limits_{x\in X} \Phi(x_0,x)$. Then for every $\gamma > 0$ and $\delta \geq \frac{\varepsilon}{\gamma}$ there exists $a \in X$ such that $d(a,x_0) \leq \delta$ and $a$ is the strict minimum point of the function $\phi_{\gamma} \colon X \to \overline{\mathbb R}$ given by the formula
$$\phi_{\gamma}(x) = \Phi(a,x) + \gamma d(x,a).$$
\end{theorem}
\begin{proof}
Let $\gamma > 0$ and $\delta \geq \frac{\varepsilon}{\gamma}$.
A function $\Psi \colon X \times X \to \overline{\mathbb R}$ given by the formula $\Psi(x,y) = \frac{1}{\gamma}\Phi(x,y)$ is a $K$-Oettli-Th\'era function with the same Oettli-Th\'era elements as $\Phi$. Hence, by Proposition \ref{Propklu}, there is $a \in B_{x_0}^{\Psi}$ such that $B_a^{\Psi} = \{a\}$. We have
$$d(x_0,a) \leq -\Psi(x_0,a) = - \frac{1}{\gamma}\Phi(x_0,a),$$
because $a \in B_{x_0}^{\Psi}$.
Thus,
$$\gamma d(x_0,a) \leq -\Phi(x_0,a) \leq - \inf\limits_{x\in X} \Phi(x_0,x) \leq \varepsilon \leq \gamma \delta.$$
So, $d(a,x_0) \leq \delta$.
Moreover, since $B_a^{\Psi} = \{a\}$,
$$\forall_{x\in X\setminus\{a\}} \,\, d(a,x) > - \Psi(a,x) = - \frac{1}{\gamma}\Phi(a,x).$$
Therefore, for any $x \in X\setminus\{a\}$, we have
$$\phi_{\gamma}(x) = \Phi(a,x) + \gamma d(a,x) > 0 = \phi_{\gamma}(a),$$
which finishes the proof.
\end{proof}
For the next result we need another definition. Let $(X,d)$ be a semimetric space, $\gamma \in (0, \infty)$ and $a,b \in X$. The set
$$P_{\gamma}(a,b):=\{y \in X \colon \gamma d(y,a) + d(y,b) \leq d(a,b)\}$$
is called the petal associated with $\gamma$ and $a,b$.
\begin{theorem}\emph{(}cf. \cite[Theorem 27]{BCLS}\emph{)}
Let $M$ be a semicomplete subset of a b-metric space $(X,d)$ with a constant $K \geq 1$, where $d$ is $1$-continuous. Fix $x_0 \in M$ and $b \in X\setminus M$. Then for every $\gamma > 0$ there exists $a \in P_{\gamma}(x_0,b) \cap M$ such that
$$P_{\gamma}(a,b) \cap M = \{a\}.$$
\end{theorem}
\begin{proof}
Let $\gamma > 0$.
A function $\phi \colon M \to \mathbb R$ given by the formula $\phi(x) = \frac{1}{K\gamma}d(x,b)$ is a Caristi-Kirk function on $M$. For any $x \in M$ we have
$$P_{\gamma}(x,b) \cap M = \{y \in M \colon \gamma d(y,x) + d(y,b) \leq d(x,b)\} = \{y \in M \colon d(y,x) \leq \frac{1}{\gamma}d(x,b) - \frac{1}{\gamma} d(y,b)\}$$$$= \{y \in M \colon d(y,x) \leq K\phi(x) - K\phi(y)\}= B_x^{\phi}.$$
Using Proposition \ref{Propklu} (2), we obtain that there is $a \in B_{x_0}^{\phi}$ such that $B_a^{\phi} = \{a\}$. Thus,
$$\{a\} = B_a^{\phi} = P_{\gamma}(a,b) \cap M.$$
\end{proof}
\begin{theorem}\emph{(}cf. \cite[Theorem 28]{BCLS}\emph{)}
Let $(X,d)$ be a semicomplete b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous. Let $\Phi \colon X \times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function and $x_0 \in X$ be an Oettli-Th\'era element for $\Phi$. If for any $y \in B_{x_0}^{\Phi}$ such that $\inf\limits_{x\in X} \Phi(y,x) < 0$ there exists $z \in X$, $z \neq y$ such that $d(y,z) \leq - \Phi(y,z),$ then there is $a \in B_{x_0}^{\Phi}$ such that $\inf\limits_{x\in X} \Phi(a,x) = 0$.
\end{theorem}
\begin{proof}
By Proposition \ref{Propklu}, there is $a \in B_{x_0}^{\Phi}$ such that $B_a^{\Phi} = \{a\}$. We will show that $\inf\limits_{x\in X} \Phi(a,x) = 0$. Since $\Phi(a,a) = 0$, we have $\inf\limits_{x\in X} \Phi(a,x) \leq 0$. Assume on the contrary that $\inf\limits_{x\in X} \Phi(a,x) < 0$. Then, by the assumption, there exists $z \in X$, $z \neq a$ such that $d(a,z) \leq - \Phi(a,z).$ But it means that $z \in B_a^{\Phi} = \{a\}$, a contradiction. Hence $\inf\limits_{x\in X} \Phi(a,x) = 0$.
\end{proof}
\begin{theorem}\emph{(}cf. \cite[Theorem 29]{BCLS}\emph{)}
Let $(X,d)$ be a semicomplete b-metric space with a constant $K \geq 1$, where $d$ is $1$-continuous. Let $\Phi \colon X \times X \to \overline{\mathbb R}$ be a $K$-Oettli-Th\'era function and $x_0 \in X$ be an Oettli-Th\'era element for $\Phi$. Let $A \subset X$ be such that
$$\forall_{x\in B_{x_0}^{\Phi}\setminus A} \, \, \exists_{y \in X\setminus\{x\}}\,\, d(x,y) \leq -\Phi(x,y).$$
Then $B_{x_0}^{\Phi} \cap A \neq \emptyset$.
\end{theorem}
\begin{proof}
By Proposition \ref{Propklu}, there is $a \in B_{x_0}^{\Phi}$ such that $B_a^{\Phi} = \{a\}$. We will show that $a\in A$. Suppose, on the contrary, that $a \notin A$. Then, by the assumption, there exists $y \in X$, $y \neq a$, such that $d(a,y) \leq -\Phi(a,y)$. But this means that $y \in B_a^{\Phi} = \{a\}$, a contradiction. Hence $a \in B_{x_0}^{\Phi} \cap A$.
\end{proof}
\section{Some remarks on the notion of ball-convergence and topology of ball spaces}
Since the notion of ball spaces stems from a topological background, it is natural to ask whether a notion of convergence can be introduced in this setting. The question suggests itself all the more because some convergence-related notions are already available; the most obvious example is spherical completeness, which strongly resembles completeness from the metric setting.
Let us consider a ball space $(X,\mathcal{B})$ and a sequence $(x_n)_{n\in\mathbb N}$ of its elements. We will say that $(x_n)_{n\in\mathbb N}$ b-converges\footnote{An abbreviation for ``\textit{ball-converges}''.} to some $x\in X$ if $ \bigcap \mathcal{B}_{(x_n)} = \{x\}$, where $\mathcal{B}_{(x_n)}$ is the family of all balls from $\mathcal{B}$ containing infinitely many terms of the sequence $(x_n)$, i.e.,
\begin{equation}\label{ballconverg}
\mathcal{B}_{(x_n)}:= \left\{ B\in \mathcal{B} \, : \, \operatorname{card}\left( \{x_n \, : \, n\in\mathbb N\} \cap B\right) = \aleph_0 \right\}.
\end{equation}
This fact will be denoted by $x_n\overset{\mathcal{B}}{\to} x$.
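As a simple illustration, take $X=\mathbb R$, let $\mathcal{B}$ consist of all closed balls and put $x_n=1/n$. Every closed ball containing infinitely many terms of $(x_n)$ is a closed set containing a subsequence converging to $0$, hence it contains $0$; on the other hand, each ball $[0,r]=B_{r/2}(r/2)$, $r>0$, contains almost all terms of the sequence, and $\bigcap_{r>0}[0,r]=\{0\}$. Therefore $\bigcap\mathcal{B}_{(x_n)}=\{0\}$, i.e., $x_n\overset{\mathcal{B}}{\to}0$.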
\begin{theorem} \label{top}
Let $(X,\tau)$ be a $T_0$ topological space. Consider the ball space $(X,\mathcal{B})$, where $\mathcal{B}$ is a family of all closed sets in $(X,\tau)$. Let $(x_n)$ be a sequence in $X$ and $x \in X$. Then $(x_n)$ is convergent to $x$ with respect to $\tau$ if and only if $x_n\overset{\mathcal{B}}{\to} x$.
\end{theorem}
\begin{proof}
"$\Leftarrow$". Assume that $x_n\overset{\mathcal{B}}{\to} x$. This means that $x$ belongs to every closed set containing infinitely many terms of $(x_n)$. Let $U$ be an open neighbourhood of $x$. Suppose that $U$ does not contain almost all terms of $(x_n)$. Hence the closed set $X\setminus U$ contains infinitely many terms of $(x_n)$, so due to b-convergence we have $x \in X\setminus U$, a contradiction. Therefore, $(x_n)$ is convergent to $x$ with respect to $\tau$.
"$\Rightarrow$". Assume that $(x_n)$ converges to $x$ with respect to $\tau$, that is, any open neighbourhood of $x$ contains all but finitely many terms of $(x_n)$. Let $B$ be a closed set containing infinitely many terms of $(x_n)$. Suppose that $x \notin B$. Then $x \in X\setminus B$ and this set is open. By the assumption, $X\setminus B$ contains all but finitely many terms of $(x_n)$, and so $B$ contains finitely many terms of $(x_n)$, a contradiction. Now, suppose that there is $y \in X$ such that for any closed set $B$ containing infinitely many terms of $(x_n)$, $y \in B$. Repeating the reasoning from "$\Leftarrow$", we obtain that $(x_n)$ converges to $y$ with respect to $\tau$. Since $(X,\tau)$ is $T_0$, the limits are unique, so $x=y$, which finishes the proof.
\end{proof}
\begin{remark}
In the reasoning above, instead of picking $\mathcal{B}$ as the family of all closed sets, one can take a topological basis $\mathcal{U}$ of $(X,\tau)$ and define $\mathcal{B}$ as the complements of sets from $\mathcal{U}$.
\end{remark}
As we can take any family of sets in the role of $\mathcal{B}$, the notion of b-convergence covers a large variety of other \textit{modes} of convergence which guarantee the uniqueness of the limit.
We would like to show that b-convergence is a good notion of convergence in general. Let us introduce the limit operator as in, e.g., \cite[Problem 1.7.18]{RE}:
Let $X$ be a nonempty set. A mapping $\lambda : D_\lambda \rightarrow X$, where $\emptyset\neq D_\lambda\subseteq X^{\mathbb{N}}$, is called a limit operator on $X$ if it satisfies the following conditions:
\begin{itemize}
\item[$(L1)$] if $x_n = x$ for all $n\in \mathbb{N}$, then $\lambda (x_n) = x$;
\item[$(L2)$] if $\lambda (x_n)= x$, then $\lambda (x_{k_n}) = x$ for every subsequence $(x_{k_n})$ of $(x_n)$;
\item[$(L3)$] if $(x_n)$ does not converge to $x$ (which means that either $(x_n)\notin D_\lambda$ or $\lambda(x_n)\neq x$), then it contains a subsequence $(x_{k_n})$ such that no subsequence of $(x_{k_n})$ converges to $x$.
\end{itemize}
Note that this definition requires $D_\lambda$ -- which plays the role of the set of all convergent sequences -- to contain at least all constant sequences. A limit operator $\lambda$ can then be used to define the operation of closure as follows:
\[
x\in \overline{A}\text{ if and only if } A\text{ contains a sequence }(x_n)\text{ such that }\lambda (x_n) = x.\]
If a semimetric space $(X,d)$ guarantees the uniqueness of limits, then one can put
\[
D_\lambda:=\{(x_n)\subseteq X \, : \, \exists_{x\in X}\; d(x_n,x)\to 0\}\]
and for any convergent sequence $(x_n)\in D_\lambda$ define $\lambda(x_n):=x$. Clearly, a similar construction can be carried out for ball spaces. Formally:
\begin{lemma}\label{lematkulowy}
Let $(X,\mathcal{B})$ be a ball space where $\bigcup \mathcal{B} = X$ and $\mathcal{B}$ separates points of $X$, i.e., for every pair of distinct points $x,y\in X$ there exist $B_x,B_y\in\mathcal{B}$ which contain $x$ and $y$, respectively, but neither $x\in B_y$ nor $y\in B_x$.
Then the set of all b-convergent sequences $D_\lambda$ along with the operator $\lambda$ which maps $(x_n)\in D_\lambda$ to its unique b-limit satisfies conditions $(L1)$--$(L3)$.
\end{lemma}
\begin{proof}
Uniqueness of the limit is guaranteed by the very definition of b-convergence. Thus, we shall proceed with proving that conditions $(L1)$-$(L3)$ hold.
Clearly, every constant sequence is b-convergent to its value, which proves $(L1)$. Indeed, since $\mathcal{B}$ separates points of $X$, for any $y\in X$ with $y\neq x$ there exists $B_{x,y}\in \mathcal{B}$ which contains $x$ and does not contain $y$. Therefore,
\[
\mathcal{B}_{(x)}=\{B \in \mathcal{B} \, : \, x\in B\} \supset \{ B_{x,y} \, : \, y\neq x, y\in X \},
\]
hence $\bigcap \mathcal{B}_{(x)} = \{x\}$.
The fact that $(L2)$ is satisfied follows from a simple observation. Let $(x_{n_k})_{k\in\mathbb N}$ be any subsequence of $(x_n)$, where $(x_n)$ is b-convergent to a certain point $x\in X$. Let $B'$ be any set from $\mathcal{B}$ which contains infinitely many terms of $(x_{n_k})$. Then automatically $B'$ contains infinitely many terms of $(x_n)$, and since $\bigcap\mathcal{B}_{(x_n)}=\{x\}\subset B'$, we see that $B'$ contains $x$. Since $B'$ was taken arbitrarily, $(L2)$ follows.
Lastly, we will show that $(L3)$ holds. Let $(x_n)$ be a sequence which does not b-converge to $x$. Then one of the following cases occurs:
\begin{itemize}
\item $x\notin\bigcap\mathcal{B}_{(x_n)}$ -- in this scenario, there exists some ball, say $B_0$, which contains infinitely many terms of $(x_n)$ -- denote this subsequence by $(x_{n_k})_{k\in\mathbb N}$ -- and such that $x \notin B_0$. Then $\bigcap\mathcal{B}_{(x_{n_k})}$ does not contain $x$, as $B_0\in \mathcal{B}_{(x_{n_k})}$, hence $(x_{n_k})$ cannot contain a subsequence b-convergent to $x$ (by reasoning analogous to the one in the proof of $(L2)$).
\item there exists $y\neq x$ such that both $x$ and $y$ belong to $\bigcap\mathcal{B}_{(x_n)}$. This means that any ball containing infinitely many terms of $(x_n)$ contains both $x$ and $y$. Clearly, the same issue will occur when taking any subsequence of $(x_n)$ and picking any ball containing infinitely many terms of said subsequence. In this case $(x_n)$ cannot have any b-convergent subsequence.
\end{itemize}
Due to $(x_n)$ being arbitrary, the proof of $(L3)$ is complete.
\end{proof}
It has been shown in several distinct papers that, in general, open balls in semimetric spaces are not necessarily open sets in the sense of the topology $\tau$ induced by sequence convergence in $(X,d)$. In particular, we refer the Reader to \cite{ATD,PS}, as well as encourage them to see \cite[Theorem 5.6]{CJT1}, which provides a huge range of such pathological spaces where no open ball is open. This is somewhat remedied by introducing the strong topology $\tau^d$ in the semimetric space $(X,d)$, whose base consists of finite intersections of open balls, i.e.,
\[
\mathfrak{B}:=\left\{ \bigcap_{i\in T} \mathring B(x_i,r_i) \, : \, T -  \mbox{finite subset of }\mathbb N, \, x_i \in X, \, r_i>0 \mbox{ for } i \in T\right\}.
\]
Whenever $x_n$ converges to $x$ with respect to $\tau^d$ we will write $x_n\overset{d}{\boldsymbol{\to}} x$.
In this topology, however, $d(x_n,x)\overset{n\to \infty}{\longrightarrow} 0$ does not always imply that $x_n\overset{d}{\boldsymbol{\to}} x$.
These facts leave us with the question of when the topology induced by the defined ball-convergence in some ball space $(X,\mathcal{B}_S)$ built upon a semimetric space coincides with the topological structure induced by standard convergence. If not, perhaps it is equal to the topology $\tau^d$? Another question, which should be answered in the first place, is whether ball-convergence is able to define a topology at all.
Let us begin by providing at least a partial answer to the first question.
Take any semimetric space $(X,d)$ and consider the topology $\tau_\mathcal{B}$ generated by the convergence introduced by the ball structure of the space $(X,\mathcal{B}_S)$. For it to be able to coincide with the topology $\tau$, we need to guarantee that $0$ is an accumulation point of $S$. If that is not the case, the sequence $(x_n)$ might be b-convergent to some $x$ even if all its terms are $\varepsilon$-apart from $x$, as the ball structure might be unable to detect such small \textit{gaps} between points.
The $1$-continuity of a semimetric guarantees that the standard convergence implies the b-convergence. This claim is formally proved in the following theorem.
\begin{theorem}\label{g1convergenceb}
Let $(X,d)$ be any $1$-continuous (G)-semimetric space. If a sequence $(x_n)_{n\in\mathbb N}$ of elements of $X$ converges to $x\in X$, it also b-converges to $x$ in the ball space $(X,\mathcal{B}_S)$, where $S=(0,+\infty)$.
\end{theorem}
\begin{proof}
Assume that $(x_n)_{n\in\mathbb N}$ converges to $x$. Then for each $s>0$ there exists $n_0\in \mathbb N$ such that $x_n\in B_s(x)$ for $n\geqslant n_0$.
Clearly $\bigcap_{s\in S} B_s(x)=\{x\}$, so it is enough to show that for any $y\in X$ and any $s\in S$ such that $B_s(y)$ contains infinitely many terms of $(x_n)$ we have $x\in B_s(y)$. Let $(x_{n_k})$ be a subsequence of $(x_n)$ contained in $B_s(y)$. Clearly $d(x_{n_k},x)\to 0$ and from $1$-continuity of $d$ we have
\[
d(x,y) = \lim_{k\to\infty} d(x_{n_k},y)\leqslant s.
\]
Hence $x\in B_s(y)$ which finishes the proof.
\end{proof}
Straightforwardly from the proof of Theorem \ref{g1convergenceb} we obtain the following:
\begin{corollary}
Let $S\subset (0,+\infty)$ be such that $S^d\supset \{0\}$ and let $(X,d)$ be any $1$-continuous (G)-semimetric space. If a sequence $(x_n)_{n\in\mathbb N}$ of elements of $X$ converges to $x\in X$, it also b-converges to $x$~in the ball space $(X,\mathcal{B}_S)$.
\end{corollary}
Unfortunately, we cannot reverse this implication as we have to face the following example in a~metric setting:
{\begin{example}
Let $x,x_1,x_2,\dots$ be distinct points. Let $X=\{x,x_1,x_2,\dots\}$. Define a metric on $X$ as follows
$$d(y,z)=d(z,y) := \begin{cases}
0 & \text{if } y=z, \\
1 & \text{if } (y=x \vee z=x) \wedge y\neq z, \\
1+\frac{1}{|n-m|} & \text{if } y=x_n,\ z=x_m,\ n\neq m.
\end{cases}$$
The function $d$ is indeed a metric, because the first two axioms follow from the definition of $d$ and the triangle inequality follows easily from the fact that $d(X \times X) \subset \{0\} \cup [1,2]$. Let $\mathcal{B}$ be the family of all closed balls in $(X,d)$. Obviously, $d(x_n,x) \not\to 0$. However, we will show that $x_n\overset{\mathcal{B}}{\to} x$. Let $B(y,r)$ be a closed ball containing infinitely many terms of $(x_n)$. We will show that $x \in B(y,r)$. Consider the cases:
\begin{itemize}
\item[$1.$] $y=x$. Then obviously $x \in B(y,r)$.
\item[$2.$] $y=x_n$ for some $n \in \mathbb N$. Since $d(x_n,x_m) > 1$ for any $m \neq n$, we have $r \geq 1$. Because $d(x,x_n) =1,$ we obtain $x \in B(y,r)$.
\end{itemize}
Now, observe that for any $m \in \mathbb N$ there exists a ball $B(y,r)$ containing infinitely many terms of $(x_n)$ such that $x_m \notin B(y,r)$. Indeed, let $y = x_{m+1}$, $ r = \frac{3}{2}$. Then, $d(x_m,x_{m+1}) = 1+1 = 2 > \frac{3}{2}$, so $x_m \notin B(x_{m+1}, \frac{3}{2})$. On the other hand, for any $n > m+2$ we have $d(x_{m+1},x_n) =1 +\frac{1}{|n-m-1|} \leq \frac{3}{2}$, so $x_n \in B(x_{m+1}, \frac{3}{2})$. Hence $\bigcap \mathcal{B}_{(x_n)} = \{x\}$, that is, $x_n\overset{\mathcal{B}}{\to} x$.
\end{example}}
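To make the mechanism behind this counterexample more tangible, the claim can also be checked numerically on a finite truncation of the space. The sketch below (in Python; the truncation size and the sampled radii are our arbitrary choices, so this merely illustrates, rather than proves, the statement) verifies that every sampled closed ball containing at least two terms of the sequence contains $x$, while the balls $B(x_{m+1},\frac{3}{2})$ exclude $x_m$:
\begin{verbatim}
N = 200
X = ["x"] + [f"x{n}" for n in range(1, N + 1)]

def d(y, z):
    # the metric from the example; the point x is encoded as the string "x"
    if y == z:
        return 0.0
    if y == "x" or z == "x":
        return 1.0
    n, m = int(y[1:]), int(z[1:])
    return 1.0 + 1.0 / abs(n - m)

def ball(center, r):
    return {p for p in X if d(center, p) <= r}

# every sampled closed ball with at least two sequence terms contains x ...
for c in X:
    for r in (1.0, 1.25, 1.5, 2.0):
        B = ball(c, r)
        if sum(p != "x" for p in B) >= 2:
            assert "x" in B

# ... while B(x_{m+1}, 3/2) omits x_m yet contains all x_n with n > m + 2
m = 10
B = ball(f"x{m + 1}", 1.5)
assert f"x{m}" not in B
assert all(f"x{n}" in B for n in range(m + 3, N + 1))
\end{verbatim}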
As the notion of b-convergence, even in metric spaces, turns out to be weaker than standard convergence, we pose the following question: under what conditions can the implication from Theorem \ref{g1convergenceb} be reversed?
A partial answer to this question is provided by the doubling property. A semimetric space is said to be $N$-doubling if every closed ball of radius $r>0$ can be covered by at most $N$ balls of radius $\frac{r}{2}$ (one can think of such a condition as a metric analogue of finite-dimensionality).
\begin{theorem}\label{convergo}
Let $(X,d)$ be a $1$-continuous semimetric space with the $N$-doubling property for some $N\in\mathbb N$ and let $S=(0,+\infty)$. If a sequence $(x_n)_{n\in\mathbb N}$ b-converges to some $x\in X$ in the ball space $(X,\mathcal{B}_S)$, then $(x_n)$ contains a subsequence $(x_{n_k})$ such that $d(x_{n_k},x)\to 0$ as $k\to\infty$. Moreover, if $(x_n)$ is bounded, then $d(x_{n},x)\to 0$ as $n\to\infty$.
\end{theorem}
\begin{proof}
Our assumption can be restated as follows:
\begin{center}
\textit{For every $y\in X$ and $r\in S$ if $B_r(y)$ contains infinitely many terms of $(x_n)$ then $d(x,y)\leqslant r$.}
\end{center}
We {want to find a subsequence $(x_{n_k})$ of a sequence $(x_n)$ such that $d(x_{n_k},x)\to 0$}. Define
\[
r_0:=\inf \{ r>0 \, : \, B_r(x) \mbox{ contains infinitely many terms of } (x_n)\}.
\]
First, let us notice that $r_0$ is finite. Indeed, there exists at least one ball of the form $B_s(y)$ (for some $y\in X$ and $s>0$) which contains infinitely many terms of $(x_n)$ as well as $x$. Hence for all $n\in \mathbb N$ such that $x_n\in B_s(y)$ we have:
\[
d(x_n,x)\leqslant g\left(d(x,y),d(y,x_n) \right) \leqslant g\left(d(x,y),s\right) =:\hat{r}.
\]
Therefore, $r_0\leqslant \hat{r}$ as $B_{\hat{r}}(x)$ contains infinitely many terms of $(x_n)$. Suppose that $r_0>0$. Thus, there exists a subsequence $(x_{n_k})$ of $(x_n)$ which satisfies $d(x_{n_k},x)\in \left[r_0,\frac{3}{2}r_0\right]$ (otherwise, $r_0$ could be lowered further). The doubling property guarantees that $B_{\frac{3}{2}r_0}(x)$ can be covered with a family of $N$ balls $\{B_{\frac{3}{4}r_0}(y_k^{(1)}) \, : \, k\leqslant N\}$. In particular, at least one such ball (denote it by $B_{\frac{3}{4}r_0}(y^{(1)})$) contains infinitely many elements of $(x_{n_k})$. From our b-convergence assumption, such ball contains $x$ as well.
This ball can also be covered by another set of balls with half the radius, i.e. balls of the form $B_{\frac{3}{8}r_0}(y_k^{(2)})$. One such ball is guaranteed to contain infinitely many terms of $(x_n)$, denote it by $B_{\frac{3}{8}r_0}(y^{(2)})$. It contains $x$ as well.
Proceeding inductively, we obtain a sequence of balls $B_{\frac{3}{2^{k+1}}r_0}(y^{(k)})$, each of these containing infinitely many terms of $(x_n)$ as well as $x$. Thus, $d(x,y^{(k)}) \to 0$. For all $n\in \mathbb N$ such that $x_n\in B_{\frac{3}{2^{k+1}}r_0}(y^{(k)})$ we have:
\begin{equation}\label{klopf}
d(x,x_n)\leqslant g\left(d(x,y^{(k)}),d(x_n,y^{(k)})\right)\leqslant g\left(\frac{3}{2^{k+1}}r_0,\frac{3}{2^{k+1}}r_0\right)=:s_k.
\end{equation}
As $\frac{3}{2^{k+1}}r_0\to 0$, we have $s_k\to 0$ from the continuity of $g$ {at} the origin. But since $s_k\to 0$, then $s_k<r_0$ for sufficiently large $k$. Thus $B_{s_k}(x)$ contains infinitely many terms of $(x_n)_{n\in\mathbb N}$, which contradicts our supposition that $r_0>0$. {Finally, let $n_1=1$ and for $k >1$ let $n_{k} > n_{k-1}$ be such that $x_{n_k} \in B_{\frac{1}{k}}(x)$. Since $r_0 = 0$ we can always find such terms. We obtain $d(x_{n_k},x) \to 0$.}
{Now, assume that $(x_n)$ is bounded. Then there is $y \in X$ and $s>0$ such that $B_s(y)$ contains all elements of $(x_n)$.
Suppose on the contrary that $d(x_n,x) \not\to 0$, that is, there are $r>0$ and a subsequence $(x_{n_k})$ of $(x_n)$ such that $d(x_{n_k},x) > r$ for every $k\in \mathbb N$.
The doubling property guarantees that $B_{s}(y)$ can be covered with a family of $N$ balls $\{B_{\frac{1}{2}s}(y_i^{(1)}) \, : \, i\leqslant N\}$. In particular, at least one such ball (denote it by $B_{\frac{1}{2}s}(y^{(1)})$) contains infinitely many elements of $(x_{n_k})$. From our b-convergence assumption, such ball contains $x$ as well.
Now, we can proceed similarly as in the first part of the proof to find a sequence of balls $(B_{\frac{1}{2^{j}s}}(y^{(j)}))$ each containing $x$ and infinitely many terms of $(x_{n_k})$.
For all $k\in \mathbb N$ such that $x_{n_k}\in B_{\frac{1}{2^{j}}s}(y^{(j)})$ we have:
\begin{equation}\label{klopf2}
d(x,x_{n_k})\leqslant g\left(d(x,y^{(j)}),d(x_{n_k},y^{(j)})\right)\leqslant g\left(\frac{1}{2^{j}}s,\frac{1}{2^{j}}s\right)=:r_j.
\end{equation}
As $\frac{1}{2^{j}}s\to 0$, we have $r_j\to 0$ from the continuity of $g$ at the origin. But since $r_j\to 0$, then there is $m \in \mathbb N$ such that $r_m<r$. There is also $x_{n_k} \in B_{\frac{1}{2^{m}}s}(y^{(m)})$ and, by (\ref{klopf2}), $d(x,x_{n_k}) < r$, a contradiction. Therefore, $d(x_{n},x) \to 0$.}
\end{proof}
The Theorem above holds even if we do not assume that $S=(0,+\infty)$ but merely require $S$ to satisfy $S^d\ni 0$. The proof gets trickier this time, because sometimes we are required to iterate the doubling property a sufficient number of times and then multiply the radii of the obtained balls by some positive constant $\rho\in(1,2)$. As $0\in S^d$, after iterating the doubling property sufficiently many times, there will be an $s_0 \in S\cap\left[\frac{3r_0}{2^{k+1}}, \frac{3r_0}{2^{k}}\right)$ and we can increase the radii of our balls to $s_0$. Since balls of smaller radii are contained in the ones with the same center but larger radii, we can inductively construct a sequence of balls as in the original proof. However, in our opinion this somewhat tedious reasoning would cloud the general idea behind the proof.
The following example shows that the assumption of boundedness of $(x_n)$ is essential.
\begin{example}\label{PiotrusPan}
In the Euclidean metric space $(\mathbb R,d_e)$ define a sequence $(x_n)$ in the following way: for $k \in \mathbb N$ put $x_{2k} := k$ and $x_{2k-1} := \frac{1}{k}$. This sequence is obviously not convergent in $(\mathbb R,d_e)$. Consider the ball space $\mathcal{B}$ consisting of all compact intervals. Then $x_n\overset{\mathcal{B}}{\to} 0$. Indeed, it is easy to see that every compact interval containing infinitely many terms of $(x_n)$ must contain $0$. Moreover, for any $x \neq 0$ there exists an interval containing infinitely many terms of $(x_n)$ and not containing $x$ (it suffices to take $[0,\frac{|x|}{2}]$).
(If we take as a ball space the family of all closed sets, then, by Theorem \ref{top}, we do not have ball convergence).
\end{example}
We clearly see that building a ball space upon some semimetric (or even metric) space $(X,d)$ using solely the closed balls leads to some pathological examples. We have asked ourselves whether there exists a natural way of constructing a ball space out of $(X,d)$ which does not utilize all closed sets, but has more natural properties than $(X,\mathcal{B}_{(0,+\infty)})$.
By $(X,\mathcal{B}_S^+)$, where $S\subseteq \mathbb R_+$, let us denote the ball space in which
\[
\mathcal{B}_S^+:=\mathcal{B}_S \cup \left\{ X \setminus \mathring B(x,r) \, : \, x \in X, r\in S \right\},
\]
where, as previously, $\mathring B(x,r):= \{ y \in X \, : \, d(x,y)<r \}$ denotes an open ball.
That is, the proposed \textit{filled ball space} built upon $(X,d)$ consists of the closed balls whose radii belong to $S$, as well as the complements of open balls with radii from the same set $S$. This allows us to fix some problems similar to the one presented in Example \ref{PiotrusPan}. One can also see that the statements of Theorem \ref{g1convergenceb} and its Corollary still hold if we replace $(X,\mathcal{B}_S)$ by $(X,\mathcal{B}_S^+)$.
\begin{theorem}
Let $(X,d)$ be a (G)-semimetric space, $(x_n)$ be a sequence in $X$ and $x\in X$.
\begin{itemize}
\item[(1)] If $d$ is $1$-continuous and $(x_n)_{n\in\mathbb N}$ converges to $x$, it also b-converges to $x$ in the filled ball space $(X,\mathcal{B}_S^+)$, where $S\subset(0,+\infty)$ is such that $S^d\supset \{0\}$.
\item[(2)] If $(x_n)_{n\in\mathbb N}$ b-converges to $x$ in the filled ball space $(X,\mathcal{B}_S^+)$, then it also converges in the standard semimetric sense, i.e. $d(x_n,x)\to 0$.
\end{itemize}
\end{theorem}
\begin{proof}
Ad (1) Let us assume that $(x_n)_{n\in\mathbb N}$ converges to $x$. As in the proof of Theorem \ref{g1convergenceb}, clearly $\bigcap_{s\in S} B_s(x)=\{x\}$ and for any $y\in X$ and any $s\in S$ such that $B_s(y)$ contains infinitely many terms of $(x_n)$ we have $x\in B_s(y)$. Now it is enough to show that every complement of an open ball which contains infinitely many terms of $(x_n)$ contains $x$ as well. Take any $y\in X$ and $r\in S$. If for an infinite subsequence $(x_{k_n})_{n\in\mathbb N}$ we have $x_{k_n} \notin \mathring B(y,r)$, then $d(x_{k_n},y)\geqslant r$ and, as a result of $1$-continuity,
\[
d(x,y)=\lim_{n\to\infty} d(x_{k_n},y) \geqslant r.
\]
Hence every complement of an open ball containing infinitely many terms of $(x_{n})_{n\in\mathbb N}$ contains $x$ as well.
Ad (2) To prove the second part of the Theorem, let us rephrase the assumption of b-convergence in the filled ball space:
\begin{center}
\textit{For every $y\in X$ and $r\in S$: if $B_r(y)$ contains infinitely many terms of $(x_n)$ then $d(x,y)\leqslant r$;\\ if for infinitely many terms of $(x_n)$ we have $d(x_n,y)\geqslant r$, then $d(x,y)\geqslant r$.}
\end{center}
Suppose, on the contrary, that $d(x_n,x)\not\to 0$. Then there exist $\varepsilon>0$ and a subsequence $(x_{k_n})$ such that $d(x,x_{k_n})\geqslant \varepsilon$ for all $n\in\mathbb N$. The complement of $\mathring B(x,\varepsilon)$ is a set which contains all terms of $(x_{k_n})$ -- and there are infinitely many of them. As such, $X\setminus \mathring B(x,\varepsilon)$ should contain $x$ as well, but clearly it does not, a contradiction.
\end{proof}
Filled ball spaces thus seem to be a more natural ground for topological considerations. We can see that the sequence from Example \ref{PiotrusPan} is not b-convergent if we replace the ball space $(\mathbb R,\mathcal{B}_{(0,+\infty)})$ with its filled version. We also see that the convergence in $(\mathbb R,d_d)$ coincides with b-convergence in $(\mathbb R,\mathcal{B}_{S}^+)$, where $S:=(0,1]$, as only the eventually constant sequences are convergent in both of these spaces.
Lastly, we would like to share an interesting example of a $b$-metric space whose two topologies $\tau$ and $\tau^d$ are both metrizable, but in distinct ways.
\begin{example}
We shall consider real line with the semimetric $d$ given by
\begin{equation}
d(x,y):=\begin{cases}
|x-y|, & x,y\in \mathbb Q \ \text{ or } \ x,y\in \mathbb R\setminus \mathbb Q,\\
2|x-y|, & \text{otherwise.}
\end{cases}
\end{equation}
As we have the obvious Lipschitz equivalence between $d$ and the Euclidean metric $d_e$, the topology induced by sequence convergence in $(\mathbb R,d)$ coincides with the standard topology of the reals. It remains to show that $\mathbb R$ equipped with $\tau^d$ is also metrizable.
Firstly, observe that an open ball in $(\mathbb R,d)$ is of the form
\[
\mathring B(x,r) = \begin{cases}
\left(\left(x-r,x+r\right) \cap \mathbb Q \right) \cup \left(x-\frac{r}{2},x+\frac{r}{2} \right), & \text{if } x\in \mathbb Q,\\
\left(\left(x-r,x+r\right) \cap (\mathbb R\setminus \mathbb Q) \right) \cup \left(x-\frac{r}{2},x+\frac{r}{2} \right), & \text{if } x\notin \mathbb Q.
\end{cases}
\]
Thus each open ball resembles the planet Saturn -- we have a solid center, surrounded by a perforated ring.\footnote{The authors are aware of the fact that Saturn is a gas giant -- this fact makes the use of the word \textit{solid} somewhat questionable.} Notice that every open interval is also an open set in $(\mathbb R, \tau^d)$, as it can be represented as a union of small open balls which form a base of $\tau^d$. Moreover, each intersection of the form $(x-r,x+r)\cap \mathbb Q$ and $(x-r,x+r)\cap (\mathbb R\setminus\mathbb Q)$ is also an open set. To see that, let $x\in \mathbb Q$, take a rational $r>0$, and consider the following intersection of open balls \[
\mathring B(x+2r,3r)\cap \mathring B(x-2r,3r)= \left(\left(x-r,x+r\right) \cap \mathbb Q \right).
\]
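This identity can be sanity-checked numerically as well. In the sketch below (Python; rationality is encoded by an explicit flag attached to sampled grid points, and the grid spacing is arbitrary, so it only illustrates the computation), both sides coincide on the sampled grid:
\begin{verbatim}
def d(p, q):
    # a point is a pair (value, is_rational); d doubles across the classes
    (y, ry), (z, rz) = p, q
    return abs(y - z) if ry == rz else 2 * abs(y - z)

def in_open_ball(p, center, radius):
    return d(p, center) < radius

x, r = 0.0, 0.25
pts = [(k / 1000, True) for k in range(-2000, 2001)] \
    + [(k / 1000 + 0.0005, False) for k in range(-2000, 2001)]

lhs = {p for p in pts
       if in_open_ball(p, (x + 2 * r, True), 3 * r)
       and in_open_ball(p, (x - 2 * r, True), 3 * r)}
rhs = {p for p in pts if abs(p[0] - x) < r and p[1]}
assert lhs == rhs   # recovers (x - r, x + r) intersected with the rationals
\end{verbatim}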
One can follow this reasoning to prove that all intersections of open intervals with rationals (or their complement) are open in $\tau^d$. We will now prove that $(\mathbb R,\tau^d)$ is homeomorphic to the subspace $L$ of $\mathbb R^2$ (equipped with standard metric) which is defined as
\[
L:=\left(\mathbb Q\times \{1\}\right) \cup \left( (\mathbb R\setminus \mathbb Q) \times \{0\} \right).
\]
The homeomorphism $f:\mathbb R\to L$ is given by $f(x):=(x,\chi_\mathbb Q(x))$, where $\chi_\mathbb Q$ is the indicator function of rationals. The fact that $f$ is a bijection is trivial. We will now show that both $f$ and its inverse are continuous. Without loss of generality, let $x\in \mathbb Q$. Take any open neighbourhood $A$ of $(x,1)\in L$. As $L$ inherits the natural topology from $\mathbb R^2$, there exists a rational $\varepsilon>0$ such that
\[
A \supset ((x-\varepsilon, x+\varepsilon)\cap \mathbb Q) \times \{1\}.
\]
Let $B:=\left(x-\frac{\varepsilon}{2},x+\frac{\varepsilon}{2}\right)\cap \mathbb Q$. It is an open set (with respect to $\tau^d$) which satisfies $f(B)\subset A$. This proves that $f$ is continuous at $x$ -- since it was an arbitrary point, $f$ is continuous.
Let us move on to the inverse of $f$, which is $g(x,y) = x$. Fix $x \in \mathbb R$. Let $U$ be an open subset of $\mathbb R$ (in $\tau^d$) containing $x$. Since $U$ can be expressed as a union of base sets, we obtain:
\[
g^{-1}[U]=g^{-1}\left[\bigcup_{t\in T} \bigcap_{i\leqslant n_t} \mathring B(x_{t,i},r_{t,i})\right]= \bigcup_{t\in T} \bigcap_{i\leqslant n_t} g^{-1}\left[ \mathring B(x_{t,i},r_{t,i})\right].
\]
We will show that each of the preimages appearing on the right-hand side of this equality is open in $L$; the openness of $g^{-1}[U]$ then follows. Fix $t_0 \in T$ such that $x\in \bigcap_{i\leqslant n_{t_0}} \mathring B(x_{t_0,i},r_{t_0,i})$. Notice that if $x_{t_0,i}$ is rational, then \[
\hspace{-5.5mm}
g^{-1}\left[\mathring B(x_{t_0,i},r_{t_0,i}) \right] = \bigg(\bigg(\mathbb Q \cap \bigg( x_{t_0,i} - r_{t_0,i}, x_{t_0,i} + r_{t_0,i} \bigg) \bigg) \times \{1\}\bigg) \cup \left( \left(\left( x_{t_0,i} - \frac{1}{2}r_{t_0,i}, x_{t_0,i} + \frac{1}{2}r_{t_0,i} \right)\setminus \mathbb Q \right) \times \{0\}\right),
\]
otherwise
\[\hspace{-5.5mm}
g^{-1}\left[\mathring B(x_{t_0,i},r_{t_0,i}) \right] = \left(\left(\mathbb Q \cap \left( x_{t_0,i} - \frac{1}{2}r_{t_0,i}, x_{t_0,i} + \frac{1}{2}r_{t_0,i} \right) \right) \times \{1\}\right) \cup \bigg(\bigg(\bigg( x_{t_0,i} - r_{t_0,i}, x_{t_0,i} + r_{t_0,i} \bigg)\setminus \mathbb Q \bigg) \times \{0\}\bigg).
\]
Regardless of whether $x\in \mathbb Q$ or not, $g^{-1}\left[\mathring B(x_{t_0,i},r_{t_0,i}) \right]$ remains a union of two open subsets of $L$, which guarantees its openness. A finite intersection of such sets remains open, and a union of open sets is open, thus the preimage of $U$ is open. This proves the continuity of $g=f^{-1}$.
Therefore $f$ is a homeomorphism between $L$ equipped with the natural topology and $(\mathbb R,\tau^d)$. This proves that $(\mathbb R,\tau^d)$ is metrizable. Since $L$ is obviously not homeomorphic to the real line with the standard topology, we conclude that $\tau$ and $\tau^d$ are both metrizable, but in non-homeomorphic ways.
\end{example}
This example leaves us with an open question: can simple conditions under which the topology $\tau^d$ is metrizable be stated? Results answering a similar question in the case of the convergence-based topology can be found in \cite{CJT1, WAW}, as well as in some of the references therein.
\section*{Acknowledgments}
Piotr Nowakowski was supported by the GA \v{C}R grant 20-22230L (Czech Science Foundation). The authors would also like to express their gratitude towards Professor Wiesław Kubiś and Professor Jacek Jachymski for the inspiration and support.
\section{INTRODUCTION}
\label{sec:intro}
Active galactic nuclei (AGNs), the most persistent luminous sources of electromagnetic radiation in the universe,
are widely believed to be powered by the accretion of material onto supermassive black holes (SMBHs).
Narrow-line Seyfert 1 galaxies (NLS1s) constitute a peculiar subclass of AGNs,
commonly defined by their optical spectra (a narrow width of the broad Balmer emission lines with FWHM (H$\beta$) $<$ 2000 km $\rm s^{-1}$,
along with strong optical $\rm Fe_{II}$ lines and weak forbidden lines) \citep{Osterbrock1985,Goodrich1989,Pogge2000}.
Other extreme properties include rapid X-ray variability, near-Eddington accretion rates as well as strong star formation of the host galaxies,
indicating a rapidly growing phase of their central black holes \citep{Komossa2008}.
Furthermore, the $\gamma$-ray emissions of NLS1s \citep{Abdo2009,Liao2015} render them a new member of the jetted-AGN family.
Therefore, the studies of NLS1s are essential to achieve a thorough understanding of the AGN phenomenon.
Periodic emissions are well detected in X-ray light curves of Galactic binary systems
\citep{Bolton1972,Webster1972,Ackermann2012} and
the periodic variability studies can be powerful diagnostic tools \citep{Remillard2006}.
However, in AGNs, periodicity is relatively rare.
Some detections of, or evidence for, periodic variability in the optical, X-ray, and/or gamma-ray emission of AGNs has been reported
in the literature \citep{Kidger1992,Valtonen2006,1553,Reis2012,Zhang2017a,Covino2017}.
In X-rays, a significant transient QPO has been detected in NLS1 galaxy RE J1034+396 \citep{Gierlinski2008}.
Recently, other significant transient QPO signals have been reported in the NLS1 galaxies 1H 0707-495 \citep{Pan2016} and Mrk 766 \citep{Zhang2017b}.
In this work, we re-analyze the data set adopted by \citet{Pan2016} with the weighted wavelet Z-transform (WWZ) method.
In addition to confirming the previous finding, we find a new QPO signal in the early part of the light-curve with a period cycle
of $\sim$ 8240 s at a confidence level of $\sim 3.7\sigma$.
These two QPO signals, with a frequency ratio of $\sim2:1$, are separated by an intermediate state.
Interestingly, these signals also appeared in two other observation data sets, though individually and at lower significance levels.
Our conclusions are thus further strengthened.
\section{Observation and Data Analysis}
\label{sec:Observations}
\subsection{The construction of X-ray light curves}
\label{subsec:make lc}
The X-ray Multi-Mirror Mission ({\it XMM-Newton}) was launched on December 10th 1999 by the European Space Agency (ESA).
{\it XMM-Newton} carries a set of three X-ray CCD cameras (EPIC), including two MOS \citep{Turner2001} and one PN \citep{Struder2001},
with an unprecedentedly large effective area.
The NLS1 galaxy 1H 0707-495 had been monitored 15 times, each time over 40 ks, with the three detectors between 2000 January and 2011 February in a full frame imaging mode.
The data analysis is performed following the standard procedure with the Science Analysis Software (SAS) version 16.0.0 provided by the {\it XMM-Newton} science operations center \footnote{https://www.cosmos.esa.int/web/xmm-newton/sas-threads}.
The events are extracted from a circular region of interest (ROI) of 40-arcsec radius
centered on the position (R.A. = 107.173, dec. = -49.552) of the target in 0.2-10.0 keV with a time-bin of 100 s.
We exclude the events in periods with high background flaring rates exceeding 0.4 counts/s,
and only the good events (the PATTERN $\le$ 4 for PN and PATTERN $\le$ 12 for MOS) are used in generating light curves.
For the background, the events are extracted from a source-free circle (without any other source) of the same size on the same chip.
The light curves are then corrected with the tool {\it epiclccorr}; the pile-up effect is negligible.
We combine the background-subtracted light curves of the three detectors (PN+MOS1+MOS2) into a single light-curve (we denote it as the EPIC light-curve),
and show them in the top panels of Fig. \ref{wwz} and Figs. \ref{20100917}-\ref{20070514}. In addition,
two identical Reflection Grating Spectrometers (2RGS) \citep{Herder2001} are onboard {\it XMM-Newton}.
To confirm the former results, we also derive the light curves from the 2RGS products with the tool {\it rgslccorr} following the standard procedure and
show the results in Figs. \ref{wwz_rgs}-\ref{20070514}.
The following analysis is based on these light curves.
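As an illustration of the final combination step, the following sketch (Python with astropy; the file and column names are placeholders standing in for the actual {\it epiclccorr} products) sums the background-subtracted rates of the three cameras defined on a common time grid:
\begin{verbatim}
import numpy as np
from astropy.table import Table

# placeholder file names for the epiclccorr output light curves
files = ["pn_lccorr.fits", "mos1_lccorr.fits", "mos2_lccorr.fits"]

tabs = [Table.read(f) for f in files]
time = tabs[0]["TIME"]                    # 100 s bins, common to all cameras
rate = sum(t["RATE"] for t in tabs)       # summed EPIC count rate
err = np.sqrt(sum(t["ERROR"] ** 2 for t in tabs))   # errors in quadrature
\end{verbatim}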
\subsection{The analysis of periodic variability}
\label{subsec:Search qpo}
The wavelet transform is a widely used method for searching for periodic signals in time series
by changing the parameters of the wavelets to fit the light curves in both the time and frequency domains.
This method is different from Fourier analysis in that it pays attention to a limited time span of the data at a time.
A key advantage it has over Fourier transforms is thus temporal resolution: it captures both frequency and location (in time) information.
However, the method may yield untrustworthy results because of the varying local number density of unevenly sampled data points.
A modified version, the weighted wavelet Z-transform with a Morlet mother function, has been provided by \citet{Foster1996};
it handles this problem efficiently by rescaling the wavelet function. We therefore calculate the WWZ power spectra of the light curves.
In panels II and III of Fig.\ref{wwz}, we present the color-scaled WWZ power spectrum of the whole light-curve, together with
the WWZ power spectra (showing how the WWZ power develops and evolves in frequency and amplitude over time; detailed information is provided by \citealt{Foster1996})
and the time-averaged WWZ power spectra (the average WWZ power at a given frequency along time) for the first and second segments, respectively.
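For reference, the statistic we evaluate can be sketched as follows (a minimal Python implementation following \citet{Foster1996}, with the conventional decay constant $c\approx0.0125$; it illustrates the quantity we compute, not the exact code used):
\begin{verbatim}
import numpy as np

def wwz(t, x, taus, omegas, c=0.0125):
    # Weighted Wavelet Z-transform (Foster 1996): at each (tau, omega) the
    # data are projected onto {1, cos, sin} with Gaussian weights in time.
    Z = np.zeros((len(omegas), len(taus)))
    for i, om in enumerate(omegas):
        for j, tau in enumerate(taus):
            w = np.exp(-c * om**2 * (t - tau)**2)   # local time window
            W = w.sum()
            neff = W**2 / (w**2).sum()              # effective data count
            phi = np.vstack([np.ones_like(t),
                             np.cos(om * (t - tau)), np.sin(om * (t - tau))])
            S = (w * phi) @ phi.T                   # weighted normal matrix
            coef = np.linalg.lstsq(S, (w * phi) @ x, rcond=None)[0]
            model = coef @ phi
            vx = (w * x**2).sum() / W - ((w * x).sum() / W)**2
            vm = (w * model**2).sum() / W - ((w * model).sum() / W)**2
            Z[i, j] = (neff - 3) * vm / (2 * max(vx - vm, 1e-12))
    return Z
\end{verbatim}
The time-averaged WWZ power is then simply the mean of Z along the $\tau$ axis.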
\citet{Pan2016} divided the light-curve into two segments (see Fig. 1 therein) and then calculated the power spectrum density (PSD) with Fourier analysis
for the second segment. They detected a QPO with a strong peak at $(2.6\pm 0.18) \times10^{-4}$ Hz (which corresponds to a period of 3800 s).
We use the WWZ method to re-analyze the data and surprisingly find two signals, separated by a short intermediate state, at different frequencies
(see panel II of Fig.\ref{wwz}).
We thus calculate the power spectra of the two segments separately.
The WWZ powers for the second segment reveal a clear peak (see the bottom C and D panels of Fig.\ref{wwz}), in agreement with \citet{Pan2016}.
In segment 1, another QPO appears in the WWZ power and time-averaged WWZ power at $(1.21\pm0.20)\times10^{-4}$ Hz
(see the III A and III B panels of Fig.\ref{wwz}).
The uncertainty is evaluated as the FWHM of a Gaussian function fitted around the maximum of the time-averaged WWZ power.
The maximum time-averaged WWZ power is $\sim14.5$ times the underlying continuum, with a false-alarm probability \citep[FAP;][]{Vaughan2005} of $\sim2.5\times10^{-7}$,
for the combined EPIC light-curve; for the combined 2RGS light-curve, it is 10.5 times the underlying continuum, with a FAP of $1.4\times10^{-5}$.
In order to establish a robust significance, we generate artificial
light curves with a simulator \citep{Timmer1995,Emmanoulopoulos2013} for all light-curves reported in this paper; the simulator is based on
the power spectral density (PSD) and the probability density function of the observed variation.
Therefore, the artificial light curves share the statistical and variability properties of the observed X-ray flux of the target.
For determining the best-fit PSD, we fit the PSD with a bending power law plus a constant (the null hypothesis), $P(f)~=~Af^{-1}(1+{(f/f_{bend})}^{\alpha-1})^{-1}+C$,
employing the maximum likelihood method.
In the likelihood method, we use likelihood function of $\mathcal{L}=\prod_{j=1}^{N-1}p(I_j|S_j)=\prod_{j=1}^{N-1}\frac{1}{S_j}\exp(-I_j/S_j)$,
where $I_j=I(f_j)$ representing the power at frequency $f_j=j/(N\times\Delta T)$ and $S_j$ representing the expectation value at $f_j$ basing on the null hypothesis
\citep[see][for the details]{Groth1975,Leahy1983,Barret2012}.
For the model, the free parameters $\alpha$, $f_{bend}$, $C$ and $A$ are the spectral index above the bend,
the bend frequency, the constant (Poisson noise) and the normalization, respectively \citep{Markowitz2003,McHardy2006,Gonzales2012,Kelly2014}.
Detailed information on the method for evaluating the significance level is provided
in \cite{Gierlinski2008,Emmanoulopoulos2013,1553,Bhatta2016}. In total, we generate $\sim2\times10^{5}$ artificial light curves (and the same number for the other light-curves).
In the III A panel of Fig.\ref{wwz}, the $4\sigma$ and $3\sigma$ confidence levels are shown with the red solid and blue dotted-dashed curves, respectively.
The significance curves represent the distribution of the heights of the most significant peaks across any of the sampled frequencies under the null hypothesis.
We compute the probability of obtaining the power peaks using the EPIC and 2RGS light-curves.
The probabilities for the signal are $\sim99.98\%$ ($3.7\sigma$) and $\sim99.96\%$ ($3.5\sigma$), respectively.
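A minimal sketch of the simulation step (Python; only the \citet{Timmer1995} Fourier randomization is shown, without the \citet{Emmanoulopoulos2013} amplitude-distribution matching, and the PSD parameter values as well as the helper wwz\_peak are placeholders):
\begin{verbatim}
import numpy as np

def bending_pl(f, A, f_bend, alpha, C):
    # null-hypothesis PSD: P(f) = A f^-1 (1 + (f/f_bend)^(alpha-1))^-1 + C
    return A / f / (1.0 + (f / f_bend) ** (alpha - 1.0)) + C

def tk95(n, dt, pars, rng):
    # Timmer & Koenig (1995): draw Fourier components from the model PSD
    f = np.fft.rfftfreq(n, dt)[1:]
    sig = np.sqrt(bending_pl(f, *pars) / 2.0)
    re = rng.normal(size=f.size) * sig
    im = rng.normal(size=f.size) * sig
    if n % 2 == 0:
        im[-1] = 0.0                      # Nyquist component must be real
    return np.fft.irfft(np.concatenate(([0.0], re + 1j * im)), n=n)

rng = np.random.default_rng(0)
pars = (1e-3, 1e-4, 2.5, 0.02)            # placeholder (A, f_bend, alpha, C)
peaks = [wwz_peak(tk95(4096, 100.0, pars, rng)) for _ in range(200000)]
# the (1 - p)-quantiles of `peaks` give the significance curves in the plots
\end{verbatim}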
On 2010 September 17, the same signal is also detected, as shown in Fig. \ref{20100917}. The results for the EPIC and 2RGS light-curves are shown in the left and right images, respectively.
For EPIC the confidence level of the signal at $\sim (1.21\pm0.12)\times10^{-4}$ Hz is $\sim 3.5\sigma$ (99.95\%), and for 2RGS it is $3.2\sigma$ (99.87\%).
The probability of a chance fluctuation in the EPIC power spectra of the two independent observations
at $1.21\times10^{-4}$ Hz is $< 8.6\times10^{-8}$ (the corresponding significance is $\sim5.3\sigma$).
Considering the 15 monitorings of over 40 ks each, the total exposure time is $\sim$ 1.3 Ms,
corresponding to $\sim20$ segments of length similar to those presented here.
Including the results reported above, we detect QPO signals at $\sim (1.21\pm0.12)\times10^{-4}$ Hz in two segments.
The number of trial frequency bins is $5$ within the FWHM of the power peak
(from $\sim1.1\times10^{-4}$ to $\sim1.3\times10^{-4}$ Hz; in fact, the maximum of the power spectrum is independent of the frequency binning).
Accounting for the total number of trials of $190\times5$, the combined confidence level of the signal is $\sim4.0\sigma$ (99.993\%).
We also fit the segment 1 and segment 2 light curves with autoregressive integrated moving average (ARIMA) models \citep{Chatfield2003,Kelly2014,Goyle2017}
to check the reliability of the quasi-periodic signals. Using the Akaike Information Criterion \citep[AIC;][]{Akaike1973},
we select the best-fit ARIMA models for both light curve segments.
For segment 1 and segment 2, the best-fit models are ARIMA(7,1,7) and ARIMA(9,1,9), respectively.
In the auto-correlation function (ACF) of the residuals of ARIMA(7,1,7) and ARIMA(9,1,9), the most distinct spikes, both exceeding the 95\% confidence limit,
are at a lag of 7400 s for segment 1 and at 3700 s for segment 2, respectively.
The corresponding frequencies are $1.35\times10^{-4}$ Hz and $2.7\times10^{-4}$ Hz, well in agreement with those found with the WWZ method,
indicating that the X-ray quasi-periodic variabilities in segment 1 and segment 2 may be intrinsic.
For the segment 1 light curve, the AIC results of 289 ARIMA models are shown in Fig.~\ref{arima_aic} with color-scaled AIC values,
and the standard residuals and auto-correlation function (ACF) of the residuals of the best-fit model ARIMA(7,1,7) are shown in Fig.~\ref{arima}.
Compared with the LSP and WWZ methods, the ACF cannot exactly determine the frequencies of the quasi-periodic signal.
Moreover, it is worth noting that the ACF is obtained from the residuals of the best-fit ARIMA model, rather than from the X-ray light-curve used in the LSP and WWZ methods.
These two reasons may explain the slight difference between the frequency found with the ACF and with the other two methods.
Furthermore, we know that higher order ARIMA models often produce periodicities in time series.
Hence the confidence level of the ACF peak obtained from the residuals between the X-ray light-curves and the ARIMA(7,1,7) model will be lower compared with that of the LSP and WWZ.
Even so, the most distinct spike shown in the left panel of Fig.~\ref{arima} is well above the 95\% threshold.
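For orientation, this model-selection step can be sketched as follows (Python with statsmodels; the array rate is assumed to hold the 100 s binned count rates of one segment, and the grid bounds are placeholders):
\begin{verbatim}
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf

def select_arima(rate, p_max=9, q_max=9):
    # grid search over ARIMA(p,1,q); keep the model with the lowest AIC
    best_aic, best_order, best_res = np.inf, None, None
    for p in range(1, p_max + 1):
        for q in range(1, q_max + 1):
            try:
                res = ARIMA(rate, order=(p, 1, q)).fit()
            except Exception:
                continue              # some orders may fail to converge
            if res.aic < best_aic:
                best_aic, best_order, best_res = res.aic, (p, 1, q), res
    return best_order, best_res

order, res = select_arima(rate)
rho = acf(res.resid, nlags=120)           # lags in units of the 100 s bins
limit = 1.96 / np.sqrt(res.resid.size)    # approximate 95% white-noise band
lag_s = 100.0 * (np.argmax(np.abs(rho[1:])) + 1)  # strongest spike [s]
\end{verbatim}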
To reveal the variability of the X-ray flux, we fold the light-curve using a period cycle of 8244.36 s,
with phase zero corresponding to 318550671.023 s, with the tool {\it efold} provided in the HEASOFT software\footnote{https://heasarc.gsfc.nasa.gov/docs/software/lheasoft/download.html}.
The folded X-ray light-curve is fitted with a constant model; the $\rm\chi^2/d.o.f$ is 7875/49.
It is shown in the I A panel (i.e., the insert) of Fig.\ref{wwz} with a red dashed-dotted line representing the mean count rate of 7.51 counts/s,
which reveals a significant variability of the folded light-curve with phase.
This result thus quantifies the amplitude of the X-ray flux as a function of phase.
The error bars in the folded light-curve are calculated from the standard deviation of the mean values of each phase bin.
For clarity, we show two period cycles.
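The folding operation itself is straightforward; a hedged Python equivalent of this {\it efold} step (not the HEASOFT implementation, and assuming arrays time and rate from the light curves above) is:
\begin{verbatim}
import numpy as np

def fold(time, rate, period=8244.36, t0=318550671.023, nbins=25):
    # assign each sample a phase in [0, 1) and average per phase bin;
    # the error is the standard error of the mean within each bin
    phase = ((time - t0) / period) % 1.0
    k = np.minimum((phase * nbins).astype(int), nbins - 1)
    prof = np.array([rate[k == b].mean() for b in range(nbins)])
    err = np.array([rate[k == b].std(ddof=1) / np.sqrt((k == b).sum())
                    for b in range(nbins)])
    return prof, err
\end{verbatim}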
We have searched for possible signals in the other observation data sets for this source.
Interestingly, in two observations there are tentative QPO signals,
and the results are shown in Figs.~\ref{20100915} and \ref{20070514}.
For the measurement on 2010 September 15,
the confidence level of the signal in the EPIC and 2RGS light-curves at $\sim(1.13\pm0.12)\times10^{-4}$ Hz is $\sim 2.4\sigma$ (see Fig.~\ref{20100915}).
In the left (right) image of Fig. \ref{20070514}, where the data are from the measurement on 2007 May 14 for EPIC (for 2RGS),
the confidence level of the power peak at $\sim(2.70\pm0.24)\times10^{-4}$ Hz is $\sim4.2\sigma$ ($4.1\sigma$).
Furthermore, in segment 2 of Fig. \ref{wwz_rgs}, we also detect the quasi-periodic signal at $\sim2.6\times10^{-4}$ Hz in the 2RGS data at a confidence level of $\sim 4.6\sigma$,
which is consistent with the results reported by \citet{Pan2016}.
The above periodic cycles are well consistent with those displayed in the emission on
2008 February 4, strongly favoring the transient nature of the QPOs in the X-rays of NLS1 galaxies suggested by \cite{Gierlinski2008} and \cite{Pan2016}.
\section{SUMMARY AND DISCUSSION}
\label{sec:summary}
In this work, we have re-analyzed the {\it XMM-Newton} observation data of the NLS1 galaxy 1H 0707-495.
Dividing the X-ray light-curve measured on 2008 February 4 into two segments, we construct the WWZ power spectra of the two segments separately.
In the power spectrum of segment 2 of Fig. \ref{wwz}, there is a strong signal peak at $(2.64\pm0.2)\times10^{-4}$ Hz,
which confirms the previous detection of a QPO at $(2.6\pm0.18)\times10^{-4}$ Hz by \citet{Pan2016}.
Surprisingly, we find a new QPO signal in the power spectrum of segment 1.
Such a signal is at $(1.21\pm0.20)\times10^{-4}$ Hz with a combined significance (from two independent observations) of $\sim 4\sigma$, and
the root-mean-square (rms) fractional variability in segment 1 is $\sim30\%$ for a mean count rate of 7.5 counts/s.
On 2008 February 4, the two QPO signals are separated by an intermediate state in the light-curve and the frequency ratio is $\sim1:2~(1:2.14\pm0.38)$.
The signals detected in the WWZ power spectra are confirmed with the ARIMA method.
This is the first time that two QPO signals, separated by an intermediate state, have been observed in the X-ray emission of an AGN.
Our conclusion is further supported by the presence of these two signals,
though at lower significance levels, in XMM-Newton measurements on 2007 May 14 and 2010 September 15, respectively.
The physical origin of QPO signals in X-ray binaries as well as AGNs is still to be better understood \citep{Li2004,Remillard2006}. Nevertheless
some models do suggest frequency ratios of $1:2:3$ and so on. For example, \citet{Lai2009} studied the global stability of non-axisymmetric p modes (also called inertial-acoustic
modes) trapped in the innermost regions of accretion discs around black holes and showed that
the lowest-order p modes, with frequencies $\omega \approx 0.6 m\Omega_{\rm ISCO}$ can be overstable due to general relativistic effects, where
$m=1, 2, 3, . . .$ is the azimuthal wavenumber and $\Omega_{\rm ISCO}$ is the disc rotation frequency at the so-called
innermost stable circular orbit (ISCO).
They also suggested that overstable non-axisymmetric p modes driven by the corotational instability may account for the high-frequency
QPOs observed from a number of BH X-ray binaries in the very high state while the absence of such signals in
the soft (thermal) state may result from mode damping due to the radial infall at the ISCO.
In our scenario, however, it is required that different $m$ modes appear in different time intervals.
Our new signal is consistent with the correlation between BH masses and QPO frequencies
\citep{Kluzniak2002,Abramowicz2004,Torok2005,Remillard2006,Zhou2010,Zhou2015,Pan2016}, as shown in Fig.~\ref{fm}.
We extracted the energy spectra of segment 1, segment 2 and the whole X-ray light-curve of 2008 February 4 using XSPEC \citep[v. 12.9n,][]{Arnaud1996}. The energy spectra are fitted with the model \emph{zpowerlw} (a power law corrected for the redshift $\sim0.04$ of 1H 0707-495) and no significant change is found in our
fitting (i.e., they have similar X-ray luminosities and spectral shapes). The spectral-fitting results are shown in Fig.~\ref{20080204spec} and the best-fitting parameters are listed in Tab.~\ref{Paras}. Therefore, the physical origin of this phenomenon is still a mystery.
Finally, we would like to caution that the identified QPOs are at relatively low significance and more robust signals are still needed to establish this kind of phenomenon.
\section*{Acknowledgments}
We would like to sincerely thank the anonymous referees for the useful and constructive comments.
This work was supported in part by the National Basic Research Program of China (No. 2013CB837000)
and the National Key Program for Research and Development (2016YFA0400200),
the National Natural Science Foundation of China under grants of No. 11525313
(i.e., the Funds for Distinguished Young Scholars), 11433009, 11573071, 11673067, 11733009, U1738124,
the Key Laboratory of Astroparticle Physics of Yunnan Province (No. 2016DG006),
and China postdoctoral science Foundation (No. 2017M621859).
\section{Introduction}
\label{s:intro}
X-ray images of giant cavities in galaxy clusters associated with powerful jets from central active galactic nuclei (AGN) suggest that
AGN may play an important role in the energetics of galaxy intra-cluster media (ICMs) \citep[\emph{e.g.,}][]{fabian03, birzan04, wise07}.
The estimated minimum energy required to produce cavities is often in the range of $10^{55}$ to $10^{60}$ erg
\citep{birzan04}. Observations of ICMs have shown the existence of a temperature floor of approximately
2 keV \citep[\emph{e.g.,}][]{peterson02}. The lack of gas below this temperature,
contrary to expectations of a classical ``cooling flow'' \citep{fabian94},
is historically known as the ``cooling problem''. Evidently, the energy required to suppress this cooling of the ICM below 2 keV is on the
same order as the energy in the X-ray cavities \citep{mcnamara07}. As a result, one popular
hypothesis that has emerged to solve the cooling problem is that energy injected into the ICM by AGN will quench cooling and
subsequent star formation in the central cluster galaxy. Several numerical studies such as \citet{bruggen05} and \citet{sijacki06}
have supported this hypothesis.
X-ray cavity systems are evidently formed when low density, hot plasma originating from the AGN inflates a bubble in the ICM. The low
density plasma produces a decrement in the line of sight intensity through the cavity from the normal ICM X-ray emission \citep{clarke97}.
The cavities produce roughly elliptical brightness depressions $\sim $20\% to 40\% below the surrounding regions \citep{mcnamara07}. A few dozen such
cavity systems are known \citep[see][and references therein]{dong10, rafferty06}. Some cavities are filled with radio emission from relativistic particles
and are typically found in pairs with an AGN in the cluster center between them. Other cavities devoid of radio emission above 1.4 GHz are referred to as
radio ghosts \citep[\emph{e.g.,}][]{birzan08}.
The presence of multiple cavity pairs in some cases suggests a series of outbursts from the AGN.
Hydra A, for example, contains several attached cavities filling at least 10\% of the cluster volume
within 300 kpc of the cluster center \citep{wise07}. Work done by \citet{morsony10}, however, suggests that the presence of multiple cavities
may be the result of the motion of a dynamic ICM. The size of cavities varies greatly from 1 kpc in diameter for M87
\citep{young02}, for example, to over 200 kpc in diameter in Hydra A \citep{wise07}.
X-ray cavities are likely to be long lived structures, remaining intact for over 100 Myr in Hydra A, for example \citep{nulsen05}.
On the other hand, simple hydrodynamic analyses suggest cavities filled with light gas should be unstable to Rayleigh Taylor (RT) and Kelvin Helmholtz (KH)
instabilities as they form and rise in the cluster. Several numerical
studies have been performed, which include additional physics to stabilize the bubbles. \citet{jones05}, for example, carried out
2D calculations of bubbles with magnetic fields finding that the fields suppress instabilities.
\citet{reynolds05} demonstrated the stabilizing effect of a Braginskii viscosity mitigated
by Coulomb collisions. \citet{bruggen09}
included a model for RT driven sub-grid turbulence in 3D hydrodynamic simulations of bubbles. Their results show that turbulence can also
prevent the break up of bubbles as a by-product of resulting ICM entrainment.
Surveys of cavity systems and their energy of formation requirements have found that nearly half of the studied cavities
show evidence for sufficient power to suppress cooling for
short periods of time in their host clusters \citep{birzan04, rafferty06} if the energy
in the cavities becomes distributed in the ICM.
There are, however, significant
uncertainties in the determination of the cavity
energy contents and associated time scales.
The energy content, generally assumed to be measurable in terms of the
supporting pressure of the cavity, requires, for instance, accurate measurement of
the cavity pressure and also its volume. The cavity volume, $V$, is generally estimated from circles
or ellipses fit by eye to the cluster X-ray surface brightness distribution. That two dimensional projection is
then converted into a three dimensional ellipsoid by revolution around the long axis
\citep[\emph{e.g.,}][]{birzan04, wise07}.
The minimum energy required to inflate a cavity containing internal energy, $E$, is taken to be the
enthalpy content of the cavity, $E + PV$, based on the assumption that the cavity expanded
subsonically in the ICM
at constant pressure, $P$.
The timescale needed to inflate the cavity and to measure the associated cavity power
is usually estimated from buoyancy or characteristic sound crossing times arguments.
A substantial amount of effort has gone into numerical studies of outflows from AGN and
the creation of bubbles containing hot AGN generated plasma. To make direct comparisons with
observations, however, realistic synthetic observations from these calculations are needed. Several authors have applied this approach
with various models of jets and bubbles in a cluster \citep[\emph{e.g.,}][]{bruggen05, diehl07, bruggen09, morsony10}. The synthetic observations of
magnetically dominated cavities by \citet{diehl07} were able to produce several of the observed characteristics seen in real observations including
bright rims commonly found outlining cavities \citep{mcnamara07}. \citet{bruggen09} were able to show consistency between their
measurement of $PV$ from synthetic observations of bubbles with sub-grid turbulence and a sample of observed cavities by \citet{diehl08}.
Studies such as \citet{dong10} and \citet{ensslin02} have tested the efficiency of detecting cavity systems from X-ray observation, but to our knowledge,
a detailed assessment of the
reliability of the observational techniques used to determine cavity enthalpy has not
been performed. Synthetic observations of the complex interactions involved in the formation of X-ray cavities provide a
powerful test of these methods.
The primary goal of this paper is to test common observational techniques for
determining cavity energetics. We employed a pair of 3D magnetohydrodynamic (MHD) simulations of jets in realistic cluster environments
presented in \citet{oj} (henceforth, OJ10).
These simulations were post-processed to yield
synthetic X-ray observations. Section \ref{s:calc} describes the models and numerical methods. Section \ref{s:synobs} and \S \ref{s:measure}
present the observations and analysis, while \S \ref{s:conclusion} lists the conclusions of this work. In the analysis
we have used $H_{0}=72$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{M}=0.3$, and $\Omega_{\Lambda}=0.7$.
\section{Simulation Details}
\label{s:calc}
The simulations presented, described by OJ10, were computed on a 3D Cartesian grid using a 2nd order total variation diminishing
(TVD) non-relativistic MHD code described by \cite{rj} and \cite{ryu}. A gamma-law
equation of state was assumed with $\gamma = 5/3$; radiative cooling was
negligible for the conditions of these simulations and was therefore ignored.
Computational details are provided in OJ10. We provide here only an outline as needed to
evaluate the present work.
The physical extent of the computational grid was $x =$ 600 kpc, $y = z =$ 480 kpc.
Each computational zone represented one cubic kiloparsec with $\Delta{x} = \Delta{y} = \Delta{z} = 1$ kpc. Two oppositely directed jets were
centered within the grid and aligned with the x-axis. A passive tracer, $C_{jet}$, was advected with the flow to identify jet material from
ambient material. Two different jet models are utilized in the present discussion;
1) a so-called relic (RE) model in which quasi-steady jets were on for
26 Myr then turned off and, 2) an intermittent (I13) model in which the jet power
cycled on and off at 13 Myr intervals throughout the simulation.
\subsection{Bi-directed Jet Properties}
Both simulations featured bi-directed jets that had an internal Mach 3 speed at full power,
corresponding to a physical speed $v_{jet}$ = 0.10c. These jets
originated from a cylindrical region $r_{jet}$ = 3 kpc in radius and $l_{jet}$ = 12 kpc in length centered in the grid.
The gas injected at
the jet origin was less dense than the ambient gas by a factor of one hundred and was initially in pressure equilibrium
with its local surroundings. Temporal variation in the jet was controlled
by an exponential ramp in density, pressure and momentum density over 1.64 Myr for I13 and 0.65 Myr for RE.
Physical conditions inside the jet source region were relaxed to
a volume average from a sphere surrounding the jet origin as the jet
turned off, then evolved back from instantaneous volume averages for the
local medium to the desired jet conditions as jet power resumed.
The combined power from both jets at peak was $L = 1.2\times 10^{46}$ erg s$^{-1}$. Small magnetic and
gravitational energy contributions to the jet energy flux were ignored in
defining the jet power.
Those energy terms were, however, followed explicitly
in the simulations and accounted for in energy exchanges between the jets and
their surroundings.
The magnetic field launched from the jet was purely toroidal, $B_{\phi} = B_{0}(r/r_{jet})$, inside
a jet core region, with $\beta = P_{jet} / (B^{2} / 8\pi) \approx 100$ on the perimeter of
the jet core. There was a thin ``sheath'' surrounding the core, through which
all the jet properties, including the magnetic field, transitioned to local ICM conditions.
\subsection{Cluster Environment}
\label{s:cluster}
The simulation cluster environments in OJ10 were designed to mimic a realistic,
relaxed cluster. Gravitational potential and density
profiles were selected to yield a temperature profile typical of clusters in hydrostatic equilibrium. A tangled ambient magnetic
field with a characteristic coherence length typical of observed clusters
was chosen to break symmetry over the grid. The local ICM magnetic
pressure averaged to about 1\% of the gas pressure, although that ratio
fluctuated by large factors over the volume.
The NFW \citep{navarro} dark matter density distribution
\begin{align}
\rho_{dm} = \frac{\rho_{s}}{\left(\frac{r}{r_{dm}}\right)\left(1 + \frac{r}{r_{dm}}\right)^{2}},
\end{align}
was used to generate the gravitational acceleration
\begin{align}
g(r) = -\frac{4\pi Gr_{dm}^{3}\rho_{s}}{r^{2}}\left[ln\left(1 + \frac{r}{r_{dm}}\right) - \frac{r}{r + r_{dm}}\right] \label{f:grav}
\end{align}
where $r_{dm} =$ 400 kpc and $\rho_{s} \approx 4.3 \times 10^{-26}$ g cm$^{\textrm{-3}}$. This gave a virial mass of $M_{v} = 5 \times
10^{14}\,\,M_{\odot}$ for a virial radius of 2 Mpc, which was within a factor of a few of the Perseus Cluster \citep[\emph{e.g.,}][]{ettori98}.
The gas density of the cluster was initialized with a density distribution
\begin{align}
\rho_{a}(r) = \rho_{0} \left[ \frac{f_{1}}{\left(1 + \left(\frac{r}{r_{c1}}\right)^{2}\right)^{\frac{3\beta}{2}}} + \frac{f_{2}}{\left(1 +
\left(\frac{r}{r_{c2}}\right)^{2}\right)^{\frac{3\beta}{2}}}\right], \label{rhobeta}
\end{align}
given by OJ10, where $f_{1} = 0.9$, $f_{2} = 0.1$, $r_{c1} =$ 50 kpc, $r_{c2} =$ 200 kpc and $\beta =$ 0.7. The density scale was
$\rho_{0} = 8.33 \times 10^{-26}$ g cm$^{-3}$. The pressure
was determined by hydrostatic equilibrium, yielding a temperature profile resembling typical clusters (cf OJ10).
The central pressure was $P_{0} = 4 \times 10^{-10}$ dyne cm$^{-2}$, giving a sound speed $c_{0} = 895$ km s$^{-1}$ in the cluster core
for a $\gamma =$ 5/3 gas.
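As an illustration, this equilibrium profile can be reproduced in a few lines (Python; the radial grid is our choice, with all constants in cgs units taken from the values quoted above):
\begin{verbatim}
import numpy as np

G, KPC, MH = 6.674e-8, 3.086e21, 1.6726e-24        # cgs constants
r = np.linspace(1.0, 600.0, 600) * KPC

def g_nfw(r, r_dm=400 * KPC, rho_s=4.3e-26):
    # the NFW gravitational acceleration defined above (negative = inward)
    return -(4 * np.pi * G * r_dm**3 * rho_s / r**2) \
        * (np.log(1.0 + r / r_dm) - r / (r + r_dm))

def rho_a(r, rho0=8.33e-26, rc1=50 * KPC, rc2=200 * KPC, beta=0.7):
    # the double beta-model gas density defined above
    return rho0 * (0.9 / (1.0 + (r / rc1)**2)**(1.5 * beta)
                   + 0.1 / (1.0 + (r / rc2)**2)**(1.5 * beta))

# hydrostatic equilibrium dP/dr = rho * g, integrated out from P(0) = P0
P0 = 4e-10
P = P0 + np.cumsum(rho_a(r) * g_nfw(r)) * (r[1] - r[0])
T_keV = 0.5 * MH * P / (1.602e-9 * rho_a(r))       # T = mu m_H P/(k rho)
c0 = np.sqrt(5.0 / 3.0 * P0 / 8.33e-26)            # ~9e7 cm/s = ~895 km/s
\end{verbatim}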
On top of the initialized hydrostatic equilibrium in the ICM, a Kolmogorov spectrum of density fluctuations was imposed
with a maximum local amplitude, $\pm0.10\rho_{a}(r)$, as described by OJ10.
The initially tangled and divergence-free cluster magnetic field was given by OJ10 as
\begin{align}
\overrightarrow{B} = B_{\theta}\hat{\theta} + B_{\phi}\hat{\phi}
\end{align}
where the components are
\begin{align}
B_{\theta} = \frac{F_{1}(r)\cdot m}{r}\sin\theta\,\cos(m\phi) \\
B_{\phi} = \frac{F_{2}(r)\cdot n}{r}\sin(n\theta) - \frac{F_{1}(r)}{r}\sin(m\phi)\,\sin(2\theta)
\end{align}
with $m = n =$ 3. $F_{1}(r)$ and $F_{2}(r)$ are functions designed to keep an approximately constant $\beta$ atmosphere with fluctuations that
vary over scales of a few tens of kpc. The scale of the fields maintains a $\beta \approx$ 100 on average over the cluster volume. The
maximum magnitude of the field is $\sim 10 \mu$G.
\subsection{Relativistic ``Cosmic Ray'' Electrons}
\label{s:crs}
The simulations included a population of relativistic cosmic ray electrons (CRs) passively advected with the MHD quantities.
The numerical details of the CR transport are given in
\citet{jones99}, \citet{treg01} and \citet{treg04}. A small, fixed fraction of
the thermal electron flux through shocks was injected into the CR electrons population and
subjected to first order Fermi acceleration according to the standard test-particle
theory. Downstream of shocks the CRs were also subject to adiabatic and synchrotron/inverse Compton radiative
energy changes.
The nominal CR pressure, which was neglected, was generally less than 1\% of the gas pressure.
CRs with Lorentz factors from $\gamma =$ 10 to $\gamma \sim 1.6\times10^{5}$ were tracked as a
piecewise power law distribution.
The inclusion of CRs allowed us to calculate inverse Compton and synchrotron emissions in a self-consistent manner. A separate analysis paper
will include detailed consideration of radio synchrotron emission and high energy non-thermal
X-ray ($>$ 10 keV) emission.
Only X-rays below 10 keV are discussed in this paper. Those are entirely dominated in our
computations by thermal emissions from the cluster ICM, although inverse
Compton emissions are included.
\section{Synthetic X-ray Observations}
\label{s:synobs}
The physical quantities evolved through MHD simulations of radio jets provide unparalleled insight into the complex dynamics of MHD flows,
but it has not always been straightforward to relate these quantities to observations. To make this connection and address questions raised
by observations, emission processes in these simulations must be properly calculated and converted into a synthetic observation.
The approach used here to model synthetic X-ray observations was based on \citet{treg02}.
Observations were computed for an assumed cluster redshift $z = 0.0594$ ($D_L = 240$ Mpc)
corresponding approximately to the Hydra cluster \citep{wise07}.
In order to understand better the influence of projection effects, we
carried out the synthetic observations at three representative angles,
$i = 80^{\degr}, 45^{\degr}, ~{\rm and}~30^{\degr}$, between
the jet axis and the line of sight.
Two emission mechanisms were included: thermal bremsstrahlung and inverse Compton scattering of CMB photons.
In each zone of the
computational grid we calculated emissivities based on local properties of the thermal or CR
electron population, then corrected them for the cluster redshift.
Thermal bremsstrahlung or free-free emissivity was computed as
\begin{align}
j_{\nu_{local}} = 5.4\times10^{-39}\,g_{ff}(\nu_{local},T_{e})Z_{i}^{2}\frac{n_{e}n_{i}}{T_{e}^{1/2}}e^{-h\nu_{local}/kT_{e}}
\,\,erg\,cm^{-3}\,s^{-1}\,sr^{-1}\,Hz^{-1}, \label{f:brem}
\end{align}
where $\nu_{local} = \nu_{obs}\left(1 + z\right)$.
The free-free Gaunt factor, $g_{ff}$, was computed by interpolation from the values calculated for plasma with typical
ICM properties in Table 1 of \citet{nozawa}.
We assume a fully ionized $Z_{i} = 1$ hydrogen gas with an ideal gas equation of state
where the average temperature per zone was
$T_{e} = T_{i} = T(keV)= \mu P m_{H}/(1.602\times10^{-9}\rho )$
with $\mu =$ 1/2 and $P$ and $\rho$ in cgs units. The numerical resolution of
discontinuities in the simulations was a few zones.
Consequently, the contact discontinuity between AGN (jet) and ICM plasmas was a few zones of moderately
high density, very high temperature gas. These transition regions were artificial and, in
the absence of some equivalent real viscous mixing, should not contribute to line-of-sight intensities. To reduce this artifact,
any zone with $C_{jet} \geq 0.01$ (partially AGN plasma) had $j_{\nu_{local}}$ for
thermal bremsstrahlung set to zero. Equation \ref{f:brem} was integrated numerically over a given range of frequencies to simulate finite bandwidths
of real instruments.
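As an illustration of this band integration, the following Python sketch evaluates Equation \ref{f:brem} and integrates it over an observed band; the function names and the flat Gaunt factor in the demonstration call are our placeholders, and the Gaunt factor would in practice be interpolated from the tables of \citet{nozawa}:
\begin{verbatim}
import numpy as np

H_CGS, K_CGS, KEV = 6.626e-27, 1.381e-16, 1.602e-9

def j_nu_ff(nu, T_e, n_e, n_i, g_ff, Z_i=1.0):
    # Eq. (brem); T_e in Kelvin, emissivity per Hz per steradian
    return (5.4e-39 * g_ff * Z_i**2 * n_e * n_i / np.sqrt(T_e)
            * np.exp(-H_CGS * nu / (K_CGS * T_e)))

def band_emissivity(e_lo_keV, e_hi_keV, z, T_e, n_e, n_i, gaunt, n_nu=64):
    # integrate j_nu over an observed band, blueshifted to the local frame
    nu = np.linspace(e_lo_keV, e_hi_keV, n_nu) * KEV / H_CGS * (1.0 + z)
    return np.trapz(j_nu_ff(nu, T_e, n_e, n_i, gaunt(nu, T_e)), nu)

# demo: 1.5-2.5 keV band for a 3 keV gas with a flat Gaunt factor
T_K = 3.0 * KEV / K_CGS
print(band_emissivity(1.5, 2.5, 0.0594, T_K, 1e-3, 1e-3,
                      lambda nu, T: 1.2))
\end{verbatim}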
This paper focuses on energy ranges accessible to observatories typified by \emph{Chandra}. At those energies the inverse Compton emission in these
simulations is negligible.
Consequently, we omit details of its computation.
Assessing projection effects is critical to comparisons of these synthetic observations with real observations. We developed a parallelized
ray casting engine that allows the user to define an arbitrary orientation and resolution for the output images. A ray was cast normal to
the image plane through the appropriately aligned grid of emissivities. Tri-linear interpolation was used at regular intervals
along the ray and summed to give the total intensity along the line of sight, assuming an optically thin medium. Finally, intensities were
converted into fluxes per pixel by multiplying the line-of-sight intensity by the solid angle of an image pixel.
The image resolution was set to 1 arc sec, which matched the
simulation 1 kpc physical resolution at the selected 240 Mpc source distance.
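A serial sketch of this ray integration, using trilinear interpolation from SciPy in place of our parallelized engine (the grid size and Gaussian test emissivity below are placeholders), is:
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

KPC = 3.086e21   # cm per kpc

def cast_ray(emis, axes, p0, n_hat, ds=0.5, s_max=500.0):
    # integrate an optically thin emissivity cube along one ray;
    # emis in erg cm^-3 s^-1 sr^-1, grid axes and lengths in kpc
    interp = RegularGridInterpolator(axes, emis,
                                     bounds_error=False, fill_value=0.0)
    s = np.arange(0.0, s_max, ds)               # samples along the ray
    pts = p0[None, :] + s[:, None] * n_hat[None, :]
    return np.trapz(interp(pts), s) * KPC       # LOS intensity

x = y = z = np.linspace(-250.0, 250.0, 101)
emis = 1.0e-30 * np.exp(-(x[:, None, None]**2 + y[None, :, None]**2
                          + z[None, None, :]**2) / (2.0 * 50.0**2))
I = cast_ray(emis, (x, y, z), np.array([0.0, 0.0, -250.0]),
             np.array([0.0, 0.0, 1.0]))
flux_per_pixel = I * (1.0 / 206265.0)**2        # 1 arcsec^2 pixel [sr]
\end{verbatim}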
\subsection{Relic (RE) Observations}
\label{s:REobs}
Synthetic X-ray observations of the RE (relic) simulation are shown for several
times and projection orientations in Figure \ref{fig:REobs}.
Following a common practice designed to highlight AGN-blown cavities, the computed
brightness distribution of the ICM outside of the identified cavities was fit with a double
$\beta$-profile (\S \ref{s:enthalpy})
and then divided out to accentuate the X-ray cavities.
The double $\beta$-profile was determined independently for each synthetic X-ray observation.
Figure \ref{fig:REobs} shows the synthetic X-ray observations
in a 1.5-2.5 keV band divided by the best-fit
double $\beta-$profile in each instance. Time evolution of the system is displayed
from left to right. At the
earliest time shown, 26.3 Myr, the jets had just turned off.
Each row corresponds to a different orientation, with the inclination angle of the
jet with respect to the line of sight decreasing from top to bottom.
There are several
notable features in each observation.
Cavities are seen as brightness decrements from the surrounding emission. A pair of cavities
is seen at 26.3 and 52.5 Myr at large inclination but appears to merge into a single cavity at small inclination. Presumably X-ray emission from a
central galaxy would prevent the two cavities from appearing as a single cavity. Our simulations, however, do not include
emission from gas bound specifically to the central galaxy. All inclinations show a pair of
cavities at 157.5 Myr. The contrast in brightness of the cavity to the surrounding gas diminishes with both distance from cluster center and
decreasing inclination. A detailed discussion on these trends can be found in \citet{ensslin02}. The bow shock from the jets is
seen in all observations. At early times the bow shock appears as a bright rim surrounding the cavities. At later times the bow shock
has moved far from the edge of the cavities and no longer appears as a bright rim.
\subsection{Intermittent (I13) Observations}
\label{s:I13obs}
Figure \ref{fig:I13obs} shows a similar set of observations to those in Figure \ref{fig:REobs},
but for the I13 (intermittent jet) simulation. Several new features are seen
with the introduction of jet intermittency. Late times at every inclination reveal
``ripples'' between the cavities and the bow shock.
These features correspond to sound waves generated at the cavity walls during periods of jet activity.
Similar ``ripples'' have been seen in
observations of the Perseus cluster \citep{fabian03}. A second, related
distinction from the RE
observations is the appearance of bright rims outlining the cavities at every
epoch. At smaller inclination angles the bright rims resemble the ``arms'' seen on smaller scales in NGC 4636 \citep{baldi}.
\section{Cavity Measurements}
\label{s:measure}
In the following analysis we attempted to apply common techniques for extracting two fundamental parameters for each cavity detected in
each observation throughout the elapsed time of each jet model. Every epoch for a given simulation represents a separate test for measuring both
cavity enthalpy and cavity age. Following this time evolution allows us to detect biases and trends in the quality of the measurements. For the
remainder of this paper we refer to values measured directly from the simulation
data as the ``actual'' values, while values measured from the synthetic
observations are referred to as ``observed'' values. We report the fractional error on a measured quantity $x$
as $\epsilon_{x} \equiv (x_{observed} - x_{actual})/x_{actual}$.
\subsection{Enthalpy}
\label{s:enthalpy}
The minimum energy required to produce a cavity is generally
estimated as the total thermal energy in the cavity and the work done inflating the cavity slowly at constant pressure; that is,
the enthalpy in the cavity, $H = U_{therm} + PV\sim {\rm several}\times PV$.
In particular, if the adiabatic index of the cavity plasma is $\gamma_c$,
\begin{align}
H = \frac{\gamma_c}{\gamma_c - 1}PV.
\end{align}
For a gas with $\gamma_c = 5/3$, applicable to our simulations, this gives $H = (5/2)PV$.
Estimation of cavity enthalpy, under the assumption that the cavity was inflated at
its current location, requires knowledge of both the cavity volume and surrounding gas pressure.
Since the AGN activity disturbed large volumes of the ICM it is not straightforward to
determine either its pressure distribution or, for that matter, the volume occupied by
the AGN generated cavity. A common strategy to resolve these two problems involves
fitting a simple, symmetric brightness profile to regions of emission that seem
not to include cavity structures. That profile can then be used to
obtain estimates for the average radial ICM properties.
There are several variations of this strategy \citep[\emph{e.g.,}][]{wise04, birzan04}.
Our goal was not to determine the best strategy but to use a common approach as an example.
We followed a procedure similar to \citet{wise04} and \citet{xue00} to extract pressure and \citet{birzan04} to extract volume from each observation.
For this exercise we used a double $\beta-$profile \citep[\emph{e.g.,}][]{ikebe96} of the form
\begin{align}
S_{X}(r_{p}) = S_{0} \left( S_{01}\left[1 + \left(\frac{r_{p}}{R_{C1}}\right)^{2}\right]^{1/2 - 3\beta_{1}} +
S_{02}\left[1 + \left(\frac{r_{p}}{R_{C2}}\right)^{2}\right]^{1/2 - 3\beta_{2}} \right), \label{f:dbeta}
\end{align}
where $r_{p}$ is the projected distance from cluster center, to model the brightness distribution of the X-ray emitting ICM. \citet{xue00}
discuss the benefits of using this profile as opposed to a single
$\beta-$profile. The profile was fit independently to each 1.5-2.5 keV
synthetic X-ray observation of the RE and I13 simulations.
The synthetic images were divided into annular bins, each $\approx$ 1 arc sec in width. To remove any effects of the X-ray cavities in
characterizing the brightness profile of the cluster plasma, a set of ellipses was chosen that best fit each cavity by eye. Any pixels within
these ellipses
were excluded from the annular bins. The average flux from the remaining pixels was used to define an azimuthally averaged brightness profile that was
fit with the double $\beta$-profile. Refer to Appendix \ref{a:fitting} for details regarding the fitting procedure. Figure \ref{fig:betafit}
shows example double $\beta-$profile fits for observations of both models
at an inclination of $i$ = 45$^{\text{o}}$. The best fit profiles resulted in
$0.5 \le \beta_{1} \le 1.5$, $0.9 \le \beta_{2} \le 1.8$ with typical values $R_{C1} \sim 50$ kpc, $R_{C2} \sim 200$ kpc, $S_{01} \sim 0.8$, and
$S_{02} \sim 0.2$. The undisturbed cluster parameters were $\beta_{1} = 0.7$, $\beta_{2} = 1$, $R_{C1} = 55$ kpc, $R_{C2} = 260$ kpc, $S_{01} = 0.9$,
and $S_{02} = 0.1$.
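A compact sketch of such a fit, using least squares with the overall scale $S_{0}$ absorbed into $A_{i} = S_{0}S_{0i}$ to avoid a degenerate normalization (the synthetic data below stand in for a measured, cavity-masked profile), is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def double_beta(r_p, A1, Rc1, b1, A2, Rc2, b2):
    # Eq. (dbeta) with S0 absorbed into A_i = S0 * S0i
    return (A1 * (1.0 + (r_p / Rc1)**2)**(0.5 - 3.0 * b1)
            + A2 * (1.0 + (r_p / Rc2)**2)**(0.5 - 3.0 * b2))

r_bins = np.linspace(2.0, 400.0, 120)        # annulus radii [kpc]
truth = (0.9, 55.0, 0.7, 0.1, 260.0, 1.0)    # undisturbed cluster values
S_obs = double_beta(r_bins, *truth)
S_obs *= 1.0 + 0.05 * np.random.default_rng(0).standard_normal(S_obs.size)

p0 = [0.9, 50.0, 0.7, 0.1, 200.0, 1.0]       # initial guesses
popt, pcov = curve_fit(double_beta, r_bins, S_obs, p0=p0)
\end{verbatim}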
\subsubsection{Cluster Temperature Profile}
\label{s:temp}
The ICM temperature, $T_{ICM}$, at a given projected radius, $r_{p}$, was determined from the ratio of fluxes in two bands:
1.5-2.5 keV and 9.5-10.5 keV.
In particular, the equation
\begin{align}
\frac{S_{X,1.5-2.5}(r_{p})}{S_{X,9.5-10.5}(r_{p})} = \frac{\int_{\nu = (1+z)1.5\,keV/h}^{\nu = (1+z)2.5\,keV/h}g_{ff}(\nu,T_{ICM})e^{-h\nu/T_{ICM}}d\nu}
{\int_{\nu = (1+z)9.5\,keV/h}^{\nu = (1+z)10.5\,keV/h}g_{ff}(\nu,T_{ICM})e^{-h\nu/T_{ICM}}d\nu} \label{f:temperature}
\end{align}
was solved for $T_{ICM}$ using the aforementioned double $\beta$-profile fits.
Following \citet{wise04}, we assumed that the two components
of the double $\beta$-profile corresponded to two phases of the ICM with temperatures $T_{ICM,1}$ for the inner component and $T_{ICM,2}$ for the
outer component. $T_{ICM,1}$ was taken to be the minimum and $T_{ICM,2}$ the maximum temperatures found using Equation \ref{f:temperature}. The
projected radius for the transition from $T_{ICM,1}$ to $T_{ICM,2}$ was chosen to be the average of $R_{C1}$ and $R_{C2}$. Figure
\ref{fig:temp} shows a comparison between the actual azimuthally averaged temperature profile as a function of physical radius from the RE initial
conditions and the two component
projected profile. Note that $T_{ICM,1}$ mostly exceeded the actual inner core temperatures, since hotter gas along the line of sight contaminated
$S_{X}$ at small $r_{p}$.
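A minimal numerical inversion of Equation \ref{f:temperature}, assuming for illustration a flat Gaunt factor (the analysis itself interpolates $g_{ff}$ from \citet{nozawa}), can be written as:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Z, KEV, H = 0.0594, 1.602e-9, 6.626e-27

def band_flux(T_keV, e_lo, e_hi, g_ff=lambda nu, T: 1.0):
    # one band integral from Eq. (temperature); T and edges in keV
    lo, hi = (1 + Z) * e_lo * KEV / H, (1 + Z) * e_hi * KEV / H
    f = lambda nu: g_ff(nu, T_keV) * np.exp(-H * nu / (T_keV * KEV))
    return quad(f, lo, hi)[0]

def T_from_ratio(ratio_obs):
    # invert the 1.5-2.5 / 9.5-10.5 keV flux ratio for T_ICM [keV]
    g = lambda T: (band_flux(T, 1.5, 2.5)
                   / band_flux(T, 9.5, 10.5)) - ratio_obs
    return brentq(g, 0.5, 20.0)

r = band_flux(3.0, 1.5, 2.5) / band_flux(3.0, 9.5, 10.5)
print(T_from_ratio(r))   # recovers 3.0 keV
\end{verbatim}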
\subsubsection{Cluster Electron Density Profile}
\label{s:density}
The radial thermal electron density profile of component $i=1,2$ was obtained by inverting equation \eqref{f:dbeta}, following the derivation of
\citet{xue00}:
\begin{align}
n_{ei}^{2}(r_{p}=0) = \left(\frac{4\pi^{1/2}}{\alpha(T_{ICM,i})g_{i}\mu_{e}}\right)\left(\frac{\Gamma(3\beta_{i})}{\Gamma(3\beta_{i} - 1/2)}\right)
\left(\frac{S_{0i}}{R_{Ci}}\right)A_{ij},
\end{align}
where
\begin{align}
\alpha(T_{ICM,i}) = \frac{2^{4} e^{6}}{3 m_{e} \hbar c^{2}} \left(\frac{2\pi\,1.602\times10^{-9}\,T_{ICM,i}}{3 m_{e} c^{2}}\right)^{1/2}, \\
g_{i} = \int_{\nu = (1+z)1.5\,keV/h}^{\nu = (1+z)2.5\,keV/h} g_{ff}(\nu,T_{ICM,i})\,e^{-h\nu/T_{ICM}}\,d\nu,
\end{align}
and
\begin{align}
\frac{1}{A_{ij}} = 1 + \frac{R_{Ci}S_{0j}g_{i}}{R_{Cj}S_{0i}g_{j}}\left(\frac{T_{ICM,i}}{T_{ICM,j}}\right)^{1/2}
\left[\frac{\Gamma(3\beta_{j})\Gamma(3\beta_{i} - 1/2)}{\Gamma(3\beta_{i})\Gamma(3\beta_{j} - 1/2)}\right], \nonumber \\
j = 1,2\,\,and\,\,j\ne i.
\end{align}
The values for $S_{0i}$, $R_{Ci}$, and $\beta_{i}$ were the best fit values for each component from the 1.5-2.5 keV observation.
For simplicity, we assumed pure hydrogen. The electron weight, $\mu_{e} = 2 / (1 + X)$, where $X$, the hydrogen mass fraction, was therefore unity.
The total electron density at a projected radius $r_{p}$ was determined by
\begin{align}
n_{e}(r_{p}) = \left(\displaystyle\sum_{i=1}^{2}n_{ei}^{2}(r_{p}=0)\left[1 +
\left(\frac{r_{p}}{R_{Ci}}\right)^{2}\right]^{-3\beta_{i}}\right)^{1/2}. \label{f:betaden}
\end{align}
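Given the fitted central values $n_{ei}^{2}(r_{p}=0)$, evaluating Equation \ref{f:betaden} is straightforward; a sketch with illustrative central densities (not the fitted values) is:
\begin{verbatim}
import numpy as np

def n_e_profile(r_p, ne2_0, R_c, beta):
    # Eq. (betaden): quadrature sum of the two fitted components;
    # ne2_0 holds the central n_ei^2 [cm^-6], R_c in kpc
    terms = [n2 * (1.0 + (r_p / rc)**2)**(-3.0 * b)
             for n2, rc, b in zip(ne2_0, R_c, beta)]
    return np.sqrt(sum(terms))

r_p = np.linspace(0.0, 300.0, 61)
n_e = n_e_profile(r_p, ne2_0=[2.5e-3, 3.0e-5],
                  R_c=[50.0, 200.0], beta=[0.7, 1.0])
\end{verbatim}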
Figures \ref{fig:REdensity} and \ref{fig:I13density} show example observed electron density profiles determined by Equation \ref{f:betaden} compared
to the actual azimuthally averaged electron density from the RE simulation from observations at $i$ = 45$^{\degr}$. The data for the actual density
were generated considering only computational zones with $C_{jet}$ = 0
(pure ICM plasma) to avoid any contamination by AGN plasma. Near to and within the jet launching region, $\lesssim\,l_{jet}$, this
condition was, of course, not met while the jets were active. For this reason there are no data points at small radii in Figures \ref{fig:REdensity} and
\ref{fig:I13density} when the jets were active.
The initial conditions in the upper left panel of Figure \ref{fig:REdensity} reveal a bias towards
lower density within the inner core radius (${\lesssim}$ 50 kpc) with a fractional error $\epsilon_{\rho} \sim$ 15\%.
This is a result of the bias toward
higher temperatures in the determination of $T_{ICM,1}$ discussed in \S \ref{s:temp}. By holding $j_{\nu}$ constant in Equation \ref{f:brem} for
an observation with
$h\nu_{local}$ comparable to $kT_{e}$, it can be shown that an overestimate of the gas temperature results in an
underestimate of the electron density. After jet activity terminated in the RE simulation, the observed density profile matched the actual ICM
profile with an error $\epsilon_{\rho} \sim$ 30\%. The largest contribution to the
error is from regions influenced by the bow shock. Excluding these regions gives $\epsilon_{\rho} \lesssim$ 15\%. Evidence of the bow shock is seen
in the actual density profile as a bump in excess of the smooth,
observationally determined profile at $r_{p}$ = 35 kpc for 26.3 Myr, 70 kpc for 52.5 Myr, and 200 kpc at 157.5 Myr. These distances correspond to the
cluster-centric distance of the bow shock orthogonal to the jet axis.
In general, the observed distribution closely matches the actual distribution for radii outside the inner core radius, $R_{C1}$, at all times, with
$\epsilon_{\rho} \lesssim$ 10\%.
The I13 profiles in Figure \ref{fig:I13density} similarly display fractional errors for the observed distribution
$\epsilon_{\rho} \lesssim$ 15\% at all times exterior to $R_{C1}$. Evidence of the bow shock is seen here as well in the actual profile as a bump
in excess over the observed profile at $r_{p}$ = 20 kpc for 26.3 Myr, 55 kpc for 52.5 Myr, 100 kpc for 105 Myr, and 200 kpc for 170.6 Myr.
The electron density profile of the ICM obtained from the brightness profile was reliable to within $\sim$ 20\% outside of regions influenced
by shocks regardless of jet intermittency and observed inclination. Inside of shock influenced regions the fractional error was as high as $\sim$ 40\%.
\subsubsection{Cluster Pressure Profile}
\label{s:pressure}
The (azimuthally averaged) radial ICM pressure profile was calculated for each observation from the
double $\beta$-profile model
temperature and density profiles just outlined, assuming an ideal gas equation
of state.
Figures \ref{fig:REpressure} and \ref{fig:I13pressure} show example pressure profiles determined from the observed temperature
and density profiles
along with azimuthally averaged ICM pressures
and AGN pressures extracted directly from the RE and I13 simulations. Following the procedure used for density, only zones with $C_{jet}$ = 0 (pure ICM plasma)
were used to measure the ICM pressure profile. Zones with $C_{jet} \geq$ 0.01 were used to measure the AGN pressure profile.
The top left panel of Figure \ref{fig:REpressure} shows the results of the
double $\beta$-profile inversion (the ``observed'' profile) for the initial
conditions. Within $R_{C1}$ the observed profile underestimates the actual ICM profile by $\gtrsim$ 10\%. This is due to the underestimate
of the electron density discussed in \S \ref{s:density}. Outside of $R_{C1}$ the ICM pressure profile is measured with $\epsilon_{P} \le$ 10\%.
At the time the jets are turned off for the RE simulation, the top right panel of Figure \ref{fig:REpressure}, the ICM pressure is measured with
$\epsilon_{P}$ $<$ 20\% at all radii. The signature of the bow shock in the ICM (solid line) can be seen at 35 kpc as a bump in excess over the
observed profile. At 52.5 Myr, the lower left panel, the observed profile significantly overestimates the ICM pressure
by $\epsilon_{P}$ $>$ 10\% within $R_{C1}$. The pressure $\sim$ 35 kpc behind the bow shock has dropped $\sim$ 20\% from the initial conditions
at this time as seen in Figure \ref{fig:REpressure_cut}. By 157.5 Myr, the ICM has relaxed closer to equilibrium, and the observed profile
measures the actual pressure to $\epsilon_{P} \le$ 15\%.
The observed pressure profiles for the I13 simulation shown in Figure \ref{fig:I13pressure}
reproduced the ICM profile to $\epsilon_{P} \le$ 45\% at all times, and at distances $\gtrsim 20-30$ kpc from cluster center
the observed profile was typically much better than that. The intermittency of the jets in the I13 run produced a more complex
pressure distribution than the RE simulation. It cannot be captured by the smooth profile produced by the double $\beta-$profile inversion. At
52.5 and 105 Myr, the actual pressure varies $\pm$30\% from the observed pressure within $R_{C2}$. By 170.6 Myr, the strength of the bow shock
has diminished, and the error on the observed profile falls to $\epsilon_{P} \le$ 10\% outside $R_{C1}$. Inside of $R_{C1}$, however, the effects
of the AGN activity produce a pressure structure that is poorly reproduced by the observed profile. The measurement of the ICM pressure profile
from observation was reliable to within $\sim$ 20\% outside of regions strongly affected by jet related shocks. Inside of shocked regions the
measurement was only reliable to within $\sim$ 60\%. This was true for both the RE and I13 simulations regardless of the observation orientation.
Following convention, the observed ICM pressure profiles were used to calculate X-ray cavity enthalpy on the
assumption that the ICM and cavity pressures were equal.
The dotted lines in Figures \ref{fig:REpressure} and \ref{fig:I13pressure} show the average pressure in AGN plasma at each radial bin.
This pressure could only be observationally measured if the cavity were in exact pressure balance with the ICM and the exact cluster-centric
distance of the cavity were known. Here we discuss how closely observation matches the AGN plasma profile assuming the cavity location is known.
\S \ref{s:cav_enthalpy} discusses the effect of projection on inferred pressure.
The AGN plasma pressure in the RE simulation, as shown in
Figure \ref{fig:REpressure}, roughly follows the ICM pressure at 26.3 and 157.5 Myr
except for high pressure at the ends of the jets where momentum
flows drive the cavities outward (OJ10). At 52.5 Myr, the AGN plasma pressure differs by as much as a factor of three from the actual
ICM pressure from 30-100 kpc.
This discrepancy can be explained by the influence of the bow shock. Referring to Figure \ref{fig:REpressure_cut}, shocked ICM material between
projected distances of 30-100 kpc raised the average ICM pressure over the lower pressure inside of the jet cocoon. The observed pressure profile, which
is sensitive to the ICM pressure, is
$\sim$ 75\% greater than the AGN plasma pressure within $R_{C1}$ at this time.
For the I13 simulation at 26.3 Myr in Figure
\ref{fig:I13pressure}, there is a significant difference between the AGN and ICM profiles from 25-35 kpc also due to the effects
of the bow shock. At this time the observed pressure profile overestimates the AGN plasma pressure by
$\sim$ 50\% within 20 kpc. At 52.5 Myr, the AGN and ICM profiles approximately agree, with the exception of the ends of the jets. At this time the
observed profile matches the actual AGN plasma pressure to within 13\%, and the observed, AGN, and ICM profiles all agree to within $\sim$ 30\%
except at the ends of the jets. The intermittency of the jets impacts how well the observed profile reproduces the AGN pressure. At 105 Myr, when
the jets are inactive, the observed pressure overestimates the AGN pressure within 30 kpc while at 170.6 Myr, when the jets are active, it underestimates it.
Inferring AGN plasma pressure from observational measurement of the ICM pressure at a specific radius was reliable only to $\sim$ 75\% from the
RE and I13 observations.
\subsubsection{Cavity System Volume}
\label{s:volume}
Cavity volumes were estimated from 1.5-2.5 keV observations at each analyzed epoch
for both RE and I13 simulations. As already noted, each cavity
was fit by eye with a set of ellipses \citep{birzan04}.
For each projection angle, $i = 80^{\degr}, 45^{\degr},~{\rm and}~ 30^{\degr}$,
the cavities were assumed in this measurement to be in the plane of
the sky in order to test the effects of an inaccurate or unknown value for the
inclination. These observed cavity volumes were calculated assuming them to be ellipsoids
of revolution around the major axis of each projected ellipse.
By using multiple ellipses to cover each
projected cavity we were better able to define the outer edge of the X-ray cavity. Figure \ref{fig:ellipse} shows an example of the area enclosed
by the ellipses
chosen for the RE simulation at 131.3 Myr observed at $i$ = 80$^{\text{o}}$.
In general, it was difficult to define the edge
of the cavities for the RE simulation once the cavities extended past $R_{C1}$. For the I13 simulation the cavities were often
outlined with a bright rim (see Figure \ref{fig:I13obs}), making the edge (taken to be the inside of the rim) easier to find.
The actual cavity volumes were computed by integrating the volume in the
simulation data with $C_{jet} \geq 0.01$ (partially AGN plasma).
Observed and actual volumes are compared in Figure \ref{fig:volume}. Since we expect
a projection bias due to foreshortening along the jet axis (see Appendix \ref{a:ellipse}), we plot
the observed volume divided by $a_{p} / a$, where $a_{p}$ is given by Equation \ref{eq:ellipsecorr} and $a$ is the actual length of a single best fit ellipse,
normalized by the actual volume. The scatter in the measurements without correcting for this projection bias was $\sim$ 50\%.
Two features stand out in the comparisons in Figure \ref{fig:volume}. First, the $a_{p} / a$ correction reduced the scatter due to the projection bias
to approximately 10-15\% for both RE and I13. The second obvious feature of the comparison is that the observed cavity volume estimates
tend to be modestly smaller than the actual volumes for RE but not for I13.
The reason for this has to do with the different shapes of the cavities between RE and I13. I13 retains a nearly elliptical area
at all times while RE developed a non-elliptical shape (see Figures \ref{fig:REobs} and \ref{fig:I13obs}), which required many ellipses to fit.
Fitting ellipses will tend to underestimate a non-elliptical shape if the observer requires that none of the fits extend beyond the cavity edge.
Despite this limitation, and the subjective, observer-dependent nature of
the process, our observed volume estimates generally agree with the actual volumes to within about $\pm 50$\% (omitting the $a_{p} / a$
correction).
The observations used for the cavity analysis did not include any noise representing X-ray counts or instrumental effects. Low counts at large
cluster-centric distances would make cavity edges in those regions more difficult to identify. The long axis of all of the observed cavities,
roughly aligned with the jet, would likely be underestimated under these conditions, which would reduce the measured volume correspondingly.
\subsubsection{Cavity System Enthalpy}
\label{s:cav_enthalpy}
The observational estimates for the ICM pressure profile and cavity system
volume presented above were used to derive the total cavity enthalpy, $H_{obs}$.
The cavity volume enclosed by the chosen ellipses for each observation was discretized into 1 kpc$^{3}$ volume elements corresponding to the 1 arc sec resolution
of the images for a cluster distance, $D_{L}$ = 240 Mpc. The pressure in
each volume element was determined from the observed pressure profile and the
projected cluster-centric distance of the element. The total enthalpy in the cavity system for each observation was given by
\begin{align}
H_{obs} = \frac{5}{2}\displaystyle\sum_{i}^{N_{elements}}P_{i}\left(1\,kpc^{3}\right).
\end{align}
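A sketch of this discretized sum, with a power-law stand-in for the observed pressure profile and randomly placed volume elements in place of the ellipse-masked cavity voxels, is:
\begin{verbatim}
import numpy as np

KPC = 3.086e21   # cm per kpc

def cavity_enthalpy(xy_kpc, P_of_rp, dV_kpc3=1.0):
    # H_obs = (5/2) * sum_i P_i * dV over the cavity volume elements;
    # xy_kpc holds projected image-plane coordinates of the elements
    r_p = np.hypot(xy_kpc[:, 0], xy_kpc[:, 1])
    return 2.5 * np.sum(P_of_rp(r_p)) * dV_kpc3 * KPC**3   # erg

rng = np.random.default_rng(1)
xy = 60.0 + 10.0 * rng.standard_normal((10_000, 2))   # toy cavity
H = cavity_enthalpy(xy, lambda r: 4.0e-10 * (r / 10.0)**-1.0)
\end{verbatim}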
Figure \ref{fig:enthalpy} shows comparisons of the total energy
added to the simulation volumes by jets, $\Delta E_{tot}$, the actual enthalpy in the cavity systems, $H_{act}$,
and the above observed cavity enthalpy estimates, $H_{obs}$, throughout
the RE and I13 simulations at three inclinations. The value for $H_{act}$ was computed from the pressure and total volume of each voxel in the
simulation data with $C_{jet} \geq 0.01$ to be consistent with the synthetic observations (see \S \ref{s:synobs}). This conservative cut meant
that some ICM enthalpy was included in $H_{act}$ values, making it possible for $H_{act} > \Delta E_{tot}$.
Several features stand out in comparing the various energy measures. The first feature is that $H_{obs}$, $H_{act}$, and $\Delta E_{tot}$
all agree with each other within about a factor of two for a given simulation and inclination angle.
Second, the agreement of the enthalpy, both observed and actual, with the total energy measure is
better for the intermittent jet simulation, I13, than for the terminated, RE, case.
This is consistent with the analysis of the
simulations reported in OJ10. In particular, they
noted that about 50\% of the jet energy ($\Delta E_{tot}$) injected during the
active phase of either the I13 or RE model had been converted into ICM thermal or
kinetic energy by the end of the simulation. The remaining energy increment was mostly
gravitational potential energy in the ICM or thermal energy in the jet cocoon\footnote{Relatively smaller energy
increments are contained at a given time in jet kinetic energy and magnetic fields.},
which is roughly what $H_{obs}$ measures. Approximately 30\% of $\Delta E_{tot}$ ended up as gravitational potential energy in the ICM
for the RE simulation. By contrast, a much smaller fraction, $\sim 15$\%, of $\Delta E_{tot}$
in the intermittent jet, I13, simulation is converted to ICM gravitational potential energy
by the simulation's end. Thus, we should expect a closer match between $\Delta E_{tot}$ and $H_{act}$ in that case.
Another striking feature for both panels of Figure \ref{fig:enthalpy} is the consistency in $H_{obs}$
among the inclination angles for a given simulation and epoch. This is due to two competing projection effects. In particular, the estimated volume generally decreases
with decreasing inclination
angle as discussed in \S \ref{s:volume}, while the projected distance from cluster center decreases, making a cavity appear to be in a higher pressure
environment than
it actually is. To see the net effect refer to Figures \ref{fig:REpressure} and \ref{fig:I13pressure}. It is evident that the ICM pressure profile
can be approximated as a power law, $P = P_{0}(r / r_{0})^{\alpha}$, over
the projected distances the cavities occupy (40-200 kpc) for most of the simulated time. Then, assuming a given cavity is a cylinder with radius $R$
extending from a projected distance $r_{1}\sin i$ to $r_{2}\sin i$ (see Appendix \ref{a:ellipse} regarding the use of $\sin i$) and
estimating $\alpha \approx -1$, we would predict that the enthalpy $H$ would be
\begin{align}
H &= \frac{5}{2}\pi\,R^{2}\,P_{0} \int_{r_{1}\sin i}^{r_{2}\sin i} \left( \frac{r}{r_{0}} \right)^{-1}\,dr \\
&= \frac{5}{2}\pi\,R^{2}\,P_{0}\,r_{0}\ln \left(\frac{r_{2}}{r_{1}} \right),
\end{align}
which is independent of $i$.
For the RE simulation on the left panel of Figure \ref{fig:enthalpy} $H_{obs}$ always underestimated
$H_{act}$ while the jets were active. Recall that $H_{obs}$ depends on the
observed estimate of the ICM pressure distribution and, from \S
\ref{s:pressure}, that the observed pressure profile underestimated the AGN plasma pressure (the cavity
pressure) while the jets were active. In short, during those times
the cavities are over-pressured as they drive moderate strength shocks into the
ICM. This is consistent with comparisons shown in Figure \ref{fig:REpressure}. Further into the simulation we see $H_{act}$ declining. The cavities
are rising buoyantly in the cluster while maintaining approximate pressure equilibrium (see OJ10). The thermal energy in the cavities dropped as this
energy was transferred to gravitational potential energy. The observed values follow this
trend, remaining within $\approx$ 50\% of $H_{act}$.
The I13 simulation on the right panel of Figure \ref{fig:enthalpy} shows a different evolution of $H_{act}$ and $H_{obs}$ because of the different AGN history.
There is a step-like growth of $H_{act}$ due to the intermittency of the jets. The time delay between
the peaks of $H_{act}$
and $H_{obs}$ at each step is due to the difference in evolution of the observed ICM pressure and the actual AGN plasma profiles. When the jets turn on the
cavities are over-pressured with respect to the ICM, but they eventually expand to approximate pressure balance. Prior to the
expansion of the cavities, however, the higher energy content of the cavities cannot be accurately measured by the procedure described in \S \ref{s:pressure}.
Therefore, an increase of $H_{obs}$ lags behind an increase of $H_{act}$.
\subsection{Ages}
\label{s:ages}
Three characteristic timescales are commonly employed for determining cavity age. For a cavity centered at a projected distance $r_{p}$
from cluster center,
radius $R$, cross section $S$, drag coefficient $C$, and volume $V$ these times are: 1) the buoyant rise time $t_{buoy} \approx
r_{p}\sqrt{CS/(2gV)}$, 2) the ``refill time'' $t_{r} = 2\sqrt{R/g}$, and 3) the sound crossing time $t_{c} = r_{p}/c_{s}$ \citep{birzan04}. These times
can be compared to known ages from the simulations given measurements of the sound speed, $c_{s}$, and $g$ from the synthetic
observations. Measurements of each age were made from observations of the \textbf{N} and \textbf{S} cavities (see Figures \ref{fig:REobs} and
\ref{fig:I13obs}) at the end of
the RE simulation, $t = 157.5$ Myr, and I13 simulation, $t = 170.6$ Myr, at three different inclination angles. In these observations we represented
each cavity as a single ellipsoid with semi-major axis $a$ and semi-minor axis $b$.
Following a procedure similar to \citet{birzan04}, the value of $r_{p}$ was the projected distance
from cluster center to the center of the cavity, the radius was given by $R = \sqrt{ab}$, and the cross section was given by $S = \pi\,b_{max}^{2}$,
where $b_{max}$ was half the maximum azimuthal width of the cavity. The volume $V$ was determined for
each cavity following the method described in \S \ref{s:volume}. The sound speed was given by $c_{s} = \sqrt{\gamma k\langle T_{ICM}\rangle/(\mu m_{H})}$.
At the end of the RE simulation $\langle T_{ICM}\rangle = 2.77$ keV, giving $c_{s}$ = 941 km s$^{-1}$, and at the end of the I13 simulation $\langle T_{ICM}\rangle = 2.82$ keV,
giving $c_{s}$ = 950 km s$^{-1}$. Refer to Appendix \ref{a:gravity} for details on estimating $g$, which we assumed to be constant, from the synthetic
observations. We let $C$ = 1 for simplicity.
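For reference, the three timescales can be evaluated directly from these measured quantities; the sketch below uses illustrative numbers only, not measured values:
\begin{verbatim}
import numpy as np

KPC, MYR = 3.086e21, 3.156e13   # cgs conversions

def cavity_ages(r_p, a, b, b_max, V, g, c_s, C=1.0):
    # buoyant rise, refill, and sound-crossing times;
    # lengths in kpc, V in kpc^3, g in cm s^-2, c_s in cm s^-1
    R = np.sqrt(a * b) * KPC            # effective radius [cm]
    S = np.pi * (b_max * KPC)**2        # cross section [cm^2]
    t_buoy = r_p * KPC * np.sqrt(C * S / (2.0 * g * V * KPC**3))
    t_r = 2.0 * np.sqrt(R / g)
    t_c = r_p * KPC / c_s
    return t_buoy / MYR, t_r / MYR, t_c / MYR

print(cavity_ages(r_p=100.0, a=40.0, b=25.0, b_max=25.0,
                  V=2.0e5, g=5.0e-8, c_s=9.41e7))
\end{verbatim}
With these (arbitrary) inputs $t_{buoy} < t_{c}$, a point we return to below.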
Table \ref{tab:age} shows the measured parameters and age estimates for each observation at the end of the RE and I13 simulations.
Both $t_{buoy}$ and $t_{c}$ are affected by projection. For this reason, we would expect the
buoyant rise time to vary as $t_{buoy} \propto r_{p}/\sqrt{a} \propto \sqrt{\sin i}$ due to projection effects on both the projected distance
and semi-major axis of the cavity. The sound crossing time, however,
should vary more rapidly with inclination as $t_{c} \propto r_{p} \propto \sin i$. The cavity pairs for both simulations approximately show
this trend for $t_{buoy}$ and $t_{c}$. The refill time shows a weaker dependence on inclination angle because $t_{r} \propto \sqrt{(ab)^{1/2}}
\propto (\sin i)^{1/4}$ (recall that we assumed constant $g$ in calculating ages).
An important aspect of Table \ref{tab:age} was that for most cases $t_{buoy} < t_{c}$,
which implies a terminal buoyant velocity greater than the sound speed.
This unphysical result could have been avoided in a number of ways.
In their analysis of Hydra A, for example, \citet{wise07}
represented the cavity system as a series of spherical bubbles. Approximating the end of the \textbf{N} cavity of the RE simulation observed at
$i$ = 80$^{\text{o}}$
as a sphere would increase $r_{p}$ to $\sim$ 200 kpc and would decrease $V$ to $(4/3)\pi\,r^{3} \sim 1.4\times10^{5}$ kpc$^{3}$. Given these measurements
for an outer spherical cavity, $t_{buoy} \sim$ 244 Myr while $t_{c} \sim$ 208 Myr. Another approach may have been to assume $C >$ 1 similar to
values empirically estimated by \citet{jones05}.
The parameters and measurements used in a buoyant rise model are not very well constrained. A buoyant model also did not properly capture the evolution
of the cavities in the RE or I13 simulations (see OJ10). For these reasons we chose not to use $t_{buoy}$ as the cavity age.
The equations for $t_{r}$ and $t_{c}$ are related, and the models only differ in the length over which material moves. By convention, $t_{r}$ uses the cavity
radius, which is not affected by projection. We instead use the values of $t_{c}$ for the cavity ages
in subsequent calculations so that our analysis demonstrates the dependence on projection.
When projection did not greatly affect our measurements, $t_{c}$ was reliable to within $\pm 20$\%.
\subsection{Cavity Power}
\label{s:lum}
The enthalpy and age of a cavity system are typically combined into a characteristic
power called the cavity power $P_{cav} = H / t$. If $P_{cav}$ can be deposited into
the ICM it may balance the cooling in the host cluster. It is therefore common practice to compare $P_{cav}$ with the cooling luminosity
\citep[\emph{e.g.,}][]{birzan04, rafferty06, cavagnolo10}.
The cavity power should represent a lower limit to the total luminosity of the AGN because it does not account for energy already deposited into
the ICM or other forms
of AGN plasma energy such as magnetic, potential, or kinetic (see \S \ref{s:cav_enthalpy}). Table \ref{tab:lum} shows a comparison of the observed
$P_{cav,obs}$ with the actual
total average jet luminosity, $L_{jet,act}$, for both RE and I13 observed at the three different
inclinations at the end of each simulation. $P_{cav,obs}$ was computed as $H_{obs} / \langle t \rangle$, where $\langle t \rangle$ was the average of the ages
in Table \ref{tab:age} for the \textbf{N} and \textbf{S} cavities, while the
mean jet power, $L_{jet,act} = \Delta E(t) / t$, where $t$ = 157.5 Myr and $t$ = 170.6 Myr for the RE
and I13 simulations respectively. The differences between $P_{cav,obs}$ and $L_{jet,act}$ at these times were characteristic of earlier times as well.
Observations of the RE and I13 simulations produced $P_{cav,obs}$ increasing with decreasing inclination primarily due to projection effects on
measuring the ages as discussed in \S \ref{s:ages}. For the RE simulation, $P_{cav,obs} < L_{jet,act}$ as we would expect for all but the smallest inclination
angles. The I13 observations, however, resulted in $P_{cav,obs} > L_{jet,act}$ at all orientations. The underestimate of cavity system age was the dominant
reason $P_{cav,obs}$ exceeded $L_{jet,act}$ in these observations.
For measurements from observations at $i$ = 80$^{\text{o}}$, when the
error on the assumed inclination is small, $P_{cav,obs}$ was within $\sim$ 40\% of $L_{jet,act}$, and $P_{cav,obs}$ was within
a factor of three of $L_{jet,act}$ across all of the observed inclination angles.
\section{Conclusions}
\label{s:conclusion}
We have presented an analysis of the reliability of common techniques used to extract X-ray cavity enthalpy, age, and mechanical
luminosity from X-ray observations of cavity systems. By utilizing synthetic X-ray observations of detailed simulations we were able to
directly compare observationally determined and actual values from the simulations. The important results from this work are:
$\bullet$ The synthetic observations of the I13 simulation show bright rims outlining the cavities at each analyzed epoch out to 170.6 Myr while
the RE simulation does not show bright rims. The difference in the AGN history represented by each model accounts for this difference. I13 had
periodic injection of energy into the cavities throughout the simulation while RE deposited all of its energy early on.
$\bullet$ Observationally measuring X-ray cavity enthalpy is reliable to within approximately a factor of two across a wide range of ages and inclinations
for the models of jet intermittency presented here. Several steps go into determining the enthalpy, and each may introduce significant
errors. Extracting the ICM electron density profile, for example, was reliable to within $\sim$ 20\% outside of regions strongly influenced by shocks.
Inside recently shock-influenced regions the error was $\sim$ 40\%. Combining this with the temperature profile to estimate the pressure inside the cavity at a
given cluster-centric distance may not be as accurate. During
periods of jet activity, the observationally determined pressure may differ by as much as $\sim$ 75\% from the cavity pressure. This is related to the
supersonic speeds of the jets through the ICM and the consequent post-shock pressure enhancement. Our measurements of cavity volume were within $\pm$50\%
of the actual total cavity system volume.
This process is subjective, however, and a more robust and objective method for finding and outlining cavities should be developed. The overall
effect of each of these measurements is contained in the factor of two reliability of enthalpy measurements.
The energy required to offset cooling in clusters can be characterized as $\eta PV$. An approximate factor of two span in $\eta$ in our tests is due
largely to uncertainties in the measurement of $PV$.
$\bullet$ The determination of cavity age from one or more of the commonly used age estimates could potentially be misleading. The buoyant rise
model was not an accurate description of the evolution of the cavities in our simulations. A simple application of this model implied unrealistic
terminal velocities greater than the sound speed. The refill time model produced ages within $\pm$15\%
of the correct age regardless of the error on assumed inclination. It relied on an accurate measurement of the gravitational acceleration,
however, which assumes the cluster to be in hydrostatic equilibrium. This assumption may not be valid for a given cluster. We preferred to use the
sound crossing time as a simple and fairly robust model for cavity age. For a well constrained inclination angle, our measurements were within $\pm$20\% of the
actual cavity age.
$\bullet$ Observationally measuring the cavity power produced values within a factor of $\sim$ 3 of the average total jet luminosity from our
simulations regardless of assumed inclination angle.
The observed cavity power was within 40\% of the jet luminosity if the projection effects were negligible. At all observed inclination angles for the
I13 simulation and $i$ = 30$^{\text{o}}$ for the RE simulation the cavity power overestimates the average jet luminosity largely due to underestimates
in the cavity system age.
\begin{acknowledgments}
This work was supported at the University of Minnesota by NSF grant AST0908668
and by the University of Minnesota Supercomputing Institute. PJM and SMO were supported in
part by the Graduate Dissertation
Fellowship at the University of Minnesota. SMO was also supported by NASA Astrophysics Theory Program Grant NNX09AG02G. We
are grateful to Brian McNamara and Paul Nulsen for very fruitful
conversations and to an anonymous referee for help in improving the
original manuscript.
\end{acknowledgments}
\section{Introduction}
Surgical resection is the initial treatment for nearly all brain tumors. The achieved extent of resection is strongly correlated with prognosis and is the single greatest modifiable determinant of survival. Because brain tumors are intimately involved with surrounding functioning brain tissue, however, aggressive resection must be balanced against the risk of causing new neurological deficits.
During neurosurgery, Image-Guided Neurosurgical Systems (IGNSs) provide a patient-to-image mapping that relates the preoperative image data to an intraoperative patient coordinate system, allowing surgeons to infer the locations of their surgical instruments relative to preoperative image data and helping them to optimize the extent of resection while avoiding damage to critical structures.
Commercial IGNSs assume a rigid registration between preoperative imaging and patient coordinates. However, intraoperative deformation of the brain, which is also known as brain shift, invalidates this assumption. Since brain shift progresses during surgery, the rigid patient-to-image mapping of IGNS becomes less and less accurate. Consequently, most surgeons only use IGNS to make a surgical plan but justifiably do not trust it throughout the course of an operation \cite{Gerard,Bayer}.
\subsubsection{Related Work}
As one of the most important error sources in IGNS, intraoperative brain shift must be compensated for in order to increase the accuracy of neurosurgery. Registration between the Intraoperative MRI (iMRI) image, which provides clinicians with an updated view of anatomy during surgery, and the preoperative MRI (preMRI) image (preop-to-intraop registration) has been a successful strategy for brain shift compensation \cite{Hata,Soza,Clatz,Vigneron,Drakopoulos}. However, iMRI acquisition is disruptive, expensive and time consuming, making this technology unavailable to most clinical centers worldwide. More recently, 3D intraoperative Ultrasound (iUS) appears to be a promising replacement for iMRI. Although some progress has been made by previous work on preMRI-to-iUS registration \cite{Gobbi,Arbel,Pennec,Lette,Reinerstsen,Fuerst,Rivaz}, there are still no clinically accepted solutions and no commercial neuro-navigation systems that provide brain shift compensation. This is for three reasons: 1) most non-rigid registration methods cannot handle artifacts and missing structures in iUS; 2) the multi-modality of preMRI-to-iUS registration makes an already difficult problem even more challenging; 3) a few methods \cite{Ou} can achieve a reasonable alignment, yet they take around 50 minutes per US pair and are too slow to be clinically applicable.
Another shortcoming of existing brain shift compensation approaches is the lack of an uncertainty measure. Brain shift is a complex spatiotemporal phenomenon and, given the state of registration technology and the importance of the result, it seems reasonable to expect, e.g., error bars that indicate the confidence level in the estimated deformation. In fact, registration uncertainty can actually help surgeons make more informed decisions. If a surgeon must decide whether to continue resection near a critical structure, it is vital that they know how far the instrument is predicted to be from the structure and how likely the prediction is to be accurate. Moreover, if a large registration error at location A and a small error at location B are observed in the vicinity of the surgical field, without knowledge of registration uncertainty the surgeon would probably assume a large error everywhere and thus ignore the registration altogether. If only s/he knows that A lies in an area of high uncertainty while B lies in an area of low uncertainty, s/he would have greater confidence in the registration at B and other locations of low uncertainty.
In this paper, we propose a novel feature-driven active framework for brain shift compensation. Here, landmarks and their displacement are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model \cite{GPBook} is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, for areas that are difficult to align, the user can actively add new landmarks based on the image context and visualization of the uncertainty measure provided by the GP to further improve the registration accuracy.
Contributions and novelties of our work can be summarized as follows:
\vspace{-2mm}
\begin{enumerate}
\item The proposed feature-based registration is robust for aligning iUS image pairs with missing correspondence and is fast.
\item We explore applying a GP model and variograms for image registration.
\item Registration uncertainty in transformation parameters can be naturally obtained from the GP model.
\item To the best of our knowledge, the proposed active registration strategy is the first method to actively incorporate user expertise in brain shift compensation.
\item We retrospectively demonstrate the efficacy of our method on clinical data acquired during neurosurgery.
\end{enumerate}
\vspace{-6mm}
\section{Method}
\vspace{-2mm}
\subsection{The role of US-to-US registration}
In order to alleviate the difficulty of preop-to-intraop registration, instead of directly aligning iMRI and iUS images, we choose an iterative compensation approach which is similar to the work in \cite{Riva}.
As shown in Fig.~1, the acquisition of pre-dura US (preUS) and post-resection US (postUS) takes place before opening the dura and after tumor resection, respectively. Since most brain shift occurs after taking the preUS, a rigid multi-modal registration may suffice to achieve a good alignment $T_{\mathrm{rigid}}$ between preMRI and preUS \cite{Fuerst}. Next, we register the preUS to the postUS using the proposed feature-driven active framework to acquire a deformable mapping $T_{\mathrm{deform}}$. After propagating $T_{\mathrm{rigid}}$ and $T_{\mathrm{deform}}$ to the preMRI, surgeons may use it as an updated view of anatomy to compensate for brain shift during the surgery.
\vspace{-6mm}
\begin{figure}[H]
\centering
\includegraphics[height=2.9cm]{fig1}
\vspace{-2mm}
\caption{Pipeline of the US-based brain shift compensation. }
\label{fig:correct1}
\vspace{-6mm}
\end{figure}
\subsection{Feature-based registration strategy}
Because of tumor resection, compensating for brain shift requires non-rigid registration algorithms capable of aligning structures in one image that have no correspondences in the other image. In this situation, many image registration methods that take into account the intensity pattern of the entire image will become trapped in incorrect local minima.
\begin{figure}[t]
\centering
\includegraphics[height=5.6cm]{fig2}
\vspace{-7mm}
\caption{Pipeline of the feature-based active preUS-to-postUS registration.}
\label{fig:correct1}
\vspace{-3mm}
\end{figure}
We therefore pursue a Feature-Based Registration (FBR) strategy due to its robustness in registering images with missing correspondence \cite{Matt}. FBR mainly consists of 3 steps: feature extraction, feature matching and dense deformation field estimation. An optional ``active registration'' step can be added depending on the quality of the FBR alignment.
\vspace{-4mm}
\subsubsection{Feature extraction and matching} As illustrated in Fig.2(a)(b), distinctive local image features are automatically extracted and identified as key-points on preUS and postUS images. A matcher searches for a corresponding postUS key-point for each key-point on the preUS image \cite{Matt}.
From a matched key-point pair, let $\mathbf{x}_i$ be the coordinates of the preUS key-point and $\mathbf{x}^{\mathrm{post}}_i$ be the coordinates of its postUS counterpart. Here, we first use all matched preUS key-points as landmarks and perform a landmark-based preUS-to-postUS affine registration to obtain a rough alignment, under which $\mathbf{x}^{\mathrm{post}}_i$ becomes $\mathbf{x}^{\mathrm{affine}}_i$. The displacement vector, which indicates the movement of landmark $\mathbf{x}_i$ due to the brain shift process, can be calculated as $\mathbf{d}(\mathbf{x}_i)=\mathbf{x}^{\mathrm{affine}}_i-\mathbf{x}_i$, where $\mathbf{d}=[d_x,d_y,d_z]$.
\vspace{-2mm}
\subsubsection{Dense deformation field} The goal of this step is to obtain a dense deformation field from a set of $N$ sparse landmarks and their displacements $\mathcal{D}=\{ (\mathbf{x}_i,\mathbf{d}_i),i=1:N \}$, where $\mathbf{d}_i=\mathbf{d}(\mathbf{x}_i)$ is modeled as an observation of the displacement field.
In the GP model, let $\mathbf{d}(\mathbf{x})$ be the displacement vector for the voxel at location $\mathbf{x}$; it has a prior distribution $\mathbf{d}(\mathbf{x})\sim \mathrm{GP}(\mathrm{m}(\mathbf{x}),\mathrm{k}(\mathbf{x},\mathbf{x}'))$, where $\mathrm{m}(\mathbf{x})$ is the mean function, usually set to 0, and the GP kernel $\mathrm{k}(\mathbf{x},\mathbf{x}')$ represents the spatial correlation of displacement vectors.
By the model assumption, all displacement vectors follow a joint Gaussian distribution $p(\mathbf{d}\mid \mathbf{X})=\mathcal{N} (\mathbf{d}\mid \mathbf{\mu},\mathbf{K}) $, where $K_{ij}=\mathrm{k}(\mathbf{x}_i,\mathbf{x}_j)$ and $\mathbf{\mu} = (\mathrm{m}(\mathbf{x}_1) ,...,\mathrm{m}(\mathbf{x}_N)) $. As a result, the displacement vectors $\mathbf{d}$ for known landmarks and $N_*$ unknown displacement vectors $\mathbf{d}_*$ at locations $\mathbf{X}_*$, which we want to predict, have the following joint distribution:
\vspace{-2mm}
\def\A{
\begin{pmatrix}
\mathbf{d}\\
\mathbf{d}_* \\
\end{pmatrix}}
\def\B{
\begin{pmatrix}
\mathbf{\mu} \\
\mathbf{\mu}_*\\
\end{pmatrix}}
\def\C{
\begin{pmatrix}
\mathbf{K} & \mathbf{K}_*\\
\mathbf{K}^T_* & \mathbf{K}_{**}\\
\end{pmatrix}}
\begin{equation}
\A \sim \mathcal{N}\left(\B , \C\right).
\end{equation}
In Equation (1), $\mathbf{K}=\mathrm{k}(\mathbf{X},\mathbf{X})$ is an $N\times N$ matrix, $\mathbf{K}_*=\mathrm{k}(\mathbf{X},\mathbf{X_*})$ is an $N \times N_*$ matrix, and $\mathbf{K_{**}}=\mathrm{k}(\mathbf{X_*},\mathbf{X_*})$ is an $N_* \times N_*$ matrix.
The mean $\mu_*=[\mu_{*x},\mu_{*y},\mu_{*z}]$ represents values of voxel-wise displacement vectors and can be estimated from the posterior Gaussian distribution $p(\mathbf{d}_*\mid \mathbf{X_*},\mathbf{X},\mathbf{d})= \mathcal{N}(\mathbf{d}_*\mid \mu_*,\Sigma_*)$ as
\begin{equation}
\mu_*= \mu(\mathbf{X_*})+\mathbf{K}^T_*\mathbf{K}^{-1}(\mathbf{d}-\mu(\mathbf{X})).
\end{equation}
Given $\mu(\mathbf{X})= \mu(\mathbf{X_*})=0$, we can obtain the dense deformation field for the preUS image by assigning $\mu_{*x}$,$\mu_{*y}$,$\mu_{*z}$ to $\mathbf{d}_x$, $\mathbf{d}_y$ and $\mathbf{d}_z$, respectively.
\vspace{-4mm}
\subsubsection{Active registration} Automatic approaches may have difficulty with preop-to-intraop image registration, especially for areas near the tumor resection site. Another advantage of the GP framework is the possibility of incorporating user expertise to further improve the registration result.
From Equation (1), we can also compute the covariance matrix of the posterior Gaussian $p(\mathbf{d}_*\mid \mathbf{X_*},\mathbf{X},\mathbf{d})$ as
\begin{equation}
\Sigma_*= \mathbf{K}_{**}-\mathbf{K}^T_*\mathbf{K}^{-1}\mathbf{K}_*.
\end{equation}
Entries on the diagonal of $\Sigma_*$ are the marginal variances of the predicted values. They can be used as an uncertainty measure that indicates the confidence in the estimated transformation parameters.
If users are not satisfied with the FBR alignment, they can manually add new corresponding key-point pairs, guided by the image context and a visualization of the registration uncertainty, to drive the GP towards a better result.
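A minimal NumPy sketch of Equations (2) and (3), assuming a zero mean function and adding a small diagonal term for numerical stability (the landmark data here are synthetic placeholders), is:
\begin{verbatim}
import numpy as np

def gauss_kernel(A, B, sill=1.0, a=50.0**2):
    # k(h) = sill * exp(-h^2 / a), matching the Gaussian variogram
    h2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return sill * np.exp(-h2 / a)

def gp_predict(X, d, X_star, kern, sigma_n=1e-6):
    # posterior mean (Eq. 2) and marginal variance (diag of Eq. 3)
    K = kern(X, X) + sigma_n * np.eye(len(X))
    K_s = kern(X, X_star)
    mu_star = K_s.T @ np.linalg.solve(K, d)
    var_star = np.diag(kern(X_star, X_star)
                       - K_s.T @ np.linalg.solve(K, K_s))
    return mu_star, var_star

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 100.0, (30, 3))      # landmark positions [mm]
d = np.sin(X[:, 0] / 20.0)                # one displacement component
X_star = rng.uniform(0.0, 100.0, (5, 3))  # voxels to interpolate
mu, var = gp_predict(X, d, X_star, gauss_kernel)
\end{verbatim}
In practice each of the three displacement components is interpolated in this way, and the marginal variances provide the uncertainty map used for visualization.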
\vspace{-4mm}
\subsection{GP kernel estimation}
The performance of GP registration depends exclusively on the suitability of the chosen kernel and its parameters. In this study, we explore two schemes for kernel estimation: variograms and a discrete grid search.
\vspace{-4mm}
\subsubsection{Variograms} While used extensively in geostatistics to characterize the spatial dependence of a stochastic process \cite{VarioBook}, variograms have not yet received much attention in the medical imaging field. Although GP regression and variograms for medical image registration were described in \cite{Warfield}, neither quantitative results nor estimates of the posterior uncertainty were provided.
In the GP registration context, where $\mathbf{d}(\mathbf{x})$ is modelled as a random quantity, variograms can measure the extent of pairwise spatial correlation between displacement vectors with respect to their distance, and thereby give insight into choosing a suitable GP kernel.
In practice, we estimate the empirical variogram of landmarks' displacement vectors as
\vspace{-3mm}
\begin{equation}
\begin{aligned}
\hat{\gamma}(h\pm\delta)
&:=\frac{1}{2|N(h\pm\delta)|}\sum_{(i,j)\in{N(h\pm\delta)}}^{}\norm{\mathbf{d}(\mathbf{x}_i)-\mathbf{d}(\mathbf{x}_j)}^2.
&
\end{aligned}
\end{equation}
For the norm term $\norm{\mathbf{d}(\mathbf{x}_i)-\mathbf{d}(\mathbf{x}_j)}$, we compute its 3 components $d_x$, $d_y$, $d_z$ and construct 3 variograms, respectively. As shown in Fig.3(a), for displacement vectors $\mathbf{d}(\mathbf{x_1})$ and $\mathbf{d}(\mathbf{x_2})$, $\norm{d_x(\mathbf{x}_2)-d_x(\mathbf{x}_1)}$ is the vector difference with respect to the $x$ axis, etc., and $h$ represents the distance between the two displacement vectors.
\begin{figure}[t]
\centering
\includegraphics[height=3.8cm]{fig3}
\vspace{-3mm}
\caption{(a) $\norm{d_x(\mathbf{x}_2)-d_x(\mathbf{x}_1)}$ and $h$; (b) Empirical variogram cloud; (c) Variogram cloud divided into bins with their means marked as blue.}
\label{fig:correct1}
\vspace{-5mm}
\end{figure}
To construct an empirical variogram, the first step is to make a variogram cloud by plotting $\norm{d(\mathbf{x}_2)-d(\mathbf{x}_1)}^2$ against $h_{ij}$ for all displacement pairs. Next, we introduce a variable $\delta$ and divide the variogram cloud into bins with a bin width set to 2$\delta$. Lastly, the mean of each bin is calculated and plotted against the mean distance of that bin to form an empirical variogram. Fig.4(a) shows an empirical variogram of a real US image pair that has 71 landmarks.
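A sketch of this binning for one displacement component, following Equation (4) (the bin width and count are placeholders), is:
\begin{verbatim}
import numpy as np

def empirical_variogram(X, d, bin_width=5.0, n_bins=20):
    # Eq. (4): bin half the squared differences of one displacement
    # component by pairwise landmark distance
    iu = np.triu_indices(len(X), k=1)
    h = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)[iu]
    sq = (d[:, None] - d[None, :])[iu]**2
    edges = np.arange(n_bins + 1) * bin_width
    which = np.digitize(h, edges) - 1
    h_mean = np.full(n_bins, np.nan)
    gamma = np.full(n_bins, np.nan)
    for b in range(n_bins):
        m = which == b
        if m.any():
            h_mean[b] = h[m].mean()
            gamma[b] = 0.5 * sq[m].mean()
    return h_mean, gamma
\end{verbatim}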
The empirical variogram only consists of value differences at a finite set of discrete distances, whereas the GP kernels are continuous for all $h$. Therefore, the next step is to fit a smooth curve to the empirical values and derive the kernel function from that fitted curve.
\vspace{-6mm}
\begin{figure}[H]
\centering
\includegraphics[height=3.1cm]{fig4}
\vspace{-3mm}
\caption{(a) X-axis empirical variogram of a US images pair;(b) Sill, range and nugget; (c) Fitting a continuous model to an empirical variogram.}
\label{fig:correct1}
\vspace{-6mm}
\end{figure}
Fig.4(b) is an example of a fitted curve. The curve is commonly described by the following characteristics:
\vspace{-2mm}
\begin{labeling}{Parameters}
\item [Nugget] The non-zero value at $h=0$.
\item [Sill] The value at which the curve reaches its maximum.
\item [Range] The value of distance $h$ where the sill is reached.
\end{labeling}
\vspace{-1mm}
Conventionally, displacement vectors that are separated by distances greater than the range are considered uncorrelated \cite{VarioBook}.
In general, the curve must have a mathematical expression that can describe the variances of a random process. Practically, the choice is limited to a few options, such as exponential and Gaussian models. For instance, the Gaussian variogram function is
\vspace{-6mm}
\begin{equation}
\gamma(h)=c_0+c\left\{1-\exp\left(- \frac{h^2}{a} \right)\right\}.
\end{equation}
In equation (5), $c_0$ is the nugget, $c=\mathrm{Sill}-c_0$, and $a$ is the model parameter. This function asymptotically approaches its sill, and has an effective range as $r'=\sqrt{3a}$.
Fitting a model to an empirical variogram is implemented in most geostatistis software. A popular choice is choosing several models that appear to have the right shape and use the one with smallest weighted squared error \cite{VarioBook}.
\vspace{-3mm}
\subsubsection{Discrete grid search} The variogram scheme often requires many landmarks to work well \cite{VarioBook}. For US pairs that have fewer landmarks, we predefine some kernel functions, and use cross validation in a discrete search for the model parameters.
\vspace{-3mm}
\section{Experiments}
\vspace{-2mm}
The experimental dataset consists of 6 preUS and postUS image pairs that were acquired on a BK Ultrasound 3000 system (BK Medical, Analogic Corporations, Peabody, USA) that is directly connected to the Brainlab VectorVision Sky neuronavigaton system (Brainlab, Munich, Germany) during surgery.
We use the mean euclidean distance between the predicted and ground truth of landmarks' coordinates, measured in $mm$, for the registration evaluation. Compared methods include: affine, thin-plate kernel FBR, variograms FBR and gaussian kernel FBR. For US pairs that have less than 50 landmarks, we use leave-one-out cross validation, otherwise 5-fold cross validation. All of compared methods can be finished within 10 minutes.
\vspace{-5mm}
\begin{table}[H]
\centering
\caption{Registration evaluation results (in $\mathit{mm}$)}
\label{my-label}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& Landmarks & Before Reg. & Affine & Thin-plate & Variograms & GaussianK
\\ \hline
Patient 1 & 123 & 5.56$\pm$1.05 & 2.99$\pm$1.21 & 1.79$\pm$0.70 & 2.11$\pm$0.74 & 1.75$\pm$0.68 \\ \hline
Patient 2 & 71 & 3.35$\pm$1.22 & 2.08$\pm$1.13 & 2.06$\pm$1.18 & 2.06$\pm$1.12 & 1.97$\pm$1.05 \\ \hline
Patient 3 & 49 & 2.48$\pm$1.56 & 1.93$\pm$1.75 & 1.25$\pm$1.95 & n/a & 1.23$\pm$1.77 \\ \hline
Patient 4 & 12 & 4.40$\pm$1.79 & 3.06$\pm$2.35 & 1.45$\pm$1.99 & n/a & 1.42$\pm$2.04 \\ \hline
Patient 5 & 64 & 2.91$\pm$1.33 & 1.86$\pm$1.24 & 1.29$\pm$1.17 & n/a & 1.33$\pm$1.40 \\ \hline
Patient 6 & 98 & 3.29$\pm$1.09 & 2.12$\pm$1.16 & 2.02$\pm$1.21 & 2.05$\pm$1.40 & 1.96$\pm$1.38 \\ \hline
\end{tabular}
\end{table}
\vspace{-6mm}
In addition, we demonstrate the preliminary result of active registration. As shown in Fig.5, (a) is the registered source preUS image, (b) is the target postUS image. Noticing that the FBR does not well align the tumor boundary due to lacking of landmarks. In the active registration step, a user manually added 3 new key-point pairs based on the image context and a color mapping of registration uncertainty. By visual inspection, we can see the alignment of tumor boundary substantially improved.
\begin{figure}[t]
\centering
\includegraphics[height=3.4cm]{fig5}
\vspace{-3mm}
\caption{(a) FR result of the preUS image; (b) PostUS image; (c) Overlaying the visualization of uncertainty on the preUS image;(d) Improved registration result.}
\label{fig:correct1}
\vspace{-3mm}
\end{figure}
\vspace{-4mm}
\section{Conclusion}
\vspace{-3mm}
We proposed a novel feature-based active registration framework to compensate for the brain shift. We believe this framework has the potential to be eventually applied in the operating room. Future work includes exploring non-isotropic variograms and other advanced schemes for GP kernel estimation. Implementing our framework into clinical software, such as 3D slicer, is also of interest.
\section{Introduction}
Surgical resection is the initial treatment for nearly all brain tumors. The achieved extent-of-resection is strongly correlated with prognosis and is the single greatest modifiable determinant of survival. Because brain tumors are intimately involved with surrounding functioning brain tissue, aggressive resection must be balanced against the risk of causing new neurological deficits.
During neurosurgery, Image-Guided Neurosurgical Systems (IGNSs) provide a patient-to-image mapping that relates the preoperative image data to an intraoperative patient coordinate system, allowing surgeons to infer the locations of their surgical instruments relative to preoperative image data and helping them to optimize the extent of resection while avoiding damage to critical structures.
Commercial IGNSs assume a rigid registration between preoperative imaging and patient coordinates. However, intraoperative deformation of the brain, which is also known as brain shift, invalidates this assumption. Since brain shift progresses during surgery, the rigid patient-to-image mapping of IGNS becomes less and less accurate. Consequently, most surgeons only use IGNS to make a surgical plan but justifiably do not trust it throughout the course of an operation \cite{Gerard,Bayer}.
\subsubsection{Related Work}
As one of the most important error sources in IGNS, intraoperative brain shift must be compensated for in order to increase the accuracy of neurosurgery. Registration between the Intraoperative MRI (iMRI) image, which provides clinicians with an updated view of anatomy during surgery, and the preoperative MRI (preMRI) image (preop-to-intraop registration) has been a successful strategy for brain shift compensation \cite{Hata,Soza,Clatz,Vigneron,Drakopoulos}. However, iMRI acquisition is disruptive, expensive and time consuming, making this technology unavailable for most clinical centers worldwide. More recently, 3D intraoperative Ultrasound (iUS) has appeared to be a promising replacement for iMRI. Although some progress has been made by previous work on preMRI-to-iUS registration \cite{Gobbi,Arbel,Pennec,Lette,Reinerstsen,Fuerst,Rivaz}, there are still no clinically accepted solutions and no commercial neuro-navigation systems that provide brain shift compensation. This is for three reasons: 1) Most non-rigid registration methods cannot handle artifacts and missing structures in iUS; 2) The multi-modality of preMRI-to-iUS registration makes the already difficult problem even more challenging; 3) A few methods \cite{Ou} can achieve a reasonable alignment, yet they take around 50 minutes for a US pair and are too slow to be clinically applicable.
Another shortcoming of existing brain shift compensation approaches is the lack of an uncertainty measure. Brain shift is a complex spatiotemporal phenomenon and, given the state of registration technology and the importance of the result, it seems reasonable to expect, e.g., error bars that indicate the confidence level in the estimated deformation. In fact, registration uncertainty can actually help surgeons make more informed decisions. If a surgeon must decide whether to continue resection near a critical structure, it is vital that they know how far the instrument is predicted to be from the structure and how likely the prediction is to be accurate. Moreover, if a large registration error at location A and a small error at location B are observed in the vicinity of the surgical field, without knowledge of registration uncertainty the surgeon would probably assume a large error everywhere and thus ignore the registration altogether. If s/he knows that A lies in an area of high uncertainty while B lies in an area of low uncertainty, s/he can have greater confidence in the registration at B and other locations of low uncertainty.
In this paper, we propose a novel feature-driven active framework for brain shift compensation. Here, landmarks and their displacement are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model \cite{GPBook} is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, for areas that are difficult to align, the user can actively add new landmarks based on the image context and visualization of the uncertainty measure provided by the GP to further improve the registration accuracy.
Contributions and novelties of our work can be summarized as follows:
\vspace{-2mm}
\begin{enumerate}
\item The proposed feature-based registration is robust for aligning iUS image pairs with missing correspondence and is fast.
\item We explore applying a GP model and variograms for image registration.
\item Registration uncertainty in transformation parameters can be naturally obtained from the GP model.
\item To the best of our knowledge, the proposed active registration strategy is the first method to actively combine user expertise in brain shift compensation.
\item We retrospectively demonstrate the efficacy of our method on clinical data acquired during neurosurgery.
\end{enumerate}
\vspace{-6mm}
\section{Method}
\vspace{-2mm}
\subsection{The role of US-to-US registration}
In order to alleviate the difficulty of preop-to-intraop registration, instead of directly aligning iMRI and iUS images, we choose an iterative compensation approach which is similar to the work in \cite{Riva}.
As shown in Fig.1, the acquisition processes for pre-dura US (preUS) and post-resection US (postUS) take place before opening the dura and after tumor resection, respectively. Since most brain shift occurs after taking the preUS, a rigid multi-modal registration may suffice to achieve a good alignment $T_{\mathrm{rigid}}$ between preMRI and preUS \cite{Fuerst}. Next, we register the preUS to the postUS using the proposed feature-driven active framework to acquire a deformable mapping $T_{\mathrm{deform}}$. After propagating $T_{\mathrm{rigid}}$ and $T_{\mathrm{deform}}$ to the preMRI, surgeons may use it as an updated view of anatomy to compensate for brain shift during the surgery.
\vspace{-6mm}
\begin{figure}[H]
\centering
\includegraphics[height=2.9cm]{fig1}
\vspace{-2mm}
\caption{Pipeline of the US-based brain shift compensation. }
\label{fig:pipeline}
\vspace{-6mm}
\end{figure}
\subsection{Feature-based registration strategy}
Because of tumor resection, compensating for brain shift requires non-rigid registration algorithms capable of aligning structures in one image that have no correspondences in the other image. In this situation, many image registration methods that take into account the intensity pattern of the entire image will become trapped in incorrect local minima.
\begin{figure}[t]
\centering
\includegraphics[height=5.6cm]{fig2}
\vspace{-7mm}
\caption{Pipeline of the feature-based active preduraUS-to-postUS registration.}
\label{fig:fbr_pipeline}
\vspace{-3mm}
\end{figure}
We therefore pursue a Feature-Based Registration (FBR) strategy due to its robustness in registering images with missing correspondence \cite{Matt}. FBR mainly consists of 3 steps: feature extraction, feature matching and dense deformation field estimation. An optional ``active registration'' step can be added depending on the quality of the FBR.
\vspace{-4mm}
\subsubsection{Feature extraction and matching} As illustrated in Fig.2(a)(b), distinctive local image features are automatically extracted and identified as key-points on preUS and postUS images. A matcher searches for a corresponding postUS key-point for each key-point on the preUS image \cite{Matt}.
From a matched key-point pair, let $\mathbf{x}_i$ be the coordinates of the preUS key-point and $\mathbf{x}^{\mathrm{post}}_i$ be the coordinates of its postUS counterpart. Here, we first use all matched preUS key-points as landmarks, and perform a landmark-based preUS-to-postUS affine registration to obtain a rough alignment. $\mathbf{x}^{\mathrm{post}}_i$ becomes $\mathbf{x}^{\mathrm{affine}}_i$ after the affine registration. The displacement vector, which indicates the movement of landmark $\mathbf{x}_i$ due to the brain shift process, can be calculated as $\mathbf{d}(\mathbf{x}_i)=\mathbf{x}^{\mathrm{affine}}_i-\mathbf{x}_i$, where $\mathbf{d}=[d_x,d_y,d_z]$.
\vspace{-2mm}
\subsubsection{Dense deformation field} The goal of this step is to obtain a dense deformation field from a set of $N$ sparse landmarks and their displacements $\mathcal{D}=\{ (\mathbf{x}_i,\mathbf{d}_i),i=1:N \}$, where $\mathbf{d}_i=\mathbf{d}(\mathbf{x}_i)$ is modeled as an observation of the displacement field.
In the GP model, let $\mathbf{d}(\mathbf{x})$ be the displacement vector for the voxel at location $\mathbf{x}$; it has a prior distribution $\mathbf{d}(\mathbf{x})\sim \mathrm{GP}(\mathrm{m}(\mathbf{x}),\mathrm{k}(\mathbf{x},\mathbf{x}'))$, where $\mathrm{m}(\mathbf{x})$ is the mean function, which is usually set to 0, and the GP kernel $\mathrm{k}(\mathbf{x},\mathbf{x}')$ represents the spatial correlation of displacement vectors.
By the model assumption, all displacement vectors follow a joint Gaussian distribution $p(\mathbf{d}\mid \mathbf{X})=\mathcal{N} (\mathbf{d}\mid \mathbf{\mu},\mathbf{K})$, where $K_{ij}=\mathrm{k}(\mathbf{x}_i,\mathbf{x}_j)$ and $\mathbf{\mu} = (\mathrm{m}(\mathbf{x}_1) ,...,\mathrm{m}(\mathbf{x}_N))$. As a result, the displacement vectors $\mathbf{d}$ of the known landmarks and the $N_*$ unknown displacement vectors $\mathbf{d}_*$ at locations $\mathbf{X}_*$, which we want to predict, have the following relationship:
\vspace{-2mm}
\def\A{
\begin{pmatrix}
\mathbf{d}\\
\mathbf{d}_* \\
\end{pmatrix}}
\def\B{
\begin{pmatrix}
\mathbf{\mu} \\
\mathbf{\mu}_*\\
\end{pmatrix}}
\def\C{
\begin{pmatrix}
\mathbf{K} & \mathbf{K}_*\\
\mathbf{K}^T_* & \mathbf{K}_{**}\\
\end{pmatrix}}
\begin{equation}
\A \sim \mathcal{N}\left(\B , \C\right).
\end{equation}
In Equation (1), $\mathbf{K}=\mathrm{k}(\mathbf{X},\mathbf{X})$ is an $N\times N$ matrix, $\mathbf{K}_*=\mathrm{k}(\mathbf{X},\mathbf{X_*})$ is an $N \times N_*$ matrix, and $\mathbf{K_{**}}=\mathrm{k}(\mathbf{X_*},\mathbf{X_*})$ is an $N_* \times N_*$ matrix.
The mean $\mu_*=[\mu_{*x},\mu_{*y},\mu_{*z}]$ represents values of voxel-wise displacement vectors and can be estimated from the posterior Gaussian distribution $p(\mathbf{d}_*\mid \mathbf{X_*},\mathbf{X},\mathbf{d})= \mathcal{N}(\mathbf{d}_*\mid \mu_*,\Sigma_*)$ as
\begin{equation}
\mu_*= \mu(\mathbf{X_*})+\mathbf{K}^T_*\mathbf{K}^{-1}(\mathbf{d}-\mu(\mathbf{X})).
\end{equation}
Given $\mu(\mathbf{X})= \mu(\mathbf{X_*})=0$, we can obtain the dense deformation field for the preUS image by assigning $\mu_{*x}$,$\mu_{*y}$,$\mu_{*z}$ to $\mathbf{d}_x$, $\mathbf{d}_y$ and $\mathbf{d}_z$, respectively.
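To make this step concrete, the following is a minimal NumPy sketch of the posterior mean computation of Equations (1)-(2); the function and variable names are illustrative rather than the exact implementation used in our experiments, and a small noise term is assumed on the diagonal of $\mathbf{K}$ for numerical stability.
\begin{verbatim}
import numpy as np

def gp_interpolate(X, d, X_star, kernel, noise=1e-6):
    # X      : (N, 3) landmark coordinates
    # d      : (N, 3) observed displacement vectors (zero-mean prior)
    # X_star : (M, 3) voxel coordinates to predict at
    # kernel : callable k(A, B) -> covariance matrix (len(A), len(B))
    K = kernel(X, X) + noise * np.eye(len(X))   # K    in Eq. (1)
    K_star = kernel(X, X_star)                  # K_*  in Eq. (1)
    # Posterior mean of Eq. (2) with m(X) = m(X_*) = 0:
    return K_star.T @ np.linalg.solve(K, d)     # (M, 3) displacements
\end{verbatim}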
\vspace{-4mm}
\subsubsection{Active registration} Automatic approaches may have difficulty with preop-to-intraop image registration, especially in areas near the tumor resection site. Another advantage of the GP framework is the possibility of incorporating user expertise to further improve the registration result.
From Equation (1), we can also compute the covariance matrix of the posterior Gaussian $p(\mathbf{d}_*\mid \mathbf{X_*},\mathbf{X},\mathbf{d})$ as
\begin{equation}
\Sigma_*= \mathbf{K}_{**}-\mathbf{K}^T_*\mathbf{K}^{-1}\mathbf{K}_*.
\end{equation}
Entries on the diagonal of $\Sigma_*$ are the marginal variances of the predicted values. They can be used as an uncertainty measure that indicates the confidence in the estimated transformation parameters.
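A corresponding sketch of this voxel-wise uncertainty in Equation (3), assuming the same illustrative kernel interface as in the previous sketch:
\begin{verbatim}
def gp_uncertainty(X, X_star, kernel, noise=1e-6):
    # Marginal posterior variances, i.e. diag(K_** - K_*^T K^{-1} K_*)
    K = kernel(X, X) + noise * np.eye(len(X))
    K_star = kernel(X, X_star)
    K_ss_diag = np.diag(kernel(X_star, X_star))
    # diag(K_*^T K^{-1} K_*) without forming the full M x M matrix:
    var = K_ss_diag - np.sum(K_star * np.linalg.solve(K, K_star), axis=0)
    return var  # one variance per predicted voxel
\end{verbatim}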
If users are not satisfied by the FBR alignment result, they could manually, guided by the image context and visualization of registration uncertainty, add new corresponding pairs of key-points to drive the GP towards better results.
\vspace{-4mm}
\subsection{GP kernel estimation}
The performance of GP registration depends exclusively on the suitability of the chosen kernel and its parameters. In this study, we explore two schemes for kernel estimation: variograms and discrete grid search.
\vspace{-4mm}
\subsubsection{Variograms} While used extensively in geostatistics to characterize the spatial dependence of a stochastic process \cite{VarioBook}, variograms have not yet received much attention in the medical imaging field. Although GP regression for medical image registration, and variograms, were described in \cite{Warfield}, neither quantitative results nor an estimation of the posterior uncertainty were provided.
In the GP registration context, where $\mathbf{d}(\mathbf{x})$ is modelled as a random quantity, variograms can measure the extent of pairwise spatial correlation between displacement vectors as a function of their distance, and thus give insight in advance into choosing a suitable GP kernel.
In practice, we estimate the empirical variogram of landmarks' displacement vectors as
\vspace{-3mm}
\begin{equation}
\hat{\gamma}(h\pm\delta):=\frac{1}{2|N(h\pm\delta)|}\sum_{(i,j)\in N(h\pm\delta)}\norm{\mathbf{d}(\mathbf{x}_i)-\mathbf{d}(\mathbf{x}_j)}^2,
\end{equation}
where $N(h\pm\delta)$ denotes the set of landmark index pairs $(i,j)$ whose separation distance lies within $h\pm\delta$.
For the norm term $\norm{\mathbf{d}(\mathbf{x}_i)-\mathbf{d}(\mathbf{x}_j)}$, we consider the 3 components $d_x$, $d_y$ and $d_z$ separately and construct 3 variograms, one per component. As shown in Fig.3(a), for displacement vectors $\mathbf{d}(\mathbf{x}_1)$ and $\mathbf{d}(\mathbf{x}_2)$, $\norm{d_x(\mathbf{x}_2)-d_x(\mathbf{x}_1)}$ is the component difference with respect to the $x$ axis, etc. $h$ represents the distance between the two landmark locations.
\begin{figure}[t]
\centering
\includegraphics[height=3.8cm]{fig3}
\vspace{-3mm}
\caption{(a) $\norm{d_x(\mathbf{x}_2)-d_x(\mathbf{x}_1)}$ and $h$; (b) Empirical variogram cloud; (c) Variogram cloud divided into bins with their means marked as blue.}
\label{fig:variogram_cloud}
\vspace{-5mm}
\end{figure}
To construct an empirical variogram, the first step is to make a variogram cloud by plotting $\norm{d(\mathbf{x}_i)-d(\mathbf{x}_j)}^2$ against $h_{ij}$ for all landmark pairs. Next, we introduce a variable $\delta$ and divide the variogram cloud into bins with a bin width of $2\delta$. Lastly, the mean of each bin is calculated and plotted against the mean distance of that bin to form the empirical variogram. Fig.4(a) shows an empirical variogram of a real US image pair that has 71 landmarks.
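The following sketch illustrates this binning procedure for a single displacement component; it is a simplified, assumed implementation (uniform bins starting at zero) rather than the exact code behind Fig.4(a).
\begin{verbatim}
import numpy as np

def empirical_variogram(X, d, bin_width, n_bins):
    # X: (N, 3) landmark coordinates; d: (N,) one displacement component
    h = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sq = (d[:, None] - d[None, :]) ** 2
    iu = np.triu_indices(len(X), k=1)           # each pair (i, j) once
    h, sq = h[iu], sq[iu]                       # variogram cloud
    centers, gamma = [], []
    for b in range(n_bins):                     # bins of width 2*delta
        m = (h >= b * bin_width) & (h < (b + 1) * bin_width)
        if m.any():
            centers.append(h[m].mean())         # mean distance in the bin
            gamma.append(0.5 * sq[m].mean())    # mean semivariance in the bin
    return np.array(centers), np.array(gamma)
\end{verbatim}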
The empirical variogram only consists of value differences at a finite set of discrete distances, whereas the GP kernels are continuous for all $h$. Therefore, the next step is to fit a smooth curve to the empirical values and derive the kernel function from that fitted curve.
\vspace{-6mm}
\begin{figure}[H]
\centering
\includegraphics[height=3.1cm]{fig4}
\vspace{-3mm}
\caption{(a) X-axis empirical variogram of a US image pair; (b) Sill, range and nugget; (c) Fitting a continuous model to an empirical variogram.}
\label{fig:variogram_fit}
\vspace{-6mm}
\end{figure}
Fig.4(b) is an example of a fitted curve. The curve is commonly described by the following characteristics:
\vspace{-2mm}
\begin{labeling}{Parameters}
\item [Nugget] The non-zero value at $h=0$.
\item [Sill] The value at which the curve reaches its maximum.
\item [Range] The value of distance $h$ where the sill is reached.
\end{labeling}
\vspace{-1mm}
Conventionally, displacement vectors that are separated by distances further than the range are considered uncorrelated \cite{VarioBook}.
In general, the curve must have a mathematical expression that can describe the variances of a random process. Practically, the choice is limited to a few options, such as exponential and Gaussian models. For instance, the Gaussian variogram function is
\vspace{-6mm}
\begin{equation}
\gamma(h)=c_0+c\left\{1-\exp\left(-\frac{h^2}{a}\right)\right\}.
\end{equation}
In equation (5), $c_0$ is the nugget, $c=\mathrm{Sill}-c_0$, and $a$ is the model parameter. This function asymptotically approaches its sill and has an effective range of $r'=\sqrt{3a}$.
Fitting a model to an empirical variogram is implemented in most geostatistics software. A popular approach is to choose several models that appear to have the right shape and use the one with the smallest weighted squared error \cite{VarioBook}.
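As an illustration, the Gaussian model of equation (5) can be fitted to the empirical values with an ordinary (unweighted) least-squares routine; the weighted criterion of \cite{VarioBook} would replace the default weighting of curve\_fit, and the starting values below are heuristic assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian_variogram(h, c0, c, a):
    # Eq. (5): nugget c0, partial sill c = Sill - c0, model parameter a
    return c0 + c * (1.0 - np.exp(-h**2 / a))

# centers, gamma: output of empirical_variogram above
params, _ = curve_fit(gaussian_variogram, centers, gamma,
                      p0=[0.0, gamma.max(), centers.mean()**2],
                      bounds=(0.0, np.inf))
c0, c, a = params
effective_range = np.sqrt(3 * a)
\end{verbatim}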
\vspace{-3mm}
\subsubsection{Discrete grid search} The variogram scheme often requires many landmarks to work well \cite{VarioBook}. For US pairs that have fewer landmarks, we predefine some kernel functions and use cross-validation in a discrete search for the model parameters.
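A sketch of this scheme, reusing the illustrative gp\_interpolate function from above; the candidate grids, the Gaussian kernel form and the 50-landmark threshold mirror the description in the text, while the function names are assumptions.
\begin{verbatim}
import numpy as np
from itertools import product
from sklearn.model_selection import KFold, LeaveOneOut

def grid_search_kernel(X, d, length_scales, amplitudes):
    # Cross-validated search over Gaussian-kernel hyperparameters
    cv = LeaveOneOut() if len(X) < 50 else KFold(n_splits=5)
    best, best_err = None, np.inf
    for s, sig in product(length_scales, amplitudes):
        def kernel(A, B, s=s, sig=sig):
            D = np.linalg.norm(A[:, None] - B[None, :], axis=-1)
            return sig**2 * np.exp(-D**2 / (2 * s**2))
        errs = []
        for tr, te in cv.split(X):
            pred = gp_interpolate(X[tr], d[tr], X[te], kernel)
            errs.append(np.linalg.norm(pred - d[te], axis=1).mean())
        if np.mean(errs) < best_err:
            best, best_err = (s, sig), np.mean(errs)
    return best, best_err
\end{verbatim}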
\vspace{-3mm}
\section{Experiments}
\vspace{-2mm}
The experimental dataset consists of 6 preUS and postUS image pairs that were acquired during surgery on a BK Ultrasound 3000 system (BK Medical, Analogic Corporation, Peabody, USA) directly connected to the Brainlab VectorVision Sky neuronavigation system (Brainlab, Munich, Germany).
We use the mean Euclidean distance between the predicted and ground-truth landmark coordinates, measured in $mm$, for the registration evaluation. The compared methods include: affine, thin-plate kernel FBR, variogram FBR and Gaussian kernel FBR. For US pairs that have fewer than 50 landmarks, we use leave-one-out cross validation, otherwise 5-fold cross validation. All of the compared methods finish within 10 minutes.
\vspace{-5mm}
\begin{table}[H]
\centering
\caption{Registration evaluation results (in $\mathit{mm}$)}
\label{tab:reg_results}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& Landmarks & Before Reg. & Affine & Thin-plate & Variograms & GaussianK
\\ \hline
Patient 1 & 123 & 5.56$\pm$1.05 & 2.99$\pm$1.21 & 1.79$\pm$0.70 & 2.11$\pm$0.74 & 1.75$\pm$0.68 \\ \hline
Patient 2 & 71 & 3.35$\pm$1.22 & 2.08$\pm$1.13 & 2.06$\pm$1.18 & 2.06$\pm$1.12 & 1.97$\pm$1.05 \\ \hline
Patient 3 & 49 & 2.48$\pm$1.56 & 1.93$\pm$1.75 & 1.25$\pm$1.95 & n/a & 1.23$\pm$1.77 \\ \hline
Patient 4 & 12 & 4.40$\pm$1.79 & 3.06$\pm$2.35 & 1.45$\pm$1.99 & n/a & 1.42$\pm$2.04 \\ \hline
Patient 5 & 64 & 2.91$\pm$1.33 & 1.86$\pm$1.24 & 1.29$\pm$1.17 & n/a & 1.33$\pm$1.40 \\ \hline
Patient 6 & 98 & 3.29$\pm$1.09 & 2.12$\pm$1.16 & 2.02$\pm$1.21 & 2.05$\pm$1.40 & 1.96$\pm$1.38 \\ \hline
\end{tabular}
\end{table}
\vspace{-6mm}
In addition, we demonstrate a preliminary result of active registration. As shown in Fig.5, (a) is the registered source preUS image and (b) is the target postUS image. Notice that the FBR does not align the tumor boundary well due to the lack of landmarks. In the active registration step, a user manually added 3 new key-point pairs based on the image context and a color mapping of the registration uncertainty. By visual inspection, we can see that the alignment of the tumor boundary is substantially improved.
\begin{figure}[t]
\centering
\includegraphics[height=3.4cm]{fig5}
\vspace{-3mm}
\caption{(a) FBR result of the preUS image; (b) postUS image; (c) Visualization of the uncertainty overlaid on the preUS image; (d) Improved registration result.}
\label{fig:active_reg}
\vspace{-3mm}
\end{figure}
\vspace{-4mm}
\section{Conclusion}
\vspace{-3mm}
We proposed a novel feature-based active registration framework to compensate for brain shift. We believe this framework has the potential to eventually be applied in the operating room. Future work includes exploring non-isotropic variograms and other advanced schemes for GP kernel estimation. Implementing our framework in clinical software, such as 3D Slicer, is also of interest.
Automatic Speech Recognition (ASR) systems are becoming widely adopted in various applications, such as voice commands, voice assistants, dictation tools or conversation transcribers.
In many ASRs, a serious limitation is the lack of any punctuation or capitalization (with the exception of some recent end-to-end models).
This can be problematic both in the case of visual presentation of the outputs, where the non-punctuated transcripts are confusing and difficult to read, and when these transcripts are used as inputs for downstream tasks such as those in the domain of Natural Language Processing (NLP).
Off-the-shelf NLP systems are usually trained on punctuated text, so the lack of punctuation can cause a significant deterioration of their performance.
We are especially interested in addressing this issue in the domain of telephone conversational speech.
Our application transcribes telephone calls between customers and agents, and performs semantic annotation to find specific events, as well as the intents and moods of the interlocutors.
Providing punctuation became crucial for us to deliver a high-quality service.
Unlike many other machine learning tasks, punctuation prediction does not abound in reference datasets that would enable supervised learning.
In principle any punctuated text source such as blogs, news articles or Wikipedia, could be used for training a punctuation prediction model, but most of them are hardly representative of the conversational language.
On the other hand, speech transcripts with proper punctuation are rather difficult to find or time-consuming to annotate.
In this work, we show that the English Fisher corpus~\cite{cieri2004fisher}, which contains about 11000 distinct conversations, can be successfully used to provide data for punctuation prediction.
To leverage the fact that we are working with conversational speech, we propose to use the recognition from both sides of the conversation to predict punctuation, as well as the relative timing and duration of each word, which, to the best of our knowledge, have not been used before for the punctuation prediction task.
Two variants of Deep Neural Network (DNN) sequence labelling models - a Bidirectional Long Short-Term Memory (BLSTM) and a Convolutional Neural Network (CNN) - were trained to predict the punctuation outputs for each word in the dialogue sequence.
Pre-trained GloVe~\cite{pennington2014glove} word embeddings were used with the intent of making the model more robust to different conversation topics than those that can be found in English Fisher corpus~\cite{cieri2004fisher}.
Both models achieve results that are on par with other work performed in this task for comparable domains.
The related research is presented in section~\ref{sec:relatedWork}.
Section~\ref{sec:methods} describes our approach to data preparation as well as model architectures and the details of their training.
We present and discuss the results in section~\ref{sec:results}.
Finally, we conclude our work in section~\ref{sec:conclusions}.
\section{Related work}\label{sec:relatedWork}
Early attempts focused on finding sentence boundaries (``dot prediction''), and for that purpose, several linguistic features were used: an n-gram language model, turn markers and parts of speech (POS) information~\cite{stolcke1996automatic}.
Subsequent research employed a maximum entropy model, which predicted dots, commas and question marks based on lexical features (words, n-grams and previous predictions) and prosodic features, represented as pause tokens of a specific length~\cite{huang2002maximum}.
It has been shown that the presence of pauses in speech can serve as an indicator of punctuation marks, but there is a significant variation in how different speakers use pauses~\cite{igras2016structure}.
Conditional Random Fields (CRF) based models were also proposed for this task~\cite{lu2010better,ueffing2013improved}.
Recently, an LSTM model with several variants has been proposed for this task, which similarly uses words and pauses tokens as inputs~\cite{tilk2015lstm,tilk2016bidirectional}.
The authors decided not to use additional prosodic features such as F0 or phone durations due to their subpar performance in~\cite{christensen2001punctuation}.
We wish to emphasize that relative word timing and duration have not been investigated by any of these works, and in principle, their fidelity should be higher than that of artificial, discretized pause tokens.
\section{Methods}\label{sec:methods}
\subsection{Data preparation}\label{subsec:dataPreparation}
Unlike other telephone speech corpora, the Fisher corpus~\cite{cieri2004fisher} has properly punctuated transcripts.
While the most widely used version of the Fisher transcripts (available in LDC catalogue numbers LDC2004T19 and LDC2005T19) consists of \textit{.txt} files containing time alignment, the majority of conversations also have a second transcript version in a \textit{.txo} file, which does not have time alignment, but has rich punctuation and proper capitalization.
The availability of this data provides an opportunity to utilize the information from both sides of the conversation to predict punctuation.
We represent a dialogue $\mathcal{W}$ as an ordered set $\mathcal{W} = \{w_i\}$ of words $w$, where each $w$ has several properties:
\begin{itemize}
\item $t_i$ is the textual representation of word $w_i$;
\item $c_i$ is a binary feature, representing which conversation side uttered word $w_i$;
\item $s_i$ is a real number, describing time offset (in seconds) at which the word $w_i$ started;
\item $d_i$ is a real number, describing the duration (in seconds) of the word $w_i$;
\item $p_i$ is the punctuation symbol, which appears after word $w_i$.
\end{itemize}
The set is ordered on the $s$ property of each word, i.e. the starting time.
This formulation allows us to elegantly represent interjections, interruptions and simultaneous speech, which are often encountered in dialogues.
The $p$ properties are only known at the training time and are being predicted during inference.
With this representation in mind, we treat the punctuation prediction problem as a sequence labelling task.
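For concreteness, this representation could be realized as follows (a hypothetical sketch; words\_side\_a and words\_side\_b stand for the per-channel word lists):
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional

@dataclass
class Word:
    t: str              # textual representation
    c: int              # conversation side (0 or 1)
    s: float            # start time offset in seconds
    d: float            # duration in seconds
    p: Optional[str]    # punctuation after the word (None at inference)

# The dialogue is the union of both channels, ordered by start time:
dialogue = sorted(words_side_a + words_side_b, key=lambda w: w.s)
\end{verbatim}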
To fit the Fisher data into our model definition, we need to combine information from time-annotated and punctuated transcripts.
The first step is computing the forced alignment of the time-annotated transcripts to obtain word-level information about starting times and durations ($s$ and $d$ properties).
For that purpose, we used the Kaldi ASR toolkit~\cite{povey2011kaldi} with a LSTM-TDNN acoustic model trained with lattice-free Maximum Mutual Information (MMI) criterion~\cite{povey2016purely}.
In order to minimize the differences between the two transcript versions, we edited the Fisher data preparation script not to exclude single-word utterances and the text in parentheses.
The next step is extraction of punctuation properties $p$ and conversation side properties $c$ from the punctuated transcripts.
We retain blanks (no punctuation), dots, commas and question marks.
Other punctuation classes were rejected (converted to blanks) due to their low frequency (e.g.\ exclamation marks or triple dots) or the fact that they are modeled by other properties of the representation (e.g.\ the double dash that marks an interruption).
Finally, we combine the information obtained from both sources.
This task is not trivial, since both transcript versions may have slight differences.
We observed that this problem can be viewed as a global alignment between two symbol sequences, which can be obtained by applying the Needleman-Wunsch algorithm~\cite{needleman1970general}. The algorithm, originating in bioinformatics for DNA sequence alignment, is based on dynamic programming and is available in the open-source Biopython library~\cite{bioinformaticsbtp163}.
We compute the alignment between two transcript versions separately for each channel in each recording and remove the words which appeared in only one of the transcripts.
Then, we concatenate the words from both channels into one sequence and sort it by the starting time $s$, which yields our dialogue representation.
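For illustration, a minimal self-contained version of the alignment step is sketched below (in practice we rely on the Biopython implementation); the match/gap scores are arbitrary assumptions.
\begin{verbatim}
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    # Global alignment of two word sequences; None marks a gap,
    # i.e. a word present in only one transcript version.
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1): S[i][0] = i * gap
    for j in range(1, m + 1): S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            S[i][j] = max(d, S[i-1][j] + gap, S[i][j-1] + gap)
    pairs, i, j = [], n, m          # traceback
    while i > 0 or j > 0:
        if i > 0 and j > 0 and S[i][j] == S[i-1][j-1] + (
                match if a[i-1] == b[j-1] else mismatch):
            pairs.append((a[i-1], b[j-1])); i -= 1; j -= 1
        elif i > 0 and S[i][j] == S[i-1][j] + gap:
            pairs.append((a[i-1], None)); i -= 1
        else:
            pairs.append((None, b[j-1])); j -= 1
    return pairs[::-1]
\end{verbatim}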
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{sequence_alignment.png}
\caption{
An example of alignment between two word sequences in Fisher: the time-annotated and the punctuation annotated.
The $s$ stands for start time and $d$ stands for duration, both in seconds.
The circles represent a blank symbol, i.e.\ no match for a given word in the second sequence.
}
\label{fig:sequence_alignment}
\end{figure}
\begin{table}[th]
\caption{The total count of labels for each of the punctuation classes available in our training data set.}
\label{tab:punctuationClasses}
\centering
\begin{tabular}{ r r r }
\toprule
\textbf{Class} & \textbf{Count} & \textbf{Percentage} \\
\midrule
blank & 1429905 & 79.1\% \\
comma & 208289 & 11.5\% \\
dot & 148624 & 8.2\% \\
question mark & 22182 & 1.2\% \\
\bottomrule
\end{tabular}
\end{table}
Since this is a sequence labelling task, we predict a punctuation class for each word.
This results in a heavy class imbalance, as shown in table~\ref{tab:punctuationClasses}.
We attempted to mitigate this issue by introducing sample weighting based on predicted class frequency, however, it resulted in the model being skewed towards high recall, but much lower precision for the under-represented classes, manifesting as frequent false positives.
\subsection{Punctuation model}\label{subsec:punctuationModel}
\subsubsection{Features}\label{subsubsec:features}
There are several input features which we explored for our experiments.
The features which we used in every experiment are word embeddings and a conversation side indicator.
The word embeddings are 300-dimensional pre-trained GloVe~\cite{pennington2014glove} embeddings\footnote{
The \textit{glove.42B.300d.zip} embeddings, which are available at https://nlp.stanford.edu/projects/glove.
}, trained on Common Web Crawl data.
Those weights are fixed during training.
We selected the embeddings for the 50000 most frequent words.
We then expanded this representation with an all-zero vector used to embed all out-of-vocabulary words, which saves GPU memory.
Increasing the vocabulary size to 100000 words did not provide any significant performance gains.
Additionally, we trained our own GloVe embeddings on conversational-like data (around 525M words) gathered by the University of Washington\footnote{
The \textit{525M\_fisher\_conv\_web-filt+periods.gz} data set, which is available at https://ssli.ee.washington.edu/data.
} to investigate if these embeddings trained on conversational data would perform better, however, in some experiments, they resulted in either the F1 score being 0.2-0.3\% lower or a lack of model convergence.
We suspect this might be caused by a much smaller data quantity compared to the official GloVe embeddings.
The conversation side feature is a one-dimensional binary feature.
As features to the model, we used the word time information described by the interval between the start of the current word and the start of the previous word, and the duration of the current word.
We provided the interval instead of absolute offset time to obtain a more normal-like distribution for this feature.
Both of these features are speaker-adapted, i.e.\ they are standardized with regard to other words uttered by the same speaker in the same dialogue.
This also means that the pauses are not modelled explicitly as word tokens - they must be inferred by the model based on the subsequent word timings.
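A sketch of this feature computation, under the assumption that the intervals are taken over the merged dialogue-level word order while the standardization is done per speaker:
\begin{verbatim}
import numpy as np

def time_features(starts, durations, speakers):
    # starts, durations: per-word times of the merged dialogue (seconds)
    # speakers: per-word speaker/channel id
    starts = np.asarray(starts, dtype=float)
    intervals = np.diff(starts, prepend=starts[0])  # s_i - s_{i-1}
    feats = np.stack([intervals,
                      np.asarray(durations, dtype=float)], axis=1)
    speakers = np.asarray(speakers)
    for spk in np.unique(speakers):                 # per-speaker z-scoring
        m = speakers == spk
        feats[m] = (feats[m] - feats[m].mean(0)) / (feats[m].std(0) + 1e-8)
    return feats                                    # (n_words, 2)
\end{verbatim}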
In some experiments, we used part-of-speech (POS) tags predicted by SpaCy~\footnote{https://spacy.io/}, although we did not notice any significant improvement.
We hypothesize that either the POS tags did not introduce any predictive information, or that the performance of the tagger was poor in the absence of punctuation (and thus sentence segmentation).
\subsubsection{Architecture}\label{subsubsec:architecture}
We evaluated the performance of two types of models - one based on Convolutional Neural Nets (CNN), and the other based on Bidirectional Long Short-Term Memory (BLSTM) networks.
The input layer is a concatenation of the features described in~\ref{subsubsec:features}.
Both models were implemented using Keras~\cite{chollet2015keras} with Tensorflow~\cite{abadi2016tensorflow} backend.
The BLSTM model consists of four BLSTM layers, with each direction having 128 weights.
This model has the advantage of seeing a large context of words during training, and possibly the whole conversation during inference.
The CNN model uses several layers of 1D convolutions, which can be interpreted as fully-connected layers processing the input in small windows.
We additionally use dilated convolutions to broaden the context seen by each consecutive CNN layer.
Each layer is followed by a SELU activation~\cite{klambauer2017self}, which yielded a small improvement over batch normalization~\cite{ioffe2015batch} with ReLU~\cite{nair2010rectified}.
The setup which worked best for us is six 1D CNN layers, each with the filter size of 128 and padding which doesn't modify the word sequence length (i.e.\ \textit{same}).
The context width is equal to 3 for first five layers and equal to 20 for the last layer.
The middle four layers have a dilation rate of 2.
The final layer in both CNN and BLSTM model is fully-connected and followed by a softmax activation - this layer is applied separately at each time step to retrieve punctuation prediction for a given word.
To regularize the model we apply several measures:
\begin{itemize}
\item a dropout layer with probability 0.5 before the softmax layer;
\item 0.001 weight decay for the softmax layer weights and also for the BLSTM recurrent layers;
\item we add Gaussian noise with standard deviation 0.1 to the time feature and embedding inputs, before the last softmax activation, and before SELU activations in the CNN model;
\item SELU activations in the CNN model, which push the activations towards a zero-mean and unit-variance distribution (which was verified by inspection in TensorBoard).
\end{itemize}
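For reference, a plausible Keras reconstruction of the described CNN model is sketched below; it is an approximation of our setup rather than the exact code (e.g.\ the noise layer is only applied once at the input here), and the feature dimension assumes the 300-d embeddings plus the side and two time features.
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, FEAT_DIM, N_CLASSES = 200, 303, 4  # 300-d GloVe + side + 2 time

inputs = keras.Input(shape=(SEQ_LEN, FEAT_DIM))
x = layers.GaussianNoise(0.1)(inputs)
x = layers.Conv1D(128, 3, padding='same', activation='selu')(x)
for _ in range(4):  # middle layers: context width 3, dilation rate 2
    x = layers.Conv1D(128, 3, padding='same', dilation_rate=2,
                      activation='selu')(x)
x = layers.Conv1D(128, 20, padding='same', activation='selu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(N_CLASSES, activation='softmax',
                       kernel_regularizer=keras.regularizers.l2(0.001))(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
\end{verbatim}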
\subsection{Training}\label{subsec:training}
To train the models, we use a standard, categorical cross-entropy loss function and the Adam optimizer~\cite{kingma2014adam} with default settings proposed by the authors.
The number of epochs is determined by early stopping, with two epochs patience.
We divide the Fisher conversations into training, validation and test sets with proportions 8:1:1.
To best utilize the GPU, we use a batch size of 256 and each sample in the batch is created by traversing the conversation in windows of 200 words.
\section{Results}\label{sec:results}
We present the results achieved by the CNN and BLSTM models with and without time features in table~\ref{tab:allResults}.
Each model is evaluated with precision, recall and F1 scores for each punctuation class separately.
We see that CNN models yield slightly higher precision for the punctuation classes, and BLSTM tends to have the better recall (and the inverse is true for the blank symbol).
Although the BLSTM model makes fewer mistakes overall, the punctuation predicted by the CNN model is more accurate - especially in the case of question marks.
The word-level time features yield a minor improvement in both models, which suggests that the prosodic information carried by the relative word timing and their duration is useful in the punctuation prediction task.
\begin{table}[th]
\caption{
The per-class precision, recall and F1-score (in \%) achieved by the CNN and BLSTM models with pre-trained GloVe embeddings.
All models used 300-dimensional word embeddings and 1-dimensional boolean conversation side features, and the +T models additionally used two 1-dimensional time features.
The $\epsilon$ symbol denotes a blank prediction.
}
\label{tab:allResults}
\centering
\begin{tabular}{ r r r r r }
\toprule
\textbf{Model} & \textbf{Class} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} \\
\midrule
\multirow{4}{*}{CNN}
& $\epsilon$ & 91.7 & \textbf{95.5} & 93.5 \\
& . & 67.7 & 58.6 & 62.8 \\
& ? & 70.8 & 45.1 & 55.1 \\
& , & 68.3 & 58.1 & 62.8 \\
\midrule
\multirow{4}{*}{CNN+T}
& $\epsilon$ & 92.3 & 95.2 & 93.8 \\
& . & \textbf{68.6} & 63.3 & 65.9 \\
& ? & \textbf{72.9} & 46.7 & 57.0 \\
& , & \textbf{68.7} & 60.3 & 64.2 \\
\midrule
\multirow{4}{*}{BLSTM}
& $\epsilon$ & 92.7 & 94.9 & 93.8 \\
& . & 66.9 & 63.1 & 64.9 \\
& ? & 70.2 & 47.3 & 56.5 \\
& , & 67.9 & 61.8 & 64.7 \\
\midrule
\multirow{4}{*}{BLSTM+T}
& $\epsilon$ & \textbf{93.5} & 94.7 & \textbf{94.1} \\
& . & 67.9 & \textbf{66.7} & \textbf{67.3} \\
& ? & 64.7 & \textbf{54.6} & \textbf{59.2} \\
& , & 68.2 & \textbf{64.1} & \textbf{66.1} \\
\bottomrule
\end{tabular}
\end{table}
For the BLSTM+T model we show the confusion matrix in figure~\ref{fig:confusionMatrix}.
This matrix is row-normalized to better illustrate per-class mistakes, but the reader should note that due to the class imbalance (shown in table~\ref{tab:punctuationClasses}), this confusion matrix is almost symmetric regarding absolute numbers.
We observe several interesting types of mistakes.
First of all, blanks and commas are the most frequently confounded types (around 55k false positives and 44k false negatives), which in our opinion is the least harmful type of mistake, given that the placement of commas in transcribed speech can often be arbitrary.
All punctuation class labels are missed (i.e.\ blank is predicted) about 20\% of the time, relative to their occurrence counts.
The question mark is the most difficult class to predict: it is often mistaken for a dot (about 20\% of question marks) and relatively rarely inserted in place of any other class.
This can most likely be explained by the scarcity of labels for this class.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{confusion_matrix.pdf}
\caption{
Confusion matrix for the BLSTM+T model, normalized with regard to true labels (i.e. rows).
}
\label{fig:confusionMatrix}
\end{figure}
Below is an example part of a Fisher dialogue showcasing the predictions of the punctuation model.
Note: words start with a capital letter only after a dot appears.
\begin{quote}
L: Oh, and that's west paterson. I don't know
R: Oh,
L: if
R: okay.
L: that counts.
R: Okay. Okay. Yeah, west peterson is nice.
[laughter] So, i didn't even understand the ah,
the topic of the day did you hear it?
L: I [noise] i heard first i heard censorship.
And then i heard, ah, today's topic is something
about public schools. It was i think, ah,
should public schools
R: Do something about books
L: be allowed
R: kids
L: to censor
R: read?
L: certain books.
\end{quote}
Besides the quantitative evaluation, we also performed a qualitative investigation of the predictions of both models on the ASR transcripts of calls from a different domain than Fisher.
Since we do not have the gold labels for this data, this evaluation is highly subjective.
We observed that the CNN model tends to yield less confusing mistakes and outputs transcripts with higher, subjective readability, which is supported by the higher precision scores obtained by this model.
We suspect that this effect is amplified by the fact that the BLSTM model is more vulnerable to ASR mistakes due to the larger context size during inference.
\section{Conclusions}\label{sec:conclusions}
We presented two kinds of punctuation prediction DNN models - BLSTM and CNN based - which operate on a conversation, represented as a sequence of words, and utilize word embeddings, conversation side and per-word timing information as features.
We used two versions of the Fisher corpus transcripts - time-aligned and punctuated - along with sequence alignment procedure to procure the training and evaluation data.
Our results constitute significant evidence that the distribution of words in time, as well as pre-trained word embeddings, can be useful in the punctuation prediction task in the domain of conversational speech.
We've shown that the CNN architecture tends to achieve better precision scores, while the BLSTM variant is characterized by overall better recall and F1 measure.
These models can be easily applied in a production environment to provide punctuation annotations for speech recognition system transcripts, where all of the model input features are available.
For the future work, we'd like to investigate how much improvement can be gained by using prosodic features, as well as more sophisticated neural network architectures, such as models with attention~\cite{chan2016listen}.
\bibliographystyle{IEEEtran}
\chapter{Numerical Results}
\label{chap:numerical_results}
This chapter provides a numerical evaluation of all previously introduced system models and algorithms. Firstly, the performance of the algorithms is evaluated. Then a reference parameter set is defined and the system performance is evaluated for this set. This result then serves as a reference point for the subsequent evaluation of the impact of each system parameter on the system's performance. Furthermore, the system model extensions and the corresponding algorithms are evaluated and compared to the reference parameter set and the basic system model.
\section{Algorithm Evaluation}
\label{sec:numerical_alg_eval}
In this section the performance of the proposed algorithms is evaluated. Firstly, the performance of \ref{alg:sfp} (SFP) and \ref{alg:ao} (AO) is compared. Secondly, the proposed initialization for these algorithms is evaluated. Then the performance of \gls{ssum} and enhanced \gls{ssum} is analyzed.
In the remainder of this chapter the investigated system models are also evaluated with respect to their secrecy capacity, as is usually done in the literature. Consequently, the maximization of the \gls{see} is compared to the maximization of the secrecy capacity. A secrecy capacity maximization can also be performed with each previously introduced algorithm. Therefore, it is only necessary to ensure that the power consumption (the denominator of the \gls{see} ratio) is constantly $1$. This can simply be achieved by e.g. setting $\alpha_i = 0$ and choosing $P_{c,i}$ accordingly.
\subsection{Performance of SFP and AO}
\label{sec:numerical_eval_SFP_AO}
In \ref{chap:mimo-me_see_maximization} two algorithms are introduced which can obtain a locally optimal solution for the problem of maximizing the \gls{see} of the introduced system model. In this section the performance of these two algorithms is evaluated. The average \gls{see}, secrecy rate and computation duration of 100 realizations over an increasing transmit power constraint are depicted in \ref{fig:numerical_sfp_ao_cmp_see}, \ref{fig:numerical_sfp_ao_cmp_cs} and \ref{fig:numerical_sfp_ao_cmp_dur}, respectively.
\begin{figure}
\centering
\subfloat[~]{\label{fig:numerical_sfp_ao_cmp_see}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
xmin=-50,
xmax=0,
ymin=0,
ymax=90,
xlabel={Max. Transmit Power per Modem [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north west,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/cmp_see_sfp_ao.dat};
\addplot table[x index=0,y index=2] {data/cmp_see_sfp_ao.dat};
\addplot table[x index=0,y index=3] {data/cmp_see_sfp_ao.dat};
\addplot table[x index=0,y index=4] {data/cmp_see_sfp_ao.dat};
\addplot table[x index=0,y index=5] {data/cmp_see_sfp_ao.dat};
\addplot table[x index=0,y index=6] {data/cmp_see_sfp_ao.dat};
\addplot table[x index=0,y index=7] {data/cmp_see_sfp_ao.dat};
\addplot table[x index=0,y index=8] {data/cmp_see_sfp_ao.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[~]{\label{fig:numerical_sfp_ao_cmp_cs}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
xmin=-50,
xmax=0,
ymin=0,
ymax=15,
xlabel={Max. Transmit Power per Modem [dB]},
ylabel={$R_s$ $\left[ \text{secure bits / Hz / s} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north west,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/cmp_cs_sfp_ao.dat};
\addplot table[x index=0,y index=2] {data/cmp_cs_sfp_ao.dat};
\addplot table[x index=0,y index=3] {data/cmp_cs_sfp_ao.dat};
\addplot table[x index=0,y index=4] {data/cmp_cs_sfp_ao.dat};
\addplot table[x index=0,y index=5] {data/cmp_cs_sfp_ao.dat};
\addplot table[x index=0,y index=6] {data/cmp_cs_sfp_ao.dat};
\addplot table[x index=0,y index=7] {data/cmp_cs_sfp_ao.dat};
\addplot table[x index=0,y index=8] {data/cmp_cs_sfp_ao.dat};
\end{axis}
\end{tikzpicture}}
\subfloat[~]{\label{fig:numerical_sfp_ao_cmp_dur}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - SFP, FD - $\max SEE$ - AO, HD - $\max SEE$ - SFP, HD - $\max SEE$ - AO, FD - $\max C_s$ - SFP, FD - $\max C_s$ - AO, HD - $\max C_s$ - SFP, HD - $\max C_s$ - AO},
xmin=-50,
xmax=0,
ymin=0,
ymax=300,
xlabel={Max. Transmit Power per Modem [dB]},
ylabel={Computation Duration $\left[ s \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(1.1,0.5)},anchor=west},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/cmp_dur_sfp_ao.dat};
\addplot table[x index=0,y index=2] {data/cmp_dur_sfp_ao.dat};
\addplot table[x index=0,y index=3] {data/cmp_dur_sfp_ao.dat};
\addplot table[x index=0,y index=4] {data/cmp_dur_sfp_ao.dat};
\addplot table[x index=0,y index=5] {data/cmp_dur_sfp_ao.dat};
\addplot table[x index=0,y index=6] {data/cmp_dur_sfp_ao.dat};
\addplot table[x index=0,y index=7] {data/cmp_dur_sfp_ao.dat};
\addplot table[x index=0,y index=8] {data/cmp_dur_sfp_ao.dat};
\end{axis}
\end{tikzpicture}}
\caption{Performance comparison of \ref{alg:sfp} and \ref{alg:ao}.}
\label{fig:numerical_eval_sfp_ao}
\end{figure}
The first four curves compare the performance of the two introduced algorithms for maximizing the \gls{see} for the \gls{fd} and \gls{hd} case. The remaining four curves represent the performance when maximizing the secrecy capacity. In \ref{fig:numerical_sfp_ao_cmp_see} it can be seen that the two algorithms achieve a similar \gls{see} performance. Furthermore, it can be seen in \ref{fig:numerical_sfp_ao_cmp_cs} that \gls{sfp} achieves a slightly higher secrecy rate than \gls{ao} when maximizing for secrecy capacity in the case of high transmit power constraints; as a result, its \gls{see} is slightly worse. In this sense \gls{sfp} slightly outperforms \gls{ao} in terms of secrecy rate.
Comparing the computation duration of \gls{sfp} and \gls{ao} (see \ref{fig:numerical_sfp_ao_cmp_dur}), it can be seen that for transmit powers below $-30$~dB all algorithms yield a solution very fast. However, for transmit power constraints above $-30$~dB the computation durations start to deviate. For each of the two compared objectives (maximizing the \gls{see} or the secrecy capacity $C_s$) and for each of the two system models (\gls{fd} and \gls{hd}), \gls{sfp} solves the problem faster than \gls{ao}. The gain of \gls{sfp} is most significant when maximizing the secrecy capacity in the case of a \gls{fd} system model.
It can be concluded that both algorithms yield roughly similar performance. Because both algorithms are independent of each other, the results are plausible. However, \gls{sfp} is significantly faster for larger transmit power constraints. For this reason the remaining evaluation is based on the \gls{sfp} algorithm.
\subsection{Evaluation of Proposed Initialization}
\label{sec:numerical_eval_init}
One of the major problems of this work is that all proposed algorithms only yield locally optimal solutions because the objective function is difficult to maximize. Therefore, it is not possible to claim global optimality with any of the proposed algorithms. The goal of this section is to evaluate the performance of the proposed algorithms in terms of optimality.
In general each of the proposed algorithms is either based on \gls{sfp} or \gls{ao}. Both approaches iteratively improve an initial solution until they converge to a stationary point of the objective function. This approach can either get stuck in a local optimum or converge to the global optimum. Therefore, the performance of both algorithms depends heavily on the chosen initialization. In \ref{sec:initialization} a possible initialization scheme is introduced. In the remainder of this subsection the performance of this initialization is discussed.
In general the global optimum of the investigated problem can only be obtained by an exhaustive search of the set of feasible solutions, which is infeasible for large problem sizes. The following approach is proposed as a benchmark for the initialization: The initialization requires initial guesses for the three covariance matrices $\mat{Q}_{a,t}$, $\mat{Q}_{a,a}$ and $\mat{Q}_{b}$. If quantized roughly (with ``1 bit''), each covariance matrix can either have a very high value (H) or a very low value (L). Because of these two quantization levels and the three variables, this results in 8 possible initializations (see \ref{tab:numerical_inits}).
\begin{table}[tb]
\centering
\caption{Benchmark for the initialization with uniform initialization.}
\label{tab:numerical_inits}
\begin{tabular}{@{}c|cccccccc@{}}
\hline
Initialization & 1. & 2. & 3. & 4. & 5. & 6. & 7. & 8. \\ \hline
$\mat{Q}_{a,t}$ & L & H & L & H & L & H & L & H \\
$\mat{Q}_{a,a}$ & L & L & H & H & L & L & H & H \\
$\mat{Q}_{b}$ & L & L & L & L & H & H & H & H \\ \hline
\end{tabular}
\end{table}
As a benchmark, for each problem realization the algorithm is started from each of the 8 possible initializations and the optimization is run for each of them. The benchmark performance is then obtained by taking the maximum resulting \gls{see} when maximizing the \gls{see}, or the maximum $C_s$ when maximizing the secrecy capacity. The initializations for L are obtained by initializing the covariance matrices with $\mat{0}$. For H, the covariance matrices are initialized by obtaining the structure which maximizes the energy ratio for the corresponding covariance matrix, as discussed in \ref{sec:finding_a_good_structure}, and then by scaling the covariance matrix such that the maximum transmit power is allocated.
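A sketch of this benchmark loop is given below; problem.energy\_ratio\_structure and the optimize routine are assumed placeholder interfaces for the structure computation of \ref{sec:finding_a_good_structure} and the \gls{sfp}/\gls{ao} solvers.
\begin{verbatim}
from itertools import product
import numpy as np

def init_cov(problem, link, level, P_max):
    # 'L': zero matrix; 'H': energy-ratio-maximizing structure scaled
    # to the full transmit power budget.
    if level == 'L':
        return np.zeros((problem.n_tx[link], problem.n_tx[link]))
    Q = problem.energy_ratio_structure(link)   # assumed helper
    return Q * (P_max / np.trace(Q))

def benchmark(problem, optimize, P_max):
    # Run the solver from all 8 coarse H/L initializations and keep
    # the best objective value (SEE or C_s).
    best = -np.inf
    for q_at, q_aa, q_b in product('LH', repeat=3):
        init = {'Q_at': init_cov(problem, 'at', q_at, P_max),
                'Q_aa': init_cov(problem, 'aa', q_aa, P_max),
                'Q_b':  init_cov(problem, 'b',  q_b,  P_max)}
        best = max(best, optimize(problem, init))
    return best
\end{verbatim}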
\begin{figure}
\centering
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - Bench,
FD - $\max SEE$ - Init,
HD - $\max SEE$ - Bench,
HD - $\max SEE$ - Init,
FD - $\max C_s$ - Bench,
FD - $\max C_s$ - Init,
HD - $\max C_s$ - Bench,
HD - $\max C_s$ - Init},
xmin=-50,
xmax=10,
ymin=0,
ymax=35,
xtick={0,-50,-40,-30,-20,-10,10},
xlabel={Max. Transmit Power per Modem [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(1.1,0.5)},anchor=west},
font=\footnotesize]
\addplot table[x index=0,y index=2] {data/bench_data.dat};
\addplot table[x index=0,y index=1] {data/bench_data.dat};
\addplot table[x index=0,y index=4] {data/bench_data.dat};
\addplot table[x index=0,y index=3] {data/bench_data.dat};
\addplot table[x index=0,y index=6] {data/bench_data.dat};
\addplot table[x index=0,y index=5] {data/bench_data.dat};
\addplot table[x index=0,y index=8] {data/bench_data.dat};
\addplot table[x index=0,y index=7] {data/bench_data.dat};
\end{axis}
\end{tikzpicture}
\caption{Evaluation of the performance of the proposed initialization.}
\label{fig:numerical_eval_init}
\end{figure}
\Cref{fig:numerical_eval_init} illustrates the performance of the proposed initialization (see \ref{sec:two_stage_init}), which is compared to the previously introduced benchmark initialization for 100 problem realizations over an increasing transmit power constraint. It can be seen that the performance nearly matches. The remaining offset could result from the numerical evaluation. Based on this analysis it can be assumed that the proposed initialization achieves at least a performance similar to that of the benchmark. Furthermore, the proposed initialization requires only a single optimization instead of 8, hence reducing the complexity roughly to $1/8$.
In the remainder of this chapter the numerical evaluation is only performed using the proposed initialization. A better local optimum could possibly be obtained by quantizing with a higher resolution. However, due to the increased complexity this approach is not pursued further.
\subsection{Evaluation of SSUM for SEE Maximization}
\label{sec:numerical_eval_ssum}
In this subsection the performance of \ref{alg:stat_ssum}, which is based on \gls{ssum}, is evaluated. Because the algorithm numerically approximates the expectation operator by iteratively drawing random realizations of the given statistics until convergence, it is interesting to compare the performance of multiple runs of the algorithm on the same problem instance. In \ref{fig:numerical_eval_ssum} the empirical \gls{cdf} of the solutions is shown.
\begin{figure}
\centering
\subfloat[]{\label{fig:numerical_eval_ssum01}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - SSUM, HD - $\max SEE$ - SSUM},
xmin=66,
xmax=72,
ymin=0,
ymax=1,
title={Cumulative Distribution Function
},
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/eval_ssum.dat};
\addplot table[x index=2,y index=3] {data/eval_ssum.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[]{\label{fig:numerical_eval_ssum02}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max C_s$ - SSUM, HD - $\max C_s$ - SSUM},
xmin=20,
xmax=26,
ymin=0,
ymax=1,
title={Cumulative Distribution Function
},
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=4,y index=5] {data/eval_ssum.dat};
\addplot table[x index=6,y index=7] {data/eval_ssum.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the performance of \cref{alg:stat_ssum}, which is based on \gls{ssum}.}
\label{fig:numerical_eval_ssum}
\end{figure}
The results were generated by running the algorithm 100 times on the same problem instance. The algorithm specifications are listed in \cref{tab:numerical_ssum_stopping_criterion}, where \emph{iterations} denotes the number of algorithm runs and the minimum average improvement denotes the average improvement required to continue the algorithm. The moving average window size defines the number of most recent samples over which the average improvement is computed. Furthermore, the minimum and maximum number of realizations represent hard thresholds on the number of random realizations of the given statistical distribution which the algorithm draws. If the maximum number of realizations is reached, the algorithm terminates; this is also known as early stopping in the literature \cite{Razaviyayn2016}.
\begin{table}[tb]
\centering
\caption{Stopping criterion for \gls{ssum} algorithm evaluation.}
\label{tab:numerical_ssum_stopping_criterion}
\begin{tabular}{@{}ccccc@{}}
\hline
\multirow{2}{*}{Iterations} & Min. Avg. & Moving Avg. & Min. & Max. \\
& Improvement & Window Size & Realizations & Realizations\\ \hline
100 & $10^{-3}$ & 5 & 25 & 125\\ \hline
\end{tabular}
\end{table}
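A minimal sketch of this stopping rule, using the values of \cref{tab:numerical_ssum_stopping_criterion} (illustrative function and parameter names), could look as follows:
\begin{verbatim}
def should_stop(values, n_real, min_avg=1e-3, window=5,
                min_real=25, max_real=125):
    """Stop when the moving average of the last `window` improvements of
    the objective falls below `min_avg`, but draw at least `min_real`
    and at most `max_real` realizations (early stopping)."""
    if n_real < min_real:
        return False
    if n_real >= max_real:
        return True
    improvements = [values[i] - values[i - 1]
                    for i in range(max(1, len(values) - window),
                                   len(values))]
    return (len(improvements) == window
            and sum(improvements) / window < min_avg)
\end{verbatim}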
The \gls{cdf} for maximizing the \gls{see} as well as the \gls{cdf} for maximizing the secrecy capacity are very narrow in case of the \gls{hd} system model, which means that the algorithm will always yield a very similar solution. However, for the remaining two cases the \gls{cdf} is much broader and stretches over approximately 2 secure bits / Hz / J, which is already a significant variation. The accuracy of the algorithm might be increased by defining a stricter stopping criterion at the cost of increased computational complexity.
\subsection{Evaluation of Enhanced SSUM for SEE Maximization}
\label{sec:numerical_eval_essum}
Next, the performance of the proposed extension of the \gls{ssum} algorithm, denoted as \emph{enhanced SSUM} and given by \cref{alg:stat_enhanced_ssum}, is evaluated. The algorithm parameters are listed in \cref{tab:numerical_essum_stopping_criterion}. The algorithm is evaluated on 100 randomly generated problem instances. For the evaluation the mean \gls{see} and the standard deviation of standard and enhanced \gls{ssum} are computed; the results can be found in \cref{tab:numerical_eval_essum_performance}. It can be seen that enhanced \gls{ssum} provides a small gain for the \gls{hd} system model. For the \gls{fd} system model, however, the gain vanishes. A possible explanation could be that the number of \emph{negative channel realizations} is much smaller in case of \gls{fd}. This result is elaborated in \cref{sec:numeric_eval_stat_csi}.
\begin{table}[tb]
\centering
\caption{Stopping criterion for enhanced \gls{ssum} algorithm evaluation.}
\label{tab:numerical_essum_stopping_criterion}
\begin{tabular}{@{}ccccc@{}}
\hline
\multirow{2}{*}{Iterations} & Min. Avg. & Moving Avg. & Min. & Max. \\
& Improvement & Window Size & Realizations & Realizations\\ \hline
100 & $10^{-3}$ & 5 & 25 & 100\\ \hline
\end{tabular}
\end{table}
\begin{table}[tb]
\centering
\caption{Evaluation of performance of enhanced \gls{ssum}.}
\label{tab:numerical_eval_essum_performance}
\begin{tabular}{cc|cc|cc}
\hline
\multicolumn{2}{c|}{\multirow{2}{*}{}} & \multicolumn{2}{c|}{HD} & \multicolumn{2}{c}{FD} \\
\multicolumn{2}{c|}{} & SSUM & enhanced SSUM & SSUM & enhanced SSUM \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}$SEE$\\ $\left[ \text{sec. bits / Hz / J} \right]$\end{tabular}} & Mean & $20.562$ & $21.315$ & $37.299$ & $37.306$ \\
& Std. dev. & $11.443$ & $11.065$ & $12.930$ & $12.914$ \\ \hline
\end{tabular}
\end{table}
\section{Reference Parameter Set}
\label{sec:numerical_reference_parameter_set}
In this section a set of reference parameters is defined and subsequently evaluated.
The analyzed system model mainly depends on a set of 11 parameters:
\begin{itemize}
\item Thermal noise variance $\sigma_i^2$ of Bob ($i = b$) and Eve ($i=e$).
\item Maximum transmit power $P_{\text{max},i}$ per modem of Alice ($i = a$) and Bob ($i = b$).
\item Power amplifier efficiency $\alpha_i^{-1}$ of Alice ($i=a$) and Bob ($i=b$).
\item Constant power consumption of Alice $P_{c,a}$ and Bob $P_{c,b}$.
\item Bob's additional power consumption $P_{\text{FD}}$ due to his \gls{fd} modem.
\item Position of Eve $pos_{\text{Eve}}$ for the case that Eve is located on the line connecting Alice and Bob. For this case it is assumed that Alice is fixed at position \SI{0}{\metre} and Bob at position \SI{100}{\metre}.
\item Variance of the channel coefficients of Bob's \gls{si} channel $\sigma_{SI}^2$.
\item The transmitter noise $\kappa_i$ of Alice ($i = a$) and Bob ($i = b$) and the receiver noise $\beta_j$ of Bob ($j = b$) and Eve ($j = e$).
\item Number of antennas of Alice $N_a$ and Bob $N_b$.
\item Number of antennas of Eve $N_e$.
\end{itemize}
An overview of the default parameters of the system model and the evaluated range of each parameter can be found in \cref{tab:ref_parameters}. Note that the geometry of the reference setup places Alice, Bob and Eve on the three corners of an equilateral triangle, so that all three nodes are equidistant from each other. Only for the evaluation of $pos_{\text{Eve}}$ is this geometry changed, with Eve moving along the line connecting Alice and Bob. In this setup Alice and Bob are placed at a distance of \SI{100}{\metre}.
\begin{table}[tb]
\centering
\caption{Set of reference parameters and range of each parameter.}
\label{tab:ref_parameters}
\begin{tabular}{c|cccccccccc}
\hline
Parameter & \begin{tabular}[c]{@{}c@{}}$\sigma_i^2$\\ {[}dB{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$P_{\text{max},i}$\\ {[}dB{]}\end{tabular} & $\alpha_i^{-1}$ & \begin{tabular}[c]{@{}c@{}}$P_{c,i}$\\ {[}dB{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$P_{\text{FD}}$\\ {[}dB{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$pos_{\text{Eve}}$\\ {[}m{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\sigma_{SI}^2$\\ {[}dB{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\kappa_i = \beta_j$\\ {[}dB{]}\end{tabular} & $N_a = N_b$ & $N_e$ \\ \hline
Ref. Value & -40 & 0 & 0.9 & -20 & 0 & / & 0 & -40 & 4 & 4 \\
Min. Value & -50 & -50 & 0.1 & -40 & -60 & 5 & -20 & -60 & 2 & 2 \\
Max. Value & -10 & 10 & 1 & 0 & 0 & 95 & 30 & 0 & 8 & 8 \\ \hline
\end{tabular}
\end{table}
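For reproducibility, the reference values of \cref{tab:ref_parameters} can be collected in a plain configuration object; the key names below are illustrative only.
\begin{verbatim}
# Reference values of the table above (pos_Eve has no reference value
# because it is only used when the line geometry is evaluated).
REFERENCE_PARAMS = {
    "noise_var_dB":    -40,   # sigma_i^2 at Bob and Eve
    "p_max_dB":          0,   # P_max,i per modem (Alice and Bob)
    "pa_efficiency":   0.9,   # alpha_i^{-1} (Alice and Bob)
    "p_const_dB":      -20,   # P_c,i (Alice and Bob)
    "p_fd_dB":           0,   # P_FD of Bob's FD modem
    "si_var_dB":         0,   # sigma_SI^2 of Bob's SI channel
    "kappa_beta_dB":   -40,   # kappa_i = beta_j
    "n_ant_alice_bob":   4,   # N_a = N_b
    "n_ant_eve":         4,   # N_e
}
\end{verbatim}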
In the following, four different cases will be compared in terms of \gls{see}:
\begin{enumerate}
\item \gls{fd} Bob for maximizing the \gls{see}.
\item \gls{hd} Bob for maximizing the \gls{see}.
\item \gls{fd} Bob for maximizing the secrecy capacity.
\item \gls{hd} Bob for maximizing the secrecy capacity.
\end{enumerate}
\section{Performance Evaluation for Full CSI}
\label{sec:numerical_performance_evaluation_full_csi}
In this section the performance of the previously introduced \gls{sisose} and \gls{mimome} system models for a \gls{fd} and a \gls{hd} Bob is evaluated. To this end, the impact of each system parameter on the \gls{see} is studied.
\subsection{Evaluation of the Impact of Thermal Noise Variance and Transmit Power}
\label{sec:numerical_eval_nvar_and_pmax}
In this subsection the impact of the thermal noise variance $\sigma_i^2$ and of the maximum transmit power per modem $P_{\text{max},i}$ is evaluated. Both essentially change the \gls{snr} of the system, therefore their impact is studied jointly. The case of high thermal noise variance or low maximum transmit power is referred to as the \emph{low} \gls{snr} case. Conversely, the case of low thermal noise variance or high maximum transmit power is denoted as the \emph{high} \gls{snr} case.
\begin{figure}
\centering
\subfloat[SISO-SE]{\label{fig:numerical_siso_nvar}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-50,
xmax=-10,
ymin=0,
ymax=40,
xtick={0,-50,-40,-30,-20,-10},
xlabel={Thermal Noise Variance [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north east,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/siso_nvar.dat};
\addplot table[x index=0,y index=2] {data/siso_nvar.dat};
\addplot table[x index=0,y index=3] {data/siso_nvar.dat};
\addplot table[x index=0,y index=4] {data/siso_nvar.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[MIMO-ME]{\label{fig:numerical_std_nvar}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-50,
xmax=-10,
ymin=0,
ymax=250,
xlabel={Thermal Noise Variance [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north east,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/nvar_data.dat};
\addplot table[x index=0,y index=2] {data/nvar_data.dat};
\addplot table[x index=0,y index=3] {data/nvar_data.dat};
\addplot table[x index=0,y index=4] {data/nvar_data.dat};
\end{axis}
\end{tikzpicture}}
\subfloat[SISO-SE]{\label{fig:numerical_siso_pmax}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-50,
xmax=10,
ymin=0,
ymax=15,
xtick={0,-50,-40,-30,-20,-10,10},
xlabel={Max. Transmit Power per Modem [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/siso_pmax.dat};
\addplot table[x index=0,y index=2] {data/siso_pmax.dat};
\addplot table[x index=0,y index=3] {data/siso_pmax.dat};
\addplot table[x index=0,y index=4] {data/siso_pmax.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[MIMO-ME]{\label{fig:numerical_std_pmax}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-50,
xmax=10,
ymin=0,
ymax=90,
xtick={0,-50,-40,-30,-20,-10,10},
xlabel={Max. Transmit Power per Modem [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/pmax_data.dat};
\addplot table[x index=0,y index=2] {data/pmax_data.dat};
\addplot table[x index=0,y index=3] {data/pmax_data.dat};
\addplot table[x index=0,y index=4] {data/pmax_data.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the thermal noise variance $\sigma_i^2$ and the maximum transmit power per modem $P_{\text{max},i}$ on the system's \gls{see} for the \gls{sisose} and \gls{mimome} case.}
\label{fig:eval_nvar_pmax}
\end{figure}
\Cref{fig:numerical_siso_nvar} and \cref{fig:numerical_std_nvar} characterize the \gls{see} for a varying thermal noise variance for the \gls{sisose} and the \gls{mimome} case, respectively. It is assumed that the thermal noise variances at Bob and Eve are identical: $\sigma_b^2 = \sigma_e^2$. For very low $\sigma_i^2$, \gls{fd} exhibits a significant gain over \gls{hd} when maximizing the \gls{see}, both in the \gls{sisose} and in the \gls{mimome} case. Maximizing the secrecy capacity is in general much less energy efficient, although in the \gls{mimome} case the \gls{ee}, especially of \gls{hd}, increases for very low $\sigma_i^2$. Moreover, it can be seen that for $\sigma_i^2 \geq \SI{-30}{dB}$ the gain of \gls{fd} over \gls{hd} diminishes when maximizing the \gls{see}. Finally, for $\sigma_i^2 \rightarrow \infty$ the \gls{see} of all four schemes goes to zero.
In \cref{fig:numerical_siso_pmax} and \cref{fig:numerical_std_pmax} the \gls{see} is depicted over the maximum transmit power per modem for the \gls{sisose} and the \gls{mimome} case, respectively. It is assumed that Alice and Bob both have the same maximum transmit power constraint. Note that independent, rather than joint, transmit power constraints are assumed throughout this work.
It can be seen that the performance for the single antenna and the multiple antenna case is similar. For low transmit powers (low \gls{snr}) all four schemes perform equally, i.e., maximizing the secrecy capacity is also optimal in terms of \gls{see} for low \gls{snr}. However, for high transmit powers the four schemes start to deviate significantly: the \gls{see} obtained by maximizing the secrecy capacity decreases significantly and approaches zero, so that maximizing the secrecy capacity is highly sub-optimal in terms of \gls{see}. Furthermore, when maximizing the secrecy capacity in the \gls{mimome} case, \gls{fd} performs worse than \gls{hd}; the increased secrecy rate of \gls{fd}-enhanced physical layer security thus comes at a high cost in terms of \gls{see}. When maximizing the \gls{see}, \gls{fd} can slightly improve the \gls{see} as compared to \gls{hd}.
To summarize: for low \gls{snr}, maximizing the secrecy capacity is also optimal in terms of \gls{see} and \gls{fd} cannot increase the performance. In contrast, for high \gls{snr}, maximizing the secrecy capacity comes at a high \gls{ee} cost, while a \gls{fd} Bob achieves a slight \gls{see} gain if \gls{see} maximization is applied.
\subsection{Evaluation of the Impact of the Power Consumption}
\label{sec:numeric_eval_power_consumption}
In this subsection the impact of the power amplifier efficiency $\alpha_i^{-1}$ and of the constant power consumption $P_{c,i}$ on the system's \gls{see} is evaluated. These two parameters characterize the power consumption of Alice and Bob, and they can be expected to strongly influence the system's \gls{see}.
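For orientation, and consistent with these parameter definitions, the consumed power presumably enters the \gls{see} roughly as
\begin{equation*}
SEE = \frac{C_s}{\alpha_a \operatorname{tr}(\mat{Q}_a) + \alpha_b \operatorname{tr}(\mat{Q}_b) + P_{c,a} + P_{c,b} + P_{\text{FD}}},
\end{equation*}
where $\mat{Q}_i$ denotes the transmit covariance of node $i$ and $P_{\text{FD}} = 0$ for a \gls{hd} Bob; the precise definition is the one given in the earlier chapters. In particular, radiating $\operatorname{tr}(\mat{Q}_i)$ draws $\alpha_i \operatorname{tr}(\mat{Q}_i)$ from the supply, which is why an efficiency $\alpha_i^{-1}$ close to one and a small constant consumption $P_{c,i}$ directly increase the \gls{see}.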
\begin{figure}
\centering
\subfloat[SISO-SE]{\label{fig:numerical_siso_alpha}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=0,
xmax=1,
ymin=0,
ymax=15,
xlabel={Power Amplifier Efficiency},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north west,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/siso_alpha.dat};
\addplot table[x index=0,y index=2] {data/siso_alpha.dat};
\addplot table[x index=0,y index=3] {data/siso_alpha.dat};
\addplot table[x index=0,y index=4] {data/siso_alpha.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[MIMO-ME]{\label{fig:numerical_std_alpha}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=0,
xmax=1,
ymin=0,
ymax=100,
xlabel={Power Amplifier Efficiency},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north west,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/alpha_data.dat};
\addplot table[x index=0,y index=2] {data/alpha_data.dat};
\addplot table[x index=0,y index=3] {data/alpha_data.dat};
\addplot table[x index=0,y index=4] {data/alpha_data.dat};
\end{axis}
\end{tikzpicture}}
\subfloat[SISO-SE]{\label{fig:numerical_siso_Pc}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-40,
xmax=0,
ymin=0,
ymax=50,
xlabel={Constant Power Consumption [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north east,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/siso_Pc.dat};
\addplot table[x index=0,y index=2] {data/siso_Pc.dat};
\addplot table[x index=0,y index=3] {data/siso_Pc.dat};
\addplot table[x index=0,y index=4] {data/siso_Pc.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[MIMO-ME]{\label{fig:numerical_std_Pc}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-40,
xmax=0,
ymin=0,
ymax=600,
xlabel={Constant Power Consumption [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north east,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/Pc_data.dat};
\addplot table[x index=0,y index=2] {data/Pc_data.dat};
\addplot table[x index=0,y index=3] {data/Pc_data.dat};
\addplot table[x index=0,y index=4] {data/Pc_data.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the power amplifier efficiency $\alpha_i^{-1}$ and the constant power consumption per modem $P_{c,i}$ on the system's \gls{see} for the \gls{sisose} and \gls{mimome} case.}
\label{fig:eval_alpha_Pc}
\end{figure}
The impact of the power amplifier efficiency on the system's \gls{see} is depicted in \cref{fig:numerical_siso_alpha} and \cref{fig:numerical_std_alpha}. Firstly, it can be observed that the relative performance in the \gls{sisose} and the \gls{mimome} case is very similar. For a low efficiency of, e.g., 0.1, \gls{fd} and \gls{hd} perform very similarly in terms of \gls{see} when maximizing the \gls{see}. With increasing power amplifier efficiency the \gls{see} of both systems increases, although the rate of improvement drops for higher efficiencies. Furthermore, \gls{fd} exhibits an increasing gain over \gls{hd} for increasing power amplifier efficiency; the maximum is obtained at an efficiency of \SI{100}{\percent}.
The \gls{see} for a varying constant power consumption $P_{c,i}$ is shown in \cref{fig:numerical_siso_Pc} and \cref{fig:numerical_std_Pc}. Again the \gls{sisose} and the \gls{mimome} system model perform similarly. When maximizing the secrecy capacity, the \gls{see} of \gls{hd} and \gls{fd} stays almost constant at a very low level close to zero. In contrast, when maximizing the \gls{see}, the \gls{see} increases rapidly for low constant power consumptions, although there is hardly any gain of \gls{fd} over \gls{hd} in this regime. For high constant power consumptions the \gls{see} of \gls{fd} and \gls{hd} also approaches zero when maximizing the \gls{see}. It is interesting to note that \gls{fd} exhibits the highest absolute gain over \gls{hd}, when maximizing the \gls{see}, in the range of \SI{-20}{dB} to \SI{-10}{dB}.
\subsubsection{Full-Duplex Power Consumption}
\label{sec:numerical_eval_p_fd}
In this \namecref{sec:numerical_eval_p_fd} the impact of the additional power consumption $P_{\text{FD}}$ of a \gls{fd} node is evaluated. \Cref{fig:eval_p_fd} provides an overview of the results: the combined impact of $P_{\text{FD}}$ and four other parameters (thermal noise variance, power amplifier efficiency, transmitter noise / receiver distortion, and constant power consumption) is evaluated. In general it can be seen that the \gls{fd} \gls{see} approaches zero as $P_{\text{FD}} \rightarrow \infty$, whereas the \gls{hd} performance is not affected.
\begin{figure}
\centering
\subfloat[Thermal noise variance]{\label{fig:numerical_p_fd_nvar}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={
FD - $\max SEE$ - $\sigma_i^2 = -50$ dB,
HD - $\max SEE$ - $\sigma_i^2 = -50$ dB,
FD - $\max SEE$ - $\sigma_i^2 = -40$ dB,
HD - $\max SEE$ - $\sigma_i^2 = -40$ dB,
FD - $\max SEE$ - $\sigma_i^2 = -30$ dB,
HD - $\max SEE$ - $\sigma_i^2 = -30$ dB
},
xmin=-40,
xmax=0,
ymin=0,
ymax=250,
xlabel={FD Power Consumption $P_{\text{FD}}$ [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/nvar_pfd_data.dat};
\addplot table[x index=0,y index=2] {data/nvar_pfd_data.dat};
\addplot table[x index=0,y index=5] {data/nvar_pfd_data.dat};
\addplot table[x index=0,y index=6] {data/nvar_pfd_data.dat};
\addplot table[x index=0,y index=9] {data/nvar_pfd_data.dat};
\addplot table[x index=0,y index=10] {data/nvar_pfd_data.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[Power amplifier efficiency]{\label{fig:numerical_p_fd_alpha}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={
FD - $\max SEE$ - $\alpha_i^{-1} = 0.3$,
HD - $\max SEE$ - $\alpha_i^{-1} = 0.3$,
FD - $\max SEE$ - $\alpha_i^{-1} = 0.6$,
HD - $\max SEE$ - $\alpha_i^{-1} = 0.6$,
FD - $\max SEE$ - $\alpha_i^{-1} = 0.9$,
HD - $\max SEE$ - $\alpha_i^{-1} = 0.9$
},
xmin=-40,
xmax=0,
ymin=0,
ymax=90,
xlabel={FD Power Consumption $P_{\text{FD}}$ [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/alpha_pfd_data.dat};
\addplot table[x index=0,y index=2] {data/alpha_pfd_data.dat};
\addplot table[x index=0,y index=5] {data/alpha_pfd_data.dat};
\addplot table[x index=0,y index=6] {data/alpha_pfd_data.dat};
\addplot table[x index=0,y index=9] {data/alpha_pfd_data.dat};
\addplot table[x index=0,y index=10] {data/alpha_pfd_data.dat};
\end{axis}
\end{tikzpicture}}
\subfloat[Transmitter noise / receiver distortion]{\label{fig:numerical_p_fd_kappa_beta}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={
FD - $\max SEE$ - $\kappa_i = \beta_j = -60$ dB,
HD - $\max SEE$ - $\kappa_i = \beta_j = -60$ dB,
FD - $\max SEE$ - $\kappa_i = \beta_j = -40$ dB,
HD - $\max SEE$ - $\kappa_i = \beta_j = -40$ dB,
FD - $\max SEE$ - $\kappa_i = \beta_j = -20$ dB,
HD - $\max SEE$ - $\kappa_i = \beta_j = -20$ dB,
},
xmin=-40,
xmax=0,
ymin=0,
xlabel={FD Power Consumption $P_{\text{FD}}$ [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/kappa_beta_pfd_data.dat};
\addplot table[x index=0,y index=2] {data/kappa_beta_pfd_data.dat};
\addplot table[x index=0,y index=5] {data/kappa_beta_pfd_data.dat};
\addplot table[x index=0,y index=6] {data/kappa_beta_pfd_data.dat};
\addplot table[x index=0,y index=9] {data/kappa_beta_pfd_data.dat};
\addplot table[x index=0,y index=10] {data/kappa_beta_pfd_data.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[Constant power consumption]{\label{fig:numerical_p_fd_Pc}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={
FD - $\max SEE$ - $P_c = -30$ dB,
HD - $\max SEE$ - $P_c = -30$ dB,
FD - $\max SEE$ - $P_c = -20$ dB,
HD - $\max SEE$ - $P_c = -20$ dB,
FD - $\max SEE$ - $P_c = -10$ dB,
HD - $\max SEE$ - $P_c = -10$ dB,
},
xmin=-40,
xmax=0,
ymin=0,
xlabel={FD Power Consumption $P_{\text{FD}}$ [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/Pc_pfd_data.dat};
\addplot table[x index=0,y index=2] {data/Pc_pfd_data.dat};
\addplot table[x index=0,y index=5] {data/Pc_pfd_data.dat};
\addplot table[x index=0,y index=6] {data/Pc_pfd_data.dat};
\addplot table[x index=0,y index=9] {data/Pc_pfd_data.dat};
\addplot table[x index=0,y index=10] {data/Pc_pfd_data.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the \gls{fd} power consumption $P_\text{FD}$ on the system's \gls{see}.}
\label{fig:eval_p_fd}
\end{figure}
The impact of the \gls{fd} power consumption on the system's \gls{see} for three different noise variances is depicted in \cref{fig:numerical_p_fd_nvar}. As already observed, the gain of \gls{fd} over \gls{hd} vanishes with increasing thermal noise variance. Comparing the curves for different amounts of \gls{fd} power consumption, it can be seen that the maximum tolerable $P_{\text{FD}}$ decreases with increasing thermal noise variance $\sigma_i^2$: for $\sigma_i^2 = \SI{-50}{dB}$, \gls{fd} yields a gain for $P_{\text{FD}} < \SI{-20}{dB}$, whereas for $\sigma_i^2 = \SI{-40}{dB}$ it only yields a gain for $P_{\text{FD}} < \SI{-30}{dB}$.
The system performance for different power amplifier efficiencies, illustrated in \cref{fig:numerical_p_fd_alpha}, shows a similar behavior: for lower $\alpha_i^{-1}$, \gls{fd} requires a lower $P_{\text{FD}}$ in order to yield a \gls{see} gain as compared to \gls{hd}.
To summarize: the \gls{see} can be significantly increased by maximizing the \gls{see} instead of the secrecy capacity for any power amplifier efficiency and for constant power consumptions which are not too high. Furthermore, \gls{fd} can only outperform \gls{hd} in terms of \gls{see} for a very high power amplifier efficiency, a low constant power consumption, and a low \gls{fd} power consumption $P_{\text{FD}}$.
\subsection{Evaluation of the Impact of Self-Interference}
\label{sec:numeric_eval_si}
In this subsection the impact of the \gls{si} on the system's \gls{see} is analyzed. The \gls{si} is mainly characterized by the transmitter noise parameter $\kappa_i$, the receiver distortion parameter $\beta_j$ and the power of the \gls{si} channel. Evaluating the performance of the system as a function of these parameters is important because canceling the \gls{si} of \gls{fd} systems is still a challenge.
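For orientation, these parameters match the widely used limited dynamic range model, in which the distortions roughly behave as
\begin{equation*}
\mat{c}_i \sim \mathcal{CN}\bigl(\mat{0}, \kappa_i \operatorname{diag}(\mat{Q}_i)\bigr), \qquad \mat{e}_j \sim \mathcal{CN}\bigl(\mat{0}, \beta_j \operatorname{diag}(\mat{\Phi}_j)\bigr),
\end{equation*}
where the transmitter noise $\mat{c}_i$ is added to node $i$'s transmit signal, the receiver distortion $\mat{e}_j$ is added at node $j$'s receiver, and $\mat{\Phi}_j$ is the covariance of the undistorted received signal; the exact model is the one defined in the system model chapters. Since both terms scale with the transmitted and received power, respectively, the residual \gls{si} after cancellation grows with Bob's own transmit power, which is precisely the effect studied in this subsection.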
\begin{figure}
\centering
\subfloat[SISO-SE]{\label{fig:numerical_siso_kappa_beta}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-60,
xmax=0,
ymin=0,
ymax=15,
xlabel={Transmitter Noise $\kappa_i$ / Receiver Distortion $\beta_j$ ($\kappa_i = \beta_j$) [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.04,0.3)},anchor=west},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/siso_kappa_beta.dat};
\addplot table[x index=0,y index=2] {data/siso_kappa_beta.dat};
\addplot table[x index=0,y index=3] {data/siso_kappa_beta.dat};
\addplot table[x index=0,y index=4] {data/siso_kappa_beta.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[MIMO-ME]{\label{fig:numerical_std_kappa_beta}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-60,
xmax=0,
ymin=0,
ymax=90,
xlabel={Transmitter Noise $\kappa_i$ / Receiver Distortion $\beta_j$ ($\kappa_i = \beta_j$) [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.04,0.3)},anchor=west},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/kappa_beta_data.dat};
\addplot table[x index=0,y index=2] {data/kappa_beta_data.dat};
\addplot table[x index=0,y index=3] {data/kappa_beta_data.dat};
\addplot table[x index=0,y index=4] {data/kappa_beta_data.dat};
\end{axis}
\end{tikzpicture}}
\subfloat[SISO-SE]{\label{fig:numerical_siso_si}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-30,
xmax=30,
ymin=0,
ymax=15,
xlabel={Variance of Bob's SI Channel [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.04,0.3)},anchor=west},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/siso_si.dat};
\addplot table[x index=0,y index=2] {data/siso_si.dat};
\addplot table[x index=0,y index=3] {data/siso_si.dat};
\addplot table[x index=0,y index=4] {data/siso_si.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[MIMO-ME]{\label{fig:numerical_std_si}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=-20,
xmax=30,
ymin=0,
ymax=90,
xlabel={Variance of Bob's SI Channel [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.04,0.3)},anchor=west},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/si_data.dat};
\addplot table[x index=0,y index=2] {data/si_data.dat};
\addplot table[x index=0,y index=3] {data/si_data.dat};
\addplot table[x index=0,y index=4] {data/si_data.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the transmitter noise $\kappa_i$, the receiver distortion $\beta_j$ and the variance of the \gls{si} channel $\sigma_{SI}^2$ on the system's \gls{see} for the \gls{sisose} and \gls{mimome} case.}
\label{fig:eval_kappa_beta_si}
\end{figure}
Firstly, the effect of the transmitter noise and the receiver distortion on the \gls{see} is analyzed for the \gls{sisose} and the \gls{mimome} case (see \cref{fig:numerical_siso_kappa_beta} and \cref{fig:numerical_std_kappa_beta}). The results for the \gls{sisose} case have to be interpreted cautiously because the transmitter noise and the receiver distortion are only modeled at Bob in the \gls{sisose} system model of \cref{chap:siso-se}, whereas the \gls{mimome} system model incorporates these distortions at each node. This is why the \gls{see} approaches zero for all four settings as the distortion grows only in the \gls{mimome} case (see \cref{fig:numerical_std_kappa_beta}). In both figures, however, it can be seen that the gain of \gls{fd} over \gls{hd} for maximizing the \gls{see} vanishes for distortions above \SI{-20}{dB}.
It is interesting to note that when maximizing the secrecy capacity for both cases, the \gls{see} increases for higher distortions before it drops to zero for very high distortions. A possible explanation for this could be that the distortion depends on the transmit power. For higher parameters $\kappa_i$ and $\beta_j$ it might be optimal to allocate less power, which in turn leads to an increased \gls{see}.
In \cref{fig:numerical_siso_si} and \cref{fig:numerical_std_si} the impact of the power of Bob's \gls{si} channel on the \gls{see} is depicted. The performance of the two \gls{hd} cases is constant since their system model does not depend on the \gls{si} channel $\mat{H}_{bb}$. For the two \gls{fd} cases, however, the gain over \gls{hd} diminishes once the variance of the \gls{si} channel reaches roughly \SI{20}{dB}.
In summary it can be seen that \gls{fd} only outperforms \gls{hd} in terms of \gls{see} for low distortion parameters $\kappa_i$ and $\beta_j$ and for a low variance of the \gls{si} channel.
\subsection{Evaluation of the Impact of Eve's Position}
\label{sec:numeric_eval_pos}
In this subsection the geometry of the system models is altered: a system model is considered where Alice and Bob are located at a distance of \SI{100}{\metre} and Eve is located somewhere on the line connecting Alice and Bob. In the following it is analyzed how the \gls{see} performance is affected by the position of Eve on this line segment.
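The following snippet sketches the assumed geometry; the path-loss exponent is a hypothetical placeholder, as the actual channel gains are defined by the channel model of the earlier chapters.
\begin{verbatim}
def eve_channel_gains(pos_eve, d_ab=100.0, exponent=3.0):
    """Distance-dependent gains of the Alice-Eve and Bob-Eve links for
    Eve at pos_eve (in meters) on the line from Alice (0 m) to Bob
    (100 m).  The path-loss exponent is an assumption for illustration
    only."""
    d_ae = max(pos_eve, 1e-3)          # distance Alice -> Eve
    d_be = max(d_ab - pos_eve, 1e-3)   # distance Bob  -> Eve
    return d_ae ** -exponent, d_be ** -exponent
\end{verbatim}
Intuitively, a small Bob-to-Eve distance makes Bob's jamming dominant at Eve, which is consistent with the results below.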
\begin{figure}
\centering
\subfloat[SISO-SE]{\label{fig:numerical_siso_pos}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=0,
xmax=100,
ymin=0,
ymax=0.7,
xlabel={Position of Eve [m] (Alice \SI{0}{\metre}; Bob \SI{100}{\metre})},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north west,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/siso_pos.dat};
\addplot table[x index=0,y index=2] {data/siso_pos.dat};
\addplot table[x index=0,y index=3] {data/siso_pos.dat};
\addplot table[x index=0,y index=4] {data/siso_pos.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[MIMO-ME]{\label{fig:numerical_std_pos}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=0,
xmax=100,
ymin=0,
ymax=4,
xlabel={Position of Eve [m] (Alice \SI{0}{\metre}; Bob \SI{100}{\metre})},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north west,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/pos_data.dat};
\addplot table[x index=0,y index=2] {data/pos_data.dat};
\addplot table[x index=0,y index=3] {data/pos_data.dat};
\addplot table[x index=0,y index=4] {data/pos_data.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the position of Eve $pos_{\text{Eve}}$ on the system's \gls{see} for the \gls{sisose} and \gls{mimome} case.}
\label{fig:eval_pos}
\end{figure}
The performance of the \gls{sisose} and the \gls{mimome} system model for a varying position of Eve is depicted in \cref{fig:numerical_siso_pos} and \cref{fig:numerical_std_pos}. Firstly, it is noted that both system models show a roughly similar relative performance. For all compared cases and both system models, the \gls{see} is lowest when Eve is close to Alice and increases as Eve moves away from Alice and towards Bob. In general \gls{fd} and \gls{hd} perform very similarly. When maximizing the \gls{see}, \gls{fd} is always at least as good as \gls{hd}; once Eve is closer to Bob than to Alice, \gls{fd} yields a higher \gls{see} than \gls{hd}. This gain increases as Eve moves closer to Bob and becomes significant, e.g., for a position of \SI{95}{\metre}. Also when maximizing the secrecy capacity, the \gls{see} of \gls{fd} is higher than in the \gls{hd} case if Eve is located close to Bob. This makes sense intuitively because jamming from Bob is much more efficient if Eve is located in close proximity to Bob.
In summary, \gls{fd} can yield a higher \gls{see} only if Eve is located close to Bob. This is also true if the secrecy capacity is maximized.
\subsection{Evaluation of the Impact of the Number of Antennas}
\label{sec:numeric_eval_Na_Nb_Nc}
In this subsection the impact of the number of antennas on the system's \gls{see} is investigated, which is only possible for the \gls{mimome} system model. The results are depicted in \cref{fig:numerical_std_Na_Nb} and \cref{fig:numerical_std_Ne}.
\begin{figure}
\centering
\subfloat[~]{\label{fig:numerical_std_Na_Nb}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=2,
xmax=8,
ymin=0,
ymax=250,
xlabel={Number of Antennas of Alice and Bob $N_a = N_b$},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north west,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/Na_Nb_data.dat};
\addplot table[x index=0,y index=2] {data/Na_Nb_data.dat};
\addplot table[x index=0,y index=3] {data/Na_Nb_data.dat};
\addplot table[x index=0,y index=4] {data/Na_Nb_data.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[~]{\label{fig:numerical_std_Ne}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$, HD - $\max SEE$, FD - $\max C_s$, HD - $\max C_s$},
xmin=2,
xmax=8,
ymin=0,
ymax=120,
xlabel={Number of Antennas of Eve},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend pos=north east,font=\footnotesize]
\addplot table[x index=0,y index=1] {data/Ne_data.dat};
\addplot table[x index=0,y index=2] {data/Ne_data.dat};
\addplot table[x index=0,y index=3] {data/Ne_data.dat};
\addplot table[x index=0,y index=4] {data/Ne_data.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the number of antennas of Alice, Bob and Eve on the system's \gls{see} for the \gls{mimome} case.}
\label{fig:eval_Na_Nb_Ne}
\end{figure}
In general the \gls{see} increases with an increasing number of antennas of Alice and Bob (see \cref{fig:numerical_std_Na_Nb}). Firstly, it can be seen that the gain of maximizing the \gls{see} is much higher for a larger number of antennas. Secondly, when maximizing the \gls{see}, \gls{fd} exhibits a gain over \gls{hd} for a low number of antennas, whereas the gain vanishes for a large number of antennas. In contrast, when maximizing the secrecy capacity, \gls{hd} is more efficient for a large number of antennas, whereas both perform equally well for a low number of antennas.
The system performance for a varying number of antennas of Eve behaves somewhat reciprocally: the \gls{see} increases for a smaller number of antennas of Eve. For a larger number of antennas of Eve, \gls{fd} yields a gain over \gls{hd} when maximizing the \gls{see}; this gain is quite significant if Eve has 8 antennas (twice as many as Alice and Bob).
In conclusion, the \gls{see} increases if Alice and Bob have more antennas than Eve. Furthermore, the gain of \gls{fd} is only significant if Eve has more antennas than Alice and Bob.
\section{Performance Evaluation for Bidirectional Communication}
\label{sec:numeric_eval_bidirectional}
In this section the \gls{see} performance of the different bidirectional communication schemes introduced in \cref{chap:sse_bidirectional_mimo-me}, namely \gls{fd}, \gls{fdd} and \gls{tdd}, is evaluated numerically.
\begin{figure}
\centering
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandard,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={
FD BC - $\max SEE$,
FD WC - $\max SEE$,
FDD - $\max SEE$,
TDD - $\max SEE$,
FD BC - $\max C_s$,
FD WC - $\max C_s$,
FDD - $\max C_s$,
TDD - $\max C_s$
},
xmin=-50,
xmax=10,
ymin=0,
ymax=120,
xtick={0,-50,-40,-30,-20,-10,10},
xlabel={Max. Transmit Power per Modem [dB]},
ylabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(1.1,0.5)},anchor=west},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/bidirectional_pmax_data.dat};
\addplot table[x index=0,y index=2] {data/bidirectional_pmax_data.dat};
\addplot table[x index=0,y index=3] {data/bidirectional_pmax_data.dat};
\addplot table[x index=0,y index=4] {data/bidirectional_pmax_data.dat};
\addplot table[x index=0,y index=5] {data/bidirectional_pmax_data.dat};
\addplot table[x index=0,y index=6] {data/bidirectional_pmax_data.dat};
\addplot table[x index=0,y index=7] {data/bidirectional_pmax_data.dat};
\addplot table[x index=0,y index=8] {data/bidirectional_pmax_data.dat};
\end{axis}
\end{tikzpicture}
\caption{Evaluation of the performance of \gls{fd}, \gls{fdd} and \gls{tdd} for bidirectional communication.}
\label{fig:numerical_eval_bidirect}
\end{figure}
\Cref{fig:numerical_eval_bidirect} depicts the performance of all four schemes for either \gls{see} or secrecy capacity maximization over an increasing transmit power constraint. The system parameters are chosen as in the previous section (see \cref{tab:ref_parameters}). For low transmit powers (low \gls{snr}), maximizing the secrecy capacity is again also optimal with respect to the \gls{see}. Furthermore, \gls{fd} and \gls{fdd} perform similarly, whereas \gls{tdd} performs worst.
Comparing \gls{fdd} and \gls{tdd}, it can be seen that \gls{fdd} performs better for maximum transmit powers below \SI{-20}{dB}, which could result from the lower noise power due to the reduced bandwidth of \gls{fdd}. For maximum transmit powers above \SI{-20}{dB}, however, \gls{tdd} outperforms \gls{fdd}. A possible explanation is the reduced transmit power of \gls{tdd}, because only one transmitter is active at any given time.
For high \gls{snr} it can be observed that \gls{fd} significantly outperforms \gls{fdd} and \gls{tdd}. In case of the \glsfirst{bc} system model, \gls{fd} nearly doubles the \gls{see} as compared to \gls{fdd} and \gls{tdd}. The \glsfirst{wc} \gls{fd} system model performs worse, but still significantly outperforms the classical schemes.
As a result it can be concluded that \gls{fd} can yield a significant \gls{see} gain in case of bidirectional communication as compared to classical approaches like \gls{fdd} and \gls{tdd}.
\section{Performance Evaluation for Statistical CSI}
\label{sec:numeric_eval_stat_csi}
In this section the performance the system can achieve in case of statistical \gls{csi} knowledge of the eavesdropper channels, as introduced in \cref{chap:sse_mimo-me_statistical_csi}, is evaluated. To this end, the previous evaluation is extended to the statistical \gls{csi} system model of \cref{chap:sse_mimo-me_statistical_csi}. Firstly, the performance of different solutions at the reference point is compared. Then the impact of the different system parameters on the system's performance is evaluated.
For the evaluation, \num{10000} random channel realizations of each known channel distribution are generated. The performance of the covariance matrices obtained by the different optimization algorithms is then evaluated on this set of reference channels. It is assumed that the \num{10000} reference channels empirically capture the underlying statistical distribution.
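A minimal sketch of this empirical evaluation (illustrative function names) is given below.
\begin{verbatim}
import numpy as np

def empirical_cdf(see_values):
    """Empirical CDF of the SEE over the drawn reference channels."""
    x = np.sort(np.asarray(see_values, dtype=float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p

def zero_see_fraction(see_values, tol=1e-9):
    """Fraction of reference channels with (numerically) zero SEE; a
    small value is desirable for secrecy, see the discussion in this
    section."""
    return float(np.mean(np.asarray(see_values, dtype=float) <= tol))
\end{verbatim}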
\subsection{Evaluation for Reference Parameters}
\label{sec:numerical_stat_csi_eval_ref_parameters}
In this subsection the performance for the reference parameter set is evaluated. A simple approach is to approximate the channel statistics by the mean of the distribution, which is denoted as $\mat{\bar{H}}$. In this case the deterministic algorithms of \cref{chap:mimo-me_see_maximization} (especially \cref{alg:sfp}) can be used to maximize either the \gls{see} or the secrecy capacity. This approach is considered as a benchmark for \cref{alg:stat_ssum} and is denoted as \emph{Bench} for the remainder of this section.
\begin{figure}
\centering
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - Bench,
HD - $\max SEE$ - Bench,
FD - $\max C_s$ - Bench,
HD - $\max C_s$ - Bench,
FD - $\max SEE$ - SSUM,
HD - $\max SEE$ - SSUM,
FD - $\max C_s$ - SSUM,
HD - $\max C_s$ - SSUM},
xmin=0,
xmax=140,
ymin=0,
ymax=1,
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(1.1,0.5)},anchor=west},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_bench_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_bench_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxcs_nvar_bench_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxcs_nvar_bench_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxcs_nvar_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxcs_nvar_hd_m.dat};
\end{axis}
\end{tikzpicture}
\caption{Evaluation of the performance of SSUM.}
\label{fig:numerical_ssum_ref}
\end{figure}
A comparison of the performance in terms of \gls{see} is depicted as a \gls{cdf} in \cref{fig:numerical_ssum_ref}. Firstly, it can be observed that approximating the channel statistics by the mean value $\mat{\bar{H}}$ (\emph{Bench}) does not yield a good performance; especially in the \gls{hd} case the resulting solution performs well only on very few channel realizations. Secondly, \gls{fd} yields a significant gain in \gls{see} and, more importantly, always has fewer channel realizations for which the \gls{see} is zero.
To provide security in case of statistical \gls{csi}, the number of channel realizations which result in a zero \gls{see} should be as small as possible. A \gls{fd} Bob can therefore significantly enhance the security as well as the \gls{see} in case of statistical \gls{csi}, because it results in significantly fewer zero-\gls{see} channel realizations.
\subsection{Evaluation of the Impact of Thermal Noise Variance and Transmit Power}
\label{sec:numerical_stat_csi_eval_nvar_pmax}
In this subsection the impact of the thermal noise variance and the maximum transmit power per modem is evaluated.
\begin{figure}
\centering
\subfloat[Thermal noise variance]{\label{fig:numerical_stat_csi_nvar}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - $\sigma_i^2 = -50$ dB,
HD - $\max SEE$ - $\sigma_i^2 = -50$ dB,
FD - $\max SEE$ - $\sigma_i^2 = -40$ dB,
HD - $\max SEE$ - $\sigma_i^2 = -40$ dB,
FD - $\max SEE$ - $\sigma_i^2 = -30$ dB,
HD - $\max SEE$ - $\sigma_i^2 = -30$ dB,
},
xmin=0,
xmax=300,
ymin=0,
ymax=1,
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_fd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_hd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_fd_h.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_nvar_hd_h.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[Max. transmit power]{\label{fig:numerical_stat_csi_pmax}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - $p_{max} = -10$ dB,
HD - $\max SEE$ - $p_{max} = -10$ dB,
FD - $\max SEE$ - $p_{max} = 0$ dB,
HD - $\max SEE$ - $p_{max} = 0$ dB,
FD - $\max SEE$ - $p_{max} = 10$ dB,
HD - $\max SEE$ - $p_{max} = 10$ dB,
},
xmin=0,
xmax=120,
ymin=0,
ymax=1,
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_pmax_fd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_pmax_hd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_pmax_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_pmax_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_pmax_fd_h.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_pmax_hd_h.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the thermal noise variance and the maximum transmit power per modem on the system's \gls{see} for the \gls{mimome} system model in case of statistical \gls{csi} of all eavesdropper channels.}
\label{fig:numerical_stat_csi_eval_nvar_pmax}
\end{figure}
The \gls{cdf} of \gls{fd} and \gls{hd} for maximizing the \gls{see} for three different thermal noise variance levels is depicted in \cref{fig:numerical_stat_csi_nvar}. It can be observed that with decreasing thermal noise variance the average \gls{see} increases and the number of channel realizations with a zero \gls{see} decreases, for both \gls{fd} and \gls{hd}. A low thermal noise variance, i.e., a high \gls{snr}, is therefore highly beneficial for secrecy as well as for \gls{see}. Furthermore, the gain of \gls{fd} over \gls{hd} increases with decreasing thermal noise power; consequently, in case of low \gls{snr}, \gls{fd} yields no gain over \gls{hd}. These observations match the results of the evaluation for full \gls{csi} in \cref{sec:numerical_eval_nvar_and_pmax}.
Similarly, the performance in terms of \gls{see} for three different maximum transmit power constraints is shown in \cref{fig:numerical_stat_csi_pmax}. It can be seen that the performance increases significantly with an increasing transmit power constraint, which matches the results from \cref{sec:numerical_eval_nvar_and_pmax}. A new result is that, for low transmit powers, \gls{fd} has significantly fewer zero-\gls{see} channel realizations than \gls{hd}. Therefore, \gls{fd} improves the secrecy not only in terms of the average secrecy rate.
\subsection{Evaluation of the Impact of the Power Consumption}
\label{sec:numerical_stat_csi_eval_alpha_Pc}
In this subsection the impact of the power consumption, which mainly depends on the power amplifier efficiency $\alpha_i^{-1}$ and the constant power consumption per modem $P_{c,i}$, on the system's \gls{see} is evaluated.
\begin{figure}
\centering
\subfloat[Power amplifier efficiency]{\label{fig:numerical_stat_csi_alpha}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - $\alpha_i^{-1} = 0.3$,
HD - $\max SEE$ - $\alpha_i^{-1} = 0.3$,
FD - $\max SEE$ - $\alpha_i^{-1} = 0.6$,
HD - $\max SEE$ - $\alpha_i^{-1} = 0.6$,
FD - $\max SEE$ - $\alpha_i^{-1} = 0.9$,
HD - $\max SEE$ - $\alpha_i^{-1} = 0.9$,
},
xmin=0,
ymin=0,
ymax=1,
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_alpha_fd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_alpha_hd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_alpha_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_alpha_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_alpha_fd_h.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_alpha_hd_h.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[Constant power consumption]{\label{fig:numerical_stat_csi_Pc}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - $P_c = -30$ dB,
HD - $\max SEE$ - $P_c = -30$ dB,
FD - $\max SEE$ - $P_c = -20$ dB,
HD - $\max SEE$ - $P_c = -20$ dB,
FD - $\max SEE$ - $P_c = -10$ dB,
HD - $\max SEE$ - $P_c = -10$ dB,
},
xmin=0,
ymin=0,
ymax=1,
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_Pc_fd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_Pc_hd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_Pc_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_Pc_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_Pc_fd_h.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_Pc_hd_h.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the power amplifier efficiency and the constant power consumption per modem on the system's \gls{see} for the \gls{mimome} system model in case of statistical \gls{csi} of all eavesdropper channels.}
\label{fig:numerical_stat_csi_eval_alpha_Pc}
\end{figure}
The \gls{cdf} for three different values of the power amplifier efficiency is illustrated in \cref{fig:numerical_stat_csi_alpha}. It can be observed that the \gls{see}, as well as the gain of \gls{fd} over \gls{hd}, grows with increasing power amplifier efficiency. This result matches those of the \gls{sisose} and \gls{mimome} system models in case of full eavesdropper \gls{csi}.
The performance in terms of \gls{see} for three different constant power consumption values is depicted in \cref{fig:numerical_stat_csi_Pc}. The \gls{see} of both system models increases with decreasing constant power consumption, while the gain of \gls{fd} over \gls{hd} does not change significantly. Furthermore, the number of channel realizations which result in a zero \gls{see} decreases for higher constant power consumptions. This is a new result, and it means that a higher constant power consumption is beneficial for the system's security.
In conclusion, the impact of the power amplifier efficiency as well as of the constant power consumption on the system's \gls{see} in case of statistical \gls{csi} is very similar to the deterministic case.
\subsection{Evaluation of the Impact of Self-Interference}
\label{sec:numerical_stat_csi_eval_kappa_beta_si}
In this subsection the impact of the \gls{si}, which mainly depends on the transmitter noise, the receiver distortion and the strength of Bob's \gls{si} channel, on the system's \gls{see} is evaluated.
\begin{figure}
\centering
\subfloat[Transmitter noise and receiver distortion]{\label{fig:numerical_stat_csi_kappa_beta}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - $\kappa_i = \beta_j = -60$ dB,
HD - $\max SEE$ - $\kappa_i = \beta_j = -60$ dB,
FD - $\max SEE$ - $\kappa_i = \beta_j = -40$ dB,
HD - $\max SEE$ - $\kappa_i = \beta_j = -40$ dB,
FD - $\max SEE$ - $\kappa_i = \beta_j = -20$ dB,
HD - $\max SEE$ - $\kappa_i = \beta_j = -20$ dB,
},
xmin=0,
ymin=0,
ymax=1,
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_kappa_beta_fd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_kappa_beta_hd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_kappa_beta_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_kappa_beta_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_kappa_beta_fd_h.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_kappa_beta_hd_h.dat};
\end{axis}
\end{tikzpicture}}
\hfill
\subfloat[Strength of Bob's \gls{si} channel]{\label{fig:numerical_stat_csi_si}
\begin{tikzpicture}[baseline,trim axis left,trim axis right]
\begin{axis}[myStandardCDF,
width=.39\textwidth,
legend style={font=\tiny},
legend style={cells={align=center}},
legend entries ={FD - $\max SEE$ - $\sigma_{\text{SI}}^2 = -20$ dB,
HD - $\max SEE$ - $\sigma_{\text{SI}}^2 = -20$ dB,
FD - $\max SEE$ - $\sigma_{\text{SI}}^2 = 0$ dB,
HD - $\max SEE$ - $\sigma_{\text{SI}}^2 = 0$ dB,
FD - $\max SEE$ - $\sigma_{\text{SI}}^2 = 20$ dB,
HD - $\max SEE$ - $\sigma_{\text{SI}}^2 = 20$ dB,
},
xmin=0,
ymin=0,
ymax=1,
ylabel={Probability},
xlabel={SEE $\left[ \text{secure bits / Hz / J} \right]$},
xlabel near ticks,
ylabel near ticks,
/pgf/number format/1000 sep={},
clip=false,
legend image post style={xscale=1},
/tikz/plot label/.style={black, anchor=west},
legend style={at={(0.5,-0.3)},anchor=north},
font=\footnotesize]
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_si_fd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_si_hd_l.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_si_fd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_si_hd_m.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_si_fd_h.dat};
\addplot table[x index=0,y index=1] {data/eCDF_maxsee_si_hd_h.dat};
\end{axis}
\end{tikzpicture}}
\caption{Evaluation of the impact of the transmitter noise, the receiver distortion, and the strength of Bob's \gls{si} channel on the system's \gls{see} for the \gls{mimome} system model in case of statistical \gls{csi} of all eavesdropper channels.}
\label{fig:numerical_stat_csi_eval_kappa_beta_si}
\end{figure}
The \gls{cdf} for three different values of $\kappa_i$ and $\beta_j$ is characterized in \cref{fig:numerical_stat_csi_kappa_beta}. As throughout this work, both parameters are assumed to be equal, i.e., $\kappa_i = \beta_j$. Two observations can be made from this evaluation: firstly, the gain of \gls{fd} over \gls{hd} diminishes with increasing distortion, which is expected. Secondly, the number of channel realizations with a zero \gls{see} drops with decreasing distortions for both \gls{fd} and \gls{hd}. Therefore, from a security point of view, it is highly desirable to have low transmitter and receiver distortions. Additionally, it can be observed that the \gls{see} of \gls{hd} increases for large distortions, which is also observed for full \gls{csi}.
\Cref{fig:numerical_stat_csi_si} illustrates the \gls{cdf} for three different values of the strength of Bob's \gls{si} channel. As expected, the strength of the \gls{si} channel has no impact on the \gls{hd} system model. Furthermore, the performance of \gls{fd} drops with increasing \gls{si} strength. However, the performance of \gls{fd} should not drop below the \gls{hd} performance; hence, the gap between them for $\sigma_{\text{SI}}^2 = 20$~dB should not exist and must stem from imperfections in the numerical evaluation.
\section{Introduction} \label{sec:into}
\input{./Sections/sec_into} \vspace{-6mm}
\section{System Model}\label{sec:model}
\input{./Sections/sec_systemmodel} \vspace{-6mm}
\section{Secrecy Energy Efficiency Maximization} \label{sec_SEE_max}
\input{./Sections/sec_SEE_max} \vspace{-6mm}
\section{Secure Bidirectional Communication: Joint Full-Duplex Operation at Alice and Bob} \label{sec_SEE_max_BD}
\input{./Sections/sec_SEE_max_BD} \vspace{-6mm}
\section{Secrecy Energy Efficiency Maximization with Statistical CSI} \label{sec_SSUM}
\input{./Sections/sec_SSUM} \vspace{-6mm}
\section{Simulation Results}\label{sec:simulations}
\input{./Sections/sec_simulations} \vspace{-6mm}
\section{Conclusion} \label{sec:conclusion}
\input{./Sections/sec_conclusion} \vspace{-6mm}
\appendices
\input{./Sections/Appendix}
\vspace{-7mm}
\revOmid{\section{Dinkelbach's algorithm} \label{appendix_dinkelbach}
Let $f:\mathbb{R}^n \rightarrow \mathbb{R}$ and $g:\mathbb{R}^n \rightarrow \mathbb{R}^+$ be, respectively, a concave differentiable and a convex differentiable function. Moreover, let $\mathcal{X}$ be a convex compact set in $\mathbb{R}^n$. Then, the optimization problem
\begin{align}
\underset{ \ma{x} }{\text{max}} \;\; {f(\ma{x})}/{g(\ma{x})}\;\; \text{s.t.}\;\; \ma{x} \in \mathcal{X} \label{P_CCFP}
\end{align}
represents the class of concave-over-convex fractional programs; see \cite{zappone2015energy,rev_1_1, rev_1_3} for a wide range of applications and related methods.
\begin{lemma} \label{Dinkelbach_aux}
\cite[Section~2]{dinkelbach1967nonlinear} Consider the real-valued auxiliary function
\begin{align} \label{dinkelbach_auxilliary_func}
\gamma (\lambda) := \underset{ \ma{x} \in \mathcal{X} }{\text{max}} f(\ma{x}) - \lambda g(\ma{x}).
\end{align}
Then, $\gamma(\lambda)$ is strictly monotonically decreasing and convex over $\lambda$. Moreover, $\ma{x}^\star \in \mathcal{X}$ is a globally optimum solution to (\ref{P_CCFP}) iff $\ma{x}^\star = \underset{ \ma{x} \in \mathcal{X} }{\text{arg max}} f(\ma{x}) - \lambda^\star g(\ma{x})$ with $\lambda^\star$ as the unique zero of $\gamma(\lambda)$.
\end{lemma}
The purpose of Dinkelbach's algorithm \cite{dinkelbach1967nonlinear} is to obtain the unique zero of $\gamma(\lambda)$, and thereby the global optimum of the fractional problem (\ref{P_CCFP}). This is implemented by iteratively updating $\lambda$ such that $\gamma(\lambda)$ decreases towards its unique zero, where each evaluation of $\gamma(\lambda)$ is a standard convex problem; see Algorithm~\ref{alg_Dinkelbach} for the detailed procedure.
\vspace{-3mm}\begin{algorithm}[H]
{\tiny{ \revOmid{\begin{algorithmic}[1]
\State{$ \lambda = 0, \epsilon > 0, \eta > \epsilon;$} \Comment{initialization}
\Repeat
\State{$ \ma{x}^\star \leftarrow \underset{ \ma{x} \in \mathcal{X} }{\text{arg max}} f(\ma{x}) - \lambda g(\ma{x}) $}
\State{$ \eta \leftarrow f(\ma{x}^\star) - \lambda g(\ma{x}^\star) $}
\State{$ \lambda \leftarrow f(\ma{x}^\star)/g(\ma{x}^\star) $}
\Until{$ \eta \leq \epsilon $}
\State{\Return$ \left\{\lambda^\star = \lambda, \ma{x}^\star\right\} $}
\end{algorithmic} }}}
\caption{\scriptsize{Dinkelbach's Algorithm~\cite{dinkelbach1967nonlinear}} } \label{alg_Dinkelbach}
\end{algorithm} }
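For concreteness, the following minimal Python sketch runs Algorithm~\ref{alg_Dinkelbach} on a scalar toy instance; the functions $f$ and $g$ and the grid-search inner solver are illustrative placeholders and not part of the system model.
\begin{verbatim}
import numpy as np

# Toy concave-over-convex fractional program:
# maximize f(x)/g(x) over x in [0, 10], with
# f(x) = log(1 + x) (concave), g(x) = 1 + 0.5 x (convex).
f = lambda x: np.log(1.0 + x)
g = lambda x: 1.0 + 0.5 * x
grid = np.linspace(0.0, 10.0, 100001)  # discretized feasible set X

lam, eps, eta = 0.0, 1e-9, np.inf
while eta > eps:
    vals = f(grid) - lam * g(grid)     # auxiliary objective gamma(lam)
    x_star = grid[np.argmax(vals)]     # inner maximization (grid search)
    eta = f(x_star) - lam * g(x_star)  # decreases monotonically to zero
    lam = f(x_star) / g(x_star)        # Dinkelbach update of lambda

print(x_star, lam)  # approximate global maximizer and optimal ratio
\end{verbatim}
On a finite grid the iteration reaches a fixed point after a few updates, at which point $\eta$ equals zero exactly and the loop terminates.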
\section{SUIAP initialization}
\subsection{The choice of $\ma{F}$ and $\ma{G}$} \label{appendix_init_spatialadjustent}
As mentioned, the role of $\ma{F}$ ($\ma{G}$) is to direct (suppress) the transmission into the desired (undesired) direction. Hence, for the design of $\ma{Q}_a$, i.e., data transmission from Alice, we choose $\ma{F} \leftarrow \ma{H}_{ab}$, and $\ma{G} \leftarrow \ma{H}_{ae}$. Conversely, for the design of $\ma{W}_a$ we choose $\ma{F} \leftarrow \ma{H}_{ae}$, and $\ma{G} \leftarrow \ma{H}_{ab}$. For the design of $\ma{W}_b$ we set $\ma{F} \leftarrow \ma{H}_{be}$. However, the choice of $\ma{G}$ should include the impact of distortion terms on Bob, reflecting the effect of residual self-interference. The distortion power at Bob can be written as
\begin{align}
& \text{tr}\Big( \kappa \ma{H}_{bb} \text{diag}\left(\ma{W}_b\right) \ma{H}_{bb}^H \Big) + \text{tr}\Big( \beta \text{diag}\left( \ma{H}_{bb} \ma{W}_b \ma{H}_{bb}^H \right) \Big) \nonumber \\
& = \text{tr}\bigg( \Big( \underbrace{ \kappa \text{diag}\left( \ma{H}_{bb}^H\ma{H}_{bb} \right) + \beta \ma{H}_{bb}^H\ma{H}_{bb} }_{\tilde{\ma{H}}_{bb}} \Big) \ma{W}_b \bigg), \nonumber
\end{align}
which consequently results in the choice of $\ma{G} \leftarrow \left( {\tilde{\ma{H}}_{bb}} \right)^{\frac{1}{2}}$.
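This identity is straightforward to verify numerically; the following short Python sketch, with randomly drawn placeholder matrices, evaluates both sides:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, kappa, beta = 4, 1e-3, 1e-3

# Random SI channel H_bb and a random PSD covariance W_b (placeholders).
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = A @ A.conj().T

lhs = (kappa * np.trace(H @ np.diag(np.diag(W)) @ H.conj().T)
       + beta * np.trace(np.diag(np.diag(H @ W @ H.conj().T))))
H_tilde = kappa * np.diag(np.diag(H.conj().T @ H)) + beta * H.conj().T @ H
rhs = np.trace(H_tilde @ W)
assert np.isclose(lhs, rhs)  # both distortion-power expressions agree
\end{verbatim}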
\subsection{Power adjustment} \label{appendix_init_powAdj}
Using the obtained spatial beams, i.e., the normalized covariance matrices, the optimal power adjustment for each transmission is sought to maximize the resulting SEE. In each case, by fixing the power of the other transmissions, the resulting $\text{SEE}_p$ is written as
\begin{align} \label{appendix_SEE_pp}
\text{SEE}_p \left( p_\mathcal{X} \right) = \frac{\text{log} \left( \frac{ \alpha_{11}^{\mathcal{X}} p_\mathcal{X}^2 + \alpha_{12}^{\mathcal{X}} p_\mathcal{X} + \alpha_{13}^{\mathcal{X}} }{ \alpha_{21}^{\mathcal{X}} p_\mathcal{X}^2 + \alpha_{22}^{\mathcal{X}} p_\mathcal{X} + \alpha_{23}^{\mathcal{X}} } \right) }{ \gamma_1^{\mathcal{X}} p_\mathcal{X} + \gamma_2^{\mathcal{X}}},
\end{align}
where $p_\mathcal{X}$, $\mathcal{X} \in \left\{ \ma{Q}_a, \ma{W}_a, \ma{W}_b \right\}$, represents the power associated with the corresponding transmission. It is observed from (\ref{appendix_SEE_pp}) that $\text{SEE}_p \rightarrow 0$ for $p_\mathcal{X} \rightarrow \infty$, and that $\text{SEE}_p \left( 0 \right)$ is a finite and non-negative value. Moreover, $\text{SEE}_p \left( p_\mathcal{X} \right)$ is continuous and differentiable in the region $p_\mathcal{X} \in [0, \infty)$. Hence, the optimal $p_\mathcal{X}$ is located either at the problem boundaries or at a point where the derivative of $\text{SEE}_p \left( p_\mathcal{X} \right)$ vanishes; see Section~\ref{appendix_power_efficient_implementation} for an efficient numerical solution.
\subsection{Efficient implementation} \label{appendix_power_efficient_implementation}
For ease of notation, the objective (\ref{appendix_SEE_pp}) is denoted as $f(p_\mathcal{X})$ and its numerator is defined as $g(p_\mathcal{X})$:
\begin{align}
\label{eq:obj_f}
f(p_\mathcal{X}) &{}= \frac{ g(p_\mathcal{X}) }{\gamma_1^{\mathcal{X}} \, p_\mathcal{X} + \gamma_2^{\mathcal{X}}}\\
\label{eq:obj_num_g}
g(p_\mathcal{X}) &{}= \log \left( \frac{\alpha_{11}^{\mathcal{X}} \, p_\mathcal{X}^2 + \alpha_{12}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{13}^{\mathcal{X}}}{\alpha_{21}^{\mathcal{X}} \, p_\mathcal{X}^2 + \alpha_{22}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{23}^{\mathcal{X}}} \right).
\end{align}
The goal is to find the maximum feasible value $\lambda^*$ of the objective function $f(p_\mathcal{X})$ as well as $p_\mathcal{X}^*$ for which $\lambda^*$ is achieved. $\lambda^*$ is defined as follows:
\begin{equation}
\lambda^* = \max_{0 \leq p_\mathcal{X} \leq c} \frac{ g(p_\mathcal{X}) }{\gamma_1^{\mathcal{X}} \, p_\mathcal{X} + \gamma_2^{\mathcal{X}}}.
\end{equation}
It can be seen that the objective $f(p_\mathcal{X})$ is continuous and differentiable. Because $p_\mathcal{X}$ is bounded on the closed interval $\left[ 0, c \right]$, it follows from the extreme value theorem that the objective attains a maximum $\lambda^*$ on this interval. The optimum $p_\mathcal{X}$, i.e., $p_\mathcal{X}^*$, is located either on the boundaries of the closed interval or at a point satisfying
\begin{equation}
\label{eq:cond_stationary_point}
\frac{\partial}{\partial p_\mathcal{X}^*} \frac{ g(p_\mathcal{X}^*) }{\gamma_1^{\mathcal{X}} \, p_\mathcal{X}^* + \gamma_2^{\mathcal{X}}} = 0.
\end{equation}
Using the quotient rule, (\ref{eq:cond_stationary_point}) can be rewritten as
\begin{align}
\label{eq:cond2}
&\frac{ g'(p_\mathcal{X}^*) \left( \gamma_1^{\mathcal{X}} \, p_\mathcal{X}^* + \gamma_2^{\mathcal{X}} \right) - g(p_\mathcal{X}^*) \, \gamma_1^{\mathcal{X}}}{ \left( \gamma_1^{\mathcal{X}} \, p_\mathcal{X}^* + \gamma_2^{\mathcal{X}} \right)^2 } = 0\\
&\Leftrightarrow g'(p_\mathcal{X}^*) \left( \gamma_1^{\mathcal{X}} \, p_\mathcal{X}^* + \gamma_2^{\mathcal{X}} \right) - g(p_\mathcal{X}^*) \, \gamma_1^{\mathcal{X}} = 0\\
\label{eq:optimality_condition}
&\Leftrightarrow \frac{g'(p_\mathcal{X}^*)}{\gamma_1^{\mathcal{X}}} = \frac{ g(p_\mathcal{X}^*) }{\gamma_1^{\mathcal{X}} \, p_\mathcal{X}^* + \gamma_2^{\mathcal{X}}} = \lambda,
\end{align}
where $g'(p_\mathcal{X})$ denotes the derivative of $g(p_\mathcal{X})$ with respect to $p_\mathcal{X}$, given by
\begin{equation}
\label{eq:g_diff_def}
g'(p_\mathcal{X}) {}= \frac{2 \, \alpha_{11}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{12}^{\mathcal{X}} }{ \alpha_{11}^{\mathcal{X}} \, p_\mathcal{X}^2 + \alpha_{12}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{13}^{\mathcal{X}}} - \frac{2 \, \alpha_{21}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{22}^{\mathcal{X}} }{ \alpha_{21}^{\mathcal{X}} \, p_\mathcal{X}^2 + \alpha_{22}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{23}^{\mathcal{X}}}.
\end{equation}
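As a sanity check, (\ref{eq:g_diff_def}) can be confirmed symbolically; a short sympy sketch over generic positive symbols reads:
\begin{verbatim}
import sympy as sp

p, a11, a12, a13, a21, a22, a23 = sp.symbols(
    'p a11 a12 a13 a21 a22 a23', positive=True)
g = sp.log((a11*p**2 + a12*p + a13) / (a21*p**2 + a22*p + a23))
gp = ((2*a11*p + a12) / (a11*p**2 + a12*p + a13)
      - (2*a21*p + a22) / (a21*p**2 + a22*p + a23))
assert sp.simplify(sp.diff(g, p) - gp) == 0  # matches the closed form
\end{verbatim}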
Our goal is to convert the maximization problem into a simpler feasibility problem: for a given value $\lambda$ of the objective function, check whether a feasible $p_\mathcal{X}$ exists. To this end, equation (\ref{eq:optimality_condition}) is rewritten as
\begin{equation}
\label{eq:optimality_condition_rewritten}
\lambda \, \gamma_1^{\mathcal{X}} - \frac{2 \, \alpha_{11}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{12}^{\mathcal{X}} }{ \alpha_{11}^{\mathcal{X}} \, p_\mathcal{X}^2 + \alpha_{12}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{13}^{\mathcal{X}}} + \frac{2 \, \alpha_{21}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{22}^{\mathcal{X}} }{ \alpha_{21}^{\mathcal{X}} \, p_\mathcal{X}^2 + \alpha_{22}^{\mathcal{X}} \, p_\mathcal{X} + \alpha_{23}^{\mathcal{X}}} = 0.
\end{equation}
Clearing the denominators in (\ref{eq:optimality_condition_rewritten}) yields a fourth-order polynomial equation, which can be written as
\begin{equation}
\label{eq:polynomial}
c_4 \, p_\mathcal{X}^4 + c_3 \, p_\mathcal{X}^3 + c_2 \, p_\mathcal{X}^2 + c_1 \, p_\mathcal{X} + c_0 = 0,
\end{equation}
with
\begin{align*}
c_4 &{}= \alpha_{11}^{\mathcal{X}} \, \alpha_{21}^{\mathcal{X}} \, \gamma_1^{\mathcal{X}} \lambda\\
c_3 &{}= \left( \alpha_{12}^{\mathcal{X}} \, \alpha_{21}^{\mathcal{X}} + \alpha_{11}^{\mathcal{X}} \, \alpha_{22}^{\mathcal{X}} \right) \gamma_1^{\mathcal{X}} \lambda\\
c_2 &{}= \left( \alpha_{13}^{\mathcal{X}} \, \alpha_{21}^{\mathcal{X}} + \alpha_{12}^{\mathcal{X}} \, \alpha_{22}^{\mathcal{X}} + \alpha_{11}^{\mathcal{X}} \, \alpha_{23}^{\mathcal{X}} \right) \gamma_1^{\mathcal{X}} \lambda - \alpha_{11}^{\mathcal{X}} \, \alpha_{22}^{\mathcal{X}} + \alpha_{12}^{\mathcal{X}} \, \alpha_{21}^{\mathcal{X}}\\
c_1 &{}= \left( \alpha_{13}^{\mathcal{X}} \, \alpha_{22}^{\mathcal{X}} + \alpha_{12}^{\mathcal{X}} \, \alpha_{23}^{\mathcal{X}} \right) \gamma_1^{\mathcal{X}} \lambda + 2 \, \alpha_{13}^{\mathcal{X}} \, \alpha_{21}^{\mathcal{X}} - 2 \, \alpha_{11}^{\mathcal{X}} \, \alpha_{23}^{\mathcal{X}}\\
c_0 &{}= \alpha_{13}^{\mathcal{X}} \, \alpha_{23}^{\mathcal{X}} \, \gamma_1^{\mathcal{X}} \lambda - \alpha_{12}^{\mathcal{X}} \, \alpha_{23}^{\mathcal{X}} + \alpha_{13}^{\mathcal{X}} \, \alpha_{22}^{\mathcal{X}}.
\end{align*}
Let the roots of (\ref{eq:polynomial}) be denoted as $p_\mathcal{X}^i$, $i \in \{1,\ldots,4\}$. If there is any real $p_\mathcal{X}^i \in \left[ 0, c \right]$ for which
\begin{equation}
\label{eq:check_if_feasible}
f(p_\mathcal{X}^i) \geq \lambda
\end{equation}
holds, then $\lambda$ is feasible. As a result, it is possible to construct the following bi-section algorithm to find the maximum $\lambda$, denoted as $\lambda^*$.
Firstly, a closed interval for $\lambda^*$ is defined, i.e., $\lambda^* \in \left[ \lambda_{\min} , \lambda_{\max} \right]$, where $\lambda_{\min}$ can be chosen as
\begin{equation}
\label{eq:def_lambda_min}
\lambda_{\min} = \min \left\lbrace f(0), f(c) \right\rbrace.
\end{equation}
Moreover, $\lambda_{\max}$ can be chosen as an upper bound on $\lambda$:
\begin{equation}
\label{eq:lambda_max}
\lambda_{\max} = \frac{\log \left( \max \left\lbrace \alpha_{11}^{\mathcal{X}}/\alpha_{21}^{\mathcal{X}} , \alpha_{12}^{\mathcal{X}}/\alpha_{22}^{\mathcal{X}}, \alpha_{13}^{\mathcal{X}}/\alpha_{23}^{\mathcal{X}} \right\rbrace \right) }{ \gamma_2^{\mathcal{X}}}.
\end{equation}
The algorithm finds $\lambda^*$ up to some tolerance $\epsilon > 0$. In the first iteration, it is verified whether $\lambda^1 = \frac{\lambda_{\min} + \lambda_{\max}}{2}$ is feasible. To this end, the roots $p_\mathcal{X}^i$, $i \in \{1,\ldots,4\}$ of (\ref{eq:polynomial}) are calculated for $\lambda = \lambda^1$. If any $p_\mathcal{X}^i$ is real, lies in the interval $\left[ 0, c \right]$, and satisfies $f(p_\mathcal{X}^i) \geq \lambda^1$, then $\lambda^1$ is feasible and the procedure is repeated on the interval $\left[ \lambda^1, \lambda_{\max} \right]$ with $\lambda^2 = \frac{\lambda^1 + \lambda_{\max}}{2}$. Otherwise, $\lambda^1$ is infeasible and the procedure is repeated on $\left[ \lambda_{\min}, \lambda^1 \right]$ with $\lambda^2 = \frac{\lambda_{\min} + \lambda^1}{2}$. By construction, this algorithm numerically approximates the maximum feasible objective value $\lambda^*$ to arbitrary precision $\epsilon$. The procedure is formally given in Algorithm~\ref{alg:bi-section_maximization}.
\begin{algorithm}[H]
{\scriptsize{\begin{algorithmic}[1]
\State{\textbf{Input: }$\lambda_{\min}$, $\lambda_{\max}$, $c$;}
\State{$\epsilon > 0$, $\ell = 0$, $a = \lambda_{\min}$, $b = \lambda_{\max}$;}
\Repeat
\State{$\ell = \ell+1$;}
\State{$\text{isFeasible} =$ \textbf{false};}
\State{$\lambda^\ell = \frac{a+b}{2}$;}
\State{Calculate roots $p_\mathcal{X}^{i,\ell}$, $i \in \{1,\ldots,4\}$ of (\ref{eq:polynomial}) for $\lambda^\ell$.}
\ForAll{$i \in \left\lbrace 1, \ldots, 4 \right\rbrace$}
\If{ $\mathrm{Im}\left\lbrace p_\mathcal{X}^{i,\ell} \right\rbrace = 0$ \textbf{and} $p_\mathcal{X}^{i,\ell} \in \left[0, c\right]$ \textbf{and} $f\left(p_\mathcal{X}^{i,\ell}\right) \geq \lambda^\ell$}
\State{$\text{isFeasible} =$ \textbf{true};}
\State{$p_\mathcal{X} = p_\mathcal{X}^{i,\ell}$;}
\State{\textbf{break};}
\EndIf
\EndFor
\If{ $\text{isFeasible}$ }
\State{$a = \lambda^\ell$;}
\Else
\State{$b = \lambda^\ell$;}
\EndIf
\Until{$\text{isFeasible}$ \textbf{and} $\frac{b-a}{2} < \epsilon$}
\State{\Return{$p_\mathcal{X}^* = p_\mathcal{X}$, $\lambda^* = \lambda^\ell$;}}
\end{algorithmic}} }
\caption{{Bi-Section Power Allocation} } \label{alg:bi-section_maximization}
\end{algorithm}
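For illustration, Algorithm~\ref{alg:bi-section_maximization} can be prototyped in a few lines of Python; the coefficients below are arbitrary placeholders rather than values derived from the system model, and an explicit boundary check is added for robustness:
\begin{verbatim}
import numpy as np

a = np.array([[2.0, 3.0, 1.0],   # alpha_11, alpha_12, alpha_13
              [1.0, 1.0, 1.0]])  # alpha_21, alpha_22, alpha_23
g1, g2, c = 1.0, 0.5, 10.0       # gamma_1, gamma_2, power budget

num = lambda p: a[0, 0]*p**2 + a[0, 1]*p + a[0, 2]
den = lambda p: a[1, 0]*p**2 + a[1, 1]*p + a[1, 2]
f = lambda p: np.log(num(p) / den(p)) / (g1*p + g2)

def feasible(lam):
    """Return some p in [0, c] with f(p) >= lam, or None."""
    c4 = a[0, 0]*a[1, 0]*g1*lam
    c3 = (a[0, 1]*a[1, 0] + a[0, 0]*a[1, 1])*g1*lam
    c2 = ((a[0, 2]*a[1, 0] + a[0, 1]*a[1, 1] + a[0, 0]*a[1, 2])*g1*lam
          - a[0, 0]*a[1, 1] + a[0, 1]*a[1, 0])
    c1 = ((a[0, 2]*a[1, 1] + a[0, 1]*a[1, 2])*g1*lam
          + 2*a[0, 2]*a[1, 0] - 2*a[0, 0]*a[1, 2])
    c0 = a[0, 2]*a[1, 2]*g1*lam - a[0, 1]*a[1, 2] + a[0, 2]*a[1, 1]
    for p in np.roots([c4, c3, c2, c1, c0]):  # stationary candidates
        if abs(p.imag) < 1e-9 and 0.0 <= p.real <= c and f(p.real) >= lam:
            return p.real
    for p in (0.0, c):                        # boundary candidates
        if f(p) >= lam:
            return p
    return None

lo, hi, p_opt = min(f(0.0), f(c)), 5.0, None  # hi: crude upper bound
while hi - lo > 1e-8:
    lam = 0.5 * (lo + hi)
    p = feasible(lam)
    if p is None:
        hi = lam
    else:
        lo, p_opt = lam, p
print(p_opt, lo)  # approximate optimal power and objective value
\end{verbatim}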
\MinorRR{
\vspace{-8mm}\section{Proof of Lemma~\ref{lemma_BD_Positive_C}} \label{appendix_lemma_BD_Positive_C}
Let $\mathcal{A}_1:=\left( \ma{Q}_a^{\star},\ma{Q}_b^{\star},\ma{W}_a^{\star}, \ma{W}_b^{\star} \right)$ be a KKT solution for (36). Moreover, let ${\tilde{C}^{\text{BD}}_{ab}} \left( \mathcal{A}_1 \right) < 0$, without loss of generality\footnote{The case assuming ${\tilde{C}^{\text{BD}}_{ba}} \left( \mathcal{A}_1 \right) < 0$ can be argued similarly. Moreover, since KKT conditions are also necessary conditions for any globally optimum solution to (36), due to the differentiable objective with linear constraints, the given proof in this part also subsumes the case when $\mathcal{A}_1$ is a globally optimum solution.}. The proof is obtained via contradiction as follows.
The Lagrangian function, corresponding to the problem (36) is formulated as
\begin{align} \label{}
\mathcal{L} \Big( \ma{Q}_a,\ma{Q}_b, & \ma{W}_a, \ma{W}_b , \overbar{\ma{Q}_a}, \overbar{\ma{Q}_b},\overbar{\ma{W}_a},\overbar{\ma{W}_b}, \tau_a, \tau_b \Big) = - \text{SEE}_p^{\text{BD}} + \tau_a \left( P^{\text{BD}}_{A} - P_{A,\text{max}}\right) + \tau_b \left( P^{\text{BD}}_{B} - P_{B,\text{max}}\right) \nonumber \\
& - \text{tr}\left( \overbar{\ma{Q}_a} \ma{Q}_a \right) - \text{tr}\left( \overbar{\ma{Q}_b} \ma{Q}_b\right) - \text{tr}\left( \overbar{\ma{W}_a} \ma{W}_a\right) - \text{tr}\left( \overbar{\ma{W}_b} \ma{W}_b\right), \nonumber
\end{align}
where $\tau_a,\tau_b \geq 0$ are slack variables associated with the power constraints, whereas $\overbar{\ma{Q}_a},\overbar{\ma{Q}_b},$ $\overbar{\ma{W}_a},\overbar{\ma{W}_b} \in \mathcal{H}$ are slack variables for dualizing the semidefinite constraints. Since $\mathcal{A}_1$ is a KKT solution, the directional derivative of the Lagrangian function must vanish for any direction at the point $\mathcal{A}_1$. In order to utilize this property, we observe the behavior of the Lagrangian function when $\mathcal{A}_1$ moves over the directions $d \left(\ma{X} \right)$, such that
{{ \begin{align}
d \left(\ma{X} \right) := \left( - \ma{U}_q \ma{X} \ma{U}_q^H, \ma{0}_{N_B \times N_B}, \ma{U}_q \ma{X} \ma{U}_q^H, \ma{0}_{N_B \times N_B} \right),\;\; \forall \ma{X} \succeq {0}.
\end{align} }}
\hspace{-2mm}In the above definition, $\ma{Q}_a^{\star} = \ma{U}_q\ma{\Lambda}_q\ma{U}_q^H$, with $\ma{\Lambda}_q \in \compl^{r_q \times r_q}$, is the economy-size singular value decomposition\footnote{This choice ensures that the signal space of the movement $\ma{U}_q \ma{X} \ma{U}_q^H$ remains within the space of $\ma{Q}_a^{\star}$, where $r_q$ represents the rank of $\ma{Q}_a^{\star}$. When $\ma{Q}_a^{\star}$ is not rank-deficient, $\ma{U}_q$ can be simply chosen as an identity matrix.}. Now, let $\nabla_{d} f \left(x\right)$ represent the directional derivative of a function $f$ at point $x$ and for the direction $d$. Then, we have
{{ \begin{align}
\nabla_{d \left(\ma{X} \right)} \mathcal{L}\left(\mathcal{A}_1\right) & \overset{(a)}{=} {0} , \;\;\;\;\;\; \forall \ma{X} \succeq {0}, \nonumber \\
{\Rightarrow} \;\;\; \nabla_{d \left(\ma{X} \right)} - \left(\tilde{C}^{\text{BD}}_{ab} \left(\mathcal{A}_1\right) + \tilde{C}^{\text{BD}}_{ba} \left(\mathcal{A}_1\right) \right) & \overset{(b)}{\geq} {0}, \;\;\;\;\;\; \forall \ma{X} \succeq {0}, \nonumber \\
{\Rightarrow} \;\;\; \nabla_{d \left(\ma{X} \right)} \text{log}\left| \ma{\Sigma}_{b}^{\text{BD}} \left(\mathcal{A}_1 \right) \right| - \nabla_{d \left(\ma{X} \right)}\text{log}\left| \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right) \right| \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; &
\nonumber \\ - \underbrace{ \nabla_{d \left(\ma{X} \right)} \left( \text{log}\left| \ma{H}_{ab} \ma{Q}_a \ma{H}_{ab}^H + \ma{\Sigma}_{b}^{\text{BD}} \right| - \text{log}\left| \ma{H}_{ae} \ma{Q}_a \ma{H}_{ae}^H + \ma{\Sigma}_{e-a}^{\text{BD}} \right| \right) }_{= 0} & \overset{(c)}{\geq} {0}, \;\;\;\;\; \forall \ma{X} \succeq {0}, \nonumber \\
{\Rightarrow} \;\;\; \nabla_{d \left(\ma{X} \right)} \text{log}\left| \ma{\Sigma}_{b}^{\text{BD}} \left(\mathcal{A}_1 \right) \right| - \nabla_{d \left(\ma{X} \right)} \text{log}\left| \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right) \right| & \; {\geq} \; 0, \;\;\;\;\;\; \forall \ma{X} \succeq {0}, \nonumber \\
{\Rightarrow} \;\;\; \text{tr}\left( \ma{U}_q^H \ma{H}_{ab}^H \left( \ma{\Sigma}_b^{\text{BD}} \left(\mathcal{A}_1 \right) \right)^{-1} \ma{H}_{ab} \ma{U}_q \ma{X}\right) - \text{tr}\left( \ma{U}_q^H \ma{H}_{ae}^H \left( \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right)\right)^{-1} \ma{H}_{ae} \ma{U}_q \ma{X}\right) &\overset{(d)}{\geq} 0, \;\;\;\;\; \forall \ma{X} \succeq {0}, \nonumber \\
{\Rightarrow} \;\;\; \ma{U}_q^H \ma{H}_{ab}^H \left( \ma{\Sigma}_b^{\text{BD}} \left(\mathcal{A}_1 \right)\right)^{-1} \ma{H}_{ab} \ma{U}_q \;\; {\succeq} \ma{U}_q^H \ma{H}_{ae}^H \hspace{-1mm} \left( \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right) \right)^{\hspace{-1mm}-1} \hspace{-1mm}\ma{H}_{ae} \ma{U}_q & . \label{Lemma_SemiDefiniteInequality}
\end{align} }}
\hspace{-2mm}In the above statements, ($a$) holds as $\mathcal{A}_1$ satisfies the KKT conditions. ($b$) follows from the fact that $\nabla_{d \left(\ma{X} \right)} P_{\text{tot}} = 0$, together with the complementary slackness condition, leading to $\nabla_{d \left(\ma{X} \right)} \text{tr}\left( \overbar{\ma{Q}_a} \ma{Q}_a\right) = 0$, and $\nabla_{d \left(\ma{X} \right)} \text{tr}\left( \overbar{\ma{W}_a} \ma{W}_a\right) \geq 0$. ($c$) is obtained by recalling (\ref{C_BD_ab}),~(\ref{C_BD_ba}) and observing the fact that $ \nabla_{d \left(\ma{X} \right)} \tilde{C}^{\text{BD}}_{ba} \left(\mathcal{A}_1\right) =0$, for the case that $\rho = 1$, and $\nabla_{d \left(\ma{X} \right)} \tilde{C}^{\text{BD}}_{ba} \left(\mathcal{A}_1\right) \geq 0$, when $\rho = 0$. Finally, ($d$) follows from the known identities $\text{tr}\left(\ma{A}\ma{B}\right)=\text{tr}\left(\ma{B}\ma{A}\right)$ and $ \partial \text{log}\left| \ma{A} \right| = \text{tr}\left(\ma{A}^{-1} \partial \ma{A} \right)$.
Now, recalling the initial assumption ${\tilde{C}^{\text{BD}}_{ab}} \left( \mathcal{A}_1 \right) < 0$ yields
{{\begin{align} \label{lemma_last_contradiction}
\frac{\left| \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right)\right|}{\left| \ma{\Sigma}_b^{\text{BD}} \left(\mathcal{A}_1 \right) \right|} \overset{(e)}{<} \frac{\left| \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right) + \ma{H}_{ae} \ma{Q}_a^{\star} \ma{H}_{ae}^H \right|}{\left| \ma{\Sigma}_b^{\text{BD}} \left(\mathcal{A}_1 \right) + \ma{H}_{ab} \ma{Q}_a^{\star} \ma{H}_{ab}^H \right|} & \nonumber \\
& \hspace{-62mm} \overset{(f)}{=} \frac{\left| \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right)\right|}{\left| \ma{\Sigma}_b^{\text{BD}} \left(\mathcal{A}_1 \right) \right|} \times \underbrace{ \left( \frac{\left| \ma{I}+ \ma{U}_q^H \ma{H}_{ae}^H\left( \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right) \right)^{-1} \ma{H}_{ae} \ma{U}_q \ma{\Lambda}_q \right|}{\left| \ma{I} + \ma{U}_q^H \ma{H}_{ab}^H \left( \ma{\Sigma}_b^{\text{BD}} \left(\mathcal{A}_1 \right)\right)^{-1}\ma{H}_{ab} \ma{U}_q \ma{\Lambda}_q \right|} \right) }_{\leq 1} \leq \frac{\left| \ma{\Sigma}_{e-a}^{\text{BD}} \left(\mathcal{A}_1 \right)\right|}{\left| \ma{\Sigma}_b^{\text{BD}} \left(\mathcal{A}_1 \right) \right|},
\end{align} }}
\hspace{-2mm}which leads to a contradiction. In the above inequalities, ($e$) is obtained by incorporating (\ref{C_BD_ab}) in the inequality ${\tilde{C}^{\text{BD}}_{ab}} \left( \mathcal{A}_1 \right) < 0$ and ($f$) is obtained by recalling (\ref{Lemma_SemiDefiniteInequality}) and employing the matrix identity $\left|\ma{I} + \ma{A}\ma{B}\right| = \left|\ma{I} + \ma{B}\ma{A} \right|$, and the fact that $\left|\ma{I} + \ma{A}\right| \geq \left|\ma{I} + \ma{B} \right|$ for any $\ma{A} \succeq \ma{B}$.}
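The two matrix identities invoked in step ($f$) are standard and can be spot-checked numerically, e.g., with the following short Python sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 4))
# Sylvester's determinant identity |I + AB| = |I + BA|:
assert np.isclose(np.linalg.det(np.eye(4) + A @ B),
                  np.linalg.det(np.eye(6) + B @ A))

# |I + X| >= |I + Y| whenever X >= Y >= 0 in the PSD order:
Y0 = rng.standard_normal((4, 4)); Y = Y0 @ Y0.T
D0 = rng.standard_normal((4, 4)); X = Y + D0 @ D0.T  # X - Y is PSD
assert np.linalg.det(np.eye(4) + X) >= np.linalg.det(np.eye(4) + Y)
\end{verbatim}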
\section{Proof of Lemma~\ref{lemma_SSUM_Convergence}} \label{appendix_Lemma_SSUM_convergence}
\subsection{Proof of tightness:}
Tightness is obtained by observing the equivalence
{{\begin{subequations} \label{appendix_SSUM_lemma_tigtness}
\begin{align}
& |\mathbb{F}_{{C}} | P_{\text{tot}}\left( \mathbb{Q}^\star \right) \text{SAA} \left( \mathbb{Q}^\star \right) = \sum_{i \in \mathbb{F}_C} \{ \tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) \}^+
\overset{(g)}{=} \sum_{i \in \mathbb{G}_{C_1}} \{ \tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) \}^+ + \sum_{i \in \mathbb{G}_{C_2^+}} \tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) \label{appendix_SSUM_lemma_tigtness_a} \\
& \overset{(h)}{=} \sum_{i \in \mathbb{G}_{C_1}} \{ \hat{C}_{\text{s},i} \left( \mathbb{Q}^\star, \mathbb{Q}^\star \right) \}^+ + \sum_{i \in \mathbb{G}_{C_2^+}} \hat{C}_{\text{s},i} \left( \mathbb{Q}^\star , \mathbb{Q}^\star \right)
= |\mathbb{G}_{{C}} | P_{\text{tot}}\left( \mathbb{Q}^\star \right) \text{SAA}_{LB} \left(\mathbb{Q}^\star, \mathbb{Q}^\star \right), \nonumber
\end{align}
\end{subequations} }}
where ($g$) is obtained by applying (\ref{SSUM_set_def_C_2_+}), and ($h$) from $\tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star\right) = \hat{C}_{\text{s},i} \left( \mathbb{Q}^\star, \mathbb{Q}^\star \right)$, see (\ref{eq_SEE_max_SEE_Taylor}). \vspace{-4mm}
\vspace{-5mm}\subsection{Proof of equal directional derivative:}
Let $C_{\text{s},i} := \{ \tilde{C}_{\text{s},i} \}^+$. The directional derivative of $\text{SAA}$ at $\mathbb{Q}^\star$ is then expressed as
{{\begin{subequations}\label{directional_derivative_lemma_ssum}
\begin{align}
& P_{\text{tot}}\left( \mathbb{Q}^\star \right) {\MinorRR{ \nabla_{d}}} \text{SAA} \left( \mathbb{Q}^\star \right) \nonumber \\
& {=} \Big( \sum_{i \in \mathbb{G}_{C_1}} {\MinorRR{ \nabla_{d}}} C_{\text{s},i} \left( \mathbb{Q}^\star \right) + \sum_{i \in \mathbb{G}_{C_2^+}} {\MinorRR{ \nabla_{d}}} \tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) \Big)/|\mathbb{G}_{{C}} | - {\MinorRR{ \nabla_{d}}} P_{\text{tot}} \left( \mathbb{Q}^\star \right) \text{SAA}\left( \mathbb{Q}^\star \right) \label{directional_derivative_lemma_ssum_a} \\
& = \Big( \sum_{i \in \mathbb{G}_{C_1^{(d)}}} {\MinorRR{ \nabla_{d}}} \hat{C}_{\text{s},i} \left(\mathbb{Q}^\star, \mathbb{Q}^\star \right) + \sum_{i \in \mathbb{G}_{C_2^+}} {\MinorRR{ \nabla_{d}}} \hat{C}_{\text{s},i} \left(\mathbb{Q}^\star, \mathbb{Q}^\star \right) \Big)/|\mathbb{G}_{{C}} | - {\MinorRR{ \nabla_{d}}} P_{\text{tot}} \left( \mathbb{Q}^\star \right) \text{SAA}_{LB}\left( \mathbb{Q}^\star, \mathbb{Q}^\star \right) \label{directional_derivative_lemma_ssum_b} \\
& = P_{\text{tot}}\left( \mathbb{Q}^\star \right) {\MinorRR{ \nabla_{d}}} \text{SAA}_{LB}\left( \mathbb{Q}^\star, \mathbb{Q}^\star \right), \label{directional_derivative_lemma_ssum_c}
\end{align}
\end{subequations} } }
where the set $\mathbb{G}_{C_1^{(d)}}$ is defined as
\begin{align}
\mathbb{G}_{C_1^{(d)}} := \left\{ \forall i \;\; \vert \;\; i \in {\mathbb{G}_{{C_1}}} \;\; \text{and} \;\; {\MinorRR{ \nabla_{d}}} C_{\text{s},i} \left( \mathbb{Q}^\star \right) \neq 0 \right\}.
\end{align}
In the above arguments, (\ref{directional_derivative_lemma_ssum_a}) is obtained by recalling (\ref{Eq_SSUM_SAA}), and the fact that ${C}_{\text{s},i} \left( \mathbb{Q}^\star\right)$ is positive and differentiable for any $i \in \mathbb{G}_{C_2^+}$. The identity (\ref{directional_derivative_lemma_ssum_b}) is obtained by considering the possible situations for ${\MinorRR{ \nabla_{d}}} C_{\text{s},i} \left( \mathbb{Q}^\star \right)$:
\begin{itemize}
\item $\tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) < 0 .$ Then, $C_{\text{s},i}$ is differentiable and ${\MinorRR{ \nabla_{d}}} C_{\text{s},i} \left( \mathbb{Q}^\star \right) = 0$ for any direction $d$.
\item $\tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) > 0 .$ Then, $C_{\text{s},i}$ is differentiable and ${\MinorRR{ \nabla_{d}}} C_{\text{s},i} \left( \mathbb{Q}^\star \right) = {\MinorRR{ \nabla_{d}}} \hat{C}_{\text{s},i} \left( \mathbb{Q}^\star, \mathbb{Q}^\star \right)$, $\forall d$.
\item $\tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) = 0 $ and ${\MinorRR{ \nabla_{d}}} \tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) > 0$. Then, $C_{\text{s},i}$ is not differentiable and ${\MinorRR{ \nabla_{d}}} C_{\text{s},i} \left( \mathbb{Q}^\star \right) = {\MinorRR{ \nabla_{d}}} \hat{C}_{\text{s},i} \left( \mathbb{Q}^\star, \mathbb{Q}^\star \right)$.
\item $\tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) = 0 $ and ${\MinorRR{ \nabla_{d}}} \tilde{C}_{\text{s},i} \left( \mathbb{Q}^\star \right) \leq 0$. Then, $C_{\text{s},i}$ is not differentiable and ${\MinorRR{ \nabla_{d}}} C_{\text{s},i} \left( \mathbb{Q}^\star \right) = 0$.
\end{itemize}
Finally, the identity (\ref{directional_derivative_lemma_ssum_c}) is obtained by recalling (\ref{SSUM_SAA_LB}), and the tightness property from (\ref{appendix_SSUM_lemma_tigtness}).
\subsection{Worst case CSI error} \label{WC_CSI}
It is beneficial to obtain the least favorable CSI error matrices, as they provide guidelines for future channel estimation strategies. For instance, this helps us to choose a channel training sequence that reduces the radius of the CSI error feasible regions in the most destructive directions. Moreover, such knowledge is a necessary step for cutting-set-based methods \cite{mutapcic2009cutting}, which aim to reduce the design complexity by iteratively identifying the most destructive error matrices and explicitly incorporating them into the future design steps. In the current setup, the worst-case channel error matrices are identified by maximizing the weighted MSE objective in (\ref{eq:global_opt_problem_MWMSE_CSIError}) within their defined feasible region. This is expressed as
\begin{subequations} \label{wmmse_}
\begin{align}
\underset{{{\mathbb{C}}}}{ \text{max}} \;\; & \sum_{i \in \mathbb{I}} \sum_{k\in\mathbb{F}_K}\text{tr}\left({\ma{W}_{i}^k}^H \ma{E}_i^k \ma{W}_{i}^k\right), \;\; \\
{\text{s.t.}} \;\; & \left\| \ma{D}_{ij}^k \ma{\Delta}_{ij}^k \right\|_F \leq \zeta_{ij}^k, \;\; \forall i,j \in \mathbb{I},\; k \in \mathbb{F}_K.
\end{align}
\end{subequations}
Since the error feasible sets are decoupled and the objective is separable over the individual $\ma{\Delta}_{ij}^k$, following (\ref{quadratic_error_representation_final}), the above problem decomposes as
\begin{subequations} \label{find_worst_delta_3}
\begin{align}
\underset{ {\ma{b}}_{ij}^k }{ \text{min}} \;\; & - \left\| \ma{C}_{ij}^k \tilde{\ma{D}}_{ij}^k {\ma{b}}_{ij}^k \right\|_2^2 - 2 \text{Re}\left\{ {{\ma{b}}_{ij}^k}^H {{\tilde{\ma{D}}_{ij}^k}}^H {\ma{C}_{ij}^k}^H {\ma{c}}_{ij}^k \right\} - {{\ma{c}}_{ij}^k}^H {\ma{c}}_{ij}^k \label{wwmse_worstcaseerror_nonconvexquadraticobjective}\\ {\text{ s.t.}} \;\; & {{\ma{b}}_{ij}^k}^H {\ma{b}}_{ij}^k \leq {\zeta_{ij}^k}^2,
\end{align}
\end{subequations}
where $\text{Re}\{\cdot\}$ represents the real part of a complex value. Note that the objective in (\ref{wwmse_worstcaseerror_nonconvexquadraticobjective}) is a non-convex function and cannot be minimized by standard numerical solvers in its current form. Following the zero-duality-gap results for non-convex quadratic problems \cite{zheng2012zero, BV:04}, we focus on the dual function of (\ref{find_worst_delta_3}). The Lagrangian function corresponding to (\ref{find_worst_delta_3}) is constructed as
\begin{align} \label{find_worst_delta_Lagrangian}
& \mathcal{L}\left({\ma{b}}_{ij}^k , \rho_{ij}^k \right) = \nonumber \\ & {{\ma{b}}_{ij}^k}^H \ma{A}_{ij}^k {\ma{b}}_{ij}^k - 2 \text{Re}\left\{ {{\ma{b}}_{ij}^k}^H {\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H {\ma{c}}_{ij}^k \right\} - {{\ma{c}}_{ij}^k}^H {\ma{c}}_{ij}^k - \rho_{ij}^k {\zeta_{ij}^k}^2,
\end{align}
where $\rho_{ij}^k$ is the dual variable and
\begin{align}
\ma{A}_{ij}^k := \rho_{ij}^k \ma{I}_{N_jM_i} - {\tilde{\ma{D}}_{ij}^k}^H{{\ma{C}}_{ij}^k}^H{\ma{C}}_{ij}^k\tilde{\ma{D}}_{ij}^k.
\end{align}
Consequently, the value of the dual function is obtained as
\begin{align}
& \ma{g} \left( \rho_{ij}^k \right) = \nonumber \\ & - {{{\ma{c}}}_{ij}^k}^H \ma{C}_{ij}^k \tilde{\ma{D}}_{ij}^k \left(\ma{A}_{ij}^k\right)^{-1} {\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H {{\ma{c}}}_{ij}^k - {{\ma{c}}_{ij}^k}^H{\ma{c}}_{ij}^k - \rho_{ij}^k {\zeta_{ij}^k}^2,
\end{align}
if $ \ma{A}_{ij}^k \succeq 0$ and ${\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H {{\ma{c}}}_{ij}^k \in \mathcal{R}\{\ma{A}_{ij}^k\}$, and otherwise is unbounded from below\footnote{If one of the aforementioned conditions is not satisfied, an arbitrarily large $\ma{b}_{ij}^k$ can be chosen along an eigenvector of $\ma{A}_{ij}^k$ associated with a negative eigenvalue, if $\ma{A}_{ij}^k$ is not positive semi-definite, or along the component of ${\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H {\ma{c}}_{ij}^k$ within the null space of $\ma{A}_{ij}^k$.}. By applying the Schur complement lemma, the maximization of the dual function is written in the epigraph form as
\begin{subequations} \label{eq:dual_channelerrormatrices}
\begin{align}
\underset{ {\rho}_{ij}^k \geq 0 , \; \phi_{ij}^k} { \text{max} } \;\; & - \phi_{ij}^k \\
{\rm s.t.} \;\; & \left[\begin{array}{cc} \phi_{ij}^k - {{\ma{c}}_{ij}^k}^H{\ma{c}}_{ij}^k - {\rho}_{ij}^k {\zeta_{ij}^k}^2 & {{{\ma{c}}}_{ij}^k}^H \ma{C}_{ij}^k \tilde{\ma{D}}_{ij}^k \\ {\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H {{\ma{c}}}_{ij}^k & \ma{A}_{ij}^k \end{array} \right] \succeq 0, \label{eq:dual_channelerrormatrices_schur_semidefiniteconstraint}
\end{align}
\end{subequations}
where $\phi_{ij}^k \in \real $ is an auxiliary variable\footnote{Note that the semi-definite presentation in (\ref{eq:dual_channelerrormatrices_schur_semidefiniteconstraint}) automatically satisfies $ \ma{A}_{ij}^k \succeq 0$, and ${\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H {{\ma{c}}}_{ij}^k \in \mathcal{R}\{\ma{A}_{ij}^k\}$.}. By plugging the obtained dual variable ${\rho}_{ij}^k$ into (\ref{find_worst_delta_Lagrangian}), and considering the fact that $- {\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H \ma{C}_{ij}^k \tilde{\ma{D}}_{ij}^k + {\rho_{ij}^k}^\star \ma{I}_{N_jM_i} \succeq 0$ as a result of (\ref{eq:dual_channelerrormatrices}), the optimal value of $\ma{b}_{ij}^k$ is obtained from (\ref{find_worst_delta_Lagrangian}) as
\begin{align}
{\ma{b}_{ij}^k}^\star = \left( - {\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H \ma{C}_{ij}^k \tilde{\ma{D}}_{ij}^k + {\rho_{ij}^k}^\star \ma{I}_{N_jM_i} \right)^{-1} {\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H {{\ma{c}}}_{ij}^k,
\end{align}
where $(\cdot)^\star$ represents the optimality and the worst case $\ma{\Delta}_{ij}^k$ is consequently calculated via $\text{vec}(\ma{\Delta}_{ij}^k) = \tilde{\ma{D}}_{ij}^k {\ma{b}_{ij}^k}^\star$.
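As an alternative to the SDP in (\ref{eq:dual_channelerrormatrices}), the above structure also admits a simple one-dimensional search: writing $\ma{Q} := {\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H \ma{C}_{ij}^k \tilde{\ma{D}}_{ij}^k$ and $\ma{q} := {\tilde{\ma{D}}_{ij}^k}^H {\ma{C}_{ij}^k}^H {\ma{c}}_{ij}^k$, the norm $\| (\rho \ma{I} - \ma{Q})^{-1} \ma{q} \|_2$ decreases monotonically for $\rho > \lambda_{\max}(\ma{Q})$, so the dual variable activating the norm constraint can be found by bisection. A minimal numpy sketch with random placeholder matrices, ignoring the degenerate case in which $\ma{q}$ is orthogonal to the dominant eigenspace of $\ma{Q}$, is given below:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m, zeta = 6, 4, 0.1

# Placeholder stand-ins for C_ij^k D_tilde_ij^k and c_ij^k:
CD = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)

Q = CD.conj().T @ CD              # quadratic part (convex objective)
q = CD.conj().T @ c
rho_min = np.linalg.eigvalsh(Q)[-1]
b_of = lambda rho: np.linalg.solve(rho * np.eye(m) - Q, q)

lo, hi = rho_min + 1e-9, rho_min + 1.0
while np.linalg.norm(b_of(hi)) > zeta:
    hi *= 2.0                     # expand until the norm drops below zeta
for _ in range(200):              # bisection: enforce ||b(rho)|| = zeta
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if np.linalg.norm(b_of(mid)) > zeta else (lo, mid)

b_star = b_of(hi)                 # worst-case error, ||b_star|| ~ zeta
print(np.linalg.norm(b_star))
\end{verbatim}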
\subsection{Computational complexity} \label{alg_complexity}
The proposed designs in Sections~\ref{sec:WMMSE} and \ref{sec:WMMSE_CSI_Error} are based on the alternating optimization of the design variables. Furthermore, it is observed that the consideration of non-linear hardware distortions, leading to inter-carrier leakage, as well as the impact of CSI error, results in a higher problem dimension and thereby complicates the structure of the resulting optimization problem. In this part, we analyze the arithmetic complexity associated with the algorithm of Section~\ref{sec:WMMSE_CSI_Error}. Note that this algorithm is considered as a general framework, containing the algorithm of Section~\ref{sec:WMMSE} as a special case, since it takes into account the impacts of hardware distortion jointly with CSI error. \par
The optimizations over $\mathbb{V}$ and $\mathbb{U}$ are separately cast as SDPs. A general SDP problem is defined as
\begin{align}
\underset{\ma{z}}{\text{min}} \;\; \ma{p}^T \ma{z}, \;\; {\text{s.t.}} \;\; \ma{z}\in \real^n, \; \ma{Y}_0 + \sum_{i=1}^n z_i \ma{Y}_i \succeq 0, \; \|\ma{z}\|_2 \leq q,
\end{align}
where the fixed matrices $\ma{Y}_i$ are symmetric block-diagonal, with $M$ diagonal blocks of the sizes $l_m \times l_m,\; m \in \mathbb{F}_M$, and define the specific problem structure, see \cite[Subsection~4.6.3]{ben2001lectures}. The arithmetic complexity of obtaining an $\epsilon$-solution to the defined problem, i.e., the convergence to the $\epsilon$-distance vicinity of the optimum is upper-bounded by
\begin{align}
\mathcal{O}(1) \left(1 + \sum_{m=1}^{M} l_m \right)^{\frac{1}{2}} \left( n^3 + n^2 \sum_{m=1}^{M} l_m^2 + n \sum_{m=1}^M l_m^3\right) \text{digit}\left( \epsilon\right),
\end{align}
where $\text{digit}(\epsilon)$ is obtained from \cite[Subsection~4.1.2]{ben2001lectures} and reflects the required solution precision. The computation required in each step is hence determined by the size of the variable space and the corresponding block-diagonal matrix structure, which are obtained in the following:
\subsubsection{Optimization over $\mathbb{V},\mathbb{T},\mathbb{M}$}
The size of the variable space is given as $n= 2K \left( 4 + \sum_{i\in\mathbb{I}} d_iN_i \right)$. Moreover, the block sizes are calculated as $l_m = 2 + 2 K d_i N_i, \; \forall i \in \mathbb{I}$, corresponding to the semi-definite constraint on $\ma{G}_i$, and as $l_m = 2+ 2\tilde{d}_{ij} + 2M_iN_j, \; \forall i,j \in \mathbb{I},k\in\mathbb{F}_K$, corresponding to the semidefinite constraint on $\ma{F}_{i,j}^k$ from (\ref{Prob:MMSE_CSI_Error_final}). The overall number of the blocks is calculated as $M=2+4K$.
\subsubsection{Optimization over $\mathbb{U},\mathbb{T},\mathbb{M}$}
The size of the variable space is given as $n= 2K \left( 4 + \sum_{i\in\mathbb{I}} d_i M_i \right)$. The block sizes are calculated as $l_m = 2+ 2\tilde{d}_{ij} + 2M_iN_j, \; \forall i,j \in \mathbb{I},k\in\mathbb{F}_K$, corresponding to the semidefinite constraint on $\ma{F}_{i,j}^k$ from (\ref{Prob:MMSE_CSI_Error_final}). The overall number of the blocks is calculated as $M=4K$.
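For illustration, the above bounds are straightforward to evaluate for concrete dimensions; the helper below uses hypothetical values ($K = 4$, $N_i = M_i = 4$, $d_i = \tilde{d}_{ij} = 2$, two directions) and drops the $\mathcal{O}(1)$ and $\text{digit}\left(\epsilon\right)$ factors:
\begin{verbatim}
import numpy as np

def sdp_complexity(n, blocks):
    """Per-iteration arithmetic bound (up to constant factors) for an
    SDP with variable dimension n and semidefinite block sizes blocks."""
    l = np.asarray(blocks, dtype=float)
    return np.sqrt(1 + l.sum()) * (n**3 + n**2*(l**2).sum()
                                   + n*(l**3).sum())

K, N, M_ant, d, dt = 4, 4, 4, 2, 2      # hypothetical dimensions
n_V = 2*K*(4 + 2*d*N)                   # variable space, V-step
blocks_V = [2 + 2*K*d*N]*2 + [2 + 2*dt + 2*M_ant*N]*(4*K)
n_U = 2*K*(4 + 2*d*M_ant)               # variable space, U-step
blocks_U = [2 + 2*dt + 2*M_ant*N]*(4*K)
print(sdp_complexity(n_V, blocks_V), sdp_complexity(n_U, blocks_U))
\end{verbatim}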
\subsubsection{Remarks}
The above analysis intends to show how the bounds on computational complexity are related to different dimensions in the problem structure. Nevertheless, the actual computational load may vary in practice, due to the structure simplifications and depending on the used numerical solver. Furthermore, the overall algorithm complexity also depends on the number of optimization iterations required for convergence. See Subsection~\ref{sec_AlgorithmAnalysis} for a study on the convergence behavior, as well as a numerical evaluation of the algorithm computational complexity.
\subsection{Robust MMSE Transceiver Design} \label{subsec_MMSE}
The optimization in (\ref{eq:global_opt_problem_4}) provides a design for maximizing the system sum rate, where the rate functions are re-structured using the WMMSE method. Alternatively, an optimization problem for obtaining the optimal MMSE system operation can be formulated as
\begin{align}
\underset{{\mathbb{V}}, {\mathbb{U}}}{ \text{min}} \;\; \underset{{\mathbbl{\Delta}}}{\text{max}} \;\; &\;\; \sum_{i \in \mathbb{I}} \mu_i \sum_{k \in \mathbb{K}} \text{tr} \left( \ma{E}_i^k\right) ,\;\; {\rm s.t.} \;\; \text{(\ref{eq:global_opt_problem_1_power_contraint}),~(\ref{eq:global_opt_problem_1_normbounded_contraint})}, \label{MMSE_global_opt_problem}
\end{align}
where the maximization over ${\mathbbl{\Delta}}$ represents the worst-case channel estimation error in the MMSE sense, and $\mu_i \in \real$ represents the significance of the estimation MSE in communication direction $i$. As can be observed from (\ref{eq:global_opt_problem_3}), the defined MMSE optimization (\ref{MMSE_global_opt_problem}) can be interpreted as a special case of (\ref{eq:global_opt_problem_3}) by choosing $\nu_i = \mu_i$ and setting $\ma{W}_i^k = \ma{I}_{d_i}$ in the objective and in the definitions of $\ma{c}_{ij}^k$ and $\ma{C}_{ij}^k, \forall i,j \in \mathcal{D}, k \in \mathbb{K}$. As a result, a similar iterative optimization can be employed, where at each step a convex sub-problem is solved over ${\mathbb{U}}$ and ${\mathbb{V}}$. Note that unlike the rate maximization case, where the max-min inequality is employed to construct a lower bound on the rate function, the provided framework acts on the exact sum-MSE value as the objective. Moreover, the utilization of the MAX-DET algorithm is not necessary and all sub-problems are cast as standard SDPs, due to the elimination of ${\mathbb{W}}$ from the set of optimization variables.
\subsection{Extended Power Constraints}
System power constraints usually reflect the limitations of the battery or device energy storage. Nevertheless, multiple power constraints may become relevant for systems with distributed antennas \cite{5963593}\footnote{In a distributed antenna system a centralized core performs joint transmit/receive processing tasks for all chains. Nevertheless, each antenna unit, or each sub-set of antennas, can be connected to a separate power source.} or for transceivers where an accurate operation of the chain elements is crucial. In particular, a FD transceiver is sensitive to the operational accuracy of the hardware elements, which may become saturated or inaccurate outside of a given functional range. Hence, per-transmit-antenna power constraints \cite{CZHH:14,7478064}, as well as total or per-receive-antenna self-interference power constraints \cite{HL:14, Huberman2014, JTLH:12}, may become relevant.
In order to incorporate this, we generalize the defined power constraint in (\ref{p4_power}) as
\begin{subequations}
\begin{align}
& \ma{s}_{\text{tx},l}^T \left\lceil \text{diag}\left( \sum_{k \in \mathbb{K} } \tilde{\ma{V}}_{i}^k \right) \right\rceil_{i \in \mathbb{I} } \leq p_{\text{tx},l},\; l \in \mathbb{C}_{\text{tx}}, \\
&\ma{s}_{\text{rx},l}^T \left\lceil \text{diag}\left( \sum_{k \in \mathbb{K} } \ma{H}_{ij} \tilde{\ma{V}}_{j}^k \ma{H}_{ij}^H \right) \right\rceil_{i \in \mathbb{I}} \leq p_{\text{rx},l}, \; \forall i \neq j \in \mathbb{I},\; l \in \mathbb{C}_{\text{rx}}, \\
& \left[\begin{array}{cc} \tilde{\ma{V}}_{i}^k & {{\ma{V}}_{i}^k} \\ {{\ma{V}}_{i}^k}^H & \ma{I}_{d_i} \end{array} \right] \succeq 0, \; \forall i \in \mathbb{I}, \; k \in \mathbb{K},
\end{align}
\end{subequations}
where $\tilde{\ma{V}}_{i}^k$ is the transmit covariance in the direction $i$ and subcarrier $k$, $\ma{s}_{\text{tx},l}$ ($\ma{s}_{\text{rx},l}$) is a selection vector with zero and one elements, and $\mathbb{C}_{\text{tx}}$ ($\mathbb{C}_{\text{rx}}$) is the set of transmit (receive) power constraints\footnote{As an example, the choice $\ma{s}_{\text{tx},l} = [ 1 \ldots 1 ] $ represents a total transmit power constraint over all antennas.}.
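To make the selection-vector notation concrete, the following sketch (with random placeholder precoders) evaluates the per-antenna transmit powers and applies a sum-power as well as a per-antenna selection vector:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
K, N, d = 4, 4, 2                  # subcarriers, antennas, streams

# Random placeholder precoders V^k of size N x d:
V = [rng.standard_normal((N, d)) + 1j * rng.standard_normal((N, d))
     for _ in range(K)]

# Per-antenna transmit powers: diag(sum_k V^k V^k^H).
p_ant = sum(np.real(np.diag(Vk @ Vk.conj().T)) for Vk in V)

s_total = np.ones(N)               # selects the total transmit power
s_ant0 = np.eye(N)[0]              # selects antenna 0 only
print(s_total @ p_ant, s_ant0 @ p_ant)  # compare to p_tx budgets
\end{verbatim}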
\subsection{Extension to the Multi-User Interference Channel Setup}
The optimization framework proposed in the previous sections considers a multi-carrier FD communication in a P2P setup. Nevertheless, the proposed formulation can be easily extended to a scenario with multiple simultaneously active communication pairs, i.e., a FD interference channel. Such a setup gains high practical relevance, considering the trend of wireless systems towards frequency reuse and network densification, which results in the coexistence of multiple wireless communication links in the same area. In particular, while the proposed design simultaneously accounts for the inherent inaccuracies of a FD system regarding inaccurate CSI and the impacts of hardware impairments, it benefits the performance of an interference channel by taking advantage of the additional design freedom. This includes the application of the possible duplexing modes\footnote{This is since each communicating pair may now operate as a HD link in two directions, or act as a FD link where both directions are active.}, as well as spatial (due to the multiple antenna setup) and multi-carrier diversities. The latter two are particularly beneficial in controlling the additional interference terms in a dense communication setup, by properly assigning different beams/subchannels to the coexisting communication links.
In order to extend our design framework, we generalize the definition of the set $\mathbb{I}$ to include all communication links. In this way, each pair consists of two communication links operating in opposite directions. Similar to Section~\ref{sec:model}, the channel $\ma{H}_{ii}$ represents the desired channel of link $i$, whereas $\ma{H}_{ij}, i \neq j$, represents the interference channel from link $j$ to link $i$. As a result, a similar signal definition is obtained via (\ref{model:transmitsignal}), (\ref{model:rx_signal}), and (\ref{model:ri_estimatedsignal}), where the aggregate interference-plus-noise on link $i$ and subcarrier $k$ is redefined as
\begin{align}
{\boldsymbol{\nu}}_{i}^k =\ma{n}_{i}^k + \ma{e}_{\text{r},i}^k + \ma{\Delta}_{i\bar{i}}^k \ma{V}_{\bar{i}}^k \ma{s}_{\bar{i}}^k + \hspace{-3mm} {\sum_{j \in \{\mathbb{I}\setminus \{i,\bar{i}\}\}}} \hspace{-3mm} \ma{H}_{ij}^k \ma{x}_{j}^k + \hspace{-1mm} \sum_{ j\in \{i,\bar{i}\} } \hspace{-1mm} \ma{H}_{ij}^k \ma{e}_{\text{t},j}^k ,
\end{align}
where $\bar{i}\in\mathbb{I}$ is the index of the communication link at the opposite direction of the link $i$, but among the same pair of nodes. Consequently, the aggregate interference-plus-noise covariance is obtained as (\ref{eq_extension_multiplelinks_aggregate_interference_covariance}). Moreover, following the same procedure as in (\ref{MSE_Matrix})-(\ref{eq:global_opt_problem_4}), a similar optimization structure is obtained where $\ma{c}_{ij}^k$ and $\ma{C}_{ij}^k$ are redefined with the consideration of the new interference terms, see Appendix.
\begin{figure*}[!hb]
\normalsize
\begin{align} \label{eq_extension_multiplelinks_aggregate_interference_covariance}
\ma{\Sigma}_{i}^k & = \ma{\Delta}_{i\bar{i}}^k \ma{V}_{\bar{i}}^k{\ma{V}_{\bar{i}}^k}^H {\ma{\Delta}_{i{\bar{i}}}^k}^H +
\nonumber \\ & {\sum_{j \in \mathbb{I} }} \ma{H}_{ij}^k \ma{\Theta}_{\text{tx},j}^k \text{diag} \left( \sum_{k \in \mathbb{K} } \ma{V}_j^k{\ma{V}_j^k}^H \right) {\ma{H}_{ij}^k}^H
+ \ma{\Theta}_{\text{rx},i}^k \text{diag} \left( \sum_{k\in\mathbb{K}} \left( \sigma_{i,k}^2 \ma{I}_{M_i} + {\sum_{j \in \mathbb{I}}} \ma{H}_{ij}^k \ma{V}_j^k{\ma{V}_j^k}^H {\ma{H}_{ij}^k}^H \right) \right) \nonumber \\ & \;\;\;\; + {\sum_{j \in \{\mathbb{I}\setminus \{i,\bar{i}\}\}}} \ma{H}_{ij}^k \ma{V}_j^k{\ma{V}_j^k}^H {\ma{H}_{ij}^k}^H + \sigma_{i,k}^2 \ma{I}_{M_i}.
\end{align}
\hrulefill
\vspace*{-0mm}
\end{figure*}
\subsection{Related works on SEE maximization}
\vspace{-1mm}
\subsection{Contribution and paper organization}
\begin{itemize}[leftmargin=*]
\item In this work, \revOmid{we study \revPeterNew{an}} SEE maximization problem for a general MIMOME setup, where the legitimate receiver is capable of FD jamming. \revPeterNew{This is in contrast to} available designs which utilize FD transceivers for improving the secrecy capacity~\cite{7339654, ZGYZ:14, 6787008, 7792199, secrecy_ICC_taghizadeh, 7945480} or the studies on the SEE of half-duplex (HD) networks~\cite{zappone2016energy,7094604, 6199997, 6476945}. Due to the intractable problem structure, \revOmid{an iterative algorithm} is proposed, with a guaranteed convergence to a point satisfying \revPeterNew{the Karush-Kuhn-Tucker (KKT) conditions of the original problem}.
\item \revPeterNew{The joint utilization of FD capabilities, both on Alice and Bob, for jamming and bi-directional information exchange, shows additional potential for SEE improvement}. This is grounded on the fact that, firstly, the FD jamming power is reused for both communication directions, resulting in power-efficient jamming, and secondly, the coexistence of two communication directions on the same channel may degrade Eve's decoding capability. Motivated by this, the \revOmid{proposed iterative algorithm is extended for} \revPeterNew{a bidirectional FD} setup.
\item In order to account for \revOmid{channel state information (CSI)} uncertainties, the consideration of statistical CSI regarding the channels to Eve has been introduced in \cite{lin2013secrecy, zappone2016energy}, considering HD nodes. However, the aforementioned works limit the studied setups to a single antenna Eve, where CSI statistics follow a specific fast-fading nature. In this work \revPeterNew{\revOmid{an}} SEE maximization problem is studied for \revPeterNew{\revOmid{a}} FD-enabled MIMOME setup, where the channels to Eve follow an arbitrary statistical distribution. Note that unlike the fast-fading \revPeterNew{condition}, which assumes the CSI is not available due to mobility, the consideration of an arbitrary statistical distribution also accounts for the scenarios \revPeterNew{where Eve is stationary, but Eve's CSI cannot be obtained due to the lack of collaboration from Eve}. Hence, we propose a successive selection and statistical lower bound maximization (SSSLM) algorithm, utilizing a combination of \revOmid{sample average} approximation~\cite{kim2015guide}, \revPeterNew{\revOmid{and}} successive lower bound approximation method \cite{razaviyayn2013unified}, with the goal of maximizing the \revPeterNew{statistical average SEE}. The algorithm is proven to converge to a point satisfying the KKT conditions \revPeterNew{of the original problem}.
\end{itemize}
The numerical results indicate only a marginal SEE gain, \revPeterNew{through} the utilization of FD jamming, for a wide range of system \revPeterNew{parameters}. However, the observed SEE gain is notable for \revPeterNew{systems with a small distance between the FD node and the eavesdropper, a high \revPeter{signal-to-noise ratio (SNR)}, or for a bidirectional FD communication setup, if SI can efficiently be mitigated.}
\revOmid{The studied system model is defined in Section~\ref{sec:model}. When \revPeterNew{only the legitimate receiver (Bob)} is capable of FD jamming, an SEE maximization framework is introduced in Section~\ref{sec_SEE_max}, and then extended \revPeterNew{to a bidirectional} FD communication setup in Section~\ref{sec_SEE_max_BD}. \revPeterNew{In} Section~\ref{sec_SSUM}, \revPeterNew{an} SEE maximization framework is introduced \revPeterNew{for the case} when the channels to the eavesdropper are not accurately known. The behavior of the proposed algorithms, as well as the impact of the FD jamming on the SEE performance are numerically studied in Section~\ref{sec:simulations}. This paper is concluded in Section~\ref{sec:conclusion} by summarizing the main results.}
\subsection{Mathematical notation}
Throughout this paper, column vectors and matrices are denoted as lower-case and {upper-case} bold letters, respectively. Mathematical expectation, trace, determinant, {and} Hermitian transpose are denoted by $ \mathbb{E}(\cdot), \; {\text{ tr}}(\cdot), \; |\cdot|,$ {and} $(\cdot)^{ H},$ respectively. The Kronecker product is denoted by $\otimes$. The identity matrix with dimension $K$ is denoted as ${\ma I}_K$ and ${\rm vec}(\cdot)$ operator stacks the elements of a matrix into a vector. {Moreover,} $(\cdot)^{-1}$ represents the inverse of a matrix and $||\cdot||_{2},||\cdot||_{\text{F}}$ {respectively represent the Euclidean and Frobenius norms}. {\MinorRR{$\ma{0}_{m \times n}$ represents an all-zero matrix with size $m \times n$.}} $\text{diag}(\cdot)$ returns a diagonal matrix by putting the off-diagonal elements to zero. \revPeter{$\bot$ denotes statistical independence.} The set $\mathbb{F}_K$ is defined as $\{1,2,\ldots,K\}$, and $|\mathbb{X}|$ denotes the size of the set $\mathbb{X}$. \revPeter{The set of positive real numbers,} the set of complex numbers, and the set of all positive semi-definite matrices with Hermitian symmetry are denoted by $\mathbb{R}^+$, $\compl$ and $\mathcal{H}$, respectively. $a^\star$ indicates the value of $a$ for which optimality holds. The value of $\{x\}^+$ is equal to $x$, if positive, and zero otherwise. \revPeter{Furthermore, $\mathcal{CN} \left( \ma{x}, \ma{X} \right)$ denotes the complex normal distribution with mean $\ma{x}$ and covariance $\ma{X}$.}
%
%
%
\subsection{Related works on FD MC systems} \label{related_works}
In the early work by Riihonen \emph{et al.} \cite{6488955}, the performance of a combined analog/digital SIC scheme is evaluated for an FD orthogonal frequency-division multiplexing (OFDM) transceiver, taking into account the impact of hardware distortions, e.g., limited analog-to-digital converter (ADC) accuracy. The problem of resource allocation and performance analysis for FD MC communication systems is then addressed in \cite{5449862,7270330,7504451,sun2016optimal,7454410,7194031,6832469}, albeit assuming single-antenna transceivers.
In this regard, an FD MC system is studied in \cite{5449862,7270330,7504451} in the context of FD relaying, in \cite{7454410, 7194031} and \cite{sun2016optimal} in the context of FD cellular systems with non-orthogonal multiple access (NOMA) capability, and in \cite{6832469} for the rate region analysis of a hybrid HD/FD link. Moreover, an MC relaying system with hybrid decode/amplify-and-forward operation is studied in \cite{ng2012dynamic}, with the goal of maximizing the system sum rate via scheduling and resource allocation. \par
In all of the aforementioned designs, the residual self-interference signal is modeled as a purely linear system. As a result, the impact of the hardware distortions leading to inter-carrier leakage, as observed in \cite{6488955}, is neglected. Moreover, to the best of the authors' knowledge, the design of a bidirectional MC MIMO system, where the communication links in opposite directions simultaneously suffer from self-interference, is still an open problem. \par
\subsection{Contribution and paper organization}
In this paper we study a bidirectional FD MIMO OFDM system, where the impacts of hardware distortions, leading to imperfect SIC and inter-subcarrier leakage, are taken into account.
Our main contributions are summarized as follows:
\begin{itemize}
\item In Section~\ref{sec:model} the operation of a FD MC transceiver is modeled, taking into account the impacts of hardware impairments, following the same framework proposed in \cite{DMBS:12}. As a result, the explicit impact of hardware inaccuracies on the inter-carrier leakage is observed.
\item In Section~\ref{sec:WMMSE}, an alternating quadratic convex program (QCP), denoted as AltQCP, is proposed in order to obtain a minimum weighted-MSE transceiver design. The known weighted-minimum-MSE (WMMSE) method \cite{CACC:08} is then utilized to extend the AltQCP framework to maximizing the system sum rate. For both algorithms, a monotonic performance improvement is observed at each step, which guarantees convergence.
\item In Section~\ref{sec:WMMSE_CSI_Error} the design proposed in Section~\ref{sec:WMMSE} is extended by also taking into account the impact of the CSI error. This is done by updating the system model proposed in Section~\ref{sec:model}. Moreover, a worst-case MMSE design is proposed as an alternating semidefinite program (SDP), denoted as AltSDP. Similar to the previous methods, a monotonic performance improvement is observed at each step, which guarantees convergence.
\item In Section~\ref{sec_discussions} the computational complexity of the proposed AltSDP algorithm is analytically obtained in relation to the system dimensions. Moreover, a methodology for obtaining the least favorable CSI error matrices is derived, by transforming the resulting non-convex quadratic problem into a convex problem.
\item Finally, the proposed designs are numerically evaluated for various system parameters in Section~\ref{sec:simulations}. In particular, it is observed that the gain of utilizing a design which takes into account the impacts of inter-carrier leakage, resulting from hardware distortions, becomes significant as transceiver inaccuracy increases. The main conclusions of this paper are then summarized in Section~\ref{sec:conclusion}.
\end{itemize}
\subsection{Mathematical Notation}
Throughout this paper, column vectors and matrices are denoted as lower-case and upper-case bold letters, respectively. Mathematical expectation, trace, inverse, determinant, transpose, conjugate, and Hermitian transpose are denoted by $\mathbb{E}\{\cdot\}$, $\text{tr}(\cdot)$, $(\cdot)^{-1}$, $|\cdot|$, $(\cdot)^{T}$, $(\cdot)^{*}$, and $(\cdot)^{H}$, respectively. The Kronecker product is denoted by $\otimes$. The identity matrix with dimension $K$ is denoted as ${\ma I}_K$, and the ${\text{vec}}(\cdot)$ operator stacks the elements of a matrix into a vector. $\ma{0}_{m \times n}$ represents an all-zero matrix of size $m \times n$. $\| \cdot \|_{2}$ and $\|\cdot\|_{{F}}$ respectively represent the Euclidean and Frobenius norms. $\text{diag}(\cdot)$ returns a diagonal matrix by putting the off-diagonal elements to zero. $\left\lfloor \mathbf{A}_i \right \rfloor_{i=1,\ldots,K}$ denotes a tall matrix, obtained by stacking the matrices $\mathbf{A}_i,~i=1,\ldots, K$. $\mathcal{R}\{\ma{A}\}$ represents the range (column space) of the matrix $\ma{A}$. The sets of real, positive real, and complex numbers are respectively denoted as $\mathbb{R}$, $\mathbb{R}^+$, and $\compl$.
\subsection{SUIAP} \label{subsec_SUIAP}
The proposed SUIAP algorithm consists of two nested loops. The detailed procedure is explained in the following.
\subsubsection{Initialization} \label{sec_SUIAP_init}
{In this section we briefly discuss the initialization of Algorithm~\ref{SUIAP}.} We separate the choice of spatial beams and power allocation for different transmissions, in order to obtain a fast solution.
\subsubsection{Spatial adjustment}
The role of the transmit spatial adjustment is to direct the transmit signal towards the desired receiver, while preventing leakage in undesired directions. This is written as the following maximization
\begin{align} \label{eq_op_SEE_max_ratio_initialization}
\underset{\ma{Q} }{ \text{max}} \;\; \frac{\text{tr} \left( \ma{F} \ma{Q} \ma{F}^H \right) + \nu_f}{\text{tr} \left( \ma{G} \ma{Q} \ma{G}^H \right) + \nu_g}, \;\; \text{s.t.} \;\; \text{tr} \left(\ma{Q}\right) = 1,
\end{align}
where $\ma{Q}$ represents the normalized covariance matrix, $\ma{F}$ and $\ma{G}$ are the desired and undesired channels, and $\nu_f, \nu_g$ are the noise variances at the desired and undesired receivers, respectively. An optimal solution to (\ref{eq_op_SEE_max_ratio_initialization}) can be obtained as
\begin{align} \label{eq_SEE_max_ratio_initialization}
\text{vec}\left( {\ma{Q}^\star}^{\frac{1}{2}} \right) = \mathcal{P}_{\text{max}} \left( \left( \ma{I}\otimes \ma{G}^H\ma{G} + \nu_g \ma{I} \right)^{-1} \left( \ma{I}\otimes \ma{F}^H\ma{F} + \nu_f \ma{I} \right) \right),
\end{align}
where $\mathcal{P}_{\text{max}} \left( \cdot \right)$ calculates the dominant eigenvector. Note that the above approach is applied separately for the spatial adjustment of $\ma{Q}_a,\ma{W}_a$ and $\ma{W}_b$. The corresponding desired and undesired channels are defined in Appendix~\ref{appendix_init_spatialadjustent}.
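For illustration, a minimal \texttt{numpy} sketch of the update (\ref{eq_SEE_max_ratio_initialization}) is given below; the function name and the Fortran-order reshape (matching the column-stacking $\text{vec}(\cdot)$ convention) are our own implementation choices, not part of the algorithm specification.
\begin{verbatim}
import numpy as np

def spatial_adjustment(F, G, nu_f, nu_g):
    # maximize (tr(F Q F^H) + nu_f) / (tr(G Q G^H) + nu_g)  s.t. tr(Q) = 1
    N = F.shape[1]
    A = np.kron(np.eye(N), G.conj().T @ G) + nu_g * np.eye(N * N)
    B = np.kron(np.eye(N), F.conj().T @ F) + nu_f * np.eye(N * N)
    w, V = np.linalg.eig(np.linalg.solve(A, B))  # dominant eigenvector of A^{-1}B
    q = V[:, np.argmax(w.real)]
    S = q.reshape(N, N, order='F')               # un-vectorize Q^{1/2}
    Q = S @ S.conj().T                           # positive semi-definite by construction
    return Q / np.trace(Q).real                  # enforce tr(Q) = 1
\end{verbatim}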
\subsubsection{Power allocation}
The transmit power adjustment for $\ma{Q}_a,\ma{W}_a$ and $\ma{W}_b$ is obtained by applying the normalized covariance in the previous part as the basis. Afterwards, the power for each transmission is optimized to maximize $\text{SEE}_p$, see Appendix~\ref{appendix_init_powAdj}.
\subsubsection{Outer loop}
In each outer iteration, the optimization problem (\ref{problem_SEE_max_1}) is approximated by replacing the objective with an effective lower bound to $\text{SEE}_p$, following the successive inner approximation (SIA) framework \cite{marks1978technical}. This is implemented by applying the inequality
\begin{align} \label{eq_SEE_max_SEE_Taylor}
- \text{log} \left| \ma{X}\right| \geq - \text{log} \left| \ma{Y} \right| + \text{tr}\left( \ma{Y}^{-1} \left( \ma{Y} - \ma{X} \right) \right)
\end{align}
obtained from the first-order Taylor approximation of the convex terms $- \text{log} \left| \ma{X} \right|$ at the point $\ma{X} = \ma{Y}$. The approximated optimization problem at the $l$-th outer iteration is consequently expressed as
\begin{align} \label{eq_SEE_lower_bound_SIA}
\underset{ \mathbb{Q}^{[l]} }{\text{max}} \;\; & \underbrace{C_{LB,ab} \left(\mathbb{Q}^{[l]}, {\mathbb{Q}^{[l-1]}}^\star \right)/P_{\text{tot}} \left(\mathbb{Q}^{[l]}\right) }_{\leq \text{SEE}_p\left( \mathbb{Q}^{[l]} \right)} \;\; \text{s.t.} \;\; \text{(\ref{problem_SEE_max_1_c})} ,
\end{align}
\begin{figure*}[!t]
\normalsize
{{\revOmid{{{\begin{align}
& C_{LB,ab} \left( \mathbb{Q}^{[i]} , {\mathbb{Q}}^{[j]} \right) := \text{log} \left| \ma{\Sigma}_b \left( {\mathbb{Q}}^{[i]} \right) + \ma{H}_{ab}\ma{Q}_a^{[i]}\ma{H}_{ab}^H \right| - \text{log} \left| \ma{\Sigma}_b\left( {\mathbb{Q}}^{[j]} \right) \right| + \text{log} \left| \ma{\Sigma}_e\left( {\mathbb{Q}}^{[i]} \right) \right| \nonumber \\
& +\hspace{-0mm} \text{tr}\Big(\hspace{-0mm} \Big( \ma{\Sigma}_e\left( {\mathbb{Q}}^{[j]} \right) + \ma{H}_{ae}\ma{Q}_a^{[j]}\ma{H}_{ae}^H \Big)^{-1} \Big( \hspace{-0mm} \ma{\Sigma}_e\left( {\mathbb{Q}}^{[j]} \right) - \ma{\Sigma}_e\left( {\mathbb{Q}}^{[i]} \right) + \ma{H}_{ae}\Big( \hspace{-0mm} \ma{Q}_a^{[j]} - \ma{Q}_a^{[i]} \hspace{-0mm} \Big) \ma{H}_{ae}^H\Big)\hspace{-0mm} \Big) \nonumber \\
& + \text{tr}\left( \left( \ma{\Sigma}_b\left( {\mathbb{Q}}^{[j]} \right) \right)^{-1} \left( \ma{\Sigma}_b\left( {\mathbb{Q}}^{[j]} \right) - \ma{\Sigma}_b\left( {\mathbb{Q}}^{[i]} \right) \right) \right) - \text{log} \left| \ma{\Sigma}_e\left(\hspace{-0mm} {\mathbb{Q}}^{[j]}\hspace{-0mm} \right) \hspace{-0mm}+ \hspace{-0mm} \ma{H}_{ae}\ma{Q}_a^{[j]}\ma{H}_{ae}^H \right| \hspace{-0mm} \label{eq_SEE_max_SEE_Approx_Dincklebach}
\end{align} }}} }}
\hrulefill
\vspace*{-5mm}
\end{figure*}
where $\mathbb{Q}^{[X]}:= \left\{ \ma{Q}_a^{[X]}, \ma{W}_a^{[X]}, \ma{W}_b^{[X]} \right\}$, with $X$ specifying an iteration instance. Moreover, $C_{LB,ab}$ is given in (\ref{eq_SEE_max_SEE_Approx_Dincklebach}) and ${\mathbb{Q}^{[l-1]}}^\star$ represents the obtained solution at the previous outer iteration.
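As a numerical sanity check of the bound (\ref{eq_SEE_max_SEE_Taylor}) underlying this approximation, the following sketch draws random Hermitian positive definite matrices and verifies the lower-bound property; the test matrices and tolerance are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def rand_hpd(n):  # random Hermitian positive definite matrix
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T + n * np.eye(n)

X, Y = rand_hpd(4), rand_hpd(4)
lhs = -np.linalg.slogdet(X)[1]
rhs = -np.linalg.slogdet(Y)[1] + np.trace(np.linalg.solve(Y, Y - X)).real
assert lhs >= rhs - 1e-9      # global lower bound, tight at X = Y
\end{verbatim}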
\subsubsection{Inner loop}
The inner loop is dedicated to solving the approximated problem at each outer iteration to optimality via the well-known Dinkelbach algorithm \cite{dinkelbach1967nonlinear}, as (\ref{eq_SEE_lower_bound_SIA}) belongs to the class of concave-over-affine fractional programs \cite{beckenbach1937generalized}. In particular, the optimum solution is obtained via a sequence of parametric variable updates; see Appendix~\ref{appendix_dinkelbach} for an elaboration on the related class of fractional programs and the detailed procedure. The main steps associated with Dinkelbach's algorithm, i.e., Steps~$3$ and $5$ of Algorithm~\ref{alg_Dinkelbach}, can be expressed as the following updates:
\begin{align}
{ \mathbb{Q}^{[l,k]} }^\star & \leftarrow \underset{ \mathbb{Q}^{[l,k]} }{\text{argmax}} \;\; C_{LB,ab} \left(\mathbb{Q}^{[l,k]}, {\mathbb{Q}^{[l-1]}}^\star \right) - {\lambda^{[l,k-1]}}^\star P_{\text{tot}}\left(\mathbb{Q}^{[l,k]}\right) \;\; \text{s.t.} \;\; \text{(\ref{problem_SEE_max_1_c})} , \label{Dinkelbach_SEE_1} \\
{\lambda^{[l,k]}}^\star & \leftarrow C_{LB,ab} \left({\mathbb{Q}^{[l,k]}}^\star, {\mathbb{Q}^{[l-1]}}^\star \right) / P_{\text{tot}}\left({\mathbb{Q}^{[l,k]}}^\star\right), \label{Dinkelbach_SEE_2}
\end{align}
associated with the $l$-th outer iteration and $k$-th inner iteration.
It is observed that (\ref{Dinkelbach_SEE_1}) is a jointly convex problem over the optimization variables $\mathbb{Q}^{[l,k]}$ and can efficiently be implemented via the MAX-DET algorithm \cite{vandenberghe1998determinant}, whereas (\ref{Dinkelbach_SEE_2}) can be obtained via direct evaluation.
The defined algorithm steps, both outer and inner loop iterations, are continued until a jointly stable point is obtained, see Algorithm~\ref{SUIAP} for more details.
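To make the inner-loop mechanics concrete, the following sketch applies the Dinkelbach updates to a toy scalar concave-over-affine ratio; the particular functions $\log(1+x)$ and $p_0 + x$ are illustrative stand-ins for $C_{LB,ab}$ and $P_{\text{tot}}$, not the actual system objective.
\begin{verbatim}
import numpy as np

def dinkelbach_toy(p0=10.0, x_max=100.0, tol=1e-9, max_iter=100):
    # maximize log(1 + x) / (p0 + x)  over  0 <= x <= x_max
    lam = 0.0
    for _ in range(max_iter):
        # parametric subproblem: argmax_x log(1+x) - lam*(p0 + x), closed form here
        x = x_max if lam == 0.0 else float(np.clip(1.0 / lam - 1.0, 0.0, x_max))
        F = np.log1p(x) - lam * (p0 + x)   # auxiliary objective
        lam = np.log1p(x) / (p0 + x)       # parameter update
        if F <= tol:                       # converged: auxiliary optimum reaches zero
            break
    return x, lam
\end{verbatim}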
\vspace{-2mm}
\subsubsection{Convergence}
Via the application of Dinkelbach's algorithm to the class of concave-over-affine fractional programs, the iterations of the inner loop converge to a globally optimal solution of (\ref{eq_SEE_lower_bound_SIA}). This follows from the strictly monotonic nature of the auxiliary function of (\ref{eq_SEE_lower_bound_SIA}), and the convexity of the update (\ref{Dinkelbach_SEE_1}). For more information please see Appendix~\ref{appendix_dinkelbach} and the references therein. The following lemmas reveal the nature of the convergence of the outer loop.
\begin{lemma} \label{lemma_SIA_Convergence}
(SIA sequence: \cite[Theorem~1]{marks1978technical}) Consider the optimization problem
\begin{align} \label{lemma_SIA_convergence}
\underset{ \ma{x} }{\text{min}} \;\; & g_0 (\ma{x}) \;\; \text{s.t.} \;\; g_i(\ma{x}) \leq 0, \; \forall i \in \{1,\ldots,{I}\},
\end{align}
where $g_0:\mathbb{R}^n \rightarrow \mathbb{R}^+$ and $g_i:\mathbb{R}^n \rightarrow \mathbb{R}$, $i \in \{1,\ldots,I\}$, are differentiable but potentially non-convex functions. Furthermore, consider the differentiable functions ($\forall i$) $\bar{g}_i$, approximating ${g}_i$ at $\ma{x}_0$, such that $(i) \; g_i(\ma{x}) \leq \bar{g}_i(\ma{x}, \ma{x}_0)$, $(ii) \; g_i(\ma{x}_0) = \bar{g}_i(\ma{x}_0, \ma{x}_0)$ and $(iii) \; \partial g_i(\ma{x}_0) / \partial \ma{x} = \partial \bar{g}_i(\ma{x}_0, \ma{x}_0) / \partial \ma{x}$.
Then, upon the feasibility of an initial value $\ma{x}^{[0]}$, the sequence of approximate convex optimization problems
\begin{align} \label{lemma_SIA_convergence_approx}
{\ma{x}^{[k]}}^{\star} \leftarrow \underset{\ma{x}^{[k]} }{\text{argmin}} \;\; & \bar{g}_0 \left(\ma{x}^{[k]}, {\ma{x}^{[k-1]}}^{\star}\right) \;\; \text{s.t.} \;\; \bar{g}_i \left(\ma{x}^{[k]}, {\ma{x}^{[k-1]}}^{\star}\right) \leq 0, \;\; i \in \{1,\ldots,{I}\},
\end{align}
\revPeterNew{converges} to a point satisfying the KKT conditions of the original problem (\ref{lemma_SIA_convergence}).
\end{lemma}
\begin{proof}
The proof follows from two observations. Firstly, the sequence $g_0 ({\ma{x}^{[k]}}^\star)$ necessarily converges. This is observed from the chain of inequalities $0 \leq g_0 ({\ma{x}^{[k]}}^\star) \leq g_0 ({\ma{x}^{[k-1]}}^\star) \leq \cdots \leq g_0 ({\ma{x}^{[0]}})$, as the optimal objective value is upper-bounded by any feasible value. Secondly, the approximate problem (\ref{lemma_SIA_convergence_approx}) shares the same set of KKT conditions as (\ref{lemma_SIA_convergence}) at the point of convergence, due to the stated properties $(i)$-$(iii)$. The detailed proof of the latter case is articulated in~\cite{marks1978technical}; also see \cite{rev_1_2} for a similar discussion.
\end{proof}
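A generic sketch of the recursion (\ref{lemma_SIA_convergence_approx}) is given below; \texttt{approx\_solve} is an assumed callback that builds and optimally solves the convex surrogate around the previous iterate, and the stopping rule is our own illustrative choice.
\begin{verbatim}
def sia(approx_solve, x0, tol=1e-8, max_iter=100):
    # successive inner approximation: solve a convex surrogate built at
    # the previous iterate until the objective value stabilizes
    x, f_prev = x0, float("inf")
    for _ in range(max_iter):
        x, f = approx_solve(x)      # returns argmin and value of the surrogate
        if abs(f_prev - f) <= tol:  # monotone and bounded => convergent
            break
        f_prev = f
    return x
\end{verbatim}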
\begin{lemma} \label{lemma_OuterLoopConvergence}
The properties $(i)-(iii)$ stated in Lemma~\ref{lemma_SIA_Convergence} hold for the approximation introduced in (\ref{eq_SEE_lower_bound_SIA}).
\end{lemma}
\begin{proof}
The tightness property $(ii)$ and the shared-slope property $(iii)$ follow directly from the nature of the inequality (\ref{eq_SEE_max_SEE_Taylor}) as the first-order Taylor approximation of the differentiable convex term $- \text{log} \left| \ma{X} \right|$. The global lower-bound property $(i)$ holds since any first-order Taylor approximation of a convex function is also a global lower bound, see \cite[Subsection~3.1.3]{BV:04}.
\end{proof}
The combination of Lemma~\ref{lemma_SIA_Convergence} and Lemma~\ref{lemma_OuterLoopConvergence} establishes the convergence of the SUIAP algorithm to a point satisfying the KKT optimality conditions\footnote{However, due to the non-convex nature of the underlying problem, the global optimality of the converging point may not be theoretically guaranteed, and the obtained solution depends on the used initialization. In Subsection~\ref{sec_AlgorithmAnalysis}, the optimal performance is numerically evaluated by repeating the SUIAP algorithm with several initializations. Although the optimality gap of the obtained solution may not be theoretically guaranteed, it is observed that the proposed initialization leads to a negligible gap relative to the numerically obtained performance benchmark. \label{remarl_SUIAP_optimalitagap}}.
\vspace{-2mm} \vspace{-0mm}
%
\subsubsection{Computational complexity}
The computational complexity of the algorithm is dominated by the steps of the determinant maximization in the inner loop. A general form of a MAX-DET problem is defined as
\begin{align}
\underset{\ma{z}}{\text{min}} \;\; \ma{p}^T \ma{z} + \text{log}\left|{\ma{Y}(\ma{z})}^{-1}\right|, \;\; {\text{s.t.}} \;\; {\ma{Y}(\ma{z})} \succ 0,\; {\ma{F}(\ma{z})} \succeq 0,
\end{align}
where $\ma{z}\in \real^n$, and ${\ma{Y}(\ma{z})} \in \real^{n_Y \times n_Y} := \ma{Y}_0 + \sum_{i=1}^{n} {z_i \ma{Y}_i}$ and $ {\ma{F}(\ma{z})} \in \real^{n_F \times n_F} := \ma{F}_0 + \sum_{i=1}^{n} {z_i \ma{F}_i} $.
An upper bound to the computational complexity of the above problem is given as
\begin{align} \label{SUAIP_complexity}
\mathcal{O}\Big( \gamma_{\text{in}} \sqrt{n} \big(n^2 + n_Y^2 \big) n_F^2 \Big),
\end{align}
see \cite[Section~10]{vandenberghe1998determinant}. In our problem, $n = 2N_{A}^2 + N_{B}^2$ represents the dimension of the real-valued scalar variable space, while $n_Y = 2M_{B} + 2M_{E}$ and $n_F = 2N_{B} + 4N_{A} + 2$ represent the dimensions of the determinant operation and the constraint space, respectively. \vspace{-1mm}
\begin{remark}
The above analysis intends to show how the bounds on computational complexity are related to different problem dimensions. Nevertheless, the computational load may vary in practice, depending on the implementation, the used numerical solver, and the number of optimization iterations required to obtain convergence. Please see Subsection~\ref{sec_AlgorithmAnalysis} for a numerical analysis on the algorithm computational complexity.
\end{remark}
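As a rough illustration of how the bound (\ref{SUAIP_complexity}) scales with the system dimensions, it can be evaluated for a sample configuration; the antenna numbers below are arbitrary examples.
\begin{verbatim}
N_A = N_B = M_B = M_E = 4                # illustrative antenna numbers
n   = 2 * N_A**2 + N_B**2                # real-valued scalar variable space
n_Y = 2 * M_B + 2 * M_E                  # determinant dimension
n_F = 2 * N_B + 4 * N_A + 2              # constraint space dimension
per_inner_iter = n**0.5 * (n**2 + n_Y**2) * n_F**2
print(f"O({per_inner_iter:.2e}) arithmetic operations per inner iteration")
\end{verbatim}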
\begin{algorithm}[H]
{{ \begin{algorithmic}[1]
\State{$l,k \leftarrow {0}; \; \lambda^{[0,0]} \leftarrow {0}; \; \mathbb{Q}^{[0]} \leftarrow \text{Subsection~\ref{sec_SUIAP_init}}$} ; \Comment{initialization}
\Repeat \Comment{outer loop}
\State{$l \leftarrow l + 1; \; {\lambda^{[l,0]}}^\star \leftarrow {\lambda^{[l-1,k]}}^\star ; \; \mathbb{Q}^{[l,0]} \leftarrow {\mathbb{Q}^{[l-1,k]}}^\star ; \; k \leftarrow 0; $}
\Repeat \Comment{inner loop (Dinkelbach alg.)}
\State{$k \leftarrow k + 1;$}
\State{$ \left\{ {\mathbb{Q}^{[l,k]} }^\star , {\lambda^{[l,k]}}^\star \right\} \leftarrow \text{(\ref{Dinkelbach_SEE_1}), (\ref{Dinkelbach_SEE_2})} ; $}
\State{$C \leftarrow C_{LB,ab} \left({\mathbb{Q}^{[l,k]}}^\star , {\mathbb{Q}^{[l-1]}}^\star \right) - {\lambda^{[l,k-1]}}^\star P_{\text{tot}}\left({\mathbb{Q}^{[l,k]}}^\star \right);$}
\Until{$C \leq C_{\text{min}}$}
\Until{${\lambda^{[l,k]}}^\star - {\lambda^{[l,0]}}^\star \leq \lambda_{\text{min}} $}
\State{\Return$\left\{{\mathbb{Q}^{[l,k]}}^\star , {\lambda^{[l,k]}}^\star \right\}$}
\end{algorithmic} }}
\caption{\scriptsize{SUIAP algorithm for SEE maximization. $C_{\text{min}}$ ($\lambda_{\text{min}}$) represents the convergence threshold for the inner (outer) iterations.} } \label{SUIAP}
\end{algorithm}
%
\subsection{Extended SUIAP for bidirectional-SEE maximization}
{\MinorRevOmid{By employing the results of Lemma~\ref{lemma_BD_Positive_C},}} it is observed that the SEE maximization problem (\ref{opt_SEEMax_BD_Relaxed}) shares a similar mathematical structure in relation to the transmit covariance matrices, i.e., $\ma{Q}_{\mathcal{X}}, \ma{W}_{\mathcal{X}}$, ${\mathcal{X}} \in \{a,b\}$ as addressed for (\ref{problem_SEE_max_1}). \revPeterNew{Hence, a procedure similar to \revPeter{the} SUIAP algorithm can be} {\MinorRR{employed to obtain an iterative solution, with a guaranteed convergence to a point satisfying KKT conditions.}} The computational complexity of each Dinkelbach step is obtained similar to (\ref{SUAIP_complexity}), where $n = 2N_{A}^2 + 2N_{B}^2 $, $n_Y = 2M_{B} + 2M_{A} + 2M_{E}$ and $n_F = 4N_{B} + 4N_{A}+2$.
\subsection{Algorithm analysis} \label{sec_AlgorithmAnalysis}
Due to the iterative structure of the proposed algorithms and the possibility of local optimum points, the convergence behavior of the algorithms is of high interest, both as a verification of the algorithm operation and as an indication of the required computational effort. In this part, the performance of the SUIAP and SSSLM algorithms is studied in terms of the average convergence behavior and computational complexity. Moreover, the impact of the choice of the algorithm initialization is evaluated.
In Fig.~\ref{fig_alg_analysis} (a), the average convergence behavior of the SUIAP algorithm is depicted. As expected, a monotonic objective improvement is observed, with convergence in $5$-$20$ total outer iterations.
In Figs.~\ref{fig_alg_analysis}~(b)-(c), the impact of the proposed initializations for the SUIAP and SSSLM algorithms is depicted. In Fig.~\ref{fig_alg_analysis}~(b) it is observed that for the SUIAP algorithm, the proposed initialization in Subsection~\ref{sec_SUIAP_init} reaches close to the benchmark performance\footnote{The benchmark performance is obtained by repeating the algorithm with several random initializations, and choosing the highest obtained SEE.}. For the SSSLM algorithm, the situation is prone to more randomness. This is because, in addition to the choice of the algorithm initialization, the solution depends on the channel realizations used in the construction of the SAA, see (\ref{Eq_SSUM_SAA}). In this regard, the cumulative distribution function (CDF) of the obtained SEE values is depicted in Fig.~\ref{fig_alg_analysis}~(c), obtained by examining $100$ instances of the SSSLM algorithm. It is observed that the resulting average SEE differs for different solution instances, however, within a $2$-$3\%$ relative difference. This value is smaller for a system with HD nodes, due to the absence of FD jamming and of the impact of residual self-interference, which results in a simpler problem structure.
\begin{table}[!t]
\centering
\renewcommand{\arraystretch}{1.1}
\caption{Average CPU Time}\label{tab_net_params}
\centering \vspace{-3mm}
\begin{tabular}[t]{||c|c|c|c|c||}
\hline
Algorithm & SUIAP-HD & SUIAP-FD & SSSLM-HD & SSSLM-FD \\
\hline
\begin{tabular}{c} CPU Time [s] \\ (Initialization) \end{tabular} & \begin{tabular}{c} 17.4 \\ (1.2) \end{tabular} & \begin{tabular}{c} 31.1 \\ (1.5) \end{tabular} & \begin{tabular}{c} $1.39\times 10^4$ \\ (17.7) \end{tabular} & \begin{tabular}{c} $6.8\times 10^4$ \\ (32.2) \end{tabular} \\ \hline
\end{tabular} \vspace{-4mm}
\end{table}
The required average CPU time\footnote{The reported CPU time is obtained using an Intel Core i$5$ $3320$M processor with a clock rate of $2.6$~GHz and $8$~GB of random-access memory (RAM). As software platform we have used CVX~\cite{YGB:08}, together with MATLAB $2013$a on a $64$-bit operating system.} for the SSSLM and SUIAP algorithms, applied to the HD and FD scenarios, is reported in Table~\ref{tab_net_params}. Moreover, the CPU time associated with the proposed initialization methods, which can be considered as intuitive, sub-optimal, but practical algorithms in each case, is given in parentheses in the second row. It is observed that a design with FD-enabled jamming results in a higher CPU time, due to the additional problem complexity associated with the choice of the jamming strategy and the residual self-interference.
\vspace{0mm}
\subsection{Performance comparison} \label{Sim_benchmarks}
In this part the SEE performance of the FD-enabled system is evaluated via the application of the proposed SUIAP and SSSLM algorithms, under different system conditions. In particular, we are interested in comparing the performance of an FD-enabled setup to the case where all nodes operate in HD mode. Moreover, the evaluation of the proposed SEE-specific designs is of interest, in comparison to the available designs which target the maximization of the system's secrecy capacity. The following benchmarks are hence implemented to provide a meaningful comparison.
\MinorRevOmid{\begin{itemize}[leftmargin=*]
\item \textit{SEE-FD}:~The proposed SUIAP (SSSLM) algorithm is implemented using the exact (statistical) CSI, where Bob is capable of FD operation.
\item \textit{SEE-HD}:~Similar to \textit{SEE-FD}, but with the operation of the nodes restricted to the HD mode.
\item \textit{CS-FD}:~The design intended to maximize the secrecy capacity, where Bob is capable of FD operation.
\item \textit{CS-HD}:~Similar to \textit{CS-FD}, but with the operation of the nodes restricted to the HD mode.
\end{itemize} }
\vspace{-0mm}
\subsubsection{FD-enabled jamming with exact CSI}
\input{./Sections/sim_SUAIP}
In Figs.~\ref{fig_SUAIP}~(a)-(h) the average SEE performance of the defined benchmarks is evaluated, assuming the availability of perfect CSI and FD operation at Bob. Hence, both Alice and Bob are simultaneously capable of transmitting AN, see Fig.~\ref{fig_model}.
In Fig.~\ref{fig_SUAIP}~(a) the impact of the thermal noise variance is depicted. It is observed that a higher $\sigma_{\text{n}}^2$ results in a smaller SEE for both the FD and HD setups. Moreover, only a marginal gain of the FD setup over the HD setup is obtained unless the noise variance is low. This is expected, since FD jamming becomes less effective when Eve is already distorted with a high thermal noise power.
In Fig.~\ref{fig_SUAIP}~(b) the impact of the available transmit power budget ($P_{\text{max}}$) at each transceiver is depicted. It is observed that for small values of $P_{\text{max}}$, the resulting SEE is monotonically increasing in $P_{\text{max}}$. Moreover, the performance of the benchmark algorithms essentially converges for small values of $P_{\text{max}}$. This is grounded in the fact that in a low-SNR condition, the positive impact of FD jamming disappears, as observed from Fig.~\ref{fig_SUAIP}~(a). Conversely, for large values of $P_{\text{max}}$, the traditional designs result in a rapid decrease of the SEE, whereas the proposed SUIAP method converges to a constant value. This is expected, since the designs targeting the maximization of the secrecy rate utilize the maximum available power budget, resulting in a severe degradation of the SEE. Moreover, a visible gain is observed with the application of an FD jammer in the high-$P_{\text{max}}$ region. Due to a high $P_{\text{max}}$, the link from Alice to Eve also enjoys a higher SNR, which justifies the application of an FD jammer.
In Fig.~\ref{fig_SUAIP}~(c) the impact of the transceiver accuracy is depicted. As expected, a higher value of $\kappa$ results in a smaller achievable SEE, both in the HD and FD setups. Moreover, it is observed that FD jamming can be beneficial only for a system with an accurate hardware operation, due to the impact of residual self-interference. However, the results show that targeting the SEE as the design objective leads to a significant energy efficiency gain, compared to the traditional designs which target the maximization of the secrecy rate.
In Fig.~\ref{fig_SUAIP}~(d) the impact of Eve's distance to Alice ($d_E$) is depicted. It is assumed that the three nodes are positioned on a line with a total Alice-Bob distance of $100$, where Eve is positioned in between. It is observed that the system's SEE increases as $d_E$ increases and Eve gets closer to Bob. Moreover, the application of FD jamming becomes beneficial only when Eve is located at a close distance to Bob, such that the channel between Bob and Eve, i.e., the jamming channel, is strong.
In Fig.~\ref{fig_SUAIP}~(e) the impact of the number of antenna elements at Eve ($M_E$) on the SEE is depicted. As expected, a larger $M_E$ results in a reduced SEE, since it strengthens the Alice-Eve channel. Moreover, the application of an FD jammer becomes gainful for higher values of $M_E$, in order to counteract the improved reception capability of Eve.
In Figs.~\ref{fig_SUAIP}~(f)-(h), the impact of the transceiver's power efficiency on the resulting system SEE is evaluated. In particular, the impacts of the zero-state power consumption ($P_{0}$) and of the PA efficiency ($\mu$) are depicted in Figs.~\ref{fig_SUAIP}~(f) and (g), respectively. The impact of the additional power consumption for SIC ($P_{\text{FD}}$) on the system SEE is depicted for different noise regimes in Fig.~\ref{fig_SUAIP}~(h), where the two constant red lines represent the SEE for the HD setup. It is observed that higher (lower) values of $\mu$ ($P_{0}, P_{\text{FD}}$) result in a higher SEE. Moreover, a marginal gain with the application of an FD jammer is obtained under high-$\mu$ and small-$P_{\text{FD}}$ conditions. This is expected, since a small (large) value of $\mu$ ($P_{\text{FD}}$) results in a larger waste of power when using an FD jamming strategy.
\vspace{-0mm}
\subsubsection{Secure bidirectional communication}
\input{./Sections/sim_SUAIP_BD}
In Fig.~\ref{PerfCSI_BD_Pmax} a system with a bidirectional secure communication between Alice and Bob is studied. In particular, a joint FD operation at Alice and Bob is considered, which enables simultaneous jamming and communication in both directions. Two scenarios are considered regarding the decoding capability at Eve: \emph{i}) Eve treats the interference from the non-intended information path as noise~(corresponding to $\rho=1$), and \emph{ii}) Eve is capable of decoding, and hence subtracting, the received signal from the non-intended information link~(corresponding to $\rho=0$). Moreover, a setup with an HD Bob and an HD Alice is also evaluated, where time-division duplexing (TDD) or frequency-division duplexing (FDD) is employed in order to facilitate a bidirectional communication.
It is observed that the resulting SEE increases with $P_{\text{max}}$, but saturates for high values of the maximum transmit power. Moreover, it is observed that a joint FD operation is capable of enhancing the system SEE by a considerable margin in the studied bidirectional setup. This is because, due to the coexistence of both communication directions on the same channel, the jamming power is re-used for both directions, leading to a higher SEE compared to the HD setup. Moreover, Eve's decoding capability is further degraded in the FD setup under scenario (\emph{i}), due to the existence of two information links on the same channel.
\vspace{-0mm}
\subsubsection{FD-enabled jamming with statistical CSI}
\input{./Sections/sim_SSUM}
In Fig.~\ref{fig_SSUM_2} the cumulative distribution function (CDF) of the resulting SEE is evaluated via the application of the SSSLM algorithm on $100$ problem instances\footnote{Each problem instance includes a realization of $\ma{H}_{ab},\ma{H}_{bb}$.}, where only statistical CSI is available for the channels to Eve. We choose $|\mathbb{G}_C| = 100$ in the construction of the SAA, see Section~\ref{sec_SSUM}, in order to limit the required computational effort. The CDF of the resulting SEE is then evaluated via the utilization of $10000$ channel realizations for each problem instance, following the statistical distribution defined in the beginning of the current section and choosing $\ma{D}_{\mathcal{X}}$ as a matrix of all-$1$ elements. \\
In Fig.~\ref{fig_SSUM_2}~(a) the performance of the SSSLM algorithm, which takes the statistical CSI into consideration, is compared to the case where the SUIAP algorithm is applied directly on the channel estimate matrices $\tilde{\ma{H}}_{\mathcal{X}}$. It is observed that a significant gain is obtained by taking into account the full channel statistics, however, at the expense of a higher computational complexity. Moreover, the superior SEE performance of the SEE-specific design, compared to the secrecy-rate-maximizing designs, is observable.
In Figs.~\ref{fig_SSUM_2}~(b)-(d) the CDF of the resulting SEE is evaluated for different levels of thermal noise ($\sigma_{\text{n}}^2$), hardware inaccuracy ($\kappa$), and the PA efficiency ($\mu$). Similar to the observed trends for the scenario where exact CSI is available, a marginal gain is observed in the resulting SEE with the application of an optimized FD jamming strategy. In particular, the gain of the FD-enabled system is improved for a system with a high SNR, i.e., a high transmit power budget or a low noise level, and as hardware accuracy increases.
\subsection{Successive selection and statistical lower bound maximization (SSSLM)}
In order to address the aforementioned challenges, we propose a successive selection and statistical lower bound maximization (SSSLM) algorithm, which converges to a stationary point of (\ref{problem_SSUM_2})\footnote{\revOmid{Please note that in contrast to Subsection~\ref{subsec_SUIAP}, the operating objective in this part is not a differentiable one, \revPeterNew{hence it violates the conditions given by SIA \cite{marks1978technical}.} In this regard we follow a variation of SIA, i.e., the successive upper-bound minimization (SUM) method \cite{razaviyayn2013unified}, generalizing the convergence arguments in SIA-based methods for non-smooth problems. The proposed SSSLM algorithm is composed of three nested loops: Separation of the SAA into smooth and non-smooth parts at the outer loop, construction of an effective lower bound to SAA as the intermediate loop, and maximization of the constructed bound in the inner loop.}}.
A detailed description of the algorithm steps is given in the following.
\subsubsection{Initialization} \label{subsec_SSUM_init}
The algorithm starts by generating the channel instances $\ma{H}_{ae,i}, \ma{H}_{be,i} , \forall i \in \mathbb{G}_C$, drawn from the known statistical distribution of the channels. The number of channel realizations, i.e., $\left| \mathbb{G}_C \right|$, should be chosen large enough to capture the channel statistics in the SAA with adequate accuracy, while being kept as small as possible to reduce the computational complexity. An analytical guideline for the choice of $\left| \mathbb{G}_C \right|$ is given in \cite[Theorem 5.18]{ruszczynski2003stochastic}, depending on the required statistical accuracy and the given probability distribution. For the initialization of $\mathbb{Q}$, we follow the approximation
\begin{align}
\mathbb{E}_{\ma{H}_{ae}, \ma{H}_{be}} \left\{ {\text{SEE}} \left(\mathbb{Q}, \ma{H}_{ae}, \ma{H}_{be} \right) \right\} \approx {\text{SEE}} \left(\mathbb{Q}, \mathbb{E} \left\{ \ma{H}_{ae} \right\}, \mathbb{E}\left\{ \ma{H}_{be}\right\} \right),
\end{align}
where the expectations $\mathbb{E} \left\{ \ma{H}_{ae} \right\}, \mathbb{E} \left\{ \ma{H}_{be} \right\}$ are obtained from the statistical distribution of the channels. Note that the right side of the approximation corresponds to the objective addressed in Subsection~\ref{subsec_SUIAP}, where the \revOmid{SUIAP algorithm} is applied. The obtained solution from \revOmid{SUIAP} is then used as an initialization to the SSSLM algorithm. \vspace{-2mm}
\subsubsection{Outer loop}
In each outer iteration, the objective is decomposed as
\begin{align} \label{SSUM_decomposition}
\text{SAA} \left( \mathbb{Q} \right) = \frac{\sum_{i \in \mathbb{G}_{{C_1}}} \left\{\tilde{C}_{\text{s},i} \left(\mathbb{Q} \right) \right\}^+ + \sum_{i \in \mathbb{G}_{{C_2}}} \left\{ \tilde{C}_{\text{s},i} \left(\mathbb{Q} \right) \right\}^+ }{|\mathbb{G}_C| P_{\text{tot}} \left(\mathbb{Q} \right) }
\end{align}
by separating the set of channel realizations into the disjoint sets $\mathbb{G}_{{C_1}}$ and $\mathbb{G}_{{C_2}}$, such that $\mathbb{G}_{{C}} = \mathbb{G}_{{C_1}} \cup \mathbb{G}_{{C_2}}$. In particular, the set $\mathbb{G}_{{C_1}}$ is updated in each outer iteration as
\begin{align} \label{SSUM_def_F_C_1}
\mathbb{G}_{{C_1}}^{(\text{new})} \leftarrow \left\{ \forall i \;\; \vert \;\; i \in {\mathbb{G}_{{C_1}}} \;\; \text{or} \;\; \tilde{C}_{\text{s},i} \left( \mathbb{Q} \right) = 0 \right\},
\end{align}
where $\mathbb{Q}$ is given from the last intermediate loop; this results in the separation of the smooth and non-smooth parts of the objective in (\ref{SSUM_decomposition}), see the sketch of the set update below. The algorithm converges when the constructed set ${\mathbb{G}_{{C_1}}}$ does not change. As will be elaborated, the set members in $\mathbb{G}_{{C_1}}$ incur a high computational complexity, but are capable of resolving the non-smooth points by maintaining the same directional derivative as the SAA. On the other hand, the set members in $\mathbb{G}_{{C_2}}$ are resolved with a lower computational complexity, however, they are not capable of handling non-smooth situations. \vspace{-2mm}
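A minimal sketch of the set update (\ref{SSUM_def_F_C_1}) is given below; detecting the non-smooth point $\tilde{C}_{\text{s},i} = 0$ numerically requires a tolerance, which is our own implementation choice.
\begin{verbatim}
def update_G_C1(G_C1, C_tilde, tol=1e-9):
    # C_tilde: dict mapping each realization index i in G_C to the
    # current value of the secrecy-rate term C~_{s,i} at the iterate Q
    return set(G_C1) | {i for i, c in C_tilde.items() if abs(c) < tol}
\end{verbatim}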
\subsubsection{Intermediate loop}
In each intermediate iteration a lower bound to the original objective SAA, namely $\text{SAA}_{{LB}}$, is constructed using the value of $\mathbb{Q}$ from the last inner loop, i.e., $\mathbb{Q}_0$. In order to construct $\text{SAA}_{{LB}}$ we undertake three steps. Firstly, the operator $\{\cdot\}^{+}$ is removed from the SAA for $i \in \mathbb{G}_{{C_2}}$, which results in a global lower bound. Secondly, concave and tight lower bounds on the functions $\tilde{C}_{\text{s},i}$ are constructed at the point $\mathbb{Q}_0$, denoted as $\hat{C}_{\text{s},i} \left(\mathbb{Q} , \mathbb{Q}_0 \right)$, by applying the inequality (\ref{eq_SEE_max_SEE_Taylor}) to the convex parts. Please note that the value of $\tilde{C}_{\text{s},i}$ may be negative at $\mathbb{Q}_0$ for some $i\in \mathbb{G}_{C_2}$, resulting in a bias with respect to the original objective. Hence, thirdly, in order to obtain a tight lower bound, we define the set
\begin{align} \label{SSUM_set_def_C_2_+}
{\mathbb{G}_{{C}_2^+}} := \left\{ \forall i \;\; \vert \;\; i \in {\mathbb{G}_{{C_2}}} , \; \tilde{C}_{\text{s},i} \left( \mathbb{Q}_0 \right) \geq 0 \right\} ,
\end{align}
representing the subset of channel realizations resulting in a non-negative $\tilde{C}_{\text{s},i}$ at $\mathbb{Q}_0$. The corresponding lower bound function is then obtained as
\begin{align} \label{SSUM_SAA_LB}
& \text{SAA}_{{LB}} \left(\mathbb{Q} , \mathbb{Q}_0 \right) := \frac{ \sum_{i \in \mathbb{G}_{{C_1}}} \left\{ \hat{C}_{\text{s},i} \left(\mathbb{Q} , \mathbb{Q}_0 \right) \right\}^+ + \sum_{i \in \mathbb{G}_{{C_2^+}}} \hat{C}_{\text{s},i} \left(\mathbb{Q} , \mathbb{Q}_0 \right) }{ |\mathbb{G}_{{C}}| P_{\text{tot}} \left(\mathbb{Q} \right) }.
\end{align}
It can be verified that the constructed lower bound is tight at the point of approximation, i.e., $\text{SAA} \left(\mathbb{Q}_0\right) = \text{SAA}_{LB} \left(\mathbb{Q}_0, \mathbb{Q}_0\right)$, see Appendix~\ref{appendix_Lemma_SSUM_convergence}. The obtained lower bound is then optimally maximized in the inner loop. The iterations of the intermediate loop converge when $\mathbb{Q}_0$, and hence $\text{SAA}_{{LB}}$, (almost) does not change in subsequent intermediate iterations.
\vspace{-2mm}
\subsubsection{Inner loop}
The inner loop is dedicated to optimally maximizing $\text{SAA}_{{LB}}$, under the original problem constraints (\ref{problem_SSUM_2}). Note that $\text{SAA}_{{LB}}$ is not tractable in its current form, due to the $\{\cdot\}^+$ operation. In order to obtain the optimum solution we equivalently write the maximization problem in the inner loop as
\begin{align} \label{SSUM_inner_SAA_bar}
\underset{ a_i \in\{0,1\}, i \in \mathbb{G}_{C_1}}{\text{max}} \;\; \underset{ \mathbb{Q}}{\text{max}} \;\; & \overbar{\text{SAA}_{{LB}}}, \;\;\; {\text{s.t.}} \;\;\; \text{(\ref{problem_SEE_max_1_c})},
\end{align}
where $\overbar{\text{SAA}_{{LB}}}$ is obtained by replacing the terms $\left\{ \hat{C}_{\text{s},i}\right\}^+$ in (\ref{SSUM_SAA_LB}) by $a_i \hat{C}_{\text{s},i}$. Please note that for fixed values of $a_i$, $i\in \mathbb{G}_{C_1}$, the function $\overbar{\text{SAA}_{{LB}}}$ is a concave-over-affine fraction, and can be maximized to optimality via the application of the Dinkelbach algorithm. Hence, (\ref{SSUM_inner_SAA_bar}) can be solved by repeating the Dinkelbach algorithm for all $2^{|\mathbb{G}_{C_1}|}$ possible combinations of $a_i$, $i\in \mathbb{G}_{C_1}$, which, however, requires a large number of Dinkelbach iterations. The optimization problem corresponding to the $k$-th inner iteration is expressed as
\begin{subequations} \label{problem_SSUM_max_Dink}
\begin{align}
\underset{\ma{a}^{[k]} \in \mathbb{A}^{[k]} }{ \text{max}} \underset{\mathbb{Q}^{[k]}}{ \text{max}} \;\; & \overbar{{\text{SAA}}_{\overline{LB}}} \left(\mathbb{Q}^{[k]} , \mathbb{Q}^{[0]}, \ma{a}^{[k]} \right) - \lambda^{[k-1]} P_{\text{tot}} \left(\mathbb{Q}^{[k]} \right) \label{problem_SSUM_max_Dink_a} \\
{\text{s.t.}} \;\;\; & \text{(\ref{problem_SEE_max_1_c})}.
\end{align}
\end{subequations}
where $\overbar{{\text{SAA}}_{\overline{LB}}}$ is the numerator of $\overbar{\text{SAA}_{{LB}}}$, and $\mathbb{Q}^{[0]}$ is the point used for the construction of ${\text{SAA}}_{{LB}}$, given from the intermediate loop. Moreover, the vector $\ma{a} \in \{0,1\}^{|\mathbb{G}_{C_1}|}$ stacks the values of $a_i, \forall i \in \mathbb{G}_{C_1}$, and $\mathbb{A}^{[k]} \subseteq \{0,1\}^{|\mathbb{G}_{C_1}|}$. It is observed that for a given $\ma{a}^{[k]}, \lambda^{[k-1]}$, (\ref{problem_SSUM_max_Dink}) is a jointly convex optimization problem, and is solved to optimality via the MAX-DET algorithm \cite{vandenberghe1998determinant}. Hence, the optimum $\ma{a}^{[k]}, \mathbb{Q}^{[k]}$ are obtained by repeating the MAX-DET algorithm for all combinations $\ma{a}^{[k]} \in \mathbb{A}^{[k]}$. The value of $\lambda$ is then updated by applying the obtained $\mathbb{Q}^{[k]}, \ma{a}^{[k]}$ as
\begin{align}\label{problem_SSUM_max_Dink_lambda}
\lambda^{[k]} = \overbar{\text{SAA}_{\overline{LB}}} \left(\mathbb{Q}^{[k]} , \mathbb{Q}^{[0]}, \ma{a}^{[k]} \right)/ P_{\text{tot}} \left(\mathbb{Q}^{[k]} \right).
\end{align}
Please note that the set $\mathbb{A}^{[k]}$ is initialized as $\{0,1\}^{|\mathbb{G}_{C_1}|}$ and is reduced in each iteration. The following lemma clarifies this reduction. \vspace{-3mm}
\begin{lemma} \label{lemma_SSUM_ModifiedDinkelbach}
Let $g_k (\ma{a}_0)$ be the optimal value of the objective (\ref{problem_SSUM_max_Dink}) at inner iteration $k$, for the given combination $\ma{a}^{[k]} = \ma{a}_0$.
Then, if $g_k(\ma{a}_0)$ is negative, the combination ${\ma{a}}_0$ cannot be an optimal combination.
\end{lemma} \vspace{-3mm}
\begin{proof}
Due to the monotonic increase of $\lambda$ in every iteration, and the fact that $P_{\text{tot}} \geq 0$, the value of $g_k (\ma{a}_0)$ will never improve in further iterations. This also implies a negative value of $g_k (\ma{a}_0)$ at optimality. Since at least one of the combinations $\ma{a} \in \{0,1\}^{|\mathbb{G}_{C_1}|}$ drives the objective to zero at optimality, the combination ${\ma{a}_0}$ will never be optimal.
\end{proof}
As a result of Lemma~\ref{lemma_SSUM_ModifiedDinkelbach}, once a combination $\ma{a}_0$ results in a negative value of the objective, it can safely be removed from $\mathbb{A}$ for the subsequent iterations, see Algorithm~\ref{alg_SSSLM}. Note that the above process reduces the required computational complexity, compared to separately applying the Dinkelbach method to all combinations, in two ways. Firstly, the parameter $\lambda$ is updated only once, jointly for all combinations $\ma{a} \in \mathbb{A}$. Secondly, the monotonic reduction of $|\mathbb{A}|$ in each iteration results in a smaller computational demand for finding the solution to (\ref{problem_SSUM_max_Dink}).
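The combination bookkeeping of the pruned inner loop can be sketched as follows; \texttt{solve\_det\_max} (a solver for the convex subproblem (\ref{problem_SSUM_max_Dink}) at a fixed binary selection) and \texttt{P\_tot} (the affine power model) are assumed callbacks, and the loop structure is illustrative rather than a complete implementation.
\begin{verbatim}
import itertools

def pruned_dinkelbach(solve_det_max, P_tot, n1, tol=1e-6, max_iter=50):
    A = list(itertools.product((0, 1), repeat=n1))  # all 2^{|G_C1|} combinations
    lam = 0.0
    for _ in range(max_iter):
        res = {a: solve_det_max(a, lam) for a in A}     # (Q, objective) per combo
        a_s = max(res, key=lambda a: res[a][1])
        Q_s, g_s = res[a_s]
        if g_s <= tol:                                  # Dinkelbach convergence
            return Q_s, a_s, lam
        A = [a for a in A if res[a][1] >= 0.0]          # prune: never optimal
        lam = (g_s + lam * P_tot(Q_s)) / P_tot(Q_s)     # joint parameter update
    return Q_s, a_s, lam
\end{verbatim}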
\vspace{-1mm}
\subsubsection{Convergence}
The proposed SSSLM algorithm converges to a stationary point of the original optimization problem (\ref{problem_SSUM_2}). In order to observe this, we first verify the convergence of the algorithm. Afterwards, we show that the converging point is a stationary point of (\ref{problem_SSUM_2}). \par
It is observed that the constructed lower bound in each step of the intermediate loop is maximized to optimality via the application of the modified Dinkelbach algorithm. On the other hand, the value of ${\text{SAA}}_{{LB}} ( \mathbb{Q})$ after the construction of the new lower bound in each intermediate iteration experiences an improvement. This is grounded in the re-calculation of $\hat{C}_{\text{s},i}$ at the point of approximation, and the elimination of the channel instances from $\mathbb{G}_{C_2}$ which result in a negative $\tilde{C}_{\text{s},i}$. Since both of the aforementioned updates result in a monotonic improvement of ${\text{SAA}}_{{LB}} ( \mathbb{Q})$, and as the $\text{SAA}$ is bounded from above, the iterations of the inner and intermediate loops necessarily converge. The convergence of the intermediate loop subsequently results in the necessary convergence of the outer loop, due to the monotonic increase of $|\mathbb{G}_{C_1}|$ after each outer iteration, and the fact that $|\mathbb{G}_{C_1}| \leq |\mathbb{G}_{C}|$. \par
In order to examine the properties of the converging point with respect to the original objective, we observe that neither the SAA nor ${\text{SAA}}_{{LB}} ( \mathbb{Q})$ is necessarily differentiable at the point of convergence. This invalidates the convergence arguments used for the SUIAP algorithm from \cite{marks1978technical}. In this regard, we follow the guidelines given by the SUM method \cite{razaviyayn2013unified}, generalizing the convergence arguments of SIA-based methods to non-smooth problems. \vspace{-2mm}
\begin{lemma} \label{lemma_SSUM_Convergence}
Let $\mathbb{Q}^\star$ be a solution of SSSLM. Then the function SAA, i.e., original problem objective, and ${\text{SAA}}_{{LB}}$, i.e., the constructed lower bound at the last intermediate iteration, are tight and share the same directional derivatives at $\mathbb{Q}^\star$.
\end{lemma} \vspace{-2mm}
\begin{proof}
See Appendix~\ref{appendix_Lemma_SSUM_convergence}.
\end{proof}
The results of Lemma~\ref{lemma_SSUM_Convergence}, together with the fact that ${\text{SAA}}_{{LB}} (\mathbb{Q}) \leq {\text{SAA}}(\mathbb{Q})$ for any feasible $\mathbb{Q}$, jointly satisfy the required assumption set \cite[Assumption~1]{razaviyayn2013unified}, and guarantee that the obtained converging point is indeed a stationary point of the original problem. \par
\vspace{-1mm}
\begin{remark}
Similar to the SUIAP algorithm, the global optimality of the obtained stationary point via the SSSLM algorithm may not be guaranteed, and the obtained solution depends on the used initialization, see Remark~\ref{remarl_SUIAP_optimalitagap}. In Subsection~\ref{sec_AlgorithmAnalysis}, the optimal performance is numerically evaluated by repeating the SSSLM algorithm with several initializations, where an average $1$-$3\%$ relative gap is observed for the proposed initialization in Subsection~\ref{subsec_SSUM_init}.
\end{remark}\vspace{0mm}
\subsubsection{Computational complexity}
The computational complexity of the algorithm is dominated by the maximization defined in (\ref{problem_SSUM_max_Dink}), solved via the MAX-DET algorithm in each inner iteration. The associated arithmetic complexity\footnote{The resulting computational complexity is dominated by the SAA sample size, due to the construction of (\ref{Eq_SSUM_SAA}), as well as by the iterations of the outer loop. In order to reduce the resulting computational effort, the algorithm can be customized to specific CSI statistics, e.g., by substituting the SAA with a more efficient structure. In this case, the achievable SEE must be approximated for the specific statistics in a tractable form, which is then used for the purpose of performance optimization in the design algorithm, e.g., see \cite{Ali_FD_FastFading} for a similar approach with Gaussian channels but for a different system objective. Another approach is to eliminate the operations in the outer loop when the occurrence of the non-smooth points is not frequent. Moreover, the obtained initialization point in Subsection~\ref{subsec_SSUM_init} can serve as an intuitive low-complexity solution.} is hence upper-bounded similarly to (\ref{SUAIP_complexity}), where $\gamma_{\text{in}} \propto 2^{|\mathbb{G}_{C_1}|}$, $n = 2N_{A}^2 + N_{B}^2$, $n_Y = |\mathbb{G}_{C}|(2M_{B} + 2M_{E})$, and $n_F = 4N_{A}+2N_{B}+2$. \vspace{-5mm}
\begin{algorithm}
\caption{{SEE maximization using statistical CSI, via successive selection and statistical lower bound maximization (SSSLM). $C_{\text{min}}$ ($\lambda_{\text{min}}$) represents the convergence threshold for the inner (intermediate) iterations. } } \label{alg_SSSLM}
{\scriptsize{ \begin{algorithmic}[1]
\State{$k,l,m,\lambda^{[0,0,0]} \gets {0} ; \mathbb{G}_{C_1}^{[0]} \gets {\emptyset};\; \mathbb{G}_{C}, \mathbb{Q}^{[0,0,0]} \gets \text{Subsection~\ref{subsec_SSUM_init}} ;$} \Comment{initialize}
\Repeat \Comment{outer loop}
\State{$m \leftarrow m + 1 ; \; \lambda^{[0,0,m]} \leftarrow \lambda^{[k,l,m-1]}, \mathbb{Q}^{[0,0,m]} \leftarrow \mathbb{Q}^{[k,l,m-1]}; \; \mathbb{G}_{C_1}^{[m]} \leftarrow \text{(\ref{SSUM_def_F_C_1})} ; \; l \leftarrow 0;$}
\Repeat \Comment{intermediate loop}
\State{$l \leftarrow l + 1; \; {\mathbb{G}_{{C}_2^+}} \gets \text{(\ref{SSUM_set_def_C_2_+})} ; \text{SAA}_{{LB}} \gets \text{(\ref{SSUM_SAA_LB})} ;$}
\Repeat \Comment{inner loop }
\State{$k \leftarrow k + 1;\; \left\{\mathbb{Q}^{[k,l,m]}, \lambda^{[k,l,m]} \right\} \gets \text{Dinkelbach's alg. (\ref{problem_SSUM_max_Dink})-(\ref{problem_SSUM_max_Dink_lambda})};$}
\Until{$\text{(\ref{problem_SSUM_max_Dink_a})} \leq C_{\text{min}}$}
\Until{$\lambda^{[k,l,m]} - \lambda^{[0,l,m]} \leq \lambda_{\text{min}} $}
\Until{$\mathbb{G}_{C_{1}}^{[m]} = \mathbb{G}_{C_{1}}^{[m-1]} $}
\State{\Return$\left\{\mathbb{Q}^{[k,l,m]}, \lambda^{[k,l,m]}\right\}$}
\end{algorithmic} }} \vspace{-1mm}
\end{algorithm} \vspace{-0mm}
\subsection{Signal model} \label{sec_model_signalmodel}
The transmission from Alice includes the information-containing signal, intended for Bob, and \revPeter{an} AN signal\footnote{\revOmid{Unlike the data symbols, which follow a known constellation, the AN is generated from a pseudo-random sequence which is not known to the receivers, see \cite[Section~III]{4543070}. This ensures that Eve cannot decode the AN.}}, intended to degrade the reception by Eve. This is expressed as
\begin{align} \label{eq_model_tx_alice}
\ma{x}_a = \underbrace{\ma{q}_a + \ma{w}_a}_{\ma{u}_a} + \ma{e}_{\text{tx},a},
\end{align}
where $\ma{u}_a \in \compl^{{N_{A}}}$ is the intended transmit signal, $\ma{q}_a \sim \mathcal{CN} \left( \ma{0}_{N_{A}}, \ma{Q}_a \right)$ and $\ma{w}_a \sim \mathcal{CN} \left( \ma{0}_{N_{A}}, \ma{W}_a \right)$ respectively represent the information-containing and the AN signal, and $\ma{x}_a \in \compl^{N_{A}}$ is the combined transmitted signal from Alice. The transmit distortion, denoted as $\ma{e}_{\text{tx},a} \in \compl^{N_{A}}$, models the collective impact of the transmit chain inaccuracies, e.g., digital-to-analog converter noise, power amplifier (PA) noise, and oscillator phase noise, see Subsection~\ref{sec_model_diststatistics} for more details. Note that the role of hardware inaccuracies becomes important in a system with FD transceivers, due to the impact of the strong self-interference channel. Similar to the transmission from Alice, the transmission of AN by Bob is expressed as
\begin{align} \label{eq_model_tx_bob}
\ma{x}_b = \ma{w}_b + \ma{e}_{\text{tx},b},
\end{align}
where $\ma{w}_b \sim \mathcal{CN} \left( \ma{0}_{N_{B}}, \ma{W}_b \right)$ is the transmitted artificial noise and $\ma{e}_{\text{tx},b} \in \compl^{N_{B}}$ represents the transmit distortions from Bob. Via the application of (\ref{eq_model_tx_alice}) and (\ref{eq_model_tx_bob}) the received signal at Eve is expressed as
\begin{align} \label{eq_model_rx_eve}
\ma{y}_e &= \ma{H}_{ae}\ma{x}_a + \ma{H}_{be}\ma{x}_b + \ma{n}_e = \ma{H}_{ae}\ma{q}_a + \ma{c}_e ,
\end{align}
where $\ma{n}_e \sim \mathcal{CN} \left( \ma{0}_{M_E}, \sigma_{\text{n},e}^2 \ma{I}_{M_E} \right)$ is the additive thermal noise and
\begin{align} \label{eq_model_collective_interf}
\ma{c}_e := \ma{H}_{ae}\ma{w}_a + \ma{H}_{be}\ma{w}_b + \ma{H}_{ae}\ma{e}_{\text{tx},a} + \ma{H}_{be}\ma{e}_{\text{tx},b} + \ma{n}_e
\end{align}
is the collective interference-plus-noise at Eve. Similarly, the received signal at Bob is formulated as
\begin{align} \label{eq_model_rx_bob}
\ma{y}_b = \underbrace{\ma{H}_{ab}\ma{x}_a + \ma{H}_{bb}\ma{x}_b + \ma{n}_b}_{=: \ma{u}_b} + \ma{e}_{\text{rx},b},
\end{align}
where $\ma{n}_b \sim \mathcal{CN} \left( \ma{0}_{M_{B}}, \sigma_{\text{n},b}^2\ma{I}_{M_{B}} \right)$ is the additive thermal noise, and $\ma{u}_b$ is the received signal assuming a perfect hardware operation. Similar to the transmit side, the receiver-side distortion, denoted as $\ma{e}_{\text{rx},b} \in \compl^{M_{B}}$, models the collective impact of the receiver chain inaccuracies, e.g., analog-to-digital converter noise, oscillator phase noise, and automatic gain control error, see Subsection~\ref{sec_model_diststatistics}. Note that $\ma{y}_b$ includes the received self-interference signal at Bob, originating from the same transceiver. Hence, the \emph{known}, i.e., distortion-free, part of the self-interference can be subtracted by applying a SIC method~\cite{Bharadia:14,BMK:13}. The received signal at Bob, after the application of SIC, is hence written as
\begin{align} \label{eq_model_rx_bob_afterSIC}
\tilde{\ma{y}}_b & = {\ma{y}}_b - \ma{H}_{bb}\ma{w}_b \nonumber \\ &= \ma{H}_{ab} \ma{x}_a + \ma{H}_{bb} \ma{e}_{\text{tx},b} + \ma{e}_{\text{rx},b} + \ma{n}_b = \ma{H}_{ab}\ma{q}_a + \ma{c}_b ,
\end{align}
where
\begin{align} \label{eq_model_collective_interf_b}
\ma{c}_b := \ma{H}_{ab}\ma{w}_a + \ma{H}_{ab}\ma{e}_{\text{tx},a} + \ma{H}_{bb}\ma{e}_{\text{tx},b} + \ma{e}_{\text{rx},b} + \ma{n}_b,
\end{align}
is the collective interference-plus-noise at Bob.
\vspace{-2mm}\subsection{Distortion signal statistics}\label{sec_model_diststatistics}
Similar to \cite{DMBS:12}, we model the impact of the transmit (receive) chain inaccuracies by injecting Gaussian-distributed and independent distortion terms at each antenna\footnote{Eve is assumed to operate with zero-distortion hardware, considering a worst-case scenario.}. Moreover, the variance of the distortion signals is proportional to the power of the intended transmit (receive) signal at the corresponding chain. This model is elaborated in \cite[Subsection~C]{DMBS:12}, \cite{MITTX:98, MITTX:08} regarding the characterization of hardware impairments in the transmit chains, and in \cite[Subsection~D]{DMBS:12}, \cite{MITRX:05} for the receiver chains. This is expressed in our system as
\begin{align}
\ma{e}_{\text{tx},a} \sim \mathcal{CN} & \Big( \ma{0}_{N_{A}}, \kappa_a \text{diag} \Big( \mathbb{E} \left\{\ma{u}_a \ma{u}_a^H \right\} \Big) \Big), \;\; \ma{e}_{\text{tx},a} \bot \ma{u}_a , \label{eq_model_e_tx_a} \\
\ma{e}_{\text{tx},b} \sim \mathcal{CN} & \Big( \ma{0}_{N_{B}}, \kappa_b \text{diag} \Big(\mathbb{E} \left\{\ma{w}_b \ma{w}_b^H \right\} \Big) \Big), \;\; \ma{e}_{\text{tx},b} \bot \ma{w}_b, \label{eq_model_e_tx_b} \\
\ma{e}_{\text{rx},b} \sim \mathcal{CN} & \Big( \ma{0}_{M_{B}}, \beta_b \text{diag} \Big( \mathbb{E} \left\{\ma{u}_b \ma{u}_b^H \right\} \Big) \Big), \;\; \ma{e}_{\text{rx},b} \bot \ma{u}_{b}, \label{eq_model_e_rx_b}
\end{align}
where $\kappa_a, \kappa_b, \beta_b \in \mathbb{R}^+$ are distortion coefficients, relating the variance of the distortion terms to the intended signal power, and $\ma{u}_a$ and $\ma{u}_b$ are defined in (\ref{eq_model_tx_alice}) and (\ref{eq_model_rx_bob}), respectively. For further elaborations on the used distortion model please see \cite{DMBS:12, DMBSR:12, ALRWW:14, XaZXMaXu:15}, and the references therein.
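For simulation purposes, a distortion realization per (\ref{eq_model_e_tx_a})-(\ref{eq_model_e_rx_b}) can be drawn as sketched below; the helper name is hypothetical, and the diagonal of the intended-signal covariance (e.g., $\text{diag}(\ma{Q}_a + \ma{W}_a)$ for the transmit side of Alice) is assumed to be given.
\begin{verbatim}
import numpy as np

def draw_distortion(cov_diag, kappa, rng):
    # e ~ CN(0, kappa * diag(E{u u^H})), drawn independently of u;
    # real/imaginary parts each carry half of the per-antenna variance
    std = np.sqrt(kappa * np.asarray(cov_diag, dtype=float) / 2.0)
    n = std.size
    return std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# e.g.: e_tx_a = draw_distortion(np.real(np.diag(Qa + Wa)), kappa_a,
#                                np.random.default_rng(0))
\end{verbatim}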
\subsection{Power consumption model}\label{sec_model_powerconsumption}
The consumed power of a wireless transceiver can be segmented into three parts. First, the power consumed at the PA, which is related to the effective transmit power via the PA efficiency, see \cite[Eq.~(2)]{SGB:04}. Second, the zero-state power, i.e., the power consumed by the other circuit blocks, independent of the transmission status\footnote{This includes, e.g., the power consumed at the receiver chain and for baseband processing.}, see \cite[Eq.~(3)]{SGB:04}. Finally, the power consumed for the implementation of a SIC scheme, enabling the FD operation. The latter varies for different SIC methods and, by definition, is not relevant for HD transceivers. The consumed power for Alice and Bob can hence be expressed as
\begin{figure*}[!t]
\normalsize
\setcounter{mytempeqncnt1}{\value{equation}}
\setcounter{equation}{14}
{\small{ \begin{align}
\ma{\Sigma}_b = \mathbb{E}\{\ma{c}_b\ma{c}_b^H\} & = \ma{H}_{ab}\ma{W}_a\ma{H}_{ab}^H + \kappa_a \ma{H}_{ab} \text{diag} \left(\ma{Q}_a+\ma{W}_a\right) \ma{H}_{ab}^H + \kappa_b \ma{H}_{bb} \text{diag} \left( \ma{W}_b \right) \ma{H}_{bb}^H \nonumber\\
& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \beta_b \text{diag} \Big( \ma{H}_{ab} \left( \ma{Q}_a + \ma{W}_a \right) \ma{H}_{ab}^H + \ma{H}_{bb} \ma{W}_b \ma{H}_{bb}^H + \sigma_{\text{n},b}^2 \ma{I}_{M_{B}} \Big) + \sigma_{\text{n},b}^2 \ma{I}_{M_{B}}, \label{eq_model_Sigma_b}\\
\ma{\Sigma}_e = \mathbb{E}\{\ma{c}_e\ma{c}_e^H\} & = \ma{H}_{ae}\ma{W}_a\ma{H}_{ae}^H + \ma{H}_{be}\ma{W}_b \ma{H}_{be}^H + \kappa_a \ma{H}_{ae} \text{diag} \left(\ma{Q}_a+\ma{W}_a\right) \ma{H}_{ae}^H + \kappa_b \ma{H}_{be} \text{diag} \left( \ma{W}_b \right) \ma{H}_{be}^H + \sigma_{\text{n},e}^2 \ma{I}_{M_{E}}. \label{eq_model_Sigma_e}
\end{align} }}
\setcounter{equation}{\value{mytempeqncnt1}}
\hrulefill
\end{figure*}
\setcounter{equation}{10}
\begin{align} \label{eq_model_power_Alice}
P_{A} \left(\ma{Q}_a, \ma{W}_a\right) = \frac{1 + \kappa_a}{\mu_A} \text{tr}\left(\ma{Q}_a + \ma{W}_a\right) + P_{A,0}, \;\; P_{A} \leq P_{A,\text{max}}
\end{align}
and
\begin{align} \label{eq_model_power_Bob}
P_{B} \left( \ma{W}_b \right) = \frac{1 + \kappa_b}{\mu_B} \text{tr}\left(\ma{W}_b\right) + P_{B,0} + P_{\text{FD}} , \;\; P_{B} \leq P_{B,\text{max}}.
\end{align}
In the above expressions, $P_{\mathcal{X}}, P_{\mathcal{X},0}, \mu_{\mathcal{X}}$, and $P_{\mathcal{X},\text{max}}$, where $\mathcal{X} \in \{A,B\}$, respectively represent the consumed power, the zero-state power, the PA efficiency, and the maximum allowed power consumption for each node. The additional power required for the implementation of a SIC scheme is denoted by $P_{\text{FD}}$. From (\ref{eq_model_power_Alice}) and (\ref{eq_model_power_Bob}), the total system power consumption is obtained as \vspace{-3mm}
\begin{align} \label{eq_model_power_total}
P_{\text{tot}} \left(\ma{Q}_a, \ma{W}_a, \ma{W}_b\right) = P_{A}\left(\ma{Q}_a, \ma{W}_a\right) + P_{B}\left( \ma{W}_b \right).
\end{align}
\subsection{Secrecy energy efficiency}\label{sec_model_secrecy_EE}
Following \cite{4543070, 5961840, GKJPO13}, the achievable secrecy rate\footnote{The system's secrecy capacity is lower bounded by all achievable secrecy rates, resulting from different choices of transmit covariance matrices, see \cite[Theorem~1]{5961840}, \cite[Equation~(6)]{4543070}.} for the Alice-Bob communication is expressed as $C_{{ab}} = \left\{ \tilde{C}_{{ab}} \right\}^{+}$, such that
\begin{align} \label{eq_model_ab_cap}
\tilde{C}_{{ab}}= \text{log} \left| \ma{I} + \ma{H}_{ab}\ma{Q}_a\ma{H}_{ab}^H \ma{\Sigma}_b^{-1} \right| - \text{log} \left| \ma{I} + \ma{H}_{ae}\ma{Q}_a\ma{H}_{ae}^H \ma{\Sigma}_e^{-1} \right|,
\end{align} \setcounter{equation}{16}
where $\ma{\Sigma}_b$, $\ma{\Sigma}_e$ are given in (\ref{eq_model_Sigma_b}), (\ref{eq_model_Sigma_e}), and represent the covariances of the interference-plus-noise terms at Bob and Eve, respectively. The SEE, as a measure of securely communicated information per unit of energy, is consequently expressed as
\begin{align} \label{eq_model_Sec_cap}
\text{SEE} = \frac{C_{{ab}}}{P_{\text{tot}}}.
\end{align}
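As a numerical illustration of (\ref{eq_model_power_Alice})-(\ref{eq_model_Sec_cap}), the following Python sketch evaluates the consumed powers and the resulting SEE for given covariance matrices; all numerical values are arbitrary placeholders, not parameters of a calibrated system.
\begin{verbatim}
import numpy as np

def see(C_ab, Q_a, W_a, W_b, kappa_a, kappa_b,
        mu_A, mu_B, P_A0, P_B0, P_FD):
    # Power model of Alice and Bob as given above;
    # the secrecy rate is clipped at zero.
    P_A = (1 + kappa_a) / mu_A * np.trace(Q_a + W_a).real + P_A0
    P_B = (1 + kappa_b) / mu_B * np.trace(W_b).real + P_B0 + P_FD
    return max(C_ab, 0.0) / (P_A + P_B)

I4 = np.eye(4)
print(see(C_ab=1.0, Q_a=0.5 * I4, W_a=0.2 * I4, W_b=0.3 * I4,
          kappa_a=0.01, kappa_b=0.01, mu_A=0.35, mu_B=0.35,
          P_A0=0.1, P_B0=0.1, P_FD=0.05))
\end{verbatim}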
It is the intention of the remaining sections of this paper to improve the efficiency of the defined wiretap channel in terms of the SEE, and to provide a comparison between FD operation and the usual HD strategies.
\begin{remark}
In this part, we have introduced a MIMOME wiretap channel where Bob is capable of FD operation and sends a jamming signal in order to improve the information security. However, this does not facilitate an FD communication, as Alice remains an HD node. This setup is relevant in practical asymmetric scenarios, e.g., the uplink of an FD cellular communication system~\cite{7463025}, where users are usually not capable of FD operation. The setup with joint FD operation at Alice and Bob, facilitating joint jamming and an FD bidirectional communication, is discussed later in Section~\ref{sec_SEE_max_BD}.
\end{remark}
\begin{remark}
In this part, we assume the availability of exact CSI for all channels, which is relevant for scenarios with a collaborative eavesdropper, e.g.,~\cite{UntRel:1, UntRel:2}. The scenario with the availability of partial CSI is discussed in Section~\ref{sec_SSUM}.
\end{remark}
\begin{remark}
Unlike the data symbols, which follow a known constellation, the AN is generated from a pseudo-random sequence which is not known to the receivers, see \cite[Section~III]{4543070}. This ensures that Eve cannot decode the AN and hence cannot cancel the interference caused by the AN transmissions.
\end{remark}
\subsection{Weighted MSE minimization via Alternating QCP (AltQCP)} \label{sec:MWMSE}
An optimization problem for minimizing the weighted sum MSE is written as
\begin{subequations} \label{eq:global_opt_problem_MWMSE}
\begin{align}
\underset{\mathbb{V},\mathbb{U}}{ \text{min}} \;\; & \sum_{i \in \mathbb{I}}\sum_{k \in \mathbb{F}_K} \text{tr} \left({\ma{S}_i^k} \ma{E}_i^k\right) \label{eq:global_opt_problem_MWMSE_a} \\
{\text{s.t.}} \;\; & \text{tr}\bigg( \left( \ma{I}_{N_i} + \ma{\Theta}_{\text{tx},i}\right) \sum_{l\in \mathbb{F}_K} \ma{V}_i^l{\ma{V}_i^l}^H \bigg) \leq P_i, \;\; \forall i \in \mathbb{I}, \label{eq:global_opt_problem_MWMSE_b}
\end{align}
\end{subequations}
where $\mathbb{X}:= \{\ma{X}_{i}^k, \; \forall i \in \mathbb{I}, \; \forall k \in \mathbb{F}_K\}$, with $\mathbb{X} \in \{\mathbb{U}, \mathbb{V}\}$, and (\ref{eq:global_opt_problem_MWMSE_b}) represents the transmit power constraint. It is worth mentioning that the role of ${\ma{S}_i^k} \succ 0$, as a weight matrix associated with $\ma{E}_i^k$, is twofold. Firstly, it may appear as a diagonal matrix, emphasizing the importance of different data streams and different users. Secondly, it can be applied as an auxiliary variable which later relates the defined weighted MSE minimization to a sum-rate maximization problem, see Subsection~\ref{sec:perf_CSI_Rate}. \par
It is observed that (\ref{eq:global_opt_problem_MWMSE}) is not a jointly convex problem. Nevertheless, it holds a QCP structure separately over the sets $\mathbb{V}$ and $\mathbb{U}$, in each case when the other variables are fixed. In this regard, the objective (\ref{eq:global_opt_problem_MWMSE_a}) can be decomposed over $\mathbb{U}$ for the different communication directions and subcarriers. The optimal minimum MSE (MMSE) receive filter can hence be calculated in closed form as
\begin{align} \label{wmmse_U_mmse}
\ma{U}_{i,\text{mmse}}^k = \left( \ma{\Sigma}_i^k + \ma{H}_{ii}^k\ma{V}_i^k{\ma{V}_i^k}^H{\ma{H}_{ii}^k}^H \right)^{-1} {\ma{H}_{ii}^k} {\ma{V}_i^k}.
\end{align}
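As an implementation note, (\ref{wmmse_U_mmse}) is a standard linear MMSE receiver and is best evaluated through a linear solve rather than an explicit inverse; a minimal Python sketch (variable names are illustrative) reads:
\begin{verbatim}
import numpy as np

def mmse_receiver(H, V, Sigma):
    # U = (Sigma + H V V^H H^H)^{-1} H V, via a Hermitian linear solve.
    HV = H @ V
    return np.linalg.solve(Sigma + HV @ HV.conj().T, HV)
\end{verbatim}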
Nevertheless, the defined problem is coupled over $\ma{V}_{i}^k$, due to the impact of inter-carrier leakage, as well as the power constraint (\ref{eq:global_opt_problem_MWMSE_b}). The Lagrangian function, corresponding to the optimization (\ref{eq:global_opt_problem_MWMSE}) over $\mathbb{V}$ is expressed as
\begin{align} \label{Lagrangian_WMMSE}
\mathcal{L} & \left( \mathbb{V}, \boldsymbol{\iota}\right) : = \sum_{i \in \mathbb{I}} \bigg( \iota_i { \mathcal{P}_i }\left( \mathbb{V} \right) + \sum_{k \in \mathbb{F}_K} \text{tr} \left({\ma{S}_i^k} \ma{E}_i^k\right) \bigg), \\ & \mathcal{P}_i \left( \mathbb{V} \right):= - P_i + \text{tr}\bigg( \left( \ma{I}_{N_i} + \ma{\Theta}_{\text{tx},i}\right) \sum_{l\in \mathbb{F}_K} \ma{V}_i^l{\ma{V}_i^l}^H \bigg),
\end{align}
where $ \boldsymbol{\iota}:= \{\iota_i,\; i\in\mathbb{I}\}$ is the set of dual variables. The dual function corresponding to the above Lagrangian is defined as
\begin{align}
\mathcal{F}\left( \boldsymbol{\iota} \right) : & = \underset{\mathbb{V}}{\text{min}} \;\; \mathcal{L} \left( \mathbb{V}, \boldsymbol{\iota}\right) \label{WMMSE_dualfunction}
\end{align}
where the optimal ${\ma{V}_i^k}$ is obtained as
\begin{align} \label{WMMSE_ClosedForm_V}
& {\ma{V}_i^k}^\star = \left( {\ma{J}_i^k} + \iota_i \left( \ma{I}_{N_i} + \ma{\Theta}_{\text{tx},i}\right) + {\ma{H}_{ii}^k}^H {\ma{U}_i^k} {\ma{S}_i^k} {\ma{U}_i^k}^H {\ma{H}_{ii}^k } \right)^{-1} \nonumber \\ & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \times {\ma{H}_{ii}^k}^H {\ma{U}_i^k}{\ma{S}_i^k},
\end{align}
and
\begin{align}
{\ma{J}_i^k} :&= \sum_{l \in \mathbb{F}_K} \sum_{j \in \mathbb{I}} \bigg( {\ma{H}_{ji}^k}^H \text{diag} \left( {\ma{U}_j^l} {\ma{S}_j^l} {\ma{U}_j^l}^H {\ma{\Theta}_{\text{rx},j}^l} \right) {\ma{H}_{ji}^k} \nonumber \\ & \;\;\;\; + \text{diag} \left({\ma{H}_{ji}^l}^H {\ma{U}_j^l} {\ma{S}_j^l} {\ma{U}_j^l}^H {\ma{H}_{ji}^l} {\ma{\Theta}_{\text{tx},i}^l} \right)
\bigg).
\end{align}
Due to the convexity of the original problem (\ref{eq:global_opt_problem_MWMSE}) over $\mathbb{V}$, the defined dual problem is a concave function over $\boldsymbol{\iota}$, with $\mathcal{P}_i(\mathbb{V})$ as a subgradient, see \cite[Eq.~(6.1)]{bertesekas1999nonlinear}. As a result, the optimal $\boldsymbol{\iota}$ is obtained from the maximization
\begin{align}
\boldsymbol{\iota}^\star = \underset{\boldsymbol{\iota} \geq 0}{\text{argmax}} \; \mathcal{F}\left( \boldsymbol{\iota} \right),
\end{align}
following a standard subgradient update,~\cite[Subsection~6.3.1]{bertesekas1999nonlinear}. \par
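A minimal sketch of this dual update (the step-size rule, iteration count, and the callback returning the subgradient $\mathcal{P}_i\left(\mathbb{V}^\star(\boldsymbol{\iota})\right)$ are illustrative choices, not prescribed by the reference) is:
\begin{verbatim}
def update_duals(iota, subgrad, step=1e-2, iters=100):
    # Projected subgradient ascent on the concave dual function F(iota).
    # subgrad(iota) must return P_i(V*(iota)) for every node i, where
    # V*(iota) minimizes the Lagrangian at the current iota (closed form).
    for t in range(iters):
        g = subgrad(iota)
        iota = [max(x + step / (t + 1) * gx, 0.0)  # ascent + projection
                for x, gx in zip(iota, g)]
    return iota
\end{verbatim}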
%
Utilizing the proposed optimization framework, the alternating optimization over $\mathbb{V}$ and $\mathbb{U}$ is continued until a stable point is obtained. Note that due to the monotonic decrease of the objective in each step, and the fact that (\ref{eq:global_opt_problem_MWMSE_a}) is non-negative and hence bounded from below, the defined procedure is guaranteed to converge. Algorithm~\ref{alg:MWMSE} summarizes the necessary optimization steps.
\begin{algorithm}[H]
\small{ \begin{algorithmic}[1]
\State{$\ell \leftarrow {0} ; \;\;\;\; \text{(set iteration number to zero)}$}
\State{$\mathbb{V} \leftarrow \text{right~singular~matrix~initialization,~see~\cite[Appendix~A]{5585631}}$ }
\State{$\mathbb{U} \leftarrow \text{solve~(\ref{wmmse_U_mmse})}$ }
\Repeat
\State{$\ell \leftarrow \ell + 1 $}
\State{$\mathbb{V} \leftarrow \text{solve (\ref{WMMSE_ClosedForm_V}) or QCP (\ref{eq:global_opt_problem_MWMSE}), with fixed} \; \mathbb{U}$}
\State{$\mathbb{U} \leftarrow \text{solve (\ref{wmmse_U_mmse}) or QCP (\ref{eq:global_opt_problem_MWMSE}) with fixed} \; \mathbb{V}$}
\Until{$\text{a stable point, or maximum number of $\ell$ reached}$}
\State{\Return$\left\{\mathbb{U},\mathbb{V}\right\}$}
\end{algorithmic} }
\caption{\small{Alternating QCP (AltQCP) for weighted MSE minimization} } \label{alg:MWMSE}
\end{algorithm}
\subsection{Weighted MMSE (WMMSE) design for sum rate maximization} \label{sec:perf_CSI_Rate}
Using $\ma{V}_{i}^k$ as the transmit precoders, the resulting communication rate for the $k$-th subcarrier and the $i$-th communication direction is written as
\begin{align} \label{model:rate_formulation}
I_i^k = B \text{log}_2 \left| \ma{I}_{d_i} + {{\ma{V}}_{i}^k}^H{{\ma{H}}_{ii}^k}^H \big({\ma{\Sigma}}_{i}^k\big)^{-1} {\ma{H}}_{ii}^k{\ma{V}}_{i}^k \right|,
\end{align}
where $B$ and ${\ma{\Sigma}}_{i}^k$ are defined in (\ref{eq_model_distortion_stat_res_tx}) and (\ref{eq_model_aggregate_interference_covariance_CSI_Perfect}). The sum rate maximization problem can be hence presented as
\begin{align} \label{eq:model_optimization}
\underset{\mathbb{V}}{ \text{max}} \;\; & \;\;\sum_{i \in \mathbb{I}}\sum_{k \in \mathbb{F}_K} I_i^k , \;\; {\text{s.t.}} \;\; \text{(\ref{eq:global_opt_problem_MWMSE_b})}.
\end{align}
The optimization problem (\ref{eq:model_optimization}) is intractable in its current form. In the following, we propose an iterative solution, following the WMMSE method \cite{CACC:08}. \\
Via the application of the MMSE receive linear filters from (\ref{wmmse_U_mmse}), the resulting MSE matrix is obtained as
\begin{align} \label{wmmse_E_mmse}
{\ma{E}_{ i,{\text{mmse}} }^k} = \left( \ma{I}_{d_i} + {\ma{V}_i^k}^H {\ma{H}_{ii}^k}^H \left({\ma{\Sigma}_i^k}\right)^{-1} \ma{H}_{ii}^k \ma{V}_i^k \right)^{-1}.
\end{align}
Recalling (\ref{model:rate_formulation}) and employing the MMSE receive filters $\ma{U}_{i,\text{mmse}}^k$, we observe the following useful connection to the rate function
\begin{align} \label{wmmse_wmmseEquivalent}
I_i^k = - B \text{log}_2 \left| \ma{E}_{i,\text{mmse}}^k \right|,
\end{align}
which facilitates the decomposition of the rate function via the following lemma, see also \cite[Eq.~(9)]{CACC:08}.
\begin{lemma} \label{lemma_logdetE}
Let $\ma{E} \in \compl^{d \times d}$ be a positive definite matrix. The maximization of the term $-\text{log} \left|\ma{E} \right|$ is equivalent to the maximization
\begin{align}
\underset{\ma{E}, \ma{S}}{ \text{max}} - \text{tr}\left( \ma{S} \ma{E} \right) + \text{log} \left|\ma{S} \right| + d,
\end{align}
where $\ma{S} \in \compl^{d \times d}$ is a positive definite matrix, and we have
\begin{align} \label{W_opt_perfectCSI}
\ma{S} = \ma{E}^{-1} ,
\end{align}
at the optimality.
\end{lemma}
\begin{proof}
See \cite[Lemma~2]{JPKR:11}.
\end{proof}
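Both the identity (\ref{wmmse_wmmseEquivalent}) and the optimality condition (\ref{W_opt_perfectCSI}) are easy to verify numerically; the following self-contained sanity check (a sketch over a random instance, with $B=1$ and white interference-plus-noise, not part of the proposed algorithm) illustrates this:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 4, 4, 2
H = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
V = rng.normal(size=(N, d)) + 1j * rng.normal(size=(N, d))
Sigma = np.eye(M)

HV = H @ V
A = np.eye(d) + HV.conj().T @ np.linalg.solve(Sigma, HV)
E = np.linalg.inv(A)                      # MMSE matrix, closed form above
I_rate = np.log2(np.linalg.det(A)).real   # rate with B = 1

# Rate equals -log2|E_mmse|, and S = E^{-1} attains the lemma's maximum.
assert np.isclose(I_rate, -np.log2(np.linalg.det(E)).real)
S = np.linalg.inv(E)
lemma_val = (-np.trace(S @ E) + np.log(np.linalg.det(S)) + d).real
assert np.isclose(lemma_val, -np.log(np.linalg.det(E)).real)
print("identities verified")
\end{verbatim}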
By recalling (\ref{wmmse_wmmseEquivalent}), and utilizing Lemma~\ref{lemma_logdetE}, the original optimization problem over $\mathbb{V}$ can be equivalently formulated as
\begin{align} \label{eq:global_opt_problem_rate}
\underset{\mathbb{V},\mathbb{U},\mathbb{S}}{ \text{max}} \;\; & \sum_{k \in \mathbb{F}_K} B \sum_{i \in \mathbb{I}} \bigg(\text{log}\left|\ma{S}_i^k\right| + d_i - \text{tr} \left({\ma{S}_i^k} \ma{E}_i^k\right)\bigg) \;\; {\text{s.t.}} \;\; \text{(\ref{eq:global_opt_problem_MWMSE_b})},
\end{align}
where $\mathbb{S}:= \{\ma{S}_{i}^k \succ 0, \; \forall i \in \mathbb{I}, \; \forall k \in \mathbb{F}_K\}$.
The obtained optimization problem (\ref{eq:global_opt_problem_rate}) is not jointly convex. Nevertheless, it is a QCP over $\mathbb{V}$ when the other variables are fixed, and can be solved with a structure similar to that of (\ref{eq:global_opt_problem_MWMSE}). Moreover, the optimization over $\mathbb{U}$ and $\mathbb{S}$ is respectively obtained from (\ref{wmmse_U_mmse}) and (\ref{W_opt_perfectCSI}), i.e., $\ma{S}_i^k = {\ma{E}_i^k}^{-1}$. This facilitates an alternating optimization where in each step the corresponding subproblem is solved to optimality, see Algorithm~\ref{alg:WMMSE_rate}. The defined alternating optimization is guaranteed to converge, due to the monotonic increase of the objective in each step and the fact that the eventual system sum rate is bounded from above.
\begin{algorithm}[H]
\small{ \begin{algorithmic}[1]
\State{$\ell \leftarrow {0} ; \;\;\;\; \text{(set iteration number to zero)}$}
\State{$\mathbb{V} \leftarrow \text{right~singular~matrix~initialization~\cite[Appendix~A]{5585631}} $}
\Repeat
\State{$\ell \leftarrow \ell + 1$}
\State{$\mathbb{U} \leftarrow \text{solve~(\ref{wmmse_U_mmse})}$}
\State{$\mathbb{V} \leftarrow \text{solve QCP (\ref{eq:global_opt_problem_rate}), with fixed} \; \mathbb{U},\mathbb{S}$}
\State{$\mathbb{S} \leftarrow \ma{S}_i^k = \left( {\ma{E}_i^k} \right)^{-1}$ }
\Until{$\text{a stable point, or maximum number of $\ell$ reached}$}
\State{\Return$\left\{\mathbb{V} \right\}$}
\end{algorithmic} }
\caption{\small{AltQCP-WMMSE design for sum rate maximization} } \label{alg:WMMSE_rate}
\end{algorithm}
%
%
\subsection{Norm-bounded CSI error}
In this part, we extend the system model defined in Section~\ref{sec:model} to the scenario where the CSI is known only erroneously. In this respect, we follow the so-called deterministic model \cite{WP:09}, where the error matrices are unknown but lie, with a sufficiently high probability, within a known feasible error region. This is expressed as
\begin{align} \label{eq:model_channel_eror}
\ma{H}_{ij}^{k} = \tilde{\ma{H}}_{ij}^{k} + {\ma{\Delta}}_{ij}^{k}, \;\; {\ma{\Delta}}_{ij}^{k} \in \mathbb{D}_{ij}^{k}, \;\; i,j\in\mathbb{I},
\end{align}
and
\begin{align} \label{eq:model_error_region}
\mathbb{D}_{ij}^{k} := \left\{ {\ma{\Delta}}_{ij}^{k} \; \big{\vert} \; \| {\ma{D}}_{ij}^{k} {\ma{\Delta}}_{ij}^{k} \|_F \leq \zeta_{ij}^{k} \right\}, \;\; \forall i,j \in\mathbb{I}, \; k \in \mathbb{F}_{K},
\end{align}
where $\tilde{\ma{H}}_{ij}^{k}$ is the estimated channel matrix and ${\ma{\Delta}}_{ij}^{k}$ represents the channel estimation error. Moreover, ${\ma{D}}_{ij}^{k}\succeq 0$ and $\zeta_{ij}^{k} \geq 0$ jointly define a feasible ellipsoid region for ${\ma{\Delta}}_{ij}^{k}$, which generally depends on the noise and interference statistics, as well as on the channel estimation method used. For further elaboration on this error model see \cite{WP:09,LB:05,PPNL:06} and the references therein. \\
The aggregate interference-plus-noise signal at the receiver is hence updated as
\begin{align} \label{model:ri_signal_CSIError}
{\boldsymbol{\nu}}_{i}^k & = \ma{H}_{ij}^k \ma{e}_{\text{t},j}^k + \ma{H}_{ii}^k \ma{e}_{\text{t},i}^k + \ma{e}_{\text{r},i}^k + \ma{\Delta}_{ij}^k \ma{V}_{j}^k \ma{s}_{j}^k + \ma{n}_{i}^k, \;\; j \neq i \in \mathbb{I},
\end{align}
where $\ma{\Sigma}_i^k$, representing the covariance of ${\boldsymbol{\nu}}_{i}^k$, is expressed in (\ref{eq_model_aggregate_interference_covariance_CSIError}).
\begin{figure*}[!ht]
\normalsize
\begin{align} \label{eq_model_aggregate_interference_covariance_CSIError}
\ma{\Sigma}_{i}^k & = \ma{\Delta}_{ij}^k \ma{V}_j^k{\ma{V}_j^k}^H {\ma{\Delta}_{ij}^k}^H \nonumber \\
& + \sum_{ j \in \mathbb{I} } \ma{H}_{ij}^k \ma{\Theta}_{\text{tx},j}^k \text{diag} \left( \sum_{l \in \mathbb{F}_K } \ma{V}_j^l{\ma{V}_j^l}^H \right) {\ma{H}_{ij}^k}^H + \ma{\Theta}_{\text{rx},i}^k \text{diag} \bigg( \sum_{l\in\mathbb{F}_K} \bigg( \sigma_{i,l}^2 \ma{I}_{M_i} + \sum_{ j \in \mathbb{I} } \ma{H}_{ij}^l \ma{V}_j^l{\ma{V}_j^l}^H {\ma{H}_{ij}^l}^H \bigg) \bigg) + \sigma_{i,k}^2 \ma{I}_{M_i}.
\end{align}
\hrulefill
\vspace*{-0mm}
\end{figure*}
\subsection{Alternating SDP (AltSDP) for worst-case MSE minimization} \label{WMMSE_CSI_Error}
An optimization problem for minimizing the worst-case MSE under the defined norm-bounded CSI error is written as
\begin{align}
\underset{\mathbb{V},\mathbb{U}}{ \text{min}} \;\; \underset{\mathbb{C}}{\text{max}} \;\; & \sum_{i \in \mathbb{I}}\sum_{k \in \mathbb{F}_K} \text{tr} \left({\ma{S}_i^k} \ma{E}_i^k\right), \nonumber \\
{\text{s.t.}} \;\;\; & \text{(\ref{eq:global_opt_problem_MWMSE_b})}, \;\; \ma{\Delta}_{ij}^{k} \in \mathbb{D}_{ij}^{k}, \;\; \forall i,j \in \mathbb{I}, \;\; k \in \mathbb{F}_K, \label{eq:global_opt_problem_MWMSE_CSIError}
\end{align}
where $\mathbb{C} := \{\ma{\Delta}_{ij}^k, \; \forall i,j \in \mathbb{I} , \;\forall k \in \mathbb{F}_K\}$, and $\ma{E}_{i}^k$ is obtained from (\ref{MSE_Matrix}) and (\ref{eq_model_aggregate_interference_covariance_CSIError}). Note that the above problem is intractable, due to the inner maximization of the convex quadratic objective over $\mathbb{C}$, which also invalidates the convex QCP structure observed in (\ref{eq:global_opt_problem_MWMSE}). In order to reformulate the objective into a tractable form, we calculate
\begin{align}
& \sum_{k\in \mathbb{F}_K} \text{tr} \left({\ma{S}_i^k} \ma{E}_i^k \right) \nonumber \\ & = \sum_{k\in \mathbb{F}_K} \Bigg{(}
\left\| {\ma{W}_i^k}^H \left( {\ma{U}_i^k}^H \ma{H}_{ii}^k \ma{V}_i^k - \ma{I}_{d_i} \right) \right\|_{{F}}^2 \nonumber \\
& \;\;\;\; + \left\| {\ma{W}_i^k}^H {\ma{U}_i^k}^H \ma{\Delta}_{i3-i}^k \ma{V}_{3-i}^k \right\|_{{F}}^2 + \sigma_{i,k}^2 \left\| {\ma{W}_i^k}^H {\ma{U}_i^k}^H\right\|_{{F}}^2 \nonumber \\
& \;\;\;\; + \sum_{j \in \mathbb{I} } \sum_{l\in \mathbb{F}_{N_j}} \sum_{m\in \mathbb{F}_K} \left\| {\ma{W}_i^k}^H {\ma{U}_i^k}^H \ma{H}_{ij}^k \left(\ma{\Theta}_{\text{tx},j}^k\right)^{\frac{1}{2}} \ma{\Gamma}_{N_j}^l \ma{V}_j^m \right\|_{{F}}^2 \nonumber \\
& \;\;\;\; + \sum_{j \in \mathbb{I} } \sum_{l\in \mathbb{F}_{M_i}} \sum_{m\in \mathbb{F}_K} \left\| {\ma{W}_i^k}^H {\ma{U}_i^k}^H \left(\ma{\Theta}_{\text{rx},i}^k\right)^{\frac{1}{2}} \ma{\Gamma}_{M_i}^l \ma{H}_{ij}^m\ma{V}_j^m \right\|_{{F}}^2 \nonumber \\
& \;\;\;\;+ \left\| {\ma{W}_i^k}^H {\ma{U}_i^k}^H \left( \ma{\Theta}_{\text{rx},i}^k \sum_{q\in\mathbb{F}_K} \sigma_{i,q}^2 \right)^{\frac{1}{2}} \right\|_{F}^2 \Bigg{)} \label{error_representation_fro_norm} \\
& = {\sum_{j \in \mathbb{I} } \sum_{k \in \mathbb{F}_K } \Big\| \ma{c}_{ij}^k + \ma{C}_{ij}^k \text{vec}\left( \ma{\Delta}_{ij}^k \right) \Big\|_{2}^2 } , \label{quadratic_error_representation_final}
\end{align}
where $\ma{\Gamma}_{M}^l$ is an $M \times M$ all-zero matrix, except for the $l$-th diagonal element, which equals $1$. In the above expressions $\ma{W}_{i}^k = \left( \ma{S}_{i}^k \right)^{\frac{1}{2}}$, and
\begin{align}\label{calculate_c_ij}
& \ma{c}_{ij}^k {:=} \nonumber \\ &\left[ \begin{array}{c} \delta_{ij} \text{vec} \left( {\ma{W}_i^k}^H \left( {\ma{U}_i^k}^H \tilde{\ma{H}}_{ij}^k\ma{V}_j^k - \ma{I}_{d_j \delta_{ij}} \right) \right) \\ \left\lfloor \text{vec} \left( {\ma{W}_i^k}^H {\ma{U}_i^k}^H \tilde{\ma{H}}_{ij} \left( \ma{\Theta}_{\text{tx},j}^k \right)^{\frac{1}{2}} \ma{\Gamma}_{N_j}^l \ma{V}_j^m \right) \right\rfloor_{l \in \mathbb{F}_{N_j} , m \in \mathbb{F}_K} \\ \left\lfloor \text{vec} \left( {\ma{W}_i^m}^H {\ma{U}_i^m}^H \left( \ma{\Theta}_{\text{rx},i}^m \right)^{\frac{1}{2}} \ma{\Gamma}_{M_{i}}^l \tilde{\ma{H}}_{ij}^k \ma{V}_j^k \right) \right\rfloor_{l \in \mathbb{F}_{M_i}, m \in \mathbb{F}_K } \\ \delta_{ij} \text{vec} \left( {\ma{W}_i^k}^H {\ma{U}_i^k}^H \left( \sigma_{i,k}^2 \ma{I}_{M_i} + \ma{\Theta}_{\text{rx},i}^k \sum_{m \in \mathbb{F}_K} \sigma_{i,m}^2 \right)^{\frac{1}{2}} \right) \end{array} \right],
\end{align}
\begin{align}\label{calculate_C_ij}
& \ma{C}_{ij}^k {:=} \nonumber \\ &\left[ \begin{array}{c} {\ma{V}_j^k}^T \otimes \left( {\ma{W}_i^k}^H {\ma{U}_i^k}^H \right) \\
\left\lfloor \left( \left( \ma{\Theta}_{\text{tx},j}^k \right)^{\frac{1}{2}} \ma{\Gamma}_{N_j}^{l} \ma{V}_j^m \right)^T \otimes \left( {\ma{W}_i^k}^H {\ma{U}_i^k}^H \right) \right\rfloor_{l \in \mathbb{F}_{N_j} , m\in \mathbb{F}_K} \\ \left\lfloor {\ma{V}_j^k}^T \otimes \left( {\ma{W}_i^m}^H {\ma{U}_i^m}^H \left( \ma{\Theta}_{\text{rx},i}^k \right)^{\frac{1}{2}} \ma{\Gamma}_{M_i}^l \right) \right\rfloor_{l \in \mathbb{F}_{M_i}, m \in \mathbb{F}_K } \\ \ma{0}_{M_i d_i \times M_i N_i} \end{array} \right],
\end{align}
where $\delta_{ij}$ is the Kronecker delta, i.e., $\delta_{ij}=1$ for $i=j$ and zero otherwise. Moreover, we have $\ma{c}_{ij}^k \in \compl^{ \tilde{d}_{ij} \times 1}$, $\ma{C}_{ij}^k \in \compl^{ \tilde{d}_{ij} \times {M_i N_j}}$ such that
\begin{align}\label{length_d}
& \tilde{d}_{ij} := d_id_j \left( 1 + K \left( N_j + M_i \right) \right) + d_i M_i .
\end{align}
Note that (\ref{error_representation_fro_norm}) is obtained by recalling (\ref{MSE_Matrix}) and (\ref{eq_model_aggregate_interference_covariance_CSIError}) together with the known matrix equality \cite[Eq.~(516)]{MCB:08}, while (\ref{calculate_c_ij})-(\ref{calculate_C_ij}) are calculated by applying \cite[Eq.~(496),~(497)]{MCB:08}. \par
By applying the Schur complement lemma to the epigraph form of the quadratic norm (\ref{quadratic_error_representation_final}), i.e., $ \big\| \ma{c}_{ij}^k + \ma{C}_{ij}^k \text{vec}\left( \ma{\Delta}_{ij}^k \right) \big\|_{2}^2 \leq \tau_{ij}^k$, the optimization problem (\ref{eq:global_opt_problem_MWMSE_CSIError}) is equivalently written as
\begin{subequations} \label{global_opt_problem_MWMSE_CSIError}
\begin{align}
& \underset{\mathbb{V},\mathbb{U}, \mathbb{T}} {\text{min}} \;\; \underset{\mathbb{C}}{\text{max}} \;\; \sum_{i,j \in \mathbb{I}}\sum_{k \in \mathbb{F}_K} \tau_{ij}^k, \;\;\; {\text{s.t.}} \;\;\; \text{(\ref{eq:global_opt_problem_MWMSE_b})}, \;\; \| \ma{b}_{ij}^k \|_{F} \leq \zeta_{ij}^k, \\
& \left[\hspace{-0mm}\begin{array}{cc} 0 &\hspace{-0mm} {{{\ma{b}}_{ij}^k}}^H {{\tilde{\ma{D}}_{ij}^k}}^{H} {{\ma{C}_{ij}^k}}^H \\ {\ma{C}_{ij}^k}{\tilde{\ma{D}}_{ij}}^k {\ma{b}}_{ij}^k & \hspace{-0mm}\ma{0}_{\tilde{d}_{ij} \times \tilde{d}_{ij}} \end{array} \hspace{-0mm}\right] \hspace{-0mm}+\hspace{-0mm} \left[\begin{array}{cc} \tau_{ij}^k & {{{\ma{c}}_{ij}^k}}^H \\ {\ma{c}}_{ij}^k & \ma{I}_{\tilde{d}_{ij}} \end{array} \right] \succeq 0,
\end{align}
\end{subequations}
where $\mathbb{T}:= \{\tau_{ij}^k, \; \forall i,j \in \mathbb{I}, \; \forall k \in \mathbb{F}_K\}$ and
\begin{align}
\tilde{\ma{D}}_{ij}^k & := \ma{I}_{N_j} \otimes \left({\ma{D}_{ij}^k}\right)^{-1}, \\
\tilde{\ma{\Delta}}_{ij}^k & := \ma{D}_{ij}^k {\ma{\Delta}}_{ij}^k, \;\; \ma{b}_{ij}^k := \text{vec}\left(\tilde{\ma{\Delta}}_{ij}^k \right),
\end{align}
are defined for notational simplicity. The problem (\ref{global_opt_problem_MWMSE_CSIError}) is still intractable, due to the impact of the inner maximization. The following lemma translates the given structure into a tractable form.
\begin{lemma} \label{petersen}
Generalized Petersen's sign-definiteness lemma: Let $\ma{Y} = \ma{Y}^H$, and let $\ma{X},\ma{P},\ma{Q}$ be arbitrary matrices with complex-valued elements. Then
\begin{align}
\ma{Y} \succeq \ma{P}^H \ma{X} \ma{Q} + \ma{Q}^H \ma{X}^H \ma{P}, \;\; \forall \ma{X} \; : \; \| \ma{X}\|_F \leq \zeta,
\end{align}
if and only if
\begin{align}
\exists \lambda \geq 0, \; \left[\begin{array}{cc} \ma{Y} - \lambda \ma{Q}^H\ma{Q} & - \zeta \ma{P}^H \\ - \zeta \ma{P} & \lambda \ma{I} \end{array} \right] \succeq 0.
\end{align}
\end{lemma}
\begin{proof}
See~\cite[Proposition~2]{EM:04}, \cite{khlebnikov2008petersen}.
\end{proof}
By choosing the matrices in Lemma~\ref{petersen} such that $\ma{X} = {\ma{b}}_{ij}^k$, $\ma{Q} = \left[ -1, \; \ma{0}_{1 \times \tilde{d}_{ij} } \right]$ and
\begin{align} \label{CSI_Error_ChoosingMatrices_Lemma_Peterson}
\ma{Y} = \left[\begin{array}{cc} \tau_{ij}^k & {{\ma{c}}_{ij}^k}^H \\ {\ma{c}}_{ij}^k & \ma{I}_{\tilde{d}_{ij}} \end{array} \right], \ma{P} = \left[\begin{array}{cc} \ma{0}_{{M_i N_j} \times 1 }, \;\; {\tilde{\ma{D}}_{ij}^k}^H {{\ma{C}}_{ij}^k}^H \end{array} \right],
\end{align}
the optimization problem in (\ref{global_opt_problem_MWMSE_CSIError}) is equivalently written as
\begin{subequations} \label{Prob:MMSE_CSI_Error_final}
\begin{align}
\underset{\mathbb{V},\mathbb{U}, \mathbb{T}, \mathbb{M}}{ \text{min}} \;\;\; & \sum_{i,j \in \mathbb{I}} \sum_{k \in \mathbb{F}_K} \tau_{ij}^k \\
{\text{s.t.}} \;\; & {\ma{F}}_{i,j}^k \succeq 0, \; \ma{G}_i \succeq 0 , \;\; \forall i,j \in \mathbb{I},\;k \in \mathbb{F}_K,
\end{align}
\end{subequations}
where $\mathbb{M} :=\{\lambda_{ij}^k, \; \forall i,j \in \mathbb{I}, k \in \mathbb{F}_K \}$, and
\begin{align}
\ma{G}_{i} :&= \left[\hspace{-0mm}\begin{array}{cc} P_i &\hspace{-0mm} \tilde{\ma{v}}_i^H \\ \tilde{\ma{v}}_i & \hspace{-0mm}\ma{I}_{} \end{array} \hspace{-0mm}\right], \; \tilde{\ma{v}}_i := \left\lfloor \text{vec} \left( \left( \ma{I} + \ma{\Theta}_{\text{tx},i} \right)^{\frac{1}{2}} \ma{V}_{i}^k \right) \right\rfloor_{k\in \mathbb{F}_K} ,\\
{\ma{F}}_{i,j}^k :&= \left[\begin{array}{ccc} \tau_{ij}^k - \lambda_{ij}^k & {{\ma{c}}_{ij}^k}^H & \ma{0}_{1 \times M_i N_j }\\ {\ma{c}}_{ij}^k & \ma{I}_{\tilde{d}_{ij}} & - \zeta_{ij}^k {\ma{C}}_{ij}^k\tilde{\ma{D}}_{ij}^k \\ \ma{0}_{{M_i N_j \times 1 }} & - \zeta_{ij}^k {\tilde{\ma{D}}_{ij}^k}^H {{\ma{C}}_{ij}^k}^H & \lambda_{ij}^k \ma{I}_{M_i N_j} \end{array} \right].
\end{align}
Similar to (\ref{eq:global_opt_problem_rate}), the obtained problem (\ref{Prob:MMSE_CSI_Error_final}) is not jointly, but separately, convex over $\mathbb{V}$ and $\mathbb{U}$, in each case when the other variables are fixed. In particular, the optimization over $\mathbb{V},\mathbb{T}, \mathbb{M}$ is cast as an SDP, assuming a fixed $\mathbb{U}$. Afterwards, the optimization over $\mathbb{U}, \mathbb{T}, \mathbb{M}$ is solved as an SDP, assuming a fixed $\mathbb{V}$. The described alternating steps are continued until a stable point is achieved, see Algorithm~\ref{alg:WMMSECSIError} for a detailed description.
\begin{algorithm}[H]
\small{ \begin{algorithmic}[1]
\State{$\ell \leftarrow {0} ; \;\;\;\; \text{(set iteration number to zero)}$}
\State{$\mathbb{V},\mathbb{U} \leftarrow \text{similar~initialization~as~Algorithm~$1$}$}
\Repeat
\State{$\ell \leftarrow \ell + 1$}
\State{$\mathbb{V}, \mathbb{T}, \mathbb{M} \leftarrow \text{solve SDP (\ref{Prob:MMSE_CSI_Error_final}), with fixed} \; \mathbb{U}$}
\State{$\mathbb{U}, \mathbb{T}, \mathbb{M} \leftarrow \text{solve SDP (\ref{Prob:MMSE_CSI_Error_final}), with fixed} \; \mathbb{V}$}
\Until{$\text{a stable point, or maximum number of $\ell$ reached}$}
\State{\Return$\left\{ \mathbb{U},\mathbb{V} \right\}$}
\end{algorithmic} }
\caption{\small{Alternating SDP (AltSDP) for worst-case MMSE design under CSI error.} } \label{alg:WMMSECSIError}
\end{algorithm}
\subsection{WMMSE for sum rate maximization} \label{sec:Error_CSI_Rate}
Under the impact of CSI error, the worst-case rate maximization problem is written as
\begin{subequations} \label{eq:model_optimization_Error_CSI_Rate}
\begin{align}
\underset{\mathbb{V}}{ \text{max}} \;\; \underset{\mathbb{C} }{ \text{min}} \;\; & \;\;\sum_{i \in \mathbb{I}} \sum_{k \in \mathbb{F}_K} I_i^k \\
{\text{s.t.}} \;\;\; & \text{(\ref{eq:global_opt_problem_MWMSE_b})}, \;\; \ma{\Delta}_{ij}^{k} \in \mathbb{D}_{ij}^{k}, \;\; \forall i,j \in \mathbb{I}, \;\; k \in \mathbb{F}_K. \label{eq:global_opt_problem_1_normbounded_contraint_Error_CSI_Rate}
\end{align}
\end{subequations}
Via the application of Lemma~\ref{lemma_logdetE}, and (\ref{wmmse_wmmseEquivalent}) the rate maximization problem is equivalently written as
\begin{subequations} \label{eq:global_opt_problem_2_Error_CSI_Rate}
\begin{align}
\underset{\mathbb{V}}{ \text{max}} \;\; \underset{\mathbb{C}}{ \text{min}} \;\; \underset{\mathbb{U}, \mathbb{W}}{ \text{max}} \;\; & \sum_{i \in \mathbb{I}} \sum_{k \in \mathbb{F}_K} B \bigg( \text{log} \left|\ma{W}_i^k{\ma{W}_i^k}^H\right| \nonumber \\ & + d_i - \text{tr} \left({\ma{W}_i^k}^H \ma{E}_i^k\ma{W}_i^k\right) \bigg) \\
{\text{s.t.}} \;\;\;\; & \text{(\ref{eq:global_opt_problem_1_normbounded_contraint_Error_CSI_Rate})}, \label{eq:global_opt_problem_2_constraints}
\end{align}
\end{subequations}
where $\mathbb{W}:= \{\ma{W}_{i}^k, \; \forall i \in \mathbb{I}, \; \forall k \in \mathbb{F}_K\}$. The above problem is not tractable in its current form, due to the inner min-max structure. Following the max-min exchange introduced in \cite[Section~III]{JPKR:11}, and undertaking steps similar to (\ref{quadratic_error_representation_final})-(\ref{CSI_Error_ChoosingMatrices_Lemma_Peterson}), the problem (\ref{eq:global_opt_problem_2_Error_CSI_Rate}) is turned into
\begin{subequations} \label{Prob:RATE_CSI_Error_final}
\begin{align}
\underset{\mathbb{V},\mathbb{U}, \mathbb{W}, \mathbb{T}, \mathbb{M} }{ \text{max}} \;\;\; & \sum_{i \in \mathbb{I}} \sum_{k \in \mathbb{F}_K} B \bigg( 2 \text{log}\left|\ma{W}_i^k\right| + d_i - \sum_{j \in \mathbb{I}} \tau_{ij}^k \bigg) \\
{\text{s.t.}} \;\; & {\ma{F}}_{i,j}^k \succeq 0, \; \ma{G}_i \succeq 0 , \;\; \forall i,j \in \mathbb{I},\;k \in \mathbb{F}_K,
\end{align}
\end{subequations}
where ${\ma{F}}_{i,j}^k, \ma{G}_i$ are defined in (\ref{Prob:MMSE_CSI_Error_final}). It can be observed that the transformed problem holds a separately, but not jointly, convex structure over the optimization variable sets. In particular, the optimization over $\mathbb{V},\mathbb{T}, \mathbb{M}$ and over $\mathbb{U},\mathbb{T}, \mathbb{M}$ is in each case cast as an SDP when the other variables are fixed. Moreover, the optimization over $\mathbb{W}$ can be efficiently implemented using the MAX-DET algorithm \cite{vandenberghe1998determinant}, see Algorithm~\ref{alg:WMMSECSIError_Rate}. Similar to Algorithm~\ref{alg:WMMSECSIError}, due to the monotonic increase of the objective in each optimization iteration, the algorithm converges to a stationary point. See \cite[Section~III]{JPKR:11} for arguments regarding convergence and the optimization steps for a problem with a similar variable separation.
\begin{algorithm}[H]
\small{ \begin{algorithmic}[1]
\State{$\ell \leftarrow {0} ; \;\;\;\; \text{(set iteration number to zero)}$}
\State{$\mathbb{V},\mathbb{U} \leftarrow \text{similar~initialization~as~Algorithm~$1$}$}
\State{$\mathbb{W}, \leftarrow \text{identity~matrix~initialization}$}
\Repeat
\State{$\ell \leftarrow \ell + 1$}
\State{$\mathbb{V}, \mathbb{T}, \mathbb{M} \leftarrow \text{solve SDP (\ref{Prob:RATE_CSI_Error_final}), with fixed} \; \mathbb{U},\mathbb{W}$}
\State{$\mathbb{U}, \mathbb{T}, \mathbb{M} \leftarrow \text{solve SDP (\ref{Prob:RATE_CSI_Error_final}), with fixed} \; \mathbb{V},\mathbb{W}$}
\State{$\mathbb{W}, \mathbb{T}, \mathbb{M} \leftarrow \text{solve MAX-DET (\ref{Prob:RATE_CSI_Error_final}), with fixed} \; \mathbb{V},\mathbb{U}$}
\Until{$\text{a stable point, or maximum number of $\ell$ reached}$}
\State{\Return$\left\{ \mathbb{U},\mathbb{V} \right\}$}
\end{algorithmic} }
\caption{\small{AltSDP-WMMSE algorithm for worst-case rate maximization under CSI error} } \label{alg:WMMSECSIError_Rate}
\end{algorithm}
\section{The Importance of Measuring Muons in MiniBooNE}
MiniBooNE~\cite{boone-prop} is a neutrino oscillation experiment at
Fermilab designed to confirm or rule out the hypothesis that the LSND
$\overline{\nu}_e$ excess~\cite{lsnd} is due to $\overline{\nu}_{\mu}
\ \rightarrow \ \overline{\nu}_e$ oscillations. A general description
of the experiment can be found elsewhere in these proceedings~\cite{hray}.
The neutrino energy reconstruction is critical to the success of the
MiniBooNE oscillation and cross section analyses. Charged current
quasi-elastic (CCQE) events ($\nu_{\mu}n\rightarrow\mu^- p$) are
typically used to measure the neutrino energy spectrum because they
have simple kinematics. Neglecting the nucleon target momentum, the
reconstructed quasi-elastic neutrino energy can be expressed in terms
of the momentum of the muon. Therefore, the muon energy and direction
measurements completely determine the neutrino energy measurement.
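For reference, a commonly quoted form of this relation (a sketch assuming a target neutron at rest, taking $m_p \simeq m_n$ and neglecting the binding energy; the exact expression used in the analysis may differ in these details) is
\[
E_\nu^{QE} \simeq \frac{2\,m_n E_\mu - m_\mu^2}{2\left(m_n - E_\mu + p_\mu\cos\theta_\mu\right)},
\]
where $E_\mu$, $p_\mu$, and $\theta_\mu$ denote the muon energy, momentum, and angle with respect to the beam direction, and $m_n$, $m_\mu$ are the neutron and muon masses.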
The MiniBooNE cosmic muon calibration system~\cite{calib-NIM} uses
stopping muons and their decay electrons to calibrate the event
reconstruction algorithms. This system provides a precise calibration
of the energy, direction and position of muons for the complete range
of muon energies of interest in the experiment, 100-900~MeV.
\section{Cosmic Muon Calibration System}
The muon calibration system consists of a muon tracker located above
the detector, and seven scintillator cubes deployed inside the
detector. The entering position and direction of a cosmic muon
impinging on the detector are determined by the muon tracker, and the
stopping position is determined by the location of the cube in the
case where the muon stops and decays inside one of the cubes. The
muon energy is then obtained from the range with an uncertainty due to
range straggling of approximately 3\%~\cite{stern}. The muon range
kinetic energy measurement is compared to the visible energy as
reconstructed by the event fitters on an event-by-event basis. This
gives the absolute energy scale calibration of the MiniBooNE detector.
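As an illustration of this procedure, the sketch below fits a simple linear energy-scale calibration between the range-based kinetic energy and the reconstructed visible energy; the numbers are synthetic placeholders, while the actual calibration uses the full cube data set and the event fitters.
\begin{verbatim}
import numpy as np

# Synthetic calibration points: visible energy vs. range KE, in MeV.
E_vis   = np.array([ 80., 160., 240., 330., 420., 520.])
E_range = np.array([100., 200., 300., 400., 500., 600.])

# Linear energy-scale calibration E_range = a * E_vis + b.
a, b = np.polyfit(E_vis, E_range, deg=1)

def muon_kinetic_energy(e_vis):
    # Convert reconstructed visible energy to muon kinetic energy.
    return a * e_vis + b

print(muon_kinetic_energy(300.))
\end{verbatim}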
\begin{figure}
\hspace{-0.2in}
\centerline{\psfig{file=./cubes_energy_data-mc.eps,height=2.0in,width=2.3in}
\hspace{0.2in}
\psfig{file=./lipari.ps,height=2.3in,width=2.0in,angle=90}}
\caption{\label{fig:cubes-xsec}The energy of cosmic muons in MiniBooNE,
and the $\nu_{\mu}$ interaction cross sections shown as a function
of neutrino energy.}
\end{figure}
The event fitter returns an ``electron equivalent energy,'' which is
the energy of an electron that would have produced the same number of
photoelectrons in the detector~\cite{pdg1}. The visible energy of
cosmic muons is plotted against the kinetic energy calculated from the
range using the cubes in Fig.~\ref{fig:cubes-xsec}a. There is good
agreement between data and Monte Carlo. Using the information in
Fig.~\ref{fig:cubes-xsec}a, the visible energy measurement is
converted to a muon kinetic energy, which is used to calculate the
neutrino quasi-elastic energy. From the cosmic muon calibration
system, the muon kinetic energy uncertainty is measured to be 5\%, and
the angular resolution to be 45~mrad, leading to a neutrino energy
uncertainty of $\sim$10\%.
\section{Measuring $\nu_{\mu}$ Events in MiniBooNE}
The MiniBooNE event rate prediction is described
elsewhere~\cite{monroe-ccqe}. The $\nu_{\mu}$ charged-current
interaction cross section as a function of neutrino energy is shown in
Fig.~\ref{fig:cubes-xsec}b~\cite{lipari}. In MiniBooNE's energy
range, the dominant channels are CCQE, which comprises 40\% of
the expected neutrino events, and charged-current resonant single-pion
(CC1$\pi$) production, which is expected to comprise 25\% of the total
event rate.
\subsection{Charged-Current Quasi-Elastic Events}
CCQE events are interesting because they are the dominant event
channel used in successful neutrino oscillation searches. MiniBooNE's
CCQE event selection requires that candidate events pass cosmic
background and fiducial volume cuts, and that the event topology be
consistent with expectations for a single muon passing through the
detector~\cite{monroe-ccqe}. Monte Carlo studies indicate the cuts
are 55\% efficient within the 500~cm fiducial volume, producing an 80\%
pure CCQE event sample.
\begin{figure}
\centerline{\psfig{file=./ccqe_enuqe_feb04-1.eps,width=2.5in}
\psfig{file=./ccqe_q2_feb04.eps,width=2.5in}}
\caption{\label{fig:ccqe}Distribution of reconstructed quasi-elastic energy of
MiniBooNE $\nu_{\mu}$ event candidates. Distribution of
reconstructed momentum transfer of MiniBooNE $\nu_{\mu}$ event
candidates. The figures both show data in black points, with
statistical errors, and Monte Carlo expectations in colored bands
with systematic uncertainties. }
\end{figure}
The reconstructed neutrino quasi-elastic energy is shown in
Fig.~\ref{fig:ccqe}a, and the Q$^2$ distribution in
Fig.~\ref{fig:ccqe}b, for 1.6$\times$10$^{20}$~POT. Ongoing studies of
the transmission of light in mineral oil are expected to reduce the
uncertainties from optical-model variations dramatically.
Note that the shape of the neutrino energy spectrum is somewhat harder
in the data than the Monte Carlo predicts, although the deviations sit
within the limits of the current error bands. Note also the hint of a
low Q$^2$ deficit which may indicate a nuclear model deficiency. This
is an active area of study within the neutrino community.
\subsection{Charged-Current Single Pion Events}
Charged-current single pion ($\nu_{\mu}p\rightarrow\mu^-\pi^+p$)
production has been studied since the advent of high energy
accelerator neutrino beams, but the cross sections for such processes in
the MiniBooNE energy range have not been sufficiently explored. We
describe here the first look at a sample of CC1$\pi$ events in
MiniBooNE.
MiniBooNE's CC1$\pi$ event selection requires the simple yet robust
cut of two Michel electrons following the neutrino interaction. The
majority of pions emitted from these events stop in the detector oil.
These decay quickly to muons, which then decay to Michel electrons.
The muons emitted from the neutrino interaction also come to rest, and
the majority of these decay to Michel electrons. Applying this
requirement to 2.7$\times$10$^{20}$ POT of MiniBooNE data yields over
36,000 CC1$\pi$ candidate events. The Monte Carlo predictions
indicate a purity of 80\% for this sample. This data set is larger by
a factor of 4 than all CC1$\pi$ neutrino data published to date.
The Michel electrons from the CC1$\pi$ candidate events are used to
verify the composition of the data set.
The distance from each Michel to the end of the
muon track is calculated.
Assuming that the closer Michel is associated with the $\mu^-$ from
the neutrino interaction, and the farther Michel is associated with
the $\pi^+$ decay, we expect the closer Michels to have a shorter
lifetime. This occurs because the $\mu^-$ are captured by carbon
nuclei at a rate of 8\%, changing the expected lifetime from
2197.03$\pm$0.04~ns~\cite{mu-} to 2026.3$\pm$1.5~ns~\cite{mu+}. The
observed muon lifetimes for the close and far Michel samples are
2070$\pm$16~ns and 2242$\pm$17~ns, respectively. Again, note that the
observed lifetimes do not yet include systematic uncertainties.
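A quick back-of-the-envelope check of these numbers: with $1/\tau_{\text{eff}} = 1/\tau_{\text{free}} + \Lambda_{\text{capture}}$, the capture probability implied by the two lifetimes is $1 - \tau_{\mu^-}/\tau_{\text{free}}$, consistent with the quoted 8\% carbon capture rate:
\begin{verbatim}
tau_free   = 2197.03  # free muon lifetime [ns]
tau_carbon = 2026.3   # effective mu- lifetime in carbon [ns]

# 1/tau_eff = 1/tau_free + Lambda_capture, so the capture fraction
# is Lambda_capture * tau_eff = 1 - tau_carbon / tau_free.
p_capture = 1.0 - tau_carbon / tau_free
print(f"implied capture probability: {p_capture:.1%}")  # about 7.8%
\end{verbatim}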
While we are able to successfully extract CC1$\pi$ events with high
purity, full event reconstruction studies are still in progress as the
complex final state requires additional reconstruction handles that
are not yet fully developed.
\section{Conclusions}
MiniBooNE has already amassed the world's largest neutrino data set in
the $\sim$1~GeV region in its quest to confirm or rule out the LSND
oscillation signal. Using a cosmic muon calibration system, we
measure the energy of muons to 5\%, and the directions to better than
45~mrad. This leads to an uncertainty in the reconstruction of
quasi-elastic neutrino energy of $\sim$10\%. We are currently
examining large CCQE and CC1$\pi$ data sets, and expect to have cross
section measurements and $\nu_{\mu}$ disappearance oscillation results
from these data samples in 2005.
\section{Introduction}
The Baikal Neutrino Telescope is being deployed in the
Siberian Lake Baikal, about 3.6 km from shore at
a depth of 1.1 km \cite{APP}, \cite{Proposal}. The central
mission of the project is
the detection of extraterrestrial sources of
high energy neutrinos.
Other fields of interest \cite{Physics} are the search for
neutrinos from WIMP annihilation in the Earth or the Sun,
for neutrino oscillations, and for slowly moving bright objects like
GUT monopoles. Standard cosmic ray physics with muons
generated in the atmosphere is covered as well as
limnological and ecological questions.
In deep underwater detectors, clear water serves
as target material for neutrino interactions, as Cherenkov
radiator for charged particles, and as a shield against
atmospheric muons and sunlight. Energetic neutrinos are
detected easiest by mapping the
Cherenkov light from muons produced in charged current
interactions. "Mapping" means measurement of the
photon arrival times at photodetectors distributed over a
large volume. The feebleness of the
light signal requires a large-area, large-acceptance
light detector with single photoelectron resolution.
Mapping of the Cherenkov cone with a spatial
accuracy not worse than the OM diameter requires
a time resolution of a few nano\-seconds. The water depth
demands pressure protection of the sensor.
The present paper describes design and operation
of the components of the optical module (OM)
most of which have been
developed within our collaboration. After a short
presentation of the telescope in section 2,
section 3 covers the design and the parameters
of the phototube {\it QUASAR-370}. Section 4 gives the construction
of the OM, section 5 describes the electronics and
discusses the operational principle of two PMTs switched
in coincidence. Section 6 presents results from the
different methods of OM calibration, whereas in section 7
the long-term operation underwater is evaluated and
some selected results of the telescope operation are given.
Section 8 summarizes the results and sketches routes of
further development.
\section{The Telescope {\it NT-200}}
After numerous experiments with prototype configurations
\cite{Proposal}, in
April 1993 we deployed a first underwater detector allowing
three-dimensional track reconstruction of muons. This array {\it NT-36}
consisted of 36 OMs at 3 strings \cite{NT-36}.
It was replaced in 1994 by a slightly modified version, in
1995 by a 72-OM array, in 1996 by {\it NT-96} consisting of
96 OMs at 4 strings, and in 1997 by a
144-OM array. These detectors have been
steps towards the Neutrino Telescope {\it NT-200}
\cite{APP,Proposal} with a total of 192 OMs. {\it NT-200}
was completed in April 1998 and is presently taking data.
It is sketched in Fig. 1.
The OMs consist of a pressure glas housing equipped with the {\it
QUASAR-370}.
They are grouped in pairs along the strings.
The two PMTs of a pair are switched in coincidence, defining a {\it
channel}.
The "constructional" basic building block (called {\it "svjaska"})
consists of two pairs of OMs,
and the svjaska electronics module, {\it SEM}, which houses
control units, power supply and the front-end electronics.
\begin{figure}[H]
\centering
\mbox{\epsfig{file=nt200.eps,height=12.5cm, angle=-90}}
\caption[1]{\small
Schematic view of the Baikal Telescope {\it NT-200}.
The expansion lefthand shows 2 pairs of OMs
("svjaska"), with the svjaska electronics module
housing parts of the control and read-out electronics.
}
\end{figure}
\bigskip
A {\it muon trigger} is formed if $\geq m$ channels are hit
within a time window of 500 nsec (this is about twice
the time a relativistic particle needs to cross the {\it NT-200}
array). The value $m$ is typically set to 3 or 4. Times
and amplitudes are digitized in the string electronic modules.
A second system {\it "monopole trigger"}
searches for counting rate patterns characteristic for slowly moving
bright particles like nuclearites or GUT magnetic monopoles
catalyzing proton decays. Depending on the velocity of the
object, such events could cause enhanced counting rates
in individual channels during time intervals of 0.1\,-\,0.8 msec,
separable from Poissonian noise.
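For illustration, a minimal software sketch of such a majority trigger (the real trigger is implemented in the string electronics; function and variable names are ours): it slides a 500 nsec window over the hit times and fires when at least $m$ distinct channels are hit.
\begin{verbatim}
def majority_trigger(hits, m=4, window=500.0):
    # hits: list of (time_ns, channel); fire if >= m distinct channels
    # are hit within any interval of length 'window' nanoseconds.
    hits = sorted(hits)
    for i, (t0, _) in enumerate(hits):
        channels = {ch for t, ch in hits[i:] if t - t0 <= window}
        if len(channels) >= m:
            return True
    return False

# Four distinct channels within 300 ns fire the m = 4 trigger.
print(majority_trigger([(0, 1), (100, 2), (200, 3), (300, 4)]))
\end{verbatim}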
\section{The Phototube}
\subsection{Construction and operational principle}
The {\it QUASAR-370}
consists of an electro-optical preamplifier
followed by a conventional photomultiplier (PMT) - see Fig. 2.
In this hybrid scheme, photoelectrons from a large hemispherical cathode
with $>$ 2$\pi$ viewing angle are accelerated by 25 kV to a fast, high
gain scintillator which is placed near the center of the glass
bulb. The light from the scintillator is read out by a small
conventional PMT named {\it UGON}.
One photoelectron emerging from the hemispherical
photocathode yields typically 25 photoelectrons in the conventional
PMT. This high multiplication factor of the electro-optical preamplifier
results in an excellent single electron resolution -- important
for the detection of low level light pulses and background
suppression. Due to the fast acceleration of primary photoelectrons
by 25 kV high voltage, the time jitter can be kept low. This is
most important for accurate track reconstruction. Last but not
least, the tube is almost insensitive to the
Earth's magnetic field.
\begin{figure}[b]
\centering
\mbox{\epsfig{file=quasar.eps,height=8.8cm, angle=-90}}
\caption[4]{\small
Cross section of the QUASAR-370 tube
}
\end{figure}
A hybrid phototube of this kind, XP2600, was first
developed by PHILIPS \cite{XP-1,XP-2}. After first experience
with the XP2600 we followed their basic design
and developed the {\it "QUASAR"}.
First versions of the {\it QUASAR}-tube had a spherical shape, with
diameters of 30\,cm ({\it QUASAR-300}) and 35\,cm
({\it QUASAR-350}), respectively.
The latest version -- {\it QUASAR-370} -- has
a nonspherical (mushroom) shape of the glass bulb to provide more
isochronous photoelectron trajectories.
Fig. 3 gives the
measured relative transit time as a function of the zenith
angle. The transit time differences are minimized to
$\le$ 2.0 nsec.
\begin{figure}[H]
\begin{minipage}[b]{6.7cm}
\epsfig{file=trans2.eps,width=6.5cm}
\caption[10]{Measured cathode transit time difference for
{\it QUASAR-370} (N$^o$ 254) versus zenith angle $\theta$, for
two azimuth angles $\Phi$.}
\end{minipage}
\hfill
\begin{minipage}[b]{6.7cm}
\epsfig{file=sens2.eps,width=6.5cm}
\vspace*{0.5cm}
\caption [11]{Relative sensitivity of {\it QUASAR-370} (N$^o$ 254)
vs. zenith angle $\theta$, for two azimuth angles $\Phi$.}
\end{minipage}
\end{figure}
The spherical "face" region of the bulb has a diameter of 37 cm.
Modifications towards the mushroom form are made at large zenith
angles.
The bulb is manufactured from borosilicate glass S49-1
by the EKRAN company, Novosibirsk. The photocathode
is of the bialkali type (\mbox{K$_{2}$CsSb}). Its spectral
response is typical for this type of
photocathode, with a maximum at $\lambda$ = 400\,-\,420 nm. The
spectral sensitivity exceeds 60 mA/W at $\lambda=420$~nm
which corresponds to $\sim20\,\%$ quantum efficiency. The
non-uniformity of the response across the photocathode is less than
30\,\% (see Fig. 4).
The luminescent screen is made from pulverized phosphor,
Y$_{2}$SiO$_{5}$(YSO). This scintillator
has a light yield of 20\,-\,30\,\% relative to NaI(Tl) and 30\,-\,40 ns decay
time.
\subsection{Single Photoelectron Resolution}
The single photoelectron resolution of the {\it QUASAR-370} is defined
mainly by the gain $G$ of the electro-optical preamplifier:
\begin{equation}
G=\frac{\mbox{number of photoelectrons detected by small
PMT}}{\mbox{number of photoelectrons at the
preamplifier
photocathode}}.
\label{eq:1}
\end{equation}
Figs. 5 and 6 show typical charge distributions for single- and
multi-photo\-elec\-tron pulses of a {\it QUASAR-370}. The high amplification
factor $G$ allows to separate pulses of one and two
photoelectrons and to identify even the shoulder from 3 p.e. events.
The light pulse has been generated by a light emitting diode.
The
distribution labeled "single p.e." has been obtained by attenuating
the LED to a level when only every tenth LED pulse triggered the {\it QUASAR}.
Averaged over 100 tubes, the mean values for
single photoelectron resolution {\it SPR},
peak-to-valley ratio {\it P/V},
and gain $G$ are
\begin{itemize}
\item {\it SPR} $\approx$ 70\,\% (FWHM),
\item {\it P/V} $\approx$ 2.5,
\item {\it G} $\approx$ 25.
\end{itemize}
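As a rough cross-check (a naive estimate assuming pure Poisson fluctuations of the $G$ secondary photoelectrons, ignoring scintillator light-yield and small-PMT gain fluctuations), the single photoelectron resolution would be $\mbox{FWHM} \approx 2.35/\sqrt{G}$:
\begin{verbatim}
import math

G = 25  # mean secondary photoelectrons per primary photoelectron
print(f"Poisson-only SPR: {2.355 / math.sqrt(G):.0%} FWHM")  # ~47%
# The measured ~70% FWHM is wider, since light-yield and PMT gain
# fluctuations add to the pure Poisson spread.
\end{verbatim}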
\bigskip
\begin{figure}[H]
\begin{minipage}[t]{6.7cm}
\epsfig{file=onepe.eps,width=6.5cm}
\caption [10]
{Charge distribution for single p.e. events.}
\end{minipage}
\hfill
\begin{minipage}[t]{6.7cm}
\epsfig{file=multipe.eps,width=6.5cm}
\caption [11]
{
Charge distribution for multi p.e. events.
}
\end{minipage}
\end{figure}
\subsection{Time Response}
A single photoelectron pulse of the {\it QUASAR-370}
is a superposition of $G$
single photoelectron pulses of the small
tube {\it UGON}, distributed exponentially in time:
\begin{equation}
P(t)=\frac{1}{\tau}\exp(-\frac{t}{\tau}),
\label{eq:3}
\end{equation}
with $\tau$ being the time constant of the scintillator.
Figs. 7 and 8 show the corresponding typical pulseforms
of single and multi-photoelectron pulses.
\vspace*{0.5cm}
\begin{figure}[H]
\begin{minipage}[t]{6.7cm}
\epsfig{file=pmta.epsi,width=6.3cm}
\caption [10]
{
Typical pulse form of a single p.e. pulse (10 mV/div vertically and
25 nsec/div horizontally).
}
\end{minipage}
\hfill
\begin{minipage}[t]{6.7cm}
\epsfig{file=GRAFIKC.ps,width=6.5cm}
\caption [11]
{
Typical pulse form of a multi-p.e. pulse (100 mV/div vertically and
25 nsec/div horizontally).
}
\end{minipage}
\end{figure}
The best single photoelectron time resolution is
obtained by using a double threshold discriminator as sketched in
Fig. 9. It consists of two discriminators with different thresholds
and integration constants: a {\it timing} discriminator with a
threshold of $0.25\,q_{1}$
and a {\it strobe} discriminator
with a threshold of $0.3\,Q_{1}$
($q_1$ and $Q_1$ are the most probable charges of a single
photoelectron pulse from the small PMT and from the big photocathode,
respectively).
\begin{figure}[t]
\centering
\mbox{\epsfig{file=discr.eps,height=9cm, angle=-90}}
\caption[4]{\small
Block diagram of the discriminator. D$_1$ -- timing
discriminator with threshold 0.25 $q_1$, D$_2$ -- strobe
discriminator with threshold 0.3 $Q_1$ (see text).
}
\end{figure}
The time is defined by the leading edge of the first of the $G$
single photoelectron pulses of the small PMT.
In this case, the transit time distribution for single photoelectron
pulses with respect to the big photocathode is described
by
\begin{equation}
W_{1}(t)=\frac{G}{\tau}\exp(-\frac{G}{\tau}t).
\label{eq:4}
\end{equation}
$W_1(t)$ is determined
by the scintillator decay time constant $\tau$ and by the gain
$G$ of the electro-optical
preamplifier. For the best tubes and an accelerating
voltage of 25 kV, $G$ is about
50, and the FWHM of $W_{1}(t)$ is 1.8 nsec for
point illumination. For typical tubes the transit time FWHM
is between 2 and 3.5 nsec. Fig. 10 shows the single photoelectron
transit time distribution for head-on full-cathode illumination.
The measured FWHM (3.8 nsec) is a convolution of the
jitter for point illumination and the transit time differences from
different parts of the photocathode.
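The scaling of (\ref{eq:4}) is easily verified by simulation; the following sketch (illustrative parameter values, not a fit to the measured tubes) draws $G$ exponential scintillation-photon delays per event and times on the earliest one:
\begin{verbatim}
import numpy as np

tau = 35.0  # scintillator decay time [ns], illustrative value
G = 50      # secondary photoelectrons per primary photoelectron
rng = np.random.default_rng(1)

# The first of G Exp(tau) delays is itself Exp(tau/G) distributed.
first = rng.exponential(tau, size=(100000, G)).min(axis=1)
print(first.mean())      # about tau / G = 0.7 ns
print(np.median(first))  # about (tau / G) * ln 2
\end{verbatim}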
\vspace*{0.5cm}
\begin{figure}[b]
\centering
\mbox{\epsfig{file=ttime.eps,height=8cm}}
\caption[4]{\small
Single photoelectron transit time distribution for head-on,
full-cathode illumination of the photocathode of {\it QUASAR-370}
(N$^o$ 1). The FWHM is 3.8 nsec.
}
\end{figure}
We should note here that some "late" events
contribute to the tail of the $W_{1}(t)$ distribution.
These events are due to backscattering of photoelectrons in
the luminescent screen. Elastically (or nearly
elastically) scattered electrons may leave the screen without
yielding a signal above the discriminator threshold. They are bent
back by the electrical field in the electro-optical preamplifier
and hit the screen a second time. Due to the high voltage
(25 kV), the scale of delay times
of late events in the {\it QUASAR-370} is considerably smaller than
in conventional PMTs -- about 10 nsec compared to
30\,-\,100 nsec.
The level of
ordinary afterpulses in the {\it QUASAR-370} is substantially lower
($\le$ 2\%) than
in conventional PMTs. The reasons are {\it a)}
the complete vacuum separation
between the electro-optical preamplifier and the small PMT,
and {\it b)} the low sensitivity of the photocathode to
backscattered X-ray photons of typically 10 keV (compared to
some 100 eV in conventional PMTs).
Table I summarizes the main parameters of the {\it QUASAR-370}
and of the small PMT {\it UGON}.
\vspace*{0.5cm}
\begin{center}
{\bf \large Table I} \\ [0.5cm]
\begin{tabular}{|l|c|c|}
\hline
& QUASAR-370 & UGON \\
\hline
bulb material & borosilicate glass & borosilicate glass \\
photocathode & K$_2$CsSb & K$_2$CsSb \\
photocatode diameter & 37 cm & 2.5 cm \\
spectral sensitivity at $\lambda$ = 410 nm & 60 mA/W & 60 mA/W \\
number of stages & 1 & 12/13 \\
gain & 25 & 10$^7$ \\
1-PE resolution & 70 \% & -- \\
peak-to-valley ratio (1PE) & 2.5 & 1.3 \\
TT difference (center-edge) & $\le$1.5 nsec & $\le$ 1 nsec \\
TT jitter for 1 PE point illumination & 2 nsec & 2.2 nsec \\
noise rate ($\ge$ 0.25 PE, 20$^o$C) & 30 kHz & $\le$ 1 kHz \\
\hline
\end{tabular}
\end{center}
\section{Design of the Optical Module}
\subsection{General Description}
The OM basically consists of the {\it QUASAR-370} enclosed
in a transparent, nearly spherical pressure housing, see Fig. 11.
The optical contact between the photocathode region of the tube and
the pressure sphere is made with liquid glycerine sealed with a layer of
polyurethane.
Apart from the PMT, the OM contains two HV supplies (25 kV and 2 kV)
for the hybrid PMT, a voltage divider, two preamplifiers, a calibration
LED and a vacuum probe. Each OM is electrically connected to the
Svjaska Electronics Module (SEM, see figs.1 and 15) by four electrical
lines. They pass the signal
driving the LED from the SEM to the OM, and the PMT anode and dynode
signal from the OM to the SEM. The fourth cable supplies the low
voltages for the PMT HV-system and the preamplifier. A vacuum valve
(not shown in Fig. 11) allows the sphere to be evacuated to 0.7 atm (see
section 4.2).
The OM is fixed to the string by a steel frame locked via a shackle.
Fig. 12 shows a photograph of an OM pair.
\vspace*{0.5cm}
\begin{figure}[H]
\centering
\mbox{\epsfig{file=om.eps,height=12cm, angle=-90}}
\caption[4]{\small
Schematical view of an Optical Module
}
\end{figure}
\subsection{Pressure Housing}
{\it Early approaches}
For the single string installations operated up to 1989 at Lake
Baikal, cylindrical housings made from epoxy-reinforced fiberglass
were used. These OMs
housed two 15-cm tubes with flat windows, facing
upward and downward, respectively. The PMTs were covered with
end caps made from plexiglas. Limits on the flux of GUT monopoles
catalyzing baryon decay as well as a variety of limnologically
relevant results have been obtained with single strings carrying
these OMs \cite{Proposal}. With the advent of big
hemispherical tubes this solution was discarded.
In parallel to the tests of the early versions of
the {\it QUASAR} \cite{Q-300}, we considered a
pressure resistant phototube
and tested pilot samples
of appropriate glass spheres with 37 cm diameter and 0.8\,-\,1 cm
wall thickness \cite{Dor93}. However, in order to have more
flexibility for future improvements of the phototube,
we soon decided to use separate PMTs and pressure housings.
\begin{figure}[H]
\centering
\mbox{\epsfig{file=kugeln.eps,height=8.3cm}}
\caption[1]{\small
A pair of Optical Modules of the Baikal telescope.
The photocathodes point
upward. Cleary seen are the four electrical feedthroughs
and the vacuum valve lefthand.
}
\end{figure}
{\it Present design}
Traditional housings for large deep underwater
instruments consist of two hemispheres whose
equatorial surfaces are carefully ground to match each other.
$15^{\prime \prime}$ spheres are
produced by BENTHOS Inc, USA, and
Jena-Glass, Germany (VITROVEX).
In 1987, together with the EKRAN company (Novosibirsk)
we started the design of our own pressure housing.
It is made from the same S49-1 borosilicate glass used
for the bulb of the {\it QUASAR-370}.
Its refractive index is 1.47\,-\,1.48.
Since we developed the housings for our own purpose, we
could optimize form and dimensions to fit
the demands of the Baikal experiment. The originally
spherical form was elongated by adding a cylindrical
part of 2 cm to the equator of each hemisphere. This
allowed us {\it i)} to avoid space problems when mounting the
tube with its high voltage module into the housing and
{\it ii)} to use the same housing also for the
underwater modules housing electronics crates.
The elongated housing is superior to a sphere with bigger
diameter since the layer of immersion material
between tube bulb and pressure housing can be kept thinner.
This as well as the small wall thickness (1.0\,cm compared to
1.4\,cm for BENTHOS and VITROVEX)
results in a low light absorption. The 1.4\,cm spheres withstand
a water depth of 6.7\,km, the wall thickness of the EKRAN sphere
is sufficient to work at all depths in Lake Baikal (max. 1632\,m).
The transmission at 500\,nm is 87 \% for the EKRAN sphere,
and 83 \% for the other two spheres.
In order to simplify the construction of the metallic belt
used to clamp the OM to the string, the wall thickness
at the equator was increased to 13 mm, forming a flange.
The hermetization of the OM along the equator is achieved
by evacuating it via a special valve to 0.5\,-\,0.7 atm and sealing
it with homogenizing adhesive tape.
\subsection{Optical Contact}
The immersion material filling the gap between the bulb of
the phototube and the pressure housing should have
{\it a)} a good transparency,
{\it b)} an index of refraction close to that
of glass and
{\it c)} high elasticity in order
to protect the bulb from deformation of the pressure housing
($\Delta D \approx$ 0.5 mm at 140 atm).
\vspace{1cm}
\begin{figure}[H]
\centering
\mbox{\epsfig{file=borglas.eps,height=9cm}}
\caption[4]{\small
Transmission curves of the EKRAN spheres (S49-1),
the VITROVEX spheres (Rasotherm), 1.1 cm glycerine and
1.2 cm SEMICOSIL jelly. All curves are normalized to
their maximum transparency at long wavelengths.
}
\end{figure}
In the standard approach, optically transparent silicone jellies
are used \cite{Mat89,EOM,PRC}.
We have developed an alternative, new method:
The gap between tube and housing is filled with glycerine
and sealed by casting a liquid polyurethane
compound onto the glycerine surface.
The compound, being lighter than glycerine, polymerizes
and forms a stable sleeve. The sleeve prevents the glycerine
from leaking into the back hemisphere of the OM, which houses
the HV supplies and other electronic components. It
fixes the position of the tube and at the same time does
not prevent the minor displacements necessary to balance
the pressure deformations of the housing.
The advantages of this method are the following:
{\it a)} the index of refraction of glycerine practically
coincides with that of glass ($n$ = 1.47),
{\it b)} there is no "delamination" of the immersion material
from the glass, a phenomenon
easily appearing when working with jelly, {\it c)}
the cost is low. The disadvantage is the risk that the
polyurethane sleeve might leak, in which case not only the
optical contact is lost
but also the glycerine may corrode the electronics.
In parallel, we use the standard method which we tested
first in 1992. The gap
is filled with a two-component silicone jelly (SEMICOSIL, produced
by WACKER, Germany) with an index of refraction $n \approx$ 1.40.
Eight OMs (VITROVEX spheres) with SEMICOSIL jelly were
underwater for one year in 1992/93,
without showing any degradation of optical or
mechanical characteristics. Presently, we use
VITROVEX spheres and SEMICOSIL jelly for about 10 \% of all OMs.
The transparencies of pressure housings, glycerine and jelly
are shown in \linebreak Fig. 13 as a function of wavelength.
\subsection{Hermetic connectors}
The design of our connectors and penetrators started from
the vacuum-proofed HF connector SRG-50-863
produced
in Russia.
The connector has an impedance of 50 Ohms and withstands
working voltages up to 500 V and temperature extremes
from -50$^o$C to +155$^o$C.
Following the experience we had gained formerly with connectors
produced by SEACON (USA), we modified
the SRG-50-863 for deep underwater applications.
The new connector is hermetic up to a pressure
of 200 atm.
The outer screen is in electrical contact with water.
In salt water this would result in strong electro-corrosion;
in fresh water, however, it is of negligible relevance.
The hermetic connectors and penetrators developed
in cooperation with the AKIN laboratory, Moscow,
can be operated
at all depths of Lake Baikal, i.e. down to 1.7 km.
\section{Operational Principle}
\subsection{Electronics}
The electronics, the trigger formation
and the data acquisition system of the {\it NT-200} Telescope
have been sketched in \cite{APP}. Here, we describe in more detail
the front-end electronics which is closely connected to the
operational principle of an OM pair. It is housed in the OMs
themselves as well as in the Svjaska Electronics Modules (see Fig. 1).
Fig. 14 shows a block diagram of the components.
{\it a) Optical Module}
The OM houses two DC-DC HV supplies, one with a fixed output voltage
(presently 25 kV) for the {\it QUASAR} optical preamplifier,
the other for the small PMT {\it UGON}, with a voltage remotely controllable
in steps of 10 V from 1.00 to 2.27 kV. Both supplies can be
remotely switched off/on. The anode signal is fed to an amplifier (10x),
the signal from the 11th dynode to an inverting
amplifier (3x).
The amplifiers are mounted to a printed board. The
voltage divider for the {\it UGON} is integrated to
the photomultiplier itself.
For amplitude calibrations, a LED is mounted close to the {\it QUASAR}
photocathode. Its light level can be changed from 1 to 1000
photoelectrons.
\bigskip
{\it b) Svjaska Electronics Module}
The anode signals from the two {\it QUASARs} are processed by
the {\it local trigger board}. It consists of two 2-level
discriminators $D_1$ and $D_2$ as described in section 3, one for
each OM, and a coincidence circuit. The threshold
of $D_1$ is set to 0.25 $a_1$, with $a_1$ being
the mean pulse height of a UGON 1-p.e. signal. The
threshold of $D_2$ is remotely adjustable in the range 0.1 -- 10 $A_1$,
with $A_1$ corresponding to 1 photoelectron emitted from
the {\it QUASAR-370} photocathode. The output signal
from $D_1$ has to be confirmed by a signal from
$D_2$ (coincidence in Fig. 9).
The output pulses from $D_1$ have 15 nsec length and are combined
in coincidence in such a way that the leading edge of the output signal
({\it "local trigger"})
is determined by the first of the two input signals.
In this way, late pulses are suppressed.
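A minimal sketch of this local trigger logic (our illustration, not the
detector firmware; the threshold conditions are assumed to be applied
beforehand and all pulse times below are hypothetical):
\begin{verbatim}
def local_trigger(pulses_om1, pulses_om2, window=15.0):
    """Local-trigger times [ns] for an OM pair.

    pulses_om*: pulse arrival times [ns] that already passed the
    D1/D2 threshold conditions. A trigger is issued when the 15 ns
    output pulses of both channels overlap; its time is taken from
    the EARLIER of the two pulses, which suppresses late pulses."""
    triggers = []
    for t1 in pulses_om1:
        for t2 in pulses_om2:
            if abs(t1 - t2) < window:
                triggers.append(min(t1, t2))
    return sorted(triggers)

# a prompt hit in OM 1 and a hit delayed by 12 ns in OM 2:
print(local_trigger([100.0], [112.0]))  # [100.0], timed by the first pulse
\end{verbatim}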
The dynode signals of a pair of OMs are led to the
{\it Q-T} module and summed by an
analog summator. Each summator input can be inhibited remotely
(i.e. each OM can be excluded individually from the sum).
The sum signal is processed by a charge-to-time
converter based on the $Q$-$T$ circuit 1101PD1 (Russian
analogue of the MQT-200 from LeCroy). A local trigger strobes
the 1101PD1, and the input charge is converted, with a maximum
conversion time of 70 $\mu$sec. The width of
the resulting signal corresponds to the charge of the dynode
signal, the leading edge is set by the leading edge
of the local trigger and defines the timing. The signal
is sent to a string electronics module one level higher
in the hierarchy and fed into TDCs
which digitize the time (11 bit) and the time-converted amplitude
information (10\,bit) \cite{APP}.
The {\it Q-T} module can be operated in two modes. The first
uses a time conversion factor just high enough that a
1-p.e. signal corresponds to one channel of the
amplitude digitizing TDC.
In the second ("calibration") mode, the conversion time
is stretched and one photoelectron corresponds to 20 TDC bins.
\vspace{1.5cm}
\begin{figure}[H]
\centering
\mbox{\epsfig{file=svjaska.eps,height=13.5cm,angle=-90}}
\vspace{0.8cm}
\caption[4]{\small
Scheme of the svjaska electronics controlling 2 pairs of
OMs (only the
2 PMTs belonging to the first pair are shown).
}
\end{figure}
\subsection{Single OM versus OM Pair}
In the course of the development of the DUMAND and
BAIKAL projects, there have been long
discussions about the advantages and drawbacks of
operating the PMTs as single detectors or as pairs. We
have favoured the pair principle for the following
reasons:
{\it Firstly}, the average {\it in situ} counting rate per PMT is
(0.5-1)$\cdot 10^5$ Hz
and, due to bioluminescence, seasonally reaches
(2-3)$\cdot 10^5$ Hz. The coincidence reduces the rate
to 100-300 Hz per pair typically. This low counting rate
is of significant advantage for the following goals:
\begin{itemize}
\item[a)] {\it data transmission and trigger formation}
The hard local coincidence makes it possible to transmit all local
signals to the underwater array trigger module just above
the detector, to form an overall
trigger, to read out all signals and to transmit
digitized times and amplitudes via wire cables to shore.
Due to the low rate, a simple underwater hardware trigger
(like e.g. "$\ge$ 3 local triggers
in the whole array within 500 nsec") already gives a
sample nearly free of accidental coincidences.
\item[b)] {\it track reconstruction}
In experiments operating the PMTs in single mode, background hits
due to PMT noise, bioluminescence or $^{40}$K are mixed into practically
every event \cite{Stenger}. These hits have to be eliminated by various
criteria and repeated fitting procedures rejecting those
PMTs with the highest time residuals. For the NT-200 detector,
the average number of hits {\it not} due to the muon track
is only 0.03/event, compared to about 10/event for an
Ocean experiment operating $\approx$ 200
PMTs in single mode \cite{Stenger}. No coincidence between
distant PMTs reaches the noise hit rejection capabilities
of the local coincidence, due to the small
coincidence window of the latter.
\item[c)] {\it Search for slowly moving bright objects like magnetic
GUT monopoles}
The detection principle is the registration of an excess in
counting rates over time windows of the order of 10$^2$ $\mu$sec.
The rate excesses are buried in the
noise signals if the PMT is operated in single mode. Furthermore,
non-Poissonian fluctuations of a single PMT might
fake a monopole event. Noise rates as well as non-Poissonian effects
are effectively suppressed by the coincidence (see Fig. 15).
\end{itemize}
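The quoted pair rates can be checked with a simple estimate (ours, not
part of the original analysis): for uncorrelated singles rates $R_1$,
$R_2$ and coincidence pulses of width $\tau$, the accidental pair rate
is $R_{acc} \approx 2 R_1 R_2 \tau$. A minimal sketch, assuming the
15 nsec local trigger pulses described above:
\begin{verbatim}
# Accidental coincidence rate of an OM pair from uncorrelated noise.
tau = 15e-9                     # coincidence pulse width [s]
for R in (0.5e5, 1e5, 3e5):     # singles rates [Hz]: quiet ... bioluminescence
    print(f"singles {R:8.0f} Hz -> pair rate {2 * R * R * tau:6.1f} Hz")
# prints 75, 300 and 2700 Hz, bracketing the quoted typical 100-300 Hz
\end{verbatim}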
{\it Secondly}, "late pulses" are strongly suppressed.
These are pulses delayed by 10-100 nsec
due to (undetected) elastical backscattering of the photoelectrons
and multiplication after their second incidence on the
dynode system. Since it is rather unlikely that
both PMTs give a late pulse and since the response time is derived
from the first PMT yielding a signal, only for a very small
fraction of events the response time is that of a late pulse.
The time resolution is on the one hand worsened
since the azimuthal position of the PMT is unknown
($\Delta x, \Delta y = \pm$ 30 cm, i.e. 1.5 nsec light travel
time in water),
on the other hand, taking the time flag from the {\it first}
hit PMT of a pair (convolution of eq. (4))
sharpens the time resolution.
The two effects almost balance each other.
At least for the {\it NT-200} project (given the
high external noise due to bioluminescence and the
robust "low-tech" philosophy of the electronics
and data transmission), these advantages outweigh
the drawbacks, which are:
\vspace{-2mm}
\begin{itemize}
\item[a)] A higher number of PMTs in order to instrument
the same volume,
\item[b)] a slightly higher threshold ($\ge$ 0.3 p.e. in
{\it each} of the
two PMTs compared to $\ge$ 0.3 p.e. in one PMT),
\item[c)] some mutual shadowing of the two OMs of a pair,
\item[d)] possible signals in one PMT induced by the other PMT.
\end{itemize}
\vspace{1.0cm}
\begin{figure}[H]
\centering
\mbox{\epsfig{file=poisson.ps,height=10cm}}
\caption[4]{\small
Hit multiplicity distributions in a time window of 8 msec, as recorded
with the monopole trigger system for two of the 18 channels of
{\it NT-36}.
Experimental data are indicated by points, the curve gives
the Poisson prediction.
}
\end{figure}
\section{Calibration}
\subsection{In-situ tests}
In 1988/89, in-situ calibrations of cylindrical modules
containing a {\it QUASAR-300} (the early 30-cm variant of the
{\it QUASAR)} and a Philips {\it XP-2600} have been performed
in Lake Baikal. We used
a trigger telescope consisting of two tanks
clamped to a string at a vertical distance of 6.5 m. The water
volume in the tanks was optically shielded from
the surrounding water, therefore the two flat-cathode PMTs watching
the tank interior were triggered only by Cherenkov
light from muons crossing the tank.
A second string carried a pair of cylindrical OMs with the test
tubes.
The horizontal distance of this string with respect to
the string with the trigger telescope was varied
between 5 and 15 meters. Fig. 16 sketches the
experimental arrangement and gives the registration
probability by the test tubes
as a function of the distance between
muon and tubes. The curves are the results of
MC calculations based on the independently measured values
for water transparency, lensing effect and transparency
of the plexiglas
cap used at that time, and the photocathode sensitivity.
\bigskip
\begin{figure}[H]
\centering
\mbox{\epsfig{file=schischka.eps,height=9.5cm}}
\caption[4]{\small
Determination of the registration probability of vertical
muons as a function of their passing distance to the OM.
{\it QUASAR-300} and {\it QUASAR-350} are the early versions of the
{\it QUASAR} tube, with 30 and 35 cm diameter, respectively.
}
\end{figure}
\subsection{Plane wave response}
A distant muon track illuminates an OM with a nearly plane wave
of photons. Given an incident flux of photons, $\Phi$ [photons/m$^2$],
the average number of photoelectrons is given by
\vspace{-2mm}
\begin{equation}
N_{PE} = \Phi \cdot F \cdot S(\theta).
\label{eq:5}
\end{equation}
Here, $\theta$ is the zenith angle with respect to the symmetry axis
of the OM, $S(\theta)$ the angular response normalized to unity
at $\theta$\,=\,0, and $F$ the absolute sensitivity at $\theta$\,=\,0.
$S(\theta)$ and $F$ include the relevant information needed for MC
calculations.
We have measured the response of OMs to a plane wave from a pulsed
LED and have determined $S(\theta)$ by rotating the OM in
the light beam. The experimental setup to measure the angular
dependence of the amplitude is shown in Fig. 17. The OM is mounted in
a black box filled with water. It can be rotated about an axis
perpendicular to the light front.
The OM is illuminated with a green LED
through a plexiglas window. The LED is at a distance of 2.5 meters,
the maximum deviation from planarity at the edge of the module is
4.3$^o$ for the box filled with water. The measured non-uniformity of
the light profile is less than 3\,$\%$ over the module cross section.
\begin{figure}[H]
\centering
\mbox{\epsfig{file=pwstand.eps,height=14cm, angle=-90}}
\caption[4]{\small
Plane wave test stand
}
\end{figure}
The results are shown in Fig. 18. Data points are normalized to the
signal at cos$\theta$ = 0. The deviation of the
curves from linearity is marginal. Neglecting the region at
cos$\theta \ge$ 0.9, a linear fit
\vspace{-2mm}
\begin{equation}
S(\theta) = A + B \cdot cos\theta
\label{eq:6}
\end{equation}
yields for the Baikal OMs $A = 0.49$, $B = 0.51$,
similar to the DUMAND Japanese OMs \cite{Mat89},
the DUMAND European OMs measured with the same setup
\cite{EOM}, and the AMANDA OMs.
Also shown in Fig. 18 is the result of an analytical
simulation including all effects of absorption, refraction
and reflection \cite{Mohr}.
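The linear parametrization of eq. (6) is easily fitted by least squares.
A minimal sketch with synthetic data points (the actual measurement is
shown in Fig. 18; the numbers below are purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# synthetic angular-response data scattered around the fitted Baikal
# values, excluding the region cos(theta) > 0.9 as described above
cos_t = np.linspace(-1.0, 0.9, 20)
S = 0.49 + 0.51 * cos_t + rng.normal(0.0, 0.01, cos_t.size)

B, A = np.polyfit(cos_t, S, 1)      # linear fit S = A + B*cos(theta)
print(f"A = {A:.2f}, B = {B:.2f}")  # recovers approximately 0.49 / 0.51
\end{verbatim}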
\vspace{0.3cm}
\begin{figure}[H]
\centering
\mbox{\epsfig{file=plane.eps,height=9.0cm}}
\caption[4]{\small
Angular sensitivity of the Baikal OM (labeled "Quasar") and
the Dumand European OM (labeled "XP-2600").
}
\end{figure}
\subsection{Amplitude and time calibration for the full telescope}
The data taking of the telescope is interrupted about twice
a week for calibration runs in order to determine the
scale parameters of the amplitude and the time information.
{\it a) Amplitude}
The amplitude scale is calibrated by multi-photoelectron signals
from the LED. The average number of photoelectrons $N_{pe}$
of a charge distribution is derived from
\begin{equation}
N_{pe} = A^2/D^2 \cdot (1 + d)^2
\label{eq:7}
\end{equation}
with $A$ and $D$ being mean value and dispersion of the
distribution, respectively,
and $d$ the relative dispersion of
a single-photoelectron signal.
$N_{pe}/A$ gives the scale factor.
The high voltage
for the {\it UGON} is changed
until one photoelectron corresponds to one amplitude channel.
The second (stretched) $Q$-$T$ mode makes it possible to plot the 1-p.e.
spectrum. This yields
an independent determination of the 1 p.e. scale factor
and also a measurement of the threshold value.
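A minimal sketch of this amplitude calibration (our illustration; the
charge distribution and the value of $d$ below are hypothetical):
\begin{verbatim}
import numpy as np

def n_pe(charges, d=0.3):
    """Mean photoelectron number from a multi-p.e. LED charge
    distribution, following eq. (7): A, D are mean and dispersion
    of the distribution, d the relative 1-p.e. dispersion."""
    A, D = np.mean(charges), np.std(charges)
    return A**2 / D**2 * (1 + d)**2

rng = np.random.default_rng(0)
charges = rng.normal(200.0, 25.0, 10000)   # hypothetical ADC readings
N = n_pe(charges)
print(f"N_pe = {N:.0f}, scale = {N / np.mean(charges):.3f} p.e./channel")
\end{verbatim}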
{\it b) Time}
The response time $t_i$ of an OM-pair $i$ with respect to
an arbitrarily chosen time $t_0$ is determined by two
calibration parameters,
\begin{equation}
t_i = \beta_i \cdot n_i + \delta t_i
\label{eq:8}
\end{equation}
with $\beta_i$ being the scale factor for the time digitization,
$\delta t_i$ (in nsec) being the relative shifts of channel $i$
with respect to mean value of all channels, and $n_i$ the
measured TDC-channel number for channel $i$.
In the calibration runs, the TDCs are started by noise
pulses of the PMTs and stopped by generator pulses with a
period $\tau$. From a distribution like the one shown in
Fig. 19, start and end point of the plateau,
$K_{min}$ and $K_{max}$, are determined with
an accuracy of 1\,-\,2 channels (1-2 nsec for a 10 bit TDC and
$\tau = 1 \mu$sec). The $\beta_i$ are given by
$\tau / (K_{max} - K_{min})_i$, the flatness of the plateau
determines the differential linearity of the TDC, which is
better than 1 nsec in our case.
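A minimal sketch of this plateau determination (our illustration; the
spectrum below is synthetic):
\begin{verbatim}
import numpy as np

def tdc_scale(hist, tau=1000.0):
    """Scale factor beta [ns/channel] from a noise-start/generator-stop
    TDC spectrum (cf. Fig. 19): beta = tau / (K_max - K_min), with
    K_min, K_max the edges of the flat plateau and tau the generator
    period in ns."""
    level = np.median(hist[hist > 0])
    on = np.where(hist > 0.5 * level)[0]   # channels on the plateau
    return tau / (on[-1] - on[0])

rng = np.random.default_rng(0)
hist = np.zeros(1024)
hist[40:990] = rng.poisson(200, 950)       # flat plateau, 1 microsec period
print(f"beta = {tdc_scale(hist):.3f} ns/channel")
\end{verbatim}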
\begin{figure}[H]
\centering
\mbox{\epsfig{file=tcalb.eps,height=8cm}}
\caption[4]{\small
Determination of the scale factors $\beta_i$ for time
digitization (see text).
}
\end{figure}
The time shifts $\delta_i$ are determined with the help of
a calibration laser. This nitrogen laser, and a dye laser
shifting the original wavelength of 337 nm to
a spectrum peaking at 475 nm, are housed in a glass
cylinder just above the telescope. The light pulses of
less than 1 nsec width are guided via optical fibers
of equal length to the OMs, with one fiber illuminating
one OM pair. A laser pulse generates a "laser" event
with typically most of the channels firing. The time shifts
$\delta_i$ are given by \cite{Thomas}:
\begin{equation}
\delta_i = \frac{1}{n_{ch}} \cdot
\sum_{j=1}^{n_{ch}}{\frac{1}{n_{ij}}}
\cdot \sum_{l=1}^{n_{ij}}{(\beta_i t_{il} - \beta_j t_{jl})}
\label{eq:8a}
\end{equation}
with the first sum running over the total number
of channels, $n_{ch}$, and the second sum over all
events $n_{ij}$ with both channels $i$ and $j$ being
fired.
$\beta_i$, $\beta_j$ are the scale factors for channels
$i$ and $j$, and $t_{il}$ and $t_{jl}$ are
the time codes of channel $i$ and $j$ within event $l$.
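A minimal numerical sketch of this averaging (our illustration with
three channels and synthetic laser events; all channels are assumed to
fire in every event):
\begin{verbatim}
import numpy as np

def time_shifts(tdc, beta):
    """Relative time shifts delta_i from laser events, following the
    formula above. tdc[i][l]: TDC code of channel i in event l;
    beta[i]: ns per TDC channel."""
    n_ch = len(tdc)
    t = [beta[i] * np.asarray(tdc[i]) for i in range(n_ch)]
    return [np.mean([np.mean(t[i] - t[j]) for j in range(n_ch)])
            for i in range(n_ch)]

rng = np.random.default_rng(1)
true = np.array([0.0, 3.5, -2.0])                  # cable delays [ns]
events = true[:, None] + rng.normal(0, 1.0, (3, 500))
print(np.round(time_shifts(events, [1.0, 1.0, 1.0]), 2))
# recovers the delays relative to the channel mean: ~[-0.5, 3.0, -2.5]
\end{verbatim}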
Fig. 20 shows the time difference distribution {\it after} the
correction procedure for the second and the sixth channel
of the first string of the {\it NT-36} array. The FWHM of
the laser peak is 2 nsec and the peak position is determined
with an accuracy better than one nsec. The right peak
is due to downward going muons which had also triggered
the two channels separated by 25 m
($ \approx 83$ nsec $\cdot$ 0.3 m/nsec).
\vspace{0.5cm}
\begin{figure}[H]
\centering
\mbox{\epsfig{file=las.eps,height=10cm}}
\caption[4]{\small
Distribution of light arrival time differences
$\Delta t_{i,j} = t_j - t_i$ for channels 2 and
6 (string 1) for a typical laser run, with muon
triggers recorded in parallel.
}
\end{figure}
We observed a drift in the $\delta$ values over
several months, presumably
due to changes in the speed of light in the fiber
under long-time pressure or water diffusion. These
small effects can be corrected {\it a posteriori}
by reconstructing
muon tracks and requiring an average time residual
like that observed in MC calculations.
The overall accuracy of the time calibration is about 2 nsec.
\section{Long Term Operation Underwater}
\subsection{Reliability}
The first year of {\it NT-36} has demonstrated the stability and
reliability of all mechanical elements of the OMs. None of the OMs
leaked, none of the polyurethane sealing layers
and none of the QUASAR tubes were damaged.
Until 1996, nearly a thousand penetrator/valve holes have been drilled
in more than 130 spheres (OMs and electrical modules) which afterwards
have been operated for one to four years at 1.1 km depth.
None of the feedthroughs leaked. Only 2 spheres -- in 1995 -- leaked
slightly due to cracks which had developed after a year underwater.
This effect was clearly due to a manufacturing error, eliminated
in the meantime.
An unexpected problem was discovered with the power lines
supplying\linebreak 300 V to the electronics modules. In 4 out of
30 connectors a parasitic current between the central
wire and earth appeared, leading to
strong electrolytic currents
across the water to the string and particularly to the
failure of the 1993 acoustic coordinate monitoring system
driven via one of these connectors. The reason for this effect seems
to be that, under the pressure of
water, the plasticizer from the PVC jacket is squeezed into the
connector. This does not influence the functionality of the HF lines,
but obviously leads to the formation of conducting channels in the
300\,V connectors. In the meantime, the jackets of the cables
have been improved and the effect has disappeared.
Another problem was the initially unacceptably high failure rate
of some electronic components. The percentage of working channels,
averaged over the full year, was only $S \approx$ 70\,\%
in 1993/94 ({\it NT-36}).
(An array with a linearly decreasing number of living OMs,
starting with 100\,\% and ending after a year with 50\,\% living OMs,
would have $S$ = 75\,\%.) Losses were dominantly due to failures
or misoperation of the HV supplies and, secondly, to failures of
the {\it SEM} controllers. With $S$ still only 75\,\% in 1994/95, the
year 1995 was used for a total re-design of the 25 kV supply.
This, and changes at the 2 kV supply as well as at the controllers
led to $S$ = 85 \% for the 1996 array {\it NT-96}. The goal for
the next years will be to increase $S$ up to 90\,-\,95\,\%. For the
OMs alone this number is already nearly reached, and further
improvements have to concentrate on other components of the
detector.
In summary, the reliability of the optical module makes it suitable for long-term
underwater operation in a 200 OM array, taking into account that
yearly repair of failed components is possible. For arrays
larger by an order of magnitude, further significant improvements
are desirable.
\subsection{Sedimentation}
A phenomenon strongly influencing the sensitivity and, consequently,
the counting rates of upward facing modules, is sedimentation
of biomatter and dust on the upper hemispheres of the modules.
Fig. 21 shows the trigger rates for two different conditions
over a period of 225 days, starting with April 13th, 1993.
Firstly, for the case that
at least 4 upward facing channels have been hit (upper graph),
secondly, for the condition that at least 4 downward facing channels
have been hit (lower graph). Only channels operating all 225 days
have been included. In the second case, one observes
a slight decrease of the rates down to 85-90\% of their original value.
In contrast, the rate for the {\it upward} trigger falls by nearly
an order of magnitude.
\vspace*{1cm}
\begin{figure}[H]
\centering
\mbox{\epsfig{file=counting.eps,height=8.5cm,width=8cm}}
\caption[4]{\small
Trigger rates over a period of 225 days starting with
April 13th, 1993. Top: Trigger {\it 4-up}
(at least 4 upward
channels hit). Bottom: Trigger {\it 4-down}
(at least 4 downward channels hit).
}
\end{figure}
The inspection of the spheres after one year of operation showed that
sediments had formed a "hat" of bad transmission on the upward
facing hemispheres (see Fig. 22). The region near the equator was almost
free of sediments. This suggests describing the variation of
the sensitivity $\eta$ of an optical module by the
following formula:
\begin{equation}
\eta = \eta_0 \cdot (p_1 + (1 - p_1) \cdot e^{-t/p_2})
\label{eq:9}
\end{equation}
where $t$ is the time after deployment in days. $p_1$ stands for
the part of the sensitivity contributed by the
equatorial region, the second
term describes the top region with exponentially decreasing
light transmission.
Replacing the sensitivity $\eta_0$ used in the Monte-Carlo
calculations by $\eta$ as defined above, and fitting the resulting
trigger rates to those experimentally measured in 1993 (1994), one
gets $p_1 = 0.33$ (0.36) and $p_2 = 96.2$ (102.0) days
(numbers in brackets are for the 1994
array {\it NT-36$^{\prime}$}).
Consequently, the sensitivity of an upward facing module
to atmospheric muons decreases
to 35\,$\%$ after a year.
Both parameters change only slightly from year to year.
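The quoted sensitivity loss follows directly from eq. (9); a minimal
sketch with the 1993 fit values:
\begin{verbatim}
import math

def eta(t, p1=0.33, p2=96.2, eta0=1.0):
    """Relative sensitivity of an upward facing OM after t days, eq. (9)."""
    return eta0 * (p1 + (1 - p1) * math.exp(-t / p2))

print(f"after one year: {eta(365):.2f}")   # 0.35, i.e. the quoted 35 %
\end{verbatim}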
Note that the sensitivity of an upward facing OM
to upward going muons from neutrino interactions
is influenced less, since on average for these tracks the
equatorial part of the module is illuminated more strongly
than the top region. Presently, we are looking for methods
to reduce sedimentation effects. E.g., the accumulation
of sediments can be reduced by
a smoother OM surface or by dressing the OM with
a plexiglas cone (see Fig. 22).
\vspace*{0.4cm}
\begin{figure}[H]
\centering
\mbox{\epsfig{file=hat.eps,height=6cm,angle=-90}}
\vspace{0.4cm}
\caption[4]{\small
Sedimentation on upward facing modules (region $p_1$ --
negligible sedimentation, region $p_2$ -- exponentially decreasing
sensitivity due to sedimentation, see eq. 9). The dashed lines indicate
the
plexiglas cone to prevent sedimentation on the sphere.
}
\end{figure}
The straightforward solution is of course to direct all OMs downward.
This increases the sensitivity to upward muons from neutrino
interactions by\linebreak \mbox{20\,-\,30\, \%},
slightly reduces the identification capabilities with
respect to fake events (downward muons faking upward muons),
and limits the precision of downward muon physics.
In the presently operating
array {\it NT-200}, 160 of the 192 OMs face downward.
\subsection{Track Reconstruction}
During 4 years of data taking with the stepwise increasing
stages of the Baikal Neutrino Telescope we have accumulated
technical and methodical experience as well as
first relevant results. Physics results from {\it NT-36} are
reported in \cite{APP}, and from {\it NT-96}
in \cite{ICRC97,Erice}.
\begin{figure}[H]
\begin{minipage}[b]{6.7cm}
\hspace*{0.5cm}
\epsfig{file=event.eps,height=7.5cm,width=5cm}
\caption [10]
{Single muon event recorded with {\it NT-36}.
Hit channels are in black. The thick line gives the
reconstructed path, thin lines pointing to the channels mark
the path of Cherenkov photons as given by the fit to the
measured times.
The sizes of the ellipses are proportional to the
recorded amplitudes.
}
\end{minipage}
\hfill
\begin{minipage}[b]{6.7cm}
\epsfig{file=neutrino.ps,height=7.5cm,width=5cm}
\vspace{2.5cm}
\caption [11]
{
A neutrino event with 19 hits, recorded with {\it NT-96}.
The fake probability of this event is
smaller than 1\,\%.
}
\end{minipage}
\end{figure}
The initial test an
underwater telescope has to undergo is the correct
reconstruction of atmospheric muons. They enter the
array from above and are recorded
with a frequency of several Hz.
Fig. 23
shows a typical single muon event
firing 7 of the 18 channels of {\it NT-36} and
reconstructed with a $\chi^2/NDF = 0.57$.
Monte Carlo calculations using as input the timing and amplitude properties
of the {\it QUASAR} measured in the laboratory reproduce well the
experimental
data such as amplitudes, time differences and angular distributions \cite{APP}.
The crucial demonstration of the
functionality of a neutrino telescope
is the identification of the rare neutrino
events among the huge amount of atmospheric muons. Their
signature is given by a muon entering the array from {\it below}.
Still too small to detect the feeble fluxes from extraterrestrial
neutrino sources,
{\it NT-200} will be used to investigate those neutrinos which
have been generated in the atmosphere
in interactions of charged cosmic rays ("atmospheric neutrinos")
and to search for neutrinos due to dark matter annihilation
in the center of the Earth \cite{Erice}.
In {\it NT-200}, about one neutrino per day will be recorded.
With {\it NT-36} and {\it NT-96}, 14 neutrino events have been
identified, in accordance with Monte Carlo estimates.
Fig. 24 shows one "gold plated" event with 19 hits.
\section{Summary and Outlook}
We have constructed a deep underwater Optical Module (OM)
which is the key component of the neutrino telescope in
Lake Baikal. Most parts of the OM, like the phototube {\it QUASAR-370},
the pressure sphere protecting the {\it QUASAR}, electronics
as well as connectors and feedthroughs have been developed
by our collaboration, in close cooperation with industry.
Since 1993, we have been permanently operating configurations
with a growing number of OMs. A number of scientifically relevant
results have been obtained, ranging from counting rate variations
which reflect water transport processes to a precise measurement
of the angular spectrum of atmospheric muons. Most
notably, first neutrino events have been unambiguously
identified. During all years of data taking, none of the
pressure housings leaked, and the reliability of the OMs has
been improved continuously.
The 144-OM array operated in 1997 was upgraded
to 192 OMs in April 1998. With this upgrade ({\it NT-200}), the
short term goal of the collaboration is completed.
The technical
solutions used until now have turned out to be adequate for a first
generation neutrino telescope like {\it NT-200}. In the next step, we
envisage a telescope consisting of 1000\,-\,2000 OMs. For this
telescope, the OM design will change in various respects.
Firstly, an array much larger than {\it NT-200} calls for
higher reliability. Some basic peculiarities
of the Baikal telescope may change. The present electronics
was conceptually developed in 1988\,-\,1990. Clearly,
a new design of the electronics is necessary, using
higher integrated circuitry, eventually omitting the "svjaska
electronics modules" and moving the corresponding functions
to the OM or to the string electronics modules, making
use of more advanced signal transmission techniques etc.
Almost certainly, a future OM will have fewer feedthroughs in order
to further minimize the danger of leaks. This, in turn,
requires changes in the electronics. For instance, one might
be forced to omit the separate dynode read-out for amplitude
measurement and pay for that with a somewhat smaller dynamic
range.
As an example of future developments of the OM we focus here
on its main component, the {\it QUASAR}-tube.
Amplitude and timing characteristics of
the {\it QUASAR-370} depend strongly on the ratio $\frac{G}{\tau}$
(eq. 3).
We are investigating some
new scintillator materials like ScBO$_{3}$:Ce (SBO),
YAlO$_{3}$:Ce (YAO) and YClO$_{5}$:Ce (YCO), with
2\,-\,3 times larger light yield than
Y$_{2}$SiO$_{5}$:Ce (YSO) and 28\,-\,30 ns decay time.
We have manufactured 5 pilot tubes each with SBO and YAO.
The average gain values of the corresponding preamplifier
tubes turned out to be twice as high as for YSO,
$G$ = 50\,-\,60. The single photoelectron resolution is
less than 50\,\% and the time resolution about 1 nsec.
Technological improvements
like the chemical protection of single scintillator grains are
expected to give a larger gain $G$ in tubes
with ordinary Y$_{2}$SiO$_{5}$:Ce.
Other lines of improvement are directed to increase significantly the
photocathode sensitivity and to decrease the dark current rate. For
the present version, the average dark current rate is
about 35 kHz at room temperature and half of that at 0$^o$C.
Another line of principal re-design and improvement is the
replacement of the scintillator screen and the small PMT by
a diode array or by a foil first dynode followed by
a mesh dynode chain. All these improvements make the {\it QUASAR}
increasingly interesting to other fields
apart from underwater telescopes,
in particular for Air-Cherenkov Telescopes.
As mentioned in section 5.2, we preferred a pairwise operation of
OMs for several reasons, most notably the effective suppression of noise
counts and the elimination of prepulses, late pulses and afterpulses.
However, for much larger projects the high cost of this
approach may not be acceptable. In order to maintain the local
coincidence and, at the same time, to reduce the number of OMs,
we have developed a two-channel version of the {\it QUASAR}. We made use
of the fact that photoelectrons from the central part of the
{\it QUASAR} are focussed onto the central spot of the luminescent
screen, whereas photoelectrons from peripheral regions are collected
onto the circular edge area of the screen. We have developed 2 pilot
samples of a two-channel, small
(3cm diameter) version of the UGON with a mesh dynode system,
one channel collecting photoelectrons from the central region,
the other from the periphery. The measured cross-talk is
about 2\,\%. Switching the two channels in coincidence
results in a noise rate as low as 100 Hz.
\vspace{0.5cm}
\section {Acknowledgments}
We are indebted to G.N. Dudkin, V.Yu. Egorov, O.I. Gress, A.I. Klimov,\linebreak
G.L. Kharamanian, A.A. Lukanin, P. Mohrmann, A.I. Panfilov, V.A. Poleshuk,
V.A. Primin, I.A. Sokalski, Ch. Wiebusch and M.S. Zakharov for help
at various stages of development and tests of the optical module.
\newpage
\section{\label{sec:intro}Introduction}
Gauge theories are of paramount importance in fundamental physics.
Their most prominent example, the standard model of particle physics, describes electromagnetic, weak and strong interactions.
In some regimes, interactions can be treated in terms of perturbative expansions.
However, since the coupling in quantum field theories is typically scale-dependent, there are regimes (e.g. low-energy QCD) where non-perturbative methods are required~\cite{peskin1995introduction,gross1973asymptotically}.
Lattice gauge theory is a gauge invariant lattice regularization of gauge theories, in which either spacetime~\cite{wilson1974confinement} or space~\cite{kogut1975hamiltonian} is discretized.
This has made it possible to uncover many interesting features of non-perturbative quantum field theories, in particular using Monte-Carlo simulations~\cite{aoki2014review}.
Nevertheless, certain aspects are difficult to study within this framework, e.g. fermionic theories with finite chemical potentials may suffer from the sign problem~\cite{troyer2005computational} and time dynamics are difficult to access as Monte-Carlo simulations require a formulation in Euclidean spacetime.
One class of approaches to these problems is based on a Hamiltonian formulation of lattice gauge theories, first proposed by Kogut and Susskind~\cite{kogut1975hamiltonian}.
Other formulations in the Hamiltonian picture include the quantum link model~\cite{horn1981finite,orland1990lattice,chandrasekharan1997quantum,brower1999qcd} or the prepotential approach~\cite{mathur2005harmonic}.
It has been shown that these Hamiltonians or truncations~\cite{zohar2015formulation} thereof can be mapped to Hamiltonians of quantum devices (e.g. ultracold atoms, trapped ions or superconducting qubits) in order to study such theories by quantum simulation~\cite{wiese2014towards,zohar2016quantum,dalmonte2016lattice, byrnes2006simulating}.
Another option is to study the Hamiltonian by designing appropriate variational ansatz states which are both efficiently tractable and capture the most relevant features of the theory.
Both ideas have been successfully applied to one-dimensional theories. The implementation of quantum simulators has been demonstrated using trapped ions~\cite{martinez2016real} and ultracold atoms~\cite{gorg2019realization,schweizer2019floquet,yang2020observation,mil2020scalable}.
On the numerical side, there has been a lot of success in applying matrix product state (MPS) methods to (1+1)-dimensional Abelian and non-Abelian lattice gauge theories~\cite{banuls2017density,buyens2014matrix,buyens2016hamiltonian,buyens2017real,kuhn2015non,banuls2013mass,banuls2015thermal,banuls2017efficient,pichler2016real,silvi2017finite,silvi2019tensor,silvi2014lattice,bruckmann2019nonlinear,funcke2020topological,rico2014tensor}, enabling the study of finite chemical potential scenarios and out-of-equilibrium dynamics which would not have been accessible in Monte-Carlo simulations of Euclidean lattice gauge theory.
Also some generalizations of Gaussian states have proven to be suitable for these theories~\cite{sala2018variational}.
The situation becomes more challenging in higher spatial dimensions, in particular due to the appearance of magnetic interactions, leading to four-body plaquette terms on the lattice.
There have been ideas on how to overcome this problem in quantum simulators (either by employing a digital~\cite{tagliacozzo2013simulation,tagliacozzo2013optical,zohar2017digital,zohar2017digital,bender2018digital} or an analog simulation scheme~\cite{zohar2013quantum}) but so far they are out of experimental reach.
On the numerical side, tensor network methods in 2+1d have been applied to pure gauge theories~\cite{tagliacozzo2014tensor} and for studying U(1) ground states in quantum link models~\cite{tschirsich2019phase,felser2019two}. It has also been shown that fermionic Gaussian projected entangled pair states can be gauged~\cite{zohar2015fermionic} and serve as numerical ansatz states for lattice gauge theories, admitting a sign-problem free Monte-Carlo contraction scheme~\cite{zohar2018combining}.
In this work, we study (2+1)-dimensional compact quantum electrodynamics (compact QED).
It is a good starting point for the study of higher dimensional lattice gauge theories since it shares some features with (3+1)-dimensional Quantum Chromodynamics (QCD), e.g. that it is in a confined phase for all values of the coupling constant~\cite{polyakov1977quark}.
To access physics which is difficult to simulate with Monte-Carlo simulation of Euclidean lattice gauge theories, we not only study ground state properties but also non-equilibrium physics, namely real-time dynamics after a quantum quench.
Since exact diagonalization (ED) methods become infeasible in higher dimensions for reasonable system sizes, in particular due to the infinite local Hilbert space of the gauge field, it seems unavoidable to use variational techniques (in 1+1d the infinite dimension can be avoided either by integrating out the gauge field nonlocally \cite{hamer1997series,bringoltz2009volume,banuls2017efficient} or by using the natural restriction of gauge symmetry which makes the dimensions finite \cite{kasper2020jaynes-cummings}).
We choose to work with complex periodic Gaussian states, a generalization of periodic Gaussian states, first proposed in~\cite{drell1979quantum} to prove confinement in the weak-coupling limit of 2+1d compact QED, thus establishing the existence of one confining phase for all couplings also in the Hamiltonian picture (after it had been proven in the action formalism~\cite{polyakov1977quark}).
As expectation values with respect to periodic Gaussian states cannot be evaluated analytically, the authors of reference~\cite{drell1979quantum} used Feynman diagram techniques to evaluate all relevant quantities in the weak-coupling regime.
In contrast to that approach, we develop a numerical approximation scheme to evaluate these states for the whole coupling region.
By extending the variational manifold to complex periodic Gaussian states we are also able to account for real-time dynamics.
One appealing feature of these states is that they do not require any truncation in Hilbert space which allows us to study truncation effects which are required in other approaches and give estimates in which coupling regimes they are justified.
The manuscript is structured as follows: In Sec.~\ref{model}, we introduce the model and the variational ansatz including a scheme for its numerical evaluation.
In the first part of Sec.~\ref{static}, we study ground state energy density and string tension over the whole coupling region.
In the second part, we investigate truncation effects by comparing the variational ground state energy with exact diagonalization results where the local Hilbert space is truncated in the electric basis.
In Sec.~\ref{dynamic}, we study real-time dynamics after a quantum quench using the time-dependent variational principle.
In Sec.~\ref{conclusion}, we conclude.
\section{\label{model} Model and variational ansatz}
\subsection{(2+1)-dimensional compact QED} \label{modelsection}
We define the theory of (2+1)-dimensional compact QED on a square lattice of extent $L \times L$ with periodic boundary conditions.
The gauge fields reside on the links; $U_{\mathbf{x},i}$ denotes the gauge field operator on the link emanating from site $\mathbf{x}$ in direction $\mathbf{e}_{i}$.
The Hamiltonian in lattice units takes the following form, originally proposed by Kogut and Susskind~\cite{kogut1975hamiltonian}:
\begin{equation} \label{kogutsusskind}
H_{KS}= \frac{g^2}{2}\sum_{\mathbf{x},i} E^2_{\mathbf{x},i} + \frac{1}{2 g^2} \sum_{\mathbf{p}} 2 - ( U_{\mathbf{p}} + U_{\mathbf{p}}^{\dagger})
\end{equation}
with $g^2$ being the coupling constant and $U_{\mathbf{p}} \equiv U_{\mathbf{x},1} U_{\mathbf{x}+\mathbf{e_1},2} U_{\mathbf{x}+\mathbf{e_2},1}^{\dagger} U_{\mathbf{x},2}^{\dagger}$ where $\mathbf{x}$ is the bottom left corner of plaquette $\mathbf{p}$.
$U_{\mathbf{x},i}$ is in the fundamental representation of $U(1)$, it can also be written in terms of an angle $\theta_{\mathbf{x},i}$, $U_{\mathbf{x},i}=e^{i \theta_{\mathbf{x},i}}$ with $ -\pi < \theta_{\mathbf{x},i} \leq \pi$.
The restriction of the gauge field to this compact interval is the reason why the model is called compact QED and why it exhibits interesting features such as confinement in contrast to the non-compact theory~\cite{ben1979confinement}.
$E_{\mathbf{x},i}$ is the electric field operator fulfilling the following commutation relations:
\begin{equation}
\begin{aligned}
[E_{\mathbf{x},i},U_{\mathbf{y},j}]&= \delta_{\mathbf{x},\mathbf{y}} \delta_{i,j} U_{\mathbf{x},i} \\
[\theta_{\mathbf{x},i},E_{\mathbf{y},j}]&= i \delta_{\mathbf{x},\mathbf{y}} \delta_{i,j}
\end{aligned}
\end{equation}
Since we work in the temporal gauge, there is a residual spatial gauge symmetry defined by the Gauss law operators $G_{\mathbf{x}}$.
All physical states need to be eigenstates of them:
\begin{align}
G_{\mathbf{x}} \ket{\mathrm{phys}}=\sum_{i=1}^{2} \left( E_{\mathbf{x},i} - E_{\mathbf{x}-\mathbf{e_{i}},i}\right) \ket{\mathrm{phys}} = Q_{\mathbf{x}}\ket{\mathrm{phys}} \hspace{5pt} \forall \hspace{1pt} \mathbf{x}
\end{align}
where the eigenvalue $Q_{\mathbf{x}}$ gives the static charge configuration at $\mathbf{x}$.
These local constraints put quite severe restrictions on the choice of variational states.
Following~\cite{drell1979quantum}, we thus change to variables where gauge invariance is already incorporated (at least up to a global constraint).
This can be achieved by splitting the electric field $E_{\mathbf{x},i}$ into its transversal part $E_{\mathbf{x},i}^T$, which is dynamical, and a longitudinal part $E_{\mathbf{x},i}^L$ which is fixed by the static charge configuration.
Since the transversal part of the electric field can be expressed by a plaquette field $L_{\mathbf{p}}$ (the lattice analogue of a solenoidal vector field), the remaining dynamical degrees of freedom $\{ L_{\mathbf{p}},U_{\mathbf{p}}=e^{i \theta_{\mathbf{p}}} \}$ reside on plaquettes, having the same Hilbert space structure and fulfilling the same commutation relations as the link variables:
\begin{equation}
\begin{aligned}
[L_{\mathbf{p}},U_{\mathbf{p'}}]&= \delta_{\mathbf{p},\mathbf{p'}} U_{\mathbf{p'}} \\
[\theta_{\mathbf{p}},L_{\mathbf{p'}}]&=i \delta_{\mathbf{p},\mathbf{p'}}
\end{aligned}
\end{equation}
The operator $U_{\mathbf{p}}$ creates an electric flux excitation around plaquette $\mathbf{p}$. However, to construct all possible gauge-invariant flux configurations two global non-contractible flux loops around the torus (one for each spatial direction) are required, their operators are denoted as $ \{ \theta_{1}, L_{1} \}$ and $\{ \theta_{2}, L_{2} \}$ specifying the topological sector of the flux configuration.
$L_1$ and $L_2$ commute with the Hamiltonian and we will restrict ourselves to the topological sector with $L_{1}=L_{2}=0$ which corresponds to no electric flux loops winding around the torus.
For more details see~\cite{kaplan2018gauss} or Appendix~\ref{appformulation}.
Writing the Hamiltonian in terms of these new variables, reads
\begin{equation}
\label{Hamiltonianref}
\begin{aligned}
H_{KS}=&E_C+\frac{1}{g^2} \sum_{\mathbf{p}} (1- \cos{\theta_{\mathbf{p}}}) \\
+& \frac{g^2}{2}\sum_{\mathbf{p}} \sum_{i=1}^{2} \left(L_\mathbf{p}-L_{\mathbf{p}-\mathbf{e}_{i}}+\epsilon_{\mathbf{p}}-\epsilon_{\mathbf{p}-\mathbf{e}_{i}}\right)^2
\end{aligned}
\end{equation}
where $E_C$ is an energy offset given by the lattice Coulomb energy and $\epsilon_{\mathbf{p}}$ accounts for the transversal part of the electric field caused by the static charges only, i.e $\epsilon_{\mathbf{p}}=0$ in case of no static charges.
Even in this formulation there is one remaining global constraint left which is intuitively clear since raising the electric flux around all plaquettes should return the same state due to the periodic boundary conditions. Thus,
\begin{align} \label{globalconstraint}
\prod_{\mathbf{p}} U_{\mathbf{p}} \ket{\mathrm{phys}}=\ket{\mathrm{phys}}
\end{align}
For details on this formulation, we refer the reader to~\cite{drell1979quantum,kaplan2018gauss}. A rigorous derivation of eq.~\eqref{Hamiltonianref} from eq.~\eqref{kogutsusskind} and an explicit formula for the calculation of $\epsilon_{\mathbf{p}}$ and $E_{C}$ can be found in Appendix~\ref{appformulation}.
\subsection{The variational ansatz} \label{variationalansatz}
We formulate our variational ansatz states in terms of the $\theta_{\mathbf{p}}$-variables defined above such that it only needs to fulfill the global constraint~\eqref{globalconstraint}.
Starting from periodic Gaussian states introduced in~\cite{drell1979quantum}, we extend the variational wavefunction to have an imaginary part in order to account for real-time dynamics. The ansatz is based on a complex Gaussian state:
\begin{equation}
\Psi_{CG}(\{ x_{\mathbf{p}} \})\equiv e^{-\frac{1}{2} \sum_{\mathbf{p},\mathbf{p'}} x_{\mathbf{p}} A_{\mathbf{p}\mathbf{p}'} x_{\mathbf{p}'} - i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} x_{\mathbf{p}}}
\end{equation}
with $x_{\mathbf{p}} \in \mathbb{R}$ and $\mathbf{p}=(p_1,p_2)$, $p_1,p_2 \in [0,..,L-1]$.
The linear part in the exponent, i.e. $\epsilon_{\mathbf{p}}$, is fixed by the static charge configuration (see section~\ref{modelsection} and Appendix~\ref{appformulation}) and
\begin{equation}
A_{\mathbf{p}\mathbf{p}'} \equiv \frac{1}{\pi L^2 } \sum_{k_1,k_2=0}^{L-1} e^{2\pi i \frac{\left(p_1-p'_1\right) k_1 + \left(p_2-p'_2\right) k_2}{L}} \left(\gamma_{\mathbf{k}}^R+i \gamma_{\mathbf{k}}^I\right)
\end{equation}
is defined by the variational parameters $\left\{ \gamma_{\mathbf{k}}^R \right\} $ and $\left\{ \gamma_{\mathbf{k}}^I \right\}$. In the following, we will use the shorthand notation $\mathbf{p} \mathbf{k} \equiv 2 \pi \frac{p_1 k_1 + p_2 k_2}{L}$. Since the disorder introduced by static charges is incorporated in $\epsilon_{\mathbf{p}}$, the quadratic part $A$ is assumed to be translationally invariant.
The factor of $1/\pi$ is chosen for later convenience.
Written in terms of Fourier components $x_{\mathbf{k}} = \frac{1}{L} \sum_{\mathbf{p}} e^{i \mathbf{p} \mathbf{k}} x_{\mathbf{p}}$, the quadratic part in the exponential becomes $\sum_{\mathbf{p},\mathbf{p'}} x_{\mathbf{p}} A_{\mathbf{p}\mathbf{p}'} x_{\mathbf{p}'} = \frac{1}{\pi} \sum_{\mathbf{k}} |x_{\mathbf{k}}|^2 \left(\gamma_{\mathbf{k}}^R+i \gamma_{\mathbf{k}}^I\right)$.
Thus, to guarantee convergence of $\Psi_{CG}$ we need to require $\gamma_{\mathbf{k}}^R > 0 \hspace{2pt} \forall \mathbf{k}$.
Since $|x_{\mathbf{k}}|^2=|x_{\mathbf{-k}}|^2$, the variational parameters $\gamma_{\mathbf{k}}^{R/I}$ and $\gamma_{-\mathbf{k}}^{R/I}$ are redundant.
We define the equivalence relation
\begin{equation} \label{defkr}
\begin{aligned}
\mathbf{k} \sim_k \mathbf{k}' \qq{if} & k_1=-k'_1 \pmod{L} \\
\text{and}\quad&k_2=-k'_2 \pmod{L}
\end{aligned}
\end{equation}
With the quotient set $\mathcal{K} \equiv \left\{ [0,..,L-1]^2\setminus{(0,0)}\right\}/{\sim_k}$ we can define a set of independent variational parameters, $\left\{ \gamma^{R/I}_{\mathbf{k}}\right \}_{\mathbf{k} \in \mathcal{K}}$.
Choosing a set of independent parameters will be important later on for applying the time dependent variational principle (see section~\ref{TDVP}).
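As an illustration of this parametrization (our numerical sketch, not part of the derivation), the matrix $A_{\mathbf{p}\mathbf{p}'}$ can be assembled directly from the variational parameters; imposing the redundancy $\gamma_{\mathbf{k}}=\gamma_{-\mathbf{k}}$ and checking that $A$ depends only on $\mathbf{p}-\mathbf{p}'$:
\begin{verbatim}
import numpy as np

L = 4
rng = np.random.default_rng(0)

def sym(g):  # impose the redundancy gamma_k = gamma_{-k}
    return 0.5 * (g + np.roll(np.flip(g, (0, 1)), 1, (0, 1)))

gR = sym(rng.uniform(1.0, 2.0, (L, L)))   # gamma_k^R > 0
gI = sym(rng.normal(0.0, 0.2, (L, L)))    # gamma_k^I

k1, k2 = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")

def A(p, q):
    """A_{p p'} as the discrete Fourier sum over the gamma_k."""
    phase = np.exp(2j * np.pi * ((p[0]-q[0])*k1 + (p[1]-q[1])*k2) / L)
    return np.sum(phase * (gR + 1j * gI)) / (np.pi * L**2)

# translation invariance: A depends only on p - p' (mod L)
print(np.allclose(A((1, 2), (0, 1)), A((2, 3), (1, 2))))   # True
\end{verbatim}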
To construct a suitable ansatz state for compact $U(1)$ gauge fields $(\theta_{\mathbf{p}} \in \left[ -\pi, \pi \right])$
we sum over complex Gaussian states, thus ensuring periodicity:
\begin{align}
\Psi_{CPG}\left(\{ \theta_{\mathbf{p}} \}\right)\equiv \prod_{\mathbf{p}} \left(\sum_{N_{\mathbf{p}}=-\infty}^{+\infty} \right) &\Psi_{CG}\left(\{ \theta_{\mathbf{p}} - 2\pi N_{\mathbf{p}} \}\right)\times \nonumber\\
& \times\delta \left(\sum_{\mathbf{p}} \theta_{\mathbf{p}} - 2\pi N_{\mathbf{p}} \right).
\end{align}
The delta function needs to be included in order to satisfy condition~\eqref{globalconstraint} for physical states.
To shorten notation, we will denote the product over infinite sums $\prod_{\mathbf{p}} \sum_{N_{\mathbf{p}}=-\infty}^{+\infty}$ by $\sum_{\{ N_{\mathbf{p}} \}}$ and the product over integrals $\prod_{\mathbf{p}} \int_{-\pi}^{\pi} d\theta_\mathbf{p}$ by $\int_{-\pi}^{\pi} D\theta$.
The Gaussian nature of the wavefunction is exploited when evaluating expectation values of observables $O$ by combining the integral over $2\pi$ with one of the two infinite sums into an integration over the real axis
\begin{align} \label{formal_expectation_value}
\expval{O}{\Psi_{CPG}}=\sum_{\{N_{\mathbf{p}} \} } \delta\left({\sum_{\mathbf{p}} N_{\mathbf{p}}}\right) f_O(\{ N_{\mathbf{p}} \})
\end{align}
with
\begin{align}
&f_O(\{ N_{\mathbf{p}} \}) \nonumber\\
\equiv& \int\limits_{-\infty}^{+\infty} D\theta \hspace{1pt} \overline{\Psi_{CG}}\left({\theta_{\mathbf{p}}-2\pi N_{\mathbf{p}}}\right) O\left({\theta_{\mathbf{p}}}\right) \Psi_{CG}\left({\theta_{\mathbf{p}}}\right) \delta\left({\sum_{\mathbf{p}} \theta_{\mathbf{p}}}\right).
\end{align}
The integral $f_O(\{ N_{\mathbf{p}} \})$ can be carried out analytically and the remaining infinite sum needs to be evaluated numerically.
As an example, we show this procedure for the norm of the variational state, $\braket{\Psi_{CPG}}$.
The computation of observables follows analogously; details on their exact form can be found in Appendix~\ref{observables}.
After carrying out the integrals, the remaining function $f_1\left(\{ N_{\mathbf{p}} \}\right)$ is
\begin{equation}\label{partsum}
\begin{aligned}
f_1(\{ N_{\mathbf{p}} \})=\prod\limits_{\mathbf{k} \neq 0} \sqrt{\frac{\pi}{ \gamma_{\mathbf{k}}^R}} e^{2\pi i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} N_{\mathbf{p}}} e^{- \pi \sum_{\mathbf{k}} |N_{\mathbf{k}}|^2 \gamma_{\mathbf{k}} }
\end{aligned}
\end{equation}
with $N_{\mathbf{k}} \equiv \frac{1}{L} \sum_{\mathbf{p}} e^{i \mathbf{p} \mathbf{k}} N_{\mathbf{p}}$ the discrete Fourier transform of $N_{\mathbf{p}}$ and $\gamma_{\mathbf{k}} \equiv \gamma_{\mathbf{k}}^R + (\gamma_{\mathbf{k}}^I)^2 (\gamma_{\mathbf{k}}^R)^{-1} $. The $\gamma_{\mathbf{k}}$ parameters determine how fast contributions to the sum in eq.~\eqref{formal_expectation_value} decrease exponentially with increasing $|N_{\mathbf{k}}|^2$.
We group the configurations $N_{\mathbf{p}}$ of this sum in different orders such that within one order the configurations only change up to permutations.
Since all relevant configurations will contain mostly zeros, we will denote orders by its non-zero elements, e.g. $\{ N \}_{1}$ is the set of all permutations of the configuration $N'$ defined by $N'_{\mathbf{p}=0}=1$ and $N'_{\mathbf{p} \neq 0}=0$, i.e. $\{ N \}_{1} \equiv S_{N'}$.
If the parameters $\gamma_{\mathbf{k}}$ are large enough, the sum can be approximated by orders having small Euclidean norm, $||N_{\mathbf{p}} ||_{2}^2 = \sum_{\mathbf{p}} |N_{\mathbf{p}}|^2 = ||N_{\mathbf{k}}||^2_2$.
The higher number of permutations in orders with larger norm cannot compensate for the exponential suppression (this would not be the case if the $\gamma_{\mathbf{k}}$ were arbitrarily small).
Using this scheme, the constraint $\delta\left(\sum_{\mathbf{p}} N_{\mathbf{p}}\right)$ is useful since it excludes many orders, e.g. $\{ N \}_{1}$ or $\{ N \}_{-1}$.
The order with the lowest non-zero norm is therefore $\{ N \}_{1,-1}$.
In fact, the sum in eq.~\eqref{formal_expectation_value} can be expanded in orders containing only pairs of $1,-1$:
\begin{align}
&\braket{\Psi_{CPG}}\nonumber\\
=& \prod\limits_{\mathbf{k} \neq 0} \sqrt{\frac{\pi}{ \gamma_{\mathbf{k}}^R}} \sum_{ \{ N_{\mathbf{k}=\mathbf{0}}=0 \}} e^{2\pi i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} N_{\mathbf{p}}} e^{- \pi \sum_{\mathbf{k}} |N_{\mathbf{k}}|^2 \gamma_{\mathbf{k}} } \nonumber\\
=&\prod\limits_{\mathbf{k} \neq 0} \sqrt{\frac{\pi}{ \gamma_{\mathbf{k}}^R}} \left( 1 + \sum_{ \{ N \}_{1,-1}} e^{2\pi i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} N_{\mathbf{p}}} e^{- \pi \sum_{\mathbf{k}} |N_{\mathbf{k}}|^2 \gamma_{\mathbf{k}} }\right. \nonumber\\
&+\left. \sum_{ \{ N \}_{1,1,-1,-1}} e^{2\pi i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} N_{\mathbf{p}}} e^{- \pi \sum_{\mathbf{k}} |N_{\mathbf{k}}|^2 \gamma_{\mathbf{k}} } + .. \right).
\label{expansion}
\end{align}
$\sum_{ \{ N_{\mathbf{k}=\mathbf{0}}=0 \}}$ denotes the sum over the set of all $N_{\mathbf{p}}$ configurations with $N_{\mathbf{k}=\mathbf{0}}=0$, i.e. fulfilling the global constraint.
For sufficiently large $\gamma_{\mathbf{k}}$ higher orders of the type $\{ N \}_{2,-2}$ or $\{ N \}_{-2,1,1}$ are exponentially suppressed as well as orders with a large number of $1,-1$ pairs.
Thus, the above expansion can be truncated after the first few terms.
Each of the remaining orders is evaluated numerically.
The fact that configurations only change up to permutations within one order can be used to highly parallelize the computation.
On an $8 \times 8$ lattice we are able to compute the first three orders exactly.
This procedure is sufficient for most configurations of variational parameters with $\gamma_{\mathbf{k}} \gtrsim 1$.
However, in the intermediate regime $\gamma_{\mathbf{k}} \approx 1$ more orders are required to obtain good convergence.
In these cases, higher orders are computed using uniform sampling.
Since for all our purposes the different $\gamma_{\mathbf{k}}$ parameters were of the same order of magnitude and the $N_{\mathbf{p}}$ configurations only change up to a permutation within an order, a uniform probability distribution is a suitable ansatz for the exponential in eq.~\eqref{partsum}.
This is only the case for sampling within one order; it would fail if one tried to sample the whole sum.
This combined approach of exact evaluation and uniform sampling has the advantage that it introduces almost no error for most of the variational manifold (up to truncated orders which are exponentially suppressed) and even for regions where uniform sampling is required the error is still suppressed since it only occurs in higher orders. For a detailed error analysis due to truncating orders and uniform sampling see Appendix~\ref{evaluation ansatz}.
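A minimal sketch of the leading term $\{ N \}_{1,-1}$ of this expansion (our illustration on a small lattice, with all $\gamma_{\mathbf{k}}$ set to a common value and $\epsilon_{\mathbf{p}}=0$, i.e. no static charges, so the phase factor is unity):
\begin{verbatim}
import numpy as np

L = 4
gamma = 1.5 * np.ones((L, L))   # gamma_k, assumed k-independent here

def weight(N):
    """exp(-pi sum_k |N_k|^2 gamma_k) for an integer configuration N_p;
    N_{k=0} = 0 by construction here, so the k = 0 entry is irrelevant."""
    Nk = np.fft.fft2(N) / L
    return np.exp(-np.pi * np.sum(np.abs(Nk)**2 * gamma))

sites = [(i, j) for i in range(L) for j in range(L)]
S = 0.0
for p_plus in sites:            # one +1 and one -1 somewhere on the lattice
    for p_minus in sites:
        if p_plus == p_minus:
            continue
        N = np.zeros((L, L))
        N[p_plus], N[p_minus] = 1.0, -1.0
        S += weight(N)
print(f"norm/prefactor = 1 + {S:.3e} + (suppressed higher orders)")
\end{verbatim}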
When the $\gamma_{\mathbf{k}}$ become small, the above approximation fails.
In that case, one can exploit the fact that $\braket{\Psi_{CPG}}$ can be written as a multidimensional Riemann theta function~\cite{deconinck2003computing} which is defined as
\begin{align}
\theta(z|\Omega)=\sum_{N \in \mathbb{Z}^g} e^{2\pi i (z \cdot N + \frac{1}{2} N \cdot \Omega \cdot N )}
\end{align}
where $z \in \mathbb{C}^g$, $\Omega \in \mathbb{C}^{g \times g}$, such that $\Omega=\Omega^{T}$ and $\mathrm{Im}(\Omega)$ is strictly positive definite.
To bring $\braket{\Psi_{CPG}}$ into this form one can rewrite the delta function as the limit of a Gaussian and exchange the limit with the infinite sum due to uniform convergence.
One can now exploit invariance of the Riemann theta function under modular transformations, in particular the following relation holds (for details see~\cite{deconinck2003computing}):
\begin{align}
\theta\left(z|\Omega\right)= \frac{1}{\sqrt{\det(-i \Omega)}} e^{-i \pi z \cdot \Omega \cdot z} \theta\left(\Omega^{-1} z | - \Omega^{-1}\right)
\end{align}
If we insert this relation and take the limit, we obtain:
\begin{align}
\braket{\Psi_{CPG}}&=\prod\limits_{\mathbf{k} \neq 0} \sqrt{\frac{\pi}{ \gamma_{\mathbf{k}}^R \gamma_{\mathbf{k}}}} \sum_{ \{ N_{\mathbf{p}} \}} e^{- \pi \sum_{\mathbf{k}} |N_{\mathbf{k}}-\epsilon_{\mathbf{k}}|^2 \gamma_{\mathbf{k}}^{-1}} \nonumber\\
&\equiv \sum_{ \{ N_{\mathbf{p}} \}} f_{\mathrm{inv},1}\left(\{ N_{\mathbf{k} \neq 0} \}\right).
\end{align}
with $\gamma_{\mathbf{0}}^{-1}=0$. The exponential weight now depends on $\gamma_{\mathbf{k}}^{-1}$, which in principle allows one to approximate the sum with only a very limited number of orders for sufficiently small $\gamma_{\mathbf{k}}$.
However, the sum is not well defined since all constant configurations $N_{\mathbf{p}}=c (1,1,...,1)$ have weight one for $c \in \mathbb{Z}$.
Fortunately, since all $f_{\mathrm{inv},O}(\{ N_{\mathbf{k} \neq 0} \})$ are independent of $N_{\mathbf{k}=\mathbf{0}}$ (as a result of the global constraint on physical states), all these configurations can be factored out such that they cancel when calculating expectation values.
This can be formulated rigorously by defining an equivalence relation for $N_{\mathbf{p}}$ configurations:
\begin{align}
N_{\mathbf{p}} \sim_1 N_{\mathbf{p}}' \hspace{10pt} \mathrm{if} \hspace{5pt} \exists \hspace{10pt} c \in \mathbb{Z} \hspace{10pt} \mathrm{s.t.} \hspace{5pt} N_{\mathbf{p}}-N_{\mathbf{p}}'= c (1,1,...,1)
\end{align}
When calculating expectation values of observables only a sum over representatives of this equivalence relation is required:
\begin{align}
\frac{\expval{O}{\Psi_{CPG}}}{\braket{\Psi_{CPG}}}= \frac{\sum_{ \{ N_{\mathbf{p}} \}/\sim_1 } f_{\mathrm{inv},O}(\{ N_{\mathbf{k} \neq 0} \})}{\sum_{ \{ N_{\mathbf{p}} \}/\sim_1} f_{\mathrm{inv},1}(\{ N_{\mathbf{k} \neq 0} \})}
\end{align}
If we choose the representative to be the one closest in norm to the $N_{\mathbf{p}}=\mathbf{0}$ configuration, we can expand the sum again in orders having mostly $0$'s. In this case we have no constraint so that all orders must be taken into account.
For more details see Appendix~\ref{evaluation ansatz}.
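
For illustration, choosing the representative closest in norm to the zero configuration amounts to subtracting the integer $c$ that minimizes $\sum_{\mathbf{p}}(N_{\mathbf{p}}-c)^2$, i.e. the rounded mean; a minimal sketch (the helper name is ours):
\begin{verbatim}
import numpy as np

def representative(N_p):
    # Representative of the ~_1 class of the integer array N_p that is
    # closest in norm to the zero configuration: shift by the integer c
    # minimizing sum_p (N_p - c)^2, i.e. c = round(mean(N_p)).
    c = int(np.round(N_p.mean()))
    return N_p - c
\end{verbatim}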
A nice way to check the validity of both numerical approximation schemes presented above is to see whether they agree in the parameter region $\gamma_{\mathbf{k}} \approx 1 $.
This check has been carried out throughout this work; it also indicates that the whole variational manifold can be accessed, which is required in order to study the whole coupling region.
To illustrate that both approximation schemes complement each other, we give the variational energy of $\Psi_{CPG}$ with respect to the Kogut-Susskind Hamiltonian given in eq.~\eqref{Hamiltonianref}, written both in the infinite sum representation for high and for low $\gamma_{\mathbf{k}}$:
\begin{widetext}
\begin{equation} \label{varenergy}
\begin{aligned}
\frac{ \expval{H_{KS}}{\Psi_{CPG}}}{\braket{\Psi_{CPG}}} =& E_C+\frac{g^2}{4\pi} \sum_{\mathbf{k}} \gamma_{\mathbf{k}} \left(4-2 \cos\left(\frac{2\pi k_{1}}{L}\right)-2 \cos\left(\frac{2\pi k_{2}}{L}\right) \right)\\
&-\frac{g^2}{2} \sum_{\mathbf{k}} \gamma_{\mathbf{k}}^2 \left(4-2 \cos\left(\frac{2\pi k_{1}}{L}\right)-2 \cos\left(\frac{2\pi k_{2}}{L}\right) \right) \expval{|N_{\mathbf{k}}|^2}\\
&+\frac{1}{g^2} \sum_{\mathbf{p}} \left( 1- e^{-\frac{\pi}{4L^2} \sum_{\mathbf{k} \neq \mathbf{0}} \left(\gamma_{\mathbf{k}}^R\right)^{-1}} \expval{(-1)^{N_{\mathbf{p}}} \cosh\left(\pi \sum_{\mathbf{k}} \mathrm{Re}\left(N_{\mathbf{k}} b_{\mathbf{k}}^{\mathbf{p}}\right)\right) } \right)
\end{aligned}
\end{equation}
with $b_{\mathbf{k}}^{\mathbf{p}} = \frac{1}{L} \gamma_{\mathbf{k}}^I\left(\gamma_{\mathbf{k}}^R\right)^{-1} e^{-i\mathbf{p}\mathbf{k}}$.
The brackets denote the infinite sums:
\begin{align}
\label{elinfinitesum} \expval{|N_{\mathbf{k}}|^2} \equiv& \frac{\sum_{ \{ N_{\mathbf{k}=\mathbf{0}}=0 \}} e^{2\pi i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} N_{\mathbf{p}}} e^{- \pi \sum_{\mathbf{k'}} |N_{\mathbf{k'}}|^2 \gamma_{\mathbf{k'}} } |N_{\mathbf{k}}|^2}{\sum_{ \{ N_{\mathbf{k}=\mathbf{0}}=0 \}} e^{2\pi i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} N_{\mathbf{p}}} e^{- \pi \sum_{\mathbf{k'}} |N_{\mathbf{k'}}|^2 \gamma_{\mathbf{k'}} }} \nonumber\\
=&\frac{1}{2\pi} \gamma_{\mathbf{k}}^{-1} \left(4-2 \cos\left(\frac{2\pi k_{1}}{L}\right)-2 \cos\left(\frac{2\pi k_{2}}{L}\right) \right) \nonumber\\
&- \gamma_{\mathbf{k}}^{-2} \frac{\sum_{ \{ N_{\mathbf{p}} \}/\sim_1 } e^{- \pi \sum_{\mathbf{k'}} |N_{\mathbf{k'}} - \epsilon_{\mathbf{k'}}|^2 \gamma_{\mathbf{k'}}^{-1} } |N_{\mathbf{k}}-\epsilon_{\mathbf{k}}|^2}{\sum_{ \{ N_{\mathbf{p}} \}/\sim_1 } e^{- \pi \sum_{\mathbf{k'}}|N_{\mathbf{k'}} - \epsilon_{\mathbf{k'}}|^2 \gamma_{\mathbf{k'}}^{-1} }} \\
\label{maginfinitesum} \expval{(-1)^{N_{\mathbf{p}}} \cosh(\pi \sum_{\mathbf{k}} \mathrm{Re}\left(N_{\mathbf{k}} b_{\mathbf{k}}^{\mathbf{p}}\right)) }=& \frac{\sum_{ \{ N_{\mathbf{k}=\mathbf{0}}=0 \}} (-1)^{N_{\mathbf{p}}} \cosh\left(\pi \sum_{\mathbf{k}} \mathrm{Re}\left(N_{\mathbf{k}} b_{\mathbf{k}}^{\mathbf{p}}\right)\right) e^{2\pi i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} N_{\mathbf{p}}} e^{- \pi \sum_{\mathbf{k}} |N_{\mathbf{k}}|^2 \gamma_{\mathbf{k}} } }{\sum_{ \{ N_{\mathbf{k}=\mathbf{0}}=0 \}} e^{2\pi i \sum_{\mathbf{p}} \epsilon_{\mathbf{p}} N_{\mathbf{p}}} e^{- \pi \sum_{\mathbf{k}} |N_{\mathbf{k}}|^2 \gamma_{\mathbf{k}} }} \nonumber\\
=& \frac{\sum_{ \{ N_{\mathbf{p}} \}/\sim_1 } e^{-\pi \sum_{\mathbf{k}} \big(|N_\mathbf{k} - \epsilon_{\mathbf{k}} - \frac{1}{2}^{\mathbf{p}}_\mathbf{k}|^2 -\frac{1}{4} |b_{\mathbf{k}}^{\mathbf{p}}|^2 \big) \gamma_{\mathbf{k}}^{-1} } \cos \left( \pi \sum_\mathbf{k} \gamma_{\mathbf{k}}^{-1} \mathrm{Re} \left[ \big(N_\mathbf{k} - \epsilon_\mathbf{k} - \frac{1}{2}^{\mathbf{p}}_\mathbf{k}\big) b_{\mathbf{k}}^{\mathbf{p}} \right] \right)}{\sum_{ \{ N_{\mathbf{p}} \}/\sim_1 } e^{- \pi \sum_{\mathbf{k}} |N_{\mathbf{k}} - \epsilon_{\mathbf{k}}|^2 \gamma_{\mathbf{k}}^{-1} } }
\end{align}
\end{widetext}
with $\frac{1}{2}^{\mathbf{p}}_\mathbf{k}= \frac{1}{2L} e^{-i\mathbf{p}\mathbf{k}}$.
If we set $\gamma_{\mathbf{k}}^I = 0 \hspace{3pt} \forall \hspace{1pt} \mathbf{k}$ the expressions for high $\gamma_{\mathbf{k}}^R$, i.e. with the sums $\sum_{ \{ N_{\mathbf{k}=\mathbf{0}}=0 \}}$, agree with the results given in~\cite{drell1979quantum} up to redefinitions.
It is important to emphasize that the convergence of infinite sums is determined by $\gamma_{\mathbf{k}} = \gamma_{\mathbf{k}}^R + \left(\gamma_{\mathbf{k}}^I\right)^2 \left(\gamma_{\mathbf{k}}^R\right)^{-1}$ or $\gamma_{\mathbf{k}}^{-1}$, respectively.
For real-time evolutions, e.g. a quantum quench, $\left(\gamma_{\mathbf{k}}^I\right)^2$ will typically become large and so will $\gamma_{\mathbf{k}}$, irrespective of the real part $\gamma_{\mathbf{k}}^R$.
This allows one to truncate the expansion in eq.~\eqref{expansion} already after the first term, such that everything can be evaluated without resorting to sampling.
This property makes the ansatz well suited for real-time evolution compared with other methods where sampling at all times often makes it difficult to reach long times.
\section{\label{static} Static properties}
In this section, we study the variational ground state of 2+1d compact QED over the whole coupling region.
To minimize the energy we applied a gradient descent algorithm (the formula for the gradient can be found in Appendix~\ref{observables}).
We used different initial seeds to prevent the possibility of getting stuck in local minima.
To make sure that our variational state can approximate the ground state, we compare it first to known exact results.
One should note that exact diagonalization methods cannot be applied to the full theory since the local Hilbert space is infinite-dimensional.
However, for the case of a single plaquette exact analytical solutions are known, namely the Mathieu functions.
\subsection{Benchmark for one plaquette}
For benchmarking our variational ansatz, we will restrict ourselves to the sector without static charges. The Hamiltonian given in the formulation of the previous section, written in the basis of $\theta$, reads:
\begin{align}
H_{1 \mathrm{plaq}}= -2g^2 \frac{\partial^2}{\partial \theta^2} + \frac{1}{g^2} (1-\cos \theta).
\end{align}
The corresponding Schr\"odinger equation for $\xi(\theta)$ can be written as a Mathieu equation:
\begin{equation}
\left( \frac{\partial^2}{\partial z^2}+a -2q \cos(2z) \right) \Tilde{\xi}(z) = 0
\end{equation}
with $q \equiv -\frac{1}{g^4}$, $a \equiv \frac{2}{g^2}\left( E - \frac{1}{g^2} \right)$ and $\Tilde{\xi}(z) \equiv \xi(2z)$, i.e. $z=\theta/2$.
$\Tilde{\xi}$ is therefore not $2 \pi$-periodic but $\pi$-periodic.
The $\pi$-periodic solutions are usually separated into even $ce_{2r}(z,q) \hspace{2pt} (r \geq 0)$ and odd $se_{2r}(z,q) \hspace{2pt} (r \geq 1)$ solutions.
The lowest energy, i.e. the lowest characteristic value $a$, corresponds to the solution $ce_{0} (z,q)$.
In Fig.~\ref{Benchmark_one_plaquette}, this exact ground state energy is plotted against the minimized variational energy.
They agree very well over the whole coupling region; even in the regime where the difference is maximal ($g^2 \sim 0.7$) the relative error is still only around $0.5\%$.
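
The exact reference value is readily reproduced numerically; a minimal sketch using SciPy's Mathieu routines (we exploit the identity $a_0(-q)=a_0(q)$, DLMF 28.2, since \texttt{mathieu\_a} expects $q\ge0$):
\begin{verbatim}
import numpy as np
from scipy.special import mathieu_a

def exact_one_plaquette_energy(g2):
    # Ground-state energy of H = -2 g^2 d^2/dtheta^2 + (1 - cos theta)/g^2
    # from the lowest Mathieu characteristic value a_0(q), q = -1/g^4.
    q = 1.0 / g2 ** 2        # |q|; a_0(-q) = a_0(q) for the even solution
    a0 = mathieu_a(0, q)
    # invert a = (2/g^2)(E - 1/g^2):
    return 0.5 * g2 * a0 + 1.0 / g2
\end{verbatim}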
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{one_plaquette_with_inset.pdf}
\caption{Benchmark of the variational ground state energy for one plaquette against the value of the exact ground state, given by the Mathieu function with the lowest characteristic value. The inset shows the relative error of the variational ground state energy with respect to the exact ground state energy.}
\label{Benchmark_one_plaquette}
\end{figure}
\subsection{Ground state properties}
In this section, we study the properties of the variational ground state for an extended lattice and investigate its finite size effects.
We start by studying the ground state energy density $e_{0}(L)$ for lattice sizes up to $8 \times 8$ plaquettes without static charges.
We see that for couplings $g^2 \gtrsim 1.0 $ this size is already enough to get a linear scaling with $\frac{1}{L^2}$. The thermodynamic limit $e_{0}(L=\infty)$ is then extracted with the following fit
\begin{equation} \label{finitesizeformula}
e_{0}(L) = e_{0}(L=\infty) + \frac{a}{L^2}.
\end{equation}
For large couplings the thermodynamic limit can be reached with even smaller lattice sizes.
The region that limits the evaluation of our variational state to $8 \times 8$ lattices is around $g^2 \sim 1.1$, since there the variational parameters are of order one ($\gamma^R_{\mathbf{k}} \sim 1$, $\gamma^I_{\mathbf{k}}=0$) and thus both approximation schemes agree (see Appendix~\ref{evaluation ansatz}).
Hence, for couplings below this transition region we can simulate larger lattices, namely $14 \times 14$ for $g^2=0.8,0.9$ and $20 \times 20$ for $0.1 \leq g^2 \leq 0.7$.
For such lattice sizes, the finite size effects become again small enough to extrapolate to the thermodynamic limit.
The result for the ground state energy density in the thermodynamic limit over the whole coupling region is shown in Fig.~\ref{GS_thermo}. To illustrate, we show the extrapolation to the thermodynamic limit for $g^2=0.5$ and $g^2=2.0$ in Fig.~\ref{finitesize}.
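
The extrapolation itself is an ordinary least-squares fit of eq.~\eqref{finitesizeformula}; a sketch with placeholder data (the numbers below are synthetic, for illustration only):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def e0_model(L, e0_inf, a):
    # eq. (finitesizeformula): e0(L) = e0(infinity) + a / L^2
    return e0_inf + a / L ** 2

Ls = np.array([6.0, 7.0, 8.0])      # three largest available lattices
e0s = e0_model(Ls, -0.50, 0.30)     # synthetic placeholder data

(e0_inf, a), cov = curve_fit(e0_model, Ls, e0s)
print(f"thermodynamic limit: e0 = {e0_inf:.4f}")
\end{verbatim}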
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{energy_scan_thermo.pdf}
\caption{Ground state energy density extrapolated to the thermodynamic limit. The available lattice sizes are $8 \times 8$ for couplings $g^2 \geq 1.0$, $14 \times 14$ for $g^2=0.8,0.9$ and $20 \times 20$ for $g^2 \leq 0.7$. }
\label{GS_thermo}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{finite_size_scaling.pdf}
\caption{Finite size scaling for the ground state energy density at $g^2=0.5$~(a) and $g^2=2.0$~(b). For $g^2=2.0$, the ground state energy density for $L=8,7,6$ is fitted according to eq.~\eqref{finitesizeformula}. The remaining data points correspond to $L=5,4,3$. For $g^2=0.5$, lattice sizes of $L=20,18,16$ are used for the fit, the remaining data points correspond to $L=14,12,10$. }
\label{finitesize}
\end{figure}
In the next step, we study the string tension over the whole coupling region.
We can measure it in two ways: First, we place static charges and analyze the scaling of the ground state energy depending on the distance between static charges.
We will fit the potential with the following function:
\begin{align} \label{staticpotentialformula}
V(d)= \sigma d + b V_{Coul}(d)
\end{align}
where $\sigma$ is the string tension and $V_{Coul}$ is the lattice Coulomb potential in two dimensions, which becomes a logarithmic potential in the continuum limit. The values for $V(d)$ are computed as the difference between the ground state energy with static charges separated by a distance $d$ and the ground state energy without static charges. As an example, we show the fit of the potential for $g^2=2.0$ in Fig.~\ref{static_potential_fit}.
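
The $d$-dependent shape of the two-dimensional lattice Coulomb potential entering eq.~\eqref{staticpotentialformula} can be obtained from the periodic lattice Green's function; a sketch (the overall normalization is absorbed in the fit coefficient $b$):
\begin{verbatim}
import numpy as np

def lattice_green(L):
    # G(r) = (1/L^2) sum_{k != 0} e^{i k r} / (4 - 2 cos k1 - 2 cos k2)
    k = 2.0 * np.pi * np.arange(L) / L
    k1, k2 = np.meshgrid(k, k, indexing="ij")
    denom = 4.0 - 2.0 * np.cos(k1) - 2.0 * np.cos(k2)
    denom[0, 0] = np.inf               # remove the zero mode
    return np.real(np.fft.ifft2(1.0 / denom))

L = 8
G = lattice_green(L)
# Coulomb part of the potential of two opposite unit charges at distance d,
# up to the overall coefficient b fitted in eq. (staticpotentialformula):
V_coul = np.array([2.0 * (G[0, 0] - G[d, 0]) for d in range(1, L // 2 + 1)])
\end{verbatim}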
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{potential_fit_2.00}.pdf}
\caption{The static potential $V(d)$ of two charges separated by a distance $d$ at $g^2=2.0$. The data points are computed on an $8 \times 8$ lattice as the difference between the ground state energy with the respective static charge configuration and the ground state energy without static charges. The red line is a fit to the potential according to eq.~\eqref{staticpotentialformula} with $\sigma=1.001$ and $b=0.146$.}
\label{static_potential_fit}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{wilson_fit_0.50}.pdf}
\caption{The data points show different spatial Wilson loops $\expval{W(R_1,R_2)}$ in the ground state at $g^2=0.5$, computed on a $20 \times 20$ lattice, as a function of the area $R_1 \times R_2$. The maximal edge length of the Wilson loops is $10$ ($R_1,R_2 \leq 10$), with a maximum difference between the edges of one ($|R_2-R_1| \leq 1$). The red line is a fit to the exponential decay of the Wilson loops according to eq.~\eqref{wilsonloopformula} with $\sigma=0.013$, $a=0.132$ and $c=0.349$.}
\label{wilson_fit}
\end{figure}
In the second approach we use the scaling of spatial Wilson loops to extract the string tension.
This works at zero temperature since on the Euclidean lattice spatial and temporal Wilson loops are related by Euclidean rotational symmetry. At finite temperature this symmetry is broken due to the compactified temporal dimension \cite{svetitsky1982critical}. The formula to calculate Wilson loops of arbitrary size with complex periodic Gaussian states, in both the low and the high $\gamma_{\mathbf{k}}$ approximation, can be found in Appendix~\ref{observables}.
On $8 \times 8$ lattices, we consider all rectangular loops $R_1 \times R_2$ with $R_1,R_2 \leq 4$ (four is the maximal physical length due to the periodic boundary conditions).
Furthermore, we require $|R_1 - R_2| \leq 1$ to avoid additional finite size effects coming from an asymmetry in the edges.
For weak couplings, where larger lattices are accessible, we extend the maximal allowed edge length to $7$ and $10$ (for $14 \times 14$ and $20 \times 20$, respectively).
We fit the Wilson loop scaling according to the following formula:
\begin{align}\label{wilsonloopformula}
W(R_1,R_2)= e^{-\sigma R_1 R_2 - 2a (R_1+R_2) + c}
\end{align}
The first term corresponds to area law scaling with string tension $\sigma$ and the second term to perimeter law scaling.
To illustrate the procedure, we show the fit for the ground state at $g^2=0.5$ in Fig.~\ref{wilson_fit}.
We also tried to extract the string tension via Creutz ratios \cite{creutzasymptoticfreedom1980} but the results were less reliable than the Wilson loop fits.
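
One convenient way to perform the fit of eq.~\eqref{wilsonloopformula} is to linearize it by fitting $\log W$, which avoids the large dynamic range of the loop expectation values; a sketch with synthetic placeholder values:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def log_wilson(R, sigma, a, c):
    # log W(R1,R2) = -sigma R1 R2 - 2 a (R1 + R2) + c
    R1, R2 = R
    return -sigma * R1 * R2 - 2.0 * a * (R1 + R2) + c

R1 = np.array([1., 1., 2., 2., 3., 3., 4.])   # loops with |R1 - R2| <= 1
R2 = np.array([1., 2., 2., 3., 3., 4., 4.])
W = np.exp(log_wilson((R1, R2), 0.9, 0.05, 0.0))  # synthetic placeholder

(sigma, a, c), cov = curve_fit(log_wilson, (R1, R2), np.log(W),
                               p0=(0.5, 0.1, 0.0))
\end{verbatim}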
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{string_tension.pdf}
\caption{String tension fitted via the static potential (blue) and via the decay of spatial Wilson loops (orange). For larger couplings ($g^2 \geq 1.5$) the static potential fit performs better than the fit of Wilson loops and agrees with the strong-coupling prediction $g^2/2$. For small couplings ($g^2 \leq 1.4$) Wilson loop fits are more suitable. The more reliable method is shown with full data points while data points of the other method are made transparent. }
\label{stringtension}
\end{figure}
The result for both approaches is shown in Fig.~\ref{stringtension}.
For large values of the coupling constant, the fit for the static potential works well and agrees with the strong-coupling prediction $\frac{g^2}{2}$.
Since a large coupling implies a significant distance from the continuum limit, moderate lattice sizes are sufficient to observe the onset of the linear part of the potential. The scaling of Wilson loops is prone to errors in that regime as expectation values of large Wilson loops become close to machine precision.
However, for small couplings the Wilson loop scaling is the better method since expectation values of Wilson loops do not decay as fast due to the small string tension.
Since both methods complement each other, we made the string tension data from the static potential transparent for couplings $g^2 \leq 1.4$ and the data extracted from the Wilson loop scaling transparent for $g^2 \geq 1.5$.
The remaining full data points in Fig.~\ref{stringtension} are the most reliable estimates for the string tension.
For small couplings an exponential decay of the string tension is expected according to the formula \cite{ambjorn1982string}:
\begin{align}
\sigma= c \sqrt{\frac{g^2}{\pi^2}} e^{-\frac{\pi^2}{g^2} \nu_0}.
\end{align}
If we fit this formula to the string tension data of the Wilson loop fits between $0.5 \leq g^2 \leq 0.9$ (see Fig.~\ref{stringtensionexp}) we obtain $c=23.53$ and $\nu_0=0.318$ which is close to the theoretical prediction ($\nu_{0,\mathrm{theo}}=0.321$)~\cite{loan2003path}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{string_tension_exp.pdf}
\caption{String tension in the weak-coupling regime. While the Wilson loop fits show exponential decay of the string tension close to the theoretical value ($\nu_0=0.318$ compared to $\nu_{0,\mathrm{theo}}=0.321$), the static potential fits become unreliable for couplings $g^2 \leq 0.6$.}
\label{stringtensionexp}
\end{figure}
\subsection{Truncation effects}
Since our wave function does not require a truncation, we can study truncation effects of other methods.
Here, we will focus on a truncation in the electric basis.
To see these effects we will study the variance of the electric field operator.
For simplicity, we will look at this effect without static charges, since they only introduce $\epsilon$-shifts ($-1/2 < \epsilon < 1/2$) in the electric field.
Since the expectation value of the electric field vanishes in the absence of static charges, we can write the variance in terms of the electric energy
\begin{align}
\mathrm{Var} (E_{\mathbf{x},i}) &= \expval{E_{\mathbf{x},i}^2}-\expval{E_{\mathbf{x},i}}^2=\frac{1}{L^2 g^2} \expval{H_{E}}.
\end{align}
The variance is plotted in the inset of Fig.~\ref{comparison ED} for the ground state computed in the previous section. To show the difference quantitatively, we compare our variational state to an exact diagonalization calculation of a $\mathbb{Z}_3$ lattice gauge theory.
To reduce the required Hilbert space dimension, we formulate it in terms of plaquette variables, in the same style as we did for the $U(1)$ theory.
The Hilbert space is truncated in the eigenbasis of $L_{\mathbf{p}}$ to three states (corresponding to the eigenvalues $m=0,1,-1$).
To make this a consistent theory we define the gauge field operators cyclically:
\begin{align}
U_p^{\dagger} \ket{m} = \ket{m'}\qq{with $m'=m+1 \pmod{3}$.}
\end{align}
This is equivalent to a $\mathbb{Z}_3$ lattice gauge theory formulated in link variables:
\begin{align}
H_{Z3}= \frac{g^2}{6} \sum_{\mathbf{x},i} (2-P_{\mathbf{x},i}-P^{\dagger}_{\mathbf{x},i}) + \frac{1}{2g^2} \sum_{\mathbf{p}} (2- Q_{\mathbf{p}}-Q_{\mathbf{p}}^\dagger )
\end{align}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{compare_ED_variational_with_variance.pdf}
\caption{
Comparison of the ground state energy density on a $3 \times 3$ lattice without static charges, computed for a $\mathbb{Z}_3$ lattice gauge theory by exact diagonalization (orange) and for the full $U(1)$ theory by minimizing the variational energy (blue). The inset shows the variance of the electric field on a link in the variational ground states.
}
\label{comparison ED}
\end{figure}
with $Q_{\mathbf{p}} \equiv Q_{\mathbf{x},1}Q_{\mathbf{x}+\mathbf{e}_{1},2} Q_{\mathbf{x}+\mathbf{e}_{2},1}^{\dagger}Q_{\mathbf{x},2}^{\dagger}$ where $\mathbf{x}$ is the vertex at the bottom left corner of plaquette $\mathbf{p}$ and $Q_{\mathbf{x},i}$ the cyclic raising operator of the electric field on link $(\mathbf{x},i)$, such that (see \cite{horn1979hamiltonian} for details)
\begin{align}
&P_{\mathbf{x},i}^N=Q_{\mathbf{x},i}^N=1&& P_{\mathbf{x},i}^\dagger P_{\mathbf{x},i}=Q_{\mathbf{x},i}^\dagger Q_{\mathbf{x},i}=1\nonumber\\
&P_{\mathbf{x},i}^\dagger Q_{\mathbf{x},i} P_{\mathbf{x},i} =e^{i\frac{2\pi}{3}}Q_{\mathbf{x},i}. &&
\label{eq:ZN_algebra}
\end{align}
The maximal lattice size we can reach in our ED calculation within a reasonable amount of time is $3 \times 3$ plaquettes.
We calculate the ground state energy density for this lattice size with ED and our variational ansatz.
The result is shown in Fig.~\ref{comparison ED}. The two approaches exhibit good agreement in the strong-coupling regime. For intermediate couplings the differences become more pronounced, leading to qualitatively different results in the weak-coupling limit $g \to 0$.
Since the electric Hamiltonian becomes bounded in the truncated theory, it does not contribute in the weak coupling limit. In the $U(1)$ theory, however, the electric Hamiltonian is unbounded and the growth in electric energy leads to a finite result for the ground state energy in the continuum limit.
\section{\label{dynamic} Real-time dynamics}
In this section, we study out-of-equilibrium dynamics by applying the following quench protocol: We prepare the ground state for the compact QED Hamiltonian at some coupling $g^2$, quench to a Hamiltonian with a different coupling constant $g^2_{\mathrm{quench}}$ and observe the subsequent time evolution. The observables we track during the evolution are Wilson loops and the electric field (their expectation values in terms of the variational parameters can be found in Appendix~\ref{observables}). In addition we check whether the energy is conserved throughout the whole time evolution.
\subsection{Time-dependent variational principle} \label{TDVP}
To study dynamical phenomena, we employ the time-dependent variational principle.
The equations of motion are projected onto the tangent plane of our variational manifold. For every variational parameter $\gamma_{\mathbf{k}}^{R/I}$ we define a corresponding tangent vector $\ket{\Psi_{\mathbf{k}}^{R/I}} \equiv \mathbb{P}_{\Psi} \left( \frac{\partial}{\partial \gamma_{\mathbf{k}}^{R/I}} \ket{\Psi_{CPG}} \right)$ where $\mathbb{P}_{\Psi}$ ensures orthogonality to $\ket{\Psi_{CPG}}$:
\begin{align} \label{projectionvarmanifold}
\mathbb{P}_{\Psi}(\ket{\psi}) \equiv \ket{\psi}-\braket{\Psi_{CPG}}{\psi}\ket{\Psi_{CPG}}
\end{align}
If we restrict the momenta $\mathbf{k}$ of the variational parameters to the set $\mathcal{K}$ defined in eq.~\eqref{defkr}, all tangent vectors become linearly independent. This allows us to invert the Gram matrix $G_{\mathbf{k'}\mathbf{k}} \equiv \braket{\Psi_{\mathbf{k'}}^R}{\Psi_{\mathbf{k}}^R}$ with $\mathbf{k},\mathbf{k'} \in \mathcal{K}$. Since our variational manifold is Kähler, we can express the time evolution of the variational parameters $\gamma_{\mathbf{k}}^{R/I}$ ($\mathbf{k} \in \mathcal{K}$) in the following way \cite{hackl2020geometry}:
\begin{align}
i \left(\dot{\gamma}_{\mathbf{k}}^R+i \dot{\gamma}_{\mathbf{k}}^I\right)&= \frac{1}{2} \sum_{\mathbf{k'} \in \mathcal{K}} (G^{-1})_{\mathbf{k}\mathbf{k'}} \left( \frac{\partial E}{\partial \gamma_{\mathbf{k'}}^{R}} + i \frac{\partial E}{\partial \gamma_{\mathbf{k'}}^I} \right)
\end{align}
with $E \equiv \frac{ \expval{H_{KS}}{\Psi_{CPG}}}{\braket{\Psi_{CPG}}}$ the variational energy in eq.~\eqref{varenergy} and $\dot{\gamma}\equiv\pdv{\gamma}{t}$. The formula for the calculation of the Gram matrix and the gradient of the variational energy can be found in Appendix~\ref{observables}.
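Schematically, one time step therefore consists of assembling the Gram matrix and the energy gradient and solving a linear system. A minimal explicit-Euler sketch follows; in practice a higher-order integrator is advisable, and \texttt{gram} and \texttt{grad\_E} stand for the expressions given in Appendix~\ref{observables}:
\begin{verbatim}
import numpy as np

def tdvp_step(gamma, dt, gram, grad_E):
    # gamma: complex vector gamma_R + i gamma_I over the momentum set K
    # equation of motion: i * gamma_dot = (1/2) G^{-1} (dE/dgR + i dE/dgI)
    G = gram(gamma)
    rhs = 0.5 * np.linalg.solve(G, grad_E(gamma))
    gamma_dot = -1j * rhs
    return gamma + dt * gamma_dot
\end{verbatim}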
\subsection{Benchmark of variational ansatz}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{benchmark_quench_Z3_g_2.5-4.0}.pdf}
\caption{Benchmark of the variational time evolution of the $1 \times 1$ Wilson loop after a quench from $g^2=2.5$ to $g^2=4.0$ on a $3 \times 3$ lattice. It is compared with the time evolution of $\mathbb{Z}_3$ lattice gauge theory computed by exact diagonalization (the truncation from $U(1)$ to $\mathbb{Z}_3$ should only play a minor role in the strong-coupling regime).}
\label{benchmark_time}
\end{figure}
Since we are dealing with a variational ansatz, one should try to test it against exact results.
For a comparison, we use the exact diagonalization results of the $\mathbb{Z}_{3}$ theory.
Since the truncation in the electric basis already led to significant differences in the ground state energy at intermediate couplings, and since real-time dynamics increases the variance of the electric field, we can only expect reasonable agreement for a quench within the strong-coupling region.
We choose to quench the Hamiltonian from $g^2=2.5$ to $g^2=4.0$. The result is shown in Fig.~\ref{benchmark_time}. Even though truncation effects might still play a minor role in this quench, the comparison shows that the variational state can approximate the amplitude and frequency of the oscillation.
\subsection{Quench dynamics}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{quench_small_81012.pdf}
\caption{Variational time evolution after a quench from $g^2=0.8$ to $g^2=0.5$ for lattice sizes of $8 \times 8$, $10 \times 10$ and $12 \times 12$. The inset shows the relative error in energy $E$ with respect to the initial energy $E_0$ after the quench.}
\label{smallquench81012}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{quench_small_161820.pdf}
\caption{Variational time evolution after a quench from $g^2=0.6$ to $g^2=0.3$ for lattice sizes of $16 \times 16$, $18 \times 18$ and $20 \times 20$. The inset shows the relative error in energy $E$ with respect to the initial energy $E_0$ after the quench.}
\label{smallquench161820}
\end{figure}
We start with quenches in the weak-coupling regime where finite-size effects are most pronounced. We are interested in the maximal time up to which we can extract physics in the thermodynamic limit before boundary effects due to our finite lattice start to play a role.
To compute that point in time, we perform the same quench on different lattice sizes and check where they start to deviate from each other.
In order to easily compare observables for different lattice sizes, we restrict ourselves to the sector without static charges. We will focus on tracking the $1 \times 1$ Wilson loop during time evolution.
We probed two different quenches, one from $g^2=0.8$ to $g^2=0.5$ for an $8 \times 8$, $10 \times 10$ and $12 \times 12$ lattice (shown in Fig.~\ref{smallquench81012}) and another one from $g^2=0.6$ to $g^2=0.3$ for lattice sizes of $16 \times 16$, $18 \times 18$ and $20 \times 20$ (shown in Fig.~\ref{smallquench161820}).
The time evolution on the $8 \times 8$ lattice agrees with the $12 \times 12$ lattice up to $t_{\mathrm{max},8} \sim 3.8$, and the $10 \times 10$ lattice up to $t_{\mathrm{max},10} \sim 4.8$. The energy is conserved for all lattice sizes up to a relative error of order $10^{-3}$. During the time spans where we can reliably extract the time evolution, the Wilson loops indicate equilibrating behavior. This statement is supported by the second quench, where the smaller coupling constants allow us to reach larger lattices. The $16 \times 16$ and $18 \times 18$ lattices agree with the $20 \times 20$ lattice up to $t_{\mathrm{max},16} \sim 8.5$ and $t_{\mathrm{max},18} \sim 9.5$, respectively. The energy is conserved up to a relative error of $10^{-6}$. We can only make a statement about the equilibration of Wilson loops since we do not have access to thermal expectation values.
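
The deviation times quoted above can be extracted with a simple threshold criterion; a sketch (the tolerance is an illustrative choice):
\begin{verbatim}
import numpy as np

def deviation_time(t, W_small, W_large, tol=1e-2):
    # earliest time at which the signal on the smaller lattice deviates
    # from the largest available lattice by more than tol
    dev = np.abs(W_small - W_large) > tol
    return t[np.argmax(dev)] if dev.any() else t[-1]
\end{verbatim}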
An interesting direction for future research would be to check whether the Wilson loops thermalize.
For the calculation of thermal expectation values one could use Monte-Carlo simulations, which have proven successful in computing thermal properties in lattice gauge theory \cite{coddington1986deconfining,chernodub2001lattice}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{quench_from_weak_to_strong_no_energy.pdf}
\caption{Variational time evolution of the $1 \times 1$, $2 \times 2$, $3 \times 3$ and $4 \times 4$ Wilson loop after a quench from $g^2=0.5$ to $g^2=4.0$ on an $8 \times 8$ lattice. The inset shows the relative error in energy $E$ with respect to the initial energy $E_0$ after the quench.}
\label{weaktostrong}
\end{figure}
In the next step, we look at a quench from weak to strong coupling ($g^2=0.5$ to $g^2=4.0$) for an $8 \times 8$ lattice without static charges.
We track the time evolution of quadratic Wilson loops with edge sizes ranging from one to four. The result is shown in Fig.~\ref{weaktostrong}.
All Wilson loops equilibrate at zero on short time scales (between $t_{\mathrm{eq},4} \sim 0.2$ for the $4 \times 4$ Wilson loop and $t_{\mathrm{eq},1} \sim 0.5$ for the $1 \times 1$ Wilson loop); to exclude finite-size effects we carried out the same evolution on a $7 \times 7$ lattice and found the same behavior.
The coupling constant at $g^2=4.0$ is large enough to approximate the spectrum by the strong-coupling limit $g^2 \to \infty$, where the eigenstates $\ket{n}$ become diagonal in the electric basis (this can be seen, e.g., in the spectrum of the $\mathbb{Z}_3$ theory, which is accessible via exact diagonalization). In this limit the thermal expectation value of Wilson loops vanishes trivially:
\begin{align}
\expval{W(C)}_{\text{th}}= \frac{1}{Z} \sum_{n} e^{-\beta E_n} \expval**{\prod_{\mathbf{p} \in C} \frac{1}{2}(U_{\mathbf{p}} + U_{\mathbf{p}}^{\dagger} ) }{ n} = 0.
\end{align}
For this special quench, we can thus verify that the Wilson loops equilibrate at their thermal expectation value.
The next quench we will study is from strong to weak coupling. We quench on an $8 \times 8$ lattice from $g^2=4.0$ to $g^2=0.5$ with static charges horizontally separated by four links.
Besides the $1 \times 1$ Wilson loop at the origin, we observe how the electric field of the ground state at $g^2=4.0$, a strongly confined flux tube, evolves after the quench; in particular we track the electric field $E_1(x_1=2,x_2=4)$ (one of the links inside the flux tube, see Fig.~\ref{strongtoweak}). It starts close to one, the strong-coupling value of the electric field, and decreases rapidly to $E_1^C(2,4)=0.322$, the value of the Coulomb electric field on that link (shown as the red dashed line).
The Wilson loop seems to equilibrate on longer time scales.
The energy is conserved up to a relative error of $10^{-2}$.
The larger error compared to previous quenches can be explained by the fact that around $t \sim 0.25$ the approximation method of the infinite sums appearing in the evaluation of expectation values changes from the low $\gamma_{\mathbf{k}}$ to the high $\gamma_{\mathbf{k}}$ approximation (see section~\ref{variationalansatz}).
In that transition region higher orders need to be calculated using uniform sampling (see Appendix~\ref{evaluation ansatz}), which introduces additional errors.
However, the relative error is still small and observables have no visible jump in this region, indicating that the two approximation schemes work.
After the transition region the energy is well conserved due to the fact that the variational parameters $\gamma_{\mathbf{k}}^{I}$ increase, making the approximation of the infinite sums involved in the calculation of expectation values very easy (see section~\ref{variationalansatz}).
The spreading of the electric field from inside the flux tube between the two charges towards the Coulomb configuration of the electric field is illustrated in Fig.~\ref{strongtoweakefield}. An interesting question is whether the state becomes deconfined at long times. We cannot use the scaling of spatial Wilson loops, since it only serves as an indicator for confinement in the ground state \cite{svetitsky1982critical}. Since in our formulation the value of the longitudinal (Coulomb) part of the electric field is fixed and only the transversal part is dynamical (see Appendix~\ref{appformulation}), we can measure precisely how much an electric field configuration differs from the Coulomb configuration. At $t=2.0$, in the last of the three pictures in Fig.~\ref{strongtoweakefield}, the difference to the Coulomb configuration is of order $10^{-12}$ for the whole lattice, with no remnant of an electric flux tube between the two charges. This is a strong indication that the state becomes deconfined, corresponding possibly to a thermal state with a temperature above the confinement-deconfinement transition~\cite{parga1981finite,svetitsky1986symmetry}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{quench_with_wilson_and_elfield_no_energy.pdf}
\caption{Variational time evolution on an $8 \times 8$ lattice after a quench from $g^2=4.0$ to $g^2=0.5$ with a positive charge placed at ($x_1=2,x_2=4$) and a negative charge at ($x_1=6,x_2=4$). We measure (a) the $1 \times 1$ Wilson loop at the origin $W(1,1)$ and (b) the electric field on a link between the two charges $E_1(2,4)$. The red dashed line represents the Coulomb value of the electric field. The inset shows the relative error in energy $E$ with respect to the initial energy $E_0$ after the quench.}
\label{strongtoweak}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{efield_timeevol.pdf}
\caption{Variational time evolution of the electric field on an $8 \times 8$ lattice after a quench from $g^2=4.0$ to $g^2=0.5$ with a positive charge placed at ($x_1=2,x_2=4$) (blue dot) and a negative charge at ($x_1=6,x_2=4$) (red dot). The color of the charges is only for graphical illustration (not related to the colorbar). The expectation value of the electric field is shown at $t=0.0$, $t=0.2$ and $t=2.0$. At $t=0.0$, the state is in the variational ground state for $g^2=4.0$ where the electric flux is confined between the two charges. After the quench, the electric field starts to spread over the lattice ($t=0.2$) and equilibrates at the Coulomb value for this charge configuration ($t=2.0$).}
\label{strongtoweakefield}
\end{figure*}
\section{\label{conclusion} Conclusion}
We introduce a new class of variational states, complex periodic Gaussian states, to study ground state properties and real-time dynamics in a (2+1)-dimensional $U(1)$ lattice gauge theory.
The evaluation of expectation values can only partially be done analytically; an infinite sum remains to be computed numerically.
We present a scheme to approximate these sums for all variational parameters on an $8 \times 8$ lattice, and in the weak-coupling regime for lattices up to $20 \times 20$.
This allows us to study the variational ground state over the whole coupling region and to extrapolate to the thermodynamic limit. We benchmark our ansatz against the exact ground state for the one-plaquette case.
We also compute the string tension using two different methods: first, by fitting the static potential between two charges with a 2d Coulomb potential plus a linear potential; second, by fitting the exponential decay of Wilson loops with an area and a perimeter law.
The two approaches are complementary: in the strong-coupling regime Wilson loops become difficult to fit due to the tiny values of large Wilson loops, while the static potential approach works well as energy differences become larger. In the weak-coupling regime, however, the string tension becomes too small to extract the linear part of the potential on the given lattice sizes, while Wilson loops decay only modestly, allowing reliable fits. We are able to observe the expected exponential decay of the string tension in the weak-coupling regime.
Since our variational states do not need a truncation of the local Hilbert space, we compare our $U(1)$ ground state data with exact diagonalization results for a $\mathbb{Z}_{3}$ theory to study truncation effects in the electric basis.
The results agree for strong couplings and start to differ significantly for intermediate couplings. While the ground state energy of the truncated theory goes to zero in the continuum limit $g^2 \to 0$ (since the electric energy is bounded), the variational ground state energy tends towards a finite value due to the variance of the electric field growing unboundedly.
In the last section, using the time-dependent variational principle, we probe out-of-equilibrium dynamics after a quench of the coupling constant. As a benchmark, we compare the variational time evolution after a quench within the strong-coupling regime with exact diagonalization results of the $\mathbb{Z}_3$ theory. We then start by studying quenches within the weak-coupling regime where we expect finite size effects to be significant. We compare the time evolution of a Wilson loop after the same quench for different lattice sizes to estimate at which time scales smaller lattices deviate from the thermodynamic limit. The times we can reach are large enough to indicate equilibration of Wilson loops.
In the next step, we perform a quench from weak ($g^2=0.5$) to strong coupling ($g^2=4.0$) and track the time evolution of differently sized Wilson loops. They all equilibrate at zero, which is the thermal expectation value in the strong-coupling limit ($g^2 \to \infty$). Since the spectrum at $g^2=4.0$ is close to the strong-coupling limit, this indicates that the Wilson loops equilibrate at their thermal expectation values.
We also study a quench from strong to weak coupling in the sector of two static charges. We observe that the electric flux, which is perfectly confined for the strong-coupling ground state, spreads over the whole lattice and equilibrates at the Coulomb value for the electric field to very high accuracy, leaving no trace of confinement.
In all considered quenches, we see equilibrating behavior of observables up to the times where boundary effects start to play a role. It would be interesting to compare the equilibrated expectation values to thermal expectation values, which can be computed by Monte-Carlo simulations \cite{coddington1986deconfining,chernodub2001lattice}. Another interesting application of Monte-Carlo methods would be the numerical evaluation of the variational ansatz by approximating the infinite sums. This could potentially enable the simulation of larger system sizes. The accuracy of these simulations would need to be high in order to carry out the evolution over reasonable time scales while ensuring energy conservation. Another natural extension of this work is the treatment of (3+1)-dimensional compact QED. By generalizing an idea in Ref.~\cite{drell1979quantum} to complex Gaussian states, a variational ansatz can be designed for 3+1 dimensions. However, due to the additional local constraints appearing in 3+1 dimensions (compared to one global constraint in 2+1 dimensions), a new numerical approximation scheme would be required. Another interesting idea is to include dynamical matter. To couple the gauge degrees of freedom to matter, it is essential to find a formulation of such a theory that admits the same gauge-invariant variables as used in this work for static matter. Recently, such a formulation has been proposed \cite{bender2020gauge}. This could allow one to combine a periodic Gaussian state for the gauge field with a fermionic ansatz state describing dynamical matter.
Extending the ansatz to non-Abelian gauge theories is more difficult since they do not allow a translationally invariant formulation in terms of gauge-invariant plaquette variables. However, other gauge-invariant variables could be used to construct similar ansatz states \cite{ligterink2000toward}.
\acknowledgements
We thank Lorenzo Piroli and Lucas Hackl for helpful discussions. J.B., P.E. and I.C. acknowledge support by the EU-QUANTERA project QTFLAG (BMBF grant No. 13N14780). P.E. acknowledges support from the International Max-Planck Research School for Quantum Science and Technology (IMPRS-QST). J.B. and P.E. thank the Hebrew University of Jerusalem for the hospitality during their stay at the Racah Institute of Physics.
\TaggSection{SectionIntroduction}
In this note we construct Nadel multiplier ideal sheaves on Fano
manifolds that do not admit K\"ahler-Einstein metrics, using the Ricci flow.
The result is a simple consequence of
the uniformity of the Poincar\'e and Sobolev inequalities along the flow.
This allows one to obtain another proof of the convergence of the Ricci
flow on a certain class of Fano manifolds.
The theory of obstructions to the existence of canonical K\"ahler metrics
(see, e.g., \ref{F})
has a long history starting with the observation that a K\"ahler-Einstein manifold must have a definite or zero
first Chern class. Lichnerowicz and Matsushima proved that for a constant
scalar curvature
K\"ahler manifold the group of automorphisms $\hbox{$\hbox{\rm Aut}(M,\hbox{\rm J})$}$ is a complexification of the group of
isometries. Later it was shown that on such a manifold the closed 1-form on the space of K\"ahler metrics
$\H_{\Omega}$
(see the end of this section for notation and definitions)
defined by the scalar curvature minus its average must be basic with respect to $\hbox{$\hbox{\rm Aut}(M,\hbox{\rm J})$}_0$,
that is admit an $\hbox{$\hbox{\rm Aut}(M,\hbox{\rm J})$}_0$-invariant potential function, or equivalently
its Futaki character must vanish.
Kobayashi and L\"ubke proved that the tangent bundle of a K\"ahler-Einstein } \def\KEno{K\"ahler-Einstein manifold is semistable.
In contrast to these necessary conditions, came work on certain sufficient conditions for the
existence of K\"ahler-Einstein } \def\KEno{K\"ahler-Einstein metrics on Fano manifolds.
Siu, Tian and Yau showed that certain finite groups of symmetries can be used to this end and
produced several examples \ref{Si,T1,TY}.
Tian studied the singular locus of sequences of plurisubharmonic functions
and introduced a sufficient condition in terms
of a related holomorphic invariant \ref{T1}: $\alpha_M(G)>n/(n+1)$, where $G$ is a compact subgroup of $\hbox{$\hbox{\rm Aut}(M,\hbox{\rm J})$}$ and
$$
\alpha_M(G)=\sup\,\{\;\alpha \,:\, \intm e^{-\alpha(\varphi-\sup\varphi)}\o^n<C_\alpha,\quad \all\,\varphi\in{\calH_{\o}}(G),\quad [\omega]=c_1\}.
$$
Then, Nadel introduced the notion of a multiplier
ideal sheaf on a compact K\"ahler manifold and showed that the nonexistence of certain such
$G$-invariant sheaves implies that $\alpha_M(G)\ge 1$ \ref{N}.
This construction is related to a theory introduced earlier by Kohn in a different context \ref{K}.
Tian translated the failure of the continuity method when
a K\"ahler-Einstein } \def\KEno{K\"ahler-Einstein metric does not
exist to the statement that a certain subsequence of K\"ahler potentials along Aubin's continuity path \ref{A2}
$$
\omega_{\varphi_t}^n=\o^n e^{f_\omega-t\varphi_t},\quad t\in[0,t_0),\; t_0\le1, \eqno\AubinPathEq
$$
will diverge
along a subvariety in a manner that can be ruled out when $\alpha_M(G)$ is large enough.
Nadel showed that furthermore the blow-up will occur along a subscheme cut out by a
coherent sheaf of multiplier ideals satisfying certain cohomology vanishing
conditions. These results provide a powerful tool in showing existence of K\"ahler-Einstein metrics,
since these conditions are often violated in specific examples.
This technique was revisited by Demailly and Koll\'ar who also extended it to orbifolds \ref{DK}.
Nadel's main result can be stated as follows.%
\note{\cmreight
This theorem makes use of Nadel's vanishing theorem; see \Ref{MainSection} for the statement.
Also,
Nadel's original statement includes (for simplicity) only the case $\mf\gamma\in(\frac n{n+1},1)$, however
later he comments on the possibility of extending the allowed interval for $\mf\gamma$ \reffoot{N, p. 579}.
}
\FThm{
\TaggThm{NadelThm}
\ref{N}
Let $(M,\h{\rm J})$ be a Fano manifold not admitting a K\"ahler-Einstein metric.
Let $\gamma\in(\frac n{n+1},\infty)$ and let $\omega\in\H_{c_1}$.
Then there exists a subsequence $\{\varphi_{t_j}\}_{j\ge0}$ of solutions of \AubinPathEq\
such that $\varphi_{t_j}-\sup\varphi_{t_j}$ converges in the $L^1(M,\omega)$-topology
to $\varphi_\infty\in PSH(M,\h{\rm J},\omega)$ and
${\h{\cal I}}(\gamma\varphi_\infty)$ is a proper multiplier ideal sheaf
satisfying
$$
H^r(M,{\h{\cal I}}(\gamma\varphi_\infty)\otimes K_M^{-\lfloor\gamma\rfloor})=0, \quad \all\, r\ge 1.
\eqno\NadelVanishingFanoEq
$$
}
\noindent
Following Nadel, the sheaves with $\gamma<1$ will be referred to as Nadel sheaves
(see \Ref{NadelSheafDef} below).
The Ricci flow, introduced by Hamilton \ref{H1}, provides another method
for constructing K\"ahler-Einstein } \def\KEno{K\"ahler-Einstein metrics on a Fano manifold, and it is therefore
natural to ask whether this method will also yield multiplier ideal obstruction
sheaves in the absence of a K\"ahler-Einstein metric.
It may be written as a flow equation on the space of K\"ahler potentials ${\calH_{\o}}$,
$$
\o_{\vpt}^n=\o^n e^{f_{\o}-\vp_{\!t}+\dot\vp_{\!t}},\quad \varphi(0)=\varphi_0.\eqno\FlowPotentialMAEq
$$
This question was first addressed by Phong, \v Se\v sum and Sturm
who proved the following result.
\vglue-0.15cm
\FThm{
\TaggThm{PSSThm}
\ref{PSS}
Let $(M,\h{\rm J})$ be a Fano manifold not admitting a K\"ahler-Einstein metric.
Let $\gamma\in(1,\infty)$ and let $\omega\in\H_{c_1}$.
Then there exists an initial condition $\varphi_0$ and a subsequence
$\{\varphi_{t_j}\}_{j\ge0}$ of solutions of
\FlowPotentialMAEq\
such that $\varphi_{t_j}$ converges in the $L^1(M,\omega)$-topology to $\varphi_\infty\in PSH(M,\h{\rm J},\omega)$ and
${\h{\cal I}}(\gamma\varphi_\infty)$ is a proper multiplier ideal sheaf satisfying
\NadelVanishingFanoEq.
}
\vglue-0.15cm
The sheaves thus obtained require the exponent to lie in a smaller interval than in
\Ref{NadelThm}, that is, they are not Nadel sheaves. This is a crucial difference between these results.
Indeed, the smaller the exponent $\gamma$, the stronger the vanishing theorem satisfied
by the corresponding multiplier ideal sheaf.
The gist of \Ref{NadelThm} is to use functions invariant under a compact (not necessarily maximally compact)
subgroup
of automorphisms $G$ in order to obtain $G$-invariant
subschemes satisfying in addition certain cohomology vanishing restrictions. When
$\gamma\in(n/(n+1),1)$, the sheaves constructed by Nadel are not only $G$-invariant, but also
satisfy
$$
H^r(M,{\h{\cal I}}(\gamma\varphi_\infty))=0, \quad \all\, r\ge 0,\eqno\UsualNadelVanishingEq
$$
while this need not hold for
sheaves corresponding to exponents $\gamma>1$. The subschemes cut out by sheaves
satisfying \UsualNadelVanishingEq\ satisfy various restrictions, and
it is these restrictions that render the continuity method useful in proving
existence of K\"ahler-Einstein } \def\KEno{K\"ahler-Einstein metrics on
a large class of manifolds. For example, to state the simplest restrictions,
Nadel shows that \UsualNadelVanishingEq\ implies
that the corresponding subscheme is connected and has arithmetic genus zero
and that if it is one-dimensional it
is a tree of rational curves.
It is not clear how to use the sheaves with $\gamma>1$ to prove such results.
The main result of this note is that the Ricci flow does produce Nadel sheaves,
with $\gamma\in(n/(n+1),1)$.
\vglue-0.1cm
\FThm{
\TaggThm{MainThm}
Let $(M,\h{\rm J})$ be a Fano manifold not admitting a K\"ahler-Einstein metric.
Let $\gamma\in(n/(n+1),\infty)$ and let $\omega\in\H_{c_1}$.
Then there exists an initial condition $\varphi_0$ and a subsequence
$\{\varphi_{t_j}\}_{j\ge0}$ of solutions of
\FlowPotentialMAEq\
such that $\varphi_{t_j}-\frac1V\intm\varphi_{t_j}\o^n$ converges in the $L^1(M,\omega)$-topology to $\varphi_\infty\in PSH(M,\h{\rm J},\omega)$ and
${\h{\cal I}}(\gamma\varphi_\infty)$ is a proper multiplier ideal sheaf satisfying
\NadelVanishingFanoEq.
}
Perelman proved that when a K\"ahler-Einstein metric exists the Ricci flow will converge to it in the sense
of Cheeger-Gromov. Therefore the following corollary, due to Nadel and Perelman, is known:
\FCor{\ref{N,TZ}
\TaggCor{ConvergenceCor}
Let $(M,\h{\rm J})$ be a Fano manifold and let $G$ be a compact subgroup
of \hbox{$\hbox{\rm Aut}(M,\hbox{\rm J})$}. Assume
that $(M,\h{\rm J})$ does not admit a $G$-invariant Nadel sheaf
as in \Ref{NadelSheafDef}. Then
the Ricci flow will converge in the sense of Cheeger-Gromov to
a K\"ahler-Einstein } \def\KEno{K\"ahler-Einstein metric starting with any $G$-invariant K\"ahler metric.
}
\vglue-0.1cm
Observe that \Ref{MainThm} makes it possible to obtain another proof of this corollary. This proof
differs from the one obtained by the combination of the results of Nadel and Perelman only
in the method of obtaining the $L^\infty$ estimate (and not in the method of obtaining
higher order estimates and convergence \ref{CT,PSS,TZ}).
It has the virtue of simultaneously proving
that a K\"ahler-Einstein } \def\KEno{K\"ahler-Einstein metric exists and that the flow will converge to it (instead of first using the continuity
method to prove that a K\"ahler-Einstein } \def\KEno{K\"ahler-Einstein metric exists and then using this fact via properness of
an energy functional and Ko\polishl{}odziej's results \ref{Ko} to obtain the $L^\infty$ estimate).
In particular, this gives another proof of the theorem that the two-sphere $S^2$
(and more generally, complex projective space $\PP^n$) may be
uniformized using the Ricci flow\note{%
\cmreight Indeed, observe that the group of automorphisms generated by the rotations $\mf z\mapsto ze^{\sqrt{-1}\theta}$
and the
inversion $\mf z\mapsto 1/z$ acts without fixed points on $\mf S^2$ while a 0-dimensional Nadel subscheme is a single reduced point.}
(\ref{CLT,CT,Ch1,H2} and references therein). Still more generally, \Ref{ConvergenceCor} applies
also to symmetric toric Fano manifolds (see \Ref{So}).
We emphasize that no new techniques are needed
beyond those in the continuity method setting described above and our purpose in this note is
solely to point out this similarity between the limiting behavior of the Ricci flow and
that of the more classical continuity method.
We believe \Ref{MainThm} adds important information regarding the limiting behavior
of the Ricci flow beyond that in \Ref{PSSThm}.
The proof of \Ref{MainThm} differs from that of \Ref{PSSThm} in that
it follows the lines of the original continuity method results \ref{A2,N,Si,T1} instead of appealing to
results of Ko\polishl{}odziej. This is also the key to obtaining the result for singularity exponents
in the whole interval $(n/(n+1),\infty)$.
The crucial ingredient that makes this possible is that the relevant estimates
on the Ricci flow established by Perelman, Ye and Zhang, some of which appeared after the work
of Phong-\v Se\v sum-Sturm, allow one to adapt the continuity
method arguments to the setting of the flow.\note{\cmreight With the exception of the case $\mf n=1$ ($\mf M=S^2$)
that
does not make use of Ye and Zhang's estimate.} This is described in \Ref{MainSection}.
\Ref{RemarksSection} contains some remarks, including a brief conjectural discussion on a possible extension of the
result to the setting of constant scalar curvature metrics.
\vglue-0.21cm
\subsectionno{Setup and notation}
Let $(M,\h{\rm J})$ be a connected compact closed K\"ahler manifold of complex dimension $n$
and let $\O\in H^2(M,\RR)$ be a K\"ahler class.
Let $d=\partial+\bar\partial$ and define the Laplacian $\Delta=-\bar\partial\circ\bar\partial^\star-\bar\partial^\star\circ\bar\partial$ with
respect to
a Riemannian metric $g$ on $M$ and assume that $\h{\rm J}$ is compatible with $g$ and parallel
with respect to its Levi-Civit\`a connection.
Let $\omega:=\omega_g=\sqrt{-1}/2\pi\cdot g_{i\bar j}(z)dz^i\wedge\dbz^j$ denote its corresponding K\"ahler form,
a closed positive $(1,1)$-form on $(M,\h{\rm J})$.
Let $H_g$ denote the Hodge projection operator from
the space of closed forms onto the kernel of $\Delta$. Let $V=\O^n([M])=\intm\o^n$.
Denote by $\H_{\Omega}$ the space of K\"ahler forms representing $\O$.
Let
$PSH(M,\h{\rm J},\omega)\subseteq L_{\h{\small loc}}^1(M)$ denote the set of $\omega$-plurisubharmonic functions.
Define the space of smooth strictly $\omega$-plurisubharmonic
functions (K\"ahler potentials)
$$
\calH_\omega=\{\varphi\in{\call C}^\infty(M)\,:\, \o_{\vp}:=\omega+\sqrt{-1}\partial\dbar\varphi >0\}.
$$
Let $\hbox{\rm Ric}\,\omega=-\sqrt{-1}/2\pi\cdot\partial\dbar\log\det(g_{i\bar j})$ denote the
Ricci form of $\omega$. It is well-defined globally and represents
the first Chern class $c_1:=c_1(T^{1,0}M,\h{\rm J})\in H^2(M,\ZZ)$.
Alternatively it may be viewed as minus the curvature form of the canonical line bundle
$K_M$, the top exterior product of the holomorphic cotangent bundle $T^{1,0\,\star}M$.
One calls $(M,\h{\rm J})$ Fano when $c_1>0$.
One calls $\omega$ K\"ahler-Einstein if $\hbox{\rm Ric}\,\omega=a\omega$ for some real $a$.
Let $f_{\omega}\in{\call C}^\infty(M)$ denote
the unique function satisfying $\sqrt{-1}\partial\dbar f_{\omega}=\Rico-\omega$
and $\Vm\intm e^{f_{\omega}}\o^n=1$.
Let $\Aut(M,\h{\rm J})_0$ denote the identity component of the complex Lie group $\Aut(M,\h{\rm J})$ of automorphisms (biholomorphisms)
of $(M,\h{\rm J})$
and denote by $\aut(M,\h{\rm J})$ its Lie algebra of infinitesimal automorphisms composed
of real vector fields $X$ satisfying $\L_X\h{\rm J}=0$.
Denote by $\hbox{\call H}} \def\Ho{\hbox{\call H}_\omega_\omega} \def\ome{\omega} \def\O{\Omega(G)\subseteq} \def\ss{\subseteq} \def\sub{\subseteq{\calH_{\o}}$ the subspace of $G$-invariant potentials.
For $\varphi\in PSH(M,\h{\rm J},\omega} \def\ome{\omega} \def\O{\Omega)$ define the multiplier ideal sheaf associated to $\varphi$ to be the
sheaf ${\h{\cal I}}} \def\calA{{\h{\cal A}}} \def\calN{{\h{\cal N}}(\varphi)$ defined for each open set $U\subseteq} \def\ss{\subseteq} \def\sub{\subseteq M$ by local sections
$$
{\h{\cal I}}} \def\calA{{\h{\cal A}}} \def\calN{{\h{\cal N}}(\varphi)(U)=\{h\in {\h{\cal O}}} \def\calD{{\h{\cal D}}_M(U): |h|^2 e^{-\varphi}\inL_{\h{\small loc}}^1(M) \}.\global \advance \eqncount by 1 \eqno\futurelet \nexttok \ifx \nexttok $(\the \eqncount)}\def \Mark #1{\xdef #1{(\the \eqncount)}#1\else\expandafter \Mark\fi\IdealSheafSectionsEq
$$
Such sheaves are coherent \ref{D, p. 73; N}.
Such a sheaf is called proper if it is neither zero nor the structure sheaf ${\h{\cal O}}} \def\calD{{\h{\cal D}}_M$.
\FDef{\ref{N, Definition 2.4}
\TaggDef{NadelSheafDef}
A proper multiplier ideal sheaf ${\h{\cal I}}} \def\calA{{\h{\cal A}}} \def\calN{{\h{\cal N}}(\varphi)$ will be called a Nadel sheaf whenever there
exists $\eps>0$ such that $(1+\eps)\varphi\in PSH(M,\h{\rm J},\omega} \def\ome{\omega} \def\O{\Omega)$.
}
\noindent
According to \Ref{NadelVanishingThm} such sheaves satisfy \UsualNadelVanishingEq.
Define the complex singularity exponent $c_\omega} \def\ome{\omega} \def\O{\Omega(\varphi)$ of a function $\varphi\in PSH(M,\h{\rm J},\omega} \def\ome{\omega} \def\O{\Omega)$ by
$$
c_{\omega} \def\ome{\omega} \def\O{\Omega}(\varphi)=\sup\{\,\gamma\,:\, \intm e^{-\gamma\varphi}\o^n<\infty\}.\global \advance \eqncount by 1 \eqno\futurelet \nexttok \ifx \nexttok $(\the \eqncount)}\def \Mark #1{\xdef #1{(\the \eqncount)}#1\else\expandafter \Mark\fi\ComplexSingularityExponentEq
$$
Note that $\intm e^{-\gamma\varphi}\o^n=\infty$ implies that $\intm e^{-(\gamma+\eps)\varphi}\o^n=\infty$ for any $\eps>0$.
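For orientation, consider the standard local model of a logarithmic singularity: if $\varphi\in PSH(M,\h{\rm J},\omega)$ is smooth away from a point $p$ and equals $\log|z^1|^2$ in a coordinate chart around $p$, then near $p$ the integrand $e^{-\gamma\varphi}$ is comparable to $|z^1|^{-2\gamma}$, which is locally integrable precisely when $\gamma<1$; hence $c_\omega(\varphi)=1$ for such $\varphi$.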
Denote by $\lfloor x\rfloor$ the largest integer not larger than $x$.
\subsection{Proof of the main result}
\TaggSection{MainSection}%
The proof is split into steps similarly to the setting of the
continuity method. First, we obtain an upper bound for
$\frac1V\intm-\vp_{\!t}\o_{\vpt}^n$ in terms of $\frac1V\intm\vp_{\!t}\o^n$.
Second, we show that if the complex singularity exponents of the functions
$\vp_{\!t}$ are uniformly bigger than $n/(n+1)$ then one has a uniform upper bound on
$\frac1V\intm\vp_{\!t}\o^n$, and hence on $\sup\vp_{\!t}$. Third, we show that $-\inf\vp_{\!t}$ is
uniformly bounded from above in terms of $\frac1V\intm-\vp_{\!t}\o_{\vpt}^n$.
Fourth, we construct the Nadel multiplier ideal sheaf.
We turn to the proof.
We assume throughout that $n>1$ (the remaining case will be handled separately at the end).
First we recall some necessary facts and estimates.
Consider the Ricci flow on a Fano manifold
$$
\eqalign{
\frac{\partial \omega(t)}{\partial t} & =-\hbox{\rm Ric}\,\omega(t)+\omega(t),\quad t\in\RR_+,\cr
\omega(0) & =\omega,
}\eqno\KRFEq$$
and a corresponding flow equation on the space of K\"ahler potentials ${\calH_{\o}}$,
$$
\o_{\vpt}^n=\o^n e^{f_{\o}-\vp_{\!t}+\dot\vp_{\!t}},\quad\varphi(0)=c_0,\quad t\in\RR_+.\eqno\FlowPotentialMAEq
$$
This flow exists for all $t>0$ \ref{Ca}. Here
$c_0$ is a constant uniquely determined by $\omega} \def\ome{\omega} \def\O{\Omega$ \ref{PSS, (2.10)}.
This choice is necessary in order to have the second estimate of \Ref{RFEstimatesThm} (i) below
\ref{CT,\S\S10.1; PSS,\S2}.
Let
$$
||\psi||_{L^p(M,\omega} \def\ome{\omega} \def\O{\Omega)} = \Big(\frac1V\intm\psi^p \o^n\Big)^{\frac1p},
$$
and let
$$
\eqalign{
||\psi||^2_{W^{1,2}(M,\omega)} & =||\N\psi||^2_{L^2(M,\omega)}+||\psi||^2_{L^2(M,\omega)}
\cr & =
\frac1V\intm n\sqrt{-1}\partial\psi\wedge\bar\partial\psi\wedge\omega^{n-1}+\frac1V\intm\psi^2\o^n.
}
$$
We will make essential use of the following estimates of Perelman, Ye and Zhang.%
\FThm{
\TaggThm{RFEstimatesThm}
Let $(M,\h{\rm J})$ be a Fano manifold of complex dimension $n>1$ and let $\vp_{\!t}$ satisfy \FlowPotentialMAEq.
There exist $C_1, C_2>0$ depending only on $(M,\h{\rm J},\omega)$ such that:\newline
(i) \ref{ST,TZ} One has
$$
||f_{\ovpt}||_{L^\infty(M)}\le C_1,\quad ||\dot\vp_{\!t}||_{L^\infty(M)}\le C_1, \quad\all\, t>0.\eqno\PerelmanIneq
$$
\newline
(ii) \ref{Ye,Z} For all $\psi\in W^{1,2}(M,\omega)$ one has
$$
||\psi||_{L^{\frac{2n}{n-1}}(M,\o_{\vpt})}\le C_2 ||\psi||_{W^{1,2}(M,\o_{\vpt})}, \quad\all\, t>0.\eqno\SobolevIneq
$$
}
\vglue-0.19cm
Following Aubin \ref{A2}, define functionals on $\H_{c_1}\times\H_{c_1}$ by
$$\eqano{
I(\omega,\o_{\vp}) & =
\Vm\int_M\sqrt{-1}\partial\varphi\wedge\bar\partial\varphi\wedge\sum_{l=0}^{n-1}\omega^{n-1-l}\wedge\o_{\vp}^{l} =
\Vm\intm\varphi(\o^n-\o_{\vp}^n),&\Ieq\cr
J(\omega,\o_{\vp}) & =\frac{\Vm}{n+1}\int_M\sqrt{-1}\partial\varphi\wedge\bar\partial\varphi\wedge\sum_{l=0}^{n-1}(n-l)\omega^{n-l-1}\wedge\o_{\vp}^{l}.
&\Jeq
}$$
Following Ding \ref{Di}, define a functional on $\H_{c_1}\times\H_{c_1}$ by
$$
F(\omega,\o_{\vp})
=
-(I-J)(\omega,\o_{\vp})-\frac1V\intm\varphi\o_{\vp}^n
-\log\frac1V\intm e^{f_\omega-\varphi}\o^n.\eqno\FFunctionalEq
$$
It is exact, that is to say it satisfies the cocycle condition
$F(\omega} \def\ome{\omega} \def\O{\Omega_1,\omega} \def\ome{\omega} \def\O{\Omega_2)+F(\omega} \def\ome{\omega} \def\O{\Omega_2,\omega} \def\ome{\omega} \def\O{\Omega_3)=F(\omega} \def\ome{\omega} \def\O{\Omega_1,\omega} \def\ome{\omega} \def\O{\Omega_3)$.
Its critical points are precisely the K\"ahler-Einstein metrics.
The following monotonicity result is well-known (see, e.g., \ref{CT, Lemma 3.7}).
\FLem{
\TaggLemm{FMonotonicityLemm}
The functional $F$ is monotonically decreasing along the flow \KRFEq.
}
\noindent{\it (i) First step.}
As a consequence of \Ref{FMonotonicityLemm} and \PerelmanIneq\ we have
$$\eqalign{
0\ge F(\omega,\o_{\vpt})
& =
-(I-J)(\omega,\o_{\vpt})-\frac1V\intm\vp_{\!t}\o_{\vpt}^n
-\log\frac1V\intm e^{-\dot\vp_{\!t}}\o_{\vpt}^n
\cr
& \ge
-(I-J)(\omega,\o_{\vpt})-\frac1V\intm\vp_{\!t}\o_{\vpt}^n-C_1.
}\eqno\FDescreasingEq
$$
From \Ieq--\Jeq\ it follows that $\frac1{n+1}I\le J$, and hence $I-J\le\frac n{n+1}I$: the nonnegative terms defining $J$ carry the weights $(n-l)/(n+1)\in[\frac1{n+1},\frac n{n+1}]$, $l=0,\dots,n-1$, relative to those defining $I$. We then have
$$\eqalign{
\frac1V\intm-\vp_{\!t}\o_{\vpt}^n
& \le (I-J)(\omega} \def\ome{\omega} \def\O{\Omega,\o_{\vpt})+C_1
\cr &
\le \frac n{n+1}I(\omega} \def\ome{\omega} \def\O{\Omega,\o_{\vpt})+C_1=\frac n{n+1}\frac1V\intm\vp_{\!t}(\o^n-\o_{\vpt}^n)+C_1.
}$$
Hence,
$$
\frac1V\intm-\vp_{\!t}\o_{\vpt}^n
\le
\frac nV\intm\vp_{\!t}\o^n+(n+1)C_1.\eqno\FirstStepIneq
$$
This completes the first step of the proof.
\noindent {\it (ii) Second step.}
Assume $\gamma>0$ is such that
$$
\frac1V\intm e^{-\gamma(\vp_{\!t}-\frac1V\intm\vp_{\!t}\o^n)}\o^n\le C,
$$
with $C$ independent of $t$.
Using the flow equation, rewrite this as
$$
\frac1V\intm e^{(1-\gamma)\vp_{\!t}+\gamma\frac1V\intm\vp_{\!t}\o^n-\dot\vp_{\!t}-f_\omega} \def\ome{\omega} \def\O{\Omega}\o_{\vpt}^n\le C.
$$
Jensen's inequality gives,
$$
\frac1V\intm \Big((1-\gamma)\vp_{\!t}+\gamma\frac1V\intm\vp_{\!t}\o^n-\dot\vp_{\!t}-f_\omega} \def\ome{\omega} \def\O{\Omega\Big)\o_{\vpt}^n\le \log C.
$$
Using \PerelmanIneq\ and \FirstStepIneq\ we obtain
$$
\gamma\frac1V\intm\vp_{\!t}\o^n\le (1-\gamma)\frac1V\intm -\vp_{\!t}\o_{\vpt}^n+C'
\le n(1-\gamma)\frac1V\intm \vp_{\!t}\omega} \def\ome{\omega} \def\O{\Omega^n+C''.
$$
Whenever $\gamma\in(\frac n{n+1},1)$ this yields an a priori estimate
on $\frac1V\intm\vp_{\!t}\o^n$.
Under this assumption this also implies an a priori upper bound on $\sup\vp_{\!t}$. Indeed,
let $G_\omega$ be a Green function for $-\Delta=-\Delta_\omega$
satisfying $\intm G_\omega(\cdot,y)\o^n(y)=0$. Set $A_\omega=-\inf_{M\times M} G_\omega$.
Recall the sub-mean value property of $\o$-plurisubharmonic functions:
$$
\varphi(p)\le \frac1V\intm\varphi\o^n+nA_\omega,\quad \all\,p\in M.\eqno\SupBound
$$
In fact, by assumption $-n<\Delta\varphi$. The Green formula gives,
$$
\eqano{
\varphi(p)-\V\int_M \varphi\o^n & =
-\V\int_M G_\omega(p,y)\Delta\varphi(y)\o^n(y)
\cr & =
\V\int_M (G_\omega(p,y)+A_\omega)(-\Delta\varphi(y))\o^n(y)
\le nA_\omega,\cr
}
$$
by the normalization of $G_\omega$.
This completes the second step.
\noindent {\it (iii) Third step.}
This step follows from an argument used by Tian \ref{T1} for the continuity method. It adapts to
our setting thanks to \Ref{RFEstimatesThm}.
Put $\eta=\sup_M\vp_{\!t}-\vp_{\!t}+1$ and let $p>0$.
The first part of the argument involves reducing the $L^\infty(M)$ estimate for $\eta$ to
an $L^2(M,\o_{\vpt})$ estimate. First,
$$\eqano{
\intm\eta^p\o_{\vpt}^n
& \ge
\intm\eta^p(\o_{\vpt}^n-\o_{\vpt}^{n-1}\wedge\omega)
=
-\intm\eta^p\sqrt{-1}\partial\dbar\eta\wedge\o_{\vpt}^{n-1}
\cr & =
\intm\sqrt{-1}\partial(\eta^p)\wedge\bar\partial\eta\wedge\o_{\vpt}^{n-1}
\cr & =
\frac{4p}{(p+1)^2}\intm\sqrt{-1}\partial\eta^{\frac{p+1}2}\wedge\bar\partial\eta^{\frac{p+1}2}\wedge\o_{\vpt}^{n-1}.&\FirstMoserIneq
}$$
Combined with \SobolevIneq\ this gives
$$
\frac1{C_2^2}
||\eta||_{L^{\frac{(p+1)n}{n-1}}(M,\o_{\vpt})}^{p+1}
\le
\frac{n(p+1)^2}{4p}
||\eta||_{L^p(M,\o_{\vpt})}^p
+
||\eta||_{L^{p+1}(M,\o_{\vpt})}^{p+1}.\eqno\WeightedIneq
$$
Following Tian, a Moser iteration argument \ref{T1, p. 235-236}
now allows us to conclude that
there exists a constant $C$ depending only on $C_2$ and $(M,\h{\rm J},\omega} \def\ome{\omega} \def\O{\Omega)$ such that
$$
\sup\eta\le C ||\eta||_{L^2(M,\o_{\vpt})}.
$$
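For the reader's convenience, here is a sketch of the iteration, following \ref{T1}: since $\eta\ge1$ and $\Vm\intm\o_{\vpt}^n=1$, one has $||\eta||^p_{L^p(M,\o_{\vpt})}\le||\eta||^{p+1}_{L^{p+1}(M,\o_{\vpt})}$, hence \WeightedIneq\ gives
$$
||\eta||_{L^{(p+1)\frac n{n-1}}(M,\o_{\vpt})}\le \Big(C_2^2\Big(\frac{n(p+1)^2}{4p}+1\Big)\Big)^{\frac1{p+1}}||\eta||_{L^{p+1}(M,\o_{\vpt})}.
$$
Iterating with $p_0+1=2$ and $p_{j+1}+1=\frac n{n-1}(p_j+1)$, the product of the constants converges, and letting $j\rightarrow\infty$ yields $\sup\eta=\lim_j||\eta||_{L^{p_j+1}(M,\o_{\vpt})}\le C||\eta||_{L^2(M,\o_{\vpt})}$.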
The second part of the argument requires a uniform Poincar\'e inequality in order
to bound the $L^2(M,\o_{\vpt})$ norm of $\eta$ in terms of its $L^1(M,\o_{\vpt})$ norm.
Recall the following weighted Poincar\'e inequality \ref{F,\S2.4} (see also \ref{TZ}).
\FLem{%
\TaggLemm{WeightedPoincareLemm}%
\ Let $(M,\h{\rm J})$ be a Fano manifold. Then for any function $\psi\in W^{1,2}(M,\omega)$,
$$
\frac1V\intm\big(\psi-\frac1V\intm\psi e^{f_{\o}}\o^n\big)^2e^{f_{\o}}\o^n
\le
\frac1V\intm n\sqrt{-1}\partial\psi\wedge\bar\partial\psi\wedge e^{f_{\o}}\omega^{n-1}.
$$
}
As a corollary we have, thanks to \PerelmanIneq, a uniform Poincar\'e-type inequality
along the flow:
$$
e^{-2C_1}||\eta||^2_{L^2(M,\o_{\vpt})}
-
e^{C_1}
||\eta||^2_{L^1(M,\o_{\vpt})}
\le
||\N\eta||^2_{L^2(M,\o_{\vpt})}.\eqno\UniformPoincareIneq
$$
Therefore, applying \FirstMoserIneq, now with $p=1$, combined with \UniformPoincareIneq, we obtain
$$
e^{-2C_1}||\eta||^2_{L^2(M,\o_{\vpt})}
-
e^{C_1}
||\eta||^2_{L^1(M,\o_{\vpt})}
\le
n||\eta||_{L^1(M,\o_{\vpt})},
$$
which completes the second part of the argument.
Finally,
$$
||\eta||_{L^1(M,\o_{\vpt})}
=
1+\sup\vp_{\!t}+\frac1V\intm-\vp_{\!t}\o_{\vpt}^n,\eqno\InfBound
$$
and this is uniformly bounded in terms of $\frac1V\intm\vp_{\!t}\o^n$ thanks to \FirstStepIneq\ and
\SupBound. This completes the third step.
\noindent
{\it (iv) Fourth step.} Assume that $(M,\h{\rm J})$ does not admit a K\"ahler-Einstein metric. The
relevant theory of complex Monge-Amp\`ere equations due to Aubin and Yau \ref{A1,Y}
now implies that
$\{||\vp_{\!t}||_{L^\infty(M)}\}_{t\in[0,\infty)}$ is unbounded. Otherwise, one would have uniform
higher-order estimates on $\vp_{\!t}$, and properties of Mabuchi's K-energy \ref{M} (equivalent in many
ways to $F$)
would then show that a subsequence converges to a smooth K\"ahler-Einstein metric (see \ref{PSS,\S2}).
Combining the first three steps
this implies that
for each $m\in\NN$ we may find an unbounded increasing subsequence of times $\{t_{j(m)}\}_{j(m)\ge1}\subseteq} \def\ss{\subseteq} \def\sub{\subseteq[0,\infty)$ for which
$$
\lim_{j\rightarrow\infty}\intm e^{-(\frac n{n+1}+\frac1m)\big(\varphi_{t_{j(m)}}-\frac1V\intm\varphi_{t_{j(m)}}\o^n\big)}\o^n=\infty.
$$
Hence, by the diagonal argument, there exists a subsequence of potentials
$\{\varphi_{t_j}\}_{j\ge1}$
for which (one may equivalently work throughout with $\sup\varphi_{t_j}$ instead of $\frac1V\intm\varphi_{t_j}\o^n$)
$$
\lim_{j\rightarrow\infty}\intm e^{-\gamma \big(\varphi_{t_j}-\frac1V\intm\varphi_{t_j}\o^n\big)}\o^n=\infty, \quad\all\,\gamma\in(n/(n+1),\infty).
$$
One may further extract an unbounded increasing sub-subsequence of times such that
$\{\varphi_{t_{j_k}}-\Vm\intm\varphi_{t_{j_k}}\o^n\}_{k\ge1}$ converges in the $L^1(M,\omega)$-topology
to a limit $\varphi_\infty\in PSH(M,\h{\rm J},\omega} \def\ome{\omega} \def\O{\Omega)$ \ref{DK, p. 549-550}.
The Demailly-Koll\'ar lower semi-continuity of complex singularity exponents then implies
$$
c_\omega(\varphi_\infty)\le \liminf_{k\rightarrow\infty}
c_\omega\big(\varphi_{t_{j_k}}-\Vm\intm\varphi_{t_{j_k}}\o^n\big)
\le \frac n{n+1}.
$$
Equivalently,
$$
||e^{-\gamma\varphi_\infty}||_{L^1(M,\omega)}=\infty,\quad\all\,\gamma\in(n/(n+1),\infty),
$$
and the multiplier ideal sheaf ${\h{\cal I}}(\gamma\varphi_\infty)$
defined for each open set $U\subseteq M$ by local sections
${\h{\cal I}}(\gamma\varphi_\infty)(U)=\{h\in {\h{\cal O}}_M(U): |h|^2 e^{-\gamma\varphi_\infty}\in L_{\h{\small loc}}^1(M) \}$ is not
${\h{\cal O}}_M$.
It is also not zero since $\varphi_\infty$ is not identically $-\infty$ as
its average is zero.
Finally, we recall a version of Nadel's vanishing theorem.
\FThm{
\TaggThm{NadelVanishingThm}
\ref{N}
Let $(M,\h{\rm J},\omega)$ be a Hodge manifold and $(L,h)$ an ample holomorphic line bundle over $M$ equipped
with a smooth Hermitian metric with positive curvature form $\Psi_h$ given locally
by $-\sqrt{-1}/2\pi\cdot\partial\dbar\log h$.
Assume that $(1+\eps)\varphi\in PSH(M,\h{\rm J},\Psi_h)$ for some $\eps>0$.
Then
$$
H^r(M,{\h{\cal I}}(\varphi)\otimes K_M\otimes L)=0, \quad \all\, r\ge 1.\eqno\NadelVanishingEq
$$
}
\vglue-0.5cm
The cohomology vanishing statement \NadelVanishingFanoEq\
for the sheaf ${\h{\cal I}}(\gamma\varphi_\infty)$ just constructed
is a consequence of \Ref{NadelVanishingThm}
with $L=K_M^{-\lfloor\gamma\rfloor-1}$ since
$$
(\lfloor\gamma\rfloor+1) \omega+\gamma\sqrt{-1}\partial\dbar\varphi_\infty\ge
(\lfloor\gamma\rfloor+1-\gamma)\omega.
$$
This concludes the proof of \Ref{MainThm} for $n>1$.
To treat the remaining case $n=1$ ($M=S^2$),
replace the third step with the following argument. First, one
has a uniform lower bound for the scalar
curvature along the flow (see, e.g., \ref{C}), i.e., a lower bound on the Ricci curvature
when $n=1$. Next,
by Perelman \ref{ST} a uniform diameter bound holds along the flow. Therefore the quantity
$\hbox{\rm diam}(M,g_{\o_{\vpt}})^2\cdot\inf\{\,\hbox{\rm Ric}\,\o_{\vpt}(v,v)\,:\, {||v||_{g_{\o_{\vpt}}}=1}\}$ is uniformly bounded from below.
Applying the Bando-Mabuchi Green's function estimate \ref{BM, Theorem 3.2} now implies that
$A_{\o_{\vpt}}$ (see \SupBound\ for notation) is uniformly bounded from above (here we invoked Perelman's diameter bound again).
Since $\Delta_{\o_{\vpt}}\vp_{\!t}<1$, Green's formula gives
a uniform bound for $-\inf\vp_{\!t}$ in terms of $\frac1V\intm-\vp_{\!t}\o_{\vpt}$. Now the first, second and fourth steps
apply without change to give the desired result. \done
\vglue-0.3cm
\subsection{Remarks and further study}
\TaggSection{RemarksSection}
We end with some
remarks.
We saw that the uniform Sobolev inequality of Ye and Zhang can be used instead of
the Ko\polishl{}odziej\ Theorem in order to obtain the $L^\infty$ estimate for \Ref{ConvergenceCor}.
Observe that this also applies to the proof of the main theorem in \ref{TZ}, at least
in the case of no holomorphic vector fields.
Indeed, by \SupBound, \InfBound\ and \FirstStepIneq, $\sup\vp_{\!t}$ and $-\inf\vp_{\!t}$ are bounded
in terms of $\frac1V\int\vp_{\!t}\o^n$. By
\FlowPotentialMAEq\ and \PerelmanIneq\ one has
$\frac1V\int\vp_{\!t}\o^n\le I(\omega,\o_{\vpt})+C_1$. Hence, if a K\"ahler-Einstein metric exists the properness of $F$ in the
sense of Tian \ref{T3}
implies a uniform $L^\infty$ bound on $\vp_{\!t}$ (compare \ref{PSS,\S3}). The same applies
also to Pali's theorem \ref{P}.
The statement of \Ref{NadelThm} can be refined to hold for all
$\gamma\in(t_0n/(n+1),\infty)$ with $t_0=t_0(\omega)\le 1$ defined to be the first time
for which $\{||\vp_{\!t}||_{L^\infty(M)}\}_{t\in[0,t_0)}$ is unbounded,
with $\vp_{\!t}$ a solution of \AubinPathEq\
\ref{T1, p. 234} (see also \ref{N, p. 582}).
One has the inequality $t_0\le \sup
\{\,b\,:\,\hbox{\rm Ric}\,\omega\ge b\omega,\ \omega\in\H_{c_1},\ b\in\RR\}$, the right hand side being a holomorphic
invariant studied by Tian; on some Fano manifolds it is smaller than 1 \ref{T2}.
We do not know whether for these manifolds the exponent in
\Ref{MainThm} can be lowered as well. Perhaps the difference here is related
to the fact that the flow always exists for all
time unlike the continuity path.
Yet, as far as the usefulness of the sheaves is concerned, this does not seem to be crucial since
they all satisfy the same vanishing conditions \UsualNadelVanishingEq\ for exponents smaller than
1.
It is worth mentioning that this invariant is smaller than 1 only when the functional $F$
is not bounded from below \ref{BM,DT}. It would be very
interesting to know what can be said
regarding the converse (compare \ref{R1,\S1}). We are therefore
led to pose the following problem:
\FProb{On a Fano manifold, determine whether the lower boundedness of the functional $F$ (equivalently,
of Mabuchi's K-energy)
on $\H_{c_1}$ is equivalent to $\sup
\{\,b\,:\,\hbox{\rm Ric}\,\omega\ge b\omega,\ \omega\in\H_{c_1},\ b\in\RR\}=1$.}
The K\"ahler-Ricci } \def\KRno{K\"ahler-Ricci flow has been widely studied for manifolds whose first Chern class is definite or zero (see, e.g.,
\ref{Ch2}).
This flow is simply the
dynamical system induced by integrating minus the Ricci potential vector field $-f$ on the space
of K\"ahler forms $\H_{\Omega}}
\def\Ho{{\calH_{\o}}$.
A vector field $\psi$ on the space of K\"ahler forms is an assignment
$\omega} \def\ome{\omega} \def\O{\Omega\mapsto \psi_\omega} \def\ome{\omega} \def\O{\Omega\in {\call C}^\infty} \def\RR{{\outlin R}(M)/\RR$.
The vector field $f$ is the assignment $\omega} \def\ome{\omega} \def\O{\Omega\mapsto f_\omega} \def\ome{\omega} \def\O{\Omega$ with $f_{\o}$ defined by
$\hbox{\rm Ric}\,}
\def\Ricnotsosmall{\hbox{\notsosmall Ric}\,}
\def\Ricfoot{\hbox{\notsosmall Ric}\,}
\def\Ricsmall{\hbox{\small Ric}\,\omega} \def\ome{\omega} \def\O{\Omega-\mu\omega} \def\ome{\omega} \def\O{\Omega=\sqrt{-1}\partial\dbarf_{\o}$, $\,\mu\in\RR$.
Motivated by this observation one is naturally led to extend the definition of the K\"ahler-Ricci } \def\KRno{K\"ahler-Ricci flow to an arbitrary K\"ahler
manifold, simply by defining the flow lines to be integral curves of minus the Ricci potential vector field $-f$
on $\H_{\Omega}}
\def\Ho{{\calH_{\o}}$, with $\O$ an arbitrary K\"ahler class.
Recall that the Ricci potential is defined in general by
$\hbox{\rm Ric}\,}
\def\Ricnotsosmall{\hbox{\notsosmall Ric}\,}
\def\Ricfoot{\hbox{\notsosmall Ric}\,}
\def\Ricsmall{\hbox{\small Ric}\,\omega} \def\ome{\omega} \def\O{\Omega-H_\omega} \def\ome{\omega} \def\O{\Omega\hbox{\rm Ric}\,}
\def\Ricnotsosmall{\hbox{\notsosmall Ric}\,}
\def\Ricfoot{\hbox{\notsosmall Ric}\,}
\def\Ricsmall{\hbox{\small Ric}\,\omega} \def\ome{\omega} \def\O{\Omega=\sqrt{-1}\partial\dbarf_{\o}$. The resulting flow equation can also be written as
$$
\eqalign} \def\eqano{\eqalignno{
\frac{\pa \omega} \def\ome{\omega} \def\O{\Omega(t)}{\pa t} & =-\hbox{\rm Ric}\,}
\def\Ricnotsosmall{\hbox{\notsosmall Ric}\,}
\def\Ricfoot{\hbox{\notsosmall Ric}\,}
\def\Ricsmall{\hbox{\small Ric}\,\omega} \def\ome{\omega} \def\O{\Omega(t)+H_t\hbox{\rm Ric}\,}
\def\Ricnotsosmall{\hbox{\notsosmall Ric}\,}
\def\Ricfoot{\hbox{\notsosmall Ric}\,}
\def\Ricsmall{\hbox{\small Ric}\,\omega} \def\ome{\omega} \def\O{\Omega(t),\quad t\in\RR_+,\cr
\omega} \def\ome{\omega} \def\O{\Omega(0) & =\omega} \def\ome{\omega} \def\O{\Omega,
}\global \advance \eqncount by 1 \eqno\futurelet \nexttok \ifx \nexttok $(\the \eqncount)}\def \Mark #1{\xdef #1{(\the \eqncount)}#1\else\expandafter \Mark\fi\KRFEq$$
for each $t$ for which a solution exists in $\H_{\Omega}}
\def\Ho{{\calH_{\o}}$. This flow, introduced by Guan,
is part of the folklore in the field although it has not been much studied.\note{%
\cmreight It seems that Guan first considered this flow in unpublished work in
the 90's (see references to
\reffoot{G1}).
After posting the first version of this note I became aware, thanks to
G. Sz\'ekelyhidi, of a recent preprint \reffoot{G2} posted by Guan on his webpage in which
this flow is studied. For a different but related flow see \reffoot{S}.
}
Several authors have raised the question whether Nadel's construction can be extended to the study
of constant scalar curvature K\"ahler metrics. We believe that Nadel-type obstruction sheaves should arise
from this dynamical system (as well as from its discretization \ref{R2,R3}; in these two references
a ``discrete" analogue of \Ref{PSSThm} was shown to hold) in the absence of fixed points.
As we saw, it is important to make a choice of how to induce a flow on ${\calH_{\o}}$ from that on $\H_\O$,
and different normalizations give rise to different sheaves.
The flow equation \FlowPotentialMAEq\ corresponds to restricting the evolution
to a certain codimension one submanifold of ${\calH_{\o}}$. For a general K\"ahler class one may
define an operator on the space of K\"ahler forms $\H_\O$, identified as an open subset of ${\call C}^\infty(M)/\RR$, by
$$
h:\varphi\in\H_\O\mapsto h(\varphi)\in{\call C}^\infty(M)/\RR,
$$
with
$$
H_{\o_{\vp}}\hbox{\rm Ric}\,\o_{\vp}-H_\omega\hbox{\rm Ric}\,\omega =\sqrt{-1}\partial\dbar h(\varphi).
$$
By choosing such an appropriate submanifold,
analogously to \FlowPotentialMAEq, one may consider the flow
$$
\o_{\vpt}^n=\o^n e^{f_\omega-h(\vp_{\!t})+\dot\vp_{\!t}}.\eqno\GeneralContinuityPathEq
$$
The difficulty lies in the fact that this operator is in general no longer a multiple of the identity.
\def\vrule height.6ex width .5ex depth -.1ex{\vrule height.6ex width .5ex depth -.1ex}
\def\hfil\smallblackbox$\q$\smallblackbox$\q$\smallblackbox\hfil{\hfil\vrule height.6ex width .5ex depth -.1ex$\quad} \def\qq{\qquad$\vrule height.6ex width .5ex depth -.1ex$\quad} \def\qq{\qquad$\vrule height.6ex width .5ex depth -.1ex\hfil}
\hfil\smallblackbox$\q$\smallblackbox$\q$\smallblackbox\hfil
\bigskip
This note was written during a visit to the Technion and I thank that institution,
and especially M. Cwikel and Y. Pinchover, for the hospitality and partial financial support.
I am indebted to my teacher, Gang Tian, for his advice and
warm encouragement.
I am grateful to Yum-Tong Siu, from whose class I first learned on some of Nadel's work.
I thank D. Kim for enjoyable discussions on multiplier ideal sheaves,
V. Tosatti and a referee for valuable comments on this note, and Q. Zhang for a useful discussion on
his article \ref{Z}.
This material is based upon work supported under a National Science
Foundation Graduate Research Fellowship.
\frenchspacing
\bigskip\bigskip
\noindent{\bf Bibliography}
\bigskip
\def\ref#1{\Taggf{#1}\item{ {\bf[}{\sans #1}{\bf]} } }
\ref{A1} Thierry Aubin, \'{E}quations du type {M}onge-{A}mp\`ere sur les vari\'et\'es
k\"ahl\'eriennes compactes, {\sl Bulletin des Sciences Math\'ematiques}
{\bf 102} (1978), 63--95.
\vglue1.8pt
\ref{A2} \underbar{\phantom{aaaaa}}, R\'eduction du cas positif de l'\'equation de {M}onge-{A}mp\`ere
sur les vari\'et\'es k\"ahl\'eriennes compactes \`a la d\'emonstration d'une in\'egalit\'e,
{\sl Journal of Functional Analysis} {\bf 57} (1984), 143--153.
\vglue1.8pt
\ref{BM} Shigetoshi Bando, Toshiki Mabuchi, Uniqueness of K\"ahler-Einstein metrics
modulo connected group actions, in {\it Algebraic Geometry,
Sendai, 1985} (T. Oda, Ed.), Advanced Studies in Pure Mathematics {\bf 10},
Kinokuniya, 1987, 11--40.
\vglue1.8pt
\ref{Ca} Huai-Dong Cao, Deformations of K\"ahler metrics to K\"ahler-Einstein metrics on compact
K\"ahler manifolds, {\sl Inventiones Mathematicae} {\bf 81} (1985), 359--372.
\vglue1.8pt
\ref{CLT} Xiu-Xiong Chen, Peng Lu, Gang Tian, A note on uniformization of Riemann surfaces
by Ricci flow, {\sl Proceedings of the American Mathematical Society} {\bf 134} (2006), 3391--3393.
\vglue1.8pt
\ref{CT} Xiu-Xiong Chen, Gang Tian, Ricci flow on K\"ahler-Einstein surfaces,
{\sl Inventiones Mathematicae} {\bf 147} (2002), 487--544.
\vglue1.8pt
\ref{C} Xiu-Xiong Chen, On K\"ahler manifolds with positive orthogonal bisectional curvature,
{\sl Advances in Mathematics} {\bf 215} (2007), 427--445.
\vglue1.8pt
\ref{Ch1} Bennett Chow, The Ricci flow on the 2-sphere, {\sl Journal of Differential Geometry}
{\bf 33} (1991), 325--334.
\vglue1.8pt
\ref{Ch2} Bennett Chow et al., The Ricci flow: Techniques and applications. Part I: Geometric
aspects, American Mathematical Society, 2007.
\vglue1.8pt
\ref{D} Jean-Pierre Demailly, On the Ohsawa-Takegoshi-Manivel $L^2$ extension theorem,
in {\it Complex analysis and Geometry} (P. Dolbeault et al., Eds.),
Progress in Mathematics {\bf 188}, Birkh\"auser, 2000, 47--82.
\vglue1.8pt
\ref{DK} Jean-Pierre Demailly, J\'anos Koll\'ar, Semi-continuity of complex singularity exponents
and K\"ahler-Einstein metrics on Fano orbifolds, {\sl Annales Scientifiques de l'\'Ecole
Normale sup\'erieure} {\bf 34} (2001), 525--556.
\vglue1.8pt
\ref{Di} Wei-Yue Ding, Remarks on the existence problem of positive
{K}\"ahler-{E}instein metrics, {\sl Mathematische Annalen} {\bf 282} (1988), 463--471.
\vglue1.8pt
\ref{DT} Wei-Yue Ding, Gang Tian, The generalized Moser-Trudinger inequality,
in {\it Nonlinear Analysis and Microlocal Analysis: Proceedings of the International
Conference at Nankai Institute of Mathematics} (K.-C. Chang et al., Eds.),
World Scientific, 1992, 57--70. ISBN 9810209134.
\vglue1.8pt
\ref{F} Akito Futaki, K\"ahler-{E}instein metrics and integral invariants,
Lecture Notes in Mathematics {\bf 1314}, Springer, 1988.
\vglue1.8pt
\ref{G1} Daniel Z.-D. Guan, Quasi-Einstein metrics, {\sl International Journal of Mathematics}
{\bf 6} (1995), 371--379.
\vglue1.8pt
\ref{G2} \underbar{\phantom{aaaaa}}, Extremal-solitons and $C^\infty$ convergence of the modified Calabi flow on certain
$CP^1$ bundles, preprint, 2006.
\vglue1.8pt
\ref{H1} Richard S. Hamilton, Three-manifolds with positive Ricci curvature,
{\sl Journal of Differential Geometry} {\bf 17} (1982), 255--306.
\vglue1.8pt
\ref{H2} \underbar{\phantom{aaaaa}}, The Ricci flow on surfaces, in {\it Mathematics and general relativity} (J. I. Isenberg, Ed.),
Contemporary Mathematics {\bf 71}, American Mathematical Society, 1988, 237--262.
\vglue1.8pt
\ref{K} Joseph J. Kohn, Subellipticity of the $\bar \partial$-Neumann problem on pseudo-convex domains: sufficient conditions, {\sl Acta Mathematica} {\bf 142} (1979), 79--122.
\vglue1.8pt
\ref{Ko} S\polishl awomir Ko\polishl odziej, The complex Monge-Amp\`ere equation,
{\sl Acta Mathematica} {\bf 180} (1998), 69--117.
\vglue1.8pt
\ref{M} Toshiki Mabuchi, K-energy maps integrating Futaki invariants,
{\sl T\^ohoku Mathematical Journal} {\bf 38} (1986), 575--593.
\vglue1.8pt
\ref{N} Alan M. Nadel, Multiplier ideal sheaves and K\"ahler-Einstein metrics of positive scalar
curvature, {\sl Annals of Mathematics} {\bf 132} (1990), 549--596.
\vglue1.8pt
\ref{P} Nefton Pali, Characterization of Einstein-Fano manifolds via the K\"ahler-Ricci flow,
preprint, arxiv:math.DG/0607581v2.
\vglue1.8pt
\ref{PSS} Duong H. Phong, Nata\v sa \v Se\v sum, Jacob Sturm,
Multiplier ideal sheaves and the K\"ahler-Ricci flow, preprint,
arxiv:math.DG/0611794v2. To appear in {\sl Communications in Analysis and Geometry}.
\vglue1.8pt
\ref{R1} Yanir A. Rubinstein, On energy functionals, K\"ahler-Einstein metrics,
and the Moser-Trudinger-Onofri neighborhood, preprint, arxiv:math.DG/0612440. To appear in
{\sl Journal of Functional Analysis}.
\vglue1.8pt
\ref{R2} \underbar{\phantom{aaaaa}}, The Ricci iteration and its applications,
Comptes Rendus de l'Acad\'emie des Sciences Paris, Ser. I {\bf 345} (2007), 445--448.
\vglue1.8pt
\ref{R3} \underbar{\phantom{aaaaa}}, Some discretizations of geometric evolution equations and
the Ricci iteration on the space of K\"ahler metrics, I, preprint, arxiv:0709.0990 [math.DG].
\vglue1.8pt
\ref{ST} Nata\v sa \v Se\v sum, Gang Tian, Bounding scalar curvature and diameter
along the K\"ahler-Ricci flow (after Perelman) and some applications, preprint.
\vglue1.8pt
\ref{S} Santiago R. Simanca, Heat flows for extremal K\"ahler metrics,
{\sl Annali della Scuola Normale Superiore di Pisa} {\bf 4} (2005), 187--217.
\vglue1.8pt
\ref{Si} Yum-Tong Siu, The existence of K\"ahler-Einstein metrics on manifolds with
positive anticanonical line bundle and a suitable finite symmetry group,
{\sl Annals of Mathematics} {\bf 127} (1988), 585--627.
\vglue1.8pt
\ref{So} Jian Song, The $\alpha$-invariant on toric Fano manifolds,
{\sl American Journal of Mathematics} {\bf 127} (2005), 1247--1259.
\vglue1.8pt
\ref{T1} Gang Tian, On K\"ahler-Einstein metrics on certain K\"ahler manifolds with $C_1(M)>0$,
{\sl Inventiones Mathematicae} {\bf 89} (1987), {225--246}.
\vglue1.8pt
\ref{T2} \underbar{\phantom{aaaaa}}, On stability of the tangent bundles of Fano varieties,
{\sl International Journal of Mathematics} {\bf 3} (1992), 401--413.
\vglue1.8pt
\ref{T3} \underbar{\phantom{aaaaa}},
K\"ahler-{E}instein metrics with positive scalar curvature,
{\sl Inventiones Mathematicae} {\bf 130} (1997), {1--37}.
\vglue1.8pt
\ref{TY} Gang Tian, Shing-Tung Yau, K\"ahler-Einstein metrics on complex surfaces
with $C_1>0$, {\sl Communications in Mathematical Physics} {\bf 112} (1987), 175--203.
\vglue1.8pt
\ref{TZ} Gang Tian, Xiao-Hua Zhu, Convergence of K\"ahler-Ricci flow,
{\sl Journal of the American Mathematical Society} {\bf 20} (2007), 675--699.
\vglue1.8pt
\ref{Y} Shing-Tung Yau, On the Ricci curvature of a compact K\"ahler
manifold and the Complex Monge-Amp\`ere equation, I, {\sl Communications in Pure
and Applied Mathematics} {\bf 31} (1978), 339--411.
\vglue1.8pt
\ref{Ye} Rugang Ye, The logarithmic Sobolev inequality along the Ricci flow, preprint,
arxiv:0707.2424v4 [math.DG].
\vglue1.8pt
\ref{Z} Qi S. Zhang, A uniform Sobolev inequality under Ricci flow,
{\sl International Mathematics Research Notices} (2007), Article ID rnm056. Erratum
to: A uniform Sobolev inequality under Ricci flow (2007), Article
ID rnm096.
\end
Higher order arrays, or tensors, have been actively studied in neuroimaging analysis, topic modeling, signal processing and recommendation systems \cite{frolov2017tensor, Comon,Hack,Kar,Rendle,zhou, simony2016dynamic,cichocki2015tensor,sidiropoulos2017tensor}.
Researchers have made tremendous efforts to develop effective methods for the analysis of tensor data.
%
The \emph{spiked tensor model}, introduced in \cite{NIPS2014_5616} by Richard and Montanari, captures a number of statistical estimation tasks in which we need to extract information from a noisy high-dimensional data tensor. We are given a tensor $\bm X\in ({\mathbb R}^n)^{\otimes k}$ in the following form
\begin{align*}
\bm X=\beta {\bm{v}}_1\otimes {\bm{v}}_2\otimes \cdots \otimes {\bm{v}}_k+\bm W,
\end{align*}
where $\bm W$ is a noise tensor, $\beta>0$ corresponds to the signal-to-noise ratio (SNR), and
${\bm{v}}_1\otimes {\bm{v}}_2\otimes \cdots \otimes {\bm{v}}_k$ is a rank one unseen signal tensor to be recovered.
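As an illustration (not part of the analysis below), a minimal numpy sketch generating a spiked tensor follows; the parameter values are illustrative, and the entrywise noise normalization $\mathbb{E} W_{i_1\cdots i_k}^2=1/n$ is our choice here, made so that the unfolded noise matrix matches Assumption \ref{a:Zasup} below.
\begin{verbatim}
import numpy as np

n, k = 30, 3                      # illustrative dimension and order
lam = 2.0                         # lambda > 1: above the threshold
beta = lam * n ** ((k - 2) / 4)   # SNR beta = lambda * n^{(k-2)/4}

rng = np.random.default_rng(0)
vs = [rng.standard_normal(n) for _ in range(k)]
vs = [v / np.linalg.norm(v) for v in vs]        # unit signals v_1,...,v_k

signal = vs[0]
for v in vs[1:]:
    signal = np.tensordot(signal, v, axes=0)    # v_1 x v_2 x ... x v_k

W = rng.standard_normal((n,) * k) / np.sqrt(n)  # noise, E W^2 = 1/n
X = beta * signal + W
\end{verbatim}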
When $k=2$, the spiked tensor model reduces to the \emph{spiked matrix model} of the form ``signal $+$ noise", which has been intensively studied in the past twenty years.
It is now well-understood that the extreme eigenvalues of low rank perturbations of large random matrices undergo a so-called BBP phase transition \cite{baik2005phase} as
the signal-to-noise ratio varies, first discovered by Baik, P{\' e}ch{\' e} and the first author. There is an order one critical signal-to-noise ratio $\beta_{\rm c}$, such that below $\beta_{\rm c}$, it is information-theoretically impossible to detect the spikes
\cite{perry2016optimality,onatski2013asymptotic,montanari2015limitation}, and above $\beta_{\rm c}$, it is possible to detect the
spikes by Principal Component Analysis (PCA). A body of work has quantified the behavior of PCA in this setting \cite{johnstone2001distribution, baik2005phase,baik2006eigenvalues,paul2007asymptotics, benaych2012singular,bai2012sample, johnstone2009consistency, birnbaum2013minimax,cai2013sparse,ma2013sparse,vu2013minimax,cai2015optimal,el2008spectrum,ledoit2012nonlinear,donoho2018optimal}.
We refer readers to the review article \cite{johnstone2018pca} by Johnstone and Paul,
for more discussion and references to this and related lines of work.
For the spiked tensor model with $k\geq 3$, as for the spiked matrix model,
there is an order one critical signal-to-noise ratio $\beta_{k}$ (depending on the order $k$), such that below $\beta_{k}$, it is information-theoretically impossible to detect the spikes, and above $\beta_{k}$, the maximum likelihood estimator is a distinguishing statistic \cite{chen2019phase,chen2018phase,lesieur2017statistical,perry2020statistical,jagannath2018statistical}.
In the matrix setting the
maximum likelihood estimator is the top eigenvector, which can be computed in polynomial time
by, e.g., power iteration. However, for tensors of order $k\geq 3$, computing the maximum likelihood estimator is NP-hard in the generic setting. In this setting, two ``phase transitions" are often studied. There is the critical signal-to-noise ratio SNR$_{stat}=\beta_k$, below which it is statistically impossible to estimate the parameters. Although above the threshold SNR$_{stat}$ it is possible to estimate the parameters in theory, there is no known efficient (polynomial time) algorithm that achieves recovery close to the statistical threshold SNR$_{stat}$. Thus the algorithm
design literature is interested in another critical threshold SNR$_{comp}\geq {}$SNR$_{stat}$ below which it is impossible for an efficient algorithm to achieve recovery. For the spiked tensor model with $k\geq 3$, it is widely believed that there exists a computational-to-statistical gap SNR$_{comp}- {}$SNR$_{stat}>0$. We refer readers to the article \cite{bandeira2018notes} by Bandeira, Perry and Wein, for more detailed discussion on this phenomenon.
In the work \cite{NIPS2014_5616} of Richard and Montanari, the algorithmic aspects of the spiked tensor model were studied. They showed that tensor power iteration and the approximate message passing algorithm with random initialization recover the signal provided $\beta\gtrsim n^{(k-1)/2}$. Based on heuristic arguments, they predicted that the necessary and sufficient
condition for power iteration and the approximate message passing (AMP) algorithm to succeed is $\beta\gtrsim n^{(k-2)/2}$.
This threshold was proven in \cite{lesieur2017statistical} for AMP by Lesieur, Miolane, Lelarge, Krzakala and Zdeborov{\'a}, and for power iteration by Yang, Cheng and the last two authors \cite{huang2020power}. The same threshold was also achieved using gradient descent and Langevin dynamics, as studied by Gheissari, Jagannath and the first author \cite{arous2020algorithmic}.
In \cite{NIPS2014_5616}, Richard and Montanari also proposed a method based on tensor unfolding, which unfolds the tensor $\bm X$ to an $n^q\times n^{k-q}$ matrix $\mathsf{Mat}(\bm X)$ for some $1\leq q\leq k-1$,
\begin{align}\label{e:MatX}
\mathsf{Mat}(\bm X)=\beta {\bm{v}}{\bm{u}}^\top+Z\in {\mathbb R}^{n^q\times n^{k-q}}.
\end{align}
By taking $q=\floor{k/2}$ and performing matrix PCA on $\mathsf{Mat}(\bm X)$, they proved that the tensor unfolding algorithm recovers the signal provided $\beta\gtrsim n^{(\lceil k/2\rceil-1)/2}$, and predicted that the necessary and sufficient condition for tensor unfolding is $\beta\gtrsim n^{(k-2)/4}$.
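Continuing the numpy sketch above, unfolding with $q=1$ is a single reshape, and the PCA step reduces to computing a top singular pair; this is only a minimal illustration, not the exact recursive procedure of \cite{NIPS2014_5616}.
\begin{verbatim}
# Unfold X along the first axis into an n x n^{k-1} matrix (q = 1)
# and estimate v_1 by the top left singular vector.
M = X.reshape(n, -1)
U, S, Vt = np.linalg.svd(M, full_matrices=False)
v1_hat = U[:, 0]
print("overlap |<v1_hat, v_1>| =", abs(v1_hat @ vs[0]))
\end{verbatim}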
Several other sophisticated algorithms for the spiked tensor model have been investigated in the literature which achieve the sharp threshold $\beta\gtrsim n^{(k-2)/4}$: Sum-of-Squares algorithms \cite{hopkins2015tensor, hopkins2016fast, kim2017community}, sophisticated iteration algorithms \cite{luo2020sharp, zhang2018tensor,han2020optimal}, and an averaged version of gradient descent \cite{biroli2020iron} by Biroli, Cammarota, Ricci-Tersenghi.
The necessity part of this threshold still remains open. Its relation with the hypergraphic planted clique problem was discussed in \cite{luo2020open,luo2020tensor} by Luo and Zhang. It was proven for $k=3$ in \cite{hopkins2015tensor} by Hopkins, Shi and Steurer that degree-$4$ sum-of-squares relaxations fail below this threshold.
The landscape complexity of the spiked tensor model was studied in \cite{arous2019landscape} by Mei, Montanari, Nica and the first author. A new framework based on the Kac-Rice method that
allows to compute the annealed complexity of the landscape has been proposed in \cite{ros2019complex} by Ros, Biroli, Cammarota and the first author, which was later used to analyze gradient-based algorithms in non-convex settings \cite{sarao2019afraid,mannelli2020complex} by Mannelli, Biroli, Cammarota, Krzakala, Urbani, and Zdeborov{\'a}.
In this paper, we revisit the tensor unfolding algorithm introduced by Richard and Montanari. The unfolded matrix $\mathsf{Mat}(\bm X)$ from \eqref{e:MatX} is a spiked matrix model in the form of ``signal $+$ noise". However, it is different from the spiked matrix models in the random matrix literature, which require the dimensions of the matrix to be comparable, namely that the ratio of the number of rows and the number of columns converges to a fixed constant. In this setting, the singular values and singular vectors of the spiked matrix model \eqref{e:MatX} have been studied in \cite{benaych2012singular} by Benaych-Georges and Nadakuditi.
For the unfolded matrix $\mathsf{Mat}(\bm X)$, its dimensions are not comparable. As the size $n$ of the tensor goes to infinity, the ratio of the number of rows and columns goes to zero or infinity, unless $q=k/2$ (in this case $\mathsf{Mat}(\bm X)$ is a square matrix). In this paper, we study the singular values and singular vectors of the spiked matrix model \eqref{e:MatX} in the case where the number of rows (columns) grows polynomially in the number of columns (rows), which we call \emph{low rank perturbation of long random matrices}. In the case when $\beta=0$, the estimates of singular values and singular vectors for long random matrices follow from \cite{alex2014isotropic} by Alex, Erd{\H o}s, Knowles, Yau, and Yin.
To study the low rank perturbations of long random matrices, we use the master equations from \cite{benaych2012singular}, which characterize the outlier singular values of the perturbed random matrices and the associated singular vectors. To analyze the master equations, we use the estimates on the singular values and Green's functions of long random matrices from \cite{alex2014isotropic}. Compared with the setting where the ratio of the number of rows and the number of columns converges to a fixed constant, the challenge is to obtain uniform estimates for the errors in the master equations which depend only on the number of rows. In this way, we can allow the number of columns to grow much faster than the number of rows.
For the low rank perturbation of long random matrices, we prove that there exists a critical signal-to-noise ratio $\beta_{\rm c}$ (depending on the dimensions of the matrix) at which the model exhibits a BBP type phase transition.
We also obtain estimates of the singular values and singular vectors for this model.
Moreover, we have precise estimates when the signal-to-noise ratio $\beta$ is close to the threshold $\beta_{\rm c}$. In particular, our results also apply when $\beta$ is close to $\beta_{\rm c}$ on mesoscopic scales. In an independent work \cite{feldman2021spiked} by Feldman, this model has been studied under different assumptions. We refer to Section \ref{s:lowrank} for a more detailed discussion of the differences.
Our results for low rank perturbations of long random matrices can be used to study the unfolded matrix $\mathsf{Mat}(\bm X)$. For the signal-to-noise ratio $\beta=\lambda n^{(k-2)/4}$ and any $1\leq q\leq k-1$: if $\lambda>1$, PCA on $\mathsf{Mat}(\bm X)$ detects the signal tensor; if $\lambda<1$, PCA on $\mathsf{Mat}(\bm X)$ fails to capture the signal tensor. This matches the conjectured algorithmic threshold for the spiked tensor model from \cite{NIPS2014_5616}. It is worth mentioning that the threshold we get is independent of the tensor unfolding procedure, namely, it is independent of the choice of $q$. For $q>1$, a further recursive unfolding is needed to recover the individual signals ${\bm{v}}_i$, which increases the computational cost. We propose to simply take $q=1$ in the tensor unfolding algorithm for each coordinate axis, unfolding the tensor to an $n\times n^{k-1}$ matrix, which gives a good approximation of the individual signals ${\bm{v}}_i$, provided $\lambda>1$. In the tensor literature, this algorithm is exactly the truncated higher order singular value decomposition (HOSVD) introduced in \cite{de2000multilinear} by De Lathauwer, De Moor and Vandewalle. Later, they developed the higher order orthogonal iteration (HOOI) in \cite{de2000best}, which uses the truncated HOSVD as initialization combined with a power iteration, to find the best low-multilinear-rank approximation of a tensor.
The performance of HOOI was analyzed in \cite{zhang2018tensor} by Zhang and Xia, for the spiked tensor model. It was proven that if the signal-to-noise ratio $\beta=\lambda n^{(k-2)/4}$ satisfies $\lambda\geq C_{gap}$ for some large constant $C_{gap}>0$, then HOOI converges within a logarithmic factor of iterations.
The paper is organized as follows. In Section \ref{s:main}, we state the main results on the singular values and vectors of low rank perturbations of long random matrices.
In Section \ref{s:TPCA} we study the spiked tensor model, as an application of our results on low rank perturbations of long random matrices. The proofs of our main results are given in Section \ref{s:mainlong}.
\noindent\textbf{Notations }
For two numbers $X,Y$, we write $X = \OO(Y )$ if there exists a universal constant $C>0$ such
that $|X| \leq C Y$. We write $X = \oo(Y )$, or $X \ll Y$, if the ratio $|X|/Y\rightarrow 0$ as $n$ goes to infinity. We write
$X\asymp Y$ if there exists a universal constant $C>0$ such that $Y/C \leq |X| \leq C Y$.
We denote the index set $\qq{k}=\{1,2,\cdots,k\}$.
We say an event $\Omega$ holds with high probability if for any large $C>0$, $\mathbb{P}(\Omega)\geq 1-n^{-C}$ for $n$ large enough.
We write $X\prec Y$ or $X=\OO_\prec(Y)$, if $X$ is stochastically dominated by $Y$ in the sense that for all large $C>0$, we have
\begin{align*}
\mathbb{P}(X\geq n^{1/C}Y)\leq n^{-C},
\end{align*}
for large enough $n$.
\noindent\textbf{Acknowledgements }
The research of J.H. is supported by the Simons Foundation as a Junior Fellow at
the Simons Society of Fellows, and NSF grant DMS-2054835.
\section{Main Results}\label{s:main}
In this section, we state the main results on the singular values and singular vectors of low rank perturbations of long random matrices. Let $Z\in {\mathbb R}^{n\times m}$ be an $n\times m$ random matrix, with i.i.d. random entries $Z_{ij}$ satisfying
\begin{assumption}\label{a:Zasup}
The entries of $Z$ are i.i.d., and have mean zero and variance $1/n$, and for any integer $p\geq 1$,
\begin{align*}
\mathbb{E}[Z_{ij}]=0, \quad \mathbb{E}[Z_{ij}^2]=\frac{1}{n}, \quad \mathbb{E}[Z_{ij}^{2p}]\leq \frac{C_p}{n^p},\quad 1\leq i\leq n,\quad 1\leq j\leq m.
\end{align*}
\end{assumption}
We introduce the parameter $\phi:=\sqrt{m/n}$.
It is well-known that as $m,n\rightarrow\infty$ with the ratio $m/n=\phi^2\rightarrow \phi_*^2\in[1,\infty)$ for a constant $\phi_*$ independent of $n$, the empirical eigenvalue distribution of $Z Z^\top$ converges to the Marchenko-Pastur law
\begin{align}\label{e:MPa}
\frac{\sqrt{\left(x-\left(\phi_*-1\right)^2\right)\left(\left(\phi_*+1\right)^2-x\right)}}{2\pi x}{\rm d} x,
\end{align}
supported on $[(\phi_*-1)^2, (\phi_*+1)^2]$.
In this paper, we consider the case where the ratio $m/n=\phi^2$ depends on $n$, satisfying that $n^{1/C}\leq \phi\leq n^C$ for some constant $C>0$. In Section \ref{s:localZ}, we collect some results on singular values of the long random matrix $Z$ from \cite{alex2014isotropic}.
Using the estimates of the singular values and singular vectors of long random matrices as input, we can study low rank perturbations of long random matrices. We consider rank one perturbation of long random matrices in the form,
\begin{align}\label{e:bvuc}
\beta {\bm{v}} {\bm{u}}^\top+ Z\in {\mathbb R}^{n\times m},
\end{align}
where ${\bm{v}}\in {\mathbb R}^n$, ${\bm{u}}\in {\mathbb R}^m$ are unit vectors, and $Z$ is an $n\times m$ random matrix satisfying Assumption \ref{a:Zasup}.
We state our main results on the singular values and singular vectors of low rank perturbations of long random matrices \eqref{e:bvuc} in Section \ref{s:lowrank}.
\subsection{Singular values of Long Random Matrices}\label{s:localZ}
Let $Z$ be an $n\times m$ random matrix, with entries satisfying Assumption \ref{a:Zasup}. Let $\phi=\sqrt{m/n}$ satisfy that $n^{1/C}\leq \phi\leq n^C$ for some constant $C>0$. The eigenvalues of sample covariance matrices $ZZ^\top$ in this setting have been well studied in \cite{alex2014isotropic}. We denote the singular values of $Z$ as
\begin{align*}
s_1\geq s_2\geq \cdots \geq s_n\geq 0.
\end{align*}
They are square roots of eigenvalues of the sample covariance matrices $ZZ^\top$. As an easy consequence of \cite[Theorem 2.10]{alex2014isotropic}, we have the following theorem on the estimates of the largest singular value of $Z$.
\begin{theorem}\label{t:extreme}
Under Assumption \ref{a:Zasup}, let $\phi=\sqrt{m/n}$ with $n^{1/C}\leq \phi\leq n^C$. Fix an arbitrarily small $\delta>0$, with high probability the largest singular value $s_1$ of $Z$ satisfies
\begin{align}\label{e:s1est}
|s_1-(\phi+1)|\leq \frac{n^\delta}{n^{2/3}},
\end{align}
provided $n$ is large enough.
\end{theorem}
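A quick numerical check of \eqref{e:s1est}, with illustrative parameters:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 200
m = n ** 2                                    # phi = sqrt(m/n) ~ 14.1
Z = rng.standard_normal((n, m)) / np.sqrt(n)  # variance-1/n entries
phi = np.sqrt(m / n)
# largest singular value of Z via the n x n matrix Z Z^T
s1 = np.sqrt(np.linalg.eigvalsh(Z @ Z.T)[-1])
print(s1, phi + 1)                            # agree up to O(n^{-2/3})
\end{verbatim}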
The results in \cite{alex2014isotropic} give estimates on each eigenvalue of the sample covariance matrix $ZZ^\top$ away from $0$, see Theorem \ref{t:rigidity}, and hence on the location of each singular value of $Z$. In particular, they imply that the empirical singular value distribution $\sum_{i=1}^n \delta_{s_i}/n$ of $Z$ is close to the pushforward of the Marchenko-Pastur law (after proper normalization) by the map $x\mapsto \sqrt{x}$,
\begin{align*}
&\phantom{{}={}}\rho_{\phi}(x){\rm d} x
=\frac{\sqrt{\left(x^2-\left(\phi-1\right)^2\right)\left(\left( \phi+1\right)^2-x^2\right)}}{\pi x}{\rm d} x.
\end{align*}
We remark that $\rho_\phi$ depends on $m,n$ through $\phi$ and is supported on $[\phi-1, \phi+1]$. As $\phi\rightarrow \infty$ with $n$, $\rho_\phi(x)$ after shifting by $\phi$, i.e. $\rho_\phi(x+\phi)$, converges to the semi-circle distribution on $[-1, +1]$:
\begin{align*}
\rho_\phi(x+\phi){\rm d} x\rightarrow \frac{2\sqrt{(x+1)(1-x)}}{\pi}.
\end{align*}
See Figure \ref{f:sqMP} for some plots of $\rho_\phi$. One can see that the extreme singular values stick to the boundary of the support of the limiting empirical measure.
\begin{figure}[H]
\center
\includegraphics[scale=0.6]{sqMP.png}
\caption{The empirical singular value distribution of $Z$, with $n=1000$ and $m=n, n^{1.5}, n^2$. When $\phi=\sqrt{m/n}$ is large, the empirical singular value distribution converges to the semicircle distribution supported on $[\phi-1, \phi+1]$.} \label{f:sqMP}
\end{figure}
\subsection{Low Rank Perturbations of Long Random Matrices}\label{s:lowrank}
Let $Z$ be an $n\times m$ random matrix, with entries satisfying Assumption \ref{a:Zasup}. Let $\phi=\sqrt{m/n}$. Without loss of generality we can assume that $\phi\geq 1$, otherwise, we can simply study the transpose of $Z$. We allow $\phi$ to grow with $n$ at any polynomial rate, $1\leq \phi\leq n^C$ for any large number $C>0$. In this regime, $Z$ is a long random matrix.
In this section, we state our main results on the rank one perturbation of long random matrices from \eqref{e:bvuc}:
\begin{align}\label{e:bvuccopy}
\beta {\bm{v}} {\bm{u}}^\top+ Z\in {\mathbb R}^{n\times m},
\end{align}
where ${\bm{v}}\in {\mathbb R}^n$, ${\bm{u}}\in {\mathbb R}^m$ are unit vectors.
As in Theorem \ref{t:extreme}, in this setting, the singular values of $Z$ are roughly supported on
the interval $[\phi-1, \phi+1]$.
The following theorem states that there is an exact $n$-dependent threshold $\sqrt{\phi}$: if $\beta$ is above the threshold, $\beta{\bm{v}}{\bm{u}}^\top+Z$ has an outlier singular value; if $\beta$ is below this threshold, there are no outlier singular values, and all the singular values stick to the bulk.
\begin{theorem}\label{t:eig}
We assume Assumption \ref{a:Zasup} and $1\leq \phi=\sqrt{m/n}\leq n^C$. Let $\beta=\lambda\sqrt\phi$ with $\lambda\geq 0$, fix an arbitrarily small ${\mathfrak c}>0$, and denote by $\hat s_1$ the largest singular value of $\beta {\bm{v}} {\bm{u}}^\top + Z$. For any small $\delta>0$, if $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$, then with high probability $\hat s_1$ is an outlier, explicitly given by
\begin{align}\label{e:strong}
\hat s_1=\sqrt{\phi^2+(\lambda^2+1/\lambda^2)\phi+1}+\OO \left(\frac{n^\delta (\lambda-1)^{1/2}}{n^{1/2} \phi}\right).
\end{align}
If $\lambda\leq 1+n^{-1/3+{\mathfrak c}}$, with high probability, $\beta {\bm{v}} {\bm{u}}^\top + Z$ does not have outlier singular values, and the largest singular value $\hat s_1$ satisfies
\begin{align}\label{e:weak}
\hat s_1\leq \phi+1+n^{-2/3+3{\mathfrak c}},
\end{align}
provided $n$ is large enough.
\end{theorem}
We refer to Figure \ref{f:Outlier} for an illustration of Theorem \ref{t:eig}. Theorem \ref{t:eig} also characterizes the behavior of the outlier in the critical case, when $\lambda$ is close to $1$.
\begin{figure}[H]
\center
\includegraphics[scale=0.6]{Outlier.png}
\caption{The empirical singular value distribution of $\beta {\bm{v}}{\bm{u}}^\top+Z$, with $n=1000$,$m=n^{1.5}$, and $\beta=\lambda (m/n)^{1/4}$ for $\lambda=1, 1.5, 2$. We marked the predicted outlier by $\times$ as given by formula \eqref{e:strong}.} \label{f:Outlier}
\end{figure}
We have a similar transition for the singular vectors. If $\beta$ is above the threshold $\sqrt{\phi}$, the left singular vector $\hat{\bm{v}}_1$ associated with the largest singular value of $\beta {\bm{v}} {\bm{u}}^\top + Z$ has a large component in the signal ${\bm{v}}$ direction; if $\beta$ is below the threshold $\sqrt{\phi}$, the projection of $\hat{\bm{v}}_1$ on the signal ${\bm{v}}$ direction vanishes.
\begin{theorem}\label{t:ev}
We assume Assumption \ref{a:Zasup} and $1\leq \phi=\sqrt{m/n}\leq n^C$.
Let $\beta=\lambda\sqrt\phi$ with $\lambda\geq 0$, fix arbitrarily small ${\mathfrak c}>0$, and denote by $\hat s_1$ the largest singular value of $\beta {\bm{v}} {\bm{u}}^\top + Z$. For any small $\delta>0$, if $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$, with high probability, the left singular vector $\hat{\bm{v}}_1$ associated with $\hat s_1$ has a large component in the signal ${\bm{v}}$ direction:
\begin{align}\label{e:strong1}
|\langle \hat{\bm{v}}_1, {\bm{v}}\rangle |=\left(1+\OO\left(\frac{n^\delta}{n^{1/2}(\lambda-1)^{3/2}}\right)\right)\sqrt{\frac{(\lambda^4-1)}{\lambda^4+\lambda^2/\phi}}.
\end{align}
Similar estimates hold for the right singular vector $\hat{\bm{u}}_1$ associated with the largest singular value of $\beta {\bm{v}} {\bm{u}}^\top + Z$:
\begin{align}\label{e:strong22}
&|\langle \hat{{\bm{u}}}_1, {\bm{u}}\rangle|
=\left(1+\OO_\prec\left(\frac{1}{n^{1/2}(\lambda-1)^{3/2}}\right)\right)\sqrt{\frac{\lambda^4-1}{\lambda^2(\lambda^2+\phi)}}.
\end{align}
If $\lambda\leq 1+n^{-1/3+{\mathfrak c}}$, with high probability, the projection of $\hat {\bm{v}}_1$ on ${\bm{v}}$, and the projection of $\hat{\bm{u}}_1$ on ${\bm{u}}$ are upper bounded by
\begin{align}\label{e:weak1}
\max\{| \langle \hat {\bm{v}}_1, {\bm{v}}\rangle|,|\langle \hat {\bm{u}}_1, {\bm{u}}\rangle|\}\leq n^{4{\mathfrak c}}\min\left\{n^{-1/6}, \frac{1}{\sqrt n|\lambda-1|}\right\},
\end{align}
provided $n$ is large enough.
\end{theorem}
\begin{remark}
We remark that when the ratio $\phi=\sqrt{m/n}$ goes to infinity with $n$, with $\lambda$ fixed, then from \eqref{e:strong1}, $|\langle \hat {\bm{v}}_1, {\bm{v}}\rangle|$ converges to $\sqrt{1-\lambda^{-4}}$, and from \eqref{e:strong22}, $|\langle \hat {\bm{u}}_1, {\bm{u}}\rangle|$ converges to $0$.
\end{remark}
\begin{remark}
In this paper, for simplicity of notation, we only consider rank one perturbations of long random matrices. Our method can also be used to study any finite rank perturbation of long random matrices.
\end{remark}
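As a quick numerical illustration of Theorems \ref{t:eig} and \ref{t:ev} (a sketch with Gaussian noise and illustrative parameters, not part of the proofs), one can compare the observed outlier and singular vector overlaps with the predictions \eqref{e:strong}, \eqref{e:strong1} and \eqref{e:strong22}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 200
m = n**2                               # phi = sqrt(m/n) = sqrt(n)
phi = np.sqrt(m / n)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
u = rng.standard_normal(m); u /= np.linalg.norm(u)
lam = 2.0                              # supercritical: lam > 1
Z = rng.standard_normal((n, m)) / np.sqrt(n)
U, s, Vt = np.linalg.svd(lam * np.sqrt(phi) * np.outer(v, u) + Z,
                         full_matrices=False)
# outlier location, cf. e:strong
print(s[0], np.sqrt(phi**2 + (lam**2 + 1 / lam**2) * phi + 1))
# left/right overlaps, cf. e:strong1 and e:strong22
print(abs(U[:, 0] @ v),
      np.sqrt((lam**4 - 1) / (lam**4 + lam**2 / phi)))
print(abs(Vt[0] @ u),
      np.sqrt((lam**4 - 1) / (lam**2 * (lam**2 + phi))))
\end{verbatim}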
The singular values and vectors of low rank perturbations of large rectangular random matrices have previously been studied by Benaych-Georges and Nadakuditi \cite{benaych2012singular}. Our main results, Theorems \ref{t:eig} and \ref{t:ev}, are generalizations of their results in two directions. Firstly, in \cite{benaych2012singular} the ratio $\phi=\sqrt{m/n}$ is assumed to converge to a constant independent of $n$, while in our setting we allow $\phi$ to grow polynomially in $n$. As we will see in Section \ref{s:TPCA}, this is crucial for the application to tensor principal component analysis. Secondly, we allow the signal-to-noise ratio to be close to the ($n$-dependent) threshold, and our main results characterize the behavior of the singular values and singular vectors in this regime. In an independent work \cite{feldman2021spiked}, Feldman studied the singular values and vectors of multi-rank perturbations of long random matrices under the assumption that either the signal vectors contain i.i.d. entries with mean zero and variance one, or the noise matrix has Gaussian entries. Both proofs use the master equations, developed in \cite{benaych2012singular}, which characterize the singular values and singular vectors of low rank perturbations of rectangular random matrices; see Section \ref{s:master}. To analyze the master equation, Feldman \cite{feldman2021spiked} requires the signal vectors to have i.i.d. entries with mean zero and variance one. Our argument instead uses results from \cite{alex2014isotropic}, which give Green's function estimates for long random matrices, and works for any deterministic signal vectors.
\section{Tensor PCA}\label{s:TPCA}
As an application of our main results Theorems \ref{t:eig} and \ref{t:ev}, in this section, we use them to study the asymmetric rank-one spiked tensor model as introduced in \cite{NIPS2014_5616}:
\begin{align}\label{e:rank1}
\bm X=\beta {\bm{v}}_1\otimes {\bm{v}}_2\otimes \cdots \otimes {\bm{v}}_k+\bm W,
\end{align}
where
\begin{itemize}
\item $\bm X\in \otimes^k {\mathbb R}^n$ is the $k$-th order tensor observation.
\item $\bm W\in \otimes^k {\mathbb R}^n$ is a noise tensor. The entries of $\bm W$ are independent random variables with mean zero and variance $1/n$.
\item $\beta\in {\mathbb R}$ is the signal-to-noise ratio.
\item ${\bm{v}}_i\in {\mathbb R}^n$ are unknown unit vectors to be recovered.
\end{itemize}
The goal is
to perform reliable estimation and inference on the unseen signal tensor ${\bm{v}}_1\otimes {\bm{v}}_2\otimes \cdots \otimes {\bm{v}}_k$. We remark that any rank-one tensor $T\in \otimes^k{\mathbb R}^n$ can be written uniquely as $T=\beta {\bm{v}}_1\otimes {\bm{v}}_2\otimes \cdots \otimes {\bm{v}}_k$, where the ${\bm{v}}_i$ are unit vectors. The model \eqref{e:rank1} is slightly more general than the asymmetric spiked tensor model in \cite{NIPS2014_5616}, which assumes that ${\bm{v}}_1={\bm{v}}_2=\cdots={\bm{v}}_k$. In this section, we make the following assumption on the noise tensor:
\begin{assumption}\label{a:Wasup}
The entries of $\bm W$ are i.i.d., and they have mean zero and variance $1/n$: for any indices $1\leq i_1,i_2,\cdots, i_k\leq n$, and any integer $p\geq 1$,
\begin{align*}
\mathbb{E}[\bm W_{i_1i_2\cdots i_k}]=0, \quad \mathbb{E}[\bm W_{i_1i_2\cdots i_k}^2]=\frac{1}{n}, \quad \mathbb{E}[\bm W_{i_1i_2\cdots i_k}^{2p}]\leq \frac{C_p}{n^p}.
\end{align*}
\end{assumption}
The tensor unfolding algorithm in \cite{NIPS2014_5616} unfolds the tensor $\bm X$ to an $n^{\floor{k/2}}\times n^{\ceil{k/2}}$ matrix, and it was proven there that this detects the signal when the signal-to-noise ratio satisfies $\beta\gg n^{(\ceil{k/2}-1)/2}$. The conjectured algorithmic threshold is $\beta\gg n^{(k-2)/4}$. We now recall the tensor unfolding construction. Take any index set $\mathbb{I}\subseteq \qq{1,k}$ with $1\leq |\mathbb{I}|=q\leq k/2$. Given $\mathbb{I}=\{k_1,k_2,\cdots, k_q\}$, let $\qq{1,k}\setminus \mathbb{I}=\{\ell_1, \ell_2,\cdots, \ell_{k-q}\}$. We denote by ${\mathsf{Mat}_\mathbb{I}(\bm X)}$ the $n^q\times n^{k-q}$ matrix obtained from $\bm X$ by unfolding along the axes indexed by $\mathbb{I}$. More precisely, for any indices $i_1,i_2,\cdots, i_k\in \qq{1,n}$, let
$a=1+\sum_{j=1}^q(i_{k_j}-1)n^{j-1}$ and $b=1+\sum_{j=1}^{k-q}(i_{\ell_j}-1)n^{j-1}$, then
\begin{align*}
\mathsf{Mat}_\mathbb{I}(\bm X)_{a,b}=\bm X_{i_1i_2\cdots i_k}.
\end{align*}
If $\mathbb{I}$ is a singleton, i.e. $\mathbb{I}=\{i\}$, we will simply write $\mathsf{Mat}_\mathbb{I}$ as $\mathsf{Mat}_i$.
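For concreteness, the unfolding map can be realized in a few lines of Python/NumPy (a sketch; the helper name \texttt{mat\_unfold} is ours, and zero-based indexing replaces the one-based convention above):
\begin{verbatim}
import numpy as np

def mat_unfold(X, I):
    # Sketch of Mat_I: unfold the tensor X, of shape (n,)*k, along
    # the axes listed in I (zero-based). The Fortran-order reshape
    # makes i_{k_1} the fastest varying row index, matching
    # a = 1 + sum_j (i_{k_j} - 1) n^{j-1} above (shifted to 0-based).
    k, n = X.ndim, X.shape[0]
    I = list(I)
    rest = [ax for ax in range(k) if ax not in I]
    q = len(I)
    return X.transpose(I + rest).reshape(n**q, n**(k - q), order="F")
\end{verbatim}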
We can view $\mathsf{Mat}_\mathbb{I}(\bm X)$ as the sum of the unfolding of the signal tensor $ {\bm{v}}_1\otimes {\bm{v}}_2\otimes \cdots \otimes {\bm{v}}_k$ and the noise tensor $\bm W$. Let
\begin{align*}
{\bm{v}}_\mathbb{I}\deq \text{vec}[\otimes_{j=1}^q {\bm{v}}_{k_j}]\in {\mathbb R}^{n^q},\quad {\bm{u}}_{\mathbb{I}}\deq \text{vec}[\otimes_{j=1}^{k-q} {\bm{v}}_{\ell_j}]\in {\mathbb R}^{n^{k-q}},\quad
Z_\mathbb{I}\deq \mathsf{Mat}_\mathbb{I}(\bm W)\in {\mathbb R}^{n^q\times n^{k-q}}.
\end{align*}
Then we can rewrite $\mathsf{Mat}_\mathbb{I}(\bm X)$ as
\begin{align}\label{e:Mat}
\mathsf{Mat}_\mathbb{I}(\bm X)=\beta {\bm{v}}_\mathbb{I} {\bm{u}}^\top_{\mathbb{I}}+Z_\mathbb{I}\in {\mathbb R}^{n^q\times n^{k-q}}.
\end{align}
To put \eqref{e:Mat} into the form of \eqref{e:bvuc}, we further normalize $\mathsf{Mat}_\mathbb{I}(\bm X)$ as
\begin{align}\label{e:newMat}
\widetilde{\mathsf{Mat}_\mathbb{I}}(\bm X)=\frac{\beta}{n^{(q-1)/2}} {\bm{v}}_\mathbb{I} {\bm{u}}^\top_{\mathbb{I}}+\widetilde{Z_\mathbb{I}},\quad \widetilde{Z_\mathbb{I}}=\frac{{Z_\mathbb{I}}}{n^{(q-1)/2}}.
\end{align}
In this way, each entry of $\widetilde{Z_\mathbb{I}}$ has variance $1/n^{q}$, and \eqref{e:newMat} is a rank one perturbation of an $n^{q}\times n^{k-q}$ random matrix $\widetilde{Z_\mathbb{I}}$ in the form of \eqref{e:bvuc} (by taking $(n,m)$ as $(n^q, n^{k-q})$), where the ratio $\phi=\sqrt{n^{k-q}/n^q}=n^{(k-2q)/2}\geq 1$ grows at most polynomially in $n$. Taking
$\hat s_1$ to be the largest singular value of the normalized unfolded matrix $\widetilde{\mathsf{Mat}_\mathbb{I}}(\bm X)$, our main results Theorems \ref{t:eig} and \ref{t:ev} indicate that there is a phase transition at $\beta=n^{(k-2)/4}$ for the tensor unfolding \eqref{e:newMat}.
\begin{theorem}\label{t:tensor}We assume Assumption \ref{a:Wasup}, and fix any index set $\mathbb{I}\subseteq \qq{1,k}$ with $|\mathbb{I}|=q\leq k/2$. Let
$\widetilde{\mathsf{Mat}_\mathbb{I}}(\bm X)$ be the normalized matrix obtained from $\bm X$ by unfolding along the axes indexed by $\mathbb{I}$, as in \eqref{e:newMat}, and denote the ratio $\phi=\sqrt{n^{k-q}/n^q}=n^{(k-2q)/2}\geq 1$.
Let $\beta=\lambda n^{(k-2)/4}$ with $\lambda\geq 0$, fix arbitrarily small ${\mathfrak c}>0$, and denote by $\hat s_1$ the largest singular value of $\widetilde{\mathsf{Mat}_\mathbb{I}}(\bm X)$ and by $\hat{\bm{v}}_1$ the corresponding left singular vector. For arbitrarily small $\delta>0$, if $\lambda\geq 1+n^{-(1/3-{\mathfrak c})q}$, with high probability, the largest singular value $\hat s_1$ is an outlier, and is explicitly given by
\begin{align*}
\hat s_1=\sqrt{\phi^2+(\lambda^2+1/\lambda^2)\phi+1}+\OO \left(\frac{n^\delta (\lambda-1)^{1/2}}{n^{q/2} \phi}\right),
\end{align*}
and the left singular vector $\hat{\bm{v}}_1$ has a large component in the ${\bm{v}}_\mathbb{I}$ direction:
\begin{align}\label{e:strong13}
|\langle \hat{\bm{v}}_1, {\bm{v}}_{\mathbb{I}}\rangle |=\left(1+\OO\left(\frac{n^\delta}{n^{q/2}(\lambda-1)^{3/2}}\right)\right)\sqrt{\frac{(\lambda^4-1)}{\lambda^4+\lambda^2/\phi}}.
\end{align}
If $\lambda\leq 1+n^{-(1/3-{\mathfrak c})q}$, with high probability, $\widetilde{\mathsf{Mat}_\mathbb{I}}(\bm X)$ does not have outlier singular values, and the largest singular value $\hat s_1$ satisfies
\begin{align*}
\hat s_1\leq \phi+1+n^{-(2/3-3{\mathfrak c})q},
\end{align*}
and the projection of $\hat {\bm{v}}_1$ on ${\bm{v}}_\mathbb{I}$ is upper bounded by
\begin{align}\label{e:strong23}
|\langle \hat{\bm{v}}_1, {\bm{v}}_\mathbb{I}\rangle|\leq n^{4q{\mathfrak c}}\min\left\{n^{-q/6}, \frac{1}{n^{q/2}|\lambda-1|}\right\},
\end{align}
provided $n$ is large enough.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{t:tensor}]
Theorem \ref{t:tensor} follows from Theorems \ref{t:eig} and \ref{t:ev} by taking $(n,m)$ as $(n^q, n^{k-q})$, and the signal-to-noise ratio $\beta$ as $\beta/n^{(q-1)/2}$. In this way the criteria in Theorems \ref{t:eig} and \ref{t:ev} become that if
\begin{align*}
\frac{\beta/n^{(q-1)/2}}{\sqrt\phi}=\frac{\beta}{n^{(q-1)/2}n^{(k-2q)/4}}=\frac{\beta}{n^{(k-2)/4}}=\lambda\geq 1+n^{-(1/3-{\mathfrak c})q}
\end{align*}
then $\hat s_1$ is an outlier with high probability and the left singular vector $\hat{\bm{v}}_1$ has a large component in the ${\bm{v}}_\mathbb{I}$ direction. Otherwise if
\begin{align*}
\lambda\leq 1+n^{-(1/3-{\mathfrak c})q},
\end{align*}
$\hat s_1$ sticks to the bulk and the projection of $\hat {\bm{v}}_1$ on ${\bm{v}}_\mathbb{I}$ is small.
\end{proof}
We remark that, as indicated by Theorem \ref{t:tensor}, the tensor unfolding algorithms, which unfold $\bm X$ to an $n^q\times n^{k-q}$ matrix for any choice of $1\leq q\leq k/2$ and index set $\mathbb{I}$, essentially share the same threshold $\beta=n^{(k-2)/4}$, which matches the conjectured threshold.
As in \eqref{e:strong13}, above the signal-to-noise threshold, the left singular vector $\hat{\bm{v}}_1$ corresponding to the largest singular value of $\widetilde{\mathsf{Mat}_\mathbb{I}}(\bm X)$ is aligned with ${\bm{v}}_\mathbb{I}$, and the leading order accuracy of the estimator $\hat {\bm{v}}_1$ in \eqref{e:strong13} is independent of $q$.
However, if $q>1$, this does not give us information about the individual signals ${\bm{v}}_i$; a further recursive unfolding method was proposed in \cite{NIPS2014_5616} to recover each individual signal ${\bm{v}}_i$.
Since taking $q>1$ does not change the algorithmic threshold but increases the computational cost, we propose the following simple algorithm to recover each signal ${\bm{v}}_i$: perform the tensor unfolding algorithm for each $1\leq i\leq k$ with $\mathbb{I}=\{i\}$, namely $q=1$.
For each $1\leq i\leq k$, we unfold \eqref{e:rank1} to an $n\times n^{k-1}$ matrix
\begin{align}\label{e:unfold1}\begin{split}
&\widetilde{\mathsf{Mat}_{i}}(\bm X)={\mathsf{Mat}_{i}}(\bm X)=\beta {\bm{v}}_i {\bm{u}}^\top_i + Z_i,\\
& {\bm{u}}_i=\text{vec}[{\bm{v}}_1\otimes {\bm{v}}_2\otimes \cdots \otimes {\bm{v}}_{i-1}\otimes {\bm{v}}_{i+1}\otimes\cdots \otimes {\bm{v}}_k]\in {\mathbb R}^{n^{k-1}},
\end{split}\end{align}
which is \eqref{e:newMat} by taking $q=1$ and $\mathbb{I}=\{i\}$.
In this way \eqref{e:unfold1} is a rank one perturbation of a long $n\times n^{k-1}$ random matrix in the form of \eqref{e:bvuc}, and the ratio $\phi=\sqrt{n^{k-1}/n}=n^{(k-2)/2}$ grows with $n$. We take
$\hat s^{(i)}_1$ to be the largest singular value of ${\mathsf{Mat}_{i}}(\bm X)=\beta {\bm{v}}_i {\bm{u}}_i^\top + Z_i$, and denote
\begin{align}\label{e:defhbb}
\hat\beta^{(i)}=\sqrt{\frac{((\hat s^{(i)}_1)^2-(\phi^2+1))+\sqrt{((\hat s^{(i)}_1)^2-(\phi+1)^2)((\hat s^{(i)}_1)^2-(\phi-1)^2)}}{2}},
\end{align}
as the estimator for $\beta$; and
$\hat{\bm{v}}^{(i)}_1$ the left singular vector corresponding to the largest singular value of ${\mathsf{Mat}_{i}}(\bm X)=\beta {\bm{v}}_i {\bm{u}}_i^\top + Z_i$, as the estimator for ${\bm{v}}_i$. This gives the following simple algorithm to recover $\beta$ and ${\bm{v}}_i$.
\begin{algorithm}[H]
\SetAlgoLined
\textbf{Input:} $\bm X$\;
\For{$i$ from $1$ to $k$}{
$\hat s_1^{(i)}=$ largest singular value of $\mathsf{Mat}_i(\bm X)$\;
$\hat{\bm{v}}_1^{(i)}=$ left singular vector corresponding to the largest singular value of $\mathsf{Mat}_i(\bm X)$\;
$\hat\beta^{(i)}=\sqrt{\left(((\hat s^{(i)}_1)^2-(\phi^2+1))+\sqrt{((\hat s^{(i)}_1)^2-(\phi+1)^2)((\hat s^{(i)}_1)^2-(\phi-1)^2)}\right)/2}$\;
}
\KwResult{$\{\hat\beta^{(i)}, \hat{\bm{v}}_1^{(i)}\}_{1\leq i\leq k}$.}
\caption{Tensor Unfolding}
\end{algorithm}
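For illustration, a direct NumPy transcription of the above algorithm might look as follows (a sketch reusing the \texttt{mat\_unfold} helper sketched earlier; the \texttt{max} guard handles the subcritical case, where the discriminant under the square root can be negative):
\begin{verbatim}
import numpy as np

def tensor_unfolding(X):
    # Sketch of the Tensor Unfolding algorithm: for each mode i,
    # estimate beta and v_i from the top singular pair of Mat_i(X).
    k, n = X.ndim, X.shape[0]
    phi = float(n) ** ((k - 2) / 2)    # = sqrt(n^{k-1} / n)
    results = []
    for i in range(k):
        M = mat_unfold(X, [i])         # n x n^{k-1} unfolding
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        s1, v1 = s[0], U[:, 0]
        disc = (s1**2 - (phi + 1)**2) * (s1**2 - (phi - 1)**2)
        beta_hat = np.sqrt(((s1**2 - (phi**2 + 1))
                            + np.sqrt(max(disc, 0.0))) / 2)
        results.append((beta_hat, v1))
    return results
\end{verbatim}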
In the tensor literature, the above algorithm is exactly the truncated higher order singular value decomposition (HOSVD) introduced in \cite{de2000multilinear}. The higher order orthogonal iteration (HOOI), which uses the truncated HOSVD as initialization combined with a power iteration, was developed in \cite{de2000best} to find a best low-multilinear-rank approximation of a tensor. The performance of HOOI was analyzed in \cite{zhang2018tensor} for the spiked tensor model. It was proven that for the signal-to-noise ratio $\beta=\lambda n^{(k-2)/4}$ with $\lambda\geq C_{gap}$ for some large constant $C_{gap}>0$, HOOI converges within a logarithmic number of iterations. As an easy consequence of Theorem \ref{t:tensor}, we have the following theorem, which gives the exact threshold of the signal-to-noise ratio, i.e. $\beta=n^{(k-2)/4}$. Above the threshold, our estimators $\hat\beta^{(i)}$ and $\hat{\bm{v}}_1^{(i)}$ approximate the signal-to-noise ratio $\beta$ and the signal vector ${\bm{v}}_i$.
\begin{theorem}\label{t:tensor2}
We assume Assumption \ref{a:Wasup}. For any $1\leq i\leq k$, we unfold $\bm X$ to $\beta {\bm{v}}_i {\bm{u}}^\top_i + Z_i$ as in \eqref{e:unfold1}.
Let the estimator $\hat\beta^{(i)}$ be as defined in \eqref{e:defhbb}, and $\hat{\bm{v}}^{(i)}_1$ the left singular vector corresponding to the largest singular value of ${\mathsf{Mat}_{i}}(\bm X)=\beta {\bm{v}}_i {\bm{u}}_i^\top + Z_i$.
Let $\beta=\lambda n^{(k-2)/4}$ with $\lambda\geq 0$, and fix arbitrarily small ${\mathfrak c}>0$. For arbitrarily small $\delta>0$, if $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$, with high probability, $\hat\beta^{(i)}$ and $\hat{\bm{v}}_1^{(i)}$ approximate $\beta$ and ${\bm{v}}_i$:
\begin{align}\label{e:strongc}
|\hat\beta^{(i)}-\beta|\leq \frac{n^\delta}{n^{k/4}\sqrt{\lambda-1}},
\end{align}
and
\begin{align}\label{e:strong1c}
|\langle \hat{\bm{v}}^{(i)}_1, {\bm{v}}_i\rangle| =\left(1+\OO\left(\frac{n^\delta}{n^{1/2}(\lambda-1)^{3/2}}\right)\right)\sqrt{\frac{(\lambda^4-1)}{\lambda^4+\lambda^2 n^{-(k-2)/2}}}
\sim \sqrt{1-\frac{1}{\lambda^4}}.
\end{align}
If $\lambda\leq1+n^{-1/3+{\mathfrak c}}$, with high probability, the projection of $\hat {\bm{v}}_1^{(i)}$ on ${\bm{v}}_i$ vanishes as $\lambda$ decreases
\begin{align}\label{e:weak1c}
|\langle \hat{\bm{v}}_1^{(i)}, {\bm{v}}_i\rangle|\leq n^{4{\mathfrak c}} \min\left\{n^{-1/6}, \frac{1}{\sqrt n|\lambda-1|}\right\},
\end{align}
provided $n$ is large enough.
\end{theorem}
\begin{remark}
Given the unfolded $n\times n^{k-1}$ matrix ${\mathsf{Mat}_{i}}(\bm X)=\beta {\bm{v}}_i {\bm{u}}_i^\top + Z_i$, the largest singular value and the corresponding left singular vector can be computed by power iteration on ${\mathsf{Mat}_{i}}(\bm X){\mathsf{Mat}_{i}}(\bm X)^\top$, where each iteration is evaluated as two matrix-vector products with ${\mathsf{Mat}_{i}}(\bm X)$ and its transpose, so that the Gram matrix is never formed explicitly. The total time complexity is $\OO(Tn^k)$, where $T$ is the number of iterations, which can be taken as $T=\ln n$. Therefore, the estimators $\hat \beta^{(i)}$ and $\hat {\bm{v}}_1^{(i)}$ can be computed with time complexity $\OO((\ln n) n^k)$.
To recover the signals ${\bm{v}}_i$ for each $1\leq i\leq k$, we need to repeat the above tensor unfolding algorithm $k$ times, and obtain ${\bm{v}}_1^{(i)}$ for each $1\leq i\leq k$. The total time complexity is $\OO((\ln n) kn^k)$.
\end{remark}
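A minimal sketch of this matrix-free power iteration (our own illustration; error control and stopping rules are omitted):
\begin{verbatim}
import numpy as np

def top_singular_pair(M, iters=60, seed=0):
    # Sketch: power iteration for the top singular pair of M. Each
    # step uses two matrix-vector products, so M @ M.T is never
    # formed; one step costs O(n^k) for the n x n^{k-1} unfolding.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M @ (M.T @ v)
        v = w / np.linalg.norm(w)
    s1 = np.linalg.norm(M.T @ v)       # ||M^T v|| -> s_1
    return s1, v
\end{verbatim}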
\begin{proof}
The claims \eqref{e:strong1c} and \eqref{e:weak1c} follow directly from \eqref{e:strong13} and \eqref{e:strong23} by taking $q=1$ and $\phi=n^{(k-1)/2}$. In the following we prove \eqref{e:strongc}. Fix arbitrarily small $\delta>0$. For $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$, with high probability, the largest singular value $\hat s_1^{(i)}$ of ${\mathsf{Mat}_{i}}(\bm X)=\beta {\bm{v}}_i {\bm{u}}_i^\top + Z_i$ is given by \eqref{e:strong}
\begin{align}\begin{split}\label{e:diffa}
\hat s_1^{(i)}&=\sqrt{\phi^2+(\lambda^2+1/\lambda^2)\phi+1}+\OO \left(\frac{n^\delta(\lambda-1)^{1/2}}{n^{1/2} \phi}\right)\\
&=\sqrt{\phi^2+\beta^2+\phi^2/\beta^2+1}+\OO\left(\frac{n^\delta(\lambda-1)^{1/2}}{n^{1/2} \phi}\right),
\end{split}\end{align}
where $\phi=n^{(k-2)/2}$. Our $\hat \beta^{(i)}$ as defined in \eqref{e:defhbb} is chosen as the solution of
\begin{align}\label{e:diffhi}
\hat s^{(i)}_1&=\sqrt{\phi^2+ (\hat\beta^{(i)})^2+\phi^2/(\hat\beta^{(i)})^2+1}.
\end{align}
By taking difference of \eqref{e:diffa} and \eqref{e:diffhi}, and rearranging, we get
\begin{align}\label{e:difhihi}
\beta^2+\frac{\phi^2}{\beta^2}-(\hat\beta^{(i)})^2-\frac{\phi^2}{(\hat\beta^{(i)})^2}
=(\beta^2-(\hat\beta^{(i)})^2)\left(1-\frac{\phi^2}{\beta^2(\hat\beta^{(i)})^2}\right)
\leq \frac{n^\delta (\lambda-1)^{1/2}}{n^{1/2}}\frac{\phi+\beta}{\phi}.
\end{align}
We notice that $\beta^2(\hat\beta^{(i)})^2/\phi^2-1\gtrsim (\lambda^4-1)\gtrsim (\lambda-1)$. Thus \eqref{e:difhihi} implies
\begin{align*}
|\beta-(\hat\beta^{(i)})|\leq \frac{n^\delta}{n^{1/2}\sqrt \phi \sqrt{\lambda-1}},
\end{align*}
with high probability, provided $n$ is large enough.
This finishes the proof of \eqref{e:strongc}.
\end{proof}
We numerically verify Theorem \ref{t:tensor}. We take $n=600$, $k=3$ and $\beta=\lambda n^{(k-2)/4}=\lambda n^{1/4}$
for $\lambda\in[0,3]$. We sample the signals ${\bm{v}}_1={\bm{v}}_2={\bm{v}}_3={\bm{v}}$ as unit Gaussian vectors, and the noise tensor $\bm W$ with independent Gaussian entries. In the left panel of Figure \ref{f:eig_plot}, we plot the largest singular value of the unfolded matrix ${\mathsf{Mat}_{i}}(\bm X)$ and our theoretical prediction \eqref{e:strong}. In the right panel of Figure \ref{f:eig_plot} we plot
$\beta$ and our estimator $\hat\beta$ as in \eqref{e:defhbb}. The estimator $\hat\beta$ provides a good approximation of $\beta$ provided that $\lambda>1$. In Figure \ref{f:ev_plot} we plot
$|\langle \hat {\bm{v}}, {\bm{v}}\rangle|$, where the estimator $\hat{\bm{v}}$ is given as the left singular vector corresponding to the largest singular value of the unfolded matrix. Our theoretical prediction (blue curve), as in \eqref{e:strong1c}, matches the simulation well for $\lambda>1$. For $\lambda\rightarrow 0$, our estimator behaves as poorly as a random guess, i.e. taking $\hat{\bm{v}}$ to be a random Gaussian vector (green curve). For $\lambda$ in a small neighborhood of $1$, we do not have a good estimate of $|\langle \hat{\bm{v}}, {\bm{v}}\rangle|$, but only the upper bound \eqref{e:weak1c}.
In the second panel of Figure \ref{f:ev_plot}, we zoom in around $\lambda=1$; the red curve $\min\{n^{-1/6}, 1/(\sqrt n |\lambda-1|)\}$, corresponding to the bound \eqref{e:weak1c}, provides a good upper bound on $|\langle \hat {\bm{v}}, {\bm{v}}\rangle|$.
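A reduced-scale sketch of this experiment (our illustration with $n=100$, $k=3$ and one trial per $\lambda$, sampling the unfolded Gaussian noise directly; recall that the prediction \eqref{e:strong} is only meaningful for $\lambda>1$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 3
phi = float(n) ** ((k - 2) / 2)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
u = np.kron(v, v)              # vec of v (x) v, since v_1=v_2=v_3=v
for lam in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    beta = lam * float(n) ** ((k - 2) / 4)
    W = rng.standard_normal((n, n**2)) / np.sqrt(n)
    U, s, _ = np.linalg.svd(beta * np.outer(v, u) + W,
                            full_matrices=False)
    pred = np.sqrt(phi**2 + (lam**2 + 1 / lam**2) * phi + 1)
    print(f"lam={lam:.1f}  s1={s[0]:.3f}  pred={pred:.3f}  "
          f"overlap={abs(U[:, 0] @ v):.3f}")
\end{verbatim}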
\begin{figure}
\includegraphics[scale=0.6]{eig_plot.png}
\caption{For $n=600$, $k=3$, $\beta=\lambda n^{(k-2)/4}$ with $\lambda\in[0,3]$, averaging over $500$ trials, the left panel plots the largest singular value of the unfolded matrix and our theoretical prediction; the right panel plots $\beta$ and the estimator $\hat\beta$.} \label{f:eig_plot}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{innerp.png}
\caption{The plot of $|\langle \hat{{\bm{v}}}, {\bm{v}}\rangle|$ for $n=600$, $k=3$, $\beta=\lambda n^{(k-2)/4}$ with $\lambda\in[0,3]$, averaging over $500$ trials.} \label{f:ev_plot}
\end{figure}
\section{Low Rank Perturbations of Long Random Matrices}\label{s:mainlong}
In this section we prove our main results as stated in Section \ref{s:main}. The proof of Theorem \ref{t:extreme} is given in Section \ref{s:longM}. We also collect the isotropic local law for long random matrices from \cite{alex2014isotropic}.
In Section \ref{s:master}, we derive a master equation which characterizes the outliers of the perturbed matrix $\beta {\bm{v}} {\bm{u}}^\top+ Z$. Such master equations have been used intensively to study low rank perturbations of random matrices, for both singular values and eigenvalues; see \cite{benaych2011eigenvalues, benaych2012singular, benaych2011fluctuations,huang2018mesoscopic}.
The proofs of Theorems \ref{t:eig} and \ref{t:ev} are given in Sections \ref{s:proveeig} and \ref{s:proveev} respectively.
\subsection{Long Random Matrices}\label{s:longM}
Let $Z$ be an $n\times m$ random matrix, with entries satisfying Assumption \ref{a:Zasup}. Let $\phi=\sqrt{m/n}$, with $n^{-C}\leq \phi\leq n^C$. In this section we recall some results on sample covariance matrices in this setting from \cite{alex2014isotropic}. It turns out that in this setting the correct normalization is to study $ZZ^\top/\phi$, which corresponds to the standard sample covariance matrix with variance $\phi$.
We denote the following $n$-dependent Marchenko-Pastur law corresponding to the ratio $m/n=\phi^2\geq 1$,
\begin{align}\label{e:MP}
\rho^{\phi}_{MP}(x){\rm d} x=\frac{\sqrt{\left(x-\left(\sqrt \phi-\frac{1}{\sqrt\phi}\right)^2\right)\left(\left(\sqrt \phi+\frac{1}{\sqrt\phi}\right)^2-x\right)}}{2\pi x/\phi}{\rm d} x,
\end{align}
and its $1/n$-quantiles as
\begin{align}\label{e:quantiles}
\frac{i-1/2}{n}=\int_{\nu^\phi_i}^\infty \rho^\phi_{MP}(x){\rm d} x.
\end{align}
The normalization in \eqref{e:MP} is different from that in \eqref{e:MPa}, which corresponds to the sample covariance matrix $ZZ^\top$.
We remark that both the Marchenko-Pastur law $\rho_{MP}^\phi$ and its quantiles $\nu_i^\phi$ depend on $m,n$ through $\phi$.
We recall the following eigenvalue rigidity result from \cite[Theorem 2.10]{alex2014isotropic}.
\begin{theorem}[Eigenvalue Rigidity]\label{t:rigidity}
Under Assumption \ref{a:Zasup}, let $\phi=\sqrt{m/n}$ with $n^{1/C}\leq \phi\leq n^C$, the eigenvalues $\lambda_1\geq\lambda_2\geq \cdots \geq \lambda_n\geq 0$ of $Z Z^\top/\phi$ are close to the quantiles of the Marchenko-Pastur law \eqref{e:MP}: fix any $\varepsilon>0$ and arbitrarily small $\delta>0$, with high probability it holds
\begin{align*}
|\lambda_i-\nu^\phi_i|\leq \frac{n^\delta}{i^{1/3}n^{2/3}},
\end{align*}
uniformly for all $i\in \qq{1, \floor{(1-\varepsilon)n}}$. If in addition $\phi-1\geq c$ for some constant $c>0$, then with high probability, we also have
\begin{align*}
|\lambda_i-\nu^\phi_i|\leq \frac{n^\delta}{(n+1-i)^{1/3}n^{2/3}},
\end{align*}
uniformly for all $i\in \qq{\floor{n/2},n}$, provided $n$ is large enough.
\end{theorem}
We denote the singular values of $Z$ as $s_1\geq s_2\geq \cdots \geq s_n\geq 0$. We can then restate Theorem \ref{t:rigidity} in terms of the singular values of $Z$, thanks to the following easy relation
\begin{align}\label{e:singeig}
\lambda_i=s_i^2/\phi,\quad 1\leq i\leq n.
\end{align}
Theorem \ref{t:extreme} is an easy consequence of Theorem \ref{t:rigidity} and the relation \eqref{e:singeig}.
\begin{proof}[Proof of Theorem \ref{t:extreme}]
The largest singular value $s_1$ of $Z$, and the largest eigenvalue $\lambda_1$ of $ZZ^\top/\phi$ are related by $\lambda_1=s_1^2/\phi$. Therefore, Theorem \ref{t:rigidity} implies that $|s_1^2/\phi-(\sqrt\phi+1/\sqrt \phi)^2|\leq n^\delta/n^{2/3}$, with high probability, provided $n$ is large enough. The claim \eqref{e:s1est} follows from rearranging.
\end{proof}
The empirical eigenvalue distribution of $ZZ^\top/\phi$ is close to the Marchenko-Pastur law \eqref{e:MP}. Thanks to the relation \eqref{e:singeig}, the empirical singular value distribution $\sum_{i=1}^n \delta_{s_i}/n$ of $Z$ is close to the push forward of the Marchenko-Pastur law \eqref{e:MP} by the map $x\mapsto \sqrt{\phi x}$,
\begin{align*}
&\phantom{{}={}}\rho_{\phi}(x):=2(x/\phi)\rho^\phi_{MP}(x^2/\phi)
=\frac{\sqrt{\left(x^2-\left(\phi-1\right)^2\right)\left(\left( \phi+1\right)^2-x^2\right)}}{\pi x}.
\end{align*}
We remark that $\rho_\phi$ is supported on $[\phi-1, \phi+1]$.
For later use, we denote the hermitization $\tilde Z$ of $Z$ as,
\begin{align}\label{e:tildeZ}
\tilde Z=\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right].
\end{align}
Then $\tilde Z$ has $m-n$ zero eigenvalues, and its other eigenvalues are given by $\pm s_1, \pm s_2,\cdots, \pm s_n$.
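This spectral structure is elementary and can be sanity-checked numerically (a Python/NumPy sketch, illustrative only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 12
Z = rng.standard_normal((n, m)) / np.sqrt(n)
s = np.linalg.svd(Z, compute_uv=False)
Zt = np.block([[np.zeros((n, n)), Z],
               [Z.T, np.zeros((m, m))]])
ev = np.sort(np.linalg.eigvalsh(Zt))
expected = np.sort(np.concatenate([s, -s, np.zeros(m - n)]))
print(np.allclose(ev, expected))   # True: spectrum is {+-s_i} and 0's
\end{verbatim}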
We denote the normalized Stieltjes transform of nonzero eigenvalues of $\tilde Z$ as
\begin{align*}
s(z)=\frac{1}{2n}\sum_{i=1}^n\left(\frac{1}{z-s_i}+\frac{1}{z+s_i}\right),
\end{align*}
and the Green's function of $\tilde Z$ as
\begin{align}\label{e:defG}
G(z)=\left(z
-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-1}
=\left[
\begin{array}{cc}
(z-Z Z^\top/z)^{-1} & (z-Z Z^\top/z)^{-1}(Z/z)\\
(Z^\top/z)(z-Z Z^\top/z)^{-1} & (z-Z^\top Z/z)^{-1}
\end{array}
\right].
\end{align}
Then thanks to Theorem \ref{t:rigidity}, $s(z)$ is close to the Stieltjes transform of the symmetrized version of $\rho_\phi$,
\begin{align*}
m_\phi(z)=\frac{1}{2}\int\frac{\rho_\phi(x)+\rho_\phi(-x)}{z-x}{\rm d} x
=\int\frac{z\rho_\phi(x)}{z^2-x^2}{\rm d} x
=\int \frac{z\rho_{MP}^\phi(x)}{z^2-x\phi}{\rm d} x.
\end{align*}
More explicitly, using the formula of $\rho_{MP}^\phi$ from \eqref{e:MP}, $m_\phi(z)$ is given by
\begin{align}\label{e:mpsi}
m_{\phi}(z)=\frac{-(\phi-1/\phi)+z^2/\phi\pm \sqrt{(z^2/\phi-(\sqrt\phi+1/\sqrt\phi)^2)(z^2/\phi-(\sqrt\phi-1/\sqrt\phi)^2)}}{2z/\phi},
\end{align}
and it satisfies the algebraic equation
\begin{align}\label{e:mgamma}
m_\phi^2+\left(\frac{\phi^2-1}{z}-z\right)m_\phi+1=0.
\end{align}
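As a sanity check, one can verify numerically that the branch of \eqref{e:mpsi} with the minus sign, which decays as $z\rightarrow\infty$, satisfies \eqref{e:mgamma} for real $z$ above the edge (a sketch):
\begin{verbatim}
import numpy as np

phi = 3.0
for z in [phi + 1.5, phi + 3.0, 10.0]:  # real z above the edge phi+1
    disc = np.sqrt((z**2/phi - (np.sqrt(phi) + 1/np.sqrt(phi))**2)
                 * (z**2/phi - (np.sqrt(phi) - 1/np.sqrt(phi))**2))
    m = (-(phi - 1/phi) + z**2/phi - disc) / (2*z/phi)
    print(m**2 + ((phi**2 - 1)/z - z)*m + 1)   # ~ 0 up to rounding
\end{verbatim}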
Let $z=E+\mathrm{i}\eta$ with $\kappa\deq \min\{|E-(\phi-1)|, |E-(\phi+1)|\}$.
We denote the spectral domains
\begin{align}\begin{split}\label{e:defS}
&\bm S\deq \{z=E+\mathrm{i}\eta: \kappa\leq {\mathfrak c}^{-1}, n^{-1+{\mathfrak c}}\leq \eta\leq {\mathfrak c}^{-1}\},\\
&\tilde\bm S \deq \{z=E+\mathrm{i}\eta: E\not\in[(\phi-1),(\phi+1)], n^{-2/3+{\mathfrak c}}\leq \kappa\leq {\mathfrak c}^{-1}, 0<\eta\leq {\mathfrak c}^{-1}\}.
\end{split}\end{align}
The spectral domain $\bm S$ contains the spectral information inside the bulk, and the spectral domain $\tilde \bm S$ contains the spectral information close to the spectral edge.
We recall the following isotropic local law from \cite[Theorem 3.11, 3.12]{alex2014isotropic}, which will be used in Sections \ref{s:proveeig} and \ref{s:proveev}.
\begin{theorem}[Isotropic Local Law]\label{t:isotropiclaw}
For unit vectors ${\bm{a}}\in {\mathbb R}^n$ and ${\bm{b}}\in {\mathbb R}^m$, with high probability, uniformly for $z\in \bm S$, we have
\begin{align*}\begin{split}
&\left| \langle {\bm{a}}, (z- Z Z^\top/z)^{-1}{\bm{a}}\rangle-m_\phi(z)\langle {\bm{a}}, {\bm{a}}\rangle\right|\prec \sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta},
\\
&\left| \langle {\bm{a}}, (z- Z Z^\top/z)^{-1}(Z/z){\bm{b}}\rangle\right|\prec \frac{1}{\phi}\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}\right),
\\
&\left| \langle {\bm{b}}, (z- Z^\top Z/z)^{-1}{\bm{b}}\rangle-\frac{\langle {\bm{b}}, {\bm{b}}\rangle}{z-m_\phi(z)}\right|\prec \frac{1}{\phi^2}\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}\right).
\end{split}
\end{align*}
Uniformly for $z\in \tilde \bm S$ we have the improved estimates:
\begin{align*}\begin{split}
&\left| \langle {\bm{a}}, (z- Z Z^\top/z)^{-1}{\bm{a}}\rangle-m_\phi(z)\langle {\bm{a}}, {\bm{a}}\rangle\right|\prec \sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}},
\\
&\left| \langle {\bm{a}}, (z- Z Z^\top/z)^{-1}(Z/z){\bm{b}}\rangle\right|\prec \frac{1}{\phi}\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}},
\\
&\left| \langle {\bm{b}}, (z- Z^\top Z/z)^{-1}{\bm{b}}\rangle-\frac{\langle {\bm{b}}, {\bm{b}}\rangle}{z-m_\phi(z)}\right|\prec \frac{1}{\phi^2}\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}.
\end{split}\end{align*}
\end{theorem}
\subsection{Master Equation}\label{s:master}
In this section, we derive a master equation which characterizes the outliers of the perturbed matrix $\beta {\bm{v}} {\bm{u}}^\top+ Z$. We denote the Hermitization of $\beta {\bm{v}} {\bm{u}}^\top+ Z$ as
\begin{align}\label{e:hermit}
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top
+\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right],
\end{align}
which encodes the spectral information of $\beta {\bm{v}} {\bm{u}}^\top+ Z$.
We can view \eqref{e:hermit} as a rank two perturbation of $\tilde Z$. We have the following well-known low rank perturbation formula
\begin{lemma}\label{l:ABBA}
For two $(n+m)\times r$ matrices $A$ and $B$, it holds that
\begin{align}\label{e:sdt}
\det(I-AB^\top)=\det(I-B^\top A).
\end{align}
The matrix $I-AB^\top$ is invertible if and only if $I-B^\top A$ is invertible, and
\begin{align}\label{e:minv}
(I-AB^\top)^{-1}=I+A(I-B^\top A)^{-1} B^\top.
\end{align}
\end{lemma}
\begin{proof}
The identity \eqref{e:sdt} is Sylvester's determinant theorem. The second identity \eqref{e:minv} is known as the matrix inversion lemma, or Woodbury's matrix identity.
\end{proof}
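Both identities are easy to sanity-check numerically (an illustrative sketch):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, r = 8, 2
A = rng.standard_normal((N, r))
B = rng.standard_normal((N, r))
print(np.allclose(np.linalg.det(np.eye(N) - A @ B.T),
                  np.linalg.det(np.eye(r) - B.T @ A)))   # e:sdt
lhs = np.linalg.inv(np.eye(N) - A @ B.T)
rhs = np.eye(N) + A @ np.linalg.inv(np.eye(r) - B.T @ A) @ B.T
print(np.allclose(lhs, rhs))                             # e:minv
\end{verbatim}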
We will use Lemma \ref{l:ABBA} to study \eqref{e:hermit}, which is a rank two perturbation of $\tilde Z$. Its eigenvalues are given by the roots of the characteristic polynomial
\begin{align*}
&\phantom{{}={}}\det\left(z-\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top
-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)\\
&=\det\left(z
-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)\det\left(I-G(z)\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top
\right)\\
&=\det\left(z
-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)\det\left(I-U^\top A(z)
\right),
\end{align*}
where we used Lemma \ref{l:ABBA} for $r=2$, and
\begin{align}\label{e:AU}
A(z):=G(z)\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right],\quad
U:=
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right].
\end{align}
Therefore, the singular values of $\beta {\bm{v}}{\bm{u}}^\top+Z$ (which are not eigenvalues of $\tilde Z$) are characterized by
\begin{align}\label{e:IUA}
\det(I-U^\top A(z))=0.
\end{align}
The equation \eqref{e:IUA} can be used to characterize the outliers of $\beta {\bm{v}}{\bm{u}}^\top+Z$, and we will use it to prove Theorem \ref{t:eig} in Section \ref{s:proveeig}.
Thanks to Lemma \ref{l:ABBA}, for $z\in {\mathbb C}^+$ in the upper half plane, we can write explicitly the Green's function of the Hermitization \eqref{e:hermit} of $\beta {\bm{v}}{\bm{u}}^\top +Z$,
\begin{align}\begin{split}\label{e:Greena}
&\phantom{{}={}}\left(z-\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top
-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-1}\\
&=
\left(I-G(z)\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top
\right)^{-1}G(z)\\
&=\left(I+A(z)(I-U^\top A(z))^{-1}U^\top\right)G(z).
\end{split}\end{align}
The Green's function \eqref{e:Greena} contains the information about the eigenvectors, and can be used to study the singular vectors associated with the outliers of $\beta {\bm{v}}{\bm{u}}^\top+Z$; we will use it to prove Theorem \ref{t:ev} in Section \ref{s:proveev}.
\subsection{Proof of Theorem \ref{t:eig}}\label{s:proveeig}
Let $\phi=\sqrt{m/n}$. We denote the singular values of $\beta {\bm{v}}{\bm{u}}^\top+Z$ as $\hat s_1\geq \hat s_2\geq \cdots\geq \hat s_n\geq 0$, with corresponding normalized left and right singular vectors as $\hat {\bm{v}}_1, \hat{\bm{v}}_2, \cdots, \hat {\bm{v}}_n\in {\mathbb R}^n$, and $\hat {\bm{u}}_1, \hat{\bm{u}}_2, \cdots, \hat {\bm{u}}_n\in {\mathbb R}^m$. Then the nonzero eigenvalues of its Hermitization
\begin{align}\label{e:hermitc}
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top
+\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right],
\end{align}
are given by $\hat s_1, -\hat s_1, \hat s_2, -\hat s_2,\cdots, \hat s_n, -\hat s_n$. The corresponding normalized eigenvectors are given by
$[\hat{\bm{v}}_1, \hat{\bm{u}}_1]/\sqrt 2, [\hat{\bm{v}}_1, -\hat{\bm{u}}_1]/\sqrt 2, [\hat{\bm{v}}_2, \hat{\bm{u}}_2]/\sqrt 2, [\hat{\bm{v}}_2, -\hat{\bm{u}}_2]/\sqrt 2,\cdots, [\hat{\bm{v}}_n, \hat{\bm{u}}_n]/\sqrt 2, [\hat{\bm{v}}_n, -\hat{\bm{u}}_n]/\sqrt 2$.
We recall from Theorem \ref{t:rigidity} that with high probability the singular values of $Z$ are bounded by $(\phi+1)+n^{-2/3+{\mathfrak c}/2}$, i.e. $s_1\leq (\phi+1)+n^{-2/3+{\mathfrak c}/2}$. In the remainder of this section, we restrict ourselves to the event that $s_1\leq (\phi+1)+n^{-2/3+{\mathfrak c}/2}$, which holds with high probability. We can view \eqref{e:hermitc} as a rank two perturbation of $\tilde Z$,
\begin{align*}
\beta \left[
\begin{array}{c}
{\bm{v}}/\sqrt 2 \\
{\bm{u}}/\sqrt 2
\end{array}
\right]
\left[
\begin{array}{c}
{\bm{v}}/\sqrt 2 \\
{\bm{u}}/\sqrt 2
\end{array}
\right]^\top
-\beta \left[
\begin{array}{c}
{\bm{v}}/\sqrt 2 \\
-{\bm{u}}/\sqrt 2
\end{array}
\right]
\left[
\begin{array}{c}
{\bm{v}}/\sqrt 2 \\
-{\bm{u}}/\sqrt 2
\end{array}
\right]^\top
+\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right],
\end{align}
By the variational formula of eigenvalues, $\hat s_2$ is upper bounded by the second largest eigenvalue of
\begin{align}\label{e:rankone}
\beta \left[
\begin{array}{c}
{\bm{v}}/\sqrt 2 \\
{\bm{u}}/\sqrt 2
\end{array}
\right]
\left[
\begin{array}{c}
{\bm{v}}/\sqrt 2 \\
{\bm{u}}/\sqrt 2
\end{array}
\right]^\top
+\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right].
\end{align}
In turn, since \eqref{e:rankone} is a rank one perturbation of $\tilde Z$, by eigenvalue interlacing the second largest eigenvalue of \eqref{e:rankone} is upper bounded by $s_1$. Thus we have $\hat s_2\leq s_1\leq (\phi+1)+n^{-2/3+{\mathfrak c}/2}$. Therefore, \eqref{e:hermitc} can have at most one outlier eigenvalue.
We recall from \eqref{e:IUA} that the matrix $\beta {\bm{v}}{\bm{u}}^\top+Z$ has an outlier singular value bigger than $(\phi+1)+n^{-2/3+{\mathfrak c}}$ if and only if
\begin{align}\label{e:IUAc}
\det(I-U^\top A(x))=0
\end{align}
has a zero with $x\geq (\phi+1)+n^{-2/3+{\mathfrak c}}$, where
\begin{align*}
A(z)=G(z)\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right],\quad
U=
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right].
\end{align*}
We can rewrite the equation \eqref{e:IUAc} as
\begin{align}\label{e:eigeq}
&\det\left(\left[
\begin{array}{cc}
1/\beta & 0\\
0 & -1/\beta
\end{array}
\right]-\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top G(x)\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\right)=0.
\end{align}
We recall the Green's function $G(x)$ from \eqref{e:defG}
\begin{align*}
G(x)=\left(x
-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-1}
=\left[
\begin{array}{cc}
(x-Z Z^\top/x)^{-1} & (x-Z Z^\top/x)^{-1}(Z/x)\\
(Z^\top/x)(x-Z Z^\top/x)^{-1} & (x-Z^\top Z/x)^{-1}
\end{array}
\right],
\end{align*}
and denote the quantities
\begin{align}\begin{split}\label{e:defABC}
&\cA(x)= \langle {\bm{v}}, (x- Z Z^\top/x)^{-1}{\bm{v}}\rangle,\\
&\cB(x)=\langle {\bm{v}}, (x- Z Z^\top/x)^{-1}(Z/x){\bm{u}}\rangle,\\
&{\cal C}(x)=\langle {\bm{u}}, (x- Z^\top Z/x)^{-1}{\bm{u}}\rangle.
\end{split}\end{align}
With these notations, we can rewrite \eqref{e:eigeq} as
\begin{align}\label{e:repll}
\det\left(\left[
\begin{array}{cc}
1/\beta & 0\\
0 & -1/\beta
\end{array}
\right]-\frac{1}{2}\left[
\begin{array}{cc}
\cA(x)+2\cB(x)+{\cal C}(x) & \cA(x)-{\cal C}(x)\\
\cA(x)-{\cal C}(x) & \cA(x)-2\cB(x)+{\cal C}(x)
\end{array}
\right]
\right)=0.
\end{align}
It simplifies to
\begin{align}\label{e:eq0}
\left(\frac{1}{\beta}-\cB(x)\right)^2=\cA(x){\cal C}(x).
\end{align}
Let $x-(\phi+1)=\kappa\geq n^{-2/3+{\mathfrak c}}$, then $x\in \tilde \bm S$ as defined in \eqref{e:defS}.
Thanks to the square root behavior \eqref{e:mpsi} of $m_\phi(z)$ around the spectral edge $z=\phi+1$, we have
\begin{align*}
|\Im[m_\phi(z)]|\asymp \frac{\eta}{\sqrt{\kappa+\eta}}, \quad z=\phi+1+\kappa+\mathrm{i}\eta,\quad \kappa\geq 0.
\end{align*}
In particular we have
\begin{align}\label{e:m/eta}
\sqrt{\frac{|\Im[m_{\phi}(z)]|}{n\eta}}\asymp \frac{1}{n^{1/2}\kappa^{1/4}},\quad z=\phi+1+\kappa+\mathrm{i}\eta,\quad \kappa\geq 0.
\end{align}
Thanks to Theorem \ref{t:isotropiclaw} with ${\bm{a}}={\bm{v}}$ and ${\bm{b}}={\bm{u}}$, plugging in \eqref{e:m/eta}, we have
\begin{align}\begin{split}\label{e:ABCcopyhi}
&\cA(x)=m_\phi(x)+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{1/4}}\right),\\
&\cB(x)=\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right),\\
&{\cal C}(x)=\frac{1}{x-m_\phi(x)}+\OO_\prec\left(\frac{1}{\phi^2 n^{1/2}\kappa^{1/4}}\right),
\end{split}\end{align}
where the error terms are continuous in $x$.
Then by plugging \eqref{e:ABCcopyhi} we can rewrite \eqref{e:eq0} as
\begin{align}\label{e:eq1}
\frac{1}{\beta^2}= \frac{m_\phi(x)}{x-m_\phi(x)} +\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right).
\end{align}
We recall the algebraic equation of $m_\phi$ from \eqref{e:mgamma}
\begin{align}\label{e:meq}
m_\phi^2+\left(\frac{\phi^2-1}{x}-x\right)m_\phi+1=0,
\end{align}
By further rearranging \eqref{e:eq1} we get
\begin{align}\label{e:eqha}
\frac{m_\phi(x)}{x-m_\phi(x)}=\frac{1}{\beta^2}+\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right).
\end{align}
Since $m_\phi(x)$ is monotone decreasing for $x\geq \phi+1$, the lefthand side is monotone decreasing for $x\geq \phi+1$. Using the formula \eqref{e:mpsi} for $m_\phi(x)$, we have $m_\phi(\phi+1)=1$ and $m_\phi(x)\rightarrow 0$ as $x\rightarrow \infty$. For $x=\phi+1+\kappa$ with $\kappa=\oo(1)$, we have
\begin{align*}
m_\phi(x)=1-C\sqrt \kappa+\OO(\kappa),
\end{align*}
for some constant $C>0$. In this regime, the lefthand side of \eqref{e:eqha} behaves as
\begin{align}\label{e:mphidiv}
\frac{m_\phi(x)}{x-m_\phi(x)}=\frac{1-C\sqrt \kappa +\OO(\kappa)}{\phi+C\sqrt \kappa +\OO(\kappa)}
=\frac{1}{\phi}\left(1-(C+C/\phi)\sqrt\kappa+\OO(\kappa)\right).
\end{align}
We take
\begin{align*}
\beta=\lambda\sqrt \phi.
\end{align*}
In the following we first prove \eqref{e:weak}.
\begin{proof}[Proof of \eqref{e:weak}]
If $\lambda\leq 1+n^{-1/3+{\mathfrak c}}$, we show in the following that \eqref{e:eq1} does not have a solution with $x\geq \phi+1+n^{-2/3+3{\mathfrak c}}$. For $x\geq \phi+1+n^{-2/3+3{\mathfrak c}}$, using \eqref{e:mphidiv} and the monotonicity of $m_\phi(x)$, we have
\begin{align}\begin{split}\label{e:left}
\frac{m_\phi(x)}{x-m_\phi(x)}
&\leq \frac{m_\phi(\phi+1+n^{-2/3+3{\mathfrak c}})}{\phi+1+n^{-2/3+3{\mathfrak c}}-m_\phi(\phi+1+n^{-2/3+3{\mathfrak c}})}\\
&=\frac{1}{\phi}(1-(C+C/\phi)n^{-1/3+3{\mathfrak c}/2}+\OO(n^{-2/3+3{\mathfrak c}})).
\end{split}\end{align}
For $x\geq \phi+1+n^{-2/3+3{\mathfrak c}}$, the righthand side of \eqref{e:eqha} is lower bounded by
\begin{align}\begin{split}\label{e:right}
&\phantom{{}={}}\frac{1}{\beta^2}+\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right)
= \frac{1}{\lambda^2\phi}+\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right)\\
&\geq \frac{1}{\phi}(1-2n^{-1/3+{\mathfrak c}}+\OO(n^{-2/3+2{\mathfrak c}}))+\OO_\prec\left( \frac{1}{\phi n^{1/3+3{\mathfrak c}/4}}\right)
>\frac{m_\phi(x)}{x-m_\phi(x)}.
\end{split}\end{align}
Therefore, \eqref{e:eqha} does not have a solution for $x\geq \phi+1+n^{-2/3+3{\mathfrak c}}$. We conclude that $\hat s_1\leq \phi+1+n^{-2/3+3{\mathfrak c}}$. This finishes the proof of \eqref{e:weak}.
\end{proof}
In the following, we study the case that $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$.
\begin{proof}[Proof of \eqref{e:strong}]
Similarly to \eqref{e:left} and \eqref{e:right} we have for $x= \phi+1+n^{-2/3+{\mathfrak c}}$
\begin{align*}
\frac{1}{\beta^2}+\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right)\leq \frac{m_\phi(x)}{x-m_\phi(x)}.
\end{align*}
And for $x\rightarrow \infty$,
\begin{align*}
\lim_{x\rightarrow \infty }\frac{1}{\beta^2}+\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right)=\frac{1}{\beta^2}\geq \lim_{x\rightarrow \infty }\frac{m_\phi(x)}{x-m_\phi(x)}=0.
\end{align*}
Therefore \eqref{e:eqha} has a solution in the interval $[\phi+1+n^{-2/3+{\mathfrak c}}, \infty)$, which is $\hat s_1$.
In particular, we have $\hat s_1\geq \phi+1+n^{-2/3+{\mathfrak c}}$, and $\beta {\bm{v}}{\bm{u}}^\top +Z$ has an outlier. In the following, we compute the value of $\hat s_1$.
We rewrite the equation \eqref{e:eqha} as
\begin{align}\label{e:mphi}
\frac{m_\phi(x)}{x-m_\phi(x)}= \frac{\tau}{\phi},\quad
\tau=\frac{1}{\lambda^2}+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{1/4}}\right).
\end{align}
We can solve for $x$ and $m_\phi(x)$ using \eqref{e:meq} and \eqref{e:mphi}
\begin{align}\begin{split}\label{e:xmexp}
&x=\sqrt{\phi^2+(\tau+1/\tau)\phi+1}
=\sqrt{\phi^2+(\lambda^2+1/\lambda^2)\phi+1}+\OO_\prec \left(\frac{(\lambda-1)}{\phi}\frac{1}{n^{1/2}\kappa^{1/4}}+\frac{1}{\phi}\frac{1}{n\kappa^{1/2}}\right),\\
&m_\phi=\frac{x}{1+\phi/\tau}=\left(1+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{1/4}}\right)\right) \sqrt{\frac{\phi+\lambda^2}{\lambda^2(1+\lambda^2\phi)}}.
\end{split}\end{align}
We recall that
\begin{align*}
x=\phi+1+\kappa,\quad \kappa\geq n^{-2/3+{\mathfrak c}}
\end{align*}
By a Taylor expansion of the first estimate in \eqref{e:xmexp}, for $\lambda=1+\oo(1)$, there exists some constant $C>0$,
\begin{align*}
x\geq \phi+1+C(\lambda-1)^2+\OO_\prec \left(\frac{(\lambda-1)}{\phi}\frac{1}{n^{1/2}\kappa^{1/4}}+\frac{1}{\phi}\frac{1}{n\kappa^{1/2}}\right).
\end{align*}
By our assumption $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$, we then have $\kappa=x-\phi-1\asymp (\lambda-1)^2\gtrsim n^{-2/3+2{\mathfrak c}}$. We can use this estimate of $\kappa$ to simplify the error terms in \eqref{e:xmexp}.
In summary we have for $\beta=\lambda\sqrt \phi$ and $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$, $\beta {\bm{v}}{\bm{u}}^\top + Z$ has an outlier singular value at
\begin{align*}
x=\sqrt{\phi^2+(\lambda^2+1/\lambda^2)\phi+1}+\OO_\prec \left(\frac{(\lambda-1)^{1/2}}{n^{1/2} \phi}\right).
\end{align*}
This finishes the proof of \eqref{e:strong}.
\end{proof}
\subsection{Proof of Theorem \ref{t:ev}}\label{s:proveev}
Let $\phi=\sqrt{m/n}$. We first study the case that $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$.
\begin{proof}[Proof of \eqref{e:strong1} and \eqref{e:strong22}]
If $\lambda\geq 1+n^{-1/3+{\mathfrak c}}$, Theorem \ref{t:eig} implies that $\beta {\bm{v}}{\bm{u}}^\top +Z$ has an outlier singular value $x=\hat s_1=\phi+1+\kappa$ with $\kappa\asymp (\lambda-1)^2$. Therefore, $x$ is an eigenvalue of the Hermitized matrix:
\begin{align}\label{e:eigvvw}
x {\bm{w}}=\left(\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top
+\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right){\bm{w}},
\end{align}
for the unit vector ${\bm{w}}=[\hat{\bm{v}}_1, \hat{\bm{u}}_1]^\top/\sqrt 2$, where $\hat{\bm{v}}_1, \hat{\bm{u}}_1$ are the left and right singular vectors of $\beta {\bm{v}}{\bm{u}}^\top +Z$ corresponding to the singular value $\hat s_1$.
By rearranging \eqref{e:eigvvw}, we get
\begin{align}\label{e:weq}
{\bm{w}}=\left(x-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-1}\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top{\bm{w}}.
\end{align}
We denote by $\tilde{\bm{w}}$ the length two vector of projections of ${\bm{w}}$ onto the ${\bm{v}}$ and ${\bm{u}}$ directions,
\begin{align*}
\tilde {\bm{w}}=\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top{\bm{w}}=\frac{1}{2}\left[
\begin{array}{cc}
{\bm{v}} & {\bm{v}}\\
{\bm{u}} & -{\bm{u}}
\end{array}
\right]^\top
\left[
\begin{array}{c}
\hat {\bm{v}}_1\\
\hat{\bm{u}}_1
\end{array}
\right],
\end{align*}
then it satisfies
\begin{align}\label{e:ttddw}
\tilde {\bm{w}}
=\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top\left(x-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-1}\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\tilde{\bm{w}}.
\end{align}
We denote $\tilde {\bm{w}}=[\tilde {\bm{w}}_1, \tilde {\bm{w}}_2]^\top$ with $\tilde {\bm{w}}_1, \tilde {\bm{w}}_2\in {\mathbb R}$.
The inner products $\langle \hat{{\bm{v}}}_1, {\bm{v}}\rangle$ and $\langle \hat{{\bm{u}}}_1, {\bm{u}}\rangle$ are given by
\begin{align}\label{e:innerp}
\langle \hat{{\bm{v}}}_1, {\bm{v}}\rangle =\tilde {\bm{w}}_1+\tilde {\bm{w}}_2,\quad
\langle \hat{{\bm{u}}}_1, {\bm{u}}\rangle =\tilde {\bm{w}}_1-\tilde {\bm{w}}_2.
\end{align}
By taking the norm on both sides of \eqref{e:weq}, we get another equation for $\tilde {\bm{w}}$,
\begin{align}\label{e:wGw}
1=\tilde {\bm{w}}^\top
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top
\left(x-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-2}\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\tilde{\bm{w}}.
\end{align}
Using the notations from \eqref{e:defABC}, we can rewrite \eqref{e:ttddw} as
\begin{align}\label{e:evrelat}
\tilde {\bm{w}}=\frac{1}{2}\left[
\begin{array}{cc}
\cA(x)+2\cB(x)+{\cal C}(x) & \cA(x)-{\cal C}(x)\\
\cA(x)-{\cal C}(x) & \cA(x)-2\cB(x)+{\cal C}(x)
\end{array}
\right]\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\tilde {\bm{w}},
\end{align}
where we recall from \eqref{e:eq0}
\begin{align}\label{e:relate}
\left(\frac{1}{\beta}-\cB(x)\right)^2=\cA(x){\cal C}(x).
\end{align}
Thanks to Theorem \ref{t:isotropiclaw}, for $x=\phi+1+\kappa$ with $\kappa\geq n^{-2/3+{\mathfrak c}}$, we have
\begin{align}\begin{split}\label{e:ABCbb}
&\cA(x)=m_\phi(x)+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{1/4}}\right),\\
&\cB(x)=\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right),\\
&{\cal C}(x)=\frac{1}{x-m_\phi(x)}+\OO_\prec\left(\frac{1}{\phi^2 n^{1/2}\kappa^{1/4}}\right),
\end{split}\end{align}
where we used \eqref{e:m/eta}.
On the event that the singular values of $Z$ are bounded by $(\phi+1)+n^{-2/3+{\mathfrak c}/2}$, i.e. $s_1\leq (\phi+1)+n^{-2/3+{\mathfrak c}/2}$, the functions $\cA(z), \cB(z),{\cal C}(z)$ are analytic for $\Re[z]\geq (\phi+1)+n^{-2/3+{\mathfrak c}/2}$. We take the contour $\omega=\{z: |z-x|=\kappa/2\}$; inside this contour $\Re[z]\geq \phi+1+n^{-2/3+{\mathfrak c}}/2$, so $\cA(z)$ is analytic there.
Then we can rewrite $\cA'(x)$ as a contour integral
\begin{align}\begin{split}\label{e:ABC'0}
\cA'(x)&=\frac{1}{2\pi\mathrm{i}}\oint_\omega\frac{\cA(z)}{(z-x)^2}{\rm d} z
=\frac{1}{2\pi\mathrm{i}}\oint_\omega\frac{m_\phi(z)}{(z-x)^2}{\rm d} z
+\OO_\prec\left(\oint_\omega \frac{{\rm d} z}{|z-x|^2}\frac{1}{n^{1/2}\kappa^{1/4}}\right)\\
&=m'_\phi(x)+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{5/4}}\right),
\end{split}\end{align}
where in the last equality we used that the total length of the contour $\omega$ is of order $\kappa$. A similar argument also gives the estimates of $\cB'(x)$ and ${\cal C}'(x)$,
\begin{align}\begin{split}\label{e:ABC'}
&\cB'(x)=\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{5/4}}\right),\\
&{\cal C}'(x)=\frac{m'_\phi(x)-1}{(x-m_\phi(x))^2}+\OO_\prec\left(\frac{1}{\phi^2 n^{1/2}\kappa^{5/4}}\right).
\end{split}\end{align}
By slightly rearranging \eqref{e:evrelat}, we get that
\begin{align*}
&\phantom{{}={}}\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]\tilde {\bm{w}}
=\frac{1}{2}\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]\left[
\begin{array}{cc}
\cA(x)+2\cB(x)+{\cal C}(x) & \cA(x)-{\cal C}(x)\\
\cA(x)-{\cal C}(x) & \cA(x)-2\cB(x)+{\cal C}(x)
\end{array}
\right]\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\tilde {\bm{w}}.
\end{align*}
Therefore
\begin{align*}
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\tilde {\bm{w}},
\end{align*}
is an eigenvector of the following matrix with eigenvalue $1$,
\begin{align}
\frac{1}{2}\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]\left[
\begin{array}{cc}
\cA(x)+2\cB(x)+{\cal C}(x) & \cA(x)-{\cal C}(x)\\
\cA(x)-{\cal C}(x) & \cA(x)-2\cB(x)+{\cal C}(x)
\end{array}
\right].\label{e:themm}
\end{align}
By plugging \eqref{e:relate} into \eqref{e:themm}, we can rewrite it as
\begin{align*}
&I+\frac{\beta}{2} \left[
\begin{array}{c}
\sqrt{\cA(x)}-\sqrt{{\cal C}(x)}\\
-(\sqrt{\cA(x)}+\sqrt{{\cal C}(x)})
\end{array}
\right]
\left[
\begin{array}{cc}
\sqrt{\cA(x)}-\sqrt{{\cal C}(x)}&
\sqrt{\cA(x)}+\sqrt{{\cal C}(x)}
\end{array}
\right],
\end{align*}
which has an eigenvector $[\sqrt{{\cal C}(x)}+\sqrt{\cA(x)}, \sqrt{{\cal C}(x)}-\sqrt{\cA(x)}]^\top$ with eigenvalue $1$. We conclude that
the eigenvector $\tilde {\bm{w}}$ satisfies
\begin{align}\label{e:twexxq}
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\tilde{\bm{w}}=c
\left[
\begin{array}{c}
\sqrt{{\cal C}(x)}+\sqrt{\cA(x)}\\
\sqrt{{\cal C}(x)}-\sqrt{\cA(x)}
\end{array}
\right].
\end{align}
We need to use \eqref{e:wGw} to determine $c$ in the above expression,
\begin{align*}
&\phantom{{}={}}\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top\left(x-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-2}\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]\\
&=-\partial_x \left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]^\top\left(x-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-1}\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}/\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}/\sqrt{2}
\end{array}
\right]\\
&=
-\frac{1}{2}\left[
\begin{array}{cc}
\cA'(x)+2\cB'(x)+{\cal C}'(x) & \cA'(x)-{\cal C}'(x)\\
\cA'(x)-{\cal C}'(x) & \cA'(x)-2\cB'(x)+{\cal C}'(x)
\end{array}
\right]
\end{align*}
By plugging the above expression into \eqref{e:wGw} and using \eqref{e:ABCbb}, \eqref{e:ABC'0}, \eqref{e:ABC'} to simplify, we get
\begin{align}\begin{split}\label{e:computec}
1&=-2c^2({\cal C}(x)\cA'(x)+\cA(x) {\cal C}'(x)+2\sqrt{\cA(x){\cal C}(x)}\cB'(x))\\
&=-2c^2\left( \frac{m_\phi'(x)}{x-m_\phi(x)}+\frac{m_\phi(m'_\phi-1)}{(x-m_\phi)^2}+\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{5/4}}\right)\right)\\
&=-2c^2\left( \frac{m_\phi'(x)}{x-m_\phi(x)}+\frac{m_\phi(m'_\phi-1)}{(x-m_\phi)^2}\right)\left(1+\OO_\prec\left(\frac{1}{ n^{1/2}\kappa^{3/4}}\right)\right),
\end{split}\end{align}
where we also used that $|m_\phi'(x)|\asymp 1/\sqrt\kappa$, with $\kappa=x-(\phi+1)$.
We recall the inner products $\langle \hat{{\bm{v}}}_1, {\bm{v}}\rangle$ and $\langle \hat{{\bm{u}}}_1, {\bm{u}}\rangle$ from \eqref{e:innerp}; using \eqref{e:twexxq} and \eqref{e:ABCcopyhi},
\begin{align}\label{e:innerp2}
&\langle \hat{{\bm{v}}}_1, {\bm{v}}\rangle =\tilde {\bm{w}}_1+\tilde {\bm{w}}_2=
\frac{2c}{\beta}\sqrt{\cA(x)}
=\frac{2c \sqrt{m_\phi}}{\beta}\left(1+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{1/4}}\right)\right).\\
&\langle \hat{{\bm{u}}}_1, {\bm{u}}\rangle =\tilde {\bm{w}}_1-\tilde {\bm{w}}_2=
\frac{2c}{\beta}\sqrt{{\cal C}(x)}
=\frac{2c }{\beta\sqrt{x-m_\phi}}\left(1+\OO_\prec\left(\frac{1}{\phi n^{1/2}\kappa^{1/4}}\right)\right).\label{e:innerp2u}
\end{align}
We also recall from \eqref{e:mphi} that
\begin{align}\label{e:x}
1+\left((\phi^2-1)/x-x\right)m_\phi+m_\phi^2=0, \quad \frac{m_\phi(x)}{x-m_\phi(x)}= \frac{\tau}{\phi},\quad
\tau=\frac{1}{\lambda^2}+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{1/4}}\right).
\end{align}
Then by taking the derivative with respect to $x$ on both sides of the first expression in \eqref{e:x}, we get the following expression for $m_\phi'(x)$,
\begin{align*}
\left(-(\phi^2-1)/x^2-1\right)m_\phi+\left((\phi^2-1)/x-x\right)m_\phi'+2m_\phi m_\phi'=0,\quad
m_\phi'=\frac{((\phi^2-1)/x^2+1)m_\phi}{m_\phi-1/m_\phi}.
\end{align*}
Explicitly, we can solve for $m_\phi'(x)$ in terms of $m_\phi(x)$ as
\begin{align*}
m_\phi'&= \frac{((\phi^2-1)/x^2+1)}{1-1/m_\phi^2}
=\frac{2-(m_\phi+1/m_\phi)\frac{1}{x}}{1-1/{m_\phi}^2}
=\frac{2-(m_\phi+1/m_\phi)\frac{1}{(\phi/\tau+1)m_\phi}}{1-1/{m_\phi}^2}
=\left(1+\frac{\phi}{\tau}\right)^{-1}\left(1+\frac{2\phi/\tau}{1-1/m_\phi^2}\right).
\end{align*}
With this expression for $m_\phi'(x)$, we can use \eqref{e:computec} to compute $c$:
\begin{align*}
\left(1+\OO_\prec\left(\frac{1}{ n^{1/2}\kappa^{3/4}}\right)\right)
&=-2c^2m_\phi \left( \frac{m_\phi'(x)}{m_\phi(x-m_\phi(x))}+\frac{(m'_\phi-1)}{(x-m_\phi)^2}\right)\\
&=-2\frac{c^2m_\phi}{(\phi/\tau)^2} \left( \frac{\phi}{\tau}\frac{m_\phi'(x)}{m_\phi^2}+\frac{(m'_\phi-1)}{m_\phi^2}\right)\\
&=-2\frac{c^2m_\phi}{(\phi/\tau)^2}\frac{2(\phi/\tau)}{m_\phi^2-1}
=4\frac{c^2m_\phi}{\phi/\tau}\frac{1}{1-m_\phi^2}.
\end{align*}
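Solving the last display for $c$ (the overall sign of $c$ is immaterial, since the vector $\tilde{\bm{w}}$ in \eqref{e:twexxq} is only determined up to a sign), we obtain
\begin{align*}
2c\sqrt{m_\phi}=\sqrt{\frac{\phi(1-m_\phi^2)}{\tau}}\left(1+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{3/4}}\right)\right).
\end{align*}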
Plugging this into \eqref{e:innerp2} and recalling $\beta=\lambda\sqrt\phi$, we conclude that
\begin{align}\label{e:innerp3}
\langle \hat{\bm{v}}_1, {\bm{v}}\rangle=\frac{2c \sqrt{m_\phi}}{\lambda \sqrt\phi}\left(1+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{1/4}}\right)\right)
=\frac{\sqrt{(1-m_\phi^2)/\tau}}{\lambda}\left(1+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{3/4}}\right)\right).
\end{align}
We recall from \eqref{e:xmexp} that
\begin{align}\label{e:mphiexx}
m_\phi=\left(1+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{1/4}}\right)\right) \sqrt{\frac{\phi+\lambda^2}{\lambda^2(1+\lambda^2\phi)}},
\end{align}
and $\kappa\asymp (\lambda-1)^2$.
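Since $\tau=1/\lambda^2$ up to the error recorded in \eqref{e:x}, the leading term of the square of \eqref{e:innerp3} simplifies as
\begin{align*}
\frac{1-m_\phi^2}{\tau\lambda^2}
=1-\frac{\phi+\lambda^2}{\lambda^2(1+\lambda^2\phi)}
=\frac{\phi(\lambda^4-1)}{\lambda^2(1+\lambda^2\phi)}
=\frac{\lambda^4-1}{\lambda^4+\lambda^2/\phi}.
\end{align*}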
Then we conclude that
\begin{align}\label{e:v1iexx}
\langle \hat{\bm{v}}_1, {\bm{v}}\rangle=\left(1+\OO_\prec\left(\frac{1}{n^{1/2}(\lambda-1)^{3/2}}\right)\right)\sqrt{\frac{(\lambda^4-1)}{\lambda^4+\lambda^2/\phi}}
\sim \sqrt{1-\frac{1}{\lambda^4}},
\end{align}
as $\phi$ goes to infinity.
For the inner product $\langle \hat{\bm{u}}_1, {\bm{u}}\rangle$ from \eqref{e:innerp2u}, we have
\begin{align*}
&\phantom{{}={}}\langle \hat{{\bm{u}}}_1, {\bm{u}}\rangle
=\langle \hat{{\bm{v}}}_1, {\bm{v}}\rangle\sqrt{\frac{1}{m_\phi}\frac{1}{x-m_\phi}}\left(1+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{3/4}}\right)\right)\\
&=\sqrt{\frac{\lambda^4-1}{\lambda^2(\lambda^2+\phi)}}\left(1+\OO_\prec\left(\frac{1}{n^{1/2}\kappa^{3/4}}\right)\right)
=\left(1+\OO_\prec\left(\frac{1}{n^{1/2}(\lambda-1)^{3/2}}\right)\right)\sqrt{\frac{\lambda^4-1}{\lambda^2(\lambda^2+\phi)}},
\end{align*}
where for the second line, we plugged in the estimates \eqref{e:x}, \eqref{e:innerp3} and \eqref{e:mphiexx}.
This finishes the proof of \eqref{e:strong1} and \eqref{e:strong22}.
\end{proof}
In the following, we study the case when $\lambda\leq 1+n^{-2/3+{\mathfrak c}}$ and prove \eqref{e:weak1}.
\begin{proof}[Proof of \eqref{e:weak1}]
We recall the Hermitized matrix from \eqref{e:hermitc}. Its nonzero eigenvalues are given by $\hat s_1, -\hat s_1, \hat s_2, -\hat s_2,\cdots, \hat s_n, -\hat s_n$, and the corresponding normalized eigenvectors are given by
$[\hat{\bm{v}}_1, \hat{\bm{u}}_1]/\sqrt 2, [\hat{\bm{v}}_1, -\hat{\bm{u}}_1]/\sqrt 2, [\hat{\bm{v}}_2, \hat{\bm{u}}_2]/\sqrt 2, [\hat{\bm{v}}_2, -\hat{\bm{u}}_2]/\sqrt 2,\cdots, [\hat{\bm{v}}_n, \hat{\bm{u}}_n]/\sqrt 2, [\hat{\bm{v}}_n, -\hat{\bm{u}}_n]/\sqrt 2$. Then its Green's function is given by
\begin{align}\begin{split}\label{e:hermitcc}
&\phantom{{}={}}\left(z-\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right]
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}\sqrt{2}
\end{array}
\right]^\top
-\left[
\begin{array}{cc}
0 & Z\\
Z^\top & 0
\end{array}
\right]\right)^{-1}\\
&=\sum_{i=1}^n \frac{[\hat {\bm{v}}_i, \hat {\bm{u}}_i][\hat {\bm{v}}_i, \hat {\bm{u}}_i]^\top}{2(z-\hat s_i)} + \frac{[\hat {\bm{v}}_i, -\hat {\bm{u}}_i][\hat {\bm{v}}_i, -\hat {\bm{u}}_i]^\top}{2(z+\hat s_i)}+\frac{WW^\top}{z}\\
&=\left(I+A(z)(I-U^\top A(z))^{-1}U^\top\right)G(z),
\end{split}\end{align}
where
\begin{align*}
A(z)=G(z)\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}\sqrt{2}
\end{array}
\right]
\left[
\begin{array}{cc}
\beta & 0\\
0 & -\beta
\end{array}
\right],\quad
U=
\left[
\begin{array}{cc}
{\bm{v}}/\sqrt 2 & {\bm{v}}\sqrt{2}\\
{\bm{u}}/\sqrt 2 & -{\bm{u}}\sqrt{2}
\end{array}
\right],
\end{align*}
are defined as in \eqref{e:AU}, and $W$ is an $(m+n)\times(m-n)$ matrix whose columns are the eigenvectors of the Hermitized matrix \eqref{e:hermitc} corresponding to the eigenvalue $0$.
We conjugate \eqref{e:hermitcc} by $U$ on both sides, i.e., multiply by $U^\top$ from the left and by $U$ from the right, and get a $2\times 2$ matrix
\begin{align*}
U^\top\left(I+A(z)(I-U^\top A(z))^{-1}U^\top\right)G(z)U
=(I-U^\top A(z))^{-1}(U^\top G(z)U).
\end{align*}
This matrix encodes the projections of $[{\bm{v}}, \pm {\bm{u}}]$ onto the directions $[\hat {\bm{v}}_i, \pm \hat {\bm{u}}_i]$. More precisely,
\begin{align}\begin{split}\label{e:decomp}
&((I-U^\top A(z))^{-1}(U^\top G(z)U))_{11}\\
&=\sum_{i=1}^n \frac{\langle [\hat {\bm{v}}_i, \hat {\bm{u}}_i], [{\bm{v}}, {\bm{u}}]\rangle^2}{4(z-\hat s_i)} + \frac{\langle [\hat {\bm{v}}_i, -\hat {\bm{u}}_i], [{\bm{v}}, {\bm{u}}]\rangle^2}{4(z+\hat s_i)}+\frac{\|W^\top[{\bm{v}}, {\bm{u}}]\|_2^2}{2z},\\
&((I-U^\top A(z))^{-1}(U^\top G(z)U))_{22}\\
&=\sum_{i=1}^n \frac{\langle [\hat {\bm{v}}_i, \hat {\bm{u}}_i], [{\bm{v}}, -{\bm{u}}]\rangle^2}{4(z-\hat s_i)} + \frac{\langle [\hat {\bm{v}}_i, -\hat {\bm{u}}_i], [{\bm{v}}, -{\bm{u}}]\rangle^2}{4(z+\hat s_i)}+\frac{\|W^\top[{\bm{v}}, -{\bm{u}}]\|_2^2}{2z}.
\end{split}\end{align}
We notice that since $z\in {\mathbb C}^+$, the imaginary part of each term on the right-hand side of \eqref{e:decomp} is negative. By summing the two identities in \eqref{e:decomp} and taking the imaginary part on both sides, we get
\begin{align}\label{e:evbb}
|\Im[\Tr ((I-U^\top A(z))^{-1}(U^\top G(z)U))]|
\geq \frac{\Im[z] (\langle \hat {\bm{v}}_1, {\bm{v}}\rangle^2+\langle \hat {\bm{u}}_1, {\bm{u}}\rangle^2)}{|z-\hat s_1|^2}.
\end{align}
By taking $z=\hat s_1+\mathrm{i} \eta$ in \eqref{e:evbb}, we get the upper bound
\begin{align}\label{e:evub}
\langle \hat {\bm{v}}_1, {\bm{v}}\rangle^2+\langle \hat {\bm{u}}_1, {\bm{u}}\rangle^2\leq \eta |\Im[\Tr ((I-U^\top A(z))^{-1}(U^\top G(z)U))]|.
\end{align}
In the following, we estimate the left-hand side of \eqref{e:evbb}. We recall $\cA(z), \cB(z), {\cal C}(z)$ from \eqref{e:defABC}; they are well defined for $z\in {\mathbb C}^+$, and Theorem \ref{t:isotropiclaw} implies
\begin{align}\begin{split}\label{e:ABCest}
&\cA(z)=m_\phi(z)+\OO_\prec\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}\right),\\
&\cB(z)=\OO_\prec\left(\frac{1}{\phi}\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}\right)\right),\\
&{\cal C}(z)=\frac{1}{z-m_\phi(z)}+\OO_\prec\left(\frac{1}{\phi^2}\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}\right)\right).
\end{split}\end{align}
The matrix $U^\top G(z) U$ can be expressed in terms of $\cA(z), \cB(z), {\cal C}(z)$ as
\begin{align}\label{e:UGU}
U^\top G(z)U=\frac{1}{2}\left[
\begin{array}{cc}
\cA(z)+2\cB(z)+{\cal C}(z) & \cA(z)-{\cal C}(z)\\
\cA(z)-{\cal C}(z) & \cA(z)-2\cB(z)+{\cal C}(z)
\end{array}
\right].
\end{align}
Since $(I-U^\top A(z))$ is a $2\times 2$ matrix, we can invert it by Cramer's rule,
\begin{align}\label{e:Ainv}
(I-U^\top A(z))^{-1}=\frac{1}{\det(I-U^\top A(z))}
\left[
\begin{array}{cc}
1+\beta(\cA(z)-2\cB(z)+{\cal C}(z))/2 & -\beta(\cA(z)-{\cal C}(z))/2\\
\beta(\cA(z)-{\cal C}(z))/2 & 1-\beta(\cA(z)+2\cB(z)+{\cal C}(z))/2
\end{array}
\right],
\end{align}
and the determinant is given by
\begin{align}\label{e:deta}
\det(I-U^\top A(z))=\beta^2\left(\left(\frac{1}{\beta}-\cB(z)\right)^2-\cA(z){\cal C}(z)\right).
\end{align}
By plugging \eqref{e:UGU} and \eqref{e:Ainv} into \eqref{e:evbb}, we get
\begin{align}\label{e:tra}
\Tr ((I-U^\top A(z))^{-1}(U^\top G(z)U))
=\frac{\cA(z)+{\cal C}(z)}{\det(I-U^\top A(z))}.
\end{align}
To use \eqref{e:evub}, we will take $z\in {\mathbb C}^+$ in a small neighborhood of the spectral edge $\phi+1$. Let $z=\phi+1+\kappa+\mathrm{i}\eta$, where $|\kappa|, \eta\ll1$. Then thanks to the explicit formula for $m_\phi(z)$ from \eqref{e:mpsi}, in this region we have
\begin{align*}
m_\phi(z)=1-C\sqrt{\kappa+\mathrm{i}\eta}+\OO(|\kappa+\mathrm{i}\eta|),
\end{align*}
for some constant $C>0$, and
\begin{align}\label{e:AC1bb}
\cA(z)+{\cal C}(z)=1-(C+C/\phi)\sqrt{\kappa+\mathrm{i}\eta}+\OO_\prec\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}+|\kappa+\mathrm{i}\eta|\right)\asymp 1.
\end{align}
Recall that we have $\beta=\lambda\sqrt \phi$.
For the denominator in \eqref{e:tra}, using \eqref{e:deta} and the estimates \eqref{e:ABCest}, we get
\begin{align}\begin{split}\label{e:dttIUA}
&\phantom{{}={}}\det(I-U^\top A(z))
=1-\frac{\beta^2 m_\phi(z)}{z-m_\phi(z)}+\OO_\prec\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}\right)\\
&=(1-\lambda^2)+\lambda^2(C+C/\phi)\sqrt{\kappa+\mathrm{i}\eta}+\OO_\prec\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}+|\kappa+\mathrm{i}\eta|\right).
\end{split}\end{align}
There are two cases: either $\lambda\in[1-n^{-1/3+2{\mathfrak c}}, 1+n^{-1/3+{\mathfrak c}}]$, or $\lambda\leq 1-n^{-1/3+2{\mathfrak c}}$. If $\lambda\in[1-n^{-1/3+2{\mathfrak c}}, 1+n^{-1/3+{\mathfrak c}}]$, we can take $\kappa\in [-n^{-2/3+3{\mathfrak c}}, n^{-2/3+3{\mathfrak c}}]$ and $\eta=n^{-2/3+6{\mathfrak c}}$; then
\begin{align}\begin{split}\label{e:ddertt}
&\phantom{{}={}}|\det(I-U^\top A(z))|\geq |\Im[\det(I-U^\top A(z))]|\\
&=\lambda^2(C+C/\phi)\Im[\sqrt{\kappa+\mathrm{i}\eta}]+\OO_\prec\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}+|\kappa+\mathrm{i}\eta|+|1-\lambda|\right)\gtrsim \sqrt\eta .
\end{split}\end{align}
Then \eqref{e:evub},\eqref{e:tra}, \eqref{e:AC1bb} and \eqref{e:ddertt} imply that
\begin{align}\label{e:v1v}
\langle \hat {\bm{v}}_1, {\bm{v}}\rangle^2+\langle \hat {\bm{u}}_1, {\bm{u}}\rangle^2\lesssim \sqrt\eta=n^{-1/3+3{\mathfrak c}}.
\end{align}
If $\lambda\leq 1-n^{-1/3+2{\mathfrak c}}$, we take $\kappa\in [-n^{-2/3+3{\mathfrak c}}, n^{-2/3+3{\mathfrak c}}]$ and $\eta=n^{{\mathfrak c}-1}/(1-\lambda)$ in \eqref{e:dttIUA}, which gives
\begin{align}\begin{split}\label{e:dtIIH}
|\det(I-U^\top A(z))|
&\geq 1-\lambda^2+\OO_\prec\left(\sqrt{\frac{|\Im[m_\phi(z)]|}{n\eta}}+\frac{1}{n\eta}+\sqrt{|\kappa+\mathrm{i}\eta|}\right)\\
&\geq 1-\lambda^2+\OO_\prec\left(\sqrt{\frac{\sqrt{|\kappa|+\eta}}{n\eta}}+\frac{1}{n\eta}+\sqrt{|\kappa|+\eta}\right)\\
&\gtrsim 1-\lambda^2+\OO_\prec\left(\frac{1}{n\eta}+\sqrt{|\kappa|+\eta}\right)
\gtrsim 1-\lambda^2.
\end{split}\end{align}
Then \eqref{e:evub},\eqref{e:tra}, \eqref{e:AC1bb} and \eqref{e:dtIIH} imply that
\begin{align}\label{e:v2v}
\langle \hat {\bm{v}}_1, {\bm{v}}\rangle^2+\langle \hat {\bm{u}}_1, {\bm{u}}\rangle^2\lesssim \eta|\Tr ((I-U^\top A(z))^{-1}(U^\top G(z)U))|\lesssim \frac{n^{{\mathfrak c}}}{n(1-\lambda)^2}.
\end{align}
Then \eqref{e:v1v} and \eqref{e:v2v} together imply that
\begin{align*}
\langle \hat {\bm{v}}_1, {\bm{v}}\rangle^2+\langle \hat {\bm{u}}_1, {\bm{u}}\rangle^2\lesssim n^{8{\mathfrak c}} \min\left\{n^{-1/3}, \frac{1}{n(1-\lambda)^2}\right\}.
\end{align*}
This finishes the proof of \eqref{e:weak1}.
\end{proof}
Theorem \ref{t:ev} gives the behavior of the projection of the singular vector $\hat{\bm{v}}_1$ associated with the largest singular value of $\beta {\bm{v}} {\bm{u}}^\top + Z$ on the signal direction. At the critical value $\lambda=1$, it states
\begin{align*}
|\langle \hat{\bm{v}}_1, {\bm{v}}\rangle|^2\lesssim n^{-1/3+\oo(1)}.
\end{align*}
We believe that it is optimal up to the $\oo(1)$ error in the exponent.
More precisely, we conjecture that
exactly at the critical signal strength, $\beta=\sqrt{\phi}$, the projection of the singular vector $\hat{\bm{v}}_1$ associated with the largest singular value of $\beta {\bm{v}} {\bm{u}}^\top + Z$ on the signal direction ${\bm{v}}$ satisfies
\begin{align}\label{e:converge}
n^{1/3}|\langle \hat{\bm{v}}_1, {\bm{v}}\rangle|^2\rightarrow \Theta,
\end{align}
where $\Theta$ is a random variable of size $\OO(1)$.
The analogue of the statement \eqref{e:converge} for low-rank perturbations of Gaussian unitary matrices has been proven in \cite{bao2020eigenvector}, where an explicit characterization of the limiting object $\Theta$ is given.
\small
\bibliographystyle{abbrv}
\section{Introduction}
Distributional Semantic Models (DSM) are consolidating themselves as fundamental components for supporting automatic
semantic interpretation in different application scenarios in natural language processing. From \textit{question
answering systems} to \textit{semantic search} and \textit{text entailment}, distributional semantic models support a
scalable approach for representing the meaning of words, which can automatically capture comprehensive
associative commonsense information by analysing word-context patterns in large-scale corpora in an unsupervised or
semi-supervised fashion\cite{thesisAndre,turney,linse}.
However, distributional semantic models are strongly dependent on the size and the quality of the reference corpora,
which embed the commonsense knowledge necessary to build comprehensive models. While high-quality texts containing large-scale commonsense information, such as Wikipedia, are available for English, other languages may lack
sufficient textual support to build distributional models.
To address this problem, this paper investigates how different distributional semantic models built from
corpora in different languages and with different sizes perform in semantic similarity and relatedness tasks. Additionally, we analyse the role of machine translation approaches in supporting the construction of better distributional vectors and in computing semantic similarity and relatedness measures for other languages.
In other words, in the case that there is not enough information to create a DSM for a particular language, this work aims at evaluating whether the benefit of the larger English corpus volume outweighs the error introduced by machine translation.
Given a pair of words and a human judgement score that represents the semantic relatedness of these two words, the
evaluation method aims at indicating how closely the distributional model scores match the human judgements. Three widely used word-pairs
datasets are employed in this work: Miller \& Charles (MC)\cite{miller1991contextual}, Rubenstein \& Goodenough
(RG)\cite{rubenstein1965contextual} and WordSimilarity 353 (WS-353)\cite{finkelstein2001placing}.
In the proposed model, the word-pairs datasets are translated into English as a reference language, and the distributional vectors are taken from the target English model (Figure \ref{fig:experimental_setup}). Despite its simplicity, the proposed machine-translation-based method is highly relevant for the distributional semantics user/practitioner, due to its ease of use and the significant improvement in the results.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.8\textwidth]{ml-dsm-fig.pdf}
\caption{Depiction of the experimental setup of the experiment.}
\label{fig:experimental_setup}
\end{figure}
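As an illustration of the pipeline in Figure \ref{fig:experimental_setup}, the following minimal Python sketch computes MT-mediated relatedness scores and compares them against human judgements. The \texttt{translate} function and the \texttt{vectors} dictionary are placeholders for, respectively, the machine translation service and the English distributional model described in Section \ref{setup}; they are not part of the released resources.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    # Cosine similarity between two distributional vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mt_relatedness(pairs, translate, vectors):
    # pairs: word pairs in the source language; translate: word -> English
    # word; vectors: English word -> vector from the English DSM.
    return [cosine(vectors[translate(w1)], vectors[translate(w2)])
            for w1, w2 in pairs]

def evaluate(model_scores, human_scores):
    # Spearman correlation against the human gold standard.
    return spearmanr(model_scores, human_scores).correlation
\end{verbatim}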
This work presents a systematic study involving 11 languages and four distributional semantic models (DSMs), providing a
comparative quantitative analysis of the performance of the distributional models and the impact of machine
translation approaches for different models.
In summary, this paper answers the following research questions:
\begin{enumerate}
\item Does machine translation to English perform better than the word vectors in the original language (for which
languages and for which distributional semantic models)?
\item Which DSMs and languages benefit more and less from the translation?
\item What is the quality of state-of-the-art machine translation approaches for word pairs (for each language)?
\end{enumerate}
Moreover, this paper contributes with two resources which can be used by the community to evaluate multi-lingual
semantic similarity and relatedness models: \emph{(i)} a high quality manual translation of the three
word-pairs datasets - Miller \& Charles (MC)\cite{miller1991contextual}, Rubenstein \& Goodenough
(RG)\cite{rubenstein1965contextual} and WordSimilarity 353 (WS-353)\cite{finkelstein2001placing} - for 10 languages and
\emph{(ii)} the 44 pre-computed distributional models (four distributional models for each one of the 11 languages)
which can be accessed as a service\footnote{The service is available at \url{http://rebrand.ly/dinfra}.}, together with
the multi-lingual approaches mediated by machine translation.
This paper is organised as follows: Section \ref{related} describes the related work, Section \ref{setup} describes the
experimental setting; Section \ref{results} analyses the results and provides a comparative analysis across different models and languages. Finally, Section \ref{conclision} provides the conclusion.
\section{Related Work} \label{related}
Most related work has concentrated on leveraging joint multilingual information to improve the performance of the models.
Faruqui \& Dyer\cite{faruqui-dyer:2014:EACL} use the distributional invariance across languages and propose a technique based on
canonical correlation analysis (CCA) for merging multilingual evidence into vectors generated monolingually. They
evaluate the resulting word representations on semantic similarity/relatedness evaluation tasks, showing the improvement
of multi-lingual over the monolingual scenario.
Utt \& Pado\cite{utt-pado:2014:tacl}, develop methods that take advantage of the availability of annotated corpora in English using a translation-based approach to transport the word-link-word co-occurrences to support the creation of syntax-based DSMs.
Navigli \& Ponzetto\cite{navigli2012babelrelate} propose an approach to compute semantic relatedness exploiting the
joint contribution of different languages mediated by lexical and semantic knowledge bases. The proposed model uses a
graph-based approach of joint multi-lingual disambiguated senses which outperforms the monolingual scenario and
achieves competitive results for both resource-rich and resource-poor languages.
Zou et al.\cite{zou2013bilingual} describe an unsupervised semantic embedding (bilingual embedding) for words across two
languages that represent semantic information of monolingual words, but also semantic relationships across different
languages. The motivation of their works was based on the fact that it is hard to identify semantic similarities
across languages, specially when co-occurrences words are rare in the training parallel text. Al-Rfou et
al.\cite{al2013polyglot} produced multilingual word embeddings for about 100 languages using Wikipedia as the reference corpora.
Comparatively, this work aims at providing a comparative analysis of existing state-of-the-art distributional semantic models for different languages, as well as analysing the impact of machine translation over an English DSM.
\section{Experimental Setup} \label{setup}
The experimental setup consists of the instantiation of four distributional semantic models (Explicit Semantic Analysis
(ESA)\cite{gabrilovich2007computing}, Latent Semantic Analysis (LSA)\cite{landauer1998introduction}, Word2Vec
(W2V)\cite{mikolov2013efficient} and Global Vectors (GloVe)\cite{pennington2014Glove}) in 11 different languages -
English, German, French, Italian, Spanish, Portuguese, Dutch, Russian, Swedish, Arabic and Farsi.
The DSMs were generated from Wikipedia dumps (January 2015), which were preprocessed by lowercasing, stemming and
removing stopwords. For LSA and ESA, the models were generated using the SSpace Package\cite{sspace}, while W2V and GloVe were
generated using the code shared by the respective authors. For the experiment the vector dimensions for LSA, W2V and
GloVe were set to 300, while ESA was defined with 1500 dimensions. The difference in size occurs because ESA is composed of sparse vectors. All models were generated using the default parameters defined in each implementation.
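For illustration, the preprocessing described above can be sketched as follows. The exact tokeniser and stemmer used in the experiments are not specified here, so NLTK's stop-word lists and Snowball stemmers are used as stand-ins (they cover most, though not all, of the 11 languages).
\begin{verbatim}
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer

def preprocess(text, language="english"):
    # Lowercasing, stopword removal and stemming, as described above.
    stemmer = SnowballStemmer(language)
    stops = set(stopwords.words(language))
    return [stemmer.stem(tok) for tok in text.lower().split()
            if tok not in stops]
\end{verbatim}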
Each distributional model was evaluated for the task of computing semantic similarity and relatedness measures using
three human-annotated gold standard datasets: Miller \& Charles (MC)\cite{miller1991contextual}, Rubenstein \&
Goodenough (RG)\cite{rubenstein1965contextual} and WordSimilarity 353 (WS-353)\cite{finkelstein2001placing}. As these
word-pairs datasets were originally in English, except for those languages available from previous works (\cite{faruqui2014community,camacho2015framework}), the word pairs were translated and reviewed with the help of
professional translators, skilled in data localisation tasks. The datasets are available at
\url{http://rebrand.ly/multilingual-pairs}.
Two automatic machine translation approaches were evaluated: the Google Translate Service and the Microsoft Bing
Translation Service. As the Google Translate Service performed 16\% better overall on word-pair translations, it was set
as the main machine translation model.
The DInfra platform \cite{barzegar2015dinfra} provided the DSMs used in the work. To support experimental
reproducibility, both experimental data and software are available at \url{http://rebrand.ly/dinfra}.
\section{Evaluation \& Results} \label{results}
\subsection{Spearman Correlation and Corpus Size}
Table \ref{tbl:correlation} shows the correlation between the average Spearman correlation values for each DSM and two
indicators of corpus size: \# of tokens and \# of unique tokens.
ESA is consistently more robust (on average) than the other models in relation to the corpus size, due to the fact that ESA has a larger context window than the other distributional models. While ESA considers the whole
document as its context window, the other models are restricted to five (LSA) and ten (Word2Vec and GloVe) words.
Another observation is that the evaluation of the WS-353 dataset is more dependent on the corpus size, which can be
explained by the larger number of semantic relations expressed under the semantic relatedness umbrella.
Table \ref{corpus_data} shows the size of each corpus in different languages regarding the number of unique tokens and
the number of tokens.
\begin{table*}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\bf Gold standard & \multicolumn {2} {|c|}{ MC} &\multicolumn {2} {|c|}{ RG} & \multicolumn {2} {|c|}{ WS353} \\ \hline
& unique tokens & tokens & unique tokens & tokens & unique tokens & tokens \\ \hline
ESA & 0.39 & \textit{0.48} & 0.67 & \bf 0.73 & \textit{0.33} & \textit{ 0.39} \\
LSA & \bf 0.74 & \bf 0.75 & \bf 0.82 & 0.68 & \bf 0.64 & 0.66 \\
W2V & 0.43 & 0.58 & 0.71 & 0.72 & 0.57 & \bf 0.79 \\
Glove & \textit{0.34} & 0.51 & \textit{0.51} & \textit{0.61} & 0.59 & 0.63 \\
\hline
\end{tabular}
\caption{\label{Table1} Correlation between corpus size and different models.}
\label{tbl:correlation}
\end{table*}
\subsection{Word-pair Machine Translation Quality}
The second step evaluates the accuracy of state-of-the-art machine translation approaches for word pairs (Table \ref{tbl:trans_acc}). The accuracy of the translation for the WS-353 word pairs significantly outperforms the other
datasets. This shows that the higher semantic distance between word pairs (semantic relatedness) has the benefit of
increasing the contextual information during the machine translation process, subsequently improving the mutual
disambiguation process.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|}
\hline
\bf \hspace{0.5cm}lang\hspace{0.5cm} & \bf \hspace{0.5cm}unique tokens\hspace{0.5cm} & \bf \hspace{0.5cm}tokens\hspace{0.5cm} \\ \hline
\bf en & 4.238 & 902.044 \\ \hline
\bf de & 4.233 & 312.380 \\ \hline
\bf fr & 1.749 & 247.492 \\ \hline
\bf ru & 1.766 & 202.163 \\ \hline
\bf it & 1.411 & 178.378 \\ \hline
\bf nl & 2.021 & 105.224 \\ \hline
\bf pt & 0.873 & 96.712 \\ \hline
\bf sv & 1.730 & 82.376 \\ \hline
\bf es & 0.829 & 76.587 \\ \hline
\bf ar & 1.653 & 46.481 \\ \hline
\bf fa & 0.925 & 32.557 \\ \hline
\end{tabular}
\caption{The sizes of the corpora in terms of the number of unique tokens and tokens (scale of $10^6$).}
\label{corpus_data}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline \bf dataset/lang & \bf de & \bf fr & \bf ru & \bf it & \bf nl & \bf pt & \bf sv & \bf es & \bf ar & \bf fa \\ \hline
MC & 0.48 & 0.47 & 0.58 & 0.42 & 0.57 & 0.60 & 0.55 & 0.60 & 0.53 & 0.38 \\
RG & 0.45 & 0.65 & 0.53 & 0.41 & 0.59 & 0.51 & 0.58 & 0.59 & 0.43 & 0.36 \\
WS353 & \textbf{0.78} & \textbf{0.85} & \textbf{0.76} & \textbf{0.76} & \textbf{0.85} & \textbf{0.81} & \textbf{0.78} & \textbf{0.79} & \textbf{0.57} & \textbf{0.43} \\
\hline
\end{tabular}
\caption{Translation accuracy.}
\label{tbl:trans_acc}
\end{table}
\begin{table}[ht]
\small
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\bf DS & \bf Models & \bf en & \bf de & \bf fr & \bf ru & \bf it & \bf nl & \bf pt & \bf sv & \bf es & \bf ar & \bf fa & \bf Model AVG. & \bf DS AVG. \\
\hline
\multirow{5}{*}{MC} & ESA & 0.69 & 0.67 & 0.54 & 0.66 & 0.37 & 0.54 & \textbf{0.67} & 0.37 & 0.58 & 0.37 & 0.56 & 0.53 & \textbf{0.56} \\
& LSA & 0.79 & \textbf{0.70} & 0.55 & 0.63 & 0.58 & 0.55 & 0.41 & \textbf{0.58} & 0.66 & \textbf{0.46} & 0.45 & 0.56 & \\
& W2V & \textbf{0.84} & \textbf{ 0.70} & 0.55 & 0.64 & \textbf{0.74} & \textbf{ 0.57} & 0.37 & 0.40 & \textbf{0.74} & 0.38 & \textbf{0.68} & \textbf{ 0.58} & \\
& Glove & 0.69 & 0.64 & \textbf{0.64} & \textbf{0.76} & 0.51 & 0.55 & 0.62 & 0.40 & 0.65 & 0.38 & 0.45 & 0.56 & \\
\hline
\multirow{5}{*}{RG} & ESA & 0.80 & 0.68 & 0.45 & 0.63 & 0.50 & 0.58 & 0.51 & 0.50 & 0.59 & 0.36 & 0.57 & 0.54 & 0.53 \\
& LSA & 0.72 & 0.65 & 0.30 & 0.51 & 0.48 & 0.52 & 0.30 & 0.53 & 0.35 & 0.35 & 0.46 & 0.45 & \\
& W2V & \textbf{0.85} & \textbf{0.78} & \textbf{0.57} & 0.64 & \textbf{0.69} & \textbf{0.63} & 0.42 & \textbf{0.57} & \textbf{0.64} & \textbf{0.36} & 0.55 & \textbf{0.58} & \\
& Glove & 0.74 & 0.69 & 0.50 & \textbf{ 0.70} & 0.59 & 0.54 & \textbf{0.52} & 0.49 & 0.61 & 0.32 & \textbf{0.59} & 0.56 & \\
\hline
\multirow{5}{*}{WS353} & ESA & 0.50 & 0.39 & 0.32 & 0.44 & 0.34 & 0.53 & 0.44 & 0.43 & 0.37 & 0.26 & 0.37 & 0.39 & 0.41 \\
& LSA & 0.54 & 0.45 & 0.35 & 0.40 & 0.33 & 0.47 & 0.39 & 0.40 & 0.36 & 0.28 & 0.43 & 0.39 & \\
& W2V & \textbf{0.69} & \textbf{0.54} & \textbf{0.50} & \textbf{0.53} & \textbf{0.50} & \textbf{0.58} & \textbf{0.53} & \textbf{0.45} & \textbf{0.53} & \textbf{0.44} & \textbf{0.53} & \textbf{0.51} & \\
& Glove & 0.49 & 0.41 & 0.34 & 0.42 & 0.30 & 0.46 & 0.38 & 0.33 & 0.32 & 0.26 & 0.36 & 0.36 & \\
\hline
& Lang AVG. & 0.70 & 0.61 & 0.47 & 0.58 & 0.49 & 0.54 & 0.46 & 0.45 & 0.53 & 0.35 & 0.50 & 0.50 & \\
\hline
\end{tabular}}
\caption{Spearman correlation for the language-specific models.}
\label{tbl:language_specific}
\end{table}
\begin{table}[ht]
\small
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\bf DS & \bf Models & \bf de & \bf fr & \bf ru & \bf it & \bf nl & \bf pt & \bf sv & \bf es & \bf ar & \bf fa & \bf Model AVG. & \bf Diff. \\
\hline
\multirow{5}{*}{MC} & ESA-MT & 0.55 & 0.53 & 0.42 & 0.38 & 0.45 & 0.38 & 0.48 & 0.39 & 0.31 & 0.58 & 0.45 & -0.08 (-15.1\%) \\
& LSA-MT & 0.61 & 0.72 & 0.65 & 0.67 & 0.66 & 0.70 & 0.74 & 0.78 & 0.69 & 0.75 & 0.70 & 0.14 (25.0\%)\\
& W2V-MT & \textbf{0.68} & \textbf{0.79} & \textbf{0.68} & \textbf{0.77} & \textbf{0.69} & \textbf{0.76} & \textbf{0.81} & \textbf{0.83} & \textbf{0.71} & 0.74 & \textbf{0.75} & \textbf{0.17 (29.3\%)} \\
& GloVe-MT & 0.45 & 0.78 & 0.67 & 0.64 & 0.63 & 0.56 & 0.61 & 0.82 & 0.69 & \textbf{0.79} & 0.66 & 0.10 (17.9\%) \\
\hline
\multirow{5}{*}{RG} & ESA-MT & 0.62 & 0.53 & 0.52 & 0.61 & 0.63 & 0.57 & 0.56 & 0.47 & 0.38 & 0.71 & 0.56 & 0.02 (3.7\%) \\
& LSA-MT & 0.63 & 0.62 & 0.59 & 0.74 & 0.67 & 0.64 & 0.67 & 0.62 & 0.55 & 0.70 & 0.64 & \textbf{0.19 (42.2\%)} \\
& W2V-MT & \textbf{0.69} & \textbf{0.79} & 0.69 & \textbf{0.78} & 0.74 & \textbf{0.75} & \textbf{0.71} & \textbf{0.73} & 0.57 & 0.79 & \textbf{0.72} & 0.14 (24.1\%) \\
& GloVe-MT & 0.62 & 0.77 & \textbf{0.71} & 0.77 & \textbf{0.78} & 0.66 & 0.66 & 0.72 & \textbf{0.65} & \textbf{0.80} & 0.71 & 0.15 (26.8\%) \\
\hline
\multirow{5}{*}{WS353} & ESA-MT & 0.42 & 0.45 & 0.41 & 0.41 & 0.44 & 0.43 & 0.40 & 0.35 & 0.42 & 0.32 & 0.40 & 0.01 (2.6\%) \\
& LSA-MT & 0.51 & 0.51 & 0.47 & 0.48 & 0.51 & 0.39 & 0.51 & 0.44 & 0.37 & 0.43 & 0.46 & \textbf{0.07 (17.9\%)} \\
& W2V-MT & \textbf{0.62} & \textbf{0.59} & \textbf{0.57} & \textbf{0.57} & \textbf{0.63} & \textbf{0.51} & \textbf{0.59} & \textbf{0.55} & \textbf{0.50} & \textbf{0.52} & \textbf{0.57} & 0.06 (11.8\%) \\
& GloVe-MT & 0.45 & 0.48 & 0.42 & 0.43 & 0.46 & 0.33 & 0.42 & 0.41 & 0.33 & 0.37 & 0.41 & 0.05 (13.9\%) \\
\hline
& Lang AVG. & 0.57 & 0.63 & 0.57 & 0.60 & 0.61 & 0.56 & 0.60 & 0.59 & 0.52 & 0.63 & 0.56 & \\
\hline
\end{tabular}}
\caption{Spearman correlation for the machine translation models over the English corpora. \emph{Diff.} represents the difference of the machine translation score minus the language-specific score.}
\label{tbl:machine_translation}
\end{table}
\begin{table}[ht]
\small
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\bf DS & \bf M & \bf de & \bf fr & \bf ru & \bf it & \bf nl & \bf pt & \bf sv & \bf es & \bf ar & \bf fa & \bf M. AVG & \bf DS. AVG \\
\hline
\multirow{5}{*}{MC} & ESA & -0.18 & -0.03 & -0.36 & 0.03 & -0.16 & -0.44 & 0.31 & -0.32 & -0.16 & 0.03 & -0.13 & \multirow{5}{*}{\textbf{0.41}} \\
& LSA & -0.13 & 0.31 & 0.04 & 0.16 & 0.20 & 0.70 & 0.27 & 0.17 & 0.50 & 0.68 & 0.29 & \\
& W2V & -0.02 & 0.43 & 0.07 & 0.05 & 0.21 & \textbf{1.04} & 1.00 & 0.13 & \textbf{0.88} & 0.09 & 0.39 & \\
& GloVe & -0.31 & 0.22 & -0.11 & 0.25 & 0.14 & -0.10 & 0.51 & \textbf{0.26} & 0.85 & 0.75 & 0.25 & \\
\hline
\multirow{5}{*}{RG} & ESA & -0.09 & 0.19 & -0.18 & 0.21 & 0.08 & 0.11 & 0.12 & -0.19 & 0.06 & 0.25 & 0.06 & \multirow{5}{*}{\textbf{0.41}} \\
& LSA & -0.03 & 1.04 & 0.14 & 0.52 & 0.30 & \textbf{1.15} & 0.26 & \textbf{0.77} & 0.57 & 0.52 & 0.52 & \\
& W2V & -0.11 & 0.39 & 0.08 & 0.14 & 0.18 & 0.76 & 0.23 & 0.14 & 0.59 & 0.44 & 0.28 & \\
& GloVe & -0.11 & 0.55 & 0.01 & 0.31 & \textbf{0.43} & 0.28 & 0.35 & 0.17 & \textbf{1.04} & 0.36 & 0.34 & \\
\hline
\multirow{5}{*}{WS353} & ESA & 0.08 & 0.40 & -0.07 & 0.18 & -0.18 & -0.02 & -0.07 & -0.07 & 0.60 & -0.13 & 0.07 & \multirow{5}{*}{0.36} \\
& LSA & 0.12 & 0.43 & 0.19 & 0.45 & 0.09 & -0.01 & 0.27 & 0.21 & 0.34 & 0.01 & 0.21 & \\
& W2V & 0.14 & 0.19 & 0.09 & 0.14 & 0.08 & -0.04 & 0.33 & 0.04 & 0.12 & 0.00 & 0.11 & \\
& GloVe & 0.10 & 0.41 & 0.00 & 0.41 & 0.00 & -0.14 & 0.28 & 0.30 & 0.28 & 0.04 & 0.17 & \\
\hline
& AVG & 0.06 & 0.52 & 0.13 & 0.36 & 0.23 & 0.29 & 0.70 & 0.22 & 0.59 & 0.82 & & \\
\hline
\end{tabular}
\caption{Difference between the language-specific and the machine translation approach. \textbf{M. AVG} represents the average of the models and \textbf{DS. AVG} represents the average of the datasets.}
\label{tbl:difference}
\end{table}
For WS-353 the set of best-performing translations has an average accuracy of 80\% (with maximum 85\% and minimum
76\%). This value dropped significantly for Arabic and Farsi (average 50\%).
For MC and RG, the average translation accuracy for the semantic similarity pairs is 51.5\%. This difference may be a
result of a deficit of contextual information during the machine translation process. For these word-pairs datasets, the
difference between best translation performers and lower performers (across languages) is smaller. Additionally, the final translation accuracy for all languages and all word-pairs datasets is 59\%. French, Dutch and Spanish are the languages with best automatic translations.
\subsection{Language-Specific DSMs}
In the first part of the experiment, the Spearman correlations ($\rho$) between the human assessments and the computation
of the semantic similarity and relatedness for all DSMs instantiated for all languages were evaluated (Figure
\ref{fig:experimental_setup} \emph{(ii)}). Table \ref{tbl:language_specific} shows the Spearman correlation for each DSM
using language-specific corpora (without machine translation), for the three word-pairs datasets.
The comparative language-specific analysis indicates that English is the best-performing language (0.70), followed by
German (0.61). The lowest Spearman correlation was observed in Arabic (0.35). From the tested DSMs, W2V is consistently the
best-performing DSM (0.56). The language-specific DSMs achieved higher correlations for MC and RG (0.56 and 0.53,
respectively), in comparison to 0.41 for WS-353.
The results for the language-specific DSMs were contrasted to the machine translation (MT) approach, according to the
diagram depicted in Figure \ref{fig:experimental_setup} \emph{(i)}. The Spearman correlations for the MT-mediated approach are shown in
Table \ref{tbl:machine_translation}.
\subsection{Machine Translation based Semantic Relatedness}
Using the MT models, W2V is consistently the best performing DSM (average 0.68), while ESA is consistently the worst
performing model (0.47). We can interpret this result by stating that using machine translation for ESA does not introduce significant performance improvements in comparison to the language-specific baselines.
The best performing languages are French and Farsi ($\rho$ = 0.63). The Spearman correlation variance across languages
in the MT models is low, as the use of the English corpus for the DSM has a higher positive impact on the results than the variation in the quality of the machine translation. The results for all languages
achieve very similar correlation values.
The impact of the MT model can be better interpreted by examining the difference between the machine translation and
the language-specific models (depicted in Table \ref{tbl:difference}). LSA accounts for the largest average
percent improvement (28.4\%) using the MT model, while ESA accounts for the lowest value (-2.9\%). As previously
noticed, this can be explained by the sensitivity of these models to the corpus size due to the dimensional reduction
strategy (LSA) or the broader context window (ESA). The remaining models accounted for substantial improvements (W2V =
21.7\%, GloVe = 19.5\%).
Arabic and French achieved the highest percent gains (47\% and 38\%, respectively), while German accounts for the worst results (-4\%). These numbers are consistent with the corpus sizes. For German, the result shows that the corpus volume of
the German Wikipedia crossed a threshold size (34\% of the English corpus) above which improvements for computing
semantic similarity for the target word-pairs dataset might be only marginal, while the translation error counts negatively towards the final result.
The average improvement for the MT over the language specific model for each word-pairs dataset is consistently
significant: MC = 20\%, RG = 30\% and WS\-353 = 14\%.
\subsection{Summary}
Below, the interpretation of the results is summarised around the core research questions which we aim to answer in this paper:
\\\\
\noindent \textbf{Question 1:} Does machine translation to English perform better than the word vectors in the original
language (for which languages and for which distributional semantic models)?
\noindent Machine translation to English consistently performs better for all languages, with the exception of
German, for which the language-specific models present equivalent results. The MT approach provides an average
improvement of 16.7\% over language-specific distributional semantic models.
\\\\
\noindent \textbf{Question 2:} Which DSMs or MT-DSMs work best for the set of analysed languages?
\noindent W2V-MT consistently performs as the best model for all word-pairs datasets and languages, except for German, where the difference between W2V-MT and language-specific W2V is not significant.
\\\\
\noindent \textbf{Question 3:} What is the quality of state-of-the-art machine translation approaches for word-pairs?
\noindent The average translation accuracy for all languages and all word-pairs datasets is 59\%. Translation quality varies according to the nature of the word-pair (better translations are provided for word pairs which are semantically related compared to semantically similar word pairs), reaching a maximum of 85\% and a minimum of 36\% across different languages.
\\\\
For the distributional semantics user/practitioner, as a general practice, we recommend using W2V built over an English corpus, supported by machine translation. Additionally, the accuracy of state-of-the-art machine translation approaches work better for translating semantically related word pairs (in contrast to semantically similar word pairs).
\section{Conclusion} \label{conclision}
This work provides a comparative analysis of the performance of four state-of-the-art distributional semantic models
over 11 languages, contrasting the native language-specific models with the use of machine translation over English-based
DSMs. The experimental results show that there is a significant improvement (average of 16.7\% for the Spearman
correlation) by using off-the-shelf machine translation approaches and that the benefit of using a more
informative (English) corpus outweighs the possible errors introduced by the machine translation approach. The average
accuracy of the machine translation approach is 59\%. Moreover, for all languages, W2V showed consistently
better results, while ESA showed to be more robust concerning lower corpora sizes. For all languages, the
combination of machine translation over the W2V English distributional model provided the best results consistently
(average Spearman correlation of 0.68).
Future work will focus on the analysis and translation of two other word-pairs datasets:
SimLex-999\cite{hill2015simlex999} and MEN-3000\cite{bruni}.
\section*{Acknowledgments}
This publication has emanated from research supported by the National Council for Scientific and Technological
Development, Brazil (CNPq) and by a research grant from Science Foundation Ireland (SFI) under Grant Number
SFI/12/RC/2289.
\bibliographystyle{splncs03}
\section{Introduction\label{Introduction}}
Sequential Monte Carlo (SMC) has a history that traces from the 1950's to the present.
The first examples of SMC were simulations of chain polymers in the 1950's \cite{hammersley1954poor,rosenbluth1955monte}.
Starting in the 1960's, SMC was used in the quantum chemistry
community to calculate the ground state energy of the Schr\"{o}dinger equation \cite{kalos1962monte, grimm1971monte}.
SMC became a standard statistical tool in the 1990's, as the
algorithm was applied to problems in Bayesian inference and signal processing \cite{doucetsequential}.
In recent years, the algorithm continues to fascinate researchers
who are ever developing new variations of SMC algorithms
(e.g., \cite{whiteley2016role, gerber2017negative}).
SMC is a tool for evaluating expectations of the form
\begin{equation*}
\E\left[G_0\left(X_0\right) \prod_{t=1}^{T-1}G_t\left(X_{t-1}, X_t\right)f\left(X_{T-1}, X_T\right)\right]
\end{equation*}
where $\left(X_{t}\right)_{t\geq0}$ is a discrete-time Markov chain
on a sequence of state spaces $\left(E_t\right)_{t \geq 0}$,
functions $\left(G_{t}\right)_{t\geq0}$ are nonnegative, and $f$ is real-valued.
These expectations are called
Feynman-Kac integrals, and they are notoriously difficult to
evaluate when $T$ is large
\cite{liu2008monte}.
SMC is a sampling algorithm that simulates the dynamics of the Markov chain $\left(X_t\right)_{t \geq 0}$
and provides random
approximations for Feynman-Kac integrals that become increasingly accurate as computational
effort is increased.
SMC has a wide range of applications from Bayesian statistics to rare event sampling.
In Bayesian contexts, functions $\left(G_{t}\right)_{t\geq0}$ are typically
unnormalized likelihood ratios between prior and posterior distributions.
SMC is used to estimate statistics of the posterior distribution,
and the resulting algorithm is often called the particle filter \cite{doucetsequential}.
In rare event sampling, on the other hand,
SMC is used to provide estimates of rare event probabilities,
and
functions $\left(G_{t}\right)_{t\geq0}$ bias a process $\left(X_{t}\right)_{t\geq0}$
to explore regions of state space that would rarely be accessed
under typical conditions \cite{hairer2014improved}.
Despite the usefulness of SMC,
practitioners are burdened with the difficult
task of choosing a resampling scheme from the many options.
Past analyses have provided error formulas for a few particular resampling schemes
(e.g., \cite{del2004feynman, chopin2004central, douc2008limit}).
However, the number of resampling schemes has increased rapidly in recent years \cite{li2015resampling},
and more theoretical analysis is required to rigorously compare schemes.
Error formulas are not available for all common resampling schemes (e.g., stratified resampling),
and there remains no consensus among experts about how best to resample.
One goal of the current paper is to describe the resampling step in a unified way
in order to facilitate analysis.
Thus, Section \ref{introducing} introduces a matrix resampling framework,
inspired by work of \citet{hu2008basic} and \citet{whiteley2016role}.
Resampling matrices provide a simple description for a great variety of resampling schemes,
and any scheme in the matrix resampling framework is guaranteed to exhibit important convergence behavior.
In particular, Section \ref{introducing} proves unbiasedness, convergence,
and an upper bound on variance for SMC estimates
made using matrix resampling schemes.
Another goal of the current paper is to present a unified analysis of SMC error.
Section \ref{minimize} explains how error arises within the SMC algorithm
and how error can be reduced by selecting an appropriate resampling scheme.
The scheme that gives the lowest possible resampling error is identified.
To compare the performance of resampling schemes,
Section \ref{minimize} also provides new asymptotic error bounds,
including the first such bounds for stratified resampling and stratified residual resampling.
Technical proofs are presented in an appendix,
following Section \ref{minimize} and the conclusion.
\section{Matrix resampling framework}{\label{introducing}}
The goal of the current section is to provide a matrix resampling framework that ties together diverse SMC resampling schemes.
Section \ref{smcoverview} provides a short overview of SMC.
Section \ref{smcextension} describes the key features of the matrix resampling framework.
Section \ref{simpletheorems} presents convergence theorems
that ensure the validity of SMC estimates.
Section \ref{martingale} presents a martingale argument
to show why SMC estimates are unbiased.
\subsection{Overview of Sequential Monte Carlo}{\label{smcoverview}}
Sequential Monte Carlo begins by sampling initial ``particles'',
and then the algorithm proceeds iteratively through three main steps:
reweighting, resampling, and mutation.
Definition \ref{def:overview} gives an overview of these steps
and the quantities that can be estimated through SMC:
\begin{definition}{\label{def:overview}}
Overview of Sequential Monte Carlo
\begin{enumerate}
\item Initialization: Independently sample
$\xi_{0}^{\left(i\right)}\sim\Law\left(X_{0}\right)$ for $1 \leq i \leq N_0$.
\item
The algorithm proceeds iteratively for $t =0, 1, 2, \ldots$.
\begin{enumerate}
\item
Reweighting: Assign weights $w_t^{\left(i\right)}$ to each particle $\xi_t^{\left(i\right)}$ with
\begin{equation*}
\begin{cases}
w_t^{\left(i\right)} = G_t\left(\xi_t^{\left(i\right)}\right), & t = 0 \\
w_t^{\left(i\right)} = \hat{w}_{t-1}^{\left(i\right)}
G_t\left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_t^{\left(i\right)}\right), & t > 0
\end{cases}
\end{equation*}
\item
Resampling: Replace the ensemble $\left(w_t^{\left(i\right)}, \xi_t^{\left(i\right)}\right)_{1 \leq i \leq N_t}$
with a new ensemble $\left(\hat{w}_t^{\left(j\right)}, \hat{\xi}_t^{\left(j\right)}\right)_{1 \leq j \leq N_{t+1}}$,
where each particle $\hat{\xi}_t^{\left(j\right)}$ is a copy of some particle $\xi_t^{\left(i\right)}$
and weights $\hat{w}_t^{\left(j\right)}$ are defined so that
\begin{equation*}
\frac{1}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)} f\left(\xi_t^{\left(i\right)} \right)
\approx
\frac{1}{N_0} \sum_{j=1}^{N_{t+1}} \hat{w}_t^{\left(j\right)} f\left(\hat{\xi}_t^{\left(j\right)}\right)
\end{equation*}
for all functions $f\colon E_t \rightarrow \mathbb{R}$.
\item Mutation: sample
$\xi_{t+1}^{\left(i\right)}\sim \Law \left(X_{t+1}|X_t=\hat{\xi}_t^{\left(i\right)}\right)$ for $1\leq i \leq N_{t+1}$.
\end{enumerate}
\item
Estimation: To estimate quantities $\E\left[\prod_{s=0}^{t-1} G_s f\right]$, use
\begin{equation*}
\frac{1}{N_0}\sum_{i=1}^{N_t} \hat{w}_{t-1}^{\left(i\right)}
f\left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_{t}^{\left(i\right)}\right)
\approx \E\left[\prod_{s=0}^{t-1} G_s f\right]
\end{equation*}
\end{enumerate}
\end{definition}
For notational simplicity, in expectations involving the Markov chain $\left(X_t\right)_{t \geq 0}$,
the arguments of functions will often be omitted.
For example,
$\E\left[\prod_{s=0}^{t-1} G_s f\right]$
denotes $\E\left[G_0\left(X_0\right) \prod_{s=1}^{t-1} G_s\left(X_{s-1},X_s\right) f\left(X_{t-1},X_t\right)\right]$.
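To make the structure of Definition \ref{def:overview} concrete, the following minimal Python sketch runs one pass of the algorithm; the callables \texttt{sample\_initial}, \texttt{mutate}, \texttt{G}, \texttt{f} and \texttt{resample} are placeholders, since the resampling step in particular can be instantiated in many different ways, as discussed below.
\begin{verbatim}
import numpy as np

def smc_estimate(sample_initial, mutate, G, f, resample, N0, T):
    # Initialization: N0 i.i.d. draws from Law(X_0), then reweighting at t=0.
    xi = sample_initial(N0)
    w = G[0](xi)
    for t in range(T):
        # Resampling: replace (w, xi) by (w_hat, xi_hat).
        w_hat, xi_hat = resample(w, xi)
        # Mutation: xi_{t+1}^(i) ~ Law(X_{t+1} | X_t = xi_hat^(i)).
        xi = mutate(t + 1, xi_hat)
        # Reweighting for the next iteration.
        if t < T - 1:
            w = w_hat * G[t + 1](xi_hat, xi)
    # Estimator for E[G_0 * prod_{t=1}^{T-1} G_t * f]  (assumes T >= 1).
    return np.sum(w_hat * f(xi_hat, xi)) / N0
\end{verbatim}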
While the reweighting and mutation steps are straightforward,
there are many different ways to carry out the resampling step.
Outlined below are examples of resampling methods:
\begin{example}[Sequential importance sampling]
The simplest resampling scheme, sequential importance sampling \cite{hammersley1954poor, rosenbluth1955monte},
leaves the ensemble of particles and weights completely unchanged:
\begin{equation*}
\left(\hat{w}_t^{\left(j\right)}, \hat{\xi}_t^{\left(j\right)}\right)_{1 \leq j \leq N_{t+1}}
= \left(w_t^{\left(i\right)}, \xi_t^{\left(i\right)} \right)_{1 \leq i \leq N_t}
\end{equation*}
\end{example}
In sequential importance sampling, weights
$\hat{w}_t^{\left(j\right)}$
are products of many factors $\left(G_t\right)_{t \geq 0}$:
\begin{equation*}
\hat{w}_t^{\left(j\right)}
= \hat{w}_{t-1}^{\left(j\right)} G_t\left(\hat{\xi}_{t-1}^{\left(j\right)}, \xi_t^{\left(j\right)}\right)
= \hat{w}_{t-2}^{\left(j\right)} G_{t-1}\left(\hat{\xi}_{t-2}^{\left(j\right)}, \xi_{t-1}^{\left(j\right)}\right)
G_t\left(\hat{\xi}_{t-1}^{\left(j\right)}, \xi_t^{\left(j\right)}\right)
= \cdots
\end{equation*}
Consequently, some weights $\hat{w}_t^{\left(j\right)}$
can be very large, while other weights can be very small.
The imbalance in weights can potentially contribute variance to the estimates
$\frac{1}{N_0}\sum_{i=1}^{N_t} \hat{w}_{t-1}^{\left(i\right)}
f\left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_{t}^{\left(i\right)}\right)$,
because the single particle with the highest weight can dominate all the others.
Alternatives to sequential importance sampling,
which alleviate the imbalance in weights,
include multinomial resampling and Bernoulli resampling.
\begin{example}[Multinomial resampling]
In multinomial resampling \cite{holland1975adaptation},
updated particles $\left(\hat{\xi}_t^{\left(j\right)}\right)_{1 \leq j \leq N_0}$ are independently sampled with common distribution
\begin{equation*}
\hat{\xi}_t^{\left(j\right)} \sim
\frac{\sum_{i=1}^{N_t} w_t^{\left(i\right)} \delta\left(\xi_t^{\left(i\right)}\right)}
{\sum_{i=1}^{N_t} w_t^{\left(i\right)}}
\end{equation*}
and each updated particle is assigned an updated weight
$\hat{w}_t^{\left(j\right)} = \overline{w}_t = \frac{1}{N_0}\sum_{i=1}^{N_t} w_t^{\left(i\right)}$.
\end{example}
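As a minimal sketch (for the common fixed population case $N_{t+1} = N_t = N_0$, and assuming NumPy arrays), multinomial resampling can be implemented as follows.
\begin{verbatim}
import numpy as np

def multinomial_resample(w, xi, rng=None):
    # Fixed population case: N_{t+1} = N_t = N_0.
    rng = rng or np.random.default_rng()
    N0 = len(w)
    # Each updated particle is an i.i.d. draw from the weighted ensemble.
    idx = rng.choice(N0, size=N0, p=w / w.sum())
    # All updated weights equal the average weight w_bar.
    return np.full(N0, w.sum() / N0), xi[idx]
\end{verbatim}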
\begin{example}[Bernoulli resampling]
In Bernoulli resampling \cite{kalos1962monte},
each of the original particles $\xi_t^{\left(i\right)}$
is replicated $N_t^{\left(i\right)}$ times, where the numbers $N_t^{\left(i\right)}$ are independent random variables with
\begin{equation*}
\begin{cases}
N_t^{\left(i\right)} = \left\lfloor \frac{w_t^{\left(i\right)}}{\overline{w}_t} \right\rfloor + 1,
& \text{with probability} \left\{ \frac{w_t^{\left(i\right)}}{\overline{w}_t} \right\}
\vspace{.1cm} \\
N_t^{\left(i\right)} = \left\lfloor \frac{w_t^{\left(i\right)}}{\overline{w}_t} \right\rfloor,
& \text{otherwise}
\end{cases}
\end{equation*}
Here, the floor function $\left\lfloor \cdot \right\rfloor$ is defined by
$\left\lfloor x\right\rfloor =\max\left\{ z\in\mathbb{Z}:z\leq x\right\}$,
the remainder function $\left\{ \cdot \right\}$ is defined by
$\left\{ x\right\} =x-\left\lfloor x\right\rfloor$,
and $\overline{w}_t= \frac{1}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)}$ is the average of the weights.
After replication, each updated particle $\hat{\xi}_t^{\left(j\right)}$
is assigned an updated weight $\hat{w}_t^{\left(j\right)} =\overline{w}_t$.
\end{example}
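In the same spirit, a sketch of Bernoulli resampling is given below; note that the number of offspring, and hence the next population size, is random here.
\begin{verbatim}
import numpy as np

def bernoulli_resample(w, xi, N0, rng=None):
    # N0 is the initial population size; w may have a different length N_t.
    rng = rng or np.random.default_rng()
    w_bar = w.sum() / N0
    ratio = w / w_bar
    # floor(ratio) copies of each particle, plus one more with
    # probability equal to the fractional part of ratio.
    extra = rng.random(len(w)) < (ratio - np.floor(ratio))
    counts = np.floor(ratio).astype(int) + extra
    xi_hat = np.repeat(xi, counts)
    return np.full(len(xi_hat), w_bar), xi_hat
\end{verbatim}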
\subsection{Extending the matrix resampling framework}{\label{smcextension}}
Sequential importance sampling and multinomial resampling are both
\emph{matrix resampling schemes}.
First introduced by \citet{hu2008basic} and \citet{whiteley2016role},
matrix resampling schemes involve a resampling step described by a nonnegative matrix $W_t$
with dimensions $N_t \times N_{t+1}$.
The properties of this matrix are:
\begin{itemize}
\item
The $i$th row sum equals the weight $w_t^{\left(i\right)}$ for $1 \leq i \leq N_t$.
\item
The $j$th column sum equals the updated weight $\hat{w}_t^{\left(j\right)}$ for $1 \leq j \leq N_{t+1}$.
\item
Each updated particle $\hat{\xi}_t^{\left(j\right)}$ is independently drawn from a distribution
determined by the $j$th column of the resampling matrix:
\begin{equation*}
\hat{\xi}_t^{\left(j\right)} \sim \frac{\sum_{i=1}^{N_t} w_t^{\left(i, j\right)} \delta\left(\xi_t^{\left(i\right)}\right)}
{\sum_{i=1}^{N_t} w_t^{\left(i, j\right)}}
\end{equation*}
\end{itemize}
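In code, a matrix resampling step therefore amounts to drawing each updated particle from the normalized $j$th column of $W_t$; a sketch (for the basic framework above, assuming strictly positive column sums) reads:
\begin{verbatim}
import numpy as np

def matrix_resample(W, xi, rng=None):
    # W: nonnegative N_t x N_{t+1} matrix whose i-th row sums to w_t^(i).
    rng = rng or np.random.default_rng()
    w_hat = W.sum(axis=0)  # column sums = updated weights
    # Each updated particle is drawn independently from the
    # normalized j-th column of W.
    idx = [rng.choice(W.shape[0], p=W[:, j] / w_hat[j])
           for j in range(W.shape[1])]
    return w_hat, xi[np.array(idx)]
\end{verbatim}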
Resampling schemes can be divided into \emph{fixed population} resampling schemes,
where particle numbers $\left(N_t\right)_{t \geq 0}$ are deterministic,
and \emph{random population} resampling schemes,
where the number of particles
$\left(N_t\right)_{t \geq 0}$ is random.
While the matrix resampling framework is useful for describing fixed population schemes,
it is necessary to extend the framework
further in order to describe random population resampling schemes.
This section presents a new extension to the matrix resampling framework
to random population schemes that satisfy an upper
bound on the maximum possible number of particles $N_t$.
In these schemes, $N_t$ can be bounded by $C_t N_0$ for each $t \geq 0$,
where $\left(C_t\right)_{t \geq 0}$ is a deterministic series of constants.
This assumption is often satisfied for the random population schemes used in practice.
For example, in Bernoulli resampling,
the random numbers $N_t$ satisfy an upper bound $N_t \leq N_0 \left(t + 1\right)$
and cannot grow in an uncontrolled way,
because
\begin{equation*}
N_{t+1} = \sum_{i=1}^{N_t} N_t^{\left(i\right)}
= \sum_{i=1}^{N_t}
\left(\left\lfloor \frac{w_t^{\left(i\right)}}{\overline{w}_t} \right\rfloor
+ 1\right)
\leq \sum_{i=1}^{N_t} \left(\frac{w_t^{\left(i\right)}}{\overline{w}_t} + 1\right)
= N_0 + N_t
\end{equation*}
The extended matrix resampling framework
differs from the standard matrix resampling framework by including a ``coffin state'' $c$.
The coffin state is an element of state space
that particles $\hat{\xi}_t^{\left(j\right)}$ can potentially occupy, but
particles in the coffin state do not affect any SMC estimates.
By including a coffin state,
the extended matrix resampling framework
is able to reinterpret many random population schemes
as schemes where the number of particles $\left(N_t\right)_{t \geq 0}$ is deterministic
but the number of coffin state particles is random.
In the extended matrix resampling framework,
the Markov chain $X_t$ is allowed to take values in the extended state space $E_t \cup \left\{c\right\}$.
Transitions from the coffin state are described by $\Prob \left\{X_{t + 1} = c \rvert X_t = c \right\} = 1$.
Functions defined on $E_t$ or $E_{t-1} \times E_t$ are extended to take values $f\left(c\right) = 0$ or $f\left(c,c\right) = 0$.
As seen in the definition below,
the extended matrix resampling framework
includes a row in each resampling matrix $W_t$
governing transitions into the coffin state $c$:
\begin{definition}{\label{def:general}}
Extended matrix resampling framework
\begin{enumerate}
\item Initialization: Independently sample
$\xi_{0}^{\left(i\right)}\sim\Law\left(X_{0}\right)$ for $1 \leq i \leq N_0$.
\item
The algorithm proceeds iteratively for $t =0, 1, 2, \ldots$.
\begin{enumerate}
\item
Reweighting: Assign weights $w_t^{\left(i\right)}$ to each particle $\xi_t^{\left(i\right)}$ with
\begin{equation*}
\begin{cases}
w_t^{\left(i\right)} = G_t\left(\xi_t^{\left(i\right)}\right), & t = 0 \\
w_t^{\left(i\right)} = \hat{w}_{t-1}^{\left(i\right)}
G_t\left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_t^{\left(i\right)}\right), & t > 0
\end{cases}
\end{equation*}
\item Resampling: Select a nonnegative matrix $W_t$ with dimensions $\left(N_t + 1\right) \times N_{t+1}$
and row sums
$\sum_{j=1}^{N_{t+1}} w_t^{\left(i,j\right)} = w_t^{\left(i\right)}$ for $1 \leq i \leq N_t$.
Independently, for $1 \leq j \leq N_{t+1}$, select $\hat{\xi}_t^{\left(j\right)}$
from the distribution
\begin{equation*}
\hat{\xi}_t^{\left(j\right)} \sim \frac{\sum_{i=1}^{N_t} w_t^{\left(i, j\right)} \delta\left(\xi_t^{\left(i\right)}\right)
+ w_t^{\left(N_t + 1, j\right)} \delta\left(c\right)}
{\sum_{i=1}^{N_t + 1} w_t^{\left(i, j\right)}}
\end{equation*}
Define the $\hat{w}_t^{\left(j\right)}$ by the column sum
$\hat{w}_t^{\left(j\right)} = \sum_{i=1}^{N_t + 1} w_t^{\left(i, j\right)}$.
\item Mutation: sample
$\xi_{t+1}^{\left(i\right)}\sim \Law \left(X_{t+1}|X_t=\hat{\xi}_t^{\left(i\right)}\right)$ for $1\leq i \leq N_{t+1}$.
\end{enumerate}
\item
Estimation: To estimate quantities $\E\left[\prod_{s=0}^{t-1} G_s f\right]$, use
\begin{equation*}
\frac{1}{N_0}\sum_{i=1}^{N_t} \hat{w}_{t-1}^{\left(i\right)}
f\left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_{t}^{\left(i\right)}\right)
\approx \E\left[\prod_{s=0}^{t-1} G_s f\right]
\end{equation*}
\end{enumerate}
\end{definition}
The extended matrix resampling framework
encompasses a variety of resampling schemes.
For example, Figure \ref{figure1} presents resampling matrices $W_t$
that correspond to sequential importance sampling, multinomial resampling, and Bernoulli resampling.
In the extended matrix resampling framework,
the choice of which matrix $W_t$ to use can be made adaptively,
incorporating any information, such as the values of particles $\left(\xi_t^{\left(i\right)}\right)_{1 \leq i \leq N_t}$
and their weights $\left(w_t^{\left(i\right)}\right)_{1 \leq i \leq N_t}$.
Only the numbers $\left(N_t\right)_{t \geq 0}$ must be fixed in advance of running the SMC algorithm.
\begingroup\abovedisplayskip=0pt\belowdisplayskip=0pt
\begin{figure}[!htbp]
\caption{Examples of resampling matrices $W_0$ when $N_0 = 4$, $N_1 = 6$, and
particles have weights $w_0^{\left(1\right)}=3.2$, $w_0^{\left(2\right)}=2.4$,
$w_0^{\left(3\right)}=.8$, and $w_0^{\left(4\right)}=1.6$.
A horizontal line separates the coffin state $c$.}
\label{figure1}
\begin{minipage}{.32 \textwidth}
\begin{equation*}
\begin{spmatrix}{seq. importance sampling}
3.2 & ~ & ~ & ~ & ~ & ~ \\
~ & 2.4 & ~ & ~ & ~ & ~ \\
~ & ~ & .8 & ~ & ~ & ~ \\
~ & ~ & ~ & 1.6 & ~ & ~\\
\hline
~ & ~ & ~ & ~ & 2 & 2 \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\hfill{}
\begin{minipage}{.32 \textwidth}
\begin{equation*}
\begin{spmatrix}{multinomial resampling}
.8 & .8 & .8 & .8 & ~ & ~ \\
.6 & .6 & .6 & .6 & ~ & ~ \\
.2 & .2 & .2 & .2 & ~ & ~ \\
.4 & .4 & .4 & .4 & ~ & ~ \\
\hline
~ & ~ & ~ & ~ & 2 & 2 \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\hfill{}
\begin{minipage}{.32 \textwidth}
\begin{equation*}
\begin{spmatrix}{Bernoulli resampling}
2 & 1.2 & ~ & ~ & ~ & ~ \\
~ & ~ & 2 & .4 & ~ & ~ \\
~ & ~ & ~ & ~ & .8 & ~\\
~ & ~ & ~ & ~ & ~ & 1.6 \\
\hline
~ & .8 & ~ & 1.6 & 1.2 & .4 \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\end{figure}
\endgroup
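For instance, the Bernoulli matrix in the right panel of Figure \ref{figure1} can be built deterministically from the weights; the randomness enters only afterwards, when each column is sampled. A sketch of the construction is given below (padding with pure coffin-state columns, as in the other panels, can then bring the number of columns up to a pre-fixed $N_{t+1}$).
\begin{verbatim}
import numpy as np

def bernoulli_matrix(w, N0):
    # Build a Bernoulli resampling matrix in the extended framework;
    # the last row is the coffin state c.
    w_bar = w.sum() / N0
    cols = []
    for i, wi in enumerate(w):
        r = wi / w_bar
        k = int(np.floor(r))
        for _ in range(k):               # "sure" copies of particle i
            col = np.zeros(len(w) + 1)
            col[i] = w_bar
            cols.append(col)
        col = np.zeros(len(w) + 1)       # Bernoulli column: particle i
        col[i] = w_bar * (r - k)         # with probability {r}, else the
        col[-1] = w_bar * (1 - (r - k))  # coffin state
        cols.append(col)
    return np.column_stack(cols)
\end{verbatim}
Applied to the weights of Figure \ref{figure1} with $N_0 = 4$, this reproduces the matrix in the right panel.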
\subsection{Unbiasedness, convergence, and variance}{\label{simpletheorems}}
The matrix resampling framework leads to a series of
powerful results on the unbiasedness, convergence, and variance of SMC estimates.
While versions of these theorems were proved previously
\cite{del2004feynman, douc2008limit, whiteley2016role},
this section presents results that hold more broadly
and include all schemes in the matrix resampling framework.
The first of the key theorems that govern the validity of SMC estimates
ensures that estimates
are unbiased:
\begin{thm}{\label{earlyunbiased}}
If $\E\left|\prod_{t=0}^{T-1} G_t f \right| < \infty$, then SMC estimates are unbiased:
\begin{equation*}
\E\left[\frac{1}{N_{0}}\sum_{i=1}^{N_{T}}\hat{w}_{T-1}^{\left(i\right)}
f\left(\hat{\xi}_{T-1}^{\left(i\right)},\xi_{T}^{\left(i\right)}\right)\right]
=\E\left[\prod_{t=0}^{T-1} G_t f \right]
\end{equation*}
\end{thm}
Theorem \ref{earlyunbiased} is quite general and holds without any additional assumptions.
In contrast, Theorems \ref{weak} and \ref{bounded} will require a mild assumption on the
numbers $\left(N_t\right)_{t \geq 0}$ and on the resampling matrices
$\left(W_t\right)_{t \geq 0}$:
\begin{assumption}{\label{assumption1}}
There exist absolute constants $\left(C_t\right)_{t \geq 0}$
such that
$\frac{N_t}{N_0} \leq C_t$ and
$\max_{1\leq i\leq N_{t+1}}\hat{w}_{t}^{\left(i\right)} \leq C_t \max_{1\leq j\leq N_t}w_{t}^{\left(j\right)}$.
\end{assumption}
Assumption \ref{assumption1} guarantees
that the number of particles does not grow too high
and also that the maximum weight does not grow too high during resampling.
This assumption is satisfied for all the schemes presented in the current paper,
taking $C_t = 1$ for fixed population schemes
and $C_t = t + 1$ for random population schemes.
The next result is a widely useful convergence theorem for SMC estimates:
\begin{thm}{\label{weak}}
If $\E\left[\prod_{s=0}^{t} G_s \right]<\infty$ for $0 \leq t \leq T-1$
and $\E\left|\prod_{t=0}^{T-1} G_t f \right|<\infty$, then
\begin{align*}
& \frac{1}{N_{0}}\sum_{i=1}^{N_{T}}
\hat{w}_{T-1}^{\left(i\right)}f\left(\hat{\xi}_{T-1}^{\left(i\right)},\xi_{T}^{\left(i\right)}\right)
\stackrel{\Prob}{\rightarrow}
\E\left[\prod_{t=0}^{T-1} G_t f \right]
& \text{as } N_0 \rightarrow \infty
\end{align*}
\end{thm}
In Theorem \ref{weak}, it is assumed that the SMC algorithm is well-defined on a
probability space $\left(\Omega, \mathcal{F}, \Prob\right)$ for any
number of starting particles $N_0 = 1, 2, \ldots$.
As $N_0 \rightarrow \infty$, Theorem \ref{weak} establishes that SMC
estimates converge in probability to the correct result.
Another key convergence result is a simple upper bound on the variance of SMC estimates.
The upper bound leads to a clear interpretation that SMC estimates
have a $\frac{1}{\sqrt{N_0}}$ error rate
when functions $\left(G_t\right)_{t \geq 0}$ are bounded.
\begin{thm}{\label{bounded}}
If functions $\left(G_t\right)_{0 \leq t \leq T-1}$ are bounded
and $\E\left|\prod_{t=0}^{T-1} G_t f^2 \right|<\infty$, then
\begin{equation*}
\Var\left[\frac{1}{N_{0}}\sum_{i=1}^{N_{T}}
\hat{w}_{T-1}^{\left(i\right)}f\left(\hat{\xi}_{T-1}^{\left(i\right)},\xi_{T}^{\left(i\right)}\right)\right]
\leq \frac{1}{N_0} \E\left[\prod_{t=0}^{T-1} G_t f^2\right] \prod_{t=0}^{T-1} \sup G_t
\sum_{t=0}^T \prod_{s=0}^{t-1} C_s
\end{equation*}
where $\left(C_t\right)_{t \geq 0}$
are the constants appearing in Assumption \ref{assumption1}.
\end{thm}
While antecedents of Theorems \ref{weak} and \ref{bounded} appear in the SMC literature \cite{douc2008limit, whiteley2016role},
the versions presented here are more general with respect to possible resampling schemes
or are more powerful with respect to unbounded functions $f$.
In examples outlined below,
these theorems determine the convergence behavior
of a diverse set of matrix resampling schemes.
See also Figure \ref{figureexamples},
which provides resampling matrices for the three examples.
\begin{example}[Adaptive resampling and parallel resampling]
Two common variations on the SMC framework are adaptive resampling and parallel resampling.
In adaptive resampling \cite{liu2008monte}, a resampling scheme such as
multinomial or Bernoulli resampling is triggered
if the variation in weights exceeds a certain threshold;
otherwise, sequential importance sampling is applied instead.
In parallel resampling \cite{li2015resampling},
resampling is applied independently on different processors in order to minimize communication costs.
Theorems \ref{weak} and \ref{bounded} guarantee the convergence of many adaptive and parallel resampling schemes.
In particular, convergence is guaranteed even if
the user decides adaptively which resampling scheme to use at the start of each resampling stage
or if resampling decisions are made in parallel across different machines.
\end{example}
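As a concrete illustration of adaptive resampling, the sketch below
(ours; the effective-sample-size trigger and the threshold value of $0.5$
are common heuristics, not prescriptions from the cited works) chooses
between sequential importance sampling and multinomial resampling at each stage:
\begin{verbatim}
import numpy as np

def adaptive_resample(particles, w, rng, threshold=0.5):
    """Resample only when the effective sample size drops below
    threshold * N; otherwise keep particles and weights unchanged
    (sequential importance sampling)."""
    n = len(w)
    ess = w.sum() ** 2 / np.sum(w ** 2)   # effective sample size
    if ess >= threshold * n:
        return list(particles), w.copy()  # SIS: diagonal resampling matrix
    wbar = w.mean()
    idx = rng.choice(n, size=n, p=w / w.sum())   # multinomial resampling
    return [particles[i] for i in idx], np.full(n, wbar)

rng = np.random.default_rng(1)
w = np.array([3.2, 2.4, .8, 1.6])
print(adaptive_resample(["x1", "x2", "x3", "x4"], w, rng))
\end{verbatim}
Because the trigger depends only on the current particles and weights,
this adaptive choice of resampling matrix is covered by
Theorems \ref{weak} and \ref{bounded}.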
\begin{example}[Pruning and enrichment]
In the pruning and enrichment scheme \cite{grassberger1997pruned},
a lower cutoff $u_t$ and an upper cutoff $U_t$ are selected at the beginning of each resampling step.
If $w_t^{\left(i\right)} > U_t$,
then the particle $\xi_t^{\left(i\right)}$ is split into two replicas $\hat{\xi}_t^{\left(j\right)}$
and $\hat{\xi}_t^{\left(k\right)}$ with reduced weights $\hat{w}_t^{\left(j\right)} = \hat{w}_t^{\left(k\right)} = \frac{1}{2} w_t^{\left(i\right)}$.
If $w_t^{\left(i\right)} < u_t$, then instead
an updated particle $\hat{\xi}_t^{\left(j\right)}$ is drawn from the distribution
\begin{equation*}
\hat{\xi}_t^{\left(j\right)} \sim \frac{1}{2} \delta\left(\xi_t^{\left(i\right)}\right) + \frac{1}{2} \delta\left(c\right)
\end{equation*}
with weight $\hat{w}_t^{\left(j\right)} = 2w_t^{\left(i\right)}$.
Lastly, if $u_t \leq w_t^{\left(i\right)} \leq U_t$,
the $i$th particle and weight are left unchanged,
with $\left(\hat{w}_t^{\left(j\right)}, \hat{\xi}_t^{\left(j\right)}\right)
= \left(w_t^{\left(i\right)}, \xi_t^{\left(i\right)}\right)$ for some $1 \leq j \leq N_{t+1}$.
Theorems \ref{weak} and \ref{bounded} guarantee
convergence of the pruning and enrichment scheme even when
cutoff values $u_t$ and $U_t$ are selected adaptively
at the start of each resampling stage.
\end{example}
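In terms of particles and weights, one pruning and enrichment step can be
sketched as follows (our own illustration; the cutoffs reproduce the example
of Figure \ref{figureexamples}, and the coffin state is encoded as \texttt{None}):
\begin{verbatim}
import numpy as np

def prune_enrich(particles, w, u, U, rng):
    """Pruning and enrichment with lower cutoff u and upper cutoff U."""
    new_p, new_w = [], []
    for x, wi in zip(particles, w):
        if wi > U:          # enrich: split into two half-weight replicas
            new_p += [x, x]; new_w += [wi / 2, wi / 2]
        elif wi < u:        # prune: survive w.p. 1/2 at doubled weight
            survives = rng.random() < 0.5
            new_p.append(x if survives else None)
            new_w.append(2 * wi)
        else:               # middle range: keep unchanged
            new_p.append(x); new_w.append(wi)
    return new_p, np.array(new_w)

rng = np.random.default_rng(2)
print(prune_enrich(["x1", "x2", "x3", "x4"],
                   np.array([3.2, 2.4, .8, 1.6]), u=1.0, U=3.0, rng=rng))
\end{verbatim}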
\begin{example}[Rejection control]
The rejection control scheme \cite{liu1998rejection}
mixes sequential importance sampling and Bernoulli resampling.
In this scheme, first compute the average particle weight $\overline{w}_t = \frac{1}{N_0}\sum_{j=1}^{N_t} w_t^{\left(j\right)}$.
Then, if $w_t^{\left(i\right)} \geq \overline{w}_t$,
the $i$th particle and weight are left unchanged,
with $\left(\hat{w}_t^{\left(j\right)}, \hat{\xi}_t^{\left(j\right)}\right)
= \left(w_t^{\left(i\right)}, \xi_t^{\left(i\right)}\right)$ for some $1 \leq j \leq N_{t+1}$.
Otherwise, if $w_t^{\left(i\right)} < \overline{w}_t$,
a particle $\hat{\xi}_t^{\left(j\right)}$ is drawn from the distribution
\begin{equation*}
\hat{\xi}_t^{\left(j\right)} \sim \frac{w_t^{\left(i\right)}}{\overline{w}_t} \delta\left(\xi_t^{\left(i\right)}\right)
+ \left(1 - \frac{w_t^{\left(i\right)}}{\overline{w}_t}\right) \delta\left(c\right)
\end{equation*}
with weight $\hat{w}_t^{\left(j\right)}=\overline{w}_t$.
Theorems \ref{weak} and \ref{bounded} are the best known convergence results for the rejection control scheme.
\end{example}
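A minimal sketch of one rejection control step (ours; here $N_t = N_0$,
so the average particle weight reduces to a simple mean) reads:
\begin{verbatim}
import numpy as np

def rejection_control(particles, w, rng):
    """Keep above-average-weight particles unchanged; below-average
    particles survive w.p. w_i / wbar with weight reset to wbar."""
    wbar = w.mean()        # average particle weight (here N_t = N_0)
    new_p, new_w = [], []
    for x, wi in zip(particles, w):
        if wi >= wbar:
            new_p.append(x); new_w.append(wi)
        else:
            keep = rng.random() < wi / wbar
            new_p.append(x if keep else None)  # None is the coffin state
            new_w.append(wbar)
    return new_p, np.array(new_w)

rng = np.random.default_rng(3)
print(rejection_control(["x1", "x2", "x3", "x4"],
                        np.array([3.2, 2.4, .8, 1.6]), rng))
\end{verbatim}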
\begingroup\abovedisplayskip=0pt\belowdisplayskip=0pt
\begin{figure}[!htbp]
\caption{Examples of resampling matrices $W_0$ when $N_0 = 4$, $N_1 = 5$, and
particles have weights $w_0^{\left(1\right)}=3.2$, $w_0^{\left(2\right)}=2.4$,
$w_0^{\left(3\right)}=.8$, and $w_0^{\left(4\right)}=1.6$.
In parallel resampling the resampling matrix takes a block diagonal form,
with each block corresponding to a different processor.
In pruning and enrichment, the cutoff values are $u_t = 1$ and $U_t = 3$.}{\label{figureexamples}}
\begin{minipage}{.32 \textwidth}
\begin{equation*}
\begin{spmatrix}{parallel mult. resampling}
1.6 & 1.6 & ~ & ~ & ~ \\
1.2 & 1.2 & ~ & ~ & ~ \\
~ & ~ & .4 & .4 & ~ \\
~ & ~ & .8 & .8 & ~ \\
\hline
~ & ~ & ~ & ~ & 2 \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\hfill{}
\begin{minipage}{.32 \textwidth}
\begin{equation*}
\begin{spmatrix}{pruning and enrichment}
1.6 & 1.6 & ~ & ~ & ~ \\
~ & ~ & 2.4 & ~ & ~ \\
~ & ~ & ~ & .8 & ~ \\
~ & ~ & ~ & ~ & 1.6 \\
\hline
~ & ~ & ~ & .8 & ~ \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\hfill{}
\begin{minipage}{.32 \textwidth}
\begin{equation*}
\begin{spmatrix}{rejection control}
3.2 & ~ & ~ & ~ & ~ \\
~ & 2.4 & ~ & ~ & ~ \\
~ & ~ & .8 & ~ & ~ \\
~ & ~ & ~ & 1.6 & ~ \\
\hline
~ & ~ & 1.2 & .4 & 2 \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\end{figure}
\endgroup
\begin{remark}
Many past analyses of SMC \cite{douc2008limit, chan2013general}
have focused on SMC estimates of ratios
$\E\left[\prod_{t=0}^{T-1} G_t f \right] \slash \E\left[\prod_{t=0}^{T-1} G_t \right]$.
In the present analysis, the central focus is shifted toward SMC estimates of quantities
$\E\left[\prod_{t=0}^{T-1} G_t f \right]$.
This central focus has three advantages.
First, estimates of $\E\left[\prod_{t=0}^{T-1} G_t f \right]$
are unbiased, making them simpler to analyze than estimates of ratios,
which are typically biased.
Second, unbiased SMC estimates have not been studied in as much detail
as estimates of ratios have been, despite their central importance in rare event sampling and Bayesian statistics
\cite{del2004feynman, hairer2014improved}.
Third, convergence properties for estimates of ratios
follow as a corollary of convergence properties for unbiased estimates.
For more details of this relationship, refer to the discussion in the appendix.
\end{remark}
\subsection{Martingale analysis of SMC}{\label{martingale}}
Martingale theory provides an essential tool
for the analysis of SMC
\cite{del2004feynman, douc2008limit, chan2013general}.
In the current section, a martingale is used to show
that SMC estimates are unbiased.
In later sections, the same martingale leads to an
error decomposition and asymptotic error formulas for SMC estimates.
The first step in a martingale analysis is to define a filtration
and a martingale sequence on that filtration.
Toward this goal, fix functions $\left(G_t\right)_{0 \leq t \leq T-1}$ and $f$
and define $\sigma$-algebras and conditional expectations as follows:
\begin{definition}{\label{sigmadef}}
$\sigma$-algebras and conditional expectations
\begin{enumerate}
\item
Introduce the filtration $\left(\mathcal{F}_t\right)_{-1 \leq t \leq T}$, where
\begin{equation*}
\begin{cases}
\mathcal{F}_{-1} = \left\{\emptyset, \Omega \right\} \\
\mathcal{F}_0 = \sigma \left( \left(\xi_0^{\left(i\right)}\right)_{1 \leq i \leq N_0}, W_0\right) \\
\mathcal{F}_{t+\frac{1}{2}} = \mathcal{F}_t \vee \sigma \left( \left(\hat{\xi}_t^{\left(j\right)}\right)_{1 \leq j \leq N_{t + 1}} \right),
& 0 \leq t \leq T-1 \\
\mathcal{F}_{t+1} = \mathcal{F}_{t+\frac{1}{2}} \vee \sigma \left( \left(\xi_{t+1}^{\left(i\right)}\right)_{1 \leq i \leq N_{t+1}}, W_{t+1}\right),
& 0 \leq t \leq T-2 \\
\mathcal{F}_T = \mathcal{F}_{T-\frac{1}{2}} \vee \sigma\left(\left(\xi_T^{\left(i\right)}\right)_{1 \leq i \leq N_T}\right)
\end{cases}
\end{equation*}
Here, $\mathcal{G} \vee \mathcal{H}$ denotes the smallest $\sigma$-algebra
containing $\mathcal{G}$ and $\mathcal{H}$.
\item
Define the conditional expectations
\begin{align*}
& h_t \left(x_t\right) = \E \left[\left. \prod_{s=t+1}^{T-1} G_s f \right\rvert X_t = x_t\right],
& 0 \leq t \leq T-1
\end{align*}
with the convention that $\prod_{\emptyset} G_s = 1$.
\item
To keep the notation simple,
write $h_T\left(x_T\right) = 1$, $G_T\left(x_{T-1}, x_T\right) = f\left(x_{T-1}, x_T\right)$,
and $w_T^{\left(i\right)} = \hat{w}_{T-1}^{\left(i\right)}
G_T\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right)$
for $1 \leq i \leq N_T$.
\end{enumerate}
\end{definition}
The next theorem shows that the SMC estimate $\frac{1}{N_0} \sum_{i=1}^{N_T}
\hat{w}_{T-1}^{\left(i\right)} f\left(\hat{\xi}_{T-1}^{\left(i\right)},\xi_T^{\left(i\right)} \right)$
for the quantity $\E\left[\prod_{t=0}^{T-1} G_t f\right]$
can be interpreted as a martingale on the filtration $\mathcal{F}_t$:
\begin{thm}{\label{unbiased}}
If $\E\left|\prod_{t=0}^{T-1} G_t f \right| < \infty$, there exists a martingale $M_t$ on the filtration $\mathcal{F}_t$
that satisfies
\begin{equation*}
\begin{cases}
M_{-1} = \E \left[ \prod_{t=0}^{T-1} G_t f\right] \\
M_t = \frac{1}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)} h_t \left(\xi_t^{\left(i\right)}\right),
& 0 \leq t \leq T \\
M_{t + \frac{1}{2}} = \frac{1}{N_{0}} \sum_{i=1}^{N_{t+1}} \hat{w}_t^{\left(i\right)}
h_t \left(\hat{\xi}_t^{\left(i\right)}\right),
& 0 \leq t \leq T-1
\end{cases}
\end{equation*}
\end{thm}
\begin{proof}
For $0 \leq t \leq T-1$,
\begin{align}
\label{startpart1}
& \quad \E\left[\left.M_{t+1}\right\rvert \mathcal{F}_{t+\frac{1}{2}} \right]
= \E\left[\left.\frac{1}{N_{0}}\sum_{i=1}^{N_{t+1}}
w_{t+1}^{\left(i\right)} h_{t+1}\left(\xi_{t+1}^{\left(i\right)}\right)
\right\rvert \mathcal{F}_{t+\frac{1}{2}} \right] \\
& = \frac{1}{N_{0}}\sum_{i=1}^{N_{t+1}} \hat{w}_t^{\left(i\right)}
\E\left[\left.
G_{t+1}\left(\hat{\xi}_{t}^{\left(i\right)},\xi_{t+1}^{\left(i\right)}\right)
h_{t+1}\left(\xi_{t+1}^{\left(i\right)}\right)\right\rvert \mathcal{F}_{t+\frac{1}{2}}\right] \\
\label{endpart1}
& = \frac{1}{N_{0}}\sum_{i=1}^{N_{t+1}}\hat{w}_{t}^{\left(i\right)}
h_{t}\left(\hat{\xi}_{t}^{\left(i\right)}\right)
= M_{t + \frac{1}{2}}
\end{align}
Lines \eqref{startpart1}-\eqref{endpart1} use the fact that $\hat{w}_t^{\left(i\right)}$ is
measurable with respect to $\mathcal{F}_{t+\frac{1}{2}}$,
as well as the definitions for $w_{t+1}^{\left(i\right)}$,
$\Law \left(\xi_{t+1}^{\left(i\right)}\rvert \mathcal{F}_{t+\frac{1}{2}}\right)$,
$h_{t+1}$, and $h_t$.
Next, for $0 \leq t \leq T-1$,
\begin{align}
\label{startpart2}
& \quad \E\left[\left.M_{t + \frac{1}{2}}\right| \mathcal{F}_t\right]
= \E\left[\left.\frac{1}{N_{0}}\sum_{j=1}^{N_{t+1}}
\hat{w}_t^{\left(j\right)}h_{t}\left(\hat{\xi}_{t}^{\left(j\right)}\right)
\right\rvert\mathcal{F}_{t}\right] \\
& =\frac{1}{N_{0}}\sum_{j=1}^{N_{t+1}}\hat{w}_t^{\left(j\right)}
\sum_{i=1}^{N_{t}}h_{t}\left(\xi_{t}^{\left(i\right)}\right)
\Prob\left\{\left. \hat{\xi}_{t}^{\left(j\right)} = \xi_t^{\left(i\right)}\right\rvert\mathcal{F}_{t}\right\} \\
\label{endpart2}
& =\frac{1}{N_0} \sum_{j=1}^{N_{t+1}} \hat{w}_t^{\left(j\right)} \sum_{i=1}^{N_t}
h_{t}\left(\xi_{t}^{\left(i\right)}\right) \frac{w_{t}^{\left(i,j\right)}}{\hat{w}_t^{\left(j\right)}}
=\frac{1}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)}h_{t}\left(\xi_{t}^{\left(i\right)}\right)
= M_t
\end{align}
Lines \eqref{startpart2}-\eqref{endpart2} use the fact that $\hat{w}_t^{\left(j\right)}$
is measurable with respect to $\mathcal{F}_{t}$,
the definition for $\Law\left(\left. \hat{\xi}_{t}^{\left(j\right)}\right\rvert\mathcal{F}_{t}\right)$,
and the fact that $\sum_{j=1}^{N_{t+1}} w_t^{\left(i, j\right)} = w_t^{\left(i\right)}$.
Lastly, because $\xi_0^{\left(j\right)} \sim \Law \left(X_0\right)$ for $1 \leq j \leq N_0$,
$\E \left[ \frac{1}{N_0}\sum_{i=1}^{N_0} w_0^{\left(i\right)} h_0\left(\xi_{0}^{\left(i\right)}\right) \right]
= \E \left[G_0 h_0 \right]
= \E \left[ \prod_{t=0}^{T-1} G_t f \right]$.
\end{proof}
Theorem \ref{unbiased} guarantees the unbiasedness of SMC estimates,
confirming Theorem \ref{earlyunbiased}.
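As an informal numerical check of Theorems \ref{earlyunbiased} and \ref{unbiased},
consider the following self-contained sketch (a toy model of our own construction):
a standard Gaussian random walk with $G_0 = \mathds{1}\left\{X_0 > 0\right\}$,
$G_1 = \mathds{1}\left\{X_1 > 0\right\}$, and $f \equiv 1$, for which the Gaussian
orthant probability formula gives
$\E\left[G_0 G_1\right] = \frac{1}{4} + \arcsin\left(1 \slash \sqrt{2}\right) \slash \left(2\pi\right) = \frac{3}{8}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N0, T = 100_000, 2
xi = rng.standard_normal(N0)       # initialization: X_0 ~ N(0, 1)
w = (xi > 0).astype(float)         # weighting: w_0 = G_0(xi_0)
for t in range(T - 1):
    wbar = w.sum() / N0
    # multinomial resampling: N0 independent draws from the weights
    idx = rng.choice(N0, size=N0, p=w / w.sum())
    xi, w = xi[idx], np.full(N0, wbar)
    xi = xi + rng.standard_normal(N0)   # mutation: X_{t+1} = X_t + eps
    w = w * (xi > 0)                    # weighting: multiply in G_{t+1}
print("SMC estimate:", w.sum() / N0)    # close to 3/8 = 0.375
\end{verbatim}
Since $f \equiv 1$, the final resampling and mutation steps leave the estimate
$\frac{1}{N_0} \sum_i w_1^{\left(i\right)}$ unchanged and are omitted;
averaging over repeated runs reproduces $3/8$ up to Monte Carlo error,
as the unbiasedness theorems require.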
\section{Unified analysis of SMC error}{\label{minimize}}
The current section provides a unified analysis of SMC error
which facilitates comparison of different resampling schemes.
Section \ref{efficiency} defines complete resampling schemes,
a subset of matrix resampling schemes
which will be covered in the error analysis.
Section \ref{factors} explains how error arises within the SMC algorithm
and how error can be reduced by selecting an appropriate resampling scheme.
Section \ref{sort} identifies the matrix resampling scheme that gives
the lowest possible error.
Section \ref{asymtheory} presents new asymptotic formulas
that can be used to rigorously compare the error associated
with different resampling schemes.
\subsection{Complete resampling schemes}{\label{efficiency}}
A complete resampling scheme is a matrix resampling scheme with the requirement
that all the updated weights $\hat{w}_t^{\left(j\right)}$ equal the same weight
$\overline{w}_t = \frac{1}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)}$.
Complete resampling schemes, which include Bernoulli resampling and multinomial resampling,
are very prominent in discussions of SMC.
In fact, several previous reviews of resampling methods
focused solely on complete resampling schemes \cite{douc2005comparison, hol2006resampling}.
The error analysis makes the following assumption:
\begin{assumption}{\label{assumption2}}
The resampling scheme is complete; that is, all the updated weights $\hat{w}_t^{\left(j\right)}$ equal the same weight,
$\overline{w}_t = \frac{1}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)}$.
\end{assumption}
There are two major factors that determine the value of a resampling scheme:
the computational cost of using the scheme and the accuracy of the estimates it provides.
The advantage of analyzing complete resampling schemes is that all complete
resampling schemes share a similar computational cost.
In particular, the computational cost of an SMC algorithm is proportional
to the number of non-coffin particles,
and the next proposition
guarantees that the number of non-coffin particles
is similar for all complete resampling schemes, with a statistical range of $N_0 \pm \sqrt{N_0}$ particles:
\begin{prop}{\label{particlenumber}}
If at least one of the weights $\left(w_t^{\left(i\right)}\right)_{1 \leq i \leq N_t}$
is positive, then the number of non-coffin particles satisfies
\begin{equation*}
\begin{cases}
\E \left[\left. \sum_{j=1}^{N_{t+1}} \mathds{1}\left\{\hat{\xi}_t^{\left(j\right)} \neq c \right\} \right| \mathcal{F}_t\right] = N_0 \\
\Var \left[\left.\sum_{j=1}^{N_{t+1}} \mathds{1}\left\{\hat{\xi}_t^{\left(j\right)} \neq c \right\} \right| \mathcal{F}_t \right] \leq N_0
\end{cases}
\end{equation*}
\end{prop}
\begin{proof}
The particles $\left(\hat{\xi}_t^{\left(j\right)}\right)_{1 \leq j \leq N_{t+1}}$ are drawn independently given $\mathcal{F}_t$,
so the conditional mean and variance of the number of non-coffin particles are obtained by summing over columns. Calculate
$\sum_{j=1}^{N_{t+1}} \Prob \left\{\left.\hat{\xi}_t^{\left(j\right)} \neq c \right| \mathcal{F}_t \right\}
= \sum_{j=1}^{N_{t+1}} \sum_{i=1}^{N_t} \frac{w_t^{\left(i, j\right)}}{\overline{w}_t}
= N_0$
and
\begin{align*}
& \quad \sum_{j=1}^{N_{t+1}} \Var \left[\left. \mathds{1}\left\{\hat{\xi}_t^{\left(j\right)} \neq c \right\} \right| \mathcal{F}_t \right]
= \sum_{j=1}^{N_{t + 1}} \frac{w_t^{\left(N_t + 1,j\right)}}{\overline{w}_t} \left(1-\frac{w_t^{\left(N_t + 1,j\right)}}{\overline{w}_t}\right) \\
& \leq \sum_{j=1}^{N_{t + 1}}\left(1-\frac{w_t^{\left(N_t + 1, j\right)}}{\overline{w}_t}\right)
= \sum_{j=1}^{N_{t + 1}} \sum_{i = 1}^{N_t} \frac{w_t^{\left(i,j\right)}}{\overline{w}_t} = N_0
\end{align*}
\end{proof}
Since all complete resampling schemes share a similar computational cost,
it is the accuracy of these schemes that should be the determining factor in deciding which scheme to use.
The accuracy of SMC estimates made using various resampling schemes is explored in depth in the subsequent sections.
\subsection{Factors contributing to SMC error}{\label{factors}}
The goal of the current section is to show how each step of the SMC algorithm
contributes error to SMC estimates
and how this error can be reduced by selecting an appropriate resampling scheme.
The starting point for the decomposition of SMC error is the martingale introduced in Theorem \ref{unbiased}.
\begin{equation*}
\begin{cases}
M_{-1} = \E \left[ \prod_{t=0}^{T-1} G_t f\right] \\
M_t = \frac{1}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)} h_t \left(\xi_t^{\left(i\right)}\right),
& 0 \leq t \leq T \\
M_{t + \frac{1}{2}} = \frac{\overline{w}_t}{N_0} \sum_{i=1}^{N_{t+1}}
h_t \left(\hat{\xi}_t^{\left(i\right)}\right),
& 0 \leq t \leq T-1
\end{cases}
\end{equation*}
where
$h_t \left(x_t\right) = \E \left[\left. \prod_{s=t+1}^{T-1} G_s f \right\rvert X_t = x_t\right]$.
At time $t = -1$,
the martingale is a perfect estimate $M_{-1} = \E\left[\prod_{t=0}^{T-1} G_t f\right]$.
At time $t = T$,
the martingale has evolved to become an imperfect estimate
$M_T = \frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T} f\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right)$.
An additive decomposition of SMC error is
\begin{align*}
& \quad \frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T} f\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right)
- \E\left[\prod_{t=0}^{T-1} G_t f\right] \\
& = \underbrace{\left(M_0 - M_{-1}\right)}_{\text{initialization error}} +
\sum_{t=0}^{T-1} \underbrace{\left(M_{t+\frac{1}{2}} - M_t\right)}_{\text{resampling error}}
+ \sum_{t=0}^{T-1} \underbrace{\left(M_{t+1} - M_{t+\frac{1}{2}}\right)}_{\text{mutation error}}
\end{align*}
In this decomposition, SMC error is the sum of three uncorrelated error sources:
initialization error, resampling error and mutation error.
The first error source is initialization error,
which can be written
\begin{equation*}
M_0 - M_{-1} = \frac{1}{N_0} \sum_{i=1}^{N_0}
\left\{G_0\left(\xi_0^{\left(i\right)}\right) h_0\left(\xi_0^{\left(i\right)}\right)
- \E\left[G_0 h_0\right]\right\}
\end{equation*}
Initialization error is caused by random sampling of the particles $\left(\xi_0^{\left(i\right)}\right)_{1 \leq i \leq N_0}$
during the initialization step.
The mean squared initialization error can be calculated
\begin{equation*}
\E\left|M_0 - M_{-1}\right|^2 = \Var\left[\frac{1}{N_0}\sum_{i=1}^{N_0} G_0\left(\xi_0^{\left(i\right)}\right) h_0\left(\xi_0^{\left(i\right)}\right)\right] = \frac{1}{N_0}\Var\left[G_0 h_0\right]
\end{equation*}
This error source is the same for all resampling schemes,
with no dependence on the particular resampling scheme that is used.
Similar to initialization error is mutation error.
Mutation error arises from the random sampling of particles $\left(\xi_t^{\left(i\right)}\right)_{1 \leq i \leq N_t}$ during a mutation step.
Mutation error $M_{t+1} - M_{t+\frac{1}{2}}$ can be written
\begin{equation*}
\frac{\overline{w}_t}{N_0}\sum_{i=1}^{N_{t+1}}
\left\{G_{t+1}\left(\hat{\xi}_t^{\left(i\right)}, \xi_{t+1}^{\left(i\right)}\right) h_{t+1}\left(\xi_{t+1}^{\left(i\right)}\right)
- \E\left[\left.G_{t+1} h_{t+1} \right|X_t = \hat{\xi}_t^{\left(i\right)}\right]\right\}
\end{equation*}
An asymptotic expansion shows how mutation error approaches a fixed asymptotic limit,
regardless of which resampling scheme is used:
\begin{prop}{\label{decomposition}}
Assume functions $\left(G_t\right)_{0 \leq t \leq T-1}$ are bounded and assume
$\E\left[\prod_{t=0}^{T-1} G_t f^2 \right] < \infty$.
Then, at each time $0 \leq t \leq T-1$ there exists a constant $C>0$,
independent of resampling scheme, such that
\begin{multline*}
\frac{1}{N_0}\E\left[\prod_{s=0}^t G_s\right]
\E\left[\prod_{s=0}^t G_s \Var\left[\left.G_{t+1} h_{t+1} \right| X_t\right]\right]
\leq \E\left|M_{t+1} - M_{t + \frac{1}{2}}\right|^2 \\
\leq \frac{1}{N_0} \E\left[\prod_{s=0}^t G_s\right]
\E\left[\prod_{s=0}^t G_s \Var\left[\left.G_{t+1} h_{t+1} \right| X_t\right]\right] + \frac{C}{N_0^2}
\end{multline*}
\end{prop}
\begin{proof}
Define $\nu\left(x_t\right) = \Var\left[\left.G_{t+1} h_{t+1}\right| X_t = x_t\right]$. By Theorem \ref{bounded},
\begin{align*}
0 & \leq \E \left[ \frac{\overline{w}_t^2}{N_0} \sum_{i=1}^{N_{t+1}} \nu\left(\hat{\xi}_t^{\left(i\right)}\right)\right]
- \E \left[\overline{w}_t\right] \E\left[\frac{\overline{w}_t}{N_0} \sum_{i=1}^{N_{t+1}} \nu\left(\hat{\xi}_t^{\left(i\right)}\right)\right] \\
& = \Cov\left[\overline{w}_t, \frac{\overline{w}_t}{N_0} \sum_{i=1}^{N_{t+1}} \nu\left(\hat{\xi}_t^{\left(i\right)}\right)\right]
\leq \Var\left[\overline{w}_t\right]^{\frac{1}{2}}
\Var \left[\frac{\overline{w}_t}{N_0} \sum_{i=1}^{N_{t+1}} \nu\left(\hat{\xi}_t^{\left(i\right)}\right)\right]^{\frac{1}{2}}
\leq \frac{C}{N_0}
\end{align*}
Moreover, Theorem \ref{unbiased} guarantees $\E\left[\overline{w}_t\right] = \E\left[\prod_{s=0}^t G_s\right]$
and
\begin{equation*}
\E\left[\frac{\overline{w}_t}{N_0} \sum_{i=1}^{N_{t+1}} \nu\left(\hat{\xi}_t^{\left(i\right)}\right)\right]
= \E\left[\prod_{s=0}^t G_s \Var\left[\left.G_{t+1} h_{t+1}\right|X_t\right]\right]
\end{equation*}
\end{proof}
In summary, Proposition \ref{decomposition} demonstrates that mutation error,
just like initialization error, does not depend on which particular complete resampling scheme is used.
Having discussed two sources of SMC error -- initialization error and mutation error --
the last error source that remains to be discussed is resampling error.
Resampling error can be written
\begin{equation*}
M_{t+\frac{1}{2}} - M_t
= \frac{\overline{w}_t}{N_0} \sum_{j=1}^{N_{t+1}} h_t\left(\hat{\xi}_t^{\left(j\right)}\right)
- \frac{1}{N_0} \sum_{i=1}^{N_t} h_t\left(\xi_t^{\left(i\right)}\right)
\end{equation*}
Resampling error results from random population changes during the resampling step.
Resampling error exhibits quite different behavior from initialization and mutation error:
the size of this error can vary significantly
depending on which particular resampling scheme is used.
A tool for measuring resampling error \cite{douc2005comparison} is resampling variance
\begin{equation*}
\hat{V}_t^2\left[h_t\right] = \Var\left[\left.\frac{\overline{w}_t}{N_0} \sum_{j=1}^{N_{t+1}} h_t\left(\hat{\xi}_t^{\left(j\right)}\right)
\right| \mathcal{F}_t\right]
\end{equation*}
Reducing resampling variance is a means toward increasing SMC efficiency.
As illustrated in the next lemma,
resampling variance can be reduced by selecting an appropriate resampling scheme:
\begin{lem}{\label{simplelemma}}
\begin{enumerate}[label = (\alph*)]
\item{\label{representation1}}
Let $h_t \in \mathbb{R}^{N_t + 1}$ denote the vector with $h_t^{\left(i\right)}=h_t\left(\xi_t^{\left(i\right)}\right)$ for $1 \leq i \leq N_t$ and $h_t^{\left(N_t + 1\right)} = 0$.
Then, resampling variance $\hat{V}_t^2\left[h_t\right]$ can be written as a quadratic function of the resampling matrix $W_t$:
\begin{equation*}
\frac{\overline{w}_t}{N_0^2} \sum_{i=1}^{N_t} w_t^{\left(i\right)}
\left|h_t\left(\xi_t^{\left(i\right)}\right)\right|^2
- \frac{1}{N_0^2} h_t^T W_t W_t^T h_t
\end{equation*}
Consequently, minimizing resampling variance is a concave minimization problem.
\item{\label{representation3}}
Consider a resampling matrix $W_t$ containing a sequence of columns
$c_{j_1}, c_{j_2}, \ldots, c_{j_K}$.
Then, replacing the columns $c_{j_1}, c_{j_2}, \ldots, c_{j_K}$
with $K$ identical columns $\frac{1}{K} \sum_{k=1}^K c_{j_k}$
either increases resampling variance or leaves resampling variance unchanged.
\end{enumerate}
\end{lem}
\begin{proof}[Proof of Lemma \ref{simplelemma}]
Resampling variance $\hat{V}^2_t\left[h_t\right]$ can be written as
\begin{align*}
& \quad \Var\left[\left.\frac{\overline{w}_t}{N_0} \sum_{j=1}^{N_{t+1}}
h_t\left(\hat{\xi}_t^{\left(j\right)}\right) \right| \mathcal{F}_t\right]
=\frac{\overline{w}_t^2}{N_0^2} \sum_{j=1}^{N_{t+1}} \Var\left[\left.
h_t\left(\hat{\xi}_t^{\left(j\right)}\right) \right| \mathcal{F}_t\right] \\
& = \frac{\overline{w}_t^2}{N_0^2} \sum_{j=1}^{N_{t+1}}
\E\left[\left.\left|h_t\left(\hat{\xi}_t^{\left(j\right)}\right)\right|^2 \right| \mathcal{F}_t\right]
- \frac{\overline{w}_t^2}{N_0^2} \sum_{j=1}^{N_{t+1}}
\left|\E\left[\left.h_t\left(\hat{\xi}_t^{\left(j\right)}\right)\right|\mathcal{F}_t\right] \right|^2 \\
& = \frac{\overline{w}_t}{N_0} \E\left[\left. \frac{\overline{w}_t}{N_0}
\sum_{j=1}^{N_{t+1}} \left|h_t\left(\hat{\xi}_t^{\left(j\right)}\right)\right|^2 \right|\mathcal{F}_t\right]
- \frac{1}{N_0^2} \sum_{j=1}^{N_{t+1}}
\left|\sum_{i=1}^{N_t} w_t^{\left(i,j\right)} h_t\left(\xi_t^{\left(i\right)}\right) \right|^2 \\
& = \frac{\overline{w}_t}{N_0^2} \sum_{i=1}^{N_t} w_t^{\left(i\right)}
\left|h_t\left(\xi_t^{\left(i\right)}\right)\right|^2
- \frac{1}{N_0^2} h_t^T W_t W_t^T h_t
\end{align*}
Next, let $W_t^{\left(1\right)}, W_t^{\left(2\right)}, \ldots, W_t^{\left(K\right)}$ be the resampling matrices
formed by cyclic permutations of columns $c_{j_1}, c_{j_2}, \ldots, c_{j_K}$.
Then, $h_t^T W_t^{\left(k\right)} {W_t^{\left(k\right)}}^T h_t = h_t^T W_t W_t^T h_t$ for each $1 \leq k \leq K$.
By convexity of $W_t \mapsto h_t^T W_t W_t^T h_t$,
\begin{multline*}
h_t^T \left(\frac{1}{K} \sum_{k=1}^K W_t^{\left(k\right)}\right)
\left(\frac{1}{K} \sum_{k=1}^K W_t^{\left(k\right)}\right)^T h_t \\
\leq \frac{1}{K} \sum_{k=1}^K h_t^T W_t^{\left(k\right)} {W_t^{\left(k\right)}}^T h_t
= h_t^T W_t W_t^T h_t
\end{multline*}
\end{proof}
The second part of Lemma \ref{simplelemma}
is a useful device for comparing common resampling schemes.
In examples below, the lemma is used to analyze efficiency of
three common resampling schemes:
\emph{stratified}, \emph{multinomial residual},
and \emph{stratified residual} resampling.
See also Figure \ref{figure2}, which provides resampling matrices for these three schemes.
\begin{example}[Stratified resampling]
In stratified resampling \cite{kitagawa1996monte},
sample uniform random variables
$U_t^{\left(j\right)} \sim \Unif \left(\frac{j-1}{N_0},\frac{j}{N_0}\right)$ for $1 \leq j \leq N_0$
and select particles
$\hat{\xi}_t^{\left(j\right)} = Q_t\left(U_t^{\left(j\right)}\right)$, where
\begin{align*}
& Q_t\left(x\right)=\xi_t^{\left(i\right)},
&
\frac{\sum_{k=1}^{i-1} w_t^{\left(k\right)}}
{\sum_{k=1}^{N_t} w_t^{\left(k\right)}}
\leq x <
\frac{\sum_{k=1}^i w_t^{\left(k\right)}}
{\sum_{k=1}^{N_t} w_t^{\left(k\right)}}
\end{align*}
It is seen in Figure \ref{figure2} that the resampling matrix for stratified resampling
takes a particular form,
with nonzero matrix entries forming a path rightwards and downwards.
By averaging over all matrix columns, the multinomial resampling matrix is obtained.
Thus, by Lemma \ref{simplelemma}, the resampling variance of stratified resampling
is always as low or lower than that of multinomial resampling.
\end{example}
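A direct implementation of stratified resampling, sketched below as an
informal illustration (names ours), draws one uniform per stratum and maps
it through the inverse distribution function $Q_t$:
\begin{verbatim}
import numpy as np

def stratified_resample(particles, w, rng):
    """Stratified resampling: U_j ~ Unif((j-1)/n, j/n), mapped through
    the inverse CDF of the normalized weights."""
    n = len(w)
    u = (np.arange(n) + rng.random(n)) / n
    cdf = np.cumsum(w) / w.sum()
    idx = np.searchsorted(cdf, u, side="right")   # Q_t(U_j)
    return [particles[i] for i in idx], np.full(n, w.mean())

rng = np.random.default_rng(5)
w = np.array([3.2, 2.4, .8, 1.6])
print(stratified_resample(["x1", "x2", "x3", "x4"], w, rng))
\end{verbatim}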
\begin{example}
In multinomial residual resampling \cite{brindle1980genetic}, first select $\left\lfloor \frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\rfloor$
copies of each particle $\xi_t^{\left(i\right)}$.
Then, select an additional
$R_t = \sum_{i=1}^{N_t} \left\{ \frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\}$ particles
$\hat{\xi}_t^{\left(j\right)}$
independently from the distribution
\begin{equation*}
\hat{\xi}_t^{\left(j\right)} \sim \frac{1}{R_t} \sum_{i=1}^{N_t}
\left\{ \frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\} \delta\left(\xi_t^{\left(i\right)}\right)
\end{equation*}
It is seen in Figure \ref{figure2} that the resampling matrix for multinomial residual resampling
contains a block of columns with just one nonzero matrix entry per column.
By averaging over all matrix columns, the multinomial resampling matrix is obtained.
Thus, by Lemma \ref{simplelemma}, the resampling variance of multinomial residual resampling
is always as low or lower than that of multinomial resampling.
\end{example}
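An informal sketch of multinomial residual resampling (names ours) separates
the deterministic copies from the $R_t$ residual draws:
\begin{verbatim}
import numpy as np

def residual_resample(particles, w, rng):
    """floor(w_i / wbar) deterministic copies of each particle,
    then R_t multinomial draws from the fractional parts."""
    n = len(w)
    wbar = w.mean()
    ratios = w / wbar
    copies = np.floor(ratios).astype(int)
    idx = np.repeat(np.arange(n), copies)     # certain copies
    frac = ratios - copies
    r = n - copies.sum()                      # R_t residual draws
    if r > 0:
        extra = rng.choice(n, size=r, p=frac / frac.sum())
        idx = np.concatenate([idx, extra])
    return [particles[i] for i in idx], np.full(n, wbar)

rng = np.random.default_rng(6)
w = np.array([3.2, 2.4, .8, 1.6])
print(residual_resample(["x1", "x2", "x3", "x4"], w, rng))
\end{verbatim}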
\begin{example}
Stratified residual resampling \cite{bolic2003new}
combines aspects of stratified resampling and multinomial residual resampling.
First select $\left\lfloor \frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\rfloor$
copies of each particle $\xi_t^{\left(i\right)}$.
Then, for $1 \leq j \leq R_t = \sum_{i=1}^{N_t} \left\{ \frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\}$,
sample a uniform random variable $U_t^{\left(j\right)} \sim \Unif \left(\frac{j-1}{R_t},\frac{j}{R_t}\right)$ and
select the particle $\hat{\xi}_t^{\left(j\right)} = Q_t\left(U_t^{\left(j\right)}\right)$,
where
\begin{align*}
& Q_t\left(x\right)=\xi_t^{\left(i\right)},
&
\frac{1}{R_t} \sum_{k=1}^{i-1} \left\{ \frac{w_t^{\left(k\right)}}{\overline{w}_t}\right\}
\leq x <
\frac{1}{R_t} \sum_{k=1}^i \left\{ \frac{w_t^{\left(k\right)}}{\overline{w}_t}\right\}
\end{align*}
The resampling matrix for stratified residual resampling
contains a block of columns whose nonzero entries form a path rightwards and downwards.
By averaging over this block of columns, the multinomial residual matrix is obtained.
By Lemma \ref{simplelemma}, the resampling variance of stratified residual resampling
is as low or lower than that of multinomial residual resampling.
\end{example}
\begingroup\abovedisplayskip=0pt\belowdisplayskip=0pt
\begin{figure}[!htbp]
\caption{Examples of resampling matrices $W_0$ when $N_0 = 4$, $N_1 = 4$, and
particles have weights $w_0^{\left(1\right)}=3.2$, $w_0^{\left(2\right)}=2.4$,
$w_0^{\left(3\right)}=.8$, and $w_0^{\left(4\right)}=1.6$.}
\label{figure2}
\begin{minipage}{.22 \textwidth}
\begin{equation*}
\begin{spmatrix}{multinomial}
.8 & .8 & .8 & .8 \\
.6 & .6 & .6 & .6 \\
.2 & .2 & .2 & .2 \\
.4 & .4 & .4 & .4 \\
\hline
~ & ~ & ~ & ~ \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\hfill{}
\begin{minipage}{.22 \textwidth}
\begin{equation*}
\begin{spmatrix}{stratified}
2 & 1.2 & ~ & ~ \\
~ & .8 & 1.6 & ~ \\
~ & ~ & .4 & .4 \\
~ & ~ & ~ & 1.6 \\
\hline
~ & ~ & ~ & ~ \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\hfill{}
\begin{minipage}{.22 \textwidth}
\begin{equation*}
\begin{spmatrix}{multinomial residual}
2 & ~ & .6 & .6 \\
~ & 2 & .2 & .2 \\
~ & ~ & .4 & .4 \\
~ & ~ & .8 & .8 \\
\hline
~ & ~ & ~ & ~ \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\hfill{}
\begin{minipage}{.22 \textwidth}
\begin{equation*}
\begin{spmatrix}{stratified residual}
2 & ~ & 1.2 & ~ \\
~ & 2 & .4 & ~ \\
~ & ~ & .4 & .4 \\
~ & ~ & ~ & 1.6 \\
\hline
~ & ~ & ~ & ~ \\
\end{spmatrix}
\end{equation*}
\end{minipage}
\end{figure}
\endgroup
\begin{remark}
While Proposition \ref{decomposition} requires that functions $\left(G_t\right)_{t \geq 0}$ are bounded,
this assumption can be lifted, at the cost of greater complexity.
Using methods to be presented in Section \ref{asymtheory},
it can be shown that mutation error converges in distribution
\begin{equation*}
\sqrt{N_0}\left(M_{t+1} - M_{t+\frac{1}{2}}\right) \stackrel{\mathcal{D}}{\rightarrow}
N\left(0, \E\left[\prod_{s=0}^t G_s\right]
\E\left[\prod_{s=0}^t G_s \Var\left[\left.G_{t+1} h_{t+1}\right| X_t\right]\right]\right)
\end{equation*}
whenever the asymptotic variance is finite.
The asymptotic distribution does not depend on which resampling scheme is used.
\end{remark}
\begin{remark}
Similar to the examples above,
\citet{douc2005comparison} compared resampling variance
between different resampling schemes.
But while \cite{douc2005comparison} used explicit resampling variance calculations,
the resampling matrix framework provides a quicker route to comparing schemes.
In the examples above, it is enough simply to compare columns between resampling matrices
and apply Lemma \ref{simplelemma} to obtain a rigorous error comparison.
\end{remark}
\subsection{Minimizing resampling variance}{\label{sort}}
The goal of SMC is to compute a quantity $\E\left[\prod_{t=0}^{T-1} G_t f\right]$ with minimal error.
Sections \ref{efficiency} and \ref{factors} have demonstrated that the error
of an estimate depends critically on the resampling variance.
Thus, it is of foremost concern to find resampling schemes
that minimize resampling variance.
Theorem \ref{optimal} identifies the minimal variance resampling scheme,
a scheme that sorts particles depending on the values
$h_t\left(\xi_t^{\left(i\right)}\right)$:
\begin{thm}{\label{optimal}}
\begin{enumerate}[label = (\alph*)]
\item{\label{complexscheme}}
The following random population scheme minimizes resampling variance $\hat{V}_t^2\left[h_t\right]$:
\begin{enumerate}[label = \arabic*.]
\item
Add one particle
$\left(w_t^{\left(N_t + 1\right)}, \xi_t^{\left(N_t + 1\right)}\right) = \left(\overline{w}_t, c\right)$
to the ensemble
$\left(w_t^{\left(j\right)}, \xi_t^{\left(j\right)}\right)_{1 \leq j \leq N_t}$.
\item
Sort the ensemble $\left(w_t^{\left(j\right)}, \xi_t^{\left(j\right)}\right)_{1 \leq j \leq N_t + 1}$
from highest to lowest by the value of $h_t\left(\xi_t^{\left(j\right)}\right)$
so that
\begin{equation*}
h_t\left(\xi_t^{\left(1\right)}\right) \geq h_t\left(\xi_t^{\left(2\right)}\right)
\geq \cdots \geq h_t\left(\xi_t^{\left(N_t\right)}\right) \geq h_t\left(\xi_t^{\left(N_t + 1\right)}\right)
\end{equation*}
\item
Apply stratified resampling.
\end{enumerate}
\item{\label{simplescheme}}
The fixed population scheme that minimizes
resampling variance $\hat{V}_t^2\left[h_t\right]$
is a simpler version of the scheme in part \ref{complexscheme}.
First sort particles from highest to lowest by the value of $h_t\left(\xi_t^{\left(i\right)}\right)$
and then apply stratified resampling.
\end{enumerate}
\end{thm}
\begin{proof}
Assume particles have been sorted so that
$h_t\left(\xi_t^{\left(1\right)}\right) \geq h_t\left(\xi_t^{\left(2\right)}\right) \geq \cdots \geq h_t\left(\xi_t^{\left(N_t\right)}\right)$
and consider an arbitrary resampling matrix $W_t$.
By Lemma \ref{simplelemma},
the resampling variance is decreased if $h_t^T W_t W_t^T h_t$ is increased.
As a first step toward increasing $h_t^T W_t W_t^T h_t$,
define $P_t$ and $Q_t$ by
\begin{equation*}
\begin{cases}
p_t^{\left(i, j\right)} = w_t^{\left(i, j\right)} \mathds{1}\left\{h_t\left(\xi_t^{\left(i\right)}\right) \geq 0\right\},
& 1 \leq i \leq N_t \\
q_t^{\left(i, j\right)} = w_t^{\left(i, j\right)} \mathds{1}\left\{h_t\left(\xi_t^{\left(i\right)}\right) < 0\right\},
& 1 \leq i \leq N_t
\end{cases}
\end{equation*}
and
\begin{equation*}
\begin{cases}
p_t^{\left(N_t + 1, j\right)} = \overline{w}_t - \sum_{k=1}^{N_t} p_t^{\left(k, j\right)} \\
q_t^{\left(N_t + 1, j\right)} = \overline{w}_t - \sum_{k=1}^{N_t} q_t^{\left(k, j\right)}
\end{cases}
\end{equation*}
Then set $S_t = \begin{pmatrix} P_t & Q_t \end{pmatrix}$ and observe that
$h_t^T W_t W_t^T h_t \leq h_t^T S_t S_t^T h_t$.
Let $c^{\left(1\right)}, c^{\left(2\right)}, \ldots, c^{\left(N_{t+1}\right)}$ denote the columns of $P_t$,
sorted so that
$h_t^T c^{\left(1\right)} \geq h_t^T c^{\left(2\right)} \geq \cdots \geq h_t^T c^{\left(N_{t+1}\right)} \geq 0$.
Consider the following algorithm to increase the value of $h_t^T P_t P_t^T h_t$:
\begin{enumerate}
\item
Call a quadruplet $\left(i,j,k,\ell\right)$
a problematic quadruplet
if $p_t^{\left(i,j+\ell\right)}>0$ and $p_t^{\left(i+k,j\right)}>0$
and if $i + k \leq N_t$.
Choose a problematic quadruplet with $i$ as small as possible.
If there is more than one such quadruplet,
choose one with $j$ as small as possible.
\item{\label{keystep}}
Set $\alpha=\min\left\{ p_t^{\left(i,j+\ell\right)},p_t^{\left(i+k,j\right)}\right\}$
and update the entries of $P_t$ with
\begin{align*}
p_t^{\left(i, j\right)} &= p_t^{\left(i, j\right)} + \alpha&
p_t^{\left(i, j+\ell\right)} &= p_t^{\left(i, j+\ell\right)} - \alpha\\
p_t^{\left(i+k, j\right)} &= p_t^{\left(i+k, j\right)} - \alpha&
p_t^{\left(i+k, j+\ell\right)} &= p_t^{\left(i+k, j+\ell\right)} + \alpha
\end{align*}
\item{\label{otherstep}}
If necessary, resort the columns
$c^{\left(j + \ell\right)}, c^{\left(j + \ell + 1\right)}, \ldots, c^{\left(N_{t+1}\right)}$
to ensure that $h_t^T c^{\left(j + \ell\right)} \geq h_t^T c^{\left(j + \ell + 1\right)} \geq \cdots \geq h_t^T c^{\left(N_{t+1}\right)}$.
\end{enumerate}
Note that step \ref{keystep} of the algorithm increases $h_t^T P_t P_t^T h_t$
or leaves $h_t^T P_t P_t^T h_t$ unchanged,
while step \ref{otherstep} ensures that
$h_t^T c^{\left(1\right)} \geq h_t^T c^{\left(2\right)} \geq \cdots \geq h_t^T c^{\left(N_{t+1}\right)} \geq 0$.
After repeated applications of the algorithm,
all the problematic quadruplets involving column
$c^{\left(1\right)}$ will eventually be gone
and the same too with columns $c^{\left(2\right)}$, $c^{\left(3\right)}$, etc.
Eventually, the algorithm will have no more problematic quadruplets to correct.
A similar algorithm can be applied to increase the value of $h_t^T Q_t Q_t^T h_t$.
On examination it is seen that the resulting matrix $\begin{pmatrix} P_t & Q_t \end{pmatrix}$
generates the same resampling scheme as described in part \ref{complexscheme}.
The proof of part \ref{simplescheme} is similar.
Because fixed population schemes satisfy
$\hat{V}_t^2\left[h_t\right] = \hat{V}_t^2\left[h_t + c\right]$ for $c \in \mathbb{R}$,
it can be assumed without loss of generality
that $h_t\left(\xi_t^{\left(i\right)}\right) > 0$ for all $\xi_t^{\left(i\right)} \neq c$.
But in this case, the schemes in part \ref{complexscheme} and part \ref{simplescheme} are identical,
and sorted stratified resampling represents the best possible strategy.
\end{proof}
The optimal scheme identified in Theorem \ref{optimal}
is an example of a \emph{sorting} scheme.
In more general sorting schemes,
particles can be sorted using any real-valued coordinate $\theta_t \colon E_t \rightarrow \mathbb{R}$
and then stratified resampling or stratified residual resampling can be used.
Sorting schemes have a long history
dating back at least to \citet{madow1944theory}.
Sorting schemes can lead to a beneficial stratification effect.
Each particle $\hat{\xi}_t^{\left(j\right)}$
is drawn from a subset of particles for which $h_t\left(\xi_t^{\left(i\right)}\right)$ values
are similar, thereby reducing resampling variance.
Theorem \ref{optimal} indicates that the best possible coordinate for sorting is $\theta_t = h_t$.
This is the first known result
which proves that sorting particles can produce an optimal resampling scheme.
The optimal scheme in Theorem \ref{optimal} can be difficult to implement exactly,
because the function
$h_t\left(x_t\right) = \E\left[\left.\prod_{s=t+1}^{T-1} G_s f\right| X_t = x_t\right]$
can be challenging to compute.
However, $h_t$ is not the only coordinate for sorting particles
that can lead to an effective resampling scheme.
The error formulas of the next section show that effective sorting
is possible with a wide range of different coordinates $\theta_t$.
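In practice, a sorting scheme can be implemented with any computable
surrogate coordinate $\theta_t$ in place of $h_t$. The sketch below
(an illustration only; the identity coordinate in the usage line is a
placeholder for a problem-specific choice) sorts particles by $\theta_t$
from highest to lowest and then applies stratified resampling:
\begin{verbatim}
import numpy as np

def sorted_stratified_resample(particles, w, theta, rng):
    """Sorting scheme: order particles by a coordinate theta (ideally
    an approximation of h_t), then apply stratified resampling."""
    order = np.argsort([-theta(x) for x in particles])  # high to low
    particles = [particles[i] for i in order]
    w = np.asarray(w, dtype=float)[order]
    n = len(w)
    u = (np.arange(n) + rng.random(n)) / n
    cdf = np.cumsum(w) / w.sum()
    idx = np.searchsorted(cdf, u, side="right")
    return [particles[i] for i in idx], np.full(n, w.mean())

rng = np.random.default_rng(7)
w = np.array([3.2, 2.4, .8, 1.6])
# toy particles in R, sorted by the identity coordinate theta(x) = x
print(sorted_stratified_resample([0.3, -1.2, 0.8, 0.1], w,
                                 lambda x: x, rng))
\end{verbatim}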
\subsection{Asymptotic error}{\label{asymtheory}}
In past work \cite{del2004feynman, chopin2004central, douc2008limit},
a central tool for analyzing SMC error has been
Central Limit Theorems (CLTs) of the form
\begin{equation*}
\sqrt{N_0}\left(\frac{
\overline{w}_{T-1}}{N_0}\sum_{i=1}^{N_T}
f\left(\hat{\xi}_{T-1}^{\left(i\right)},\xi_{T}^{\left(i\right)}\right)
- \E\left[\prod_{t=0}^{T-1} G_t f\right]\right)
\stackrel{\mathcal{D}}{\rightarrow} N\left(0, \eta^2\right)
\end{equation*}
where the quantity $\eta^2$ depends on the particular resampling scheme that is used.
CLTs have been proved for multinomial, multinomial residual and Bernoulli resampling
\cite{del2004feynman, chopin2004central, douc2008limit}.
In the present section, new error formulas are presented for stratified resampling and stratified residual resampling.
These error formulas are not CLTs;
instead they are upper bounds on \emph{asymptotic error}.
Asymptotic error is a new way to measure error that is more general than a CLT
and also more flexible for analysis.
Before presenting asymptotic error formulas,
it is therefore necessary to introduce the key features
of asymptotic error and explain how this error measurement tool can be interpreted.
Asymptotic error is a far-reaching generalization of the error rate in a CLT.
In a CLT, a sequence of random variables $\left(Y_n\right)_{n \geq 1}$
approaches a constant $c$ with error measured by an error rate $U_n$:
\begin{equation*}
\frac{Y_n - c}{U_n} \stackrel{\mathcal{D}}{\rightarrow} N\left(0, 1\right)
\end{equation*}
Thus, a CLT can only be proved when there is very precise knowledge of the error rate $U_n$.
In contrast, asymptotic error can be analyzed when knowledge of $U_n$ is less precise
and there is only an upper or lower bound on $U_n$.
A full definition of asymptotic error is provided below:
\begin{definition}{\label{def:asymerror}}
Suppose random variables $\left(Y_n\right)_{n \geq 1}$ satisfy
\begin{equation*}
\liminf_{n \rightarrow \infty} \E\left[\mathds{1}_{A_n} \left|\frac{Y_n - c}{U_n}\right|^2\right] \geq 1
\end{equation*}
for all possible sequences of sets $\left(A_n\right)_{n \geq 1}$ with $\Prob\left(A_n\right) \rightarrow 1$.
Then, $Y_n$ converges to $c$ with asymptotic error greater than or equal to $U_n$,
and write $\left|Y_n - c\right| \gtrsim U_n$.
Suppose random variables $\left(Y_n\right)_{n \geq 1}$ satisfy
\begin{equation*}
\limsup_{n \rightarrow \infty} \E\left[\mathds{1}_{B_n} \left|\frac{Y_n - c}{U_n}\right|^2\right] \leq 1
\end{equation*}
for some sequence of sets $\left(B_n\right)_{n \geq 1}$ with $\Prob\left(B_n\right) \rightarrow 1$.
Then, $Y_n$ converges to $c$ with asymptotic error less than or equal to
$U_n$, and write $\left|Y_n - c\right| \lesssim U_n$.
If both conditions are satisfied, $Y_n$ converges to $c$ with asymptotic error $U_n$, and write $\left|Y_n - c\right| \sim U_n$.
\end{definition}
A CLT can be viewed as a particular example
of asymptotic error,
as guaranteed by the following lemma:
\begin{lem}{\label{guarantee}}
Suppose random variables $\left(Y_n\right)_{n \geq 1}$ satisfy
$\frac{Y_n - c}{U_n} \stackrel{\mathcal{D}}{\rightarrow} N\left(0, 1\right)$
as $n \rightarrow \infty$.
Then, $\left|Y_n - c\right| \sim U_n$.
\end{lem}
\begin{proof}
Fatou's Lemma shows
$\liminf_{n \rightarrow \infty} \E\left[\mathds{1}_{A_n} \left|\frac{Y_n - c}{U_n}\right|^2\right] \geq 1$
for all sequences $\left(A_n\right)_{n \geq 1}$ with $\Prob\left(A_n\right) \rightarrow 1$.
Thus, $\left|Y_n - c\right| \gtrsim U_n$.
To show $\left|Y_n - c\right| \lesssim U_n$, construct a sequence $\left(B_n\right)_{n \geq 1}$
with the properties $\Prob\left(B_n\right) \rightarrow 1$ and $\limsup_{n \rightarrow \infty} \E\left[\mathds{1}_{B_n} \left|\frac{Y_n-c}{U_n}\right|^2\right] \leq 1$.
First let $L_n$ be the largest number such that
\begin{equation*}
\E\left[\mathds{1}\left\{\left|\frac{Y_n - c}{U_n}\right| < L_n\right\} \left|\frac{Y_n-c}{U_n}\right|^2\right] \leq 1
\end{equation*}
and note that $L_n$ is well-defined by the Monotone Convergence Theorem.
Set $B_n = \left\{\frac{\left|Y_n - c\right|}{U_n} < L_n\right\}$.
For any $\epsilon > 0$,
choose $M > 0$ large enough that
$\Prob\left\{\left|Z\right| < M\right\} > 1 - \frac{\epsilon}{2}$ where $Z \sim N\left(0, 1\right)$.
Since $\frac{Y_n - c}{U_n} \stackrel{\mathcal{D}}{\rightarrow} N\left(0, 1\right)$,
it follows that
$\Prob\left\{\frac{\left|Y_n - c\right|}{U_n} < M\right\} > 1 - \epsilon$
for all $n$ large enough.
Since $x \mapsto \mathds{1}\left\{\left|x\right| < M\right\}\left|x\right|^2$
is bounded and piecewise continuous,
\begin{equation*}
\E\left[\mathds{1}\left\{\left|\frac{Y_n - c}{U_n}\right| < M\right\} \left|\frac{Y_n-c}{U_n}\right|^2\right]
\rightarrow \E\left[\mathds{1}\left\{\left|Z\right| < M\right\} Z^2 \right] < 1
\end{equation*}
For all $n$ large enough, it follows that
$M < L_n$, $\left\{\frac{\left|Y_n - c\right|}{U_n} < M\right\} \subseteq B_n$, and
$\Prob \left( B_n \right) \geq \Prob\left\{\frac{\left|Y_n - c\right|}{U_n} < M\right\} > 1 - \epsilon$.
Since $\epsilon$ is arbitrary, $\Prob\left(B_n\right) \rightarrow 1$.
\end{proof}
Asymptotic error can be compared to mean squared error,
which is another common error metric, different from the error rate in the CLT.
Both asymptotic error and mean squared error are tools
to assess the value of an estimate and to provide confidence intervals around an estimate.
By Chebyshev's inequality, asymptotic error leads to confidence intervals:
\begin{equation*}
\left|Y_n - c\right| \lesssim U_n \implies
\limsup_{n \rightarrow \infty} \Prob\left\{\left|Y_n - c\right| \geq \epsilon U_n \right\} \leq \frac{1}{\epsilon^2}
\end{equation*}
The chief difference between asymptotic error and mean squared error
is robustness to perturbations.
Mean squared error $\E\left|Y_n - c\right|^2$ is quite sensitive to changes in
the behavior of $Y_n$ on a set of vanishing probability,
but asymptotic error is completely robust to these changes.
Thus the confidence intervals derived from asymptotic error bounds
can be much tighter than those derived from mean squared error bounds.
The rigorous treatment of asymptotic error
leads to new results in SMC analysis,
including the first known error formulas for
stratified resampling and stratified residual resampling.
In the following theorem,
these new formulas are presented alongside CLTs
for multinomial, multinomial residual, and Bernoulli resampling,
which are extended
from \cite{del2004feynman, chopin2004central, douc2008limit} to
have fewer restrictions on functions $\left(G_t\right)_{t \geq 0}$ and $f$:
\begin{thm}{\label{multvar}}
Assume $\E\left[\prod_{s=0}^t G_s\right] < \infty$ for $0 \leq t \leq T-1$, $\E\left|G_0 h_0\right|^2 < \infty$,
and $\E\left[\prod_{s=0}^t G_s \left|G_{t+1} h_{t+1}\right|^2\right] < \infty$ for $0 \leq t \leq T-1$.
If multinomial residual or stratified residual resampling is used, assume $\E\left[\prod_{s=0}^{t-1} G_s \mathds{1}\left\{\tilde{G}_t \in \left\{1,2,\ldots\right\} \right\}\right] = 0$ for $0 \leq t \leq T-1$ as well.
Set
$\tilde{G}_t = \E\left[\prod_{s=0}^{t-1} G_s \right] G_t \slash \E\left[\prod_{s=0}^t G_s\right]$
and set
\begin{equation*}
\eta^2 = \Var\left[G_0 h_0\right] + \sum_{t=0}^{T-1} \hat{\eta}_t^2 \left[h_t\right]
+ \sum_{t=0}^{T-1} \E\left[\prod_{s=0}^t G_s\right]
\E\left[\prod_{s=0}^t G_s \Var\left[\left.G_{t+1} h_{t+1} \right| X_t\right]\right]
\end{equation*}
where $\eta^2$ depends on a sequence of numbers $\left(\hat{\eta}_t^2\left[h_t\right]\right)_{0 \leq t \leq T-1}$.
First assume multinomial resampling, Bernoulli resampling, or multinomial residual resampling is used.
Then SMC estimates satisfy the CLT
\begin{equation*}
\sqrt{N_0}\left(\frac{
\overline{w}_{T-1}}{N_0}\sum_{i=1}^{N_T}
f\left(\hat{\xi}_{T-1}^{\left(i\right)},\xi_{T}^{\left(i\right)}\right)
- \E\left[\prod_{t=0}^{T-1} G_t f\right]\right)
\stackrel{\mathcal{D}}{\rightarrow} N\left(0, \eta^2\right)
\end{equation*}
where $\hat{\eta}_t^2\left[h_t\right]$ is determined by the resampling scheme:
\begin{flalign*}
\begin{array}{l l}
\text{multinomial} &
\left(\E\left[\prod_{s=0}^t G_s\right]\right)^2
\min_{c \in \mathbb{R}}
\E\left[\prod_{s=0}^t \tilde{G}_s
\left|h_t - c \right|^2\right]
\\[.3cm]
\text{multinomial residual} &
\left(\E\left[\prod_{s=0}^t G_s\right]\right)^2
\min_{c \in \mathbb{R}}
\E\left[\prod_{s=0}^{t-1} \tilde{G}_s \left\{\tilde{G}_t\right\} \left|h_t - c\right|^2 \right]
\\[.3cm]
\text{Bernoulli} &
\left(\E\left[\prod_{s=0}^t G_s\right]\right)^2
\E\left[\prod_{s=0}^{t-1} \tilde{G}_s \left\{\tilde{G}_t\right\}\left(1 - \left\{\tilde{G}_t\right\}\right)
h_t^2\right] \end{array}
&&
\end{flalign*}
Next, assume that at each resampling step particles are sorted by a coordinate $\theta_t$
and then stratified or stratified residual resampling is used.
Then,
\begin{equation*}
\left|\frac{\overline{w}_{T-1}}{N_0}\sum_{i=1}^{N_T}
f\left(\hat{\xi}_{T-1}^{\left(i\right)},\xi_{T}^{\left(i\right)}\right)
- \E\left[\prod_{t=0}^{T-1} G_t f\right]\right|
\lesssim \frac{\eta}{\sqrt{N_0}}
\end{equation*}
where $\hat{\eta}_t^2\left[h_t\right]$ is determined by the resampling scheme:
\begin{flalign*}
\begin{array}{l l}
\text{stratified} &
\left(\E\left[\prod_{s=0}^t G_s\right]\right)^2
\min_{p\colon \mathbb{R} \rightarrow \mathbb{R}}
\E \left[\prod_{s=0}^t \tilde{G}_s \left|h_t - p \left(\theta_t\right)\right|^2\right]
\\[.3cm]
\text{stratified residual} &
\left(\E\left[\prod_{s=0}^t G_s\right]\right)^2
\min_{p\colon \mathbb{R} \rightarrow \mathbb{R}}
\E \left[\prod_{s=0}^{t - 1} \tilde{G}_s \left\{\tilde{G}_t\right\} \left|h_t - p \left(\theta_t\right)\right|^2\right]
\end{array} &&
\end{flalign*}
\end{thm}
There are two main conclusions that can be drawn from Theorem \ref{multvar}
about how best to choose a resampling scheme.
The first conclusion
is that residual versions of a resampling scheme should be used whenever possible.
Error formulas for multinomial and multinomial residual
resampling are differentiated by a factor of $\tilde{G}_t$ for multinomial
and a factor of
$\left\{\tilde{G}_t\right\}$ for multinomial residual resampling.
Since $\left\{\tilde{G}_t\right\}$ is always as low or lower than $\tilde{G}_t$,
the multinomial residual resampling scheme can lead to reduced SMC error.
Similarly, stratified residual resampling has an improved asymptotic error upper bound
compared to stratified resampling.
The second conclusion that follows from Theorem \ref{multvar}
is that sorting schemes can substantially reduce error,
depending on the coordinate $\theta_t$ used for sorting.
Error formulas for multinomial and stratified resampling are distinguished by
a factor of $\min_{c \in \mathbb{R}}
\E\left[\prod_{s=0}^t \tilde{G}_s
\left|h_t - c \right|^2\right]$
for multinomial and a factor of
$\min_{p \colon \mathbb{R} \rightarrow \mathbb{R}}
\E \left[\prod_{s=0}^t \tilde{G}_s \left|h_t - p \left(\theta_t\right)\right|^2\right]$
for stratified resampling.
Since
$\min_{p \colon \mathbb{R} \rightarrow \mathbb{R}}
\E \left[\prod_{s=0}^t \tilde{G}_s \left|h_t - p \left(\theta_t\right)\right|^2\right]
\leq \min_{c \in \mathbb{R}}
\E\left[\prod_{s=0}^t \tilde{G}_s
\left|h_t - c \right|^2\right]$,
asymptotic error for stratified resampling is as low or lower
than asymptotic error for multinomial resampling.
In the simplest case where $\theta_t \equiv 0$,
particles are not sorted in any particular order and error reduction may be very mild;
on the other hand,
as the stratification effect due to sorting by $\theta_t$ increases,
the error contributed at each resampling step
approaches zero.
Similarly, asymptotic error for stratified residual resampling
is as low or lower than asymptotic error for multinomial residual resampling,
with a major reduction possible
depending on the coordinate $\theta_t$.
Below, two examples of resampling schemes that use sorting to achieve error reduction are described:
\begin{example}[Sorting in $\mathbb{R}^d$]
When applying SMC to a one-dimensional system,
\citet{kitagawa1996monte} sorted particles $\xi_t^{\left(i\right)}$ by their values in $\mathbb{R}$ and
then applied stratified resampling,
leading to a dramatic reduction in resampling variance.
Later, \citet{gerber2017negative} suggested a more general strategy
of sorting particles in $\mathbb{R}^d$
according to a Hilbert curve, a measurable one-to-one mapping
from $\mathbb{R}^d$ into $\mathbb{R}$.
In both cases, Theorem \ref{multvar} gives an upper bound on asymptotic error
with $\hat{\eta}_t^2\left[h_t\right] = 0$.
This is the lowest possible asymptotic error for any SMC scheme.
It should be noted however that
pre-asymptotic resampling variance for this sorting strategy
is difficult to estimate; further research may help
elucidate the practical efficiency of Hilbert curve sorting.
\end{example}
\begin{example}[Binning]
In binned resampling \cite{huber1996weighted},
the state space is partitioned into bins $B_1, B_2, \ldots, B_K$,
and particles $\xi_t^{\left(i\right)}$ are arranged
by bin number, from highest to lowest.
When stratified resampling is applied,
Theorem \ref{multvar} gives an upper bound on asymptotic error with
\begin{equation*}
\hat{\eta}_t^2\left[h_t\right] =
\left(\E \left[\prod_{s=0}^t G_s\right]\right)^2
\E \left[\prod_{s=0}^t \tilde{G}_s \sum_{k=1}^K \mathds{1}_{B_k}
\left|h_t - \frac{\E \left[\prod_{s=0}^t \tilde{G}_s \mathds{1}_{B_k} h_t\right]}
{\E \left[\prod_{s=0}^t \tilde{G}_s\right]}\right|^2\right]
\end{equation*}
As values of $h_t$ become increasingly similar in each bin $B_k$,
the formula above guarantees that asymptotic error must decrease.
In particular, as the diameter of the bins approaches zero in a region that grows to fill the state space $E_t$,
$\hat{\eta}_t^2\left[h_t\right]$ approaches the lowest possible level: $\hat{\eta}_t^2\left[h_t\right] = 0$.
\end{example}
\section{Conclusion} \label{sec:Conclusion}
The present work derives a theoretical framework that unifies past SMC scholarship
and establishes significant new results.
The framework uses a simple parametrization to describe a great variety of resampling schemes.
The theoretical framework includes a unified error analysis and asymptotic error formulas with a unified structure
that can be used to compare resampling schemes.
The resampling matrix framework combines a fresh look at common resampling schemes
with new technical tools to analyze SMC error.
Asymptotic error is defined in a new way,
as mean squared error outside a set of vanishing probability.
This notion of error leads to simple proofs and rigorous comparisons between resampling schemes.
Due to this innovation, asymptotic error formulas are now available
for stratified resampling and stratified residual resampling,
including the full range of unbounded functions $\left(G_t\right)_{t \geq 0}$ and $f$
used in practical implementations of SMC.
The framework leads to two concrete recommendations for how best to resample:
\begin{enumerate}
\item
Firstly, practitioners are encouraged to use stratified residual resampling instead of
multinomial residual resampling and stratified resampling instead of multinomial resampling
in order to reduce resampling variance.
Similar recommendations were given in \citet{douc2005comparison},
but resampling matrices provide a more intuitive and general explanation
for reductions in resampling variance.
\item
Secondly, sorting schemes can lead to extremely low asymptotic error rates.
These schemes are recommended when there is a coordinate $\theta_t$
that can be used to sort particles $\left(\xi_t^{\left(i\right)}\right)_{1 \leq i \leq N_t}$
to achieve a beneficial stratification effect in the resampling step.
\end{enumerate}
In summary, the unifying analysis in the current paper
shines light on the best ways to resample,
providing practical guidance to help SMC users make the most of
the powerful and versatile SMC algorithm.
\section{Appendix}
\subsection{Estimates of ratios}
SMC is often used to approximate ratios
\begin{equation*}
\frac{\sum_{j=1}^{N_{T}}\hat{w}_{T-1}^{\left(j\right)}f\left(\hat{\xi}_{T-1}^{\left(j\right)},\xi_{T}^{\left(j\right)}\right)}
{\sum_{j=1}^{N_T}\hat{w}_{T-1}^{\left(j\right)}
\mathds{1}\left\{\hat{\xi}_{T-1}^{\left(j\right)} \neq c\right\}}
\approx\frac{\E\left[\prod_{t=0}^{T-1} G_t f \right]}{\E\left[\prod_{t=0}^{T-1} G_t \right]}
\end{equation*}
In some cases, the denominator in the SMC estimate may equal zero,
and the estimate can be assigned
an arbitrary value when this occurs.
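In code, this convention can be made explicit with a simple guard (a minimal sketch; the variable names and the guard value $0.0$ are our arbitrary choices):
\begin{verbatim}
import numpy as np

def ratio_estimate(w_hat, f_vals, alive):
    # Self-normalized SMC ratio estimate with a zero-denominator guard.
    # w_hat: resampled weights; f_vals: f on the particle pairs;
    # alive: indicator that a particle has not been killed (xi != c).
    denom = np.sum(w_hat * alive)
    if denom == 0.0:
        return 0.0  # arbitrary value, as permitted above
    return np.sum(w_hat * f_vals) / denom
\end{verbatim}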
If $\E\left[\prod_{s=0}^{t} G_s \right]<\infty$ for $0 \leq t \leq T-1$
and $\E\left|\prod_{t=0}^{T-1} G_t f \right|<\infty$, then
Theorem \ref{weak} guarantees
\begin{equation*}
\begin{cases}
\sum_{j=1}^{N_{T}}\hat{w}_{T-1}^{\left(j\right)}f\left(\hat{\xi}_{T-1}^{\left(j\right)},\xi_{T}^{\left(j\right)}\right)
\stackrel{\Prob}{\rightarrow}
\E\left[\prod_{t=0}^{T-1} G_t f \right]
\\
\sum_{j=1}^{N_T}\hat{w}_{T-1}^{\left(j\right)}
\mathds{1}\left\{\hat{\xi}_{T-1}^{\left(j\right)} \neq c\right\}
\stackrel{\Prob}{\rightarrow}
\E\left[\prod_{t=0}^{T-1} G_t \right]
\end{cases}
\end{equation*}
Therefore, SMC estimates of ratios are convergent.
While expressions for the bias and variance of these estimates are challenging to derive,
asymptotic error for these estimates can be studied with the aid of the following lemma:
\begin{lem}{\label{slutskylemma}}
\begin{enumerate}
\item{\label{slutsky}}
If $Y_n \stackrel{\mathcal{D}}{\rightarrow} N\left(0, 1\right)$
and $W_n \stackrel{\Prob}{\rightarrow} c \in \mathbb{R}$,
then $Y_n W_n \stackrel{\mathcal{D}}{\rightarrow} N\left(0, c^2\right)$.
\item{\label{notslutsky}}
If $\left|Y_n\right| \lesssim 1$ and $W_n \stackrel{\Prob}{\rightarrow} c > 0$,
then $\left|Y_n W_n\right| \lesssim c$.
\end{enumerate}
\end{lem}
\begin{proof}
Part \ref{slutsky} follows from Slutsky's Theorem.
To prove part \ref{notslutsky}, first construct a sequence of sets $\left(C_n\right)_{n \geq 1}$ with the properties
$\Prob\left(C_n\right) \rightarrow 1$ and
$\limsup_{n \rightarrow \infty} \left\lVert \mathds{1}_{C_n} \frac{W_n}{c} \right\rVert_{\infty} \leq 1$.
Set $E_{m,n} = \left\{\frac{W_n}{c} \leq 1 + \frac{1}{m}\right\}$ for $m,n \geq 1$
and $D_{1,n} = E_{1,n}$ for $n \geq 1$.
By the hypothesis in part \ref{notslutsky},
there exists a number $N\left(m\right) \geq 1$ such that $\Prob\left(E_{m,n}\right) \geq 1 - \frac{1}{m}$
for $n \geq N\left(m\right)$.
Accordingly, for $m > 1$ define
\begin{equation*}
\begin{cases}
D_{m,n} = D_{m-1,n}, & n < N\left(m\right) \\
D_{m,n} = E_{m,n}, & n \geq N\left(m\right)
\end{cases}
\end{equation*}
By this construction,
for $m \geq M$ and $n \geq N\left(M\right)$,
$\left\lVert \mathds{1}_{D_{m,n}} \frac{W_n}{c}\right\rVert_{\infty} \leq 1 + \frac{1}{M}$
and $\Prob\left(D_{m,n}\right) \geq 1 - \frac{1}{M}$.
Setting $C_n = D_{n,n}$ gives the required sequence.
Lastly, select $\left(B_n\right)_{n \geq 1}$ so that $\Prob\left(B_n\right) \rightarrow 1$ and
$\limsup_{n \rightarrow \infty} \E\left[\mathds{1}_{B_n} Y_n^2\right] \leq 1$.
Then $D_n = B_n \cap C_n$ satisfies $\Prob\left(D_n\right) \rightarrow 1$ and
$\limsup_{n \rightarrow \infty} \E\left[\mathds{1}_{D_n} \left|\frac{Y_n W_n}{c}\right|^2\right]
\leq 1$.
\end{proof}
To apply Lemma \ref{slutskylemma}, set $\tilde{f} = f - \frac{\E\left[\prod_{t=0}^{T-1} G_t f\right]}{\E\left[\prod_{t=0}^{T-1} G_t \right]}$ and observe
\begin{equation*}
\frac{\sum_{j=1}^{N_T} \hat{w}_{T-1}^{\left(j\right)} \tilde{f}\left(\hat{\xi}_{T-1}^{\left(j\right)},\xi_{T}^{\left(j\right)}\right)}
{ \sum_{j=1}^{N_T} \hat{w}_{T-1}^{\left(j\right)} \mathds{1}\left\{\hat{\xi}_{T-1}^{\left(j\right)} \neq c\right\}}
= \frac{\sum_{j=1}^{N_{T}}\hat{w}_{T-1}^{\left(j\right)}f\left(\hat{\xi}_{T-1}^{\left(j\right)},\xi_{T}^{\left(j\right)}\right)}
{\sum_{j=1}^{N_T}\hat{w}_{T-1}^{\left(j\right)}
\mathds{1}\left\{\hat{\xi}_{T-1}^{\left(j\right)} \neq c\right\}}
- \frac{\E\left[\prod_{t=0}^{T-1} G_t f \right]}{\E\left[\prod_{t=0}^{T-1} G_t \right]}
\end{equation*}
Since $\frac{1}{N_0} \sum_{j=1}^{N_T} \hat{w}_{T-1}^{\left(j\right)} \mathds{1}\left\{\hat{\xi}_{T-1}^{\left(j\right)} \neq c\right\}
\stackrel{\Prob}{\rightarrow} \E\left[\prod_{t=0}^{T-1} G_t\right]$,
the asymptotic error of an SMC estimate is the asymptotic error of
$ \frac{1}{N_0} \sum_{j=1}^{N_T} \hat{w}_{T-1}^{\left(j\right)} \tilde{f}\left(\hat{\xi}_{T-1}^{\left(j\right)},\xi_{T}^{\left(j\right)}\right)$
scaled by a factor of $\left(\E\left[\prod_{t=0}^{T-1} G_t\right]\right)^{-1}$.
A corollary of Theorem \ref{multvar} gives precise expressions for asymptotic error:
\begin{cor}
Set
$\tilde{G}_t = \E\left[\prod_{s=0}^{t-1} G_s \right] G_t \slash \E\left[\prod_{s=0}^t G_s\right]$
and set
\begin{equation*}
\tilde{h}_t \left(x_t\right) = \E\left[\left.\prod_{s=t+1}^{T-1} \tilde{G}_s
\left(f - \frac{\E\left[\prod_{r=0}^{T-1} G_r f\right]}{\E\left[\prod_{r=0}^{T-1} G_r\right]}\right)\right| X_t = x_t\right]
\end{equation*}
Assume that $\E\left[\prod_{s=0}^t G_s\right] < \infty$ for $0 \leq t \leq T-1$, $\E\left|G_0 \tilde{h}_0\right|^2 < \infty$,
and $\E\left[\prod_{s=0}^t G_s \left|G_{t+1} \tilde{h}_{t+1}\right|^2\right] < \infty$ for $0 \leq t \leq T-1$.
If multinomial residual or stratified residual resampling is used, also assume $\E\left[\prod_{s=0}^{t-1} G_s \mathds{1}\left\{\tilde{G}_t \in \left\{1,2,\ldots\right\} \right\}\right] = 0$ for $0 \leq t \leq T-1$.
Define
\begin{equation*}
\eta^2 = \Var\left[\tilde{G}_0 \tilde{h}_0\right] + \sum_{t=0}^{T-1} \hat{\eta}_t^2 \left[\tilde{h}_t\right]
+ \sum_{t=0}^{T-1}
\E\left[\prod_{s=0}^t \tilde{G}_s \Var\left[\left.\tilde{G}_{t+1} \tilde{h}_{t+1} \right| X_t\right]\right]
\end{equation*}
where $\eta^2$ depends on a sequence of numbers $\left(\hat{\eta}_t^2\left[\tilde{h}_t\right]\right)_{0 \leq t \leq T-1}$.
First assume multinomial resampling, Bernoulli resampling, or multinomial residual resampling is used.
Then SMC estimates satisfy the CLT
\begin{equation*}
\sqrt{N_0}\left(\frac{\sum_{j=1}^{N_{T}}\hat{w}_{T-1}^{\left(j\right)}f\left(\hat{\xi}_{T-1}^{\left(j\right)},\xi_{T}^{\left(j\right)}\right)}
{\sum_{j=1}^{N_T}\hat{w}_{T-1}^{\left(j\right)}
\mathds{1}\left\{\hat{\xi}_{T-1}^{\left(j\right)} \neq c\right\}}
- \frac{\E\left[\prod_{t=0}^{T-1} G_t f \right]}{\E\left[\prod_{t=0}^{T-1} G_t \right]}\right)
\stackrel{\mathcal{D}}{\rightarrow} N\left(0, \eta^2\right)
\end{equation*}
where $\hat{\eta}_t^2\left[\tilde{h}_t\right]$ is determined by the resampling scheme:
\begin{align*}
\begin{array}{l l}
\text{multinomial} &
\E\left[\prod_{s=0}^t \tilde{G}_s
\tilde{h}_t^2\right]
\\[.3cm]
\text{multinomial residual} &
\min_{c \in \mathbb{R}}
\E\left[\prod_{s=0}^{t-1} \tilde{G}_s \left\{\tilde{G}_t\right\} \left|\tilde{h}_t - c\right|^2 \right]
\\[.3cm]
\text{Bernoulli} &
\E\left[\prod_{s=0}^{t-1} \tilde{G}_s \left\{\tilde{G}_t\right\}\left(1 - \left\{\tilde{G}_t\right\}\right)
\tilde{h}_t^2\right]
\end{array}
\end{align*}
Next assume at each resampling step particles are sorted by a coordinate $\theta_t$
and then stratified or stratified residual resampling is used.
Then,
\begin{equation*}
\left|\frac{\sum_{j=1}^{N_{T}}\hat{w}_{T-1}^{\left(j\right)}f\left(\hat{\xi}_{T-1}^{\left(j\right)},\xi_{T}^{\left(j\right)}\right)}
{\sum_{j=1}^{N_T}\hat{w}_{T-1}^{\left(j\right)}
\mathds{1}\left\{\hat{\xi}_{T-1}^{\left(j\right)} \neq c\right\}}
- \frac{\E\left[\prod_{t=0}^{T-1} G_t f \right]}{\E\left[\prod_{t=0}^{T-1} G_t \right]}\right|
\lesssim \frac{\eta}{\sqrt{N_0}}
\end{equation*}
where $\hat{\eta}_t^2\left[\tilde{h}_t\right]$ is determined by the resampling scheme:
\begin{align*}
\begin{array}{l l}
\text{stratified} &
\min_{p\colon \mathbb{R} \rightarrow \mathbb{R}}
\E \left[\prod_{s=0}^t \tilde{G}_s \left|\tilde{h}_t - p \left(\theta_t\right)\right|^2\right]
\\[.3cm]
\text{stratified residual} &
\min_{p\colon \mathbb{R} \rightarrow \mathbb{R}}
\E \left[\prod_{s=0}^{t - 1} \tilde{G}_s \left\{\tilde{G}_t\right\} \left|\tilde{h}_t - p \left(\theta_t\right)\right|^2\right]
\end{array}
\end{align*}
\end{cor}
\subsection{Proofs for Theorems \ref{weak}, \ref{bounded}, and \ref{multvar}}
To prove Theorem \ref{weak}, first introduce intermediate $\sigma$-algebras between $\mathcal{F}_{t-1}$
and $\mathcal{F}_t$:
\begin{equation*}
\begin{cases}
\mathcal{F}_t^{\left(0\right)} = \mathcal{F}_{t-1} \\
\mathcal{F}_t^{\left(i\right)} = \mathcal{F}_t^{\left(i-1\right)} \vee \sigma\left(\xi_t^{\left(i\right)}\right),
& t = 0, 1 \leq i \leq N_t - 1 \\
\mathcal{F}_t^{\left(i\right)} = \mathcal{F}_t^{\left(i-1\right)} \vee \sigma\left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_t^{\left(i\right)}\right),
& t > 0, 1 \leq i \leq N_t - 1 \\
\mathcal{F}_t^{\left(N_t\right)} = \mathcal{F}_t
\end{cases}
\end{equation*}
Next, define a martingale $M_t^{\left(i\right)} = \E\left[\left.\frac{1}{N_0}\sum_{k=1}^{N_T} \hat{w}_{T-1}^{\left(k\right)}
f\left(\hat{\xi}_{T-1}^{\left(k\right)}, \xi_T^{\left(k\right)}\right)\right| \mathcal{F}_t^{\left(i\right)}\right]$.
Since pairs
$\left(w_t^{\left(k\right)}, \xi_t^{\left(k\right)}\right)$ are conditionally independent
given $\mathcal{F}_{t-1}$,
it follows
\begin{equation}{\label{martrep2}}
M_t^{\left(i\right)} = \frac{1}{N_0} \sum_{k=1}^i w_t^{\left(k\right)} h_t \left(\xi_t^{\left(k\right)}\right)
+ \frac{1}{N_0} \sum_{k=i+1}^{N_t} \E\left[\left. w_t^{\left(k\right)} h_t \left(\xi_t^{\left(k\right)}\right)
\right| \mathcal{F}_{t-1}\right]
\end{equation}
The proof of Theorem \ref{weak} also requires two technical lemmas.
\begin{lem} \label{condconv}
For each $n\geq1$, suppose $\mathcal{G}_{n0}\subseteq\mathcal{G}_{n1}\subseteq\mathcal{G}_{n2}\subseteq\cdots\subseteq\mathcal{G}_{n,k_n}$
is a filtration and $\left(Y_{nj}\right)_{1\leq j\leq k_n}$
is a sequence of random variables with $Y_{nj}$ measurable in $\mathcal{G}_{nj}$.
Suppose
\begin{equation*}
\label{twoconditions}
\begin{cases}
\sum_{j=1}^{k_n} \E\left[\left|Y_{nj}\right|\mathds{1}\left\{ \left|Y_{nj}\right|>C\right\} \rvert \mathcal{G}_{n,j-1}\right]\stackrel{\Prob}{\rightarrow} 0, & C > 0 \\
\lim_{\lambda \rightarrow \infty} \sup_{n\geq1} \Prob \left\{ \sum_{j=1}^{k_n} \E\left[\left|Y_{nj}\right|\rvert\mathcal{G}_{n,j-1}\right]>\lambda\right\} = 0
\end{cases}
\end{equation*}
Then, $\sum_{j=1}^{k_n}\left\{ Y_{nj}-\E\left[Y_{nj}\rvert\mathcal{G}_{n,j-1}\right]\right\} \stackrel{\Prob}{\rightarrow}0$.
\end{lem}
\begin{proof}
The lemma can be traced back to the early martingale literature,
particularly \citet[pg. 45-47]{hall1980martingale} and \citet[pg. 626]{mcleish1974dependent}.
The lemma appears in \citet{douc2008limit},
who also use the lemma to prove convergence of SMC schemes.
\end{proof}
\begin{lem}{\label{weaklemma}}
If $\E\left[\prod_{s=0}^{t} G_s \right]<\infty$ for $0 \leq t \leq T-1$
and $\E\left|\prod_{t=0}^{T-1} G_t f \right|<\infty$, then for each $0 \leq t \leq T$
and $C > 0$
\begin{align}
\label{weaklindeberg}
&
\E\left[\left. \frac{1}{N_0} \sum_{i=1}^{N_t} \hat{w}_{t-1}^{\left(i\right)}
\left| f\left(\xi_t^{\left(i\right)}\right) \right|
\mathds{1}\left\{\frac{1}{N_0} \hat{w}_{t-1}^{\left(i\right)} \left|f\left(\xi_t^{\left(i\right)}\right)\right|\geq C\right\}
\right\rvert\mathcal{F}_{t-1} \right]
\stackrel{\Prob}{\rightarrow} 0
\end{align}
\end{lem}
\begin{proof}[Proof of Lemma \ref{weaklemma}]
Use induction on the time index $0 \leq t \leq T$.
For the $t = 0$ case, the Dominated Convergence Theorem shows
\begin{align*}
& \quad \E\left[\frac{1}{N_0}\sum_{i=1}^{N_0}
G_0\left(\xi_0^{\left(i\right)}\right)
\left|f\left(\xi_0^{\left(i\right)}\right)\right|
\mathds{1}\left\{\frac{1}{N_0}G_0\left(\xi_0^{\left(i\right)}\right)
\left|f\left(\xi_0^{\left(i\right)}\right)\right|\geq C\right\}\right] \\
&= \E\left[G_0 \left|f\right| \mathds{1}\left\{ \frac{1}{N_0} G_0 \left|f\right| \geq C\right\}\right]
\rightarrow 0
\end{align*}
Next, assume \eqref{weaklindeberg} holds for all times $0 \leq s \leq t-1$ and consider a time $t \geq 1$.
By the induction assumption,
\begin{align*}
& \E\left[\left.\frac{1}{N_0} \sum_{i=1}^{N_{t-1}}
w_{t-1}^{\left(i\right)}
\mathds{1}\left\{\frac{1}{N_0} w_{t-1}^{\left(i\right)} \geq C\right\}
\right\rvert\mathcal{F}_{t-1}\right]
\stackrel{\Prob}{\rightarrow} 0,
& C>0
\end{align*}
For $\delta > 0$, calculate
\begin{align*}
& \quad \Prob\left\{\max_{1 \leq i \leq N_{t-1}} \frac{w_{t-1}^{\left(i\right)}}{N_0} \geq \delta\right\}
= \E\left[\Prob\left\{\left.\max_{1 \leq i \leq N_{t-1}} \frac{w_{t-1}^{\left(i\right)}}{N_0} \geq \delta
\right|\mathcal{F}_{t-1}\right\}\right] \\
& \leq \E\left[\min\left\{1,
\frac{1}{\delta} \E\left[\left.\frac{1}{N_0} \sum_{i=1}^{N_{t-1}}
w_{t-1}^{\left(i\right)}
\mathds{1}\left\{\frac{1}{N_0} w_{t-1}^{\left(i\right)} \geq \delta\right\}
\right\rvert\mathcal{F}_{t-1}\right]\right\}\right]
\end{align*}
Sending $N_0$ to infinity, it follows that
$\max_{1 \leq i \leq N_{t-1}} \frac{w_{t-1}^{\left(i\right)}}{N_0}
\stackrel{\Prob}{\rightarrow} 0$, and by Assumption \ref{assumption1}
$\max_{1\leq j \leq N_t}\frac{\hat{w}_{t-1}^{\left(j\right)}}{N_0}\stackrel{\Prob}{\rightarrow}0$.
Now for $C>0$ and $\delta>0$, define
$\nu = \left|f\right| \mathds{1}\left\{\left|f\right| \geq \frac{C}{\delta}\right\}$.
Calculate
\begin{align*}
& \quad \Prob \left\{
\E\left[\left.\frac{1}{N_{0}} \sum_{i=1}^{N_t}
\hat{w}_{t-1}^{\left(i\right)} \left| f\left(\xi_t^{\left(i\right)}\right) \right|
\mathds{1}\left\{\frac{1}{N_0} \hat{w}_{t-1}^{\left(i\right)}\left| f\left(\xi_t^{\left(i\right)}\right) \right|\geq C\right\}
\right\rvert\mathcal{F}_{t-1}\right]
> \epsilon\right\} \\
& \leq
\Prob \left\{\max_{1\leq j\leq N_t}\frac{\hat{w}_{t-1}^{\left(j\right)}}{N_0} > \delta\right\}
+ \frac{1}{\epsilon}
\E\left[\frac{1}{N_{0}} \sum_{i=1}^{N_t} \hat{w}_{t-1}^{\left(i\right)}
\nu \left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_t^{\left(i\right)}\right)\right] \\
& = \Prob \left\{\max_{1\leq j\leq N_t}\frac{\hat{w}_{t-1}^{\left(j\right)}}{N_0} > \delta\right\}
+ \frac{1}{\epsilon}
\E\left[\prod_{s=0}^{t-1} G_s \left| f\right| \mathds{1}\left\{\left|f\right| \geq \frac{C}{\delta}\right\}\right]
\end{align*}
Sending $N_0$ to infinity and $\delta$ to zero verifies \eqref{weaklindeberg} at time $t$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{weak}]
Lemma \ref{weaklemma} verifies the first condition of Lemma \ref{condconv}, namely,
\begin{equation*}
\sum_{t=0}^T \sum_{i=1}^{N_t}
\E\left[\left.
\frac{1}{N_0} w_t^{\left(i\right)} \left| h_t \left(\xi_t^{\left(i\right)}\right) \right|
\mathds{1}\left\{\frac{1}{N_0} w_t^{\left(i\right)} \left|h_t \left(\xi_t^{\left(i\right)}\right)\right| > C\right\}
\right|\mathcal{F}_t^{\left(i-1\right)}\right]
\stackrel{\Prob}{\rightarrow} 0
\end{equation*}
To verify the second condition, observe
\begin{align*}
\label{tightness}
& \quad \Prob \left\{\sum_{t=0}^T \sum_{i=1}^{N_t} \E \left[\left.\frac{1}{N_0} w_t^{\left(i\right)} \left|h_t\left(\xi_t^{\left(i\right)}\right)\right|
\right\rvert \mathcal{F}_t^{\left(i - 1\right)}\right] > \lambda
\right\} \\
& \leq \frac{1}{\lambda} \sum_{t=0}^T \E \left[\frac{1}{N_0} \sum_{j=1}^{N_t} w_t^{\left(j\right)}
\left|h_t\left(\xi_t^{\left(j\right)}\right)\right|\right]
\leq \frac{T+1}{\lambda} \E\left[\prod_{t=0}^{T-1} G_t \left|f\right|\right]
\end{align*}
The last quantity tends to zero as $\lambda \rightarrow \infty$.
Apply Lemma \ref{condconv} to conclude.
\end{proof}
\begin{proof}[Proof of Theorem \ref{bounded}]
The proof
uses a standard variance decomposition for martingales:
\begin{equation*}
\Var\left[M_T\right]
= \sum_{t=0}^T \sum_{i=1}^{N_t}
\E\left[\Var\left[\left.M_t^{\left(i\right)} \right| \mathcal{F}_t^{\left(i-1\right)}\right]\right]
\leq \E\left[\sum_{t=0}^T \sum_{i=1}^{N_t}
\left|\frac{1}{N_0} w_t^{\left(i\right)} h_t\left(\xi_t^{\left(i\right)}\right) \right|^2 \right]
\end{equation*}
Since functions $G_t$ are bounded, weights
$\hat{w}_t^{\left(i\right)}$ are also bounded, with $\hat{w}_t^{\left(i\right)} \leq \prod_{s=0}^t C_s \sup G_s$.
Thus, conclude
\begin{align*}
\Var\left[M_T\right] & \leq \sum_{t=0}^T \left(\prod_{s=0}^{t-1} C_s \sup G_s\right)
\E\left[ \frac{1}{N_0^2} \sum_{i=1}^{N_t}
\hat{w}_{t-1}^{\left(i\right)}
\left|G_t \left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_t^{\left(i\right)}\right) h_t\left(\xi_t^{\left(i\right)}\right) \right|^2 \right] \\
& = \frac{1}{N_0} \sum_{t=0}^T \left(\prod_{s=0}^{t-1} C_s \sup G_s\right)
\E\left[ \prod_{s=0}^{t-1} G_s \left|G_t h_t\right|^2\right] \\
& \leq \frac{1}{N_0} \E\left[\prod_{t=0}^{T-1} G_t f^2\right] \prod_{t=0}^{T-1} \sup G_t
\sum_{t=0}^T \prod_{s=0}^{t-1} C_s
\end{align*}
\end{proof}
The proof of Theorem \ref{multvar} requires a series of lemmas.
\begin{lem}{\label{tools}}
For each $n\geq1$, suppose $\mathcal{G}_{n0}\subseteq\mathcal{G}_{n1}\subseteq\mathcal{G}_{n2}\subseteq\cdots\subseteq\mathcal{G}_{n,k_n}$
is a filtration and $S_{n,k_n} = \sum_{j=1}^{k_n} Y_{nj}$ is the sum of martingale differences
with $\E\left[\left.Y_{nj}\right| \mathcal{G}_{n,j-1}\right] = 0$.
Define $V_{n, k_n}^2 = \sum_{j=1}^{k_n} \Var\left[\left.Y_{nj}\right| \mathcal{G}_{n,j-1}\right]$.
\begin{enumerate}[label = (\alph*)]
\item{\label{parta}}
If $V_{n, k_n}^2 \stackrel{\Prob}{\rightarrow} 1$ and if
$\sum_{j=1}^{k_n} \E\left[\left.Y_{nj}^2 \mathds{1}\left\{\left|Y_{nj}\right| > C\right\}\right| \mathcal{G}_{n,j-1}\right] \stackrel{\Prob}{\rightarrow} 0$ for each $C > 0$,
then $S_{n, k_n} \stackrel{\mathcal{D}}{\rightarrow} N\left(0, 1\right)$.
\item{\label{partb}}
If $\Prob\left\{V_{n, k_n}^2 > 1 + \epsilon\right\} \rightarrow 0$ for all $\epsilon > 0$,
then $\left|S_{n,k_n}\right| \lesssim 1$.
\end{enumerate}
\end{lem}
\begin{proof}
Part \ref{parta} is the martingale
CLT \cite[pg.58-59]{hall1980martingale}.
To prove part \ref{partb}, first set $E_{m,n} = \left\{V_{n,k_n}^2 \leq 1 + \frac{1}{m}\right\}$
for $m,n \geq 1$ and $D_{1,n} = E_{1,n}$ for $n \geq 1$.
By the assumption in part \ref{partb}, there exists a number $N\left(m\right) \geq 1$ such that $\Prob\left(E_{m,n}\right) \geq 1 - \frac{1}{m}$
for $n \geq N\left(m\right)$.
Accordingly, for $m > 1$ define
\begin{equation*}
\begin{cases}
D_{m,n} = D_{m-1,n}, & n < N\left(m\right) \\
D_{m,n} = E_{m,n}, & n \geq N\left(m\right)
\end{cases}
\end{equation*}
By this construction,
for any $m \geq M$ and $n \geq N\left(M\right)$,
$\E\left[\mathds{1}_{D_{m,n}} S_{n,k_n}^2 \right] \leq 1 + \frac{1}{M}$
and $\Prob\left(D_{m,n}\right) \geq 1 - \frac{1}{M}$.
Setting $B_n = D_{n,n}$ gives $\limsup_{n \rightarrow \infty} \E\left[\mathds{1}_{B_n} S_{n,k_n}^2\right] \leq 1$
and $\Prob\left(B_n\right) \rightarrow 1$.
\end{proof}
\begin{lem}{\label{lemma2}}
Assume
$\E\left[\prod_{s=0}^{t} G_s \right]<\infty$ for $0 \leq t \leq T-1$
and assume $\E\left[\prod_{s=0}^{t-1} G_s \left|G_t h_t\right|^2 \right]<\infty$ for $0 \leq t \leq T$.
Then for each $0 \leq t \leq T$ and $C > 0$,
\begin{equation*}
\E\left[\left. \frac{1}{N_0} \sum_{i=1}^{N_t}
\left| w_t^{\left(i\right)} h_t\left(\xi_t^{\left(i\right)}\right) \right|^2
\mathds{1}\left\{\frac{1}{\sqrt{N_0}} w_t^{\left(i\right)}
\left| h_t\left(\xi_t^{\left(i\right)}\right) \right|\geq C\right\}
\right\rvert\mathcal{F}_{t-1} \right]
\stackrel{\Prob}{\rightarrow} 0
\end{equation*}
\end{lem}
\begin{proof}
For the $t = 0$ case, use the Dominated Convergence Theorem.
For $1 \leq t \leq T$ and $C > 0$, define
$\nu = \left|G_t h_t\right|^2 \mathds{1}\left\{\left|G_t h_t\right| \geq \frac{C \sqrt{N_0}}{2 \E\left[\prod_{s=0}^{t-1} G_s\right]}\right\}$.
Calculate
\begin{align*}
& \quad \Prob \left\{
\E\left[\left.\frac{1}{N_{0}} \sum_{i=1}^{N_t}
\left|w_t^{\left(i\right)} h_t\left(\xi_t^{\left(i\right)}\right) \right|^2
\mathds{1}\left\{\frac{1}{\sqrt{N_0}}
\left|w_t^{\left(i\right)} h_t\left(\xi_t^{\left(i\right)}\right) \right| \geq C\right\}
\right\rvert\mathcal{F}_{t-1}\right]
> \epsilon\right\} \\
& \leq
\Prob \left\{\overline{w}_{t-1} > 2 \E\left[\prod_{s=0}^{t-1} G_s\right]\right\}
+ \frac{2}{\epsilon} \E\left[\prod_{s=0}^{t-1} G_s\right]
\E\left[\frac{\overline{w}_{t-1}}{N_0} \sum_{i=1}^{N_t}
\nu \left(\hat{\xi}_{t-1}^{\left(i\right)}, \xi_t^{\left(i\right)}\right)\right] \\
& = \Prob \left\{\overline{w}_{t-1} > 2 \E\left[\prod_{s=0}^{t-1} G_s\right]\right\}
+ \frac{2}{\epsilon} \E\left[\prod_{s=0}^{t-1} G_s\right]
\E\left[\prod_{s=0}^{t-1} G_s \nu \right]
\end{align*}
Since $\overline{w}_{t-1} \stackrel{\Prob}{\rightarrow} \E\left[\prod_{s=0}^{t-1} G_s\right]$ by Theorem \ref{weak},
both terms vanish upon sending $N_0$ to infinity.
\end{proof}
\begin{lem}{\label{CLT}}
Assume $\E\left[\prod_{s=0}^t G_s\right] < \infty$ for $0 \leq t \leq T-1$, $\E\left|G_0 h_0\right|^2 < \infty$,
and $\E\left[\prod_{s=0}^t G_s \left|G_{t+1} h_{t+1}\right|^2\right] < \infty$ for $0 \leq t \leq T-1$.
Define
\begin{equation*}
\eta^2 = \Var\left[G_0 h_0\right] + \sum_{t=0}^{T-1} \hat{\eta}_t^2 \left[h_t\right]
+ \sum_{t=0}^{T-1} \E\left[\prod_{s=0}^t G_s\right]
\E\left[\prod_{s=0}^t G_s \Var\left[\left.G_{t+1} h_{t+1} \right| X_t\right]\right]
\end{equation*}
where $\eta^2$ depends on numbers $\left(\hat{\eta}_t^2\left[h_t\right]\right)_{0 \leq t \leq T-1}$.
\begin{enumerate}[label = (\alph*)]
\item{\label{partone}}
If $N_0 \hat{V}^2_t\left[h_t\right] \stackrel{\Prob}{\rightarrow} \hat{\eta}_t^2\left[h_t\right]$ for each $0 \leq t \leq T-1$,
then
\begin{equation*}
\sqrt{N_0} \left(\frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T}
f\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right) -
\E\left[\prod_{t=0}^{T-1} G_t f\right]\right)
\stackrel{\mathcal{D}}{\rightarrow} N\left(0, \eta^2\right)
\end{equation*}
\item{\label{parttwo}}
If $\lim_{N_0 \rightarrow \infty} \Prob\left\{N_0 \hat{V}^2_t\left[h_t\right] \geq \hat{\eta}_t^2\left[h_t\right] + \epsilon\right\} = 0$
for each $0 \leq t \leq T-1$ and $\epsilon > 0$,
\begin{equation*}
\left|\frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T}
f\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right) -
\E\left[\prod_{t=0}^{T-1} G_t f\right]\right|
\lesssim \frac{\eta}{\sqrt{N_0}}
\end{equation*}
\end{enumerate}
\end{lem}
\begin{proof}
The proof uses Lemma \ref{tools} to analyze asymptotic behavior of the martingale $\frac{\sqrt{N_0}}{\eta} M_t^{\left(i\right)}$, where $M_t^{\left(i\right)}$ is defined in equation \eqref{martrep2}.
First compute the sum of conditional variances
\begin{align*}
& \quad \sum_{t=0}^T \sum_{i=1}^{N_t} \Var\left[\left.M_t^{\left(i\right)} \right| \mathcal{F}_t^{\left(i-1\right)}\right]
= \frac{\Var\left[G_0 h_0\right]}{N_0}
+ \sum_{t=1}^T \Var\left[\left.M_t \right| \mathcal{F}_{t-1}\right] \\
& = \frac{\Var\left[G_0 h_0\right]}{N_0}
+ \sum_{t=1}^T \left(\Var\left[\left.\E\left[\left.M_t \right| \hat{\mathcal{F}}_{t-1}\right] \right| \mathcal{F}_{t-1}\right]
+ \E\left[\left.\Var\left[\left.M_t \right| \hat{\mathcal{F}}_{t-1}\right] \right| \mathcal{F}_{t-1}\right]\right) \\
& = \frac{\Var\left[G_0 h_0\right]}{N_0}
+ \sum_{t=1}^T \Var\left[\left. \frac{\overline{w}_t}{N_0} \sum_{i=1}^{N_t} h_t\left(\xi_t^{\left(i\right)}\right) \right| \mathcal{F}_{t-1}\right] \\
& + \sum_{t=1}^T \E\left[\left.
\frac{\overline{w}_{t-1}^2}{N_0^2} \sum_{i=1}^{N_t} \Var\left[\left.G_t h_t\right| X_{t-1} = \hat{\xi}_{t-1}^{\left(i\right)}\right] \right| \mathcal{F}_{t-1}\right] \\
& = \frac{\Var\left[G_0 h_0\right]}{N_0} + \sum_{t=1}^T \hat{V}_t^2\left[h_t\right]
+ \sum_{t=1}^T \frac{\overline{w}_{t-1}}{N_0^2} \sum_{i=1}^{N_{t-1}} w_{t-1}^{\left(i\right)} \Var\left[\left.G_t h_t\right| X_{t-1} = \xi_{t-1}^{\left(i\right)}\right]
\end{align*}
Theorem \ref{weak} shows that $\overline{w}_t \stackrel{\Prob}{\rightarrow} \E\left[\prod_{s=0}^{t-1} G_s\right]$
and
\begin{equation*}
\frac{1}{N_0} \sum_{i=1}^{N_{t-1}} w_{t-1}^{\left(i\right)} \Var\left[\left.G_t h_t\right| X_{t-1} = \xi_{t-1}^{\left(i\right)}\right]
\stackrel{\Prob}{\rightarrow}
\E\left[\prod_{s=0}^{t-1} G_s \Var\left[\left.G_t h_t \right| X_{t-1}\right]\right]
\end{equation*}
Next, for $C > 0$ and $0 \leq t \leq T$, a useful inequality of \citet{dvoretzky1972asymptotic} gives
\begin{align*}
& \quad N_0 \sum_{i=1}^{N_t} \E\left[\left.
\left|M_t^{\left(i\right)} - M_t^{\left(i-1\right)}\right|^2
\mathds{1}\left\{\sqrt{N_0}\left|M_t^{\left(i\right)} - M_t^{\left(i-1\right)}\right| \geq C\right\}
\right| \mathcal{F}_t^{\left(i-1\right)}\right] \\
& \leq 4
\E\left[\left. \frac{1}{N_0} \sum_{i=1}^{N_t}
\left| w_t^{\left(i\right)} h_t\left(\xi_t^{\left(i\right)}\right) \right|^2
\mathds{1}\left\{\frac{1}{\sqrt{N_0}} w_t^{\left(i\right)}
\left| h_t\left(\xi_t^{\left(i\right)}\right) \right|\geq \frac{C}{2}\right\}
\right\rvert\mathcal{F}_{t-1} \right]
\end{align*}
This last term vanishes upon sending $N_0$ to infinity by Lemma \ref{lemma2}.
Thus, the conditions of Lemma \ref{tools} are satisfied, and parts \ref{partone} and \ref{parttwo} follow.
\end{proof}
\begin{lem}{\label{technical1}}
Assume $\E\left[\prod_{s=0}^t G_s\right] < \infty$ for $0 \leq t \leq T-1$ and assume
$\E\left|\prod_{t=0}^T G_t f\right| < \infty$.
Set
$\tilde{G}_T = \E\left[\prod_{t=0}^{T-1} G_t \right] G_T \slash \E\left[\prod_{t=0}^T G_t\right]$.
Then
\begin{multline}{\label{awful1}}
\frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T}
\left\{\frac{w_T^{\left(i\right)}}{\overline{w}_T}\right\}
\left(1 - \left\{\frac{w_T^{\left(i\right)}}{\overline{w}_T}\right\}\right)
f\left(\xi_T^{\left(i\right)}\right) \\
\stackrel{\Prob}{\rightarrow}
\E\left[\prod_{t=0}^{T-1} G_t \left\{\tilde{G}_T\right\}\left(1 - \left\{\tilde{G}_T\right\}\right) f \right]
\end{multline}
If additionally $\E\left[\prod_{t=0}^{T-1} G_t \mathds{1}\left\{\tilde{G}_T \in \left\{1, 2, \ldots\right\}\right\}\right] = 0$,
then
\begin{equation}{\label{awful2}}
\frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T}
\left\{\frac{w_T^{\left(i\right)}}{\overline{w}_T}\right\}
f\left(\xi_T^{\left(i\right)}\right)
\stackrel{\Prob}{\rightarrow}
\E\left[\prod_{t=0}^{T-1} G_t \left\{\tilde{G}_T\right\} f\right]
\end{equation}
\end{lem}
\begin{proof}
For a proof of equation \eqref{awful2} and a special case of equation \eqref{awful1}, see \citet{douc2008limit}.
To prove the more general case of \eqref{awful1},
first define
$L\left(x\right) = \left\{x\right\} - \left\{x\right\}^2$.
By Theorem \ref{weak},
\begin{equation*}
\frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T}
L\left(\tilde{G}_T\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right)\right)
f\left(\xi_T^{\left(i\right)}\right)
\stackrel{\Prob}{\rightarrow}
\E\left[\prod_{t=0}^{T-1} G_t \left\{\tilde{G}_T\right\}\left(1 - \left\{\tilde{G}_T\right\}\right) f\right]
\end{equation*}
Thus, it suffices to show
\begin{equation*}
\frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T}
\left|L\left(\frac{w_T^{\left(i\right)}}{\overline{w}_T}\right)
- L\left(\tilde{G}_T\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right)\right)\right|
\left|f\left(\xi_T^{\left(i\right)}\right)\right|
\stackrel{\Prob}{\rightarrow} 0
\end{equation*}
Since $x \mapsto L\left(x\right)$ has Lipschitz constant $1$,
for $\epsilon > 0$ and $\delta > 0$ it follows that
\begin{align*}
& \quad \Prob \left\{
\frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T}
\left|L\left(\frac{w_T^{\left(i\right)}}{\overline{w}_T}\right) -
L\left(\tilde{G}_T\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right)\right)\right|
\left|f\left(\xi_T^{\left(i\right)}\right)\right|
> \epsilon \right\} \\
& \leq \Prob \left\{
\left|\frac{\overline{w}_{T-1}}{\overline{w}_T}
\frac{\E\left[\prod_{t=0}^T G_t\right]}{\E\left[\prod_{t=0}^{T-1} G_t\right]} - 1\right| > \delta \right\} \\
& + \frac{\delta}{\epsilon}
\E \left[\frac{\overline{w}_{T-1}}{N_0} \sum_{i=1}^{N_T}
\tilde{G}_T\left(\hat{\xi}_{T-1}^{\left(i\right)}, \xi_T^{\left(i\right)}\right)
\left|f\left(\xi_T^{\left(i\right)}\right)\right|\right] \\
& = \Prob \left\{
\left|\frac{\overline{w}_{T-1}}{\overline{w}_T}
\frac{\E\left[\prod_{t=0}^T G_t\right]}{\E\left[\prod_{t=0}^{T-1} G_t\right]} - 1\right| > \delta \right\}
+ \frac{\delta}{\epsilon}
\E \left[\prod_{t=0}^{T-1} G_t \tilde{G}_T \left|f\right| \right]
\end{align*}
Both terms vanish upon sending $N_0$ to infinity and then $\delta$ to $0$.
\end{proof}
\begin{lem}{\label{technical2}}
Assume $\E\left[\prod_{s=0}^t G_s\right] < \infty$ for $0 \leq t \leq T-1$
and assume $\E\left[\prod_{t=0}^T G_t f^2\right] < \infty$.
At resampling step $T$, assume particles are sorted by a coordinate $\theta_T$
and then stratified or stratified residual resampling is used.
Then for any $p \colon \mathbb{R} \rightarrow \mathbb{R}$
with $\E\left[\prod_{t=0}^T G_t \left|p\left(\theta_T\right)\right|^2\right] < \infty$,
\begin{align*}
& \limsup_{N_0 \rightarrow \infty} \Prob\left\{N_0 \hat{V}_T^2\left[f\right]
> \left(1 + \epsilon\right) N_0 \hat{V}_T^2\left[f - p\left(\theta_T\right)\right] + \epsilon \right\} < \epsilon,
& \epsilon > 0
\end{align*}
\end{lem}
\begin{proof}
Fix $\delta > 0$ and select $\eta \in C_c\left(\mathbb{R}\right)$ approximating $p$ so that
$\E \left[\prod_{t=0}^T G_t \left|\eta \left(\theta_T\right) - p\left(\theta_T\right)\right|^2\right] < \delta$.
Applying Cauchy's inequality with $\epsilon$,
\begin{equation*}
\hat{V}_T^2\left[f\right] \leq
\left(2 + \frac{2}{\epsilon}\right)
\left(\hat{V}_T^2\left[\eta\left(\theta_T\right)\right]
+ \hat{V}_T^2\left[\eta\left(\theta_T\right) - p\left(\theta_T\right)\right]\right)
+ \left(1 + \epsilon\right) \hat{V}_T^2\left[f - p\left(\theta_T\right)\right]
\end{equation*}
To prove the result it suffices to bound
$\hat{V}_T^2\left[\eta\left(\theta_T\right)\right]$
and $\hat{V}_T^2\left[\eta\left(\theta_T\right) - p\left(\theta_T\right)\right]$.
First bound $\hat{V}_T^2\left[\eta\left(\theta_T\right)\right]$.
On the event that $\hat{w}_T \leq 2\E\left[\prod_{t=0}^T G_t\right]$, it follows
\begin{align*}
& \quad \hat{V}_T^2\left[\eta\left(\theta_T\right)\right]
\leq \left(2\E\left[\prod_{t=0}^T G_t\right]\right)^2
\Var\left[\left.\frac{1}{N_0} \sum_{j=1}^{N_{T+1}} \eta\left(\hat{\theta}_T^{\left(j\right)}\right)
\right| \mathcal{F}_T\right] \\
& = \left(2\E\left[\prod_{t=0}^T G_t\right]\right)^2
\frac{1}{N_0^2} \sum_{j=1}^{N_{T+1}} \Var\left[\left. \eta\left(\hat{\theta}_T^{\left(j\right)}\right)
\right| \mathcal{F}_T\right]
\end{align*}
where $\hat{\theta}_T^{\left(j\right)}$ denotes $\theta_T\left(\hat{\xi}_T^{\left(j\right)}\right)$.
In the resampling step,
a series of particles $\left(\hat{\xi}_T^{\left(j\right)}\right)_{1 \leq j \leq J}$ is randomly selected
(other particles may be deterministically selected)
with $L^{\left(0\right)} \geq \hat{\theta}_T^{\left(1\right)} \geq L^{\left(1\right)} \geq \hat{\theta}_T^{\left(2\right)} \geq \cdots \geq \hat{\theta}_T^{\left(J\right)} \geq L^{\left(J\right)}$ for some
$\mathcal{F}_T$-measurable random variables $L^{\left(j\right)}$. Therefore,
\begin{align*}
& \quad \sum_{j=1}^J \Var\left[\left.\eta\left(\hat{\theta}_T^{\left(j\right)}\right)\right| \mathcal{F}_T\right]
\leq \frac{1}{4} \sum_{j=1}^J \left|\sup_{x \in \left[L^{\left(j-1\right)}, L^{\left(j\right)}\right]} \eta\left(x\right)
- \inf_{x \in \left[L^{\left(j-1\right)}, L^{\left(j\right)}\right]} \eta\left(x\right)\right|^2 \\
& \leq \sup_{x \in \mathbb{R}}\left|\eta\left(x\right)\right| \sum_{j=1}^J
\left|\sup_{x \in \left[L^{\left(j-1\right)}, L^{\left(j\right)}\right]} \eta\left(x\right)
- \inf_{x \in \left[L^{\left(j-1\right)}, L^{\left(j\right)}\right]} \eta\left(x\right)\right|
\leq \sup_{x \in \mathbb{R}}\left|\eta\left(x\right)\right| V\left(\eta\right)
\end{align*}
where $V\left(\eta\right)$ is the total variation of $\eta$.
It remains to bound $\hat{V}_T^2\left[\eta\left(\theta_T\right) - p\left(\theta_T\right)\right]$.
On the event $\hat{w}_T \leq 2\E\left[\prod_{t=0}^T G_t\right]$,
\begin{align*}
& \quad \hat{V}_T^2\left[\eta\left(\theta_T\right) - p\left(\theta_T\right)\right]
= \sum_{j=1}^{N_{T+1}} \frac{\hat{w}_T^2}{N_0^2}
\Var\left[\left.
\left(\eta\left(\hat{\theta}_T^{\left(j\right)}\right) - p\left(\hat{\theta}_T^{\left(j\right)}\right)\right)\right|\mathcal{F}_T\right] \\
& \leq 2\E\left[\prod_{t=0}^T G_t\right]
\E\left[\left. \frac{\hat{w}_T}{N_0} \sum_{j=1}^{N_{T+1}} \left|\eta\left(\hat{\theta}_T^{\left(j\right)}\right) - p\left(\hat{\theta}_T^{\left(j\right)}\right)\right|^2\right|\mathcal{F}_T\right]
\end{align*}
This last term has expectation
\begin{equation*}
2\E\left[\prod_{t=0}^T G_t\right] \E \left[\prod_{t=0}^T G_t \left|\eta \left(\theta_T\right) - p\left(\theta_T\right)\right|^2\right]
< 2 \delta \E\left[\prod_{t=0}^T G_t\right]
\end{equation*}
Conclude
\begin{align*}
& \quad \limsup_{N_0 \rightarrow \infty} \Prob\left\{N_0 \hat{V}_T^2\left[f\right]
> \left(1 + \epsilon\right) N_0 \hat{V}_T^2\left[f - p\left(\theta_T\right)\right] + \epsilon \right\} \\
& \leq \limsup_{N_0 \rightarrow \infty}
\Prob\left\{\hat{w}_T \leq 2\E\left[\prod_{t=0}^T G_t\right],
\, \left(2 + \frac{2}{\epsilon}\right)N_0 \hat{V}_T^2\left[\eta\left(\theta_T\right)\right] > \frac{\epsilon}{2}\right\} \\
& + \limsup_{N_0 \rightarrow \infty}
\Prob\left\{\hat{w}_T \leq 2\E\left[\prod_{t=0}^T G_t\right],
\, \left(2 + \frac{2}{\epsilon}\right) N_0 \hat{V}_T^2\left[\eta\left(\theta_T\right) - p\left(\theta_T\right)\right] > \frac{\epsilon}{2}\right\} \\
& \leq \left(\frac{2}{\epsilon}\right)\left(2 + \frac{2}{\epsilon}\right)
\left(2 \delta \E\left[\prod_{t=0}^T G_t\right]\right)
\end{align*}
For small enough $\delta$, this last term is less than $\epsilon$, proving the result.
\end{proof}
\begin{proof}[Proof of Theorem \ref{multvar}]
The proof combines Lemma \ref{CLT} with explicit computations of resampling variances $\hat{V}_t^2$.
For multinomial resampling,
\begin{equation*}
N_0 \hat{V}_t^2\left[h_t\right] = \frac{\overline{w}_t}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)}
\left|h_t\left(\xi_t^{\left(i\right)}\right)\right|^2
- \left|\frac{1}{N_0} \sum_{i=1}^{N_t} w_t^{\left(i\right)}
h_t\left(\xi_t^{\left(i\right)}\right)\right|^2
\end{equation*}
By Theorem \ref{weak}, therefore,
\begin{multline*}
N_0 \hat{V}_t^2\left[h_t\right] \stackrel{\Prob}{\rightarrow} \E\left[\prod_{s=0}^t G_s\right] \E\left[\prod_{s=0}^t G_s h_t^2\right]
- \left(\E\left[\prod_{s=0}^t G_s h_t \right]\right)^2 \\
= \E\left[\prod_{s=0}^t G_s\right] \E\left[\prod_{s=0}^t G_s
\left|h_t - \frac{\E\left[\prod_{s=0}^t G_s h_t\right]}{\E\left[\prod_{s=0}^t G_s\right]}\right|^2\right]
\end{multline*}
For multinomial residual resampling, $N_0 \hat{V}_t^2\left[h_t\right]$ takes the form
\begin{equation*}
\frac{\overline{w}_t^2}{\overline{w}_{t-1}} \left(
\frac{\overline{w}_{t-1}}{N_0} \sum_{i=1}^{N_t} \left\{\frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\}
\left|h_t\left(\xi_t^{\left(i\right)}\right)\right|^2
- \frac{
\left|\frac{\overline{w}_{t-1}}{N_0} \sum_{i=1}^{N_t} \left\{\frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\}
h_t\left(\xi_t^{\left(i\right)}\right)\right|^2}
{\frac{\overline{w}_{t-1}}{N_0}\sum_{i=1}^{N_t} \left\{\frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\}}\right)
\end{equation*}
By Theorem \ref{weak} and Lemma \ref{technical1}, $N_0 \hat{V}_t^2\left[h_t\right]$ converges in probability to
\begin{equation*}
\frac{\left(\E\left[\prod_{s=0}^t G_s\right]\right)^2}{\E\left[\prod_{s=0}^{t-1} G_s\right]}
\E\left[\prod_{s=0}^{t-1} G_s \left\{\tilde{G}_t\right\} \left|h_t
- \frac{\E\left[\prod_{s=0}^{t-1} G_s \left\{\tilde{G}_t\right\} h_t\right]}
{\E\left[\prod_{s=0}^{t-1} G_s \left\{\tilde{G}_t\right\}\right]}\right|^2\right]
\end{equation*}
For Bernoulli resampling, Theorem \ref{weak} and Lemma \ref{technical1} give
\begin{align*}
N_0 \hat{V}_t^2\left[h_t\right] & = \frac{\overline{w}_t^2}{N_0} \sum_{i=1}^{N_t}
\left\{\frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\}
\left(1 - \left\{\frac{w_t^{\left(i\right)}}{\overline{w}_t}\right\}\right)
\left|h_t\left(\xi_t^{\left(i\right)}\right)\right|^2 \\
& \stackrel{\Prob}{\rightarrow}
\frac{\left(\E\left[\prod_{s=0}^t G_s\right]\right)^2}{\E\left[\prod_{s=0}^{t-1} G_s\right]}
\E\left[\prod_{s=0}^{t-1} G_s \left\{\tilde{G}_t\right\}\left(1 - \left\{\tilde{G}_t\right\}\right) h_t^2\right]
\end{align*}
To compute the resampling variance for stratified resampling, consider the function
$p$ that minimizes
$\E\left[\prod_{s=0}^t \tilde{G}_s \left|h_t - p\left(\theta_t\right)\right|^2\right]$.
Since this function can be written as an $L^2$ projection, it is well-defined.
Moreover, by Lemma \ref{simplelemma},
the resampling variance $N_0\hat{V}_t^2\left[h_t - p\left(\theta_t\right)\right]$
is bounded by the multinomial resampling variance,
which converges in probability to
\begin{equation*}
\hat{\eta}_t^2\left[h_t\right] = \left(\E\left[\prod_{s=0}^t G_s\right]\right)^2
\E\left[\prod_{s=0}^t \tilde{G}_s \left|h_t - p\left(\theta_t\right)\right|^2\right]
\end{equation*}
Thus, $\Prob\left\{N_0\hat{V}_t^2\left[h_t - p\left(\theta_t\right)\right] > \hat{\eta}_t^2\left[h_t\right] + \epsilon\right\} \rightarrow 0$ for all $\epsilon > 0$.
By Lemma \ref{technical2}, this is enough to guarantee
$\Prob\left\{N_0 \hat{V}_t^2\left[h_t\right] > \hat{\eta}_t^2\left[h_t\right] + \epsilon\right\} \rightarrow 0$ for all $\epsilon > 0$.
The asymptotic variance upper bound for sorted stratified residual resampling is proved similarly.
\end{proof}
\section*{Acknowledgements}
The author would like to thank Jonathan Weare and Omiros Papaspiliopoulos
for conversations that helped shape the presentation of results
and Alicia Zhao for gracious and patient
editorial assistance.
\subsection{Spin distribution data}
Spin distributions, given by the percentage of clusters with each possible spin, are presented for two-dimensional clusters with open and periodic boundary conditions in Table \ref{tabRandClusters2D}. In this table we only consider models 1 and 2 (see text, section \ref{secFixedDensityClusters}) for determining $\ensuremath{\tilde{t}}_{ij}$ -- a comparison of models 2 and 3 is given separately. To facilitate comparison between the two types of boundary conditions, the difference between the two cases (left and right sides of Table \ref{tabRandClusters2D}) is shown in Table \ref{tabRandClusters2D_diff}. The analogous results for 3-dimensional clusters are provided in Ref.~\onlinecite{NielsenThesis}. Also included there is spin distribution data comparing models 2 and 3 for 3D clusters (analogous to the 2D results of Fig.~\ref{tabRandClusters2D_cband}).
\vspace{0.9in}
\subsection{Average spin and percentage magnetic clusters of fixed density clusters}
This section presents complete results for the average spin and the percentage of magnetic clusters for fixed cluster size as a function of doping (zero doping = half-filled). Results for 2D and 3D clusters with open and periodic boundary conditions are shown in the figures below, as indicated in their titles.
In particular, Fig.~\ref{afigAvgSpin2D} shows the average spin of 2D clusters with open and periodic boundary conditions, respectively. To remove an even-odd effect, Fig.~\ref{afigAvgSpin2Dr} shows the same average spin but \emph{relative to $S_{min}$} (i.e. 0.5 is subtracted from cases of odd electron number). Figure \ref{afigPcMag2D} shows the percentage of magnetic clusters (defined as those with greater than minimal ground state spin) as a function of doping for the different cluster sizes. Corresponding results for three-dimensional clusters with open and periodic boundary conditions are given in Ref.~\onlinecite{NielsenThesis}.
\begin{widetext}
\begin{table*}[b]
\begin{center}
\input{randClTable_2D_diff}
\caption{\emph{Difference} between distribution of ground state spin values for 2D random clusters with open and periodic boundary conditions. Values are obtained by subtracting the sets of data in Table \ref{tabRandClusters2D} below. Thus, entries show increase or decrease in percentage when switching from open to periodic boundary conditions. Estimated error $\pm0.7\%$.\label{tabRandClusters2D_diff}}
\end{center}
\end{table*}
\begin{turnpage}
\begin{table*}[b]
\begin{center}
\begin{tabular}{cc}
\input{randClTable_2D}
\input{randClTable_2D_pdic}
\end{tabular}
\caption{ Distribution of ground state spin values for 2D random clusters with \emph{open boundary conditions} (left) and \emph{periodic boundary conditions} (right). Table entries give the percentage of clusters with the ground state spin specified in the column header. Results are the ensemble average of many clusters with fixed size $\ensuremath{N_s}$, density $\rho$, and doping = one electron (1e) or hole (1h). $\ensuremath{\tilde{t}} > \ensuremath{t}$ indicates that $\ensuremath{\tilde{t}}$ is set by our band calculation (to be compared with the case $\ensuremath{\tilde{t}} = \ensuremath{t}$). Estimated error $\pm0.5\%$.\label{tabRandClusters2D}}
\end{center}
\end{table*}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\parbox{0.5\linewidth}{
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ Average Spin \ : \ open b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.010_2D.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.100_2D.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.300_2D.ps}} \\ \hline
\end{tabular}}
&
\parbox{0.5\linewidth}{
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ Average Spin \ : \ periodic b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.010_2D_pdic.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.100_2D_pdic.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.300_2D_pdic.ps}} \\ \hline
\end{tabular}}
\end{tabular}
\caption{Ground state average spin of 2D random clusters with fixed size and density, as a function of electron-doping (negative = hole-doping). Data for systems with \emph{open and periodic boundary conditions} are shown in the left and right tables, respectively. The lower half of plots are the result of setting $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, determined by the bandwidth of the lower Hubbard band. The upper half use $\ensuremath{\tilde{t}}_{ij}$ determined by the bandwidth of the upper Hubbard ($D^-$) band. \label{afigAvgSpin2D}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\parbox{0.5\linewidth}{
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ Average Spin - $\mathbf{S_{min}}$\ : \ open b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.010_2Dr.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.100_2Dr.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.300_2Dr.ps}} \\ \hline
\end{tabular}}
&
\parbox{0.5\linewidth}{
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ Average Spin - $\mathbf{S_{min}}$\ : \ periodic b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.010_2Dr_pdic.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.100_2Dr_pdic.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.300_2Dr_pdic.ps}} \\ \hline
\end{tabular}}
\end{tabular}
\caption{Ground state average spin \emph{relative to minimum spin} of 2D random clusters with fixed size and density, as a function of electron-doping (negative = hole-doping). Data for systems with \emph{open and periodic boundary conditions} are shown in the left and right tables, respectively. The lower half of plots are the result of setting $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, determined by the bandwidth of the lower Hubbard band. The upper half use $\ensuremath{\tilde{t}}_{ij}$ determined by the bandwidth of the upper Hubbard ($D^-$) band. \label{afigAvgSpin2Dr}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\parbox{0.5\linewidth}{
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ \% magnetic clusters \ : \ open b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.010_2D.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.100_2D.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.300_2D.ps}} \\ \hline
\end{tabular}}
&
\parbox{0.5\linewidth}{
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ \% magnetic clusters \ : \ periodic b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.010_2D_pdic.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.100_2D_pdic.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.300_2D_pdic.ps}} \\ \hline
\end{tabular}}
\end{tabular}
\caption{Percentage of magnetic clusters (spin 1 or greater) in an ensemble of 2D random clusters with fixed size and density, as a function of electron-doping (negative = hole-doping). Data for systems with \emph{open and periodic boundary conditions} are shown in the left and right tables, respectively. The lower half of plots are the result of setting $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, determined by the bandwidth of the lower Hubbard band. The upper half use $\ensuremath{\tilde{t}}_{ij}$ determined by the bandwidth of the upper Hubbard ($D^-$) band. \label{afigPcMag2D}}
\end{center}
\end{figure}
\end{turnpage}
\end{widetext}
\clearpage
\subsection{Average spin and percentage magnetic clusters from fixed density large systems}
We next consider large systems with a fixed number of sites $\ensuremath{N_{sys}}$ (10,000 to 1,000,000) and doping ($\ensuremath{N_e^{tot}}$ total electrons). Each system is separately partitioned into clusters of size $\ensuremath{N_s}=2-7$, which are approximated as being independent, and then diagonalized. The resulting data, averaged over many ($\sim 50$) large systems, gives the ensemble average distribution of clusters' ground state spin. The same general trends appear here as for the clusters with fixed local density. For comparison we show the average spin and percentage of magnetic clusters in two dimensions in Figs.~\ref{afigAvgSpin2D_clFile} and \ref{afigPcMag2D_clFile} (for each cluster size separately). We use the same plot format as for the clusters of fixed density, but show only the open boundary condition case. Note that although the cluster size is fixed, there is substantial fluctuation in the local density of clusters; only the average density of the \emph{entire} system is fixed. Results for 3D clusters are given in Ref.~\onlinecite{NielsenThesis}.
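The partitioning step can be pictured with a simple greedy scheme, sketched below in Python (ours, for illustration only; the precise partitioning criterion used to generate the data here may differ). It seeds a cluster at a random unassigned site and grows the cluster by repeatedly adding the unassigned site closest to the current members:
\begin{verbatim}
import numpy as np

def greedy_partition(positions, cluster_size, rng):
    # positions: (N, d) array of random site coordinates.
    unassigned = set(range(len(positions)))
    clusters = []
    while unassigned:
        seed = int(rng.choice(sorted(unassigned)))
        members = [seed]
        unassigned.remove(seed)
        while len(members) < cluster_size and unassigned:
            rest = sorted(unassigned)
            # distance of each candidate to its nearest cluster member
            d = np.min(np.linalg.norm(
                positions[rest][:, None, :]
                - positions[members][None, :, :], axis=2), axis=1)
            nxt = rest[int(np.argmin(d))]
            members.append(nxt)
            unassigned.remove(nxt)
        clusters.append(members)
    return clusters

rng = np.random.default_rng(0)
sites = rng.uniform(0.0, 40.0, size=(100, 2))  # dilute 2D dopant positions
print([len(c) for c in greedy_partition(sites, 5, rng)][:5])
\end{verbatim}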
\begin{widetext}
\begin{figure*}[b]
\begin{center}
\begin{tabular}{|c|c|} \hline
$\bar{\rho}$ & \textbf{Large System \ : \ 2D \ : \ Average Spin \ : \ open b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/run8/finalAvgS4-5-6-7_0.010_2D.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/run8/finalAvgS4-5-6-7_0.100_2D.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/run8/finalAvgS4-5-6-7_0.300_2D.ps}} \\ \hline
\end{tabular}
\caption{Ground state average spin of 2D random clusters (\emph{open b.c.}) obtained from large systems ($\ensuremath{N_{sys}} = 1\times 10^6$) with fixed average density $\bar{\rho}$, as a function of electron-doping (negative = hole-doping). The lower half of plots are the result of setting $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, determined by the bandwidth of the lower Hubbard band. The upper half use $\ensuremath{\tilde{t}}_{ij}$ determined by the bandwidth of the upper Hubbard ($D^-$) band. \label{afigAvgSpin2D_clFile}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{tabular}{|c|c|} \hline
$\bar{\rho}$ & \textbf{Large System \ : \ 2D \ : \ \% magnetic clusters \ : \ open b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/run8/finalPcMag4-5-6-7_0.010_2D.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/run8/finalPcMag4-5-6-7_0.100_2D.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/run8/finalPcMag4-5-6-7_0.300_2D.ps}} \\ \hline
\end{tabular}
\caption{Percentage of magnetic clusters (spin 1 or greater) in an ensemble of 2D random clusters (\emph{open b.c.}) obtained from large systems ($\ensuremath{N_{sys}} = 1\times 10^6$) with fixed average density $\bar{\rho}$, plotted as a function of electron-doping (negative = hole-doping). The lower half of plots are the result of setting $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, determined by the bandwidth of the lower Hubbard band. The upper half use $\ensuremath{\tilde{t}}_{ij}$ determined by the bandwidth of the upper Hubbard ($D^-$) band. \label{afigPcMag2D_clFile}}
\end{center}
\end{figure*}
\end{widetext}
\section{Introduction}
Originally proposed in the early 1960s\cite{Hubbard_1963,Gutzwiller_1963,Kanamori_1963,Anderson_HubModel_1963}, the Hubbard model combines tight binding hopping between nearest neighbors on a lattice with an on-site Coulomb repulsion between electrons in the same orbital state. Though it is one of the simplest interacting models, its on-site intra-orbital correlations are believed to be the most important source of correlations in solids. Indeed, the Hubbard model displays great diversity of transport and magnetic properties, giving rise to insulating, metallic, and superconducting phases as well as ferromagnetic (FM), antiferromagnetic (AF) and paramagnetic spin order. It has been used to study a wide range of correlated systems, including Mott-insulator oxides,\cite{MottBook} high-$\mathrm{T}_c$ superconductors,\cite{AndersonCuprates_1987,LeeCupratesRMP_2006,IzyumovTJmodel_1997,MacridinCuprates_2005} organic materials,\cite{Pyo_2005,Wu_2004,Sing_2003} $\sqrt{3}$-adlayer structures,\cite{Weitering_1997} vanadium oxides,\cite{McWhan_1973,Carter_1991} nickel sulphide-selenide alloys,\cite{Ogawa_1979,ThioBennett_1994,ThioBennett_1995} hydrogenic centers in doped semiconductors\cite{FerreiraSpecificHeat_1981,RefolioKSiInterface_1996}, and quantum dots.\cite{Massimo_hubInQDots_1999} Such great interest and applications have resulted in analyses of the model on different lattices,\cite{Hanisch_diffLatt_1997,Wegner_diffLatt_1998} with multiple\cite{Penc_multiBand} and degenerate\cite{Fresard_degenBand,Kuei_degenBand} bands, and with binary alloy disorder.\cite{Byczuk_alloyDisorder} Many studies restrict themselves to the infinite $U/t$ limit,\cite{Becca_largeU,Obermeier_largeU} which can be realized most effectively in optical lattices,\cite{JakschZoller_2005} but can be approached in semiconductor systems as well. We will be concerned with the case of semiconductors doped with shallow hydrogenic impurities. Here the model is particularly appropriate at low densities (\emph{i.e.} in the insulating phase, where carriers are bound to a few sites and the Coulomb interaction is large compared to the kinetic energy). In this low density limit each site is treated as an effective hydrogen atom with a corresponding effective Rydberg and Bohr radius:
\begin{equation}
\ensuremath{\mathrm{Ry^*}} = \frac{\ensuremath{m^*} e^4}{2 \epsilon^2 \hbar^2} \qquad \qquad \ensuremath{a^*_{\mathrm{B}}} = \epsilon \hbar^2 / \ensuremath{m^*} e^2 \label{eqIntro_effRyAndBohrRad}
\end{equation}
where $\ensuremath{m^*}$ is the effective mass in the appropriate band and $\epsilon$ is the dielectric constant of the host material.
In doped semiconductors, typically $\epsilon \sim 10 - 20$ and $\ensuremath{m^*}$ is 0.05 to 0.5 times the free electron mass, so that $\ensuremath{a^*_{\mathrm{B}}} \sim 10 - 500$ \AA~and $\ensuremath{\mathrm{Ry^*}} \approx 1-50 \mbox{meV}$. Since the $\ensuremath{\mathrm{Ry^*}}$ is usually much smaller than the bandgap of the host semiconductor, the lattice lacks low-energy electronic excitations on the energy scale of the impurity electrons and essentially plays the role of an inert vacuum. Realistic effects like valley degeneracy and mass anisotropy must be included for quantitative calculations but are unnecessary for the qualitative phenomena of interest to us.\cite{Thomas_1981,AndresBhatt_1981} We will assume that all relevant energy scales are much smaller than the gap between the lowest and higher orbital states on an isolated dopant, so that we need only care about the $1s$ orbital of each dopant, which consists of two electronic spin-degenerate states at energy denoted $E_0$. A hydrogenic center, like a hydrogen atom, is known to bind up to two electrons.\cite{Pekeris_1962} With a single electron the problem is that of atomic hydrogen ($H$), and the electron is bound with 1 \ensuremath{\mathrm{Ry^*}}. The two electron case corresponds to the $H^-$ ion, which has a spin singlet ground state bound by 0.0555 \ensuremath{\mathrm{Ry^*}}.\cite{MottBook,BS_QMbook_1977,BhattRice_1981}
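For orientation, Eq.~\eqref{eqIntro_effRyAndBohrRad} amounts to scaling the hydrogen values $\mathrm{Ry} = 13.6057$ eV and $a_{\mathrm{B}} = 0.529$ \AA. A quick numeric check in Python (the material parameters below are illustrative round numbers, not values for a specific host):
\begin{verbatim}
def effective_units(m_ratio, eps):
    # Effective Rydberg (meV) and Bohr radius (Angstrom) for a
    # hydrogenic center with m* = m_ratio * m_e and dielectric eps:
    #   Ry* = Ry * m_ratio / eps^2,   a* = a_B * eps / m_ratio
    ry_mev = 13605.7 * m_ratio / eps ** 2
    a_ang = 0.529177 * eps / m_ratio
    return ry_mev, a_ang

print(effective_units(0.3, 12.0))  # ~(28 meV, ~21 Angstrom)
\end{verbatim}
Both numbers fall inside the ranges quoted above.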
We begin with a review of the Hubbard model and its properties on a lattice in section \ref{secHubbardBackground}. The absence of certain magnetic properties, namely ferromagnetism, in real materials leads to a discussion of disorder and reveals the need to incorporate it into the model. This is done in section \ref{secHubbardForHydrogenic} where we motivate and define a model appropriate for doped semiconductors. The parameter ranges of interest for this model are also given in section \ref{secHubbardForHydrogenic}, along with details of the model's solution. Results on finite lattices, selected symmetric clusters, and small random clusters are presented in section \ref{secHubbardResults}. Large systems of random impurities are treated in section \ref{secVaryDensityClusters} by dividing them into smaller clusters which can be solved exactly. Section \ref{secConclusion} highlights our major conclusions and discusses topics for continued work.
\section{Background: the Hubbard model \label{secHubbardBackground}}
\subsection{Definition and general properties}
The Hamiltonian of the Hubbard model on a \emph{lattice} with $\ensuremath{N_s}$ sites is given by:
\begin{equation}
\mathcal{H} = - t\sum_{\langle i,j\rangle\sigma} \left( c^\dag_{i\sigma} c_{j\sigma} + \mbox{h.c.} \right) + U\sum_{i=1}^{N_s} n_{i\uparrow}n_{i\downarrow} \label{eqnHubHamOriginal}
\end{equation}
where $i$ and $j$ range from $1$ to $\ensuremath{N_s}$, and the first sum is over all distinct nearest neighbor pairs. Operators $c^\dag_{i\sigma}$ and $ c_{i\sigma}$ create and annihilate, respectively, an electron of spin $\sigma \in \{\uparrow,\downarrow\}$ on site $i$, and satisfy canonical fermion anticommutation relations
\begin{eqnarray}
\left\{ c^\dag_{i\sigma}, c_{i'\sigma'} \right\} &=& \delta_{ii'} \delta_{\sigma\sigma'} \\
\left\{ c^\dag_{i\sigma}, c^\dag_{i'\sigma'} \right\} &=& 0 \\
\Big\{ c_{i\sigma}, c_{i'\sigma'} \Big\} &=& 0
\end{eqnarray}
for any $i = 1\ldots \ensuremath{N_s}$ and $\sigma \in \{\uparrow,\downarrow\}$. The number operator $n_{i\sigma} = c^\dag_{i\sigma}c_{i\sigma}$, and has eigenvalues 0 and 1. The Hilbert space of each site has as a basis the four states where the site is occupied by an up-spin, a down-spin, neither, or both. The total Hilbert space is the direct product of site Hilbert spaces, and therefore has dimension $4^{\ensuremath{N_s}}$. The parameter $t$ is the quantum mechanical hopping amplitude between (nearest-neighbor) sites, and $U$ is the strength of the on-site Coulomb repulsion. We include a minus sign in front of the kinetic term, so that for the familiar example of the tight-binding model with hydrogenic wavefunctions,\cite{Bhatt_1981} $t(r)=2(1+r/a_{\mathrm{B}})\exp(-r/a_{\mathrm{B}})$ is positive, and restrict ourselves to $U>0$ (repulsive interaction). Note that the eigenstates of each term independently are trivial: they are states of definite momentum when $U=0$ and states of definite position when $t=0$. Thus, inherent in the Hubbard model is a competition between the extended (wave-like) and localized (particle-like) nature of the electrons, and there is no clear classical analogue.
We will be primarily concerned with the strong correlation limit $U \gg t$, so that at half-filling (\emph{i.e.}~one electron per site) the single particle (charge) spectrum has a gap, and the system is insulating (Fig.~\ref{figBandFillings}(a)). Nonetheless, the system can have low lying spin excitations. At large $U/t$ and half-filling, the Hubbard model at low energies is effectively a Heisenberg model,\cite{AndersonEffHeisenberg_1963} in which fermionic electron operators are represented by spin operators. On a bipartite lattice, the Heisenberg model, given by Hamiltonian $\mathcal{H}_{Heis} = J \sum_{<ij>} \vec{S}_i\cdot \vec{S}_j$, has an AF ground state.\cite{ManousakisRMP_1991} Away from half-filling, where there are carriers (see Fig.~\ref{figBandFillings}(b)), one must use a more general low-energy theory that includes a kinetic term, the $t-J$ model:\cite{ChaoSpalekOles_1977,ChaoSpalekOles_1978}
\begin{eqnarray}
\mathcal{H}_{tJ} &=& - t \sum_{<ij>\sigma} \left( (1-n_{i\bar{\sigma}})c^\dag_{i\sigma}c_{j\sigma}(1-n_{j\bar{\sigma}}) + \mbox{h.c.}\right) \nonumber \\
& & +\, J \sum_{<ij>} \left(\vec{S}_i\cdot \vec{S}_j - \frac{1}{4}n_i n_j \right)\,. \label{eqtJModel}
\end{eqnarray}
\noindent Note that the Hamiltonian operates on the restricted Hilbert space which excludes doubly-occupied sites. From the Heisenberg and $t-J$ models we see that the inclusion of electron-electron interactions results in an AF exchange interaction $\sim \vec{S}_i \cdot \vec{S}_j$, where $\vec{S}_i = \frac{1}{2}\sum_{\alpha\beta} c^\dag_{i\alpha} \vec{\sigma}_{\alpha\beta} c_{i\beta}$ and $\vec{\sigma}$ is the vector of Pauli matrices. The exchange term arises because virtual hopping of electrons between neighboring sites is allowed when their spins are oppositely oriented but not when their spins are parallel (as in a FM configuration), as shown in Fig.~\ref{figExchangeTermOrigin}.\cite{AndersonExchange_1959}
\begin{figure}
\begin{center}
\begin{tabular}{ll}
\includegraphics[height=1.5in]{figs/bandsHalfFilled.eps}
& \includegraphics[height=1.5in]{figs/bandsAboveHalfFilled.eps} \\
\hspace{1cm} a) & \hspace{1cm} b)
\end{tabular}
\caption{(Color online) Schematic figure showing a system at half-filling (a), and slightly above half-filling (b). At half-filling the lower impurity band is completely full and there is a gap to charge excitations. Above half-filling there are electrons present in the upper (unfilled) band that can act as carriers if they occupy extended states (as they do in a lattice). Note also that each band's density of states $N(E)$ is not actually semicircular, but drawn this way for convenience.\label{figBandFillings} }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.3in]{figs/antiferroVirtHop.eps} \hspace{1cm}
\includegraphics[width=1.3in]{figs/ferroNoVirtHop.eps}
\caption{(Color online) Diagrams illustrating the origin of the AF exchange interaction term of the $t-J$ model: in an AF configuration (left), electrons can virtually hop to a neighboring site and back (shown by the arrows), resulting in a net lowering of the energy by second order perturbation theory. In a FM configuration (right), however, Pauli exclusion forbids such virtual processes, and the system cannot lower its energy in this way.\label{figExchangeTermOrigin}}
\end{center}
\end{figure}
The Hubbard Hamiltonian can also be written in terms of spin operators using the identity $\sum_i \left(\vec{S}_i \right)^2 = \sum_i \left(\frac{3}{4}n_i - \frac{3}{2}n_{i\uparrow}n_{i\downarrow}\right)$, where $n_i=n_{i\uparrow}+n_{i\downarrow}$, casting Eq.~\eqref{eqnHubHamOriginal} into the form:
\begin{equation}
\mathcal{H} = -t\sum_{\langle i,j\rangle\sigma} \left( c^\dag_{i\sigma} c_{j\sigma} + \mbox{h.c.} \right) - \frac{2U}{3}\sum_{i=1}^{N_s} \left(\vec{S}_i\right)^2 + \frac{\ensuremath{N_e} U}{2} \label{eqnHubHamOriginal_spinForm}
\end{equation}
where $\ensuremath{N_e}$ is the total number of electrons. This form clearly shows the total spin SU(2) invariance of the Hubbard model, and also that when $U>0$ the interaction energy is lowest when the total spin on each site is maximized, suggesting the existence of ground states with high values of total spin at large $U$. On a bipartite lattice with disjoint sublattices $A$ and $B$, the sign of $t$ can be changed via the transform:
\parbox{2in}{
\begin{eqnarray}
c_{i\sigma} &\rightarrow& +c_{i\sigma} \qquad \mbox{if} \quad i \in A \nonumber \\
c_{i\sigma} &\rightarrow& -c_{i\sigma} \qquad \mbox{if} \quad i \in B \nonumber
\end{eqnarray}} \hfill
\parbox{1cm}{\begin{eqnarray}\label{eqBipartiteTransform}\end{eqnarray}}
\noindent which does not change the canonical anticommutation relations and thus leaves the spectrum invariant. The Hubbard model also possesses particle-hole symmetry on a bipartite lattice, where $U$ maps to $-U$ and total charge is interchanged with total spin (for a detailed explanation of symmetries in the Hubbard model, see Ref.~\onlinecite{FradkinBook_1991}).
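As a quick consistency check of the single-site identity used in Eq.~\eqref{eqnHubHamOriginal_spinForm}, the following sketch (plain \texttt{numpy}; the four-state basis ordering is our own choice) verifies $\left(\vec{S}_i\right)^2 = \frac{3}{4}n_i - \frac{3}{2}n_{i\uparrow}n_{i\downarrow}$ on a single site:

\begin{verbatim}
import numpy as np

# Single-site basis ordering: |0>, |up>, |dn>, |updn>.
n_up = np.diag([0.0, 1.0, 0.0, 1.0])
n_dn = np.diag([0.0, 0.0, 1.0, 1.0])
n = n_up + n_dn
S2 = np.diag([0.0, 0.75, 0.75, 0.0])  # S^2 = 3/4 only if singly occupied
print(np.allclose(S2, 0.75 * n - 1.5 * n_up @ n_dn))   # True
\end{verbatim}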
Even when it is applied to simple systems (\emph{e.g.}~1-, 2-, and 3-dimensional lattices), the Hubbard model yields interesting and non-trivial properties, seen through the nature of its excitations, density of states, spectral weight, transport, and optical and magnetic behavior.\cite{GeorgesKotliar_1996,UlmkeJanisVollhart_1995,ChandraKollarVollhart_1999,Eckstein_2007} Here we will concentrate on the nature of magnetic correlations in the ground state, which are then used to construct the ground state (\emph{i.e.}~$\mathrm{T} = 0$) phase diagram.
\subsection{Magnetic Properties}
The magnetic properties of Hubbard systems can be very rich due to competition between two or more magnetic phases. Consider the Hubbard model at half-filling on a bipartite lattice, where there is no classical magnetic frustration, and let $U/t$ be large. The model's quantum ground state is a superposition of ``Neel antiferromagnet states'' where spins on each sublattice are aligned and oppositely oriented to those of the other sublattice as well as ``spin-flip states'' which differ from the Neel AF states by exchanging one or more pairs of spins (states with a greater number of flips occur with lower weight). In other words, the ground state is a superposition of states with long-range Neel order. Since $U$ is large, the $t-J$ approximation (Eq.~\eqref{eqtJModel}) is valid, introducing an exchange energy $J\sim t^2/U$ between neighboring spins. The kinetic term of the $t-J$ Hamiltonian does not play a role since at half-filling there are no mobile carriers. Thus, at half-filling the exchange directly gives rise to an antiferromagnet. When the system is above or below half-filling, however, the kinetic term plays a competing role by favoring a ferromagnetic spin configuration. This is so because as carriers hop from site to site they do not disturb an underlying FM spin configuration, whereas they necessarily scramble an AF one (see Fig.~\ref{figAFscramble}). This scrambling leads to an unfavorable increase in energy, and thus the preference for ferromagnetism.\cite{BrinkmanRice_1970, ShraimonSiggia_1988} Relative to an AF state, a FM system with carrier (electron or hole) density $\delta$ gains kinetic energy of order $t\delta$ due to carrier delocalization and loses magnetic energy of order $J=4t^2/U$. Thus, at a fixed small $\delta$, when $U$ is large enough, $t \delta \gg J$, and the system prefers a FM configuration over the AF one because it allows carriers to be less confined. Understanding the applicability and validity of this argument, and more generally the factors that govern the magnetic competition found in the Hubbard model, has been the topic of much work. Indeed, it has led to most (if not all) of the rigorous results that are known concerning the Hubbard model.
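For a rough quantitative version of this criterion, equating the two energy scales gives
\begin{equation*}
t\delta \sim \frac{4t^{2}}{U} \quad\Longleftrightarrow\quad \frac{U}{t} \sim \frac{4}{\delta}\,,
\end{equation*}
so that, by this crude estimate, a system with $\delta = 0.05$, for example, prefers the FM configuration once $U/t \gtrsim 80$.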
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\parbox{1.5in}{\includegraphics[width=1.2in]{figs/ferroConfig.eps}}
& \parbox{1.5in}{\includegraphics[width=1.2in]{figs/antiferroConfig.eps}} \\
a) & b) \\
& \\
\multicolumn{2}{c}{ \parbox{1.5in}{\includegraphics[width=1.2in]{figs/antiferroScrambled.eps}}} \\
\multicolumn{2}{c} { c) }
\end{tabular}
\caption{(Color online) Diagrams showing why the kinetic term favors a ferromagnetic state: in a) the down spin on the single doubly-occupied site can move freely without disturbing the underlying FM background. However, if the background is AF as in b), motion of electrons on doubly-occupied sites scrambles the Neel order. Diagram c) shows the result of the doubly occupied site in b) moving two sites to the right.\label{figAFscramble} }
\end{center}
\end{figure}
Despite the apparent simplicity of the Hubbard model and voluminous literature surrounding it, few rigorous theoretical results have been proven about it. Most striking among them is the result of Nagaoka,\cite{Nagaoka_1966} which states that in the infinite correlation limit $U/t\rightarrow\infty$, the Hubbard model on certain finite lattices of dimension $d\ge 2$ with periodic boundary conditions, $t<0$, and a single hole (away from half-filling), has a FM ground state (\emph{i.e.}~the total spin $S^2$, where $\vec{S}=\sum_i \vec{S}_i$, attains its maximal value). This result, dubbed the Nagaoka Theorem, applies to most standard lattices, including the square, simple cubic, triangular, kagom\'{e}, bcc, and fcc (hcp).\cite{Nagaoka_1966,Tasaki_2003} In the case of bipartite lattices, such as the square, simple cubic, and bcc, $t$ can be taken positive (the physical sign in the tight binding model) by the transform of Eq.~\eqref{eqBipartiteTransform}. This can be understood from the preceding discussion of a bipartite system, where, upon setting $U=\infty$, the criterion $t \delta \gg J$ is satisfied for any $\delta > 0$ and thus only a single hole is needed to produce a FM ground state. Even though this criterion also predicts ferromagnetism for a finite density of carriers at large $U$, a rigorous result even for the case of a few holes has proved difficult.\cite{TianFewHoles_1991,Trugman_1990} Along with the rigorous proofs in Nagaoka's and Thouless' work, simpler and more modern mathematical proofs are given by Tian\cite{TianNagaokaProof_1990} and Tasaki.\cite{TasakiNagaokaProof_1998} Another rigorous theorem regarding magnetism in the Hubbard model by Lieb\cite{LiebFerrimagnetism_1989} states that a half-filled bipartite system whose sublattices have different numbers of sites will have an unsaturated FM ground state. It later became clear that this tendency toward ferromagnetism was due to the single-particle density of states being dispersionless, or flat, at the center of the band (the Fermi level at half-filling). Subsequently, results of Mielke\cite{MielkeFlatBands_1991} and Tasaki\cite{TasakiFlatBands_1992} generalized this idea to characterize a broader class of half-filled systems with dispersionless single-particle spectra and saturated FM ground states, said to exhibit ``flat-band ferromagnetism.'' We hasten to point out, however, that half-filled Hubbard systems are generally antiferromagnetic (when on a bipartite lattice) or paramagnetic, and that completely or nearly flat bands should be viewed at least as a non-generic case.
This fact underscores the surprising result of Nagaoka, which describes the transition from an antiferromagnet to a ferromagnet upon the addition of a single hole or electron.
The bulk of this section investigates the possibility of saturated ``Nagaoka ferromagnetism'', and it is worthwhile at this point to consider the progress of past work toward understanding the phenomenon. The topic has generated sizable interest, since the Nagaoka Theorem is at the same time striking and of only limited use, saying nothing about the thermodynamic limit where there is relevance to experiment. Many theoretical studies\cite{Becca_largeU,Obermeier_largeU,Denteneer_1996,LongZotos_1993,Chiappe_1993} work in the large or infinite $U$ limit, which is where saturated FM is most likely to occur. In the $U=\infty$ limit doubly-occupied states are eliminated from the Hilbert space, which then has a dimension that scales as $3^{\ensuremath{N_s}}$ -- substantially less than $4^{\ensuremath{N_s}}$ and thereby a great relief for numericists! Indeed, much computational work has been done by setting $U=\infty$, including a study which relates the system far below half-filling to one of hard-core bosons.\cite{LongZotos_1993}
Investigating the existence, extent, and stability of the Nagaoka state has established several conditions that are known to favor a stable, saturated FM ground state. By considering the stability of the fully polarized state to a single spin flip, it is shown\cite{PastorHirschMuhlschlegel_1994,BarbieriRieraYoung_1990,Hanisch_diffLatt_1997} that an asymmetric density of states with a peak at the appropriate band edge (lower edge of the upper Hubbard band if doping above half-filling; upper edge of lower Hubbard band if doping below) is one such condition. This makes intuitive sense, since a large density of states at the Fermi level diminishes the kinetic cost of filling additional single-particle electronic states and causes the large $U$, which favors spin alignment, to prevail. This generalizes the condition of a flat band discussed earlier, in which the density of states is infinite at the band edge. From the geometrical optimization of finite Hubbard clusters, Pastor \emph{et al.}\cite{PastorHirschMuhlschlegel_1994,PastorHirschMuhlschlegel_1996}~find that saturated ferromagnetism coincides with clusters which are \emph{non-bipartite} and have a large number of frustrated ``tight'' triangular loops. They also find that doping clusters above rather than below half-filling yields a density of states with higher weight at the band edge, and leads to FM ground states. The asymmetry with respect to doping and the correspondence between magnetism and triangular loops are corroborated by our results, and appear to be quite general features with important experimental ramifications, which we discuss in more detail in section \ref{secGeomDistorted}. It is also known that adding back into the single-band Hubbard model physical interactions that it neglects, particularly a (direct) ferromagnetic Heisenberg exchange interaction, can be important for stabilizing ferromagnetism near half-filling for finite $U$.\cite{StrackVollhardt_1994,KollarStrackVollhardt_1996,Wahle_1998} Additionally, the next-nearest-neighbor (NNN) hopping amplitude $t'$ is believed to play an important role: decreasing $t'/t$ (especially below zero) stabilizes saturated FM to higher hole-doping in the $U=\infty$ Hubbard model on a square lattice.\cite{Park_2008} In one dimension, where the Lieb-Mattis theorem\cite{LiebMattis_1962} forbids FM in the standard Hubbard model, the addition of NNN hopping $t'$ such that $t'/t < 0$ ($t > 0$) results in a widespread FM phase.\cite{DaulNoack_1998} Alternatively, the generalization to a multi-band model (appropriate for many transition metals) with a ferromagnetic exchange interaction between electrons in different orbitals (``Hund's rule couplings'') also abets the stability of a FM state.\cite{Vollhardt_1999} (The inclusion of multiple bands, however, is not crucial to FM stability in 2 and 3 dimensions.\cite{BarbieriYoung_1991}) We do not consider either of these routes, and restrict our study to a nearest-neighbor model with one orbital and on-site Coulomb interaction.
It is important to remember, however, that saturated ferromagnetism in the Hubbard-like models is not ubiquitous. Other work has shown it to be a subtle effect, depending on dimension and lattice geometry. For instance, another rigorous result, due to Lieb and Mattis,\cite{LiebMattis_1962} proves that in finite one-dimensional systems with zero-wavefunction or zero-derivative boundary conditions, the ground state must be a singlet (no spin polarization). More recently Haerter and Shastry\cite{HaerterShastryAFTriangle_2005} have shown that on the frustrated triangular lattice an itinerant hole actually helps to produce an \emph{antiferromagnetic} ground state. They suggest that this phenomenon holds on all lattices with ``electronic frustration,'' defined as those for which the sign of the hopping amplitude around the lattice's smallest closed loop is negative. (Note that Nagaoka's theorem only applies to un-frustrated systems.)
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=2.5in]{figs/sq10PhaseDiag.ps} \\
\\
\includegraphics[width=2.6in]{figs/sc8PhaseDiag3.ps}
\end{tabular}
\caption{Zero temperature mean-field theory phase diagram of the Hubbard model on a $10 \times 10$ square lattice (top) and $8 \times 8 \times 8$ (512 sites) simple cubic lattice (bottom). Doping (horizontal axis) is defined as the number of extra electrons (above half-filling) per site.
\label{figMFTdiagram}}
\end{center}
\end{figure}
\subsection{Elusive Ferromagnetism}
A qualitative picture of the Hubbard model's magnetic behavior at zero temperature can be obtained by a mean-field analysis on square and simple cubic lattices, which results in the phase diagrams shown in Fig.~\ref{figMFTdiagram}. The phase transitions were found by comparing the ground state energy and spin-spin correlations of self-consistent mean-field calculations that were initialized in paramagnetic, AF, FM, and random configurations.
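To make the procedure concrete, the following is a minimal sketch of such a self-consistent loop, using the standard collinear Hartree--Fock decoupling $Un_{i\uparrow}n_{i\downarrow} \rightarrow U\left(\langle n_{i\uparrow}\rangle n_{i\downarrow} + n_{i\uparrow}\langle n_{i\downarrow}\rangle - \langle n_{i\uparrow}\rangle\langle n_{i\downarrow}\rangle\right)$. It is written for a small one-dimensional ring purely for brevity (the diagrams of Fig.~\ref{figMFTdiagram} were computed on the 2D and 3D lattices stated there); the parameter values, damping factor, and convergence tolerance are illustrative choices, not those used for the figure.

\begin{verbatim}
import numpy as np

L, t, U, Ne = 12, 1.0, 8.0, 13           # one electron above half-filling
hop = np.zeros((L, L))
for i in range(L):                       # nearest-neighbor ring
    hop[i, (i + 1) % L] = hop[(i + 1) % L, i] = -t

rng = np.random.default_rng(0)
n_up, n_dn = rng.random(L), rng.random(L)   # random initial configuration
for sweep in range(2000):
    e_up, v_up = np.linalg.eigh(hop + np.diag(U * n_dn))
    e_dn, v_dn = np.linalg.eigh(hop + np.diag(U * n_up))
    # Fill the Ne lowest levels of the combined up/down spectrum.
    filled = sorted([(e, 0, k) for k, e in enumerate(e_up)] +
                    [(e, 1, k) for k, e in enumerate(e_dn)])[:Ne]
    new_up = sum(np.abs(v_up[:, k])**2 for e, s, k in filled if s == 0)
    new_dn = sum(np.abs(v_dn[:, k])**2 for e, s, k in filled if s == 1)
    if max(abs(new_up - n_up).max(), abs(new_dn - n_dn).max()) < 1e-10:
        break
    n_up = 0.5 * (n_up + new_up)         # damped update for stability
    n_dn = 0.5 * (n_dn + new_dn)

print(np.round(n_up - n_dn, 3))          # local moments: AF, FM, or mixed?
\end{verbatim}

Comparing the converged energies and spin-spin correlations obtained from several such initializations then selects the mean-field ground state at each point of the phase diagram.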
Our analysis does not include the possibility of phase separation, \emph{e.g.}~the existence of polarons corresponding to ``carrier-rich" ferromagnetic and ``carrier-poor" antiferromagnetic regions. If it occurs, phase separation could substantially alter\cite{EisenbergHuseAltshuler_2002} the simple phase diagrams given here. Barbieri and Young construct phase diagrams for the large-$U$ Hubbard model in 2 and 3 dimensions using a variational Gutzwiller technique,\cite{BarbieriYoung_1991} and find phase separation occurs in both cases. Dagotto et~al.,\cite{DagottoPhaseSep_1992} however, argue based on their results on 10- and 16-site square lattices that phase separation is generally absent in the Hubbard model, at least at short length scales. Figure \ref{figMFTdiagram} also agrees with the extensive work by Hirsch\cite{HirschMFT_1985} in two dimensions.
We focus on the region of low-doping and large $U/t$ (the top left of Fig.~\ref{figMFTdiagram}), where there is a FM-AF transition. As expected, at zero doping (half-filling), the system is an antiferromagnet for all values of $U/t$ due to the effective exchange interaction and an absence of mobile carriers. As the doping is increased from zero, it is clear from the mean field perspective that for large enough $U/t$ we expect, on some mesoscopic or macroscopic length scale, a transition to a FM ground state (even though its precise location in phase space depends on dimension as well as lattice structure, and requires more careful work).
Though the Nagaoka state has been studied extensively and is seen to exist in the Hubbard model, such ferromagnetism has not been observed experimentally. In many Mott-insulator oxide and chalcogenide systems this may be explained by an insufficient $U/t$ to allow for ferromagnetism (and finding a naturally occurring material with large enough $U/t$ seems unlikely). However, in doped semiconductors at low dopant densities, $U/t$ is tunable over several orders of magnitude due to the exponential dependence of the hopping $t$ on the dopant spacing [\emph{e.g.}~$t(r)\sim\exp(-r/a_\mathrm{B})$ in the tight binding model]. This versatility makes doped semiconductors a promising candidate for Nagaoka ferromagnetism, as it allows $U/t$ to become large ($\sim 100-1000$), achieving for all practical purposes the limit $U/t \rightarrow \infty$ required by Nagaoka's theorem. Despite this, the absence of ferromagnetism in experiments on a variety of doped semiconductors, both uncompensated\cite{AndresBhatt_1981,SasakiKinoshita_1968,Sasaki_1976,Ue_1971,Quirt_1973} and compensated,\cite{Hirsch_NoExpFerro_1992} is quite clear. In these experiments, the nearest neighbor coupling, though distributed broadly, had a median value of 1-10K, and FM behavior was searched for down to much lower (mK) temperatures to probe the $\mathrm{T}=0$ behavior. Even with the additional hope of alternative theories that predicted ferromagnetism of Anderson localized electrons,\cite{Kamimura_1978,Kamimura_1985} both uncompensated and compensated systems exhibited a significantly lower (by factors of 10-50) magnetic susceptibility compared to the high temperature paramagnetic Curie result, indicating that the systems were predominantly characterized by AF correlations, both at and slightly below half-filling.
To understand how this can be, let us return to the requirements of Nagaoka's theorem. Placing dopants on a superlattice has become possible only very recently\cite{Schofield_2003} -- in naturally formed doped semiconductors, including those of all relevant experiments, the dopants are distributed randomly and an important hypothesis of Nagaoka's theorem is not met. Adding such positional disorder to the Hubbard-like description of the system turns out to be an important ingredient. (It is incorporated into the Hubbard model by setting $t \rightarrow t_{ij}$, which then depends on the separation $r_{ij}$, see section \ref{secHubbardForHydrogenic}, specifically Eq.~\eqref{eqnHubHamDisordered} below).
After introducing positional disorder into the Hubbard model, it is not clear whether any of the aforementioned theorems and arguments for uniform systems are still valid (or even relevant). First, we expect a locally fluctuating carrier density, which may wash out any distinction between phase separation at macroscopic and mesoscopic length scales for such systems. Second, since the itinerancy of carriers depends on the local lattice geometry, when the geometry becomes spatially inhomogeneous the itinerancy might be suppressed, and at the very least the magnetic structure will show similar spatial inhomogeneity.
More precisely, it was found that the lack of low-temperature ferromagnetism in semiconductors can be explained by disorder localizing otherwise mobile carriers, thereby reducing the kinetic energy gain (previously $\sim t\delta$) and destroying ferromagnetism. Bhatt and Lee\cite{BhattLee_JAP_1981,BhattLee_1982} gave insight into the true nature of the half-filled (uncompensated semiconductor) case using a perturbative renormalization group method tailored for the large amount of disorder present in the actual system. They found that the randomness of the dopants results in what has been dubbed a valence-bond glass,\cite{Bhatt_1990,Bhatt_1988} random singlet,\cite{Fisher_1994} or Bhatt-Lee phase.\cite{Holcomb_SUSSP_1986,Paalanen_1988} In such a state, spins pair up to form (spin zero) singlets (see also Ref.~\onlinecite{Bhatt_1986}) in a hierarchical fashion, and the resulting structure and behavior is qualitatively different from the antiferromagnet state predicted on a bipartite lattice. There is no long-range AF order, and the magnetic susceptibility is strongly temperature dependent, even down to tens of millikelvin. That compensated semiconductors show no evidence of ferromagnetism\cite{Hirsch_NoExpFerro_1992} can be attributed to the localization of holes on one (or a few) valence bonds, and their consequent inability to move long enough distances to disrupt the local magnetic arrangements. As a result, holes are unable to gain the kinetic energy which favors a spin-polarized background. Thus, even though doped semiconductors give one the ability to tune $U/t$ over several orders of magnitude, Nagaoka ferromagnetism remains elusive.
\section{Hubbard model for hydrogenic systems \label{secHubbardForHydrogenic}}
\subsection{Overview and formulation}
An important question that can still be asked of a system with positional disorder is whether or not the ground state is spin polarized (resulting in macroscopic spin degeneracy). In the remainder of this paper, we attempt to answer an even more basic question -- does there exist, even on the nanoscale, large spin degeneracy in systems of hydrogenic centers, using an appropriate Hubbard-like description? The paramount conclusion is that there \emph{does} exist a regime in doped semiconductors which is more amenable to Nagaoka ferromagnetism. Interestingly, this regime is attainable in nanoscale quantum dots and heterostructures, but not in bulk systems. In it we find Nagaoka-like ferromagnetism in the presence of disorder, at least at the nanoscale, and this regime also offers a higher likelihood of such ferromagnetism emerging on mesoscopic or macroscopic scales (\emph{e.g.}~in modulation doped systems). In this section we introduce and motivate the generalized Hubbard model used to characterize the doped semiconductor problem.
\subsection{Random Hubbard Model: positional disorder\label{subsecModel}}
As a first approximation, a system of $\ensuremath{N_s}$ randomly positioned donors can be modeled with the Hubbard Hamiltonian obtained by adding site-dependence to the hopping amplitude in Eq.~\eqref{eqnHubHamOriginal}. Specifically, we make $t_{ij}$ a function of the site separation: $t_{ij}=t\left(|r_i-r_j|\right)$, resulting in the Hamiltonian:
\begin{equation}
\mathcal{H}_{rdm} = - \sum_{i,j,\sigma} \left( t_{ij}c^\dag_{i\sigma} c_{j\sigma} + \mbox{h.c.} \right) + U\sum_i n_{i\uparrow}n_{i\downarrow} \label{eqnHubHamDisordered}
\end{equation}
where $i,j=1\ldots \ensuremath{N_s}$.
This properly accounts for the random positioning of the donors, and, as we discuss below, should be a good model for both uncompensated and compensated \emph{bulk} semiconductors with $\le 1$ electron per donor site (in the latter case, a more rigorous treatment would additionally include random on-site energies reflecting the random fields generated by the (positively charged) acceptor sites).
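To illustrate, a sketch of constructing this disordered hopping matrix from random donor positions, using the hydrogenic form of $t(r)$ quoted in section \ref{secHubbardBackground} (distances in units of $a_{\mathrm{B}}$, energies in $\ensuremath{\mathrm{Ry^*}}$; the donor number and box size are arbitrary illustrative choices):

\begin{verbatim}
import numpy as np

def hopping_matrix(pos, aB=1.0):
    # t_ij = 2 (1 + r_ij/aB) exp(-r_ij/aB) for every donor pair.
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    tij = 2.0 * (1.0 + r / aB) * np.exp(-r / aB)
    np.fill_diagonal(tij, 0.0)           # no on-site hopping term
    return tij

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(8, 3))   # 8 donors in a (10 a_B)^3 box
print(np.round(hopping_matrix(pos), 4))
\end{verbatim}

The exponentially broad spread of the resulting $t_{ij}$ values is the essential feature of the positional disorder.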
\subsection{Hubbard model generalization: occupation-dependent hopping\label{subsecModel2}}
A shortcoming of $\mathcal{H}_{rdm}$ (Eq.~\eqref{eqnHubHamDisordered}), both for the lattice and random case, is that it does not account for a fundamental property of hydrogen: the two-electron wavefunction of the $H^-$ ion has much greater extent than the one-electron wavefunction of the $H$ atom. This is reflected in the binding energy (the energy required to remove an electron) of $H^-$ being only 0.0555 \ensuremath{\mathrm{Ry^*}}, whereas $1\,\ensuremath{\mathrm{Ry^*}}$ is necessary to remove the electron of $H$.\cite{MottBook,BS_QMbook_1977} Indeed, using that an effective Bohr radius $a^*$ scales as $1/\sqrt{E_{\mathrm{\scriptsize binding}}}$, we find that the ratio of Bohr radii for $H^-$ and $H$, $a^*_{H^-}/a^*_H = \sqrt{1.0}/\sqrt{0.0555} \approx 4$, showing that the wavefunction of $H^-$ is several times larger than that of $H$. Variational treatments of the $H^-$ ion,\cite{BS_QMbook_1977} as well as an effective pseudopotential calculation,\cite{NielsenBhattTransport} determine the ratio to be in the range $2-4$. This affects the Hubbard description of the system because it is much easier for an electron on a doubly-occupied hydrogenic center to hop away than it is for the electron on a singly-occupied site to make a similar hop. This implies that the hopping amplitude seen by an itinerant electron, hopping around in a background of singly-occupied sites, is larger than that seen by a hole in a similar background. The fact that the ratio of the two radii is substantial ($2-4$), and that the hopping amplitude depends exponentially on the radius (in the low density regime), suggests that a doped semiconductor above half-filling is in a quite different regime of parameters than the conventional compensated semiconductor (a system below half-filling). Such a regime, while not obtainable in bulk doped semiconductors, should be realizable in semiconductor heterostructures, as well as quantum dots. In Hubbard model parlance, near half-filling the hopping amplitude for an electron is much larger than for a hole. At the very least, the different radii of the doubly- vs.~singly occupied sites suggest that we modify the lattice Hubbard Hamiltonian (\ref{eqnHubHamOriginal}) to become:
\begin{equation}
\mathcal{H}^* = - \sum_{\langle i,j \rangle \sigma} \left( t(n_i,n_j)c^\dag_{i\sigma} c_{j\sigma} + \mbox{h.c.} \right) + U\sum_i n_{i\uparrow}n_{i\downarrow} \label{eqnHubHamOccDep}
\end{equation}
where $n_i$ is the total occupation of site $i$, and the hopping now has occupation dependence given by the piecewise function (the hopping corresponding to the different amplitudes $\ensuremath{t}$ and $\ensuremath{\tilde{t}}$ is shown pictorially on the right):
\begin{displaymath}
t(n_i,n_j) = \hspace{2.5in}
\end{displaymath}
\begin{equation}
\left\{ \begin{array}{ccc}
\ensuremath{\tilde{t}} & \hspace{0.3cm} n_j=1, n_i=2 & \hspace{0.3cm}
\parbox{1.4in}{\includegraphics[width=1.5in]{figs/overlap1.eps}} \\
\vspace{0.2cm} & & \\
\ensuremath{t} & \hspace{0.3cm}\mbox{otherwise} & \hspace{0.3cm}
\parbox{1.4in}{\includegraphics[width=1.5in]{figs/overlap2.eps} \\
\includegraphics[width=1.5in]{figs/overlap3.eps}}
\end{array} \right. \label{eqnPiecewiseT}
\end{equation}
where $\ensuremath{\tilde{t}}$ is larger (and as we will see, can be much larger) than $\ensuremath{t}$.\cite{ErikNanoscaleFM_2007} This model enhances the hopping from a doubly-occupied site to an already singly occupied site (which will become doubly occupied after the hop). One may question why the hopping from a doubly-occupied site to an empty site (the middle picture of Eq.~\eqref{eqnPiecewiseT}) is not also enhanced. The primary reason is that the present formulation is the only way, within the single-band Hubbard model, to preserve the asymptotic spatial dependence of the effective exchange interaction: $J(r) \sim e^{-2r/\ensuremath{a^*_{\mathrm{B}}}}$ (recall $J \sim t^2/U$ and $t \sim e^{-r/\ensuremath{a^*_{\mathrm{B}}}}$). This is of essential importance, since this relation for $J$ has been shown to be asymptotically exact.\cite{HerringFlicker_1964}
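In code, the rule of Eq.~\eqref{eqnPiecewiseT} reduces to a single conditional when assembling matrix elements. The helper below is a hypothetical illustration (the function and argument names are ours), following the description above of an electron leaving a doubly-occupied site for an already singly-occupied one:

\begin{verbatim}
def hop_amplitude(n_src, n_dst, t, t_tilde):
    # Enhanced amplitude t~ only when hopping from a doubly-occupied
    # source site onto an already singly-occupied destination (which
    # becomes doubly occupied after the hop); plain t otherwise.
    return t_tilde if (n_src == 2 and n_dst == 1) else t
\end{verbatim}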
Note that Eq.~\eqref{eqnHubHamOccDep} is in general \emph{not} electron-hole symmetric. Only when $\ensuremath{\tilde{t}} = \ensuremath{t}$ and the system is on a bipartite lattice is electron-hole symmetry preserved.\cite{FradkinBook_1991} The general lack thereof is readily seen, since an itinerant hole hops with amplitude $\ensuremath{t}$ whereas an itinerant electron hops with $\ensuremath{\tilde{t}}$. Indeed, the effective low energy theory of the three-parameter Hubbard Hamiltonian when there is less than one electron per site (below half-filling), in the limit $U\gg \ensuremath{t}$, is independent of $\ensuremath{\tilde{t}}$ and given by the familiar $t-J$ Hamiltonian:
\parbox{2in}{\begin{eqnarray}
\mathcal{H}_{tJ} &=& - t \sum_{<ij>\sigma} \left( (1-n_{i\bar{\sigma}})c^\dag_{i\sigma}c_{j\sigma}(1-n_{j\bar{\sigma}}) + \mbox{h.c.}\right) \nonumber \\
& & +\, J \sum_{<ij>} \left(\vec{S}_i\cdot \vec{S}_j - \frac{1}{4}n_i n_j \right) \nonumber
\end{eqnarray}}
\hfill \parbox{.1cm}{\begin{eqnarray} \label{tJEffModel} \end{eqnarray}}
\noindent where the AF exchange $J=4\ensuremath{t}^2/U$, $c_{i\sigma}^\dag$ ($c_{i\sigma}$) is the electron creation (annihilation) operator, and the spin operator $\vec{S}_i$ is as previously defined. When there is greater than one electron per site, however, the low energy spectrum (in the large $U/t$ limit) is given by a $\ensuremath{\tilde{t}}-J$ model, where $\ensuremath{t}$ is replaced by $\ensuremath{\tilde{t}}$ in Eq.~\eqref{tJEffModel}, $(1-n_{i\bar{\sigma}})$ is replaced by $n_{i\bar{\sigma}}$, and where $J$ remains determined by the Hubbard $\ensuremath{t}$ parameter, as one might expect.\cite{ChernyshevEffTheories_2004} The Hilbert space restriction then excludes doubly-\emph{vacant} sites. It is worth noting that in the usual $t-J$ model on a non-bipartite graph (defined as a set of sites and hopping links), excluding doubly-vacant sites is \emph{not} equivalent to excluding doubly-occupied sites. This directly corresponds to the lack of electron-hole symmetry in the corresponding Hubbard problem.
It is important to remember that the electron creation and annihilation operators in these models act on a system with a fixed number and arrangement of sites. In a semiconductor, each site corresponds to a dopant atom, and when we speak of adding electrons or holes to the system we mean addition or subtraction of carriers while \emph{leaving the underlying dopant configuration fixed}. Thus, the electron-hole asymmetry here is \emph{not} an asymmetry between n-type and p-type semiconductors, but an asymmetry between a doped semiconductor which has more electrons than dopant atoms and one which has fewer electrons than dopant atoms.
Hirsch has investigated a similar Hubbard model with occupation-dependent hopping, but in a different regime with its focus on superconducting pairs.\cite{HirschOccDepHopping_1995} We proceed with semiconductors in mind, and to allow for the random placement of sites, we add positional dependence to the hopping amplitude in Eq.~\eqref{eqnHubHamOccDep}, similar to the modification yielding Eq.~\eqref{eqnHubHamDisordered} earlier, to arrive at:
\begin{equation}
\mathcal{H}_{rdm}^* = - \sum_{i,j,\sigma} \left( t_{ij}(n_i,n_j)c^\dag_{i\sigma} c_{j\sigma} + \mbox{h.c.} \right) + U\sum_i n_{i\uparrow}n_{i\downarrow} \label{eqnHubHamDisorderedOccDep}
\end{equation}
where $n_i$ is the total occupation of site $i$, and $t_{ij}$ now has an occupation dependence given by:
\begin{equation}
t_{ij}(n_i,n_j) = \left\{ \begin{array}{cc}
\ensuremath{\tilde{t}}_{ij} & n_j=1 \,\, \mbox{and}\,\, n_i=2 \\
\ensuremath{t}_{ij} & \mbox{otherwise}
\end{array} \right. \label{eqnPiecewiseTDisordered}
\end{equation}
One way to view the manifest electron-hole asymmetry of models (\ref{eqnHubHamOccDep}), (\ref{tJEffModel}), and (\ref{eqnHubHamDisorderedOccDep}) is that systems above half-filling are effectively \emph{less random}, and hold greater hope for the Nagaoka phenomenon to take place. This reasoning follows from electrons having more extended wavefunctions than holes, and the concomitant existence of two distinct length scales. Because the electron wavefunctions average over much more of the disorder, systems with a small percentage of extra electrons experience a greatly reduced effect of the positional disorder when compared with corresponding hole-doped (\emph{i.e.}~compensated) systems, and so behave more like the uniform lattice.
Hope for Nagaoka ferromagnetism in electron-doped semiconductors is also found by considering the relation of conventional doped semiconductors to diluted magnetic semiconductors (DMS), for which ferromagnetism does co-exist with disorder. In one type of DMS (III-V), a transition metal atom acts as both a dopant and a local moment (coming from the unfilled d-shell of the atom). For instance, in Ga$_{1-x}$Mn$_x$As,\cite{Ohno_1998,ChibaOhno_2003} the Mn atom acts as an acceptor (p-type) and local moment. These systems also have substantial disorder (due to dopant positions and anti-site defects in the semiconductor itself, \emph{e.g.}~As on Ga sites), but possess macroscopic ferromagnetism for temperatures up to 100K!\cite{Ohno_1998} Thus, disorder by itself does not always destroy ferromagnetism; in fact, in some cases it may even enhance the ferromagnetic transition temperature.\cite{Berciu_DMS_2001,Kennett_2002}
One important difference between conventional ``non-magnetic'' doped semiconductors and DMS is that there exist in the latter two distinct length scales -- the Bohr radius of the Mn hole wavefunction ($\sim 10$ \AA) and the extent of the localized spin on the Mn ($\sim 1-2$ \AA). Thus, each hole's wavefunction extends over several Mn spins, a phenomenon which is only accentuated as holes delocalize further at higher Mn density. This allows the carrier-magnetic moment interaction to dominate, resulting in a FM ground state\cite{Berciu_DMS_2001,Berciu_DMS_2004}. In the electron-doped semiconductor, the Bohr radius of the electrons that singly-occupy sites (which give rise to the effective AF exchange interaction $J\sim\ensuremath{t}^2/U$) is much smaller than the radius of the electrons which doubly-occupy a site. This dichotomy of length scales could similarly conspire to result in carrier hopping being dominant and ultimately a ferromagnetic (Nagaoka) ground state. (The other difference, of course, is the existence of multiple bands in DMS, which facilitates FM.)
\subsection{Parameter ranges and calculation details\label{secCalcDetails}}
The first step in our analysis of the Hamiltonians (\ref{eqnHubHamOccDep}) and (\ref{eqnHubHamDisorderedOccDep}) is to find values (or ranges of values) appropriate for their parameters. The models are described by the dimensionless ratios $\ensuremath{\tilde{t}}/\ensuremath{t}$ and $U/\ensuremath{t}$ (which depend on a pair of site indices in the case of Eq.~\eqref{eqnHubHamDisorderedOccDep}). To find values of $U/t$ and $\ensuremath{\tilde{t}}/\ensuremath{t}$ appropriate for doped semiconductors, we performed a calculation of the single particle states of donors placed on a simple cubic lattice. Note that although much of our work deals with 2D systems, atomic hydrogen is intrinsically a 3D problem, and thus the calculation of realistic parameters for a system of many hydrogenic centers should likewise be in three dimensions. We choose the simplest such 3D arrangement of centers, the simple cubic lattice.
As already stated, a hydrogen ($H$) atom binds its electron with a strength of 1 $\ensuremath{\mathrm{Ry^*}}$ and will bind a second electron with $0.0555\,\ensuremath{\mathrm{Ry^*}}$ to form a $H^-$ ion. If all of the dopants are positioned on a superlattice, then these two levels broaden in the usual manner into two impurity bands. The exact details of these bands depend on the spin configuration in the ground state. Due to the $H^-$ ion's wavefunction being more spatially extended than that of the $H$ atom, the width of the upper impurity band is significantly greater than that of the lower band.
We have calculated these bands for a ferromagnetic configuration of spins in the ground state of a filled lower band (\emph{i.e.}~the uncompensated case). We follow Bhatt and Rice,\cite{BhattRice_1981} and use pseudopotentials and a sphericalized Wigner-Seitz (WS) method on a cubic superlattice. Details of the band calculation can be found elsewhere.\cite{NielsenBhattTransport} We then extract the dependence of $\ensuremath{t}$ and $\ensuremath{\tilde{t}}$ on the impurity density (or equivalently, on the lattice constant) by fitting the calculated bandwidths to a tight binding model. Using the well-known tight binding relationship between hopping parameter and bandwidth on un-frustrated lattices yields (where $z$ is the lattice coordination number):
\begin{eqnarray}
2z\ensuremath{t} &=& \mbox{width of lower band} \nonumber \\
2z\ensuremath{\tilde{t}} &=& \mbox{width of upper band} \label{eqnTightBindingFit} \\
U &=& \mbox{band gap at zero density} \,. \nonumber
\end{eqnarray}
We find $U \approx 1 \ensuremath{\mathrm{Ry^*}}$ and, by matching the bandwidths for the 3D case, we obtain the tight binding parameters $\ensuremath{t}(b)$, $\ensuremath{\tilde{t}}(b)$. Figure \ref{figParamRatiosVsLatSpacing} shows the dependence of the dimensionless Hubbard parameter ratios on the superlattice spacing (lower axis) and impurity density (upper axis). It shows clearly that the range of $U/t$ and $\ensuremath{\tilde{t}}/\ensuremath{t}$ can be varied substantially in the doped semiconductors. The large span of $U/t$ originates in the exponential dependence of the hopping parameter on the atomic spacing; the variation of $\ensuremath{\tilde{t}}/\ensuremath{t}$ stems from the relatively large size of the two-electron wavefunction, which appears as a length scale in this exponential.
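Schematically, the inversion of Eq.~\eqref{eqnTightBindingFit} is straightforward; in the sketch below the bandwidth values are placeholders, not our calculated ones:

\begin{verbatim}
z = 6                                    # simple cubic coordination number

def fit_hubbard_ratios(W_lower, W_upper, gap=1.0):
    # Invert 2 z t = W_lower, 2 z t~ = W_upper, with U ~ 1 Ry* (band
    # gap at zero density), returning the dimensionless ratios.
    t, t_tilde = W_lower / (2 * z), W_upper / (2 * z)
    return gap / t, t_tilde / t          # (U/t, t~/t)

print(fit_hubbard_ratios(0.012, 0.09))   # hypothetical widths in Ry*
\end{verbatim}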
\begin{figure}[h]
\begin{center}
\includegraphics[width=3in]{figs/tightBindingParams3.ps}
\caption{Variation of ratios $U/t$ and $\ensuremath{\tilde{t}}/\ensuremath{t}$ with the dopant spacing (related to the dopant density $\rho$ by $\rho = \frac{1}{R^3}$, so the metal-insulator transition occurs at $R_c/a_{\mathrm{B}} = 4$). \label{figParamRatiosVsLatSpacing}}
\end{center}
\end{figure}
In the results that follow, we either use the exact parameter ratios found here or consider the effect of varying the parameter ratios within the ranges $U/t=[5,100]$ and $\ensuremath{\tilde{t}}/\ensuremath{t}=[1,10]$, which are conservative when compared to the physically attainable ranges.
After determining the parameter ranges of interest, we solve both Hubbard and $t-J$ models on finite systems. We numerically find the ground state, and determine how its spin depends on $\ensuremath{\tilde{t}}/\ensuremath{t}$, $U/\ensuremath{t}$, system size, and system geometry. Hamiltonians (\ref{eqnHubHamOccDep}) and (\ref{eqnHubHamDisorderedOccDep}) were solved using exact diagonalization, ultimately using a generalization of the Lanczos method.\cite{Calvetti_1994} With four states allowed on a site, the Hilbert space grows exponentially in the number of sites, restricting the size of tractable systems significantly. Several optimizations have been exploited to push back this computational barrier. First, since both Hubbard and $t-J$ Hamiltonians commute with the $z$-component of total spin, it follows from the properties of the SU(2) group that we can restrict the Hilbert space to the minimal $S_z$ sector without reducing the support of the spectrum. Second, all spatial symmetries are utilized via group theoretic techniques to divide the Hilbert space into sectors for which the Hamiltonian matrix is block diagonal. Third, we factorize the action of the Hubbard Hamiltonian into ``up spin'' and ``down spin'' parts, allowing more efficient computation of the matrix elements. In the $t-J$ model, this can be done only for the kinetic term.
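For concreteness, the following is a minimal exact-diagonalization sketch for the \emph{standard} Hubbard chain (Eq.~\eqref{eqnHubHamOriginal}; the occupation-dependent hopping and the symmetry reductions described above are omitted for brevity), illustrating the up/down factorization: the kinetic term acts as $K_\uparrow \otimes 1 + 1 \otimes K_\downarrow$, and only the $U$ term couples the two factors. It assumes \texttt{numpy} and \texttt{scipy}; the parameter values are arbitrary.

\begin{verbatim}
import numpy as np
from itertools import combinations
from scipy.sparse import identity, kron, diags, lil_matrix
from scipy.sparse.linalg import eigsh

Ns, t, U, N_up, N_dn = 6, 1.0, 8.0, 4, 3   # Ne = 7, minimal S_z sector

def sector(N):
    # Basis of N-electron configurations (bit masks) and kinetic matrix.
    states = [sum(1 << i for i in occ)
              for occ in combinations(range(Ns), N)]
    index = {s: k for k, s in enumerate(states)}
    K = lil_matrix((len(states), len(states)))
    for k, s in enumerate(states):
        for i in range(Ns - 1):            # open chain, bonds (i, i+1)
            if (s >> i) & 1 and not (s >> (i + 1)) & 1:
                k2 = index[s ^ (1 << i) ^ (1 << (i + 1))]
                # Fermion sign is +1 for nearest-neighbor hops in 1D.
                K[k2, k] = K[k, k2] = -t
    occ = np.array([[(s >> i) & 1 for i in range(Ns)] for s in states])
    return K.tocsr(), occ

K_up, occ_up = sector(N_up)
K_dn, occ_dn = sector(N_dn)
H = (kron(K_up, identity(K_dn.shape[0]))
     + kron(identity(K_up.shape[0]), K_dn))
# U times the number of doubly-occupied sites, diagonal in this basis.
H = H + U * diags((occ_up[:, None, :] * occ_dn[None, :, :]).sum(-1).ravel())
print(eigsh(H, k=1, which='SA', return_eigenvectors=False))  # E_0
\end{verbatim}

Restricting to fixed $(N_\uparrow, N_\downarrow)$ as above is precisely the $S_z$-sector restriction mentioned in the text.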
\section{Results for ground state spin in finite clusters\label{secHubbardResults}}
Here we present the results of solving our generalized Hubbard model on finite systems. The results and discussion are divided into units based on the amount of structure present in the system and the type of boundary conditions used. Section \ref{secFiniteLattices} considers systems with finite lattice structure and periodic boundary conditions. Note that only nearest neighbor links are kept in the model (see Eq.~\eqref{eqnHubHamOccDep}), so that there is a single pair ($\ensuremath{t},\ensuremath{\tilde{t}}$) of kinetic parameters. We refer to a lattice as being bipartite or non-bipartite if the corresponding Hubbard model with only nearest neighbor hopping is respectively bipartite or not. Section \ref{secSelectedClusters} presents results from clusters with open boundary conditions and selected structures for which all nearest neighbors are equidistant (so there is again a single pair of kinetic parameters). We use the term \emph{cluster} in this section to refer to a finite system possessing less symmetry than a finite lattice. In section \ref{secGeomDistorted}, clusters constructed to have only two or three pairs of kinetic parameters are considered with open boundary conditions. There we also describe a method of adding random perturbations to clusters, and present results for several cases. Finally, in section \ref{secFixedDensityClusters} we analyze ensembles of random clusters. We generate these ensembles at fixed density, and exact diagonalization results for the individual clusters are averaged to produce our final results.
\begin{figure*}
\begin{center}
\begin{tabular}{|c|c|} \hline
Square & \rule[-0.9in]{0in}{1.9in}\parbox{4.7in}{
\begin{tabular}{c}
\includegraphics[height=1.4in]{figs/sq8_nl.eps}\\ 8 sites \end{tabular}
\hspace{0.5cm}
\begin{tabular}{c}
\includegraphics[height=1.2in]{figs/sq10_nl.eps}\\10 sites \end{tabular}
\hspace{0.5cm}
\begin{tabular}{c}
\includegraphics[height=1.4in]{figs/sq16_nl.eps}\\16 sites \end{tabular}}
\\ \hline
Honeycomb & \rule[-0.8in]{0in}{1.7in}\parbox{4.5in}{
\begin{tabular}{c}
\includegraphics[height=1.2in]{figs/honey6_nl.eps}\\6 sites \end{tabular}
\hspace{1cm}
\begin{tabular}{c}
\includegraphics[height=1.2in]{figs/honey10_nl.eps}\\10 sites \end{tabular}}
\\ \hline
Triangular & \rule[-0.8in]{0in}{1.7in}\parbox{4.5in}{
\begin{tabular}{c}
\includegraphics[height=1.2in]{figs/tri7_nl.eps}\\7 sites \end{tabular}
\hspace{1cm}
\begin{tabular}{c}
\includegraphics[height=1.2in]{figs/tri9_nl.eps}\\9 sites \end{tabular}}
\\ \hline
\end{tabular}
\caption{Lattice geometries for the square, honeycomb and triangular lattices used in this section. The lines connect sites of the finite lattice, which is repeated to show how periodic boundary conditions are implemented. \label{figLatticeGeometries}}
\end{center}
\end{figure*}
\subsection{Finite Lattices \label{secFiniteLattices}}
We have solved the nearest-neighbor Hubbard and corresponding $\ensuremath{\tilde{t}}-J$ models on finite square (8, 10, and 16 sites), honeycomb (6 and 10 sites), and triangular (7 and 9 sites) lattices. These are shown in Fig.~\ref{figLatticeGeometries} with the sites of a single unit cell connected, so that the method of applying periodic boundary conditions in each case is clear. Note that the choice of unit cell for all of the bipartite lattices (square and honeycomb) allows a classical Neel state spin assignment, where all of a site's nearest neighbors have spin opposite to it. This requirement is important since a finite bipartite lattice that is magnetically frustrated due to boundary conditions may have an exaggerated preference for FM.
Each finite lattice, with periodic boundary conditions, was doped with up to two electrons or holes away from half-filling. Denoting the number of electrons $\ensuremath{N_e}$, this means that $\ensuremath{N_s} - 2 \le \ensuremath{N_e} \le \ensuremath{N_s} + 2$. The Hubbard model depends on the two dimensionless ratios $U/\ensuremath{t}$ and $\ensuremath{\tilde{t}}/\ensuremath{t}$, whereas the $\ensuremath{\tilde{t}}-J$ model depends only on $\ensuremath{\tilde{t}}/J = \frac{1}{4}(\ensuremath{\tilde{t}}/\ensuremath{t})(U/\ensuremath{t})$. Thus, the value of $\ensuremath{\tilde{t}}/J$ marking the onset of the Nagaoka state defines a straight line in $\log U/\ensuremath{t}$ vs.~$\log \ensuremath{\tilde{t}}/\ensuremath{t}$ space with slope $-1$, as made explicit below.
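Explicitly, writing $(\ensuremath{\tilde{t}}/J)_c$ for the onset value, the boundary is
\begin{equation*}
\frac{U}{\ensuremath{t}} \;=\; 4\left(\frac{\ensuremath{\tilde{t}}}{J}\right)_{\!c}\left(\frac{\ensuremath{\tilde{t}}}{\ensuremath{t}}\right)^{-1}
\quad\Longrightarrow\quad
\log\frac{U}{\ensuremath{t}} \;=\; \log\!\left[4\left(\frac{\ensuremath{\tilde{t}}}{J}\right)_{\!c}\right] \;-\; \log\frac{\ensuremath{\tilde{t}}}{\ensuremath{t}}\,.
\end{equation*}
We consider each lattice in turn below.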
\vspace{0.5in}
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/sqLatSummary.ps}
\renewcommand{\baselinestretch}{1}\normalsize
\caption{(Color online) Ground state spin diagram resulting from the exact diagonalization of Eq.~\eqref{eqnHubHamOccDep} on 8-,10-, and 16-site square lattices (periodic b.c.) with 9, 11, and 17 electrons respectively. Hubbard model results are displayed as open symbols. Lines show the result of the corresponding $\ensuremath{\tilde{t}}-J$ model as described in the text. $S_{max}$ denotes the region of largest allowed spin (actual value depends on the lattice size), and $S_{low}$ marks the region of unsaturated (usually minimal) ground state spin. \label{figSqLatticeResults}}
\end{center}
\end{figure}
\subsubsection{Square Lattice}
The square lattice, the stereotypical 2D lattice, is bipartite and is itself a Bravais lattice. Figure \ref{figSqLatticeResults} shows the ground state spin phase diagram for the 8-, 10-, and 16-site square lattices doped with one electron, up to $\ensuremath{\tilde{t}}/\ensuremath{t}=5$. One sees that an increase in $\ensuremath{\tilde{t}}/\ensuremath{t}$ causes the region where the ground state attains its maximum spin to increase. This confirms our intuition about the model, that a FM ground state is more likely when the carriers (an extra electron in this case) have greater hopping amplitude. (Recall that a greater hopping amplitude increases the kinetic energy gain of a delocalized electron in a background of \emph{aligned} spins relative to the case when the background spins are in an AF or random arrangement.) Up to $\ensuremath{\tilde{t}}/\ensuremath{t}=5$, the minimal $U/t$ needed for a fully polarized ground state falls roughly as a power law with $\ensuremath{\tilde{t}}/\ensuremath{t}$. The $\ensuremath{\tilde{t}}-J$ model gives a fairly accurate fit to the Hubbard data (predicting a power law with exponent -1, shown by the lines in Fig.~\ref{figSqLatticeResults}). The fit is especially good at low $\ensuremath{\tilde{t}}/\ensuremath{t}$, which coincides with larger $U/t$ values and thus is where we expect the $\ensuremath{\tilde{t}}-J$ model to be most accurate. Beyond $\ensuremath{\tilde{t}}/\ensuremath{t}=5$, the same general trend is observed, but the phase diagram becomes more complicated as regions of intermediate polarization arise, making the transition from low spin to maximal spin less abrupt (and closer to a second order transition). This behavior is shown in Fig.~\ref{fig16siteIntermediateSpins} for the 16-site square lattice with $\ensuremath{N_e}=17$. The $\ensuremath{\tilde{t}}-J$ line in this case runs through the regions of intermediate spin, though the model itself gives a direct transition from minimal to fully saturated ground state spin.
\begin{figure}
\begin{center}
\includegraphics[width=2.7in]{figs/sq16_17all.eps}
\renewcommand{\baselinestretch}{1}\normalsize
\caption{Detailed ground state spin diagram of the 16-site square lattice with 17 electrons. Labels indicate the ground state's total spin. As $\ensuremath{\tilde{t}}/\ensuremath{t}$ increases beyond 3, the transition to a maximally polarized state is less abrupt and regions of partial spin polarization exist. We find that the $\ensuremath{\tilde{t}}-J$ model gives a direct transition from $S=\frac{1}{2}$ to $S=\frac{15}{2}$, which is shown as the dashed line.\label{fig16siteIntermediateSpins}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/sqLatEHCompare.ps}
\renewcommand{\baselinestretch}{1}\normalsize
\caption{(Color online) Ground state spin diagram for the 8-, 10-, and 16-site square lattices showing the asymmetry between doping with a single hole (dashed line) and a single electron (solid line).
\label{figSingleHoleVsElectron}}
\end{center}
\end{figure}
A comparison of these electron-doped systems with corresponding hole-doped systems reveals a pronounced electron-hole asymmetry. This is expected from the model, since for $\ensuremath{\tilde{t}} \ne \ensuremath{t}$ the Hamiltonian is not electron-hole symmetric: electrons hop with $\ensuremath{\tilde{t}}$ whereas holes hop with amplitude $\ensuremath{t}$. Figure \ref{figSingleHoleVsElectron} compares the Hubbard model with $\ensuremath{N_e} = \ensuremath{N_s} \pm 1$ (one extra electron or one hole) on finite square lattices. In the larger 10- and 16-site lattices with one hole we see very little dependence of the ground state spin on $\ensuremath{\tilde{t}}/\ensuremath{t}$, as would be naively expected. [In the 8-site square lattice an increase in $\ensuremath{\tilde{t}}/\ensuremath{t}$ actually hinders ferromagnetism, seen by an increase in the $U/t$ necessary to reach the totally spin-polarized state. This is most likely a finite size effect, but may have interesting ramifications in the context of finite clusters (see section \ref{secGeomDistorted} below)]. It is clear that the asymmetry between the electron- and hole-doped results originates from the electronic states having greater radius than the hole states, since for equal radii ($\ensuremath{\tilde{t}} = \ensuremath{t}$) the square lattice is bipartite and the problem is electron-hole symmetric. Figure \ref{figSingleHoleVsElectron} is the first of many that illustrate a central result of this paper: high-spin ground states are attained at \emph{much} lower $U/t$ in the electron-doped case than in the hole-doped case.
The ground state spin of the Hubbard model on finite 2D square lattices with periodic boundary conditions is known\cite{RieraYoung_HubSq16_1989} to behave somewhat erratically as a function of the number of electrons ($\ensuremath{N_e}$), and techniques involving an average over varied boundary conditions have had some success at smoothing out, as well as explaining, this behavior.\cite{Gros_1996} We do not address these issues here; instead we focus on the square lattice at two dopings that are known to give high-spin ground states when used with periodic boundary conditions. In addition to the single electron or hole configurations already described, the 16-site square lattice with 4 extra electrons ($\ensuremath{N_e}=20$) is known to have a ground state spin of maximal value ($S=5$). Figure \ref{fig20e} shows the effect of varying $\ensuremath{\tilde{t}}/\ensuremath{t}$ in this case, and we see, similarly to the case of a single carrier, that increasing $\ensuremath{\tilde{t}}/\ensuremath{t}$ decreases the value of $U/t$ needed to attain the fully saturated ground state.
\begin{figure}[H]
\begin{center}
\includegraphics[width=3in]{figs/sq16_20.ps}
\renewcommand{\baselinestretch}{1}\normalsize
\caption{Ground state phase diagram of the 16-site square lattice with 4 electrons above half-filling (20 electrons total). The line is a spline fit, and is provided as a guide for the eye.
\label{fig20e}}
\end{center}
\end{figure}
\subsubsection{Honeycomb Lattice}
There has been a revived interest in the honeycomb lattice since the recent surge in graphene-related research. Though it is not itself a Bravais lattice (it is a triangular lattice with a two-point basis), the honeycomb lattice is bipartite and thus the Hubbard model is electron-hole symmetric on it for $\ensuremath{\tilde{t}}=\ensuremath{t}$. The mean-field ground state phase diagram of the $\ensuremath{\tilde{t}}=\ensuremath{t}$ Hubbard model for hole-doped systems shows the existence and stability of the Nagaoka phase at large $U/\ensuremath{t}$ near half-filling.\cite{Peres_2004} The magnetic ground state diagrams for Hamiltonian (\ref{eqnHubHamOccDep}) on 6- and 10-site honeycomb lattices with one electron or hole away from half-filling ($\ensuremath{N_e} = \ensuremath{N_s} \pm 1$) are shown in Figs.~\ref{figHoney1ExtraElec} and \ref{figHoney1ExtraHole} respectively.
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/honeyExtraE.ps}
\caption{(Color online) Ground state spin diagram from the exact diagonalization of 6- and 10-site honeycomb lattices doped with a single electron (\emph{i.e.}~with 7 and 11 electrons respectively) showing the boundary of the region where there is a complete spin polarization. In the 6-site lattice the transition is from $S=\frac{5}{2}$ to $S=\frac{3}{2}$, whereas in the 10-site lattice the transition is more abrupt, changing from $S=\frac{9}{2}$ to $S=\frac{1}{2}$ within the resolution used.
\label{figHoney1ExtraElec}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/honeyExtraHole.ps}
\caption{(Color online) Exact diagonalization results showing the boundary of the fully spin polarized region on the 6- and 10-site honeycomb lattices doped with a single hole (\emph{i.e.}~with 5 and 9 electrons respectively). In the 10-site case, the spin on the unsaturated side of the transition is $S=\frac{1}{2}$ except for a region of $S=\frac{5}{2}$ found at intermediate $U/\ensuremath{t}$ for $\ensuremath{\tilde{t}}/\ensuremath{t} > 10$; on the 6-site lattice the unsaturated state has uniform spin $\frac{3}{2}$. Note that there is much less variation with respect to $\ensuremath{\tilde{t}}/\ensuremath{t}$ when compared with Fig.~\ref{figHoney1ExtraElec}.
\label{figHoney1ExtraHole}}
\end{center}
\end{figure}
We find similar qualitative behavior to that of the square lattices: for systems with $\ensuremath{N_e} = \ensuremath{N_s} + 1$, increasing $\ensuremath{\tilde{t}}/\ensuremath{t}$ expands the region of phase space for which the spin is maximal. Again, the $\ensuremath{\tilde{t}}-J$ model result agrees well with the Hubbard results for low $\ensuremath{\tilde{t}}/\ensuremath{t}$. In the case of single hole-doping ($\ensuremath{N_e} = \ensuremath{N_s} - 1$), there is little dependence on $\ensuremath{\tilde{t}}/\ensuremath{t}$ in the 10-site lattice whereas there is the opposite $\ensuremath{\tilde{t}}/\ensuremath{t}$ dependence in the smaller 6-site lattice, similar to the case of the 8-site square lattice.
\subsubsection{Triangular Lattice}
The triangular lattice is a Bravais lattice of particular interest, since it is magnetically frustrated (it is not bipartite). A recent study of the triangular lattice\cite{GhoshSingh_2008} using a many-body expansion technique finds that, at large $U/t$, a $120\,^{\circ}$-ordered AF phase is stable at and below half-filling, and becomes unstable above half-filling. In past studies of finite clusters, it was likewise found that at half-filling antiferromagnetic states are optimal in \emph{non-}bipartite systems (due to the quantum fluctuations arising from what would be frustrated bonds in a static picture).\cite{PastorHirschMuhlschlegel_1994}
With a single extra electron ($\ensuremath{N_e} = \ensuremath{N_s}+1$), the Hubbard model on 7- and 9-site lattices displays saturated ferromagnetism very strongly (on the 9-site lattice with $\ensuremath{\tilde{t}}=\ensuremath{t}$, $U/t\approx 15$ results in a spin polarized ground state). Figure \ref{figTri1ExtraElec} shows our results for the Hubbard model on finite triangular lattices with one extra electron. Classically, the observed dominance of ferromagnetism could be linked to a suppression of competing AF configurations (frustrated on the triangular lattice). One must be careful, however, when applying this reasoning to quantum models, as studies have shown that antiferromagnetism is \emph{enhanced} on the triangular lattice with a single hole\cite{HaerterShastryAFTriangle_2005} due to the subtle interplay of quantum phases. The prevalence of ferromagnetism may also be due to the large number of tight loops in the lattice. Pastor \emph{et al.}\cite{PastorHirschMuhlschlegel_1996}~have remarked that the presence of triangular or square loops coincides with ferromagnetism in finite clusters, and we reach similar findings in our study of clusters below (see sections \ref{secSelectedClusters} and \ref{secGeomDistorted}). The strong FM we see here suggests that this connection extends to lattices as well.
The $\ensuremath{\tilde{t}}-J$ data for the triangular lattice fits the Hubbard data less well than for the bipartite lattices considered above. For the 9-site triangular lattice the $\ensuremath{\tilde{t}}-J$ result underestimates the region of saturated spin, and in the case of the 7-site triangular lattice, the Hubbard model does not even transition to the unsaturated state predicted by the $\ensuremath{\tilde{t}}-J$ model. The discrepancy is not an immediate cause for concern, and might even be expected, given the low $U/t$ values at which the transitions occur.
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/triExtraE.ps}
\caption {(Color online) Ground state spin diagram from the exact diagonalization of 7- and 9-site triangular lattices when doped with a single electron, showing the region of saturated spin. On the 9-site lattice, the unsaturated region is predominantly $S=0$ except for a sliver of $S=2$ close to the transition. There is no transition on the Hubbard 7-site lattice, which has a maximally polarized ground state ($S=3$) for the entire plotted area. In the corresponding $\ensuremath{\tilde{t}}-J$ model, however, the 7-site lattice has a transition from $S=3$ to $S=2$ near $\ensuremath{\tilde{t}} / J \approx 3.0$ (shown by the dotted line).
\label{figTri1ExtraElec}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/triExtraHole.ps}
\caption {(Color online) Ground state spin diagram for the 7- and 9-site triangular lattices doped with a single hole. Nowhere is the ground state spin saturated. Instead, there is a region of minimal spin ($S=0$) at large $U/t$ which is encroached upon by a region of partial spin polarization ($S=2$ and $S=3$ for 7- and 9-sites respectively) as $\ensuremath{\tilde{t}}/\ensuremath{t}$ increases.
\label{figTri1Hole}}
\end{center}
\end{figure}
Since the triangular lattice is not bipartite, there can be (and is) electron-hole asymmetry even when $\ensuremath{\tilde{t}} = \ensuremath{t}$. Figure \ref{figTri1Hole} shows the ground state phase diagram for single hole-doped 7- and 9-site triangular lattices ($\ensuremath{N_e} = \ensuremath{N_s} - 1$). These plots are qualitatively different from those of the hole-doped square and honeycomb lattices: the high-spin region is unsaturated and lies at \emph{lower} $U/\ensuremath{t}$ than a minimal-spin region which dominates at large $U/\ensuremath{t}$. As $\ensuremath{\tilde{t}}/\ensuremath{t}$ is increased, the partially polarized region expands up to larger $U/\ensuremath{t}$ values. The mechanism for this may be related to the ``kinetic antiferromagnetism'' studied by Haerter and Shastry,\cite{HaerterShastryAFTriangle_2005} which explains how the phase dependence of a single hole's motion enhances antiferromagnetism.
\subsection{Selected Symmetric Clusters \label{secSelectedClusters}}
Next we consider a select group of two-dimensional Hubbard clusters that, like the finite lattices, have only a single pair of hopping amplitudes, $\ensuremath{t}$ and $\ensuremath{\tilde{t}}$. Unlike the lattices, these clusters are given \emph{open} boundary conditions. This corresponds to the physical situation in which a small number of sites (dopants or quantum dots) are positioned in a plane such that every pair of nearest neighbors is equidistant. Pastor \emph{et al.}\cite{PastorHirschMuhlschlegel_1994,PastorHirschMuhlschlegel_1996}~have studied the ordinary Hubbard model (Eq.~\eqref{eqnHubHamOriginal}) on all possible geometrically realizable clusters in two and three dimensions. Our analysis of cluster structure here is not as exhaustive, but we calculate the phase diagram along the $\ensuremath{\tilde{t}}/\ensuremath{t}$ axis. Clusters are chosen to lie in the plane so as to retain some spatial symmetries, and their ground state spin is calculated for $1 \le \ensuremath{\tilde{t}}/\ensuremath{t} \le 10$ and $5 < U/t < 100$ when doped with 1 or 2 electrons away from half-filling (in either direction). Figure \ref{figSingleHopSummary} summarizes the results, giving each cluster's geometric structure and its maximal spin as a function of doping. We see that in most cases, the highest spin is attained when doped with a single electron, following our expectation that a low density of extra electrons will favor spin polarization. Indeed, clusters 1-4, 6, and 7 attain their \emph{maximal} ground state spin when doped with one electron. In contrast, clusters 5 and 9 have greater spin polarization below half-filling, with maximal polarization when doped with two holes.
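To make the preceding procedure concrete, the following minimal sketch (ours, not production code; all names are illustrative) computes the ground state spin of a small cluster by exact diagonalization in the ordinary Hubbard limit $\ensuremath{\tilde{t}}=\ensuremath{t}$ of Eq.~\eqref{eqnHubHamOriginal}; the occupancy-dependent amplitude $\ensuremath{\tilde{t}}$ would enter by making each hop amplitude depend on the occupation of the orbitals involved, which we omit for brevity. The total spin $S$ is read off as the largest $|S_z|$ whose sector reaches the global ground energy.
\begin{verbatim}
# Minimal exact-diagonalization sketch (ordinary Hubbard limit,
# ttilde = t); illustrative names, not the production code.
import itertools
import numpy as np

def hop_sign(mask, i, j):
    # Fermionic sign for a hop between orbitals i and j of one spin
    # species: parity of the occupied orbitals strictly between them.
    lo, hi = (i, j) if i < j else (j, i)
    between = mask & (((1 << hi) - 1) ^ ((1 << (lo + 1)) - 1))
    return -1.0 if bin(between).count("1") % 2 else 1.0

def sector_ground_energy(n_sites, n_up, n_dn, hops, U):
    # Dense Hamiltonian in the fixed (n_up, n_dn) sector; adequate
    # for the cluster sizes (N_s <= 8) treated in this work.
    ups = [sum(1 << k for k in c)
           for c in itertools.combinations(range(n_sites), n_up)]
    dns = [sum(1 << k for k in c)
           for c in itertools.combinations(range(n_sites), n_dn)]
    iu = {m: a for a, m in enumerate(ups)}
    idn = {m: b for b, m in enumerate(dns)}
    H = np.zeros((len(ups) * len(dns), len(ups) * len(dns)))
    for a, mu in enumerate(ups):
        for b, md in enumerate(dns):
            row = a * len(dns) + b
            H[row, row] = U * bin(mu & md).count("1")  # doubly occupied sites
            for (i, j), t in hops.items():
                for src, dst in ((i, j), (j, i)):
                    if mu >> src & 1 and not mu >> dst & 1:   # spin-up hop
                        m2 = mu ^ (1 << src) ^ (1 << dst)
                        col = iu[m2] * len(dns) + b
                        H[col, row] -= t * hop_sign(mu, src, dst)
                    if md >> src & 1 and not md >> dst & 1:   # spin-down hop
                        m2 = md ^ (1 << src) ^ (1 << dst)
                        col = a * len(dns) + idn[m2]
                        H[col, row] -= t * hop_sign(md, src, dst)
    return np.linalg.eigvalsh(H)[0]

def ground_state_spin(n_sites, n_elec, hops, U, tol=1e-9):
    # S = largest |S_z| whose sector ground energy equals the global one.
    energies = {}
    for n_up in range((n_elec + 1) // 2, min(n_elec, n_sites) + 1):
        n_dn = n_elec - n_up
        if not 0 <= n_dn <= n_sites:
            continue
        sz = (n_up - n_dn) / 2.0
        energies[sz] = sector_ground_energy(n_sites, n_up, n_dn, hops, U)
    e0 = min(energies.values())
    return max(sz for sz, e in energies.items() if e - e0 < tol)

# Example: triangle (cluster 2) doped with one electron.
hops = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
print(ground_state_spin(3, 4, hops, U=50.0))
\end{verbatim}
With the conventional $-t\sum c^{\dagger}c$ sign convention assumed here, the example should print the saturated value $S=1$ for the electron-doped triangle, consistent with Fig.~\ref{figSingleHopSummary}.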
\begin{figure}
\begin{center}
\includegraphics[width=3.3in]{figs/singleHopClusters.eps}
\renewcommand{\baselinestretch}{1}\normalsize
\caption{ Summary of clusters that have a single pair of hopping parameters. $S_{max}^x$ is the maximum spin obtained in the window $\ensuremath{\tilde{t}}/\ensuremath{t} \in [1,10]$, $U/t \in [5,100]$ when the system has 1 or 2 holes or electrons away from half-filling ($x = 1h,2h,1e,2e$ respectively). Note the correspondence of high-spin states with larger numbers of tight loops.
\label{figSingleHopSummary}}
\end{center}
\end{figure}
We call clusters 1-3, 5, and 7 ``ring-like'', since each is equivalent to a one-dimensional chain of sites with periodic boundary conditions. In the pair and triangle (clusters 1 and 2), the spin listed in Fig.~\ref{figSingleHopSummary} is the only spin found in the considered parameter range. Figure \ref{figSingleHopRingDiagrams} compares the ground state phase diagrams of the remaining clusters with $|\ensuremath{N_e}^*-\ensuremath{N_s}|$ electrons above and below half-filling, for all values of $\ensuremath{N_e}^*$ such that the resulting phase diagrams are non-trivial (have at least two spin regions). We see from Figs.~\ref{figSingleHopSummary} and \ref{figSingleHopRingDiagrams} that the triangle and square show the greatest percentage spin polarization above half-filling (both have maximally polarized ground states, the square only at large $\ensuremath{\tilde{t}}/\ensuremath{t}$). Also note the $\ensuremath{\tilde{t}}/\ensuremath{t}$ dependence of the square with one hole vs.~with one electron, where we see behavior similar to that of the square lattices. The pentagon is unusual in that it has higher ground state spin when hole-doped: a fully-polarized ground state occurs at large $U/t$ when the system is doped with two holes. Lastly, the hexagon shows very little $\ensuremath{\tilde{t}}/\ensuremath{t}$ dependence, though with two holes ($4e^-$) larger $\ensuremath{\tilde{t}}/\ensuremath{t}$ creates an interval in $U/t$ with low spin ($S=0$). This behavior was also seen in the hole-doped bipartite lattices of section \ref{secFiniteLattices}.
\begin{figure*}
\begin{center}
\begin{tabular}{|c|cc|} \hline
Geometry & \multicolumn{2}{c|}{Ground state phase diagrams} \\ \hline
\includegraphics[width=0.5in]{figs/sq4_nl.eps} &
\parbox{2in}{\vspace{.1cm}\includegraphics[width=2in]{figs/sq4_3_alt.ps}\vspace{.1cm}} &
\parbox{2in}{\vspace{.1cm}\includegraphics[width=2in]{figs/sq4_5_alt.ps}\vspace{.1cm}} \\ \hline
\includegraphics[width=0.65in]{figs/pentagon_nl.eps} &
\parbox{2in}{\vspace{.1cm}\includegraphics[width=2in]{figs/lin5_3_alt.ps}\vspace{.1cm}} &
\parbox{2in}{\vspace{.1cm}\includegraphics[width=2in]{figs/lin5_7_alt.ps}\vspace{.1cm}} \\ \hline
\includegraphics[width=0.65in]{figs/hexagon_nl.eps} &
\parbox{2in}{\vspace{.1cm}\includegraphics[width=2in]{figs/lin6_4_alt.ps}\vspace{.1cm}} &
\parbox{2in}{\vspace{.1cm}\includegraphics[width=2in]{figs/lin6_8_alt.ps}\vspace{.1cm}} \\ \hline
\end{tabular}
\caption{ Ground state spin (T = 0) phase diagrams in the $U/t - \ensuremath{\tilde{t}}/\ensuremath{t}$ plane for clusters 3, 5, and 7 from Fig.~\ref{figSingleHopSummary}. These 2D clusters are ``ring-like'' in the sense that they are equivalent to 1D chains with periodic boundary conditions. The fixed electron number is given in the upper-right corner of each plot, and only selected non-trivial diagrams are shown. \label{figSingleHopRingDiagrams}}
\end{center}
\end{figure*}
The remaining (non-ring-like) clusters, 4, 6, 8, and 9 of Fig.~\ref{figSingleHopSummary}, are created by adjoining triangles and squares. This was done with the hope of engineering clusters with high-spin ground states, given the individual properties of the triangle and square. Detailed ground state phase diagrams for these clusters are presented in Appendix \ref{appSingleHopDiagrams}. We see in general that increasing $\ensuremath{\tilde{t}}/\ensuremath{t}$ enlarges the high-spin region of the phase diagram for electron-doped clusters and, in this sense, indicates that the high-spin state has become more robust. In hole-doped systems we see a much weaker dependence on $\ensuremath{\tilde{t}}/\ensuremath{t}$, and in clusters 8 and 9 we see the opposite behavior: as $U/t$ increases there is a transition to lower ground state spin. Upon electron-doping, we find a correlation between structures that have a large number of triangular or square loops and those with high spin ground states. This relationship has also been seen in previous work.\cite{PastorHirschMuhlschlegel_1996} Though a precise reason for this correspondence has not been found, we believe it is due to such systems being electronically unfrustrated, allowing an electron to easily hop among all the sites and thereby to be very effective at lowering the kinetic energy of the FM state. Whatever the mechanism, a heuristic rule for constructing clusters with high spin ground states is that a large cluster with many tight loops (triangular or square) is likely to be strongly magnetic. This has recently become relevant to experiment through the work of Schofield \emph{et al.},\cite{Schofield_2003} who are able to position phosphorus dopants within bulk silicon to nanometer accuracy using a scanning tunneling microscopy (STM) tip. Such capability allows for the construction of cluster geometries made ``to order'', and opens an entirely new area of application for our work. In particular, the ability to test for FM behavior (\emph{i.e.}~high spin ground states) in finite lattices of dopants would be very valuable.
\subsection{Distorted clusters \label{secGeomDistorted}}
More complex 2D clusters are obtained by allowing more than one pair of hopping parameters (\emph{i.e.}~hopping is allowed between sites of different separation distances). In this section we consider clusters with two and three pairs of distinct hopping parameters $\left\{ (\ensuremath{t}_i,\ensuremath{\tilde{t}}_i) \,:\, i \in (1,2,3) \right\}$. Some of these can be viewed as geometric perturbations of clusters in the last section, while many are new geometries not possible under the restriction of equidistant nearest neighbors. For a select group of clusters with two pairs of hopping parameters, we consider the ground state spin as a function of $\ensuremath{t}_2/\ensuremath{t}_1$ and $U/\ensuremath{t}_1$ at a uniform fixed $\ensuremath{\tilde{t}}_i / \ensuremath{t}_i$, $i=1,2$. Our analysis is done over a substantial region of phase space: $t_2/t_1 \in [1,10]$, $t_1/U \in [0.01,0.5]$. (Note that this extends to $U/t < 10$, outside the physical range found earlier, but in the direction that favors non-ferromagnetic behavior.) The results are summarized in Fig.~\ref{figDoubleHopSummary}, which shows for each geometry the maximal spin achieved with a doping of up to two electrons or holes (the maximum is taken over the region of phase space stated above). Again we find that most clusters attain their highest spin when doped with $1e^-$ (clusters 1, 2, 4, 7, 10, 12, 14, 15, 18, 20, and 22). Some of the larger clusters also have high spins when doped with two electrons (clusters 11, 18, 20, and 23), since their density is still low enough to favor FM. Although in most cases the maximal spin is greater for electron-doping than hole-doping, there are some which attain high spins even when hole-doped (\emph{e.g.}~clusters 8, 9, 11, and 15).
\begin{figure*}
\begin{center}
\includegraphics[width=3in]{figs/doubleHopClusters1.eps}
\includegraphics[width=3in]{figs/doubleHopClusters2.eps}
\renewcommand{\baselinestretch}{1}\normalsize
\caption{Summary of maximum ground state spins for clusters that have two pairs of kinetic parameters (two distinct nearest neighbor distances). Solid lines represent hopping amplitude $t_1$, and dashed lines $t_2$. Cluster geometries are listed by size, and maximal spin is given for dopings of -2,-1,1, and 2 electrons away from half-filling. Each cluster is identified by a number, \#$_{cl}$, and the maximum is taken over the region $t_2/t_1 \in [1,10]$, $t_1/U \in [0.01,0.5]$ for $\ensuremath{\tilde{t}}_i/\ensuremath{t}_i$ uniformly set $= 1$, $5$, and $10$.\label{figDoubleHopSummary}}
\end{center}
\end{figure*}
We focus on the ground state spin behavior of three clusters from Fig.~\ref{figDoubleHopSummary}: 11, 12, and 20. Ground state phase diagrams showing the spin for these clusters are given in Fig.~\ref{figDoubleHopDiagrams}. Each row of the table shows the geometry and two ground state phase diagrams of a cluster with a fixed number of sites $\ensuremath{N_s}$ and electrons $\ensuremath{N_e}$. The two diagrams correspond to $\ensuremath{\tilde{t}}/\ensuremath{t} = 1$ and $5$, as indicated by the column headings. The charge of the cluster $Q = \ensuremath{N_s} - \ensuremath{N_e}$ (the negative of its doping relative to half filling) is given in the third column. For each selected cluster, phase diagrams are only shown for $Q=\pm 1$. The transition lines in these plots are found by computing the ground state spin on a grid in parameter space, then fitting the transitions between grid points with smooth curves. Detailed phase diagrams of \emph{all} non-trivial cases are given in Appendix \ref{appDoubleHopClusters}.
\input{double_t_selected}
Clusters 11, 12, and 20 have fully spin-polarized ground states when doped with one electron, and for this reason will be used as starting points in later perturbation schemes. The movement of ground state spin boundaries as $\ensuremath{\tilde{t}}/\ensuremath{t}$ is increased in steps ($\ensuremath{\tilde{t}}/\ensuremath{t} = 1,5,10$) is seen in each row of the table. In cluster 11 the region of $\ensuremath{t}_2/\ensuremath{t}_1$ vs.~$U/\ensuremath{t}_1$ space with maximal spin expands for both electron- and hole-doped cases as $\ensuremath{\tilde{t}}/\ensuremath{t}$ increases, which is interesting since the effect of a larger $\ensuremath{\tilde{t}}/\ensuremath{t}$ on a hole-doped system is expected to be relatively minor. In cluster 12, a similar increase in polarization with larger $\ensuremath{\tilde{t}}/\ensuremath{t}$ is only seen in the single electron-doped case (the $Q = -1$ case is all that is shown, since all other dopings have minimal spin throughout the plotted region; see Fig.~\ref{figDoubleHopSummary}). Cluster 20 behaves very much as we naively expect: in the hole-doped case ($Q = +1$) the diagram is almost insensitive to changing $\ensuremath{\tilde{t}}/\ensuremath{t}$, while for $Q=-1$, the region of maximal polarization clearly expands at the expense of other lower-spin regions. We note that in all cases high-spin ground states occur when $\ensuremath{t}_2/\ensuremath{t}_1$ is close to 1, that is, when the dotted hopping links in the tables are nearly as strong as the solid links and the triangles and pairs that make up the cluster are more strongly coupled.
Several major conclusions may be drawn from this data. First, there are many instances of high-spin ground states among these clusters, many of which can be thought of as a weak coupling ($t_2$) between triangles and pairs with a stronger internal coupling ($t_1$). In a real system, where the broad distribution of inter-site distances due to positional randomness creates exponentially strong and weak bonds, these results give some hope that the spin-polarization seen in the isolated triangle, for example, will survive in the presence of perturbations due to other sites, and that this interaction may even lead to spin polarization on longer length scales. Second, it is found almost universally that increasing $\ensuremath{\tilde{t}}/\ensuremath{t}$ leads to greater spin polarization in \emph{electron-doped} clusters, just as in the finite lattices (section \ref{secFiniteLattices}) and single-hopping-parameter clusters (section \ref{secSelectedClusters}). In electron-doped clusters, we continue to see a correlation between the number of triangular loops in a cluster and that cluster's maximal spin. For instance, compare clusters 5 and 7 with clusters 14 and 15 of Fig.~\ref{figDoubleHopSummary} (the latter are much more magnetic).
In hole-doped systems we generally find lower spin values, and often there is a high-spin region at \emph{low} $U/t_1$. This inverted relationship in clusters below half-filling was also found in section \ref{secSelectedClusters} and on the 8-site square lattice. Lastly, we note that although there is potential for high-spin states, there are many clusters that have large regions of minimal ground state spin. We find overall that the Nagaoka-like ferromagnetic effect we observe is very sensitive to geometry, though the sensitivity decreases at large $\ensuremath{\tilde{t}}/\ensuremath{t}$.
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/distortedClusters2.eps}
\caption{Geometries of clusters obtained by geometric distortion of clusters 11, 12, and 20 of Fig.~\ref{figDoubleHopSummary}, with three pairs of kinetic parameters $\ensuremath{t}_1 > \ensuremath{t}_2 > \ensuremath{t}_3$. \label{figDistortedGeometries}}
\end{center}
\end{figure}
Next, we test the stability of a select few of the high-spin ground states found above. For clusters 11, 12, and 20 of Fig.~\ref{figDoubleHopSummary}, we further reduce the spatial symmetry by additional geometric distortion, as shown in Fig.~\ref{figDistortedGeometries}. The distortion introduces a third pair of hopping amplitudes ($\ensuremath{t}_3,\ensuremath{\tilde{t}}_3$), and the ratio $\ensuremath{t}_3/\ensuremath{t}_2$ measures the amount of distortion.
\begin{figure}[H]
\begin{center}
\begin{tabular}{|l|c|}
\hline
\#$_{cl}$ & Ground state phase diagram \\ \hline
\large{11d} & \parbox{2.3in}{\vspace{0.2cm}\includegraphics[width=2.2in]{figs/triangle2d.ps}\vspace{0.2cm}}\\ \hline
\large{12d} & \parbox{2.3in}{\vspace{0.2cm}\includegraphics[width=2.2in]{figs/triangle2e.ps}\vspace{0.2cm}}\\ \hline
\large{20d} & \parbox{2.3in}{\vspace{0.2cm}\includegraphics[width=2.2in]{figs/pairLinkedTriangles2_d2.ps}\vspace{0.2cm}}\\ \hline
\end{tabular}
\caption{Ground state spin diagrams for the distorted clusters of Fig.~\ref{figDistortedGeometries}.\label{figDistortedPhaseDiagrams}}
\end{center}
\end{figure}
We fix $\ensuremath{t}_2/\ensuremath{t}_1$ at a value for which the undistorted ($\ensuremath{t}_3 = \ensuremath{t}_2$) cluster has a high-spin ground state, and determine the amount of distortion that can be applied (\emph{i.e.}~the lowest value $\ensuremath{t}_3/\ensuremath{t}_2$ can attain) before the cluster loses its high spin state. The value of $\ensuremath{\tilde{t}}_i/\ensuremath{t}_i$ is fixed (\emph{i.e.}~in each run, all of the links forming the cluster have the same $\ensuremath{\tilde{t}}/\ensuremath{t}$ ratio), and the resulting ground state phase diagrams as a function of $\ensuremath{t}_3/\ensuremath{t}_2$ and $U/\ensuremath{t}_1$ are shown in Fig.~\ref{figDistortedPhaseDiagrams}. There are two key points resulting from this data. First, as $\ensuremath{\tilde{t}}/\ensuremath{t}$ becomes larger, the high-spin ground states become more robust to the geometric fluctuation considered here: regions with high-spin ground states persist to lower values of $\ensuremath{t}_3/\ensuremath{t}_2$ as $\ensuremath{\tilde{t}}_i/\ensuremath{t}_i$ is raised. (Recall that lower $\ensuremath{t}_3/\ensuremath{t}_2$ corresponds to larger geometric distortion.) Second, the high-spin ground states are more robust at larger $U/\ensuremath{t}_1$, since the curves for fixed $U/\ensuremath{t}_1$ move to lower values of $\ensuremath{t}_3/\ensuremath{t}_2$ as $U$ increases (\emph{e.g.}~the $U=100$ curve lies below the $U=50$ and $U=20$ curves).
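The scan just described is straightforward to automate. A minimal sketch (illustrative names; \texttt{ground\_state\_spin} is the exact-diagonalization routine sketched in section \ref{secSelectedClusters}) lowers $\ensuremath{t}_3/\ensuremath{t}_2$ at fixed $\ensuremath{t}_2/\ensuremath{t}_1$ and $U/\ensuremath{t}_1$ until the high-spin state is lost; the caller supplies a function that builds the distorted cluster's hopping list.
\begin{verbatim}
import numpy as np

def critical_distortion(build_hops, n_sites, n_elec, U, s_high,
                        ratios=np.linspace(1.0, 0.05, 40)):
    # Scan r = t3/t2 downward; return the first ratio at which the
    # ground state spin drops below the undistorted value s_high.
    for r in ratios:
        hops = build_hops(r)   # caller-supplied: {(i, j): t_ij} at ratio r
        if ground_state_spin(n_sites, n_elec, hops, U) < s_high:
            return r
    return None                # high spin survives the whole scan
\end{verbatim}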
\begin{widetext}
\begin{figure*}[h]
\begin{tabular}{c|ccc}
& Cluster 11 & Cluster 12 & Cluster 20 \\ \hline
$\ensuremath{\tilde{t}}/\ensuremath{t}=1$ & \parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/triangle2bRdm8_ratio1.eps}} &
\hspace{-1cm}\parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/triangle2cRdm7_ratio1.eps}} &
\hspace{-1cm}\parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/pairLinkedTriRdm10_ratio1.eps}} \\
$\ensuremath{\tilde{t}}/\ensuremath{t}=2.5$ & \parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/triangle2bRdm8_ratio2.5.eps}} &
\hspace{-1cm}\parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/triangle2cRdm7_ratio2.5.eps}} &
\hspace{-1cm}\parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/pairLinkedTriRdm10_ratio2.5.eps}} \\
$\ensuremath{\tilde{t}}/\ensuremath{t}=5$ & \parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/triangle2bRdm8_ratio5.eps}} &
\hspace{-1cm}\parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/triangle2cRdm7_ratio5.eps}} &
\hspace{-1cm}\parbox{2.3in}{\includegraphics[width=1.8in,angle=270]{figs/pairLinkedTriRdm10_ratio5.eps}}
\end{tabular}
\caption{(Color online) Result of randomizing clusters 11, 12, and 20 of Fig.~\ref{figDoubleHopSummary}. For clusters 11 and 12, $U/t_1=20$ and $t_2/t_1 = 0.1$; for cluster 20, $U/t_1=100$ and $t_2/t_1 = 0.3$. We set $\ensuremath{\tilde{t}}_i/\ensuremath{t}_i$ to 1, 2.5, and 5 as indicated by the row headers.
\label{figRandomized}}
\end{figure*}
\end{widetext}
To further probe the robustness of a given cluster's high spin ground state, we consider multiplying each $t_i$ of the cluster by a random factor $\lambda$ whose \emph{logarithm} is chosen from the box distribution $P(\log\lambda)=1/(2\log\alpha)$, $\log\lambda \in [-\log\alpha,+\log\alpha)$ where $\alpha \ge 1$. Thus, when $\alpha=1$ the system is unperturbed, and for $\alpha > 1$ each hopping amplitude is independently multiplied by a different random number between $1/\alpha$ and $\alpha$. Compared to the specific geometrical distortions analyzed in the preceding paragraph, this method of introducing randomness more accurately characterizes the fluctuations we expect in a real system, since the hopping is \emph{exponentially} dependent on the inter-site distance and we do not expect the fluctuations to preserve any symmetry present in the cluster. We take as our starting point a cluster known to have a high spin ground state and average over one to five thousand such random perturbations. Then, we tabulate the percentage of the randomly perturbed clusters possessing each possible value of the ground state spin. The shaded regions in the plots of Fig.~\ref{figRandomized} show how these percentages vary as a function of $\alpha$, with the different figures corresponding to initial clusters 11, 12, and 20 of Fig.~\ref{figDoubleHopSummary}. The boundaries of the regions are spline fits to the data. We set $U/t_1=20$ for clusters 11 and 12 ($U/t_1=100$ for cluster 20), relatively low values for doped semiconductors, to more clearly see the effect of $\alpha$ (at larger $U/t$ the high-spin ground state becomes increasingly robust). We see in all cases a general movement, in a probabilistic sense, of the clusters from high to low spin as $\alpha$ is increased, but find that this effect is significantly mitigated by raising $\ensuremath{\tilde{t}}/\ensuremath{t}$. As $\ensuremath{\tilde{t}}/\ensuremath{t}$ becomes larger, the percentage of the clusters that retain the high-spin ground state of the original ($\alpha=1$) cluster grows substantially. Thus, we again find that increasing $\ensuremath{\tilde{t}}/\ensuremath{t}$ makes high spin ground states significantly more robust to random geometric fluctuations, this time to fluctuations similar to those we expect in an actual doped semiconductor. This result gives additional hope for the viability of constructing magnetic clusters using an STM tip (described above), where there would inevitably be slight errors in the dopant positions.
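For reference, the perturbation protocol amounts to the following short sketch (assuming a hopping list \texttt{hops} in the format of the earlier sketches; names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def randomize_hoppings(hops, alpha):
    # Multiply each amplitude by lambda with log(lambda) drawn
    # uniformly from [-log(alpha), +log(alpha)), so that lambda
    # lies between 1/alpha and alpha; alpha = 1 leaves hops intact.
    out = {}
    for bond, t in hops.items():
        lam = np.exp(rng.uniform(-np.log(alpha), np.log(alpha)))
        out[bond] = t * lam
    return out
\end{verbatim}
Tabulating the ground state spin over a few thousand such draws yields spin-distribution statistics of the kind plotted in Fig.~\ref{figRandomized}.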
\subsection{Randomly distributed finite clusters of fixed density\label{secFixedDensityClusters}}
In sections \ref{secSelectedClusters} and \ref{secGeomDistorted}, we solved generalized Hubbard and $\ensuremath{\tilde{t}}-J$ models on a variety of clusters that were constructed to have some spatial symmetries and at most a few pairs of hopping parameters ($\ensuremath{t}_i$,$\ensuremath{\tilde{t}}_i$). This section and the next give an extensive analysis of clusters with completely random structures and several types of boundary conditions. Also, instead of considering a range of $\ensuremath{\tilde{t}}/\ensuremath{t}$ values, we use only the parameters given by our realistic band calculation described in section \ref{secCalcDetails}. In $d$ dimensions, clusters with $\ensuremath{N_s}$ sites and fixed density $\rho$ are generated by randomly placing $\ensuremath{N_s}$ sites within a $d$-dimensional hypercube of side length $L$ such that $\rho (\ensuremath{a^*_{\mathrm{B}}})^{-d}=\ensuremath{N_s}/L^d$. We fix $U=1\,\ensuremath{\mathrm{Ry^*}}$ and determine the hopping parameters $\ensuremath{t}_{ij}$ by setting $\ensuremath{t}_{ij}=t(|\ensuremath{\vec{r}}_i-\ensuremath{\vec{r}}_j|)$, where $t(r)$ is given by the lattice calculation described earlier (see Fig.~\ref{figParamRatiosVsLatSpacing}). We consider three different models, each corresponding to a different method of setting $\ensuremath{\tilde{t}}_{ij}$:
\begin{enumerate}
\item $\ensuremath{\tilde{t}}_{ij} = \ensuremath{t}_{ij}$.
\item Analogous to $\ensuremath{t}_{ij}$, using $\ensuremath{\tilde{t}}(r)$: $\ensuremath{\tilde{t}}_{ij}=\ensuremath{\tilde{t}}(|\ensuremath{\vec{r}}_i-\ensuremath{\vec{r}}_j|)$, where $\ensuremath{\tilde{t}}(r)$ is obtained from the broadening of the upper impurity band, referred to as the $D^-$ band in semiconductor literature.
\item Set $\ensuremath{\tilde{t}}_{ij} \equiv C$, where $C$ is a constant, chosen here to be $U/2$.
\end{enumerate}
The first case is the regular Hubbard model for randomly distributed sites, and does not take into account the special property of hydrogenic centers. Model 2 takes into account the larger extent of the $D^-$ state. Model 3 simulates a situation in which the radius of the $D^-$ state becomes very large, to see how big an effect that would have on the possibility of Nagaoka ferromagnetism. We choose $C=U/2$ since this is close to $\ensuremath{\tilde{t}}(r)$ when $r=\ensuremath{a^*_{\mathrm{B}}}$, the smallest separation for which the tight binding model could apply. Since $\ensuremath{\tilde{t}}(r)$ increases with decreasing $r$, $\ensuremath{\tilde{t}}(\ensuremath{a^*_{\mathrm{B}}}) \approx U/2$ is of order the maximal $\ensuremath{\tilde{t}}$ found in the entire system.
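As an illustration, random fixed-density clusters and their distance-dependent hopping matrices can be generated as sketched below. The functions \texttt{t\_of\_r} and \texttt{ttilde\_of\_r} stand in for the $\ensuremath{t}(r)$ and $\ensuremath{\tilde{t}}(r)$ curves of our band calculation (Fig.~\ref{figParamRatiosVsLatSpacing}); in the sketch they must be supplied by the user, \emph{e.g.}~as tabulated and interpolated data, and all names are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def random_cluster(n_sites, rho, d=2):
    # Box side L (in units of the effective Bohr radius) such that
    # N_s / L^d = rho, as in the text.
    L = (n_sites / rho) ** (1.0 / d)
    return rng.uniform(0.0, L, size=(n_sites, d))

def hopping_matrices(pos, t_of_r, ttilde_of_r):
    # All inter-site links are kept; amplitudes depend only on distance.
    n = len(pos)
    t = np.zeros((n, n))
    tt = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            t[i, j] = t[j, i] = t_of_r(r)         # lower-band hopping
            tt[i, j] = tt[j, i] = ttilde_of_r(r)  # D^- band hopping (model 2)
    return t, tt
\end{verbatim}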
Given a fixed cluster size and density, we exactly solve many (between $10^4$ and $10^6$) clusters and construct a histogram of ground state spin values; a sketch of this procedure is given below. Results obtained using each of the three models are compared to assess the effect of the nature of the doubly-occupied state. We have calculated the spin histograms for clusters in two and three dimensions with sizes $\ensuremath{N_s}=4$--$7$ and for densities $\rho=\frac{1}{1600}$, $\frac{1}{160}$, and $\frac{3}{160}$ in 2D (corresponding to $\approx 0.005$, $0.05$, and $0.15$ times the Mott metal-insulator transition density) and $\rho=\frac{1}{6400}$, $\frac{1}{640}$, and $\frac{3}{640}$ in 3D (corresponding to 0.01, 0.1 and 0.3 times the Mott density). Further, we have considered open as well as periodic boundary conditions. In an actual macroscopic sample, clusters will be connected to other clusters of different local densities. Thus, the physical situation will be intermediate between the cases of open (where each cluster is surrounded by no others) and periodic (where each cluster is effectively surrounded by others of the same density) boundary conditions. The latter is closer to the actual case at high densities, the former at low density.
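Combining the sketches above, the histogram construction reads as follows (shown for model 1, $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, since the exact-diagonalization sketch implements the ordinary Hubbard model; names are ours):
\begin{verbatim}
from collections import Counter

def spin_histogram(n_samples, n_sites, n_elec, rho, U, t_of_r):
    # Ensemble of random clusters at fixed size and density; returns
    # the percentage of clusters with each ground state spin (model 1).
    counts = Counter()
    for _ in range(n_samples):
        pos = random_cluster(n_sites, rho)
        t, _ = hopping_matrices(pos, t_of_r, t_of_r)
        hops = {(i, j): t[i, j]
                for i in range(n_sites) for j in range(i + 1, n_sites)}
        counts[ground_state_spin(n_sites, n_elec, hops, U)] += 1
    total = sum(counts.values())
    return {s: 100.0 * c / total for s, c in sorted(counts.items())}
\end{verbatim}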
\subsubsection{Hopping set by band calculation: $\ensuremath{\tilde{t}}_{ij} = \ensuremath{\tilde{t}}(r_{ij})$}
Here we present results for two- and three-dimensional random clusters. Clusters have all inter-site links present (\emph{i.e.}~hopping is not restricted to be between nearest neighbor sites only). We find the distribution of ground state spin values for ensembles of clusters with fixed size $\ensuremath{N_s}$, density $\rho$, doping (either one extra electron or one hole), and model for determining $\ensuremath{\tilde{t}}_{ij}$.
Raw spin distribution data, shown by tables containing the percentage of clusters with each possible spin, are given for two-dimensional clusters, with open and periodic boundary conditions, in Appendix \ref{appRandomClusterData}. Corresponding results for three-dimensional clusters could not be included in this paper due to length considerations, and can be found in Ref.~\onlinecite{NielsenThesis}.
Here, we summarize the data by plotting the average spin and the percentage of magnetic clusters (those with greater than minimal spin) as a function of doping (zero doping = half-filled). We show only the results for 2D clusters with open boundary conditions; similar plots for periodic boundary conditions can be found in Appendix \ref{appRandomClusterData}. Figure \ref{figAvgSpin2D} shows the average spin of such clusters. There is some variation in the average spin due to even-odd asymmetry: clusters with an odd number of electrons have minimum spin $S_{min} = \frac{1}{2}$, while those with an even number have $S_{min}=0$. To remove this effect, Fig.~\ref{figAvgSpin2Dr} shows the average spin \emph{relative to $S_{min}$} (\emph{i.e.}~0.5 is subtracted from cases of odd electron number). A second measure of a system's magnetic behavior is the percentage of clusters with above minimal spin. We define any cluster with greater than minimal ground state spin (equivalently, spin $\ge 1$ since the minimal spin is either 0 or 1/2) as a \emph{magnetic cluster}, and Fig.~\ref{figPcMag2D} shows this quantity as a function of doping for the different cluster sizes (2D clusters with open b.c.). Although both the average spin and percentage of magnetic clusters provide less detailed information than the spin distribution data (Appendix \ref{appRandomClusterData}), they also suffer less from finite size effects and give a more concise picture of the results.
\begin{figure*}
\begin{center}
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ Average Spin \ : \ open b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.010_2D.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.100_2D.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.300_2D.ps}} \\ \hline
\end{tabular}
\caption{(Color online) Ground state average spin of 2D random clusters with fixed size and density, and \emph{open boundary conditions}, as a function of electron-doping (negative = hole-doping). The lower half of plots are the result of setting $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, determined by the bandwidth of the lower Hubbard band. The upper half use $\ensuremath{\tilde{t}}_{ij}$ determined by the bandwidth of the upper Hubbard ($D^-$) band. \label{figAvgSpin2D}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ Average Spin - $\mathbf{S_{min}}$\ : \ open b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.010_2Dr.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.100_2Dr.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalAvgS4-5-6-7_0.300_2Dr.ps}} \\ \hline
\end{tabular}
\caption{(Color online) Ground state average spin \emph{relative to minimum spin} of 2D random clusters with fixed size and density, and \emph{open boundary conditions}, as a function of electron-doping (negative = hole-doping). The lower half of plots are the result of setting $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, determined by the bandwidth of the lower Hubbard band. The upper half use $\ensuremath{\tilde{t}}_{ij}$ determined by the bandwidth of the upper Hubbard ($D^-$) band. \label{figAvgSpin2Dr}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{tabular}{|c|c|} \hline
$\rho$ & \textbf{2D \ : \ \% magnetic clusters \ : \ open b.c.}\\ \hline
$\frac{1}{1600}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.010_2D.ps}} \\ \hline
$\frac{1}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.100_2D.ps}} \\ \hline
$\frac{3}{160}$ & \parbox{4in}{
\includegraphics[width=2in, angle=270]{figs/finalPcMag4-5-6-7_0.300_2D.ps}} \\ \hline
\end{tabular}
\caption{(Color online) Percentage of magnetic clusters (spin 1 or greater) in an ensemble of 2D random clusters with fixed size and density, and \emph{open boundary conditions}, as a function of electron-doping (negative = hole-doping). The lower half of plots are the result of setting $\ensuremath{\tilde{t}}_{ij}=\ensuremath{t}_{ij}$, determined by the bandwidth of the lower Hubbard band. The upper half use $\ensuremath{\tilde{t}}_{ij}$ determined by the bandwidth of the upper Hubbard ($D^-$) band.\label{figPcMag2D}}
\end{center}
\end{figure*}
The study of Figs.~\ref{figAvgSpin2D}-\ref{figPcMag2D}, and the more extensive data of Appendix \ref{appRandomClusterData} and Ref.~\onlinecite{NielsenThesis} reveals several trends. First, clusters with periodic boundary conditions tend to have a larger total spin than those of the same size and density but with open boundary conditions (see Appendix \ref{appRandomClusterData}, Table \ref{tabRandClusters2D_diff}). This is especially true for large clusters ($\ensuremath{N_s} = 6,7$) and at lower density. This may be due to the increased connectedness of clusters with periodic boundary conditions compared to those with open boundary conditions. In a system that is more connected (\emph{i.e.}~where there are more nonzero hopping amplitudes $t_{ij}$), electrons can more easily move among the sites and their kinetic energy (which favors FM) is a stronger contribution to the total energy. Seen another way, the application of periodic boundary conditions to a cluster with open boundary conditions effectively raises the density of the cluster's environment, since the cluster then appears to be surrounded by other clusters of the same density.
A comparison between odd-$\ensuremath{N_s}$ and even-$\ensuremath{N_s}$ clusters shows that clusters with an odd number of sites (which have integer spin for $\pm1e^-$ away from half-filling) generally have greater average spin relative to the minimum possible spin (zero for $\pm1e^-$). This difference is not great, however, and their absolute average spin (\emph{e.g.}~in Fig.~\ref{figAvgSpin2D}) is comparable to that of the even-$\ensuremath{N_s}$ clusters, which have a minimum spin of 1/2 as opposed to 0.
Cluster size is a third point of comparison, where we find that larger clusters usually have ground states with higher spin, and higher average spin overall. One should keep in mind, however, that larger clusters are able to have higher spin values just by virtue of having more sites (and \emph{total} electrons). (Indeed, we find that smaller clusters have larger average spin \emph{relative to their maximal allowed spin}.) The rise in average spin, and the appearance of higher-spin ground states, with increasing cluster size is stronger and more consistent for electron-doped clusters. In this case the dependence of average spin on cluster size is particularly significant: we find a substantial percentage of maximally polarized clusters for all sizes (4-7) investigated, showing that the spin polarization induced by extra electrons persists to larger random systems, yielding large spins (up to $S=3$). We also see that the polarization of larger clusters (6-7 sites) remains (and sometimes increases) when there are two electrons above half-filling. The average spin of hole-doped clusters shows a much weaker shift toward larger spin values with cluster size than the electron-doped case, which again highlights our central argument that electron-doping is very different from hole-doping.
Fourth, we see that with increasing density there are usually fewer high-spin clusters in all categories except for clusters with one extra electron that have $\ensuremath{\tilde{t}}_{ij}$ set by model 2 above ($\ensuremath{\tilde{t}} > \ensuremath{t}$). In this case the distribution with highest weight on large spins occurs at \emph{intermediate} density ($\rho = \frac{1}{160}$ in 2D, $\frac{1}{640}$ in 3D), a result also seen in the ensembles of section \ref{secVaryDensityClusters} below. This suggests that there exists an optimal density for finding high-spin states in doped semiconductors above half-filling. We generally expect low density to be most favorable for FM, since this corresponds to large $U/t$, and believe this is the reason why all but the aforementioned case show this behavior. In the exceptional case, when there is one extra electron and $\ensuremath{\tilde{t}}_{ij}$ is set by model 2, the additional parameter $\ensuremath{\tilde{t}}/\ensuremath{t}$ will play a significant role, and the dependence of the pair ($U/\ensuremath{t}$,\,$\ensuremath{\tilde{t}}/\ensuremath{t}$) on the density could result in an optimal density for FM that is greater than zero.
Lastly, the most striking trend emerges when comparing electron-doped and hole-doped clusters. When $\ensuremath{\tilde{t}} = \ensuremath{t}$, the clusters with one extra electron have a spin distribution shifted to substantially higher spin values than those with one less electron (\emph{i.e.}~one hole). When $\ensuremath{\tilde{t}}_{ij}$ is determined by our band calculation (\emph{i.e.}~$\ensuremath{\tilde{t}}_{ij} > \ensuremath{t}_{ij}$), this effect increases dramatically (particularly at intermediate density, as mentioned earlier). This effect is expected, since in our model an extra electron hops with amplitudes $\ensuremath{\tilde{t}}_{ij}$ while an extra hole hops with amplitudes $\ensuremath{t}_{ij}$. Recall that the motivation for the model comes from the special properties of the hydrogen atom, which result in mobile electrons having spatially larger wavefunctions than mobile holes. These cluster results show that even in strongly disordered systems a Nagaoka-like ferromagnetism can emerge, at least on the nanoscale, and that one of the ideal conditions for this FM is an electron-doped system. Compared to those below half-filling, systems above half-filling also hold greater promise for spin polarization on longer length scales, since this would most likely arise from many aligned high-spin clusters.
\begin{table*}
\begin{center}
\input{randClTable_2D_cband}
\caption{Comparison of large $\ensuremath{\tilde{t}}=U/2$ and band calculation $\ensuremath{\tilde{t}}$ distributions of ground state spin values for 2D random clusters with \emph{open boundary conditions}. Table entries give the percentage of clusters with the ground state spin specified in the column header. Results are the ensemble average of many clusters with fixed size $\ensuremath{N_s}$, density $\rho$, and doping = one electron (1e) or hole (1h). Estimated error $\pm0.5\%$.\label{tabRandClusters2D_cband}}
\end{center}
\end{table*}
\subsubsection{Large $\ensuremath{\tilde{t}}$ case: $\ensuremath{\tilde{t}} = U/2$\label{secCBand}}
In model 3, the hopping $\ensuremath{\tilde{t}}_{ij}$ is set to a constant $C=U/2$, a value near the maximum of $\ensuremath{\tilde{t}}(r)$ (used in model 2). This corresponds qualitatively to the case when the wavefunction on doubly-occupied sites is extended across the system (as if, for instance, the state had merged with conduction band states). One may access this regime experimentally if the binding energy of the $D^-$ state can be tuned (\emph{e.g.}~in many-valley semiconductors or by an applied field). Here we focus on the case of 2D clusters with open boundary conditions, for which the raw spin distribution data comparing models 2 and 3 is shown in Table \ref{tabRandClusters2D_cband}. Data for 3D clusters can be found in Appendix \ref{appRandomClusterData}. Two trends found in our discussion of models 1 and 2 above also appear in the $\ensuremath{\tilde{t}} = U/2$ results: odd-$\ensuremath{N_s}$ clusters have greater spin polarization relative to their minimum spin, and spin polarization increases with cluster size. Unlike the results of model 2 with one electron (1e), where the intermediate density was optimal for FM, the results of model 3 show spin polarization increasing with decreasing density (as in model 1). This fits with our belief that the optimal density found when model 2 was used is due to the interplay of \emph{two} density-dependent Hubbard model parameters (in model 3 there is only one, $U/t$, as in model 1). The electron-hole asymmetry found when $\ensuremath{\tilde{t}} = U/2$ is qualitatively similar to when $\ensuremath{\tilde{t}}_{ij} = \ensuremath{\tilde{t}}(r_{ij})$ (model 2), but with higher spin values (for both electron- and hole-doped systems). This is expected in the single electron-doped case (1e), since the second electrons are even more weakly bound, causing correspondingly stronger spin polarization. In the hole-doped case two aspects are particularly noteworthy. First, we find that the large $\ensuremath{\tilde{t}} = U/2$ results in clusters with higher spin than those of model 2, opposite to the trend seen in hole-doped bipartite lattices (cf.~Figs.~\ref{figSingleHoleVsElectron} and \ref{figHoney1ExtraHole}). Second, the largest spin in the distribution saturates at a value of one below the maximal allowed spin, denoted $S_{max}$ (for instance, in 2D clusters with $\ensuremath{N_s}=6$, the spin distribution is nearly 100\% $S=1.5$). This behavior is somewhat similar to the hole-doped triangular lattice (Fig.~\ref{figTri1Hole}), which has a partially-polarized ground state (with spin = $S_{max}-1$) which covers larger intervals of $U/t$ as $\ensuremath{\tilde{t}}/\ensuremath{t}$ is increased. In summary, large $\ensuremath{\tilde{t}}$ results in almost 100\% of (single) electron-doped clusters having ground state spin $S_{max}$, and almost 100\% of (single) hole-doped clusters having ground state spin $S_{max}-1$.
\section{Cluster analysis of large samples\label{secVaryDensityClusters}}
We now study the viability of ferromagnetism in a macroscopic sample. For this section, because of its relevance to hydrogenic n-doped semiconductors, we present results for $\ensuremath{\tilde{t}}_{ij}$ and $\ensuremath{t}_{ij}$ from the band calculation only. Our strategy will be to consider a large two- or three-dimensional system of random sites and divide it into clusters that can be approximately treated as independent as far as the Hubbard part of the Hamiltonian is concerned. Choice of the number of carriers in each cluster involves long-range Coulomb forces and is treated in a classical approximation described later. We solve the clusters individually and then analyze the resulting distribution of their ground state spins. The analysis of section \ref{secFixedDensityClusters} characterized random clusters with a fixed density; here the average density of a large system is fixed while the local density of individual clusters is free to vary.
\subsection{Decomposition into clusters}
We begin with a set of $\ensuremath{N_{sys}}$ randomly positioned points with some average density $\bar{\rho}$ where $\ensuremath{N_{sys}}$ is typically 10,000 to 1,000,000. We then divide the points into approximately isolated clusters, solve the cluster generalized Hubbard Hamiltonian exactly, and consider their ground state statistics. We choose to divide the large set of points into clusters using a simple algorithm that proceeds as follows:
\begin{enumerate}
\item Initially each point is a single cluster, and all points are ``unused''.
\item Choose any unused point $p$, and find its nearest neighbor $q$.
\item Merge the cluster containing $p$ with the cluster containing $q$.
\item Set point $p$ to ``used'' status.
\item Repeat at step 2 until no unused points remain.
\end{enumerate}
In this way we form the smallest clusters such that each point belongs to the same cluster as its nearest neighbor (\emph{i.e.}~the point most strongly coupled to it). Note also that the minimum cluster size is 2. The advantage of this ``nearest-neighbor'' method is that it always keeps nearest neighbor points in the same cluster, which is desirable from a perturbation theory standpoint. It does not, however, guarantee that the clusters include all the hopping amplitudes of the original system above some threshold. We show in Fig.~\ref{figClusteringMethods}(a) the decomposition of a 2D system into clusters using this algorithm. A weakness of the nearest-neighbor method is that it will form separate clusters of strongly-coupled pairs even when they are nearby other clusters, and it is clearly seen from Fig.~\ref{figClusteringMethods}(a) that some of the neglected bonds are stronger than other bonds that are kept. On the same set of random sites, the result of an alternate algorithm that keeps all hopping amplitudes greater than a certain threshold (chosen so that the size of the clusters is not too large) is shown in Fig.~\ref{figClusteringMethods}(b). This technique removes the problem of isolating strongly coupled pairs/triangles from other nearby sites, but it has the disadvantage of being very sensitive to the threshold, adding another degree of arbitrariness. We find that both methods give reasonable decompositions into clusters, and the choice of algorithm is not unique. In this work, we use the nearest-neighbor method outlined above (a minimal sketch is given below), and leave a more detailed assessment and comparison of clustering methods for later work.
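The sketch below implements steps 1--5 with a union-find structure; \texttt{pos} is an array of site positions, and all names are ours. (For the system sizes quoted above, the $O(\ensuremath{N_{sys}}^2)$ distance matrix would in practice be replaced by a spatial search structure.)
\begin{verbatim}
import numpy as np

def nearest_neighbor_clusters(pos):
    # Merge every point with its nearest neighbor (steps 1-5 above).
    n = len(pos)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    for p in range(n):
        q = int(np.argmin(d2[p]))          # nearest neighbor of p
        parent[find(p)] = find(q)          # merge their clusters
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
\end{verbatim}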
\begin{figure} [h]
\begin{center}
\begin{tabular}{c|c|} \cline{2-2}
a) & \parbox{2.5in}{ \includegraphics[width=2.4in]{figs/clsys_nn.ps} } \\ \cline{2-2}
\multicolumn{2}{c}{ } \\ \cline{2-2}
b) & \parbox{2.5in}{ \includegraphics[width=2.4in]{figs/clsys_union.ps} } \\ \cline{2-2}
\end{tabular}
\renewcommand{\baselinestretch}{1}\normalsize
\caption{(Color online) Example of decomposing a 100-site system into clusters. Part (a) uses the nearest-neighbor method, and part (b) the threshold method (both described in the text). The blue lines link points in the same clusters (not all hopping links between the points are shown). \label{figClusteringMethods}}
\end{center}
\end{figure}
We first determine, for fixed average densities $\bar{\rho}=\frac{1}{1600}$, $\frac{1}{160}$ and $\frac{3}{160}$, the distribution of cluster sizes which converges to the density-independent values shown in Table \ref{tabClusterSizeDist}. By considering clusters with $< 8$ sites, which are within the reach of exact diagonalization techniques, we can account for over $97\%$ of the sites. The remaining large clusters are converted into smaller clusters ($<8$ sites) by removing the smallest number of weakest links.
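This splitting step can be sketched as follows (illustrative names): the weakest hopping link inside any oversized component is deleted repeatedly until every connected component fits within the exact-diagonalization limit.
\begin{verbatim}
def components(sites, bonds):
    # Connected components of the graph defined by the remaining bonds.
    adj = {s: set() for s in sites}
    for i, j in bonds:
        adj[i].add(j)
        adj[j].add(i)
    seen, comps = set(), []
    for s in sites:
        if s in seen:
            continue
        stack, cur = [s], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            cur.append(v)
            stack.extend(adj[v] - seen)
        comps.append(cur)
    return comps

def split_large_cluster(sites, hops, max_size=7):
    # hops: {(i, j): t_ij}; delete weakest in-component links until
    # no component exceeds max_size sites.
    hops = dict(hops)
    while True:
        comps = components(sites, hops)
        big = [c for c in comps if len(c) > max_size]
        if not big:
            return comps, hops
        members = set(big[0])
        cand = [b for b in hops if b[0] in members and b[1] in members]
        weakest = min(cand, key=lambda b: abs(hops[b]))
        del hops[weakest]
\end{verbatim}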
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
& \multicolumn{2}{c|}{Percentage of clusters} \\ \hline
$\ensuremath{N_s}$ & \hspace{0.5cm}2D\hspace{0.5cm} & \hspace{0.5cm}3D\hspace{0.5cm} \\
\hline
2 & 22.9 & 20.9 \\
3 & 28.2 & 25.0 \\
4 & 22.0 & 20.6 \\
5 & 13.7 & 14.7 \\
6 & 7.2 & 8.6 \\
7 & 3.5 & 4.8 \\
8 & 1.5 & 2.7 \\
9 & 0.6 & 1.3 \\
10 & 0.3 & 0.6 \\
$>$10 & 0.1 & 0.8 \\
\hline
\end{tabular}
\caption{Distribution of cluster sizes in a large 2D or 3D system of random sites with a fixed average density. Clusters are formed from smallest sets of sites such that each site is in the same set as its nearest neighbor. Since this criterion does not depend on the value of the average density, this table is valid for all fixed average densities. \label{tabClusterSizeDist}}
\end{center}
\end{table}
We can estimate the local density, $\ensuremath{\rho_{\mbox{\scriptsize loc}}}$, of an $\ensuremath{N_s}$-site $d$-dimensional cluster with sites at positions $\vec{r}_i,\,i=1\ldots\ensuremath{N_s}$, from the formula:
\begin{equation}
\rho_{\mbox{\scriptsize loc}} = \left\{
\begin{array}{ccc}
\frac{\ensuremath{N_s}}{\pi R_{cl}^2} & \quad & \mbox{in 2D} \\
\frac{\ensuremath{N_s}}{(4/3)\pi R_{cl}^3} & \quad & \mbox{in 3D}
\end{array} \right.
\end{equation}
where $R_{cl}$, the average radius of the cluster, is given by
\begin{eqnarray}
R_{cl} &=& \sqrt{\sum_{i=1}^{\ensuremath{N_s}} \left( \vec{r}_i-\vec{r}_0 \right)^2} \\
\vec{r}_0 &=& \frac{1}{\ensuremath{N_s}}\sum_{i=1}^{\ensuremath{N_s}} \vec{r}_i
\end{eqnarray}
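In code, this estimate reads (names are ours; $R_{cl}$ follows the definition above):
\begin{verbatim}
import numpy as np

def local_density(pos, d=2):
    # pos: (N_s, d) array of site positions.
    n = len(pos)
    r0 = pos.mean(axis=0)                    # cluster centroid
    R_cl = np.sqrt(((pos - r0) ** 2).sum())  # cluster radius, as defined
    volume = np.pi * R_cl**2 if d == 2 else (4.0 / 3.0) * np.pi * R_cl**3
    return n / volume
\end{verbatim}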
\begin{figure}
\begin{center}
\includegraphics[width=2in,angle=270]{figs/clusterDensityHist0.100_2D.ps}
\includegraphics[width=2in,angle=270]{figs/clusterDensityHist0.100_3D.ps}
\caption{(Color online) Individual density distributions for 2- to 8-site clusters in two and three dimensions when the average density $\bar{\rho}=1.0$. Note that the majority of the weight falls above $\bar{\rho}$, indicating that the clusters chosen are significantly more dense than the average. The inset shows the long tail of the 2-site cluster curve, indicating the existence of strong pairs.\label{figLocalDenDists}}
\end{center}
\end{figure}
For clusters of a given size $\ensuremath{N_s}$ and electron number $\ensuremath{N_e}$, the local density will also have some variation about its mean. We plot the local density distribution of clusters with 2-7 sites for normalized global average density $\bar{\rho}=1.0$ in Fig.~\ref{figLocalDenDists}. We find that clusters of larger size have a lower mean density; that is, at lower local densities the process of following nearest-neighbor links has a greater probability of connecting together a larger number of sites. This trend is due to the suppressed probability of finding a group of mutual nearest neighbors at low densities. Let us consider the simplest case of two sites and compare the probability of finding a nearest neighbor at distance $r$, corresponding to local density $\rho_{\mbox{\scriptsize loc}} = r^{-d}$, with the probability of finding a nearest neighbor at this distance that also has the original site as its nearest neighbor. We call such points ``mutual nearest neighbors,'' and their differential probability distribution is found by multiplying the probability of finding a nearest neighbor by the probability that the second site does not have a NN closer than the first site. We thus define the differential probability of finding a mutual NN at distance $r$ by $p_{\mbox{\scriptsize mutualNN}}(r) = p_{nn}(r)\,(1 - P_{nn}(r))$, where
\begin{eqnarray}
P_{nn}(r) &=& 1 - \exp\left(-\frac{\pi^{d/2}nr^d}{\Gamma(\frac{d}{2}+1)}\right) \label{eq_prob_nn}\\ \nonumber\\
p_{nn}(r) &=& \left(\frac{2\pi^{d/2}}{\Gamma(\frac{d}{2})}nr^{d-1}\right) \exp\left(-\frac{\pi^{d/2}nr^d}{\Gamma(\frac{d}{2}+1)}\right) \,. \label{eq_diffProb_nn}\\ \nonumber
\end{eqnarray}
The function $p_{nn}(r)$ is the probability of finding a site's nearest neighbor between $r$ and $r+dr$, and $P_{nn}(r)=\int_0^r p_{nn}(r') dr'$ is the probability of finding a pair with length less than or equal to $r$. As shown in Fig.~\ref{figMutualPnn} for 2D, at large $\rho_{\mbox{\scriptsize loc}}$ the distributions $p_{nn}(r)$ and $p_{\mbox{\scriptsize mutualNN}}$ approach one another, indicating that most nearest-neighbor links form mutual NN pairs. However, at lower $\rho_{\mbox{\scriptsize loc}}$, because $p_{\mbox{\scriptsize mutualNN}}(r)$ decreases much more rapidly at large $r$ than $p_{nn}(r)$ does, the distributions separate and there is a greater probability that a nearest neighbor will not be mutual, and thus lead to a larger (at least size 3) cluster. The peak in $p_{\mbox{\scriptsize mutualNN}}$ near 3.0 coincides, as one expects, with the peak in the probability of 2D 2-site clusters in Fig.~\ref{figLocalDenDists}. Clusters of more than 2 sites have greater probability density at lower $\rho_{\mbox{\scriptsize loc}}$, closer to the peak in $p_{nn}(r) - p_{\mbox{\scriptsize mutualNN}}$ near 1.75 (also shown in Fig.~\ref{figMutualPnn}).
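Equations \eqref{eq_prob_nn} and \eqref{eq_diffProb_nn}, together with the mutual-NN distribution, transcribe directly (a sketch; names are ours):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def P_nn(r, n, d):
    # Probability that a site's nearest neighbor lies within distance r.
    return 1.0 - np.exp(-np.pi ** (d / 2) * n * r**d / gamma(d / 2 + 1))

def p_nn(r, n, d):
    # Differential probability of finding the nearest neighbor at r.
    pref = 2 * np.pi ** (d / 2) / gamma(d / 2) * n * r ** (d - 1)
    return pref * np.exp(-np.pi ** (d / 2) * n * r**d / gamma(d / 2 + 1))

def p_mutual_nn(r, n, d):
    # Differential probability of a mutual nearest neighbor at r.
    return p_nn(r, n, d) * (1.0 - P_nn(r, n, d))
\end{verbatim}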
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/mutualNN_dist.eps}
\caption{(Color online) Probability density for finding a NN compared to that of finding a mutual NN (a NN that also has the initial site as its NN) vs. local spatial density $\rho_{\mbox{\scriptsize loc}}$. The short-dashed line (the difference) shows the probability that a nearest neighbor is \emph{not} a mutual nearest neighbor, and thus will lead to a cluster of $>2$ sites. The average density is set to unity.\label{figMutualPnn}}
\end{center}
\end{figure}
We diagonalize all the cluster Hamiltonians individually, and compile the resulting data to arrive at the distribution of spin values from the ensemble of clusters (obtained from many different large system realizations). In this case, there is substantial fluctuation in the local density of clusters. Even the mean local density for clusters of different size is different; only the average density of the \emph{entire} system is fixed. The results show the same general trends as the clusters with fixed local density described earlier. For comparison, the average spin and percentage of magnetic clusters in two and three dimensions are presented in Appendix \ref{appRandomClusterData} and Ref.~\onlinecite{NielsenThesis} respectively.
We also find that there is a weak correlation between local density ($\ensuremath{\rho_{\mbox{\scriptsize loc}}}$) and average ground state spin $\langle S \rangle$. We observe quite generally that 2D clusters with one extra electron ($\ensuremath{N_e} = \ensuremath{N_s} + 1$) have a peak in $\langle S \rangle$ near $\ensuremath{\rho_{\mbox{\scriptsize loc}}} \approx 0.015$ while those with one hole ($\ensuremath{N_e} = \ensuremath{N_s} - 1$) have relatively smaller values of $\langle S \rangle$ that are less sensitive to changes in $\ensuremath{\rho_{\mbox{\scriptsize loc}}}$. Figure \ref{figLocalSpin} shows this typical behavior for 5-site clusters with $\bar{\rho} = \frac{1}{160}$ and $\ensuremath{N_e} = \ensuremath{N_s} \pm 1$. Similar qualitative behavior is found for other cluster sizes $4 \le \ensuremath{N_s} \le 7$ and from systems with $\bar{\rho} = \frac{1}{1600},\frac{3}{160}$, though $\langle S \rangle$ tends to be higher for larger size clusters. The location of the peak at $\ensuremath{\rho_{\mbox{\scriptsize loc}}} \approx 0.015$ is important to our consideration of different large-system densities $\bar{\rho}$, since the density-independent histogram of local density given in Fig.~\ref{figLocalDenDists} shows that clusters with $\ensuremath{\rho_{\mbox{\scriptsize loc}}} / \bar{\rho} \in [2,4]$ are most prevalent. In the case $\bar{\rho} = \frac{1}{160} = 0.00625$, $\ensuremath{\rho_{\mbox{\scriptsize loc}}} = 0.015$ corresponds to $\ensuremath{\rho_{\mbox{\scriptsize loc}}} / \bar{\rho} = 2.4$, whereas for $\bar{\rho} = \frac{1}{1600} = 0.000625$ and $\bar{\rho} = \frac{3}{160} = 0.01875$ the corresponding values of $\ensuremath{\rho_{\mbox{\scriptsize loc}}} / \bar{\rho}$ are $24$ and $0.8$ respectively. This suggests that the $\bar{\rho} = \frac{1}{160}$ case will show the greatest overall magnetism, consistent with what was seen in the fixed density clusters of section \ref{secFixedDensityClusters} and supported by the further investigation below (see section \ref{secElecDistNoCoulomb}).
\begin{figure}
\begin{center}
\hspace{-0.75in}\includegraphics[width=1.7in,angle=270]{figs/typical_shist_elec.ps} \hspace{-0.8in}
\includegraphics[width=1.7in,angle=270]{figs/typical_shist_hole.ps} \hspace{-1.2in}
\caption{(Color online) Average ground state spin vs.~local density of 2D 5-site clusters. The pertinent range of local densities is divided into bins, and bar heights indicate the average ground state spin of the 5-site clusters whose density falls within the corresponding density bin. This data is from $\bar{\rho}=\frac{1}{160}$ clusters, but the behavior is typical (see text).\label{figLocalSpin}}
\end{center}
\end{figure}
So far we have focused on characterizing the ground state spin distribution for clusters with fixed size and electron number (but with varying densities). We now turn to the spin distribution of the large systems from which we have taken the clusters. Consider a large system with a fixed number of sites $\ensuremath{N_{sys}}$ and doping (fixed total electron number $\ensuremath{N_e^{tot}}$). The system is partitioned into clusters of size $\ensuremath{N_s}=2-7$, which are approximated as being independent. It only remains to determine how the electrons will be distributed among the clusters -- after the number of electrons on each cluster is known, the clusters can be independently solved and their ground-state spin tabulated. We calculate the electron distribution using three different methods, two of which ignore Coulomb interactions and one which takes them into account using a classical approximation. In the following sections we consider \emph{only} 2D systems, since our interest is primarily in 2D heterostructures and we can obtain better statistics in 2D than in 3D.
\subsection{Electron distribution without Coulomb interactions\label{secElecDistNoCoulomb}}
As a first attempt to find the distribution of electrons among the clusters, we ignore Coulomb interactions entirely and minimize the total energy, which is then just the sum of the cluster energies. To accomplish this, we must compute the ionization energies and electron affinities of half-filled clusters and minimize the system's total energy to determine where the electrons reside. We pursue this goal in two ways.
\paragraph{Average energy method\label{secAvgEnergyMethod}}
The first, more approximate, calculation finds the (ensemble) \emph{average} energy required to remove an electron from, or add an electron to, half-filled clusters of each size considered. These values are shown in Table \ref{tabIonAffinity}. The averages are taken over very broad distributions, however, with standard deviations comparable to the mean values shown in Table \ref{tabIonAffinity}. This reveals a major shortcoming of this technique: it approximates broad energy distributions by their mean. Its advantages lie in the simplicity and speed of the calculation, and in the fact that it applies to thermodynamically large systems. We continue the analysis, knowing that the results are to be treated only as a first approximation.
\begin{table}
\begin{center}
\begin{tabular}{|c||c|c||c|c||c|c|}
\hline
& \multicolumn{2}{|c|}{$\rho=\frac{1}{1600}$} & \multicolumn{2}{|c|}{$\rho=\frac{1}{160}$} &
\multicolumn{2}{|c|}{$\rho=\frac{3}{160}$} \\ \hline
\ensuremath{N_s} & $+1e^-$ & $-1e^-$ & $+1e^-$ & $-1e^-$ & $+1e^-$ & $-1e^-$ \\
\hline
1 & 1.000 & 0.000 & 1.000 & 0.000 & 1.000 & 0.000 \\
2 & 0.994 & -0.002 & 0.971 & -0.011 & 0.971 & -0.009 \\
3 & 0.990 & -0.004 & 0.951 & -0.035 & 0.933 & -0.090 \\
4 & 0.988 & -0.005 & 0.940 & -0.049 & 0.914 & -0.133 \\
5 & 0.986 & -0.006 & 0.931 & -0.060 & 0.900 & -0.167 \\
6 & 0.985 & -0.007 & 0.924 & -0.069 & 0.887 & -0.193 \\
7 & 0.984 & -0.007 & 0.917 & -0.082 & 0.875 & -0.217 \\
\hline
\end{tabular}
\caption{Average energy (in units of $U \approx 0.945\ensuremath{\mathrm{Ry^*}}$) required to add (+1) or remove (-1) an electron from a half-filled cluster of $\ensuremath{N_s}$ sites in a large 2D system with total average density $\rho$. We have used the $\ensuremath{\tilde{t}}(r)$ and $\ensuremath{t}(r)$ (with $\ensuremath{\tilde{t}} > \ensuremath{t}$) of our band calculation. \label{tabIonAffinity}}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|ccccccc|}
\hline
\ensuremath{N_s} & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline
1 & 1.0 & .991 & .910 & .867 & .833 & .807 & .783 \\
2 & .971 & .962 & .881 & .838 & .804 & .778 & .754 \\
3 & .933 & .924 & .843 & .800 & .766 & .740 & .716 \\
4 & .914 & .905 & .824 & .781 & .747 & .721 & .697 \\
5 & .900 & .891 & .810 & .767 & .733 & .707 & .683 \\
6 & .887 & .878 & .797 & .754 & .720 & .694 & .670 \\
7 & .875 & .866 & .785 & .742 & .708 & .682 & .658 \\
\hline
\end{tabular}
\caption{Net average energy (in units of $U$) required to transfer an electron from a half-filled cluster of the size specified by the column to a half-filled cluster of the size specified by the row. These data are for clusters in a large 2D system with total average density $\bar{\rho}=\frac{3}{160}$ and with $\ensuremath{\tilde{t}}(r)$ and $\ensuremath{t}(r)$ set by our band calculation.\label{tabEnergyTransfer}}
\end{center}
\end{table}
By subtracting pairs of the values in Table \ref{tabIonAffinity}, we find the \emph{average} net energy gained (or lost) when transferring an electron from one cluster to another, shown in Table \ref{tabEnergyTransfer}; for example, at $\bar{\rho}=\frac{3}{160}$ the cost of moving an electron from a 2-site cluster to a 1-site cluster is $1.000 + (-0.009) = 0.991$ in units of $U$. The fact that all transfer energies are positive implies that, in this approximation, the electron configuration in which each cluster is exactly half-filled is stable. Note, however, that Coulomb interactions (which lower the energy of a system of two charged clusters) may alter this picture substantially. Using the average affinities and ionization energies, we determine the optimal distribution of electrons among the clusters. Let $x_n^q$ be the fraction of the total clusters that have $n$ sites and charge $q$. In our calculation, $n=2 \ldots 7$ and $q\in\{-1,0,+1\}$ (clusters are allowed at most one extra electron or hole), so there are 18 variables in all. The optimal $x_n^q$ are found by minimizing the total energy, $E^{tot}(\{x_n^q\})$, subject to constraints. The energy is written:
\begin{equation}
E^{tot}(\{x_n^q\}) = \sum_{n}\,\sum_{q=\pm 1} \alpha_{n,q}\, x_n^q
\end{equation}
where, consistent with $q$ denoting the occupation relative to half-filling, $\alpha_{n,+1}$ is the energy required to add an electron to an $n$-site cluster and $\alpha_{n,-1}$ is the energy required to remove an electron from an $n$-site cluster. The constraints on the problem are:
\begin{itemize}
\item $x_n^q \ge 0$ for all $n,q$.
\item $\sum_{q=-1}^1 x_n^q = f_n$, where $f_n$ is the fraction of total clusters with size $n$ (found from Table \ref{tabClusterSizeDist}).
\item $N_e^{tot} = \sum_{n,q} (n+q) x_n^q$, where the sum ranges over all $n=2 \ldots 7$ and $q\in\{-1,0,+1\}$ (note that $n+q$ is the total number of electrons on $n$-site clusters with charge $q$).
\end{itemize}
Since the energy and all constraints are linear, this is a linear programming problem and can be solved with standard numerical routines. The minimization is carried out at fixed doping and results in the optimal placement of the electrons in a thermodynamically large system. In Fig.~\ref{figOptimalAvgSvsDoping} we show the average spin per cluster of the entire system as a function of doping (obtained by summing the average spin of a cluster with $n$ sites and charge $q$, weighted by $x_n^q$). We also consider the percentage of clusters that have above-minimal spin (again using the average results for fixed-size clusters), classified above as magnetic clusters. From the plot of the percentage of magnetic clusters vs.~doping in Fig.~\ref{figOptimalPcMagVsDoping} we see that when a system is doped 10--20\% above half-filling, nearly half of the clusters have greater than minimal ground state spin. This suggests that at such doping some kind of percolation might be possible that would induce magnetic order on a mesoscopic or even macroscopic length scale.
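To make the setup concrete, the sketch below expresses this linear program with \texttt{scipy.optimize.linprog}. The add/remove energies are the $\bar{\rho}=\frac{3}{160}$ entries of Table \ref{tabIonAffinity}; the size fractions $f_n$ are placeholders standing in for Table \ref{tabClusterSizeDist}; the code illustrates the method rather than reproducing our production runs.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

sizes   = list(range(2, 8))     # cluster sizes n = 2..7
charges = [-1, 0, +1]           # q = occupation relative to half filling
var     = [(n, q) for n in sizes for q in charges]   # 18 variables x_n^q

# Energies (units of U) to remove (q -> -1) or add (q -> +1) an electron,
# from Table (tabIonAffinity) at rho = 3/160.
remove = {2: -0.009, 3: -0.090, 4: -0.133, 5: -0.167, 6: -0.193, 7: -0.217}
add    = {2:  0.971, 3:  0.933, 4:  0.914, 5:  0.900, 6:  0.887, 7:  0.875}
cost = [remove[n] if q == -1 else add[n] if q == +1 else 0.0
        for (n, q) in var]

# Placeholder size fractions f_n (sum to one); the real values come from
# the cluster-size distribution.
f = {2: 0.35, 3: 0.25, 4: 0.18, 5: 0.12, 6: 0.07, 7: 0.03}

filling = 1.1                                   # electrons per site
nu = filling * sum(n * f[n] for n in sizes)     # electrons per cluster

# Equalities: sum_q x_n^q = f_n and sum_{n,q} (n + q) x_n^q = nu.
A_eq = [[1.0 if n == m else 0.0 for (n, q) in var] for m in sizes]
A_eq.append([float(n + q) for (n, q) in var])
b_eq = [f[m] for m in sizes] + [nu]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(var))
print({v: round(x, 3) for v, x in zip(var, res.x) if x > 1e-9})
\end{verbatim}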
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/optElec_avgSpinVsDoping.ps}
\caption{(Color online) Average spin per cluster as a function of filling (number of electrons per site; half-filled corresponds to 1.0), where the energy optimizing electron distribution is used at each filling. \label{figOptimalAvgSvsDoping}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/optElec_pcMagVsDoping.ps}
\caption{(Color online) Percentage of magnetic clusters (those having above minimal ground state spin) as a function of filling (number of electrons per site; half-filled corresponds to 1.0), where the energy optimizing electron distribution is used at each filling. \label{figOptimalPcMagVsDoping}}
\end{center}
\end{figure}
\paragraph{Full cluster method\label{secElecGlassNoCoulomb}}
A more straightforward way of calculating the optimal electron distribution in the absence of Coulomb interactions is to consider an ensemble of large random systems. For each system, after separating the sites into clusters, we minimize the energy by repeatedly testing whether moving an electron from one cluster to another lowers the total energy. Specifically, the algorithm we use is as follows (a minimal code sketch is given after the list):
\begin{enumerate}
\item Initialize the system by placing electrons (if above half-filling) or holes (if below half-filling) on random clusters.
\item Randomly choose two clusters $i$ and $j$, and attempt to move an electron from $i$ to $j$. If the resulting change in total system energy (just the sum of all cluster energies since there are no Coulomb interactions) is negative, accept the transfer. If not, do not make the transfer.
\item Repeat the above step until the total energy converges.
\end{enumerate}
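A minimal Python sketch of this greedy redistribution is shown below; the input \texttt{energy[i][q]} holds the precomputed ground-state energies $\phi_i^q$ of cluster $i$ at charge $q\in\{-1,0,+1\}$ (supplied in practice by the exact-diagonalization solver), and the fixed number of sweeps stands in for a proper convergence test.
\begin{verbatim}
import random

def redistribute(energy, n_extra, sweeps=100000, seed=0):
    """energy[i][q]: ground-state energy of cluster i at charge q;
    n_extra: number of excess electrons (negative for holes)."""
    rng = random.Random(seed)
    n_cl = len(energy)
    q = [0] * n_cl
    sign, placed = (1 if n_extra > 0 else -1), 0
    while placed < abs(n_extra):            # step 1: random placement
        i = rng.randrange(n_cl)
        if q[i] == 0:
            q[i] = sign
            placed += 1
    for _ in range(sweeps):                 # steps 2-3: greedy moves
        i, j = rng.randrange(n_cl), rng.randrange(n_cl)
        if i == j or q[i] == -1 or q[j] == +1:
            continue                        # keep charges in {-1, 0, +1}
        dE = (energy[i][q[i] - 1] - energy[i][q[i]]
              + energy[j][q[j] + 1] - energy[j][q[j]])
        if dE < 0:                          # accept energy-lowering moves
            q[i] -= 1
            q[j] += 1
    return q
\end{verbatim}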
Knowing the electron distribution, the ground state spin of each cluster in the large system is then calculated. Finally, we compute the distribution of cluster ground state spin values and average it over an ensemble of large systems. Figure \ref{figElecGlassNoCoulomb} shows the percentage of clusters with spin greater than or equal to a reference spin $S_{ref}$ as a function of filling. For $S_{ref} = 1$, these percentages correspond to our earlier definition of ``magnetic clusters'', and we compare in Fig.~\ref{figElecGlassCompareNoCoulomb} the results of this section with those obtained earlier using the average energy method (section \ref{secAvgEnergyMethod}). We see that the latter overestimates the number of high-spin clusters, particularly in the electron-doped case. The above analysis neglected the effect of long-range Coulomb interactions, which we consider next.
\begin{figure}
\begin{center}
\begin{tabular}{l}
\includegraphics[width=2in, angle=270]{figs/elecGlass_den0.010_noCoulomb.ps} \\
\includegraphics[width=2in, angle=270]{figs/elecGlass_den0.100_noCoulomb.ps} \\
\includegraphics[width=2in, angle=270]{figs/elecGlass_den0.300_noCoulomb.ps} \\
\end{tabular}
\caption{Percentage of clusters with total spin greater than or equal to the reference value $S_{ref}$, specified in the key, as a function of filling (1.0 = half-filling). Plots correspond to densities $\rho = \frac{1}{1600}$, $\frac{1}{160}$, and $\frac{3}{160}$, as indicated in their titles.\label{figElecGlassNoCoulomb}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/elecGlassCompareMagCl_noCoulomb.ps}
\caption{(Color online) Comparison of the percentage of magnetic clusters (those with greater than minimal ground state spin) using the average energy method (AVG, thin lines) of section \ref{secAvgEnergyMethod} and the method using actual clusters without Coulomb interactions (NC, thick lines) of section \ref{secElecGlassNoCoulomb}. The style of the line (solid, dashed, or long-dashed) indicates the density. \label{figElecGlassCompareNoCoulomb}}
\end{center}
\end{figure}
\subsection{Electron distribution including Coulomb interactions\label{secElecGlassCoulomb}}
In low density insulating systems, Coulomb interactions between charged centers (or clusters) are not screened effectively and, because of their slow ($1/r$) fall-off, cannot be neglected.\cite{EfrosShklovskii_1975}
Therefore, a more accurate way to calculate the electron distribution is to include Coulomb interactions. The approach described in this section accounts for the Coulomb interactions between charged clusters in an approximate way. We begin in a fashion similar to the preceding analysis, solving each cluster for a range of electron numbers near half-filling. We then determine the minimum energy electron distribution by solving a generalized electron glass problem\cite{EfrosShklovskii_1975,Efros_1976,DaviesLeeRice_1982,DaviesLeeRice_1984} which accounts for the differences in ground state energy of the clusters \emph{and} the Coulomb energy between charged clusters, as described below.
The generalized electron glass problem we solve consists of a set of two-dimensional clusters lying in a large two-dimensional space, indexed by $i=1\ldots N_{cl}$. Each cluster is treated as an effective site, and is assigned a position $\vec{R}_i$ (the average position of its sites) and a dimensionless charge $q_i$. The charge naturally corresponds to the occupation of the cluster (relative to half-filling), and is restricted in our analysis to be +1, 0, or -1.
The problem is to minimize the classical Hamiltonian
\begin{equation}
\mathcal{H}_{eg} = \sum_i \phi_i^{q_i} + \frac{e^2}{2\epsilon} \sum_{ij, i\ne j} \frac{q_i q_j}{r_{ij}}
\end{equation}
where $\epsilon$ is the dielectric constant, $r_{ij} = |\vec{R}_i - \vec{R}_j|$, and $\phi_i^{q_i}$ is the ground state energy of cluster $i$ when it has charge $q_i$. The first term gives the on-site (or, more accurately, on-cluster) energy contribution, and the second term supplies the Coulomb interaction between clusters. The minimization is performed with respect to the variables $q_i$ which must obey the constraint $\sum_i q_i = \ensuremath{N_e^{tot}}-\ensuremath{N_{sys}}$, where $\ensuremath{N_e^{tot}}$ is the total number of electrons in the $\ensuremath{N_{sys}}$-site system. The details of the minimization are a generalization of the procedure outlined by Baranovskii \emph{et al.},\cite{Baranovskii_1979} divided into three steps:
\begin{enumerate}
\item Initialize the $\{q_i\}$ by starting them all equal to zero and randomly choosing clusters to add an electron to (if $\ensuremath{N_e^{tot}} > \ensuremath{N_{sys}}$) or remove an electron from (if $\ensuremath{N_e^{tot}} < \ensuremath{N_{sys}}$) until $\sum \limits_i q_i = \ensuremath{N_e^{tot}}-\ensuremath{N_{sys}}$.
\item Calculate all single-cluster energies
\begin{equation}
E_i^q = \phi_i^q + \frac{e^2}{\epsilon}\, q \sum_{j \ne i} \frac{q_j}{r_{ij}}
\end{equation}
and check that
\begin{equation}
\Big(E_i^{q_i}-E_i^{q_i-1}\Big) < \Big(E_j^{q_j+1}-E_j^{q_j}\Big) \label{eqEGlassIneq1}
\end{equation}
for all $i,j$ such that $q_i > -1$, $q_j < 1$, and $i \ne j$. The left-hand side of the inequality is the cost of having the last-placed electron on cluster $i$, which should be less than the cost of placing an electron on cluster $j$; otherwise, we can lower the system's energy (disregarding the Coulomb interaction for now) by moving an electron from $i$ to $j$. In practice, we consider the pair $(i,j)$ that maximizes the left-hand side and minimizes the right-hand side of Eq.~\eqref{eqEGlassIneq1}. If inequality (\ref{eqEGlassIneq1}) is not satisfied we move an electron from $i$ to $j$ and repeat the step from the beginning. If the inequality is satisfied, we proceed to the next step. This is analogous to the $\mu$-subroutine of earlier work.\cite{Baranovskii_1979,DaviesLeeRice_1984}
\item Calculate the energies $E_i^q$, and iterate through all pairs $(i,j)$ such that $q_i > -1$, $q_j < 1$, and $i \ne j$, checking that each satisfies
\begin{equation}
\Big(E_j^{q_j+1}-E_j^{q_j}\Big) - \Big(E_i^{q_i}-E_i^{q_i-1}\Big) - \frac{e^2}{\epsilon r_{ij}} > 0 \,.
\end{equation}
If a pair $(i,j)$ is found that does not satisfy the inequality, we move an electron from cluster $i$ to cluster $j$ and repeat the step (recalculate the $E_i$ and check again).
\end{enumerate}
This process results in a set $\{ q_i \}$ that is stable with respect to any single electron moving between clusters. Further conditions (and steps) could be added that would make the final distribution of electrons stable against any two electrons simultaneously moving between sites, but previous work on the electron glass problem\cite{DaviesLeeRice_1982} has shown that these additional constraints do not significantly affect the result. Therefore, we do not implement this additional step.
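For definiteness, the three steps can be coded as follows (a Python sketch, not an optimized implementation); \texttt{phi[i][q]} are the cluster ground-state energies $\phi_i^q$, \texttt{R} the cluster positions, and \texttt{e2\_eps} stands for $e^2/\epsilon$.
\begin{verbatim}
import numpy as np

def cluster_energies(q, phi, R, e2_eps):
    # E_i^p = phi_i^p + (e^2/eps) * p * sum_{j != i} q_j / r_ij
    n = len(q)
    pot = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                pot[i] += q[j] / np.linalg.norm(R[i] - R[j])
    return {p: np.array([phi[i][p] + e2_eps * p * pot[i]
                         for i in range(n)]) for p in (-1, 0, 1)}

def minimize_glass(phi, R, dq_total, e2_eps, seed=0):
    rng = np.random.default_rng(seed)
    n = len(phi)
    q = np.zeros(n, dtype=int)
    sign = 1 if dq_total > 0 else -1
    while q.sum() != dq_total:                   # step 1: random charges
        i = rng.integers(n)
        if q[i] == 0:
            q[i] = sign
    while True:
        E = cluster_energies(q, phi, R, e2_eps)
        occ = [i for i in range(n) if q[i] > -1]     # may lose an electron
        emp = [j for j in range(n) if q[j] < +1]     # may gain an electron
        rm = {i: E[q[i]][i] - E[q[i] - 1][i] for i in occ}
        ad = {j: E[q[j] + 1][j] - E[q[j]][j] for j in emp}
        i, j = max(rm, key=rm.get), min(ad, key=ad.get)
        if i != j and rm[i] > ad[j]:             # step 2: the mu-sub check
            q[i] -= 1; q[j] += 1
            continue
        moved = False                            # step 3: pair stability,
        for i in occ:                            # with the Coulomb gain
            for j in emp:                        # -e^2/(eps r_ij)
                if i == j:
                    continue
                r = np.linalg.norm(R[i] - R[j])
                if ad[j] - rm[i] - e2_eps / r < 0:
                    q[i] -= 1; q[j] += 1
                    moved = True
                    break
            if moved:
                break
        if not moved:
            return q
\end{verbatim}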
Once we have determined the distribution of electrons among the many clusters of the large system, we compute the percentage of clusters with spin greater than a given reference spin $S_{ref}$. This quantity is averaged over many random realizations of the large cluster system.
The (ensemble-averaged) percentage of clusters with spin $\ge S_{ref}$ for $S_{ref} = \frac{1}{2}$, $1$, and $\frac{3}{2}$ is shown in Fig.~\ref{figElecGlassSeparateDensities} for our standard densities $\rho = \frac{1}{1600}$, $\frac{1}{160}$, and $\frac{3}{160}$. For $S_{ref} = 1$, these percentages correspond to our earlier definition of ``magnetic clusters'', and Fig.~\ref{figElecGlassCompareBoth} compares the results with those of the previous section (\ref{secElecGlassNoCoulomb}), which neglects Coulomb interactions but is otherwise identical to the calculation performed here. We see that Coulomb interactions slightly deplete the number of high-spin clusters, particularly in the electron-doped case. This also shows that even in the presence of long-range Coulomb interactions there is a sizable percentage of magnetic clusters at modest electron-doping. In order for the magnetic clusters to percolate in a strictly 2D system, they must account for roughly 50\% of the system, which is only attained at large filling factors ($\approx 1.2$ in the best case of $\rho=\frac{1}{160}$). In 3D, however, the percolation threshold is much lower, so an analogous calculation in a 3D or thick 2D system (which behaves as a 3D system on short length scales) may yield even more promising results. We also remark that, as the impurity density is increased at fixed doping, the average number of magnetic clusters behaves non-monotonically: there is an optimal impurity density (nearest to $\rho = \frac{1}{160}$ in our data) that maximizes the average number of magnetic clusters. Altogether, the presence of many high-spin clusters provides a necessary ingredient for ferromagnetism on macroscopic, or even mesoscopic, length scales.
\begin{figure}
\begin{center}
\begin{tabular}{l}
\includegraphics[width=2in, angle=270]{figs/elecGlass_den0.010.ps} \\
\includegraphics[width=2in, angle=270]{figs/elecGlass_den0.100.ps} \\
\includegraphics[width=2in, angle=270]{figs/elecGlass_den0.300.ps} \\
\end{tabular}
\caption{Percentage of clusters with total spin greater than or equal to the reference value $S_{ref}$, specified in the key, as a function of filling (1.0 = half-filling). Plots correspond to densities $\rho = \frac{1}{1600}$, $\frac{1}{160}$, and $\frac{3}{160}$, as indicated in their titles.\label{figElecGlassSeparateDensities}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/elecGlassCompareBoth.ps}
\caption{(Color online) Comparison of the percentage of magnetic clusters (those with greater than minimal ground state spin) when Coulomb interactions are ignored (section \ref{secElecGlassNoCoulomb}) or accounted for using a generalized Coulomb-glass analysis (section \ref{secElecGlassCoulomb}). The plot shows, for densities $\rho = \frac{1}{1600}$, $\frac{1}{160}$, and $\frac{3}{160}$, the percentages for the no-Coulomb (NC) case and the electron glass (EG).\label{figElecGlassCompareBoth}}
\end{center}
\end{figure}
\section{Conclusions\label{secConclusion}}
We have formulated a Hubbard model appropriate for doped semiconductors, which has an occupation-dependent hopping term and therefore intrinsic electron-hole asymmetry characteristic of the hydrogenic center. This generalized disordered Hubbard model is numerically solved using exact diagonalization on 2D finite lattices, selected symmetric clusters, and completely random clusters in two and three dimensions. We summarize the results of each in turn.
Our results on finite (periodic) lattices, as well as selected clusters and distorted/randomized versions of them, lead us to several important conclusions. First, high-spin ground states generally occur at large $U/t$ (low impurity density). On a bipartite lattice one carrier away from half-filling, Nagaoka's theorem guarantees a maximal spin state in the limit $U/t \rightarrow \infty$. In the finite lattices that satisfy Nagaoka's theorem, we find maximal spin states at large but finite $U/t$. In clusters (with less symmetry than a lattice), high-spin ground states are found to be quite sensitive to the cluster geometry, though they all exist at large $U/t$.
Second, the properties of the hydrogen atom give rise to a crucial difference between the electron-doping and hole-doping of hydrogenic systems. In lattices as well as clusters we see a greatly enhanced occurrence of spin-polarization in electron-doped (above half-filling) systems. In systems above half-filling we also find that increasing $\ensuremath{\tilde{t}}/\ensuremath{t}$ can significantly increase the likelihood of this nanoscale ferromagnetism. These results confirm our expectation that the greater the spatial extent of a doubly-occupied site's wavefunction (relative to the wavefunction of a singly-occupied site), the more favorable spin polarization becomes.
Lastly, we remark on the resilience of the high-spin ground states. By perturbing a cluster geometry that has a high-spin ground state, we find that larger values of $U/t$ and $\ensuremath{\tilde{t}}/\ensuremath{t}$ make the state more stable to geometric and random fluctuations in the hopping amplitudes. An assessment of high-spin state robustness is relevant to situations in which sites are individually positioned within some tolerance. Overall, we have identified a regime where nanoscale FM exists (with some robustness) in finite lattices and artificially-made clusters of hydrogenic centers.
The analysis of ground state spin behavior in completely random clusters reveals several of the same conclusions we found for the selected symmetric clusters. Namely, we find that electron-doping and a larger $\ensuremath{\tilde{t}}/\ensuremath{t}$ favor spin polarization in random clusters as well. The electron-hole asymmetry found in all of the random ensembles implies that in real semiconductor systems there is a significant difference between doping above and below half-filling. Spin-polarization is much more prevalent in systems above half-filling, an effect which we again emphasize as arising from the physical properties of the dopant atom. Unlike in the case of selected clusters, where ferromagnetism is generally more prevalent at larger $U/t$, we find that within the low-density range considered (well below the metal-insulator transition), there is an optimal density for finding high-spin (random) clusters. This interesting observation is likely due to clusters breaking up into separate, effectively disconnected, pieces at very low densities, which hinders carrier movement and thereby the alignment of spins in the ground state.
We also study the problem of distributing electrons onto the cluster components of a large system. Of particular interest is the relatively small effect of Coulomb interactions (between charged clusters) on the electron distribution: Coulomb interactions slightly \emph{decrease} the number of clusters with above-minimal spin. This effect was unexpected, since Coulomb interactions reduce the energy cost of charged clusters, which generally have higher ground state spin than uncharged clusters. A detailed look at the differences between the electron distributions with and without Coulomb interactions may help to explain the smallness of the observed effect, and is left for future work.
Taking into account all our data on finite systems, we expect high spin clusters to be observable in systems with a low density (large Hubbard $U/t$) of centers and a small \emph{excess} of electrons. The latter requirement is difficult to realize in 3D bulk systems, but could be met in doped quantum dots and 2D heterostructures. For example, doped quantum dots with dopant number $N_d = 6-15$ and a small excess of electrons $N_e-N_d=1-2$ would be ideal systems for finding high-spin ground states. Also, in modulated structures with dopants in both quantum wells and barrier regions, regions of excess electrons can be achieved, unlike in true bulk doped semiconductors. We also note that the artificial cluster geometries studied in section \ref{secSelectedClusters} have real-world applications through recently developed technology which allows precise placement of phosphorus donors in silicon.\cite{Schofield_2003}
The same regime (low density, electron-doping) is also the most likely region for the possible appearance of true macroscopic ferromagnetism, as our calculations on the cluster constituents of large systems (in section \ref{secVaryDensityClusters}) reveal. Obtaining a conclusive answer to this question numerically, however, requires going beyond the small sizes possible with exact diagonalization methods. A possible route is to use more approximate methods such as density matrix or perturbative renormalization group methods in combination with other numerical techniques. Even if true ferromagnetism on the macroscopic scale is absent, we have shown that there should be a significant asymmetry between the magnetic response of systems with excess electrons above the half-filled (uncompensated) case and those with a deficit of electrons from the half-filled case ({\it i.e.}~the traditional compensated case): the former should have a larger susceptibility in the paramagnetic phase at low temperatures. This prediction can be tested experimentally by using gates to tune the electron density in a 2D layer. Also, if ferromagnetism is attained on large enough length scales, it may show up as hysteresis in transport measurements due to magnetic domains. Clearly, the temperature scales at which these ferromagnetic tendencies will manifest themselves will be much below the scales for diluted magnetic semiconductors like gallium manganese arsenide. This is due to several factors: (i) the energy scale for shallow impurities is low; (ii) the ferromagnetism occurs only for large $U/\ensuremath{t}$, \emph{i.e.}~low dopant and carrier densities where $J \sim \ensuremath{t}^2/U$ is very small; and (iii) Nagaoka ferromagnetism in a Hubbard band is a much subtler effect involving two competing terms, compared with ferromagnetism arising out of a double exchange mechanism. Nevertheless, the demonstration of high spin states and possible ferromagnetism in semiconductors doped with so-called ``non-magnetic'' shallow impurities would suggest that magnetism in semiconductors is not limited to semiconductors containing transition metal elements, but is possible in a wider range of semiconductor-based materials.
\section{ACKNOWLEDGMENTS}
This research was supported by NSF-MRSEC, Grant DMR-0213706 and DMR-0819860.
\section{Introduction}\label{sec:intr}
The main goal of the relativistic heavy-ion collisions program is to produce and study the quark-gluon plasma (QGP). Along with the plasma, the relativistic heavy-ion collisions produce intense electromagnetic fields that modify its properties. In order to infer the plasma properties from the experimental data one needs to quantify the effect of electromagnetic fields on the QGP dynamics. In principle, this can be accomplished by solving the relativistic magneto-hydrodynamic (MHD) equations. The electromagnetic field affects both the ideal plasma flow and the transport coefficients, while the electric currents in plasma affect the electromagnetic field.
Since the QGP dynamics is determined mostly by the strong interactions, one may start by treating the electromagnetic interactions as a small perturbation. This approximation amounts to a partial decoupling of the dynamics of the electromagnetic field from that of the plasma.
The MHD of ideal QGP in the background electromagnetic field was studied in \cite{Pu:2016ayh,Roy:2015kma,Pu:2016bxy,Roy:2015coa,Roy:2017yvg,Inghirami:2016iru,Mohapatra:2011ku,Das:2017qfi,Greif:2017irh,Tuchin:2011jw}. It has been recently argued in \cite{Roy:2017yvg} that the effect of the electromagnetic field on QGP is small for realistic fields justifying the decoupling approximation. Still, before making a final conclusion that the plasma flow is decoupled from the electromagnetic field, one needs to verify that the kinetic coefficients do not strongly depend on the field. In particular, significant enhancement of the viscous stress may invalidate the ideal fluidity assumption. Despite the recent progress in calculating the transport coefficients \cite{Ding:2010ga,Aarts:2007wj,Amato:2013oja,Cassing:2013iz,Hattori:2017qih,Yin:2013kya,Li:2017tgi,Hattori:2016idp,Hattori:2016lqx,Fukushima:2015wck,Li:2016bbh,Hattori:2016cnt,Nam:2013fpa,Agasian:2011st,Chernodub:2009rt,Li:2017jwv}, their values at the temperatures of phenomenological interest are not yet certain.
Assuming perfect decoupling, i.e.\ that QGP does not affect the electromagnetic field at all, the electromagnetic field was computed in \cite{Kharzeev:2007jp,Skokov:2009qp,Bzdak:2011yy,Voronyuk:2011jd,Ou:2011fm,Deng:2012pc,Bloczynski:2012en} using the hadron transport models. However, it was argued in \cite{Tuchin:2013ie,Tuchin:2013apa} that this approximation is adequate only at the earliest times after the plasma formation. At later times the plasma response plays the crucial role. Owing to its finite electrical conductivity it significantly enhances the electromagnetic field \cite{Tuchin:2010vs,Tuchin:2013apa,Tuchin:2013ie,Zakharov:2014dia,Tuchin:2015oka}. Thus far all calculations of the electromagnetic field assumed stationary plasma. The main goal of this paper is to compute the contribution of the plasma expansion to the magnetic field. We will argue that this contribution is on the order of a few per cent and thus can be safely neglected. Along the way, we will clarify a number of important points that were not sufficiently addressed in the previous publications.
The spacetime picture of a heavy-ion collision is shown in \fig{geom-1} and \fig{geom-2}. In \fig{geom-1}, which is nearly identical to the one found in Bjorken's classical paper \cite{Bjorken:1982qr}, we emphasize that the valence quarks, which are sources of the electromagnetic field, are external to the plasma. In fact, a small fraction of valence quarks can be found inside the QGP, a phenomenon known as baryon stopping. However, the transfer of the valence quarks across the wide rapidity interval is strongly suppressed \cite{Kharzeev:1996sq,Itakura:2003jp}. Their contribution to the total field was estimated in \cite{Kharzeev:2007jp} and turns out to be completely negligible at relativistic energies. In view of these observations we neglect the baryon stopping, assuming that all valence quarks travel along straight lines. Furthermore, for our arguments in this paper it is sufficient to approximate the valence electric charges as classical point particles. In a more comprehensive treatment one has to replace the classical sources by the quantum distributions
\cite{Holliday:2016lbx,Peroutka:2017esw}.
In this paper we regard the QGP as a homogeneous plasma expanding according to the blast wave model \cite{Siemens:1978pb,Teaney:2000cw,Kolb:2000fha} and having the electrical conductivity $\sigma$. We are going to neglect its mild time dependence \cite{Tuchin:2013ie} and treat it as a constant \footnote{Actually, even a mild time dependence of $\sigma$ may be phenomenologically significant \cite{Tuchin:2015oka}.}. Recently, there has been a lively discussion of possible effects of the chiral anomaly \cite{Kharzeev:2013ffa,Huang:2015oca,Kharzeev:2015znc} on the QGP dynamics in general and its electrodynamics in particular \cite{Tuchin:2014iua,Manuel:2015zpa,Li:2016tel,Tuchin:2016qww,Hirono:2016jps,Hirono:2015rla,Xia:2016any,Qiu:2016hzd}. In this paper we adopt a conservative view and disregard these effects until they are firmly established.
The paper is organized as follows. In \sec{sec:a} we write down the basic equations that determine the electromagnetic fields in QGP. We derive the retarded Green's function of the electromagnetic field in the electrically conducting medium and show that it is a sum of two terms: the pulse and the wake.
The wake field is usually neglected in calculations. We prove that this is a good approximation: for a Lorentz factor $\gamma=100$ in a plasma with electrical conductivity $\sigma = 5.8$~MeV \cite{Amato:2013oja,Ding:2010ga}, the wake term is small until $t\sim 100$~fm/$c$ and thus can be neglected in phenomenological calculations. This is discussed in \sec{sec:c} in the stationary plasma limit. The main result of \sec{sec:c} is Eq.~\eq{a51}, which gives the analytical expression for the magnetic field of a point external charge in a conducting medium. It agrees with the previous result derived by one of us \cite{Tuchin:2013apa}, but has the advantage of being expressed in terms of elementary functions. The expanding plasma is considered in \sec{sec:e}, where we treat the magnetic part of the Lorentz force perturbatively and derive the solution for the magnetic field. We summarize the results and discuss the prospects in \sec{sec:s}.
\begin{figure}[ht]
\includegraphics[height=5cm]{YZgeometry.pdf}
\caption{The geometry of the heavy-ion collisions. Ion remnants move with velocity $\pm \b v$. The plasma's velocity is $\b u$. We emphasize that the valence electric charges $dq$ are external to the plasma. The geometry in the $xy$ plane is shown in \fig{geom-2}. }
\label{geom-1}
\end{figure}
\section{Maxwell equations in expanding plasma}\label{sec:a}
An electromagnetic field in flowing conducting medium satisfies the equations
\begin{subequations}
\begin{align}
\b \nabla\times \b B &= \partial_t \b E+ \sigma (\b E+\b u\times \b B)+\b j\,,\label{a10}\\
\b \nabla\cdot \b E&= \rho\,, \label{a11}\\
\b \nabla\cdot \b B&= 0\,, \label{a13}\\
\b \nabla \times \b E&= -\partial_t \b B\,, \label{a14}
\end{align}
\end{subequations}
where $\b u$ is the fluid velocity, $\sigma$ is the electrical conductivity, and $j^\mu = (\rho, \b j)$ is the external current created by the valence charges, as shown in \fig{geom-1}. Replacing the fields with the potentials as usual
\bal
\b E= -\b \nabla \varphi- \partial_t \b A\,, \qquad \b B= \b \nabla\times \b A
\gal
and using the gauge condition
\ball{a18}
\partial_t \varphi+ \b \nabla \cdot \b A+\sigma\varphi=0\,
\gal
we arrive at the equations
\begin{subequations}
\bal
&-\nabla^2 \varphi+\partial_t^2\varphi + \sigma\partial_t \varphi=\rho\,,\label{a20}\\
&-\nabla^2 \b A+\partial_t^2\b A + \sigma\partial_t \b A-\sigma \b u \times (\b \nabla\times \b A)=\b j\,,\label{a21}
\gal
\end{subequations}
We consider a point charge $e$ moving in the positive $z$ direction with constant velocity $v$:
\ball{a23}
\b j = ev \unit z\delta(\b b)\delta(z-vt)\,, \quad \rho=0\,.
\gal
In the experimentally interesting region of small $z$'s, $|\b u|\ll 1$. This allows us to treat the corresponding term in \eq{a21} as a perturbation. Thus, writing $\b A= \b A^{(0)}+ \b A^{(1)}$ we obtain two equations
\begin{subequations}
\bal
&-\nabla^2 \b A^{(0)}+\partial_t^2\b A^{(0)} + \sigma\partial_t \b A^{(0)}=\b j\,,\label{a25}\\
&-\nabla^2 \b A^{(1)}+\partial_t^2\b A^{(1)} + \sigma\partial_t \b A^{(1)}= \sigma\b u \times \b B^{(0)}\,.\label{a26}
\gal
\end{subequations}
The first of these equations describes the field created by the external currents in the stationary plasma, whereas the second one takes the expansion of the plasma into account.
To find the particular solutions to these equations we introduce the retarded Green's function $G(\b r, t| \b r', t')$ that obeys the equation
\ball{a28}
&-\nabla^2 G+\partial_t^2 G+ \sigma\partial_t G=\delta(t-t')\delta(\b r-\b r')\,.
\gal
We note that the function $\mathcal{G}$ defined as
\ball{a30}
G(\b r, t| \b r', t')= e^{-\sigma t/2}\mathcal{G}(\b r, t| \b r', t')
\gal
is a Green's function of the Klein-Gordon equation with imaginary mass $m=i\sigma/2$
\ball{a32}
&-\nabla^2 \mathcal{G}+\partial_t^2 \mathcal{G}+ m^2 \mathcal{G}=e^{\sigma t'/2}\delta(t-t')\delta(\b r-\b r')\,.
\gal
The corresponding retarded Green's function in the coordinate representation reads (see e.g.\ \cite{MF})
\ball{a33}
\mathcal{G}(\b r, t| \b r', t')=&\frac{1}{4\pi}e^{\frac{1}{2}\sigma t' }\left\{ \frac{\delta(t-t'-R)}{R}\right.\nonumber\\
&\left.-\frac{m}{\sqrt{(t-t')^2-R^2}}J_1\left(m\sqrt{(t-t')^2-R^2}\right)\theta (t-t'-R)\right\}\theta(t-t')\,.
\gal
Eqs.~\eq{a30} and \eq{a33} furnish the retarded Green's function for the original Eq.~\eq{a28}:
\begin{subequations}\label{a34}
\bal
G(\b r, t| \b r', t')&=G_a(\b r, t| \b r', t')+G_b(\b r, t| \b r', t')\\
G_a(\b r, t| \b r', t')&= \frac{1}{4\pi}e^{-\frac{1}{2}\sigma(t- t')}\frac{\delta(t-t'-R)}{R}\theta(t-t')\label{a34a}\\
G_b(\b r, t| \b r', t')&=\frac{1}{4\pi}e^{-\frac{1}{2}\sigma(t- t')}\frac{\sigma/2}{\sqrt{(t-t')^2-R^2}}I_1\left(\frac{\sigma}{2}\sqrt{(t-t')^2-R^2}\right)\theta (t-t'-R)\theta(t-t')\,.\label{a34b}
\gal
\end{subequations}
We have separated the Green's function into a sum of two terms: the original pulse $G_a$ and the wake $G_b$ created by the currents induced in the plasma. The exponential factor $\exp[-\sigma(t-t')/2]$ reflects the decrease of the field strength due to the work done by the field on the electric currents in the plasma.
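For the numerical estimates below it is convenient to encode the two pieces directly; the following Python sketch (natural units $\hbar=c=1$; lengths in fm, $\sigma$ in $\mathrm{fm}^{-1}$) transcribes Eqs.~\eq{a34a} and \eq{a34b}:
\begin{verbatim}
import numpy as np
from scipy.special import i1        # modified Bessel function I_1

def G_pulse_weight(t, tp, R, sigma):
    # coefficient of delta(t - tp - R) in Eq. (a34a)
    return np.exp(-0.5 * sigma * (t - tp)) / (4 * np.pi * R)

def G_wake(t, tp, R, sigma):
    # Eq. (a34b); vanishes outside the interior of the light cone
    dt = t - tp
    if dt <= R:
        return 0.0
    s = np.sqrt(dt * dt - R * R)
    return (np.exp(-0.5 * sigma * dt) / (4.0 * np.pi)
            * 0.5 * sigma * i1(0.5 * sigma * s) / s)
\end{verbatim}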
\section{Solution for the static plasma}\label{sec:c}
The particular solution to \eq{a25}, namely the one induced by the external currents, is given by
\ball{a38}
\b A^{(0)}(\b r, t)= \int G(\b r, t| \b r', t')\b j (\b r', t')d^3r' dt'\,,
\gal
where the retarded Green's function is given by \eq{a34}.
Since the retarded Green's function breaks up into two physically meaningful terms we compute and analyze each term independently.
\subsection{The pulse field}
The argument of the delta function in $G_a$ vanishes when $t-t'= |\b r-vt' \unit z|$. The corresponding retarded time $t'$ satisfying $t>t'$ reads
\ball{a42}
t'=t_0= \gamma^2\left( t-vz-\sqrt{(z-vt)^2+b^2/\gamma^2}\right)\,.
\gal
Writing
\ball{a44}
\delta(t-t'-R)= \frac{\delta(t'-t_0) (t-t_0)}{\sqrt{(z-vt)^2+b^2/\gamma^2}}\,
\gal
and denoting $\xi = vt-z$ we find
\ball{a46}
\b A_a^{(0)}(\b r, t)= \frac{ev\unit z}{4\pi}\frac{1}{\sqrt{\xi^2+b^2/\gamma^2}}\exp\left\{-\frac{\sigma\gamma^2}{2} \left(-v\xi+\sqrt{\xi^2+b^2/\gamma^2}\right)\right\}\,.
\gal
It is readily seen that as $\sigma\to 0$ this term reproduces the vector potential of a charge uniformly moving in vacuum. The magnetic field corresponding to the vector potential \eq{a46} is given by
\bal
\b B_a^{(0)}&= -\frac{\partial A_{az}^{(0)}}{\partial b}\unit \phi\label{a50}\\
& = \frac{ev}{4\pi}\unit \phi \left\{ \frac{\sigma b/2 }{\xi^2+b^2/\gamma^2}+ \frac{b}{\gamma^2 [\xi^2+b^2/\gamma^2]^{3/2}}
\right\}\exp\left\{-\frac{\sigma\gamma^2}{2} \left(-v\xi+\sqrt{\xi^2+b^2/\gamma^2}\right)\right\}\,.
\label{a51}
\gal
The first term in the curly brackets dominates when $\sqrt{\xi^2+b^2/\gamma^2}\gg 1/\sigma\gamma^2\sim 10^{-3}$~fm. Assuming that this is the case, \eq{a51} simplifies in the limit $b/\gamma \ll \xi$, yielding the ``diffusion approximation''
\ball{a53}
\b B_a^{(0)}\approx \frac{ev}{8\pi}\unit \phi \frac{\sigma b }{\xi^2}e^{-\frac{\sigma\xi}{2(1+v)}}e^{-\frac{b^2\sigma}{4\xi}}\,,\quad \xi>0\,.
\gal
Clearly, the second exponential factor in \eq{a53} can be dropped at later times $\xi\gg b^2\sigma/4\sim 0.5$~fm.
The expression for the magnetic field was previously derived by one of us in \cite{Tuchin:2013apa} (see Eq.~(7) there) and, unlike \eq{a51}, is represented in a form of a one-dimensional integral. Both formulas reduce to \eq{a53} in the diffusion approximation.
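As a numerical cross-check of this statement, \eq{a51} and \eq{a53} are easily evaluated side by side. A sketch (field in units of $e$; $\sigma = 5.8$~MeV $\approx 0.029~\mathrm{fm}^{-1}$ and $\gamma = 100$ as in the text):
\begin{verbatim}
import numpy as np

def B_pulse(xi, b, sigma, gamma):               # Eq. (a51), units of e
    v = np.sqrt(1.0 - 1.0 / gamma**2)
    s = np.sqrt(xi**2 + b**2 / gamma**2)
    amp = 0.5 * sigma * b / s**2 + b / (gamma**2 * s**3)
    return v / (4 * np.pi) * amp * np.exp(-0.5 * sigma * gamma**2 * (s - v * xi))

def B_diffusion(xi, b, sigma, gamma):           # Eq. (a53)
    v = np.sqrt(1.0 - 1.0 / gamma**2)
    return (v / (8 * np.pi) * sigma * b / xi**2
            * np.exp(-sigma * xi / (2 * (1 + v)) - sigma * b**2 / (4 * xi)))

sigma, gamma = 5.8 / 197.3, 100.0               # hbar c = 197.3 MeV fm
for xi in (1.0, 3.0, 10.0):                     # xi = v t - z and b in fm
    print(xi, B_pulse(xi, 1.0, sigma, gamma), B_diffusion(xi, 1.0, sigma, gamma))
\end{verbatim}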
\subsection{The wake field}
It has been tacitly assumed in \cite{Tuchin:2013apa} that the wake term is small. Using the Green's function \eq{a34b} we can compute this term explicitly:
\ball{a58}
\b A_b^{(0)}(\b r, t)=\frac{e\unit z}{4\pi}\frac{\sigma v}{2}\int_{-\infty}^{t_0}
\frac{e^{-\sigma(t-t')/2}}{\sqrt{(t-t')^2-b^2-(z-vt')^2}}I_1\left( \frac{\sigma}{2}\sqrt{(t-t')^2-b^2-(z-vt')^2}\right)dt'\,.
\gal
It is useful to introduce a new integration variable $\lambda$ such that
\ball{a60}
t'= \gamma^2\left( t-vz-\sqrt{(z-vt)^2+(b^2+\lambda^2)/\gamma^2}\right)\,.
\gal
It is straightforward to check that this implies
\ball{a62}
\lambda^2= (t-t')^2-b^2-(z-vt')^2\,.
\gal
The vector potential \eq{a58} can now be represented as
\ball{a64}
\b A_b^{(0)}(\b r, t)=\frac{e\unit z}{4\pi}\frac{\sigma v}{2}\int^{\infty}_{0}
\frac{d\lambda\, I_1\left( \frac{\sigma}{2}\lambda\right)}{\sqrt{\xi^2+(b^2+\lambda^2)/\gamma^2}}\exp\left\{ -\frac{\sigma\gamma^2}{2} \left(-v\xi+\sqrt{\xi^2+(b^2+\lambda^2)/\gamma^2}\right)\right\}\,.
\gal
The main contribution to this integral comes from the integration region $\sqrt{\gamma^2\xi^2+b^2}\ll \lambda \ll 2/\sigma\gamma$ where the integrand is approximately constant. At smaller $\lambda$'s it vanishes as $\sim\lambda$, while at larger $\lambda$'s it is exponentially suppressed. Thus, we can approximate the integral in \eq{a64} as
\bal
\b A_b^{(0)}(\b r, t)&\approx \frac{e\unit z}{4\pi}\frac{\sigma v}{2}\int^{\infty}_{0}
\frac{d\lambda\, \frac{1}{2} \frac{\sigma}{2}\lambda}{\sqrt{\xi^2+(b^2+\lambda^2)/\gamma^2}}\exp\left\{ -\frac{\sigma\gamma^2}{2} \left(-v\xi+\sqrt{\xi^2+(b^2+\lambda^2)/\gamma^2}\right)\right\}\nonumber\\
&=\frac{e\unit z}{4\pi}\frac{\sigma v}{4}\exp\left\{-\frac{\sigma\gamma^2}{2} \left(-v\xi+\sqrt{\xi^2+b^2/\gamma^2}\right)\right\}\,.\label{a67}
\gal
Using \eq{a50} we derive the magnetic field
\ball{a70}
\b B_b^{(0)}(\b r, t) =\frac{e\unit \phi}{4\pi}\frac{\sigma^2 v b}{8}\frac{1}{\sqrt{\xi^2+b^2/\gamma^2}}\exp\left\{-\frac{\sigma\gamma^2}{2} \left(-v\xi+\sqrt{\xi^2+b^2/\gamma^2}\right)\right\}\,.
\gal
Comparing \eq{a67} and \eq{a46} we conclude that the contribution of the wake to the retarded Green's function \eq{a34} is small in the phenomenologically relevant region $\sqrt{\xi^2+b^2/\gamma^2}\ll 4/\sigma\sim 10^2$~fm. However, it dominates in the opposite limit, i.e.\ at very late times.
\subsection{Diffusion approximation} \label{sec:d}
It is instructive to derive Eq.~\eq{a53} directly from \eq{a28} as has been done in \cite{Tuchin:2015oka}. The diffusion approximation in \eq{a28} amounts to the assumption that $\partial_z^2-\partial_t^2\sim k_z^2/\gamma^2\ll k_\bot^2, \sigma k_z$. In this case the retarded Green's function $G_\mathcal{D}(\b r, t| \b r', t')$ obeys the equation
\ball{d1}
&-\nabla_\bot^2 G_\mathcal{D}+ \sigma\partial_t G_\mathcal{D}=\delta(t-t')\delta(\b r-\b r')\,.
\gal
Its solution is
\ball{d3}
G_\mathcal{D}(\b r, t| \b r', t')= \int \frac{d^3p}{(2\pi)^3}\int_{-\infty}^\infty \frac{d\omega}{2\pi }\frac{e^{-i\omega (t-t') +i\b p\cdot (\b r-\b r')}}{p_\bot^2-i\omega \sigma}= \frac{1}{4\pi (t-t')}\delta(z-z')\theta(t-t') e^{-\frac{\sigma (\b r_\bot-\b r_\bot')^2}{4(t-t')}}\,.
\gal
Employing \eq{a23} and \eq{a38} one derives
\ball{d5}
\b A^{(0)}(\b r, t) = \frac{e\unit z}{4\pi(t-z/v)}e^{-\frac{\sigma b^2}{4(t-z/v)}}\theta(t-z/v)\,,
\gal
which yields \eq{a53} for $\xi \ll 4/\sigma$\,.
\section{Solution for the expanding plasma}\label{sec:e}
\subsection{Contribution of the plasma flow}
Now we turn to Eq.~\eq{a26} that takes the plasma flow into account. Suppose that a point source is moving along the trajectory $z=vt$, $x=\tilde x$, $y=\tilde y$, where $\tilde x$ and $\tilde y$ are constants, see \fig{geom-2}. Denote by $\tilde{\b r}$ a vector with components $\tilde x,\tilde y,z$ and let $\tilde{\b b}$ be its transverse part. The magnetic field created by this charge in the stationary plasma is then given by \eq{a51} and \eq{a70} with the replacement $ b \to |\b b- \tilde{\b b}|$; denote it as $\b B^{(0)}(\b r-\tilde{\b r},t)$. The solution to \eq{a26} can be written right away using the Green's function as
\ball{b1}
\b A^{(1)}(\b r, t|\tilde{\b r})=\sigma \int G_a(\b r, t| \b r', t') \b u(\b r',t') \times \b B^{(0)}_a(\b r'-\tilde{\b r}, t')d^3r' dt'\,.
\gal
The contribution of the wake is neglected as per the results of the previous section.
The longitudinal expansion of QGP is usually described by the Bjorken model \cite{Bjorken:1982qr} in which the flow velocity in the lab frame is given by
\ball{b15}
\b u(\b r, t) = \frac{\b z}{t}\,.
\gal
Since the plasma velocity is non-vanishing only in the forward light-cone, i.e.\ $\b u^2\le 1$, the integral in \eq{b1} is restricted to the region $|z'|\le t'$. Using $t'= t-R$ this implies that the integral over $z'$ runs between the following limits:
\ball{b17}
-\frac{t^2-z^2-(\b b-\b b')^2}{2(t+z)}\le z' \le \frac{t^2-z^2-(\b b-\b b')^2}{2(t-z)}\,.
\gal
In fact, the applicability of the Bjorken model is restricted to the central plateau region in the inclusive particle spectrum at a given energy. If $2Y$ is the extent of the plateau in rapidity, then $|\b u|\le \tanh Y$. For a conservative estimate of the flow correction we set $Y$ to infinity, which yields \eq{b17}.
A more sophisticated blast wave model \cite{Siemens:1978pb,Teaney:2000cw,Kolb:2000fha} takes the transverse flow into account
\ball{b19}
\b u(\b r, t) = \frac{u_o}{R_o}\b b\,\theta(R_o-b)+ \frac{\b z}{t}\,,
\gal
where $u_o$ and $R_o$ are parameters fitted to the experimental data. We use $R_o=7.5$~fm, $u_o=0.55$ from \cite{Teaney:2003kp}. This time, restriction to the forward light-cone $\b u^2(\b r', t')\le 1$ reads
\ball{b21}
\left(\frac{u_o b'}{R_o}\right)^2+\left(\frac{z'}{t-R}\right)^2\le 1\,.
\gal
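For reference, the flow field \eq{b19} with the physical-region restriction $\b u^2 \le 1$ can be encoded as follows (a sketch using the quoted parameters $u_o = 0.55$ and $R_o = 7.5$~fm; the function returns \texttt{None} outside the physical region):
\begin{verbatim}
import numpy as np

def u_blast(b_vec, z, t, u_o=0.55, R_o=7.5):    # Eq. (b19); lengths in fm
    b_vec = np.asarray(b_vec, dtype=float)
    u_perp = (u_o / R_o) * b_vec if np.linalg.norm(b_vec) < R_o else np.zeros(2)
    u = np.array([u_perp[0], u_perp[1], z / t])
    return u if u @ u <= 1.0 else None          # forward-light-cone restriction
\end{verbatim}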
\begin{figure}[ht]
\includegraphics[height=8cm]{XYgeometry.pdf}
\caption{The geometry of the heavy-ion collisions in the transverse plane. The two heavy-ion remnants (big circles) move in opposite directions along the $z$-axis, see \fig{geom-1}. The element of charge $dq$ is located at the same $z$ as an ion remnant (i.e.\ it is not inside the plasma). Its projection on the transverse plane is depicted by the square. The small circle indicates the element of plasma moving with velocity $\b u$. The observation point is denoted by the $+$ symbol. The impact parameter $\b s$ points from one nuclear center to another one.
}
\label{geom-2}
\end{figure}
\subsection{Initial conditions}\label{sec:f}
Thus far we assumed that a particle moves in plasma all the way from $t=-\infty$. In fact, a physical scenario more relevant for relativistic heavy-ion collisions is that the valence charges move in vacuum until a certain time $\tau$ when the plasma emerges. We neglect the finite
thermalization time. Let the initial conditions be
\ball{f1}
\b A(\b r, \tau)= \bm{\mathcal{A}}(\b r)\,,\quad \frac{\partial \b A(\b r, \tau)}{\partial t}= \bm{\mathcal{V}}(\b r)\,,
\gal
where $\bm{\mathcal{A}}$ and $\bm{\mathcal{V}}$ are determined by the field that existed before the plasma emergence at $t=\tau$ \cite{Tuchin:2015oka}.
Then, the solution to \eq{a25} can be written as
\begin{subequations}
\bal
\b A^{(0)}(\b r, t)=& \int_\tau^{t+}dt' \int d^3r' \b j(\b r',t')G(\b r, t|\b r',t')\label{f4}\\
&+ \int d^3r' \left\{ \sigma \bm{\mathcal{A}}(\b r')+\bm{\mathcal{V}}(\b r')\right\}
G(\b r, t|\b r',\tau)\label{f5}\\
&-\int d^3r' \bm{\mathcal{A}}(\b r')\frac{\partial}{\partial t'}G(\b r, t|\b r',\tau)\,.\label{f6}
\gal
\end{subequations}
The initial conditions \eq{f5} and \eq{f6} are satisfied at the leading order. Since they are independent of the plasma flow, we are not concerned with them any further in this paper. Thus, the solution to \eq{a26} takes the form
\ball{f8}
\b A^{(1)}(\b r, t|\tilde{\b r})=\sigma \int_\tau^{t+}dt' \int d^3r' G(\b r, t|\b r',t')\,\b u(\b r',t')\times \b B^{(0)}(\b r'- \tilde{\b r}, t')\,.
\gal
The initial time is chosen to be $\tau=0.2$~fm/$c$ in accordance with the phenomenological models of relativistic heavy-ion collisions \cite{Kolb:2000fha,Teaney:2000cw}.
\subsection{Magnetic field of a nucleus}\label{sec:g}
The total field created by a nucleus is
\ball{g1}
\b A_\text{nucl}(\b r, t) =\int \rho (\tilde{\b r})\b A^{(0)}(\b b- \tilde{\b b},z-\tilde z, t)d^3 \tilde r+ \int \rho (\tilde{\b r})\b A^{(1)}(\b r, t|\tilde{\b b},\tilde z)d^3 \tilde r\,,
\gal
where we slightly modified the notation by replacing $\tilde{\b r}$ with $\tilde{\b b},\tilde z$ in the vector potential argument.
In the laboratory frame, the proton distribution in the nucleus is very narrow in the $z$-direction, with average coordinate $\tilde z= \pm vt$ depending on the direction of motion. Assuming that the nuclear density $\rho$ is constant throughout the nucleus of radius $R_A$ and using \fig{geom-2}, one can compute the vector potential as
\bal
\b A^{(0)}_\text{nucl}(\b r, t) &= 2\int \rho \sqrt{R_A^2-(b'')^2}\b A^{(0)}(\b b - \b b''-\b s/2,z- vt, t)d^2 b''\label{g3}\\
\b A^{(1)}_\text{nucl}(\b r, t) &= 2\int \rho \sqrt{R_A^2-(b'')^2}\b A^{(1)}(\b r, t|\ \b s/2+\b b'', vt)d^2 b''\,.\label{g4}
\gal
The nuclear density is normalized as $\rho (4\pi /3)R_A^3 = Z$, where $Ze$ is the nucleus electric charge. The contribution of the other heavy ion can be calculated by simply replacing $\b v\to -\b v$. In the figures below we show only the single-nucleus contribution.
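Numerically, the smearing integrals \eq{g3} and \eq{g4} reduce to a two-dimensional quadrature over the nuclear disk. A sketch of the transverse quadrature (here \texttt{field\_point} stands for the point-charge field, e.g.\ \eq{a46} or \eq{a51}, evaluated at the shifted transverse coordinate; $\b s$ is taken along the $x$-axis as in \fig{geom-2}, and $Z = 79$, $R_A = 7$~fm are typical gold values):
\begin{verbatim}
import numpy as np

def smear_nucleus(b_vec, z, t, field_point, s=3.0, Z=79.0, R_A=7.0, n=60):
    rho = Z / (4.0 * np.pi / 3.0 * R_A**3)    # rho (4 pi/3) R_A^3 = Z
    xs = np.linspace(-R_A, R_A, n)
    dA = (xs[1] - xs[0])**2
    total = 0.0
    for x in xs:                              # midpoint rule over the disk
        for y in xs:
            b2 = x * x + y * y
            if b2 >= R_A**2:
                continue
            shifted = np.asarray(b_vec) - np.array([x + s / 2.0, y])
            total += (2.0 * rho * np.sqrt(R_A**2 - b2)
                      * field_point(shifted, z, t) * dA)
    return total
\end{verbatim}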
It follows from \eq{g3} that the magnetic field created by a single nucleus in a stationary plasma is
\ball{g5}
\b B^{(0)}_\text{nucl}(\b r, t) = 2\int \rho \sqrt{R_A^2-(b'')^2}\b B_a^{(0)}(\b b - \b b''-\b s/2,z- vt, t)d^2 b''\,,
\gal
where only the pulse contribution \eq{a51} is taken into account, whereas the wake contribution \eq{a70} is neglected. Since $\b A^{(0)}_\text{nucl}$ is directed along the $z$-axis, the corresponding magnetic field $\b B^{(0)}_\text{nucl}$ is circularly polarized in the $\unit \phi$ direction with respect to the nuclear center $O_1$ (or $O_2$). It is related to the radial $\unit b$ and the polar $\unit \varphi$ unit vectors of the cylindrical coordinate system defined with respect to the ``lab'' reference frame shown in \fig{geom-2} as
\ball{b7}
\unit \phi = \unit b\sin (\phi-\zeta)+ \unit \varphi \cos(\phi- \zeta)\,,
\gal
where $\zeta$ given by
\ball{g9}
\cot \zeta = \frac{b\cos\phi-s/2}{b\sin\phi}
\gal
is the angle between the vector pointing from $O_1$ to the observation point and the $x$-axis.
The correction \eq{g4} due to the plasma expansion can be written down using \eq{g5} as
\ball{g11}
\b A^{(1)}_\text{nucl}(\b r, t) = \sigma \int _\tau ^{t+}dt'\int d^3r' G_a(\b r, t|\b r', t')\b u(\b r', t')\times \b B^{(0)}_\text{nucl}(\b r', t')\,.
\gal
In view of \eq{b19}, this equation indicates that the longitudinal expansion of plasma induces the transverse $\unit \varphi$ and $\b b$ components of the vector potential, while the transverse expansion induces a small $z$-correction to the vector potential. Moreover, according to \eq{b7}, $A^{(1)}_\varphi/A^{(1)}_b = -\tan(\phi-\zeta)$.
\begin{figure}[ht]
\begin{tabular}{cc}
\includegraphics[height=5cm]{A0.pdf} &
\includegraphics[height=5cm]{A1overA0.pdf}
\end{tabular}
\caption{The vector potential $\b A= \b A^{(0)}+ \b A^{(1)}$ created at a representative point $z=0$, $b=1$~fm, $\phi=\pi/6$ (see \fig{geom-2}) in QGP by a remnant of the gold ion moving with the boost-factor $\gamma=100$ ($\sqrt{s}=0.2$ TeV) and impact parameter $|\b s|=3$~fm. Left panel: vector potential $\b A^{(0)}$ in the non-expanding plasma. Right panel: the relative contribution of the plasma expansion. The plasma emerges at $\tau = 0.2$~fm/$c$.}
\label{Numerics}
\end{figure}
In the left panel of \fig{Numerics} we show the time-dependence of the vector potential in the stationary plasma $\b A^{(0)}$ at a representative point indicated in the caption. This calculation agrees with the previous results \cite{Tuchin:2013apa}. It is seen that the magnetic field appears at $t=\tau=0.2$~fm/$c$ because we assumed that QGP emerges at that time. It is important to mention that in this calculation we do not consider the contributions from the fields that existed at $t<\tau$. They are given by Eqs.~\eq{f5} and \eq{f6} and are not affected by the plasma flow, even though they give a significant contribution to $\b A^{(0)}$ as shown in \cite{Tuchin:2015oka}.
In the right panel of \fig{Numerics} we show the time-dependence of the ratio $ A^{(1)}/A^{(0)}$ at a representative point inside QGP, which illustrates the relative significance of the plasma expansion in the magnetic field calculations. The main observation is that the relative contribution of the plasma expansion is below 10\%. With this accuracy, the plasma expansion effect on the magnetic field can be safely neglected.
\begin{figure}[ht]
\includegraphics[height=5cm]{A1all.pdf}
\caption{Components of the correction $\b A^{(1)}$ to the vector potential (in units of $m_\pi/e$) due to the plasma expansion. Dotted line: $A_z^{(1)}$; dashed line: $A_\varphi^{(1)}$; solid line: $-A_b^{(1)}$. The geometric and kinematic parameters are the same as in \fig{Numerics}. The cylindrical coordinates are defined with respect to the $z$-axis of \fig{geom-2}, which is the lab frame for heavy-ion collisions. }
\label{Numerics-2}
\end{figure}
\fig{Numerics-2} shows the components of the correction to the vector potential due to the plasma expansion. The vector potential in the stationary plasma always points in the direction of the external charge motion (the $\pm \unit z$-directions), generating the total magnetic field as a superposition of the circularly polarized fields of the individual charges. In contrast, the flow of plasma generates additional components of the vector potential in the transverse plane.
The vector potentials shown in \fig{Numerics} and \fig{Numerics-2} are produced by a relativistic heavy ion in a single event. We assumed that the electric charge distribution in the rest frame is uniform across the nucleus; using a more accurate Woods-Saxon distribution gives only a tiny correction. Many transport models treat a heavy ion as a collection of electric charges of finite radius that are randomly distributed according to a given average charge distribution. This produces large event-by-event fluctuations of the charge positions, which in turn induce large event-by-event fluctuations of the electromagnetic field \cite{Bzdak:2011yy}. However, it was shown in \cite{Zakharov:2017gkb} that the quantum treatment of the nuclear electric charge distribution yields fluctuations which are roughly an order of magnitude smaller than the flow contribution. In view of this observation we neglected the event-by-event fluctuations in this paper.
\section{Summary}\label{sec:s}
We computed the effect of the QGP expansion on the magnetic field created inside the plasma by the external valence charges of the heavy-ion remnants. Our main assumption is that the plasma flow is not affected by the magnetic field and is given by the phenomenological blast-wave model. We treated the effect of the plasma flow as a perturbation of the magnetic field in a stationary plasma. The result shown in \fig{Numerics} indicates that the contribution of the plasma flow to the magnetic field is less than 10\%. Our main conclusion is that there is no urgent need to solve the comprehensive MHD equations in order to describe the QGP dynamics at present energies, unless one aims at a precision better than about 10\%. It is a very good approximation, on the one hand, to study QGP in the background electromagnetic field generated by external sources and, on the other hand, to investigate the dynamics of the magnetic field in the background plasma.
Since in this paper we focused on the contribution of plasma flow to the magnetic field of external charges, we disregarded the magnetic field created by the fields that existed before the plasma emergence. However, in phenomenological applications they certainly have to be taken into account as argued in \cite{Tuchin:2015oka}. Incidentally, we observed that the diffusion approximation used in \cite{Tuchin:2015oka} to analyze the initial conditions is quite reasonable.
In our previous calculations of magnetic field we always tacitly neglected the wake produced by the currents induced in plasma. In \sec{sec:c} we derived the analytic expressions for the pulse and wake fields, given by \eq{a51} and \eq{a70} respectively, and argued that the wake field is indeed negligible in the phenomenologically relevant regime due to the smallness of the electrical conductivity as compared to the inverse QGP lifetime.
Our paper paves the way toward a comprehensive computation of the electromagnetic field with quantum sources, whose importance was demonstrated in \cite{Holliday:2016lbx,Peroutka:2017esw}. The fact that the plasma flow and the wake effects are but small corrections is an enormous simplification of the MHD equations. Computing such a field with the appropriate initial conditions is the subject of our forthcoming paper.
\acknowledgments
This work was supported in part by the U.S. Department of Energy under Grant No.\ DE-FG02-87ER40371.
\section{Introduction}
Time series analysis is widely used in a broad range of real-world applications, such as the forecasting of meteorological factors for weather prediction \citep{wu2021autoformer}, the imputation of missing data for data mining \citep{Friedman1962TheIO}, the anomaly detection of monitoring data for industrial maintenance \citep{xu2021anomaly} and the classification of trajectories for action recognition \citep{Franceschi2019UnsupervisedSR}. Because of its immense practical value, time series analysis has received great interest \citep{Lim2021TimeseriesFW}.
Different from other types of sequential data, such as language or video, time series is recorded continuously and each time point saves only a few scalar values. Since a single time point usually cannot provide sufficient semantic information for analysis, many works focus on the temporal variation, which is more informative and can reflect the inherent properties of time series, such as continuity, periodicity and trend. However, the variations of real-world time series always involve intricate temporal patterns, where multiple variations (e.g.\ rising, falling and fluctuating) mix and overlap with each other, making the temporal variation modeling extremely challenging.
Especially in the deep learning community, benefiting from the powerful non-linear modeling capacity of deep models, many works have been proposed to capture the complex temporal variations in real-world time series.
One category of methods adopts recurrent neural networks (RNN) to model the successive time points based on the Markov assumption \citep{Hochreiter1997LongSM,2018Modeling,thoc20}.
However, these methods usually fail in capturing the long-term dependencies, and their efficiency suffers from the sequential computation paradigm.
Another category of methods utilizes the convolutional neural network along the temporal dimension (TCN) to extract the variation information \citep{Franceschi2019UnsupervisedSR,He2019TemporalCN}. Also, because of the locality property of the one-dimensional convolution kernels, they can only model the variations among adjacent time points, thereby still failing to capture long-term dependencies.
Recently, Transformers with the attention mechanism have been widely used in sequential modeling \citep{NEURIPS2020_1457c0d6,dosovitskiy2021an,liu2021Swin}. In time series analysis, many Transformer-based models adopt the attention mechanism or its variants to capture the pair-wise temporal dependencies among time points \citep{2019Enhancing,kitaev2020reformer,haoyietal-informer-2021,zhou2022fedformer}. But it is hard for the attention mechanism to find reliable dependencies directly from scattered time points, since the temporal dependencies can be deeply obscured in intricate temporal patterns \citep{wu2021autoformer}.
In this paper, to tackle the intricate temporal variations, we analyze the time series from a new dimension of multi-periodicity. Firstly, we observe that real-world time series usually present multi-periodicity, such as daily and yearly variations for weather observations, or weekly and quarterly variations for electricity consumption. These multiple periods overlap and interact with each other, making the variation modeling intractable. Secondly, for each period, we find that the variation of each time point is not only affected by the temporal pattern of its adjacent area but is also highly related to the variations of its adjacent periods. For clearness, we name these two types of temporal variations \emph{intraperiod-variation} and \emph{interperiod-variation} respectively. The former indicates short-term temporal patterns within a period. The latter can reflect long-term trends of consecutive different periods. Note that for time series without clear periodicity, the variations are dominated by the intraperiod-variation, which is equivalent to the case of an infinite period length.
\begin{figure*}[t]
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{pic/intro.pdf}}
\vspace{-5pt}
\caption{Multi-periodicity and temporal 2D-variation of time series. Each period involves the \textcolor{red}{intraperiod-variation} and \textcolor{blue}{interperiod-variation}. We transform the original 1D time series into a set of 2D tensors based on multiple periods, which can unify the intraperiod- and interperiod-variations.}
\label{fig:intro}
\end{center}
\vspace{-20pt}
\end{figure*}
Since different periods lead to different intraperiod- and interperiod-variations, the multi-periodicity naturally derives a modular architecture for temporal variation modeling, where we can capture the variations derived by a certain period in one module. Besides, this design disentangles the intricate temporal patterns, benefiting the temporal variation modeling. However, it is notable that a 1D time series can hardly present the two different types of variations explicitly and simultaneously. To tackle this obstacle, we extend the analysis of temporal variations into the 2D space. Concretely, as shown in Figure \ref{fig:intro}, we can reshape the 1D time series into a 2D tensor, where each column contains the time points within a period and each row involves the time points at the same phase among different periods.
Thus, by transforming 1D time series into a set of 2D tensors, we can break the bottleneck of representation capability in the original 1D space and successfully unify the intraperiod- and interperiod-variations in 2D space, obtaining the \emph{temporal 2D-variations}.
Technically, based on the above motivations, we go beyond previous backbones and propose \emph{TimesNet} as a new task-general model for time series analysis. Empowered by \emph{TimesBlock}, TimesNet can discover the multi-periodicity of time series and capture the corresponding temporal variations in a modular architecture. Concretely, TimesBlock can adaptively transform the 1D time series into a set of 2D tensors based on learned periods and further capture intraperiod- and interperiod-variations in the 2D space by a parameter-efficient inception block. Experimentally, TimesNet achieves consistent state-of-the-art performance in five mainstream analysis tasks, including short- and long-term forecasting, imputation, classification and anomaly detection.
Our contributions are threefold:
\vspace{-5pt}
\begin{itemize}
\item Motivated by multi-periodicity and the complex interactions within and between periods, we find a modular way for temporal variation modeling. By transforming the 1D time series into 2D space, we can present the intraperiod- and interperiod-variations simultaneously.
\item We propose the TimesNet with TimesBlock to discover multiple periods and capture temporal 2D-variations from the transformed 2D tensors by a parameter-efficient inception block.
\item As a task-general foundation model, TimesNet achieves the consistent state-of-the-art in five mainstream time series analysis tasks. Detailed and insightful visualizations are included.
\end{itemize}
\section{Related Work}
As a key problem of time series analysis, temporal variation modeling has been well explored.
Many classical methods assume that the temporal variations follow the pre-defined patterns, such as ARIMA \citep{Anderson1976TimeSeries2E}, Holt-Winter \citep{hyndman2018forecasting} and Prophet \citep{Taylor2017ForecastingAS}. However, the variations of real-world time series are usually too complex to be covered by pre-defined patterns, limiting the practical applicability of these classical methods.
In recent years, many deep models have been proposed for temporal modeling, such as MLP, TCN and RNN-based models \citep{Hochreiter1997LongSM,2018Modeling,Franceschi2019UnsupervisedSR}. Technically, MLP-based methods \citep{oreshkin2019n,challu2022n,Zeng2022AreTE,Zhang2022LessIM} adopt the MLP along the temporal dimension and encode the temporal dependencies into the fixed parameters of MLP layers. The TCN-based \citeyearpar{Franceschi2019UnsupervisedSR} methods capture the temporal variations by convolutional kernels that slide along the temporal dimension. The RNN-based methods \citep{Hochreiter1997LongSM,2018Modeling,gu2022efficiently} utilize the recurrent structure and capture temporal variations implicitly by state transitions among time steps. Note that none of these methods consider the temporal 2D-variations derived from periodicity, which are proposed in this paper.
Besides, Transformers have shown great performance in time series forecasting \citep{haoyietal-informer-2021,liu2021pyraformer,wu2021autoformer,zhou2022fedformer}. With the attention mechanism, they can discover the temporal dependencies among time points. Especially, \citeauthor{wu2021autoformer} present Autoformer with the Auto-Correlation mechanism to capture the series-wise temporal dependencies based on the learned periods. In addition, to tackle the intricate temporal patterns, Autoformer also presents a deep decomposition architecture to obtain the seasonal and trend parts of the input series. Afterward, FEDformer \citep{zhou2022fedformer} employs the mixture-of-expert design to enhance the seasonal-trend decomposition and presents a sparse attention within the frequency domain. Unlike previous methods, we unravel the intricate temporal patterns by exploring the multi-periodicity of time series and, for the first time, capture the temporal 2D-variations in 2D space with well-acknowledged computer vision backbones.
It is also notable that, different from previous methods, we no longer restrict ourselves to a specific analysis task and attempt to propose a task-general foundation model for time series analysis.
\section{TimesNet}
As aforementioned, based on the multi-periodicity of time series, we propose the \emph{TimesNet} with a modular architecture to capture the temporal patterns derived from different periods. For each period, to capture the corresponding intraperiod- and interperiod-variations, we design a \emph{TimesBlock} within the TimesNet, which can transform the 1D time series into 2D space and simultaneously model the two types of variations by a parameter-efficient inception block.
\subsection{Transform 1D-variations into 2D-variations}
As shown in Figure \ref{fig:intro}, each time point involves two types of temporal variations simultaneously: with its adjacent area and with the same phase among different periods, namely \emph{intraperiod-} and \emph{interperiod-variations}. However, the original 1D structure of time series can only present the variations among adjacent time points. To tackle this limitation, we explore the two-dimensional structure for temporal variations, which can explicitly present variations within and between periods, thereby offering greater representation capability and benefiting the subsequent representation learning.
Concretely, for the length-$T$ time series with $C$ dimensions, the original 1D organization is $\mathbf{X}_{\text{1D}}\in\mathbb{R}^{T\times C}$. To represent the interperiod-variation, we need to discover the periods first. Technically, we analyze the time series in the frequency domain by the Fast Fourier Transform (FFT) as follows:
\begin{equation}\label{equ:fft_for_period}
\begin{split}
\mathbf{A} &= \operatorname{Avg}\bigg(\operatorname{Amp}\big(\operatorname{FFT}(\mathbf{X}_{\text{1D}})\big)\bigg)\\
f_{1},\cdots,f_{k} & = \mathop{\arg\mathrm{Topk}}_{f_{\ast}\in\{1,\cdots,[\frac{T}{2}]\}}\left(\mathbf{A}\right) \\
p_{1},\cdots,p_{k} & = \left\lceil\frac{T}{f_{1}}\right\rceil,\cdots,\left\lceil\frac{T}{f_{k}}\right\rceil.
\end{split}
\end{equation}
Here, $\operatorname{FFT}(\cdot)$ and $\operatorname{Amp}(\cdot)$ denote the FFT and the calculation of amplitude values. $\mathbf{A}\in\mathbb{R}^{T}$ represents the calculated amplitude of each frequency, which is averaged over the $C$ dimensions by $\operatorname{Avg}(\cdot)$. Note that the $j$-th value $\mathbf{A}_{j}$ represents the intensity of the frequency-$j$ periodic basis function, corresponding to the period length $\lceil\frac{T}{j}\rceil$. Considering the sparsity of the frequency domain and to avoid the noise brought by meaningless high frequencies \citep{Chatfield1981TheAO,zhou2022fedformer}, we only select the top-$k$ amplitude values and obtain the most significant frequencies $\{f_{1},\cdots,f_{k}\}$ with the unnormalized amplitudes $\{\mathbf{A}_{f_{1}},\cdots,\mathbf{A}_{f_{k}}\}$, where $k$ is a hyper-parameter. These selected frequencies also correspond to $k$ period lengths $\{p_{1},\cdots,p_{k}\}$. Due to the conjugacy of the frequency domain, we only consider the frequencies within $\{1,\cdots,[\frac{T}{2}]\}$. We summarize Equation \ref{equ:fft_for_period} as follows:
\begin{equation}\label{equ:period}
\begin{split}
\mathbf{A},\{f_{1},\cdots,f_{k}\},\{p_{1},\cdots,p_{k}\} & = \operatorname{Period(\mathbf{X}_{1D})}.
\end{split}
\end{equation}
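For concreteness, the period-detection step of Equations \ref{equ:fft_for_period} and \ref{equ:period} can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the exact released implementation; the helper name \texttt{fft\_for\_period}, the zeroing of the zero-frequency term and the floor division are illustrative choices here.
\begin{verbatim}
import torch

def fft_for_period(x, k=2):
    # x: [B, T, C] -- a batch of length-T series with C channels
    xf = torch.fft.rfft(x, dim=1)         # one-sided spectrum of length T//2 + 1
    amp = xf.abs().mean(0).mean(-1)       # amplitude averaged over batch and channels
    amp[0] = 0.0                          # discard the zero-frequency (mean) term
    _, top_freq = torch.topk(amp, k)      # k most significant frequencies f_1..f_k
    periods = x.shape[1] // top_freq      # period lengths p_i ~ T / f_i; the padding
                                          # in the reshape step absorbs the rounding
    # per-sample amplitudes at the selected frequencies, reused as fusion weights
    return periods, xf.abs().mean(-1)[:, top_freq]
\end{verbatim}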
Based on the selected frequencies $\{f_{1},\cdots,f_{k}\}$ and corresponding period lengths $\{p_{1},\cdots,p_{k}\}$, we can reshape the 1D time series $\mathbf{X}_{\text{1D}}\in\mathbb{R}^{T\times C}$ into multiple 2D tensors by the following equations:
\begin{equation}\label{equ:reshape}
\begin{split}
\mathbf{X}^{i}_{\text{2D}} &=\operatorname{Reshape}_{p_{i},f_{i}}\left(\operatorname{Padding}(\mathbf{X}_{\text{1D}})\right),\ i\in\{1,\cdots, k\},\\
\end{split}
\end{equation}
where $\operatorname{Padding}(\cdot)$ extends the time series with zeros along the temporal dimension to make it compatible with $\operatorname{Reshape}_{p_{i},f_{i}}(\cdot)$, where $p_{i}$ and $f_{i}$ represent the number of rows and columns of the transformed 2D tensors respectively. Note that $\mathbf{X}^{i}_{\text{2D}}\in\mathbb{R}^{(p_{i}\times f_{i})\times C}$ denotes the $i$-th reshaped time series based on frequency-$f_{i}$, whose columns and rows represent the intraperiod-variation and interperiod-variation under the corresponding period length $p_{i}$ respectively. Eventually, as shown in Figure \ref{fig:2d_structure}, based on the selected frequencies and estimated periods, we obtain a set of 2D tensors $\{\mathbf{X}^{1}_{\text{2D}},\cdots,\mathbf{X}^{k}_{\text{2D}}\}$, which indicate $k$ different temporal 2D-variations derived from different periods.
It is also notable that this transformation brings two types of localities to the transformed 2D tensors, namely localities among adjacent time points (columns, intraperiod-variation) and among adjacent periods (rows, interperiod-variation). Thus, the temporal 2D-variations can be conveniently processed by 2D kernels.
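With the periods in hand, the transformation of Equation \ref{equ:reshape} reduces to zero-padding followed by a reshape. Below is a minimal sketch; the helper is illustrative and is reused in the later sketches, and the axis orientation of the output is an equivalent transposed convention of Figure \ref{fig:intro}.
\begin{verbatim}
import torch.nn.functional as F

def reshape_1d_to_2d(x, period):
    # x: [B, T, C] -> [B, C, f, p]: the last axis runs within one period
    # (intraperiod-variation), the second-to-last across periods
    # (interperiod-variation)
    B, T, C = x.shape
    if T % period != 0:                   # zero-pad to a multiple of the period
        pad_len = (T // period + 1) * period - T
        x = F.pad(x, (0, 0, 0, pad_len))  # pad along the temporal dimension
    return x.reshape(B, -1, period, C).permute(0, 3, 1, 2).contiguous()
\end{verbatim}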
\begin{figure*}[t]
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{pic/2d_structure.pdf}}
\vspace{-8pt}
\caption{2D structure in time series. By discovering the periodicity, we can transform the original 1D time series into structured 2D tensors, which can be processed by 2D kernels conveniently.}
\label{fig:2d_structure}
\end{center}
\vspace{-25pt}
\end{figure*}
\vspace{-2pt}
\subsection{TimesBlock}
As shown in Figure \ref{fig:model}, we organize the TimesBlock in a residual way \citep{He2016DeepRL}. Concretely, for the length-$T$ 1D input time series $\mathbf{X}_{\text{1D}}\in\mathbb{R}^{T\times C}$, we project the raw inputs into the deep features $\mathbf{X}_{\text{1D}}^{0}\in\mathbb{R}^{T\times d_{\text{model}}}$ by the embedding layer $\mathbf{X}_{\text{1D}}^{0}=\operatorname{Embed}(\mathbf{X}_{\text{1D}})$ at the very beginning. For the $l$-th layer of TimesNet, the input is $\mathbf{X}_{\text{1D}}^{l-1}\in\mathbb{R}^{T\times d_{\text{model}}}$ and the process can be formalized as:
\begin{equation}\label{equ:overall}
\begin{split}
{\mathbf{X}}_{\text{1D}}^{l}=\operatorname{TimesBlock}\left(\mathbf{X}_{\text{1D}}^{l-1}\right)+{\mathbf{X}_{\text{1D}}^{l-1}}.\\
\end{split}
\end{equation}
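Schematically, the backbone is thus an embedding followed by a stack of residual blocks. The skeleton below is an illustrative sketch: \texttt{block\_factory} is a hypothetical argument producing any module that maps length-$T$, $d_{\text{model}}$-channel features to themselves, e.g.\ a TimesBlock implementing the two stages described next.
\begin{verbatim}
import torch.nn as nn

class TimesNetSkeleton(nn.Module):
    def __init__(self, c_in, d_model, n_layers, block_factory):
        super().__init__()
        self.embed = nn.Linear(c_in, d_model)  # stand-in for the embedding layer
        self.blocks = nn.ModuleList(
            [block_factory() for _ in range(n_layers)])

    def forward(self, x):                      # x: [B, T, c_in]
        h = self.embed(x)                      # deep features X^0
        for block in self.blocks:
            h = block(h) + h                   # residual connection around each block
        return h
\end{verbatim}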
As shown in Figure \ref{fig:model}, for the $l$-th TimesBlock, the whole process involves two successive parts: capturing temporal 2D-variations and adaptively aggregating representations from different periods.
\vspace{-5pt}
\paragraph{Capturing temporal 2D-variations} Similar to Equation~\ref{equ:fft_for_period}, we can estimate the period lengths for the deep features ${\mathbf{X}}_{\text{1D}}^{l-1}$ by $\operatorname{Period}(\cdot)$. Based on the estimated period lengths, we can transform the 1D time series into 2D space and obtain a set of 2D tensors, from which we can conveniently obtain informative representations by a parameter-efficient inception block. The process is formalized as follows:
\begin{equation}\label{equ:TimesBlock}
\begin{split}
\mathbf{A}^{l-1},\{f_{1},\cdots,f_{k}\},\{p_{1},\cdots,p_{k}\}&=\operatorname{Period}\left({\mathbf{X}}_{\text{1D}}^{l-1}\right),\\
\mathbf{X}^{l,i}_{\text{2D}} &=\operatorname{Reshape}_{p_{i},f_{i}}\left(\operatorname{Padding}(\mathbf{X}_{\text{1D}}^{l-1})\right),\ i\in\{1,\cdots, k\}\\
\widehat{\mathbf{X}}^{l,i}_{\text{2D}} &=\operatorname{Inception}\left(\mathbf{X}^{l,i}_{\text{2D}}\right),\ i\in\{1,\cdots, k\}\\
\widehat{\mathbf{X}}^{l,i}_{\text{1D}} &=\operatorname{Trunc}\left(\operatorname{Reshape}_{1,(p_{i}\times f_{i})}\left(\widehat{\mathbf{X}}^{l,i}_{\text{2D}}\right)\right),\ i\in\{1,\cdots, k\},\\
\end{split}
\end{equation}
where $\mathbf{X}^{l,i}_{\text{2D}}\in\mathbb{R}^{(p_{i}\times f_{i})\times d_{\text{model}}}$ is the $i$-th transformed 2D tensor. After the transformation, we process the 2D tensor by a parameter-efficient inception block \citep{Szegedy2015GoingDW}, denoted $\operatorname{Inception(\cdot)}$, which involves multi-scale 2D kernels and is one of the most well-acknowledged vision backbones. Then we transform the learned 2D representations $\widehat{\mathbf{X}}^{l,i}_{\text{2D}}$ back to 1D space $\widehat{\mathbf{X}}^{l,i}_{\text{1D}}\in\mathbb{R}^{T\times d_{\text{model}}}$ for aggregation, where we employ $\operatorname{Trunc}(\cdot)$ to truncate the padded series of length $(p_{i}\times f_{i})$ to the original length $T$.
Note that benefiting from the transformation of 1D time series, the 2D kernels in the inception block can aggregate the multi-scale intraperiod-variation (columns) and interperiod-variation (rows) simultaneously, covering both adjacent time points and adjacent periods. Besides, we adopt a shared inception block for different reshaped 2D tensors $\{\mathbf{X}^{l,1}_{\text{2D}},\cdots,\mathbf{X}^{l,k}_{\text{2D}}\}$ to improve parameter efficiency, which can make the model size invariant to the selection of hyper-parameter $k$.
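This stage can be sketched by reusing the helpers above; \texttt{inception\_block} stands for any 2D backbone with matching input and output channels, and the function below is an illustration rather than the exact released implementation.
\begin{verbatim}
def capture_2d_variations(x, periods, inception_block):
    # x: [B, T, d_model]; periods: the k period lengths from fft_for_period
    B, T, C = x.shape
    outs = []
    for p in periods:
        x2d = reshape_1d_to_2d(x, int(p))   # 1D -> 2D tensor of shape [B, C, f, p]
        y2d = inception_block(x2d)          # shared 2D backbone for all periods
        y1d = y2d.permute(0, 2, 3, 1).reshape(B, -1, C)
        outs.append(y1d[:, :T, :])          # truncate the padding back to length T
    return outs                             # k candidate 1D representations
\end{verbatim}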
\begin{figure*}[t]
\vspace{-10pt}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{pic/model.pdf}}
\vspace{-10pt}
\caption{Overall architecture of TimesNet. TimesNet is stacked by TimesBlocks in a residual way. TimesBlocks can capture various temporal 2D-variations from $k$ different reshaped tensors by a parameter-efficient inception block in 2D space and fuse them based on normalized amplitude values.}
\label{fig:model}
\end{center}
\vspace{-20pt}
\end{figure*}
\paragraph{Adaptive aggregation} Finally, we need to fuse $k$ different 1D-representations $\{\widehat{\mathbf{X}}^{l,1}_{\text{1D}},\cdots,\widehat{\mathbf{X}}^{l,k}_{\text{1D}}\}$ for the next layer. Inspired by Auto-Correlation \citep{wu2021autoformer}, the amplitudes $\mathbf{A}$ can reflect the relative importance of selected frequencies and periods, thereby corresponding to the importance of each transformed 2D tensor. Thus, we aggregate the 1D-representations based on the amplitudes:
\begin{equation}\label{equ:aggregation}
\begin{split}
\widehat{\mathbf{A}}^{l-1}_{f_{1}},\cdots,\widehat{\mathbf{A}}^{l-1}_{f_{k}} & = \mathrm{Softmax}\left(\mathbf{A}^{l-1}_{f_{1}},\cdots,\mathbf{A}^{l-1}_{f_{k}}\right)\\
{\mathbf{X}}_{\text{1D}}^{l} & =\sum_{i=1}^{k}\widehat{\mathbf{A}}_{f_{i}}^{l-1}\times \widehat{\mathbf{X}}^{l,i}_{\text{1D}}.\\
\end{split}
\end{equation}
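In code, this aggregation is a softmax over the selected amplitudes followed by a weighted sum; again an illustrative sketch consistent with the helpers above.
\begin{verbatim}
import torch

def adaptive_aggregation(outs, amps):
    # outs: list of k tensors [B, T, C]; amps: [B, k] from fft_for_period
    weights = torch.softmax(amps, dim=1)       # normalized importance per sample
    stacked = torch.stack(outs, dim=-1)        # [B, T, C, k]
    return (stacked * weights[:, None, None, :]).sum(-1)
\end{verbatim}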
Since the variations within and between periods are already involved in multiple highly-structured 2D tensors, TimesBlock can fully capture multi-scale temporal 2D-variations simultaneously. Thus, TimesNet achieves more effective representation learning than operating directly on the 1D time series.
\paragraph{Generality in 2D vision backbones} Benefiting from the transformation of 1D time series into temporal 2D-variations, we can choose various computer vision backbones to replace the inception block for representation learning, such as the widely-used ResNet \citep{He2016DeepRL} and ResNeXt \citep{Xie2017AggregatedRT}, advanced ConvNeXt \citep{liu2022convnet} and attention-based models \citep{liu2021Swin}. Thus, our temporal 2D-variation design also bridges the 1D time series to the booming 2D vision backbones, making the time series analysis take advantage of the development of computer vision community. In general, more powerful 2D backbones for representation learning will bring better performance. Considering both performance and efficiency (Figure \ref{fig:all_results} right), we conduct the main experiments based on the parameter-efficient inception block as shown in Equation \ref{equ:TimesBlock}.
\section{Experiments}
\vspace{-2pt}
To verify the generality of TimesNet, we extensively experiment on five mainstream analysis tasks, including short- and long-term forecasting, imputation, classification and anomaly detection.
\vspace{-5pt}
\paragraph{Implementation} Table \ref{tab:benchmarks} is a summary of benchmarks. More details about the dataset, experiment implementation and model configuration can be found in Appendix \ref{sec:detail}.
\begin{table}[hbp]
\vspace{-15pt}
\caption{Summary of experiment benchmarks.}\label{tab:benchmarks}
\vskip 0.02in
\centering
\begin{threeparttable}
\begin{small}
\renewcommand{\multirowsetup}{\centering}
\setlength{\tabcolsep}{6pt}
\begin{tabular}{c|l|c|c}
\toprule
\scalebox{0.95}{Tasks} & \scalebox{0.95}{Benchmarks} & \scalebox{0.95}{Metrics} & \scalebox{0.95}{Series Length} \\
\toprule
\scalebox{0.95}{\multirow{3}{*}{Forecasting}} & \scalebox{0.95}{\textbf{Long-term}: ETT (4 subsets), Electricity,} & \scalebox{0.95}{\multirow{2}{*}{MSE, MAE}} & \scalebox{0.95}{96$\sim$720} \\
& \scalebox{0.95}{Traffic, Weather, Exchange, ILI} & & \scalebox{0.95}{(ILI: 24$\sim$60)}\\
\cmidrule{2-4}
& \scalebox{0.95}{\textbf{Short-term}: M4 (4 subsets)} & \scalebox{0.95}{SMAPE, MASE, OWA} & \scalebox{0.95}{6$\sim$48} \\
\midrule
\scalebox{0.95}{Imputation} & \scalebox{0.95}{ETT (4 subsets), Electricity, Weather} & \scalebox{0.95}{MSE, MAE} & \scalebox{0.95}{96} \\
\midrule
\scalebox{0.95}{Classification} & \scalebox{0.95}{UEA (10 subsets)} & \scalebox{0.95}{Accuracy} & \scalebox{0.95}{29$\sim$1751} \\
\midrule
\scalebox{0.95}{Anomaly Detection} & \scalebox{0.95}{SMD, MSL, SMAP, SWaT, PSM} & \scalebox{0.95}{Precision, Recall, F1-Score} & \scalebox{0.95}{100} \\
\bottomrule
\end{tabular}
\end{small}
\end{threeparttable}
\vspace{-18pt}
\end{table}
\paragraph{Baselines} Since we attempt to propose a foundation model for time series analysis, we extensively compare the well-acknowledged and advanced models in all five tasks, including the RNN-based models: LSTM \citeyearpar{Hochreiter1997LongSM}, LSTNet \citeyearpar{2018Modeling} and LSSL \citeyearpar{gu2022efficiently}; CNN-based Model: TCN \citeyearpar{Franceschi2019UnsupervisedSR}; MLP-based models: LightTS \citeyearpar{Zhang2022LessIM} and DLinear \citeyearpar{Zeng2022AreTE}; Transformer-based models: Reformer \citeyearpar{kitaev2020reformer}, Informer \citeyearpar{haoyietal-informer-2021}, Pyraformer \citeyearpar{liu2021pyraformer}, Autoformer \citeyearpar{wu2021autoformer}, FEDformer \citeyearpar{zhou2022fedformer} and Non-stationary Transformer \citeyearpar{Liu2022NonstationaryTR}. Besides, we also compare the state-of-the-art models for each specific task, such as N-HiTS \citeyearpar{challu2022n} and N-BEATS \citeyearpar{oreshkin2019n} for short-term forecasting, Anomaly Transformer \citeyearpar{xu2021anomaly} for anomaly detection, and Rocket \citeyearpar{Dempster2020ROCKETEF} and Flowformer \citeyearpar{wu2022flowformer} for classification, among others. Overall, more than 15 baselines are included for a comprehensive comparison.
\begin{figure*}[h]
\vspace{-5pt}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{pic/radar_new.pdf}}
\vspace{-12pt}
\caption{ Model performance comparison (left) and generality in different vision backbones (right).}
\label{fig:all_results}
\end{center}
\vspace{-20pt}
\end{figure*}
\subsection{Main Results}
As a foundation model, TimesNet achieves consistent state-of-the-art performance on five mainstream analysis tasks compared with other customized models (Figure \ref{fig:all_results} left). The full efficiency comparison is provided in Table \ref{tab:all_efficiency} of Appendix. Besides, by replacing the inception block with more powerful vision backbones, we can further promote the performance of TimesNet (Figure \ref{fig:all_results} right), confirming that our design can make time series analysis take advantage of booming vision backbones.
\vspace{-5pt}
\subsection{Short- and Long-term Forecasting}
\paragraph{Setups} Time series forecasting is essential in weather forecasting, traffic and energy consumption planning. To fully evaluate the model performance in forecasting, we adopt two types of benchmarks, including long-term and short-term forecasting. Especially for the long-term setting, we follow the benchmarks used in Autoformer \citeyearpar{wu2021autoformer}, including ETT \citep{haoyietal-informer-2021}, Electricity \citep{ecldata}, Traffic \citep{trafficdata}, Weather \citep{weatherdata}, Exchange \citep{2018Modeling} and ILI \citep{ilidata}, covering five real-world applications. For the
short-term dataset, we adopt the M4 \citep{M4team2018dataset}, which contains the yearly, quarterly and monthly collected univariate marketing data. Note that each dataset in the long-term setting only contains one continuous time series, where we obtain samples by sliding window, while M4 involves 100,000 different time series collected in different frequencies.
\paragraph{Results} TimesNet shows great performance in both long-term and short-term settings (Table \ref{tab:long_term_forecasting_results}--\ref{tab:short_term_forecasting_results}). Concretely, TimesNet achieves state-of-the-art in more than 80\% of cases in long-term forecasting (Table \ref{tab:full_forecasting_results}). For the M4 dataset, since the time series are collected from different sources, the temporal variations can be quite diverse, making forecasting much more challenging. Our model still performs best in this task, surpassing extensive advanced MLP-based and Transformer-based models.
\begin{table}[tbp]
\caption{Long-term forecasting task. The past sequence length is set as 36 for ILI and 96 for the others. All the results are averaged from 4 different prediction lengths, that is $\{24,36,48,60\}$ for ILI and $\{96,192,336,720\}$ for the others. See Table \ref{tab:full_forecasting_results} in Appendix for the full results.}\label{tab:long_term_forecasting_results}
\vskip 0.05in
\centering
\begin{threeparttable}
\begin{small}
\renewcommand{\multirowsetup}{\centering}
\setlength{\tabcolsep}{1.0pt}
\begin{tabular}{c|cc|cc|cc|cc|cc|cc|cc|cc|cc|cc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Models}} &
\multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{\textbf{TimesNet}}}} &
\multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{LightTS}}} &
\multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{DLinear}}} &
\multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{FEDformer}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Stationary}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Autoformer}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Pyraformer}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Informer}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{LogTrans}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Reformer}}} \\
\multicolumn{1}{c}{} & \multicolumn{2}{c}{\scalebox{0.8}{(\textbf{Ours})}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{Zhang2022LessIM}}} &
\multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{Zeng2022AreTE}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{zhou2022fedformer}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{Liu2022NonstationaryTR}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{wu2021autoformer}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{liu2021pyraformer}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{haoyietal-informer-2021}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{2019Enhancing}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{kitaev2020reformer}}} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}\cmidrule(lr){6-7} \cmidrule(lr){8-9}\cmidrule(lr){10-11}\cmidrule(lr){12-13}\cmidrule(lr){14-15}\cmidrule(lr){16-17}\cmidrule(lr){18-19}\cmidrule(lr){20-21}
\multicolumn{1}{c}{Metric} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} \\
\toprule
\scalebox{0.8}{ETTm1} &\boldres{\scalebox{0.8}{0.400}} &\boldres{\scalebox{0.8}{0.406}} &\scalebox{0.8}{0.435} &\scalebox{0.8}{0.437} &\secondres{\scalebox{0.8}{0.403}} &\secondres{\scalebox{0.8}{0.407}} &\scalebox{0.8}{0.448} &\scalebox{0.8}{0.452} &\scalebox{0.8}{0.481} &\scalebox{0.8}{0.456} &\scalebox{0.8}{0.588} &\scalebox{0.8}{0.517} &\scalebox{0.8}{0.691} &\scalebox{0.8}{0.607} &\scalebox{0.8}{0.961} &\scalebox{0.8}{0.734} &\scalebox{0.8}{0.929} &\scalebox{0.8}{0.725} &\scalebox{0.8}{0.799} &\scalebox{0.8}{0.671}\\
\midrule
\scalebox{0.8}{ETTm2} &\boldres{\scalebox{0.8}{0.291}} &\boldres{\scalebox{0.8}{0.333}} &\scalebox{0.8}{0.409} &\scalebox{0.8}{0.436} &\scalebox{0.8}{0.350} &\scalebox{0.8}{0.401} &\secondres{\scalebox{0.8}{0.305}} &\scalebox{0.8}{0.349} &\scalebox{0.8}{0.306} &\secondres{\scalebox{0.8}{0.347}} &\scalebox{0.8}{0.327} &\scalebox{0.8}{0.371} &\scalebox{0.8}{1.498} &\scalebox{0.8}{0.869} &\scalebox{0.8}{1.410} &\scalebox{0.8}{0.810} &\scalebox{0.8}{1.535} &\scalebox{0.8}{0.900} &\scalebox{0.8}{1.479} &\scalebox{0.8}{0.915}\\
\midrule
\scalebox{0.8}{ETTh1} &\scalebox{0.8}{0.458} &\boldres{\scalebox{0.8}{0.450}} &\scalebox{0.8}{0.491} &\scalebox{0.8}{0.479} &\secondres{\scalebox{0.8}{0.456}} &\secondres{\scalebox{0.8}{0.452}} &\boldres{\scalebox{0.8}{0.440}} &\scalebox{0.8}{0.460} &\scalebox{0.8}{0.570} &\scalebox{0.8}{0.537} &\scalebox{0.8}{0.496} &\scalebox{0.8}{0.487} &\scalebox{0.8}{0.827} &\scalebox{0.8}{0.703} &\scalebox{0.8}{1.040} &\scalebox{0.8}{0.795} &\scalebox{0.8}{1.072} &\scalebox{0.8}{0.837} &\scalebox{0.8}{1.029} &\scalebox{0.8}{0.805}\\
\midrule
\scalebox{0.8}{ETTh2} &\boldres{\scalebox{0.8}{0.414}} &\boldres{\scalebox{0.8}{0.427}} &\scalebox{0.8}{0.602} &\scalebox{0.8}{0.543} &\scalebox{0.8}{0.559} &\scalebox{0.8}{0.515} &\scalebox{0.8}{\secondres{0.437}} &\scalebox{0.8}{\secondres{0.449}} &\scalebox{0.8}{0.526} &\scalebox{0.8}{0.516} &\scalebox{0.8}{0.450} &\scalebox{0.8}{0.459} &\scalebox{0.8}{0.826} &\scalebox{0.8}{0.703} &\scalebox{0.8}{4.431} &\scalebox{0.8}{1.729} &\scalebox{0.8}{2.686} &\scalebox{0.8}{1.494} &\scalebox{0.8}{6.736} &\scalebox{0.8}{2.191}\\
\midrule
\scalebox{0.8}{Electricity} &\boldres{\scalebox{0.8}{0.192}} &\boldres{\scalebox{0.8}{0.295}} &\scalebox{0.8}{0.229} &\scalebox{0.8}{0.329} &\scalebox{0.8}{0.212} &\scalebox{0.8}{0.300} &\scalebox{0.8}{0.214} &\scalebox{0.8}{0.327} &\secondres{\scalebox{0.8}{0.193}} &\secondres{\scalebox{0.8}{0.296}} &\scalebox{0.8}{0.227} &\scalebox{0.8}{0.338} &\scalebox{0.8}{0.379} &\scalebox{0.8}{0.445} &\scalebox{0.8}{0.311} &\scalebox{0.8}{0.397} &\scalebox{0.8}{0.272} &\scalebox{0.8}{0.370} &\scalebox{0.8}{0.338} &\scalebox{0.8}{0.422}\\
\midrule
\scalebox{0.8}{Traffic} &\secondres{\scalebox{0.8}{0.620}} &\boldres{\scalebox{0.8}{0.336}} &\scalebox{0.8}{0.622} &\scalebox{0.8}{0.392} &\scalebox{0.8}{0.625} &\scalebox{0.8}{0.383} &\boldres{\scalebox{0.8}{0.610}} &\scalebox{0.8}{0.376} &\scalebox{0.8}{0.624} &\secondres{\scalebox{0.8}{0.340}} &\scalebox{0.8}{0.628} &\scalebox{0.8}{0.379} &\scalebox{0.8}{0.878} &\scalebox{0.8}{0.469} &\scalebox{0.8}{0.764} &\scalebox{0.8}{0.416} &\scalebox{0.8}{0.705} &\scalebox{0.8}{0.395} &\scalebox{0.8}{0.741} &\scalebox{0.8}{0.422}\\
\midrule
\scalebox{0.8}{Weather} &\boldres{\scalebox{0.8}{0.259}} &\boldres{\scalebox{0.8}{0.287}} &\secondres{\scalebox{0.8}{0.261}} &\secondres{\scalebox{0.8}{0.312}} &\scalebox{0.8}{0.265} &\scalebox{0.8}{0.317} &\scalebox{0.8}{0.309} &\scalebox{0.8}{0.360} &\scalebox{0.8}{0.288} &\scalebox{0.8}{0.314} &\scalebox{0.8}{0.338} &\scalebox{0.8}{0.382} &\scalebox{0.8}{0.946} &\scalebox{0.8}{0.717} &\scalebox{0.8}{0.634} &\scalebox{0.8}{0.548} &\scalebox{0.8}{0.696} &\scalebox{0.8}{0.602} &\scalebox{0.8}{0.803} &\scalebox{0.8}{0.656}\\
\midrule
\scalebox{0.8}{Exchange} &\scalebox{0.8}{0.416} &\secondres{\scalebox{0.8}{0.443}} &\secondres{\scalebox{0.8}{0.385}} &\scalebox{0.8}{0.447} &\boldres{\scalebox{0.8}{0.354}} &\boldres{\scalebox{0.8}{0.414}} &\scalebox{0.8}{0.519} &\scalebox{0.8}{0.500} &\scalebox{0.8}{0.461} &\scalebox{0.8}{0.454} &\scalebox{0.8}{0.613} &\scalebox{0.8}{0.539} &\scalebox{0.8}{1.913} &\scalebox{0.8}{1.159} &\scalebox{0.8}{1.550} &\scalebox{0.8}{0.998} &\scalebox{0.8}{1.402} &\scalebox{0.8}{0.968} &\scalebox{0.8}{1.280} &\scalebox{0.8}{0.932}\\
\midrule
\scalebox{0.8}{ILI} &\secondres{\scalebox{0.8}{2.139}} &\secondres{\scalebox{0.8}{0.931}} &\scalebox{0.8}{7.382} &\scalebox{0.8}{2.003} &\scalebox{0.8}{2.616} &\scalebox{0.8}{1.090} &\scalebox{0.8}{2.847} &\scalebox{0.8}{1.144} &\boldres{\scalebox{0.8}{2.077}} &\boldres{\scalebox{0.8}{0.914}} &\scalebox{0.8}{3.006} &\scalebox{0.8}{1.161} &\scalebox{0.8}{7.635} &\scalebox{0.8}{2.050} &\scalebox{0.8}{5.137} &\scalebox{0.8}{1.544} &\scalebox{0.8}{4.839} &\scalebox{0.8}{1.485} &\scalebox{0.8}{4.724} &\scalebox{0.8}{1.445}\\
\bottomrule
\end{tabular}
\end{small}
\end{threeparttable}
\vspace{-10pt}
\end{table}
\begin{table}[tbp]
\caption{Short-term forecasting task on M4. The prediction lengths are in $[6,48]$ and results are weighted averaged from several datasets under different sample intervals. See Table \ref{tab:full_forecasting_results_m4} for full results.}\label{tab:short_term_forecasting_results}
\vskip 0.05in
\centering
\begin{threeparttable}
\begin{small}
\renewcommand{\multirowsetup}{\centering}
\setlength{\tabcolsep}{1.2pt}
\begin{tabular}{c|cccccccccccccccccccc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Models}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{\textbf{TimesNet}}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{{N-HiTS}}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{{N-BEATS}}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{LightTS}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{DLinear}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{FEDformer}}} & \multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{Stationary}}} & \multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{Autoformer}}} & \multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{Pyraformer}}} & \multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{Informer}}} & \multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{LogTrans}}} & \multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.8}{Reformer}}} \\
\multicolumn{1}{c}{ } & \multicolumn{1}{c}{\scalebox{0.8}{(\textbf{Ours})}} &
\multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{challu2022n}}} &
\multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{oreshkin2019n}}} &
\multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{Zhang2022LessIM}}} &
\multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{Zeng2022AreTE}}} & \multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{zhou2022fedformer}}} & \multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{Liu2022NonstationaryTR}}} & \multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{wu2021autoformer}}} & \multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{liu2021pyraformer}}} & \multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{haoyietal-informer-2021}}} & \multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{2019Enhancing}}} & \multicolumn{1}{c}{\scalebox{0.8}{\citeyearpar{kitaev2020reformer}}} \\
\toprule
\scalebox{0.8}{SMAPE} &\boldres{\scalebox{0.8}{11.829}} &\scalebox{0.8}{11.927} &\secondres{\scalebox{0.8}{11.851}} &\scalebox{0.8}{13.525} &\scalebox{0.8}{13.639} &\scalebox{0.8}{12.840} &\scalebox{0.8}{12.780} &\scalebox{0.8}{12.909} &\scalebox{0.8}{16.987} &\scalebox{0.8}{14.086} &\scalebox{0.8}{16.018} &\scalebox{0.8}{18.200}\\
\scalebox{0.8}{MASE} &\boldres{\scalebox{0.8}{1.585}} &\scalebox{0.8}{1.613} &\secondres{\scalebox{0.8}{1.599}} &\scalebox{0.8}{2.111} &\scalebox{0.8}{2.095} &\scalebox{0.8}{1.701} &\scalebox{0.8}{1.756} &\scalebox{0.8}{1.771} &\scalebox{0.8}{3.265} &\scalebox{0.8}{2.718} &\scalebox{0.8}{3.010} &\scalebox{0.8}{4.223}\\
\scalebox{0.8}{OWA} &\boldres{\scalebox{0.8}{0.851}} &\scalebox{0.8}{0.861} &\secondres{\scalebox{0.8}{0.855}} &\scalebox{0.8}{1.051} &\scalebox{0.8}{1.051} &\scalebox{0.8}{0.918} &\scalebox{0.8}{0.930} &\scalebox{0.8}{0.939} &\scalebox{0.8}{1.480} &\scalebox{0.8}{1.230} &\scalebox{0.8}{1.378} &\scalebox{0.8}{1.775}\\
\bottomrule
\end{tabular}
\end{small}
\end{threeparttable}
\vspace{-10pt}
\end{table}
\subsection{Imputation}
\paragraph{Setups} Real-world systems always work continuously and are monitored by automatic observation equipment. However, due to malfunctions, the collected time series can be partially missing, making the downstream analysis difficult. Thus, imputation is widely used in practical applications. In this paper, we select the datasets from the electricity and weather scenarios as our benchmarks, including ETT \citep{haoyietal-informer-2021}, Electricity \citep{ecldata} and Weather \citep{weatherdata}, where the data-missing problem commonly occurs. To compare the model capacity under different proportions of missing data, we randomly mask the time points at ratios of $\{12.5\%, 25\%, 37.5\%, 50\%\}$.
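The masking protocol can be sketched as follows; this is an illustrative helper, and the benchmark scripts may differ in details such as random seeding.
\begin{verbatim}
import torch

def random_point_mask(x, mask_ratio=0.25):
    # x: [B, T, C]; mask entries independently, 1 = observed, 0 = missing
    mask = (torch.rand_like(x) >= mask_ratio).float()
    return x * mask, mask
\end{verbatim}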
\paragraph{Results} Due to the missing time points, the imputation task requires the model to discover underlying temporal patterns from the irregular and partially observed time series. As shown in Table \ref{tab:imputation_results}, our proposed TimesNet still achieves the consistent state-of-the-art in this difficult task, verifying the model capacity in capturing temporal variation from extremely complicated time series.
\begin{table}[tbp]
\caption{Imputation task. We randomly mask $\{12.5\%, 25\%, 37.5\%, 50\%\}$ time points in length-96 time series. The results are averaged from 4 different mask ratios. See Table \ref{tab:full_imputation_results} for full results.}\label{tab:imputation_results}
\vskip 0.05in
\centering
\begin{threeparttable}
\begin{small}
\renewcommand{\multirowsetup}{\centering}
\setlength{\tabcolsep}{0.9pt}
\begin{tabular}{c|cc|cc|cc|cc|cc|cc|cc|cc|cc|cc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Models}} &
\multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{\textbf{TimesNet}}}} &
\multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{LightTS}}} &
\multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{DLinear}}} &
\multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{FEDformer}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Stationary}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Autoformer}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Pyraformer}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Informer}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{LogTrans}}} & \multicolumn{2}{c}{\rotatebox{0}{\scalebox{0.8}{Reformer}}} \\
\multicolumn{1}{c}{} & \multicolumn{2}{c}{\scalebox{0.8}{(\textbf{Ours})}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{Zhang2022LessIM}}} &
\multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{Zeng2022AreTE}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{zhou2022fedformer}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{Liu2022NonstationaryTR}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{wu2021autoformer}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{liu2021pyraformer}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{haoyietal-informer-2021}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{2019Enhancing}}} & \multicolumn{2}{c}{\scalebox{0.8}{\citeyearpar{kitaev2020reformer}}} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}\cmidrule(lr){6-7} \cmidrule(lr){8-9}\cmidrule(lr){10-11}\cmidrule(lr){12-13}\cmidrule(lr){14-15}\cmidrule(lr){16-17}\cmidrule(lr){18-19}\cmidrule(lr){20-21}
\multicolumn{1}{c}{\scalebox{0.8}{Mask Ratio}} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} & \scalebox{0.8}{MSE} & \scalebox{0.8}{MAE} \\
\toprule
\scalebox{0.8}{ETTm1} &\boldres{\scalebox{0.8}{0.027}} &\boldres{\scalebox{0.8}{0.107}} &\scalebox{0.8}{0.104} &\scalebox{0.8}{0.218} &\scalebox{0.8}{0.093} &\scalebox{0.8}{0.206} &\scalebox{0.8}{0.062} &\scalebox{0.8}{0.177} &\secondres{\scalebox{0.8}{0.036}} &\secondres{\scalebox{0.8}{0.126}} &\scalebox{0.8}{0.051} &\scalebox{0.8}{0.150} &\scalebox{0.8}{0.717} &\scalebox{0.8}{0.570} &\scalebox{0.8}{0.071} &\scalebox{0.8}{0.188} &\scalebox{0.8}{0.050} &\scalebox{0.8}{0.154} &\scalebox{0.8}{0.055} &\scalebox{0.8}{0.166}\\
\midrule
\scalebox{0.8}{ETTm2} &\boldres{\scalebox{0.8}{0.022}} &\boldres{\scalebox{0.8}{0.088}} &\scalebox{0.8}{0.046} &\scalebox{0.8}{0.151} &\scalebox{0.8}{0.096} &\scalebox{0.8}{0.208} &\scalebox{0.8}{0.101} &\scalebox{0.8}{0.215} &\secondres{\scalebox{0.8}{0.026}} &\secondres{\scalebox{0.8}{0.099}} &\scalebox{0.8}{0.029} &\scalebox{0.8}{0.105} &\scalebox{0.8}{0.465} &\scalebox{0.8}{0.508} &\scalebox{0.8}{0.156} &\scalebox{0.8}{0.292} &\scalebox{0.8}{0.119} &\scalebox{0.8}{0.246} &\scalebox{0.8}{0.157} &\scalebox{0.8}{0.280}\\
\midrule
\scalebox{0.8}{ETTh1} &\boldres{\scalebox{0.8}{0.078}} &\boldres{\scalebox{0.8}{0.187}} &\scalebox{0.8}{0.284} &\scalebox{0.8}{0.373} &\scalebox{0.8}{0.201} &\scalebox{0.8}{0.306} &\scalebox{0.8}{0.117} &\scalebox{0.8}{0.246} &\secondres{\scalebox{0.8}{0.094}} &\secondres{\scalebox{0.8}{0.201}} &\scalebox{0.8}{0.103} &\scalebox{0.8}{0.214} &\scalebox{0.8}{0.842} &\scalebox{0.8}{0.682} &\scalebox{0.8}{0.161} &\scalebox{0.8}{0.279} &\scalebox{0.8}{0.219} &\scalebox{0.8}{0.332} &\scalebox{0.8}{0.122} &\scalebox{0.8}{0.245}\\
\midrule
\scalebox{0.8}{ETTh2} &\boldres{\scalebox{0.8}{0.049}} &\boldres{\scalebox{0.8}{0.146}} &\scalebox{0.8}{0.119} &\scalebox{0.8}{0.250} &\scalebox{0.8}{0.142} &\scalebox{0.8}{0.259} &\scalebox{0.8}{0.163} &\scalebox{0.8}{0.279} &\secondres{\scalebox{0.8}{0.053}} &\secondres{\scalebox{0.8}{0.152}} &\scalebox{0.8}{0.055} &\scalebox{0.8}{0.156} &\scalebox{0.8}{1.079} &\scalebox{0.8}{0.792} &\scalebox{0.8}{0.337} &\scalebox{0.8}{0.452} &\scalebox{0.8}{0.186} &\scalebox{0.8}{0.318} &\scalebox{0.8}{0.234} &\scalebox{0.8}{0.352}\\
\midrule
\scalebox{0.8}{Electricity} &\boldres{\scalebox{0.8}{0.092}} &\boldres{\scalebox{0.8}{0.210}} &\scalebox{0.8}{0.131} &\scalebox{0.8}{0.262} &\scalebox{0.8}{0.132} &\scalebox{0.8}{0.260} &\scalebox{0.8}{0.130} &\scalebox{0.8}{0.259} &\secondres{\scalebox{0.8}{0.100}} &\secondres{\scalebox{0.8}{0.218}} &\scalebox{0.8}{0.101} &\scalebox{0.8}{0.225} &\scalebox{0.8}{0.297} &\scalebox{0.8}{0.382} &\scalebox{0.8}{0.222} &\scalebox{0.8}{0.328} &\scalebox{0.8}{0.175} &\scalebox{0.8}{0.303} &\scalebox{0.8}{0.200} &\scalebox{0.8}{0.313} \\
\midrule
\scalebox{0.8}{Weather} &\boldres{\scalebox{0.8}{0.030}} &\boldres{\scalebox{0.8}{0.054}} &\scalebox{0.8}{0.055} &\scalebox{0.8}{0.117} &\scalebox{0.8}{0.052} &\scalebox{0.8}{0.110} &\scalebox{0.8}{0.099} &\scalebox{0.8}{0.203} &\scalebox{0.8}{0.032} &\scalebox{0.8}{0.059} &\secondres{\scalebox{0.8}{0.031}} &\secondres{\scalebox{0.8}{0.057}} &\scalebox{0.8}{0.152} &\scalebox{0.8}{0.235} &\scalebox{0.8}{0.045} &\scalebox{0.8}{0.104} &\scalebox{0.8}{0.039} &\scalebox{0.8}{0.076} &\scalebox{0.8}{0.038} &\scalebox{0.8}{0.087}\\
\bottomrule
\end{tabular}
\end{small}
\end{threeparttable}
\vspace{-10pt}
\end{table}
\subsection{Classification}
\paragraph{Setups} Time series classification can be used in recognition and medical diagnosis \citep{Moody2011PhysioNetPS}. We adopt the sequence-level classification to verify the model capacity in high-level representation learning. Concretely, we select 10 multivariate datasets from UEA Time Series Classification Archive \citep{Bagnall2018TheUM}, covering the gesture, action and audio recognition, medical diagnosis by heartbeat monitoring and other practical tasks. Then, we pre-process the datasets following the descriptions in \citep{Zerveas2021ATF}, where different subsets have different sequence lengths.
\begin{wrapfigure}{r}{0.62\textwidth}
\begin{center}
\vspace{-5pt}
\includegraphics[width=0.60\textwidth]{pic/classification.pdf}
\end{center}
\vspace{-15pt}
\caption{\small{Model comparison in classification. ``$\ast.$'' in the Transformers indicates the name of $\ast$former. The results are averaged from 10 subsets of UEA. See Table \ref{tab:full_classification_results} in Appendix for full results. }}\label{fig:classification_results}
\vspace{-10pt}
\end{wrapfigure}
\paragraph{Results} As shown in Figure \ref{fig:classification_results}, TimesNet achieves the best performance with an average accuracy of 73.6\%, surpassing the previous state-of-the-art classical method Rocket (72.5\%) and the deep model Flowformer (73.0\%). It is also notable that the MLP-based model DLinear fails in this classification task (67.5\%), although it performs well on some time series forecasting datasets. This is because DLinear only adopts a one-layer MLP on the temporal dimension, which might be suitable for some autoregressive tasks with fixed temporal dependencies but degrades considerably in learning high-level representations. In contrast, TimesNet unifies the temporal 2D-variations in 2D space, which makes it convenient to learn informative representations by 2D kernels, thereby benefiting the classification task that requires hierarchical representations.
\subsection{Anomaly Detection}
\paragraph{Setups} Detecting anomalies from monitoring data is vital to industrial maintenance. Since the anomalies are usually hidden in the large-scale data, making the data labeling hard, we focus on unsupervised time series anomaly detection, which is to detect the abnormal time points. We compare models on five widely-used anomaly detection benchmarks: SMD \citep{Su2019RobustAD}, MSL \citep{Hundman2018DetectingSA}, SMAP \citep{Hundman2018DetectingSA}, SWaT \citep{DBLP:conf/cpsweek/MathurT16}, PSM \citep{DBLP:conf/kdd/AbdulaalLL21}, covering service monitoring, space \& earth exploration, and water treatment applications. Following the pre-processing methods in Anomaly Transformer \citeyearpar{xu2021anomaly}, we split the dataset into consecutive non-overlapping segments by sliding window. In previous works, the reconstruction is a classical task for unsupervised point-wise representation learning, where the reconstruction error is a natural anomaly criterion. For a fair comparison, we only change the base models for reconstruction and use the classical reconstruction error as the shared anomaly criterion for all experiments.
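Under this protocol, the anomaly criterion for every base model reduces to a point-wise reconstruction error; a sketch is given below, where \texttt{model} is any reconstruction network.
\begin{verbatim}
import torch

def anomaly_scores(model, x):
    # x: [B, T, C]; point-wise reconstruction error as the shared criterion
    with torch.no_grad():
        recon = model(x)                       # reconstruction of the input window
    return ((x - recon) ** 2).mean(-1)         # [B, T]; larger = more anomalous
\end{verbatim}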
\paragraph{Results} Table \ref{tab:anomaly_results} demonstrates that TimesNet still achieves the best performance in anomaly detection, outperforming the advanced Transformer-based models FEDformer \citeyearpar{zhou2022fedformer} and Autoformer \citeyearpar{wu2021autoformer}. The canonical Transformer performs worse in this task (averaged F1-score 76.88\%). This may be because anomaly detection requires the model to find out the rare abnormal temporal patterns \citep{Lai2021RevisitingTS}, while the vanilla attention mechanism calculates the similarity between each pair of time points, which can be distracted by the dominant normal time points. Besides, by taking the periodicity into consideration, TimesNet, FEDformer and Autoformer all achieve great performance. Thus, these results also demonstrate the importance of periodicity analysis, which can implicitly highlight the variations that violate the periodicity, further benefiting anomaly detection.
\begin{table}[tbp]
\caption{Anomaly detection task. We calculate the F1-score (as \%) for each dataset. A higher value of F1-score indicates a better performance. See Table \ref{tab:full_anomaly_results} in Appendix for full results.}\label{tab:anomaly_results}
\vskip 0.05in
\centering
\begin{threeparttable}
\begin{small}
\renewcommand{\multirowsetup}{\centering}
\setlength{\tabcolsep}{0.43pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Models}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{\textbf{TimesNet}}}} & \multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{\textbf{TimesNet}}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{FEDformer}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{LightTS}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{DLinear}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{Stationary}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{Autoformer}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{Pyraformer}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{Anomaly*}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{Informer}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{Reformer}}} &
\multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{LogTrans}}} & \multicolumn{1}{c}{\rotatebox{0}{\scalebox{0.75}{Transformer}}} \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{\scalebox{0.7}{(\textbf{ResNeXt})}} &
\multicolumn{1}{c}{\scalebox{0.7}{(\textbf{Inception})}} &
\multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{zhou2022fedformer}}} &
\multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{Zhang2022LessIM}}} &
\multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{Zeng2022AreTE}}} & \multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{Liu2022NonstationaryTR}}} & \multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{wu2021autoformer}}} & \multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{liu2021pyraformer}}} & \multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{xu2021anomaly}}} &\multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{haoyietal-informer-2021}}} & \multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{kitaev2020reformer}}}& \multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{2019Enhancing}}} & \multicolumn{1}{c}{\scalebox{0.75}{\citeyearpar{NIPS2017_3f5ee243}}} \\
\toprule
\scalebox{0.8}{SMD} & \boldres{\scalebox{0.8}{85.81}} &\scalebox{0.8}{85.12} & \scalebox{0.8}{85.08} & \scalebox{0.8}{82.53} & \scalebox{0.8}{77.10} & \scalebox{0.8}{84.72} & \scalebox{0.8}{85.11} & \scalebox{0.8}{83.04} & \secondres{\scalebox{0.8}{85.49}} & \scalebox{0.8}{81.65} & \scalebox{0.8}{75.32} & \scalebox{0.8}{76.21} & \scalebox{0.8}{79.56} \\
\scalebox{0.8}{MSL} & \boldres{\scalebox{0.8}{85.15}} &\scalebox{0.8}{84.18}& \scalebox{0.8}{78.57} & \scalebox{0.8}{78.95} & \secondres{\scalebox{0.8}{84.88}} & \scalebox{0.8}{77.50} & \scalebox{0.8}{79.05} & \scalebox{0.8}{84.86} & \scalebox{0.8}{83.31} & \scalebox{0.8}{84.06} & \scalebox{0.8}{84.40} & \scalebox{0.8}{79.57} & \scalebox{0.8}{78.68} \\
\scalebox{0.8}{SMAP} & \boldres{\scalebox{0.8}{71.52}} &\scalebox{0.8}{70.85}& \scalebox{0.8}{70.76} & \scalebox{0.8}{69.21} & \scalebox{0.8}{69.26} & \scalebox{0.8}{71.09} & \scalebox{0.8}{71.12} & \scalebox{0.8}{71.09} & \secondres{\scalebox{0.8}{71.18}} & \scalebox{0.8}{69.92} & \scalebox{0.8}{70.40} & \scalebox{0.8}{69.97} & \scalebox{0.8}{69.70} \\
\scalebox{0.8}{SWaT} & \scalebox{0.8}{91.74} &\scalebox{0.8}{92.10}& \secondres{\scalebox{0.8}{93.19}} & \boldres{\scalebox{0.8}{93.33}} & \scalebox{0.8}{87.52} & \scalebox{0.8}{79.88} & \scalebox{0.8}{92.74} & \scalebox{0.8}{91.78} & \scalebox{0.8}{83.10} & \scalebox{0.8}{81.43} & \scalebox{0.8}{82.80} & \scalebox{0.8}{80.52} & \scalebox{0.8}{80.37} \\
\scalebox{0.8}{PSM} & \boldres{\scalebox{0.8}{97.47}} &\scalebox{0.8}{95.21}& \scalebox{0.8}{97.23} & \scalebox{0.8}{97.15} & \scalebox{0.8}{93.55} & \secondres{\scalebox{0.8}{97.29}} & \scalebox{0.8}{93.29} & \scalebox{0.8}{82.08} & \scalebox{0.8}{79.40} & \scalebox{0.8}{77.10} & \scalebox{0.8}{73.61} & \scalebox{0.8}{76.74} & \scalebox{0.8}{76.07} \\
\midrule
\scalebox{0.8}{Avg F1} & \boldres{\scalebox{0.8}{86.34}} & \scalebox{0.8}{\secondres{85.49}}& \scalebox{0.8}{84.97} & \scalebox{0.8}{84.23} & \scalebox{0.8}{82.46} & \scalebox{0.8}{82.08} & \scalebox{0.8}{84.26} & \scalebox{0.8}{82.57} & \scalebox{0.8}{80.50} & \scalebox{0.8}{78.83} & \scalebox{0.8}{77.31} & \scalebox{0.8}{76.60} & \scalebox{0.8}{76.88} \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item $\ast$ We replace the joint criterion in Anomaly Transformer \citeyearpar{xu2021anomaly} with reconstruction error for fair comparison.
\end{tablenotes}
\end{small}
\end{threeparttable}
\vspace{-10pt}
\end{table}
\subsection{Model Analysis}
\begin{wrapfigure}{r}{0.45\textwidth}
\begin{center}
\vspace{-48pt}
\includegraphics[width=0.43\textwidth]{pic/vis_case.pdf}
\end{center}
\vspace{-15pt}
\caption{\small{A case of temporal 2D-variations. }}\label{fig:vis_case}
\vspace{-35pt}
\end{wrapfigure}
\paragraph{Temporal 2D-variations} We provide a case study of temporal 2D-variations in Figure \ref{fig:vis_case}. We can find that TimesNet can capture the multi-periodicities precisely. Besides, the transformed 2D tensor is highly structured and informative, where the columns and rows can reflect the localities between time points and periods respectively, supporting our motivation in adopting 2D kernels for representation learning. See Appendix \ref{sec:vis} for more visualizations.
\paragraph{Representation analysis} We attempt to explain the model performance from the representation learning aspect. From Figure \ref{fig:cka}, we can find that better performance in forecasting and anomaly detection corresponds to higher CKA similarity \citeyearpar{Kornblith2019SimilarityON}, which is the opposite of the imputation and classification tasks. Note that a lower CKA similarity means that the representations are more distinct across different layers, namely hierarchical representations. Thus, these results also indicate the property of representations that each task requires. As shown in Figure \ref{fig:cka}, TimesNet can learn appropriate representations for different tasks, such as low-level representations for forecasting and reconstruction in anomaly detection, and hierarchical representations for imputation and classification. In contrast, FEDformer \citeyearpar{zhou2022fedformer} performs well in the forecasting and anomaly detection tasks but fails in learning hierarchical representations, resulting in poor performance in imputation and classification. These results also verify the task-generality of our proposed TimesNet as a foundation model.
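For reference, the linear variant of CKA \citeyearpar{Kornblith2019SimilarityON} can be computed as below (an illustrative sketch; \texttt{x} and \texttt{y} collect the first- and last-layer representations of the same samples).
\begin{verbatim}
import torch

def linear_cka(x, y):
    # x: [n, d1], y: [n, d2] -- representations of the same n samples
    x = x - x.mean(0, keepdim=True)            # center each feature
    y = y - y.mean(0, keepdim=True)
    hsic = (y.t() @ x).norm() ** 2             # ||Y^T X||_F^2
    return (hsic / ((x.t() @ x).norm() * (y.t() @ y).norm())).item()
\end{verbatim}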
\begin{figure*}[h]
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{pic/cka.pdf}}
\vspace{-5pt}
\caption{Representation analysis in four tasks. For each model, we calculate the centered kernel alignment (CKA) similarity \citeyearpar{Kornblith2019SimilarityON} between representations from the first and the last layers. A higher CKA similarity indicates more similar representations. TimesNet is marked by \textcolor{red}{red} stars.}
\label{fig:cka}
\end{center}
\vspace{-20pt}
\end{figure*}
\section{Conclusion and Future Work}
This paper presents TimesNet as a task-general foundation model for time series analysis. Motivated by the multi-periodicity, TimesNet can unravel intricate temporal variations by a modular architecture and capture intraperiod- and interperiod-variations in 2D space by a parameter-efficient inception block. Experimentally, TimesNet shows great generality and performance in five mainstream analysis tasks. In the future, we will further explore large-scale pre-training methods for time series, which utilize TimesNet as the backbone and can generally benefit extensive downstream tasks.
\section{Introduction}
In differential geometry and in pseudo-Riemannian geometry, one can form polynomial curvature invariants by taking contractions of the Riemann tensor and its covariant derivatives. For example, the Ricci scalar, and the Kretschmann scalar, $R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$, are simple examples of such invariants \cite{invariants}.
\\
\\
Let ${\mathcal{I}}$ be the set of \emph{all} such polynomial invariants formed by full contractions of the Riemann tensor and its covariant derivatives. This set is finitely generated \cite{Goodman}, and hence, we can assume that $\mathcal{I}$ is finite. If we are given a metric, $g$, we can compute the value of these invariants, ${I}[g]\in \mathcal{I}$. A natural question is then: to what extent is the value ${I}[g]$ unique? Here, we will discuss \emph{$\mathcal{I}$-degenerate metrics}, which are defined to be metrics having a \emph{degenerate curvature structure} in the sense that there are continuous families $g_{\tau}$ of non-diffeomorphic metrics\footnote{We will use the following terminology: A \emph{diffeomorphism}, $f: M\rightarrow N$, is a smooth bijection between smooth manifolds. If the manifolds, $M$ and $N$, are equipped with metrics, $g$ and $h$ respectively, and there exists a diffeomorphism $f$ such that the metrics are related via $f^*h=g$, we will say that the \emph{metrics are diffeomorphic}. If no such $f$ exists, then $h$ and $g$ are non-diffeomorphic metrics. In the differential geometry literature, one often uses the term isometry for such a map; however, we will reserve the word isometry for diffeomorphisms $f:M\rightarrow M$ where $f^*g=g$. } having the same invariants, i.e., $I[g_\tau]$ does not depend on $\tau$ \cite{degen}.
\\
\\
In the Riemannian case where the metric is positive definite, there are no $\mathcal{I}$-degenerate metrics, implying that the space is completely determined by the value of $I[g]$ \cite{OP}. In the Lorentzian case, the situation is very different, as there is a large family of $\mathcal{I}$-degenerate metrics \cite{degen}. In this case, all known examples belong to the Kundt class and these metrics have been studied in some detail. For example, the VSI metrics (all polynomial curvature invariants vanish), are known to be all Kundt \cite{VSI}. The CSI case (all invariants are constants), has been studied to some extent and the $\mathcal{I}$-degenerate metrics are also believed to be of Kundt class \cite{CSI, kundt}. In general, examples of $\mathcal{I}$-degenerate Lorentzian metrics have only been found in the Kundt class. In other signatures, other possibilities occur, and to date, only Walker examples (in addition to the Kundt metrics) have been given \cite{Walker,VSI1,Alcolado}. In particular, in 4 dimensional neutral space, it was shown that all VSI spaces are of either Kundt or Walker type \cite{VSI2}.
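As a concrete illustration, consider the well-known pp-wave metrics in Brinkmann form,
\begin{equation*}
\mathrm{d}s^2 = 2\,\mathrm{d}u\,\mathrm{d}v + H(u,x,y)\,\mathrm{d}u^2 + \mathrm{d}x^2 + \mathrm{d}y^2,
\end{equation*}
which are both Kundt and Walker: for any profile function $H$, the curvature tensor and all of its covariant derivatives have only negative boost-weight components with respect to the null direction $\partial_v$, so every polynomial curvature invariant vanishes identically even though the metric is non-flat for generic $H$.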
\\
\\
The question the paper addresses is of obvious interest for the classification problem in differential geometry. Differential geometry is used in many areas of physics, and
the Lorentzian case is of clear interest in theories of gravity. Other signatures have applications in physics as well; for example, in twistor theory \cite{twistor,Dun}, the metric is a 4 dimensional metric of neutral signature.
\\
\\
In this paper we study pseudo-Riemannian spaces of arbitrary signature with a degenerate curvature structure. We approach this problem in a new way and find new examples of $\mathcal{I}$-degenerate metrics. Indeed, in a systematic study we define a class of metrics which contains all known examples of $\mathcal{I}$-degenerate metrics, including the Kundt and Walker cases. The underlying assumption is motivated by invariant theory, which implies that a certain boost limit should exist, at least pointwise, for these spaces \cite{VSI2,RS, align, degenInv}. We assume that this boost limit extends to a neighbourhood, thereby constraining the form of the metric. Nevertheless, this class is sufficiently rich to include \emph{all known examples} of $\mathcal{I}$-degenerate metrics.
As a by-product of this assumption, we get a way to determine the invariants for such spacetimes using the metric of a simpler space.
\\
\\
The structure of the paper is as follows: First we look at the form of a pseudo-Riemannian metric under the assumption that there exists a surface-forming null $k$-form $\mbold{F}$. This yields a generalisation of both the Kundt and Walker spaces and gives a geometric interpretation of the class under consideration. We also identify subclasses of this family of metrics by imposing appropriate conditions on the covariant derivative of $\mbold{F}$, amongst which are the Kundt and Walker classes. Then we start anew and look at the form of the metric from the point of view of invariant theory. Here we assume the existence of a particular limit and show that the resulting form of the metric is actually a subclass of the previously considered metrics with the closed $k$-form $\mbold{F}$. In other words, using invariant theory we show that there is a subclass of degenerate metrics hiding within the first class. Then we constrain the coefficients of these degenerate metrics by utilising the boost-weight decomposition and the existence of a particular boost limit. Lastly, we discuss the VSI and CSI subclasses.
\section{Canonical form of the metric}
Here we assume the existence of a $k$-form $\boldsymbol{F} = \boldsymbol{\ell }^1 \wedge \boldsymbol{\ell }^2 \wedge \cdots \wedge \boldsymbol{\ell }^k $, where the $\boldsymbol{\ell }^i $'s are all null and mutually orthogonal. We then impose progressively stronger conditions on the derivatives of $\boldsymbol{F}$ to derive a hierarchy of classes. These conditions, envisaged as a generalisation of the Kundt and Walker conditions, allow us to write the metric in a canonical form. In this section no further assumptions will be made; however, in the next section an assumption of degeneracy of the curvature structure of the (general pseudo-Riemannian) metric will yield a subclass of these metrics. Hence, an independent interpretation of these $\mathcal{I}$-degenerate metrics is provided in the current section.
\\
\\
The space-time dimension is $n = 2k + m$ with signature $(k+p , k-p+m) ,\ p \leq m $. We are given $k \leq n/2$ null 1-forms $\boldsymbol{\ell }^{i}$, which are linearly independent and orthogonal. In discussing various geometrical conditions, it is convenient to work in terms of a $k$-form
defined by
\begin{equation}
\boldsymbol{F} = \boldsymbol{\ell }^1 \wedge \boldsymbol{\ell }^2 \wedge \cdots \wedge \boldsymbol{\ell }^k \ .
\end{equation}
Now we impose two surface-forming conditions, the first of which is given as follows:
\begin{cond}[Primary surface-forming condition]\label{cond:1}
There exists a 1-form $\boldsymbol{\Sigma }$ such that
\begin{equation}
d\boldsymbol{F} = \boldsymbol{\Sigma } \wedge \boldsymbol{F} \ .
\end{equation}
\end{cond}
Let us introduce a distribution $\mathcal{D}$, which we shall call the orthogonal complement of $\boldsymbol{F}$, defined by
\begin{equation}
\mathcal{D}_p = \{ \boldsymbol{v} \in T_p \mathcal{M}\ | \ \langle \boldsymbol{\ell }^i , \boldsymbol{v} \rangle = 0, i = 1 , \cdots , k \}
\end{equation}
By Condition \ref{cond:1} and the Frobenius theorem, $\mathcal{D}$ is integrable and there exist functions $u^i $ such that
\begin{equation}
\boldsymbol{\ell }^i = \lambda ^i_{\ j} du^j
\end{equation}
where the matrix $\lambda ^i_{\ j}$ is invertible
and $u^i = {\rm const}$ specifies an integral manifold of $\mathcal{D}$.
Next, we construct a canonical frame $\{ \boldsymbol{\ell }^i , \boldsymbol{m}^a , \boldsymbol{n}^{\hat{i}} \} , a = 1 , \cdots , m = n-2k$ that satisfies
\begin{equation}
\begin{split}
& \boldsymbol{g}\left( \boldsymbol{\ell }^i , \boldsymbol{n}^{\hat{j}} \right) = \delta ^{i\hat{j}} \ , \quad \boldsymbol{g} \left( \boldsymbol{m}^a , \boldsymbol{m}^b \right) = \eta ^{ab} \ , \\
& \boldsymbol{g} \left( \boldsymbol{\ell }^i , \boldsymbol{m}^a \right) = 0 \ , \quad \boldsymbol{g} \left( \boldsymbol{n}^{\hat{i}} , \boldsymbol{n}^{\hat{j}} \right) = 0 \ , \quad \boldsymbol{g} \left( \boldsymbol{n}^{\hat{i}} , \boldsymbol{m}^a \right) = 0 \ .
\end{split}
\end{equation}
$\eta ^{ab}$ is a pseudo-Euclidean metric of signature $(p,m-p)$.
Denoting the metric-induced isomorphism by $\sharp $, i.e., for an arbitrary vector $\boldsymbol{v}$ and 1-form $\boldsymbol{\omega }$
\begin{equation}
\boldsymbol{g} \left( \boldsymbol{\omega }^{\sharp } , \boldsymbol{v} \right) = \langle \boldsymbol{\omega } , \boldsymbol{v} \rangle \ ,
\end{equation}
and using the notation
\begin{equation}
\boldsymbol{\ell }_{\hat{i}} = \left( \boldsymbol{\ell }^i \right) ^{\sharp } \ , \quad \boldsymbol{n}_i = \left( \boldsymbol{n}^{\hat{i}} \right) ^{\sharp } \ , \quad \boldsymbol{m}_a = \eta _{ab} \left( \boldsymbol{m}^b \right) ^{\sharp } \ ,
\end{equation}
we note that $\{ \boldsymbol{n}_i , \boldsymbol{m}_a , \boldsymbol{\ell }_{\hat{i}} \}$ is dual to the canonical frame $\{ \boldsymbol{\ell }^i , \boldsymbol{m}^a , \boldsymbol{n}^{\hat{i}}\} $ in the usual sense and
\begin{equation}
\boldsymbol{\ell }_{\hat{i}} \in \mathcal{D} \ , \quad \boldsymbol{m}_a \in \mathcal{D} \ .
\end{equation}
The vectors $\boldsymbol{\ell }_{\hat{i}}$ constitute a null distribution $\mathcal{D}^{\ast } \subset \mathcal{D}$.
The following condition then ensures that $\mathcal{D}^{\ast }$ is integrable:
\begin{cond}[Secondary surface-forming condition]\label{cond:2}
In the canonical frame $\{ \boldsymbol{\ell }^i , \boldsymbol{m}^a , \boldsymbol{n}^{\hat{i}} \}$,
the components of the covariant derivative of $\boldsymbol{F}$ satisfy
\begin{equation}
\nabla _{\hat{i}} F_{a \mu _1 \cdots \mu _{k-1}} = 0 \ ,
\end{equation}
where, and in what follows, the Greek indices run from $1$ to $n$.
\end{cond}
In order to see that this assumption implies the integrability of $\mathcal{D}^{\ast }$,
let us look at the covariant derivatives of $\boldsymbol{\ell }^i$ along $\mathcal{D}^{\ast }$;
\begin{equation}
\nabla _{\boldsymbol{\ell }_{\hat{i}}} \boldsymbol{\ell }^k = \omega ^k_{\ \hat{i}j} \boldsymbol{\ell }^j + \omega ^k_{\ \hat{i}a}\boldsymbol{m}^a + \omega ^k_{\ \hat{i}\hat{j}} \boldsymbol{n}^{\hat{j}} \ .
\end{equation}
$\omega ^{\lambda }_{\ \mu \nu }$'s are the components of connection 1-forms defined by
\begin{equation}
\omega ^{\lambda }_{\ \mu \nu } \boldsymbol{e}^{\nu } = \nabla _{\boldsymbol{e}_{\mu }}\boldsymbol{e}^{\lambda } \ , \quad \{ \boldsymbol{e}^{\mu } \} = \{ \boldsymbol{\ell }^i , \boldsymbol{m}^a ,\boldsymbol{n}^{\hat{i}} \} \ , \quad \{ \boldsymbol{e}_{\mu } \} = \{ \boldsymbol{\ell }_{\hat{i}} , \boldsymbol{m}_a , \boldsymbol{n}_i \} \ .
\end{equation}
One can compute the connection 1-forms from exterior derivatives of $\boldsymbol{e}^{\mu }$
(see Appendix A), and $\omega ^k_{\ \hat{i} \hat{j}} = 0$ since Condition \ref{cond:1} implies that
there exists a matrix of 1-forms $\boldsymbol{\sigma }^i_{\ j}$ such that $d\boldsymbol{\ell }^i = \boldsymbol{\sigma }^i_{\ j} \wedge \boldsymbol{\ell }^j $.
Now evaluating the commutator among $\boldsymbol{\ell }_{\hat{i}}$'s;
\begin{eqnarray*}
\left[ \boldsymbol{\ell }_{\hat{i}} , \boldsymbol{\ell }_{\hat{j} } \right] &=& \nabla _{\boldsymbol{\ell }_{\hat{i}}} \boldsymbol{\ell }_{\hat{j}} - \nabla _{\boldsymbol{\ell }_{\hat{j}}} \boldsymbol{\ell }_{\hat{i}} \\
&=& \left( \nabla _{\boldsymbol{\ell }_{\hat{i}}} \boldsymbol{\ell }^j \right) ^{\sharp } - \left( \nabla _{\boldsymbol{\ell }_{\hat{j}}} \boldsymbol{\ell }^i \right) ^{\sharp } \\
&=& \left( \omega ^j_{\ \hat{i}k} - \omega ^i_{\ \hat{j}k} \right) \boldsymbol{\ell }_{\hat{k}} + \left( \omega ^j_{\ \hat{i}a} - \omega ^i_{\ \hat{j}a} \right) \boldsymbol{m}_a \ ,
\end{eqnarray*}
one can conclude that vanishing $\omega ^k_{\ \hat{i}a}$, which is equivalent to Condition \ref{cond:2}, is sufficient for the integrability of $\mathcal{D}^{\ast }$ ($\Leftrightarrow$ the $\boldsymbol{\ell }_{\hat{i}}$'s being involutive). The same condition also turns out to be necessary (cf. Appendix A).
\\
\\
Once the integrability of $\mathcal{D}^{\ast }$ is established, one can choose a coordinate system
$\{ y^I , v^{\hat{i}} \} ,\ I = 1, \cdots , n-k$ so that we can write
\begin{equation}
\boldsymbol{\ell }_{\hat{i}} = \kappa _{\hat{i}}^{\ \hat{j}} \partial _{v^{\hat{j}}}
\end{equation}
where the matrix $\kappa _{\hat{i}}^{\ \hat{j}}$ is invertible. Since $\mathcal{D}^{\ast } \subset \mathcal{D}$, we have
\begin{equation}
0 = \langle \boldsymbol{\ell }^i , \boldsymbol{\ell }_{\hat{j}} \rangle = \lambda ^i_{\ k}\kappa _{\hat{j}}^{\ \hat{l}}\langle du^k , \partial _{v^{\hat{l}}} \rangle \ ,
\end{equation}
hence
\begin{equation}
0 = \langle du^i , \partial _{v^{\hat{j}}} \rangle = \partial _{v^{\hat{j}}}u^i \quad \Leftrightarrow \quad du^i = f^i_{\ I}(y) dy^I
\end{equation}
where $f^i_{\ I}$'s are functions of $y^I$'s only.
Therefore, one can find a coordinate transformation $u^i (y) , x^a (y), \ a = 1, \cdots , m = n-2k$ independently of $v^{\hat{i}}$, so that $\{ u^i , x^a , v^{\hat{i}} \} $ form a local coordinate system of the entire manifold, with $\{ x^a , v^{\hat{i}} \} $ spanning $\mathcal{D}$.
By construction, it is clear that
\begin{equation}
g \left( \partial _{v^{\hat{i}}} , \partial _{v^{\hat{j}}} \right) = 0 \ , \quad g\left( \partial _{x^a} , \partial _{v^{\hat{i}}} \right) = 0 \ .
\end{equation}
This corresponds to the following form of the metric:
\begin{equation}
g_{\mu \nu } = \left( \begin{array}{ccc}
A_{ij} & B_{ib} & a_{i\hat{j}} \\
B^t_{aj} & g_{ab} & 0 \\
a^t_{\hat{i}j} & 0 & 0 \\
\end{array} \right) \ .
\end{equation}
The line element is then \footnote{We will use hats on the index $i$ when it is essential to distinguish between $u^i$ and $v^{\hat{i}}$. However, when there is no fear of confusion, these will be omitted.}:
\begin{equation}
ds^2 = 2d u^{i } \left( a_{ij}d v^{j } + A_{ij}d u^{j } + B_{i a} d x^a \right) + g_{ab} d x^a d x^b \ .
\end{equation}
The connection 1-forms for this metric can be found in Appendix A. For later reference, it is useful to consider a class of transformations leaving this form of the metric invariant. Consider the transformation:
\beq
(\tilde{u}^i, \tilde{x}^a, \tilde{v}^j)=(u^i,x^a,f^j( u^n,x^b,v^m)) \ ,
\label{v-trafo}\eeq
for functions $f^j$. We note that
\[ d\tilde{v}^j=(\partial_{u^n}f^j) du^n+(\partial_{x^b}f^j) dx^b +(\partial_{v^m}f^j)dv^m.\]
This may be used to simplify the metric functions $a_{ij}$. In particular, if, for a fixed $i$,
\beq
(\partial_{v^n}a_{im})dv^n\wedge dv^m =0 \ ,
\label{a-condition}\eeq
(which means that $d (a_{im}dv^m) =0$ as a function of the $v^n$), then we can use the transformation, eq.(\ref{v-trafo}), to bring $ a_{ij}d v^j \mapsto \delta_{ij}d v^j$ (fixed $i$).
\subsection{Important subclasses of metrics}
\label{sect:subclasses}
Let us point out some important subclasses of these metrics given in terms of the $k$-form $\boldsymbol{F}$. This form is null and surface-forming in the sense made accurate by the conditions \ref{cond:1} and \ref{cond:2}. We will assume the following classes to be of increasing speciality, i.e. we assume class $N+1$ is a subclass of class $N$ etc.
\paragraph{Type I: "Shear-free and expansion-free"}
As we will see later, all of the examples of $\mathcal{I}$-degenerate metrics given in this article belong to this class. In this class the transverse metric $g_{ab}$ is independent of the $v^j$'s in the coordinate basis. This is equivalent to requiring that $\boldsymbol{F}$ obeys:
\[ \nabla_aF_{b\mu_1 \cdots \mu_{k-1}}=0\ , \]
in the canonical basis $\{ \boldsymbol{\ell }^i , \boldsymbol{m}^a , \boldsymbol{n}^{\hat{i}} \}$ constructed above (not in the coordinate basis where $g_{\mu \nu }$ is written down).
\\
\\
We note that in the special case where $k=1$, i.e., $\boldsymbol{F}$ is a 1-form, then this case reduces to the {\bf Kundt class} \cite{kundt}. Furthermore, for the Kundt metrics, condition II below is automatically satisfied.
\paragraph{Type II: The matrix $a_{i\hat{j} }=\delta_{i\hat{j}}$.}
If the matrix $a_{i\hat{j}}$ obeys equation (\ref{a-condition}), for all $i$, we can use the transformation, eq.(\ref{v-trafo}), to set $a_{i\hat{j} }=\delta_{i\hat{j}}$. This amounts to require that $\mbold{F}$ obeys (again in the canonical frame):
\[ \nabla_{\hat{j}}F_{\hat{i}\mu_1 \cdots \mu_{k-1}}=0. \]
\paragraph{Type III: The Walker case}
The Walker class is defined as the case when $\mbold{F}$ is invariant \cite{Walker}:
\[ \nabla_\mu \boldsymbol{F} = k_\mu \boldsymbol{F},\]
for some 1-form $k_\mu$ (not to be confused with the integer $k$). This condition alone encompasses conditions I and II above. In terms of the metric components this means that the functions $B_{ia}$ do not depend on the $v^{\hat{i}}$'s.
\paragraph{Type IV: $\mbold{F}$ is covariantly constant}
This case is defined by:
\[ \nabla_\mu \boldsymbol{F}=0, \]
and implies that conditions I-III are fulfilled. Interestingly, this implies that the null form fulfills the Killing-Yano equations, although it corresponds to the degenerate case where the Killing-Yano tensor is null.
\paragraph{Type V: All $du^i$ are covariantly constant.} This amounts to the case where there is no $v^{\hat{i}}$-dependence in any of the metric functions, which implies that the vectors $\partial_{v^{\hat{i}}}$ are Killing vectors. Thus in this case the space-time possesses $k$ null Killing vectors.
\\
\\
In the examples given later, metrics from all of these categories I - V can be found.
\section{Invariant theory and $\mathcal{I}$-degenerate metrics}
We will now review the boost-weight decomposition method as in \cite{VSI1,VSI2,bw} and introduce the $\textbf{S}_i$- and $\textbf{N}$-properties of a tensor. Utilising ideas from invariant theory and degenerate tensors, we will, under the assumption that the metric has a similar well-defined limit, arrive at a class of spaces which are degenerate in the sense that their curvature structure is degenerate.
\subsection{Boost-weight decomposition}
If we have a pseudo-Riemannian manifold with dimension $(2k + m)$
and signature $(k, k + m)$, we can choose a suitable null frame so that the metric can be written:
\begin{align}
\label{metric}
ds^2 = 2( \boldsymbol{\ell}^1 \boldsymbol{n}^1 + ... + \boldsymbol{\ell}^i \boldsymbol{n}^i + ... + \boldsymbol{\ell}^k \boldsymbol{n}^k) + \delta_{ab} \boldsymbol{m}^a \boldsymbol{m}^b
\end{align}
where $a,b=1,...,m$.
\\
\\
First we choose a real null frame so that we can write down the metric as (\ref{metric}). We then look at the $k$ independent boosts that form an Abelian subgroup of the group $SO(k,k + m)$:
\begin{align}
\label{boost}
(\mbox{{\mbold\ell}}^1, \mbox{{\bf n}}^1) &\rightarrow (e^{\lambda_1} \mbox{{\mbold\ell}}^1, e^{-\lambda_1} \mbox{{\bf n}}^1) \nonumber \\
(\mbox{{\mbold\ell}}^2, \mbox{{\bf n}}^2) &\rightarrow (e^{\lambda_2} \mbox{{\mbold\ell}}^2, e^{-\lambda_2} \mbox{{\bf n}}^2) \nonumber \\
&. \nonumber \\
&. \nonumber \\
(\mbox{{\mbold\ell}}^k, \mbox{{\bf n}}^k) &\rightarrow (e^{\lambda_k} \mbox{{\mbold\ell}}^k, e^{-\lambda_k} \mbox{{\bf n}}^k)
\end{align}
where the $\lambda_i$'s are real constants.
This is now a pointwise action on $T_pM$. For a tensor $T$, we now introduce \textit{boost weights}, $\boldsymbol{b}$ $\in$ $\mathds{Z}^k$ in the following manner:
We look at, $T_{\mu_1...\mu_n}$, an arbitrary component of $T$ with respect to the frame given in (\ref{metric}).
\begin{enumerate}
\item Consider a boost given in eq.(\ref{boost}). Then it will transform as $T_{\mu_1...\mu_n} \rightarrow e^{(b_1 \lambda_1 + b_2 \lambda_2 + ... + b_k \lambda_k)}T_{\mu_1...\mu_n}$ for some integers $b_1,...,b_k$.
\item Then $T_{\mu_1...\mu_n}$ is of boost weight $\boldsymbol{b} = (b_1,b_2,...,b_k)$.
\end{enumerate}
Now one can decompose a tensor into boost weights accordingly:
\begin{align}
T = \sum_{\boldsymbol{b} \in \mathds{Z}^k} (T)_{\boldsymbol{b}}
\label{eq:T_b}\end{align}
where $(T)_{\boldsymbol{b}}$ means the projection onto the subspace (of components) of boost weight $\boldsymbol{b}$.
\\
\\
If we take the tensor product of two tensors $T$ and $S$, the boost weights obey the following additive rule:
\begin{align}
(T\otimes S)_{\boldsymbol{b}}~=~\sum_{\hat{\boldsymbol{b}} + \bar{\boldsymbol{b}} = \boldsymbol{b}} (T)_{\bar{\boldsymbol{b}}} \otimes (S)_{\hat{\boldsymbol{b}}}
\end{align}
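The bookkeeping behind this decomposition is simple to automate. The following Python sketch is our own illustration (the dictionary representation and the toy components are assumptions made purely for the example): a tensor is stored as a map from boost-weight vectors to schematic scalar components, the additive rule above becomes addition of keys, and the boost (\ref{boost}) scales each component by $e^{(b_1\lambda_1+\cdots+b_k\lambda_k)\tau}$.
\begin{verbatim}
from collections import defaultdict
from math import exp

def tensor_product(T, S):
    # Boost weights are additive under tensor products; scalar values
    # stand in schematically for the actual component blocks.
    TS = defaultdict(float)
    for b1, t in T.items():
        for b2, s in S.items():
            b = tuple(x + y for x, y in zip(b1, b2))
            TS[b] += t * s
    return dict(TS)

def boost(T, lam, tau):
    # A component of boost weight b is scaled by exp(tau * (b . lam)).
    return {b: c * exp(tau * sum(bi * li for bi, li in zip(b, lam)))
            for b, c in T.items()}

# A k = 2 toy tensor with components of boost weights (0,0) and (-1,-2):
T = {(0, 0): 1.0, (-1, -2): 0.5}
print(boost(T, lam=(1.0, 2.0), tau=10.0))  # the (-1,-2) component decays
print(tensor_product(T, T))                # weights add: (0,0),(-1,-2),(-2,-4)
\end{verbatim}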
\subsection{The $\textbf{S}_i$- and $\textbf{N}$-properties of a tensor}
We first look at a tensor $T$ and define some conditions its components may fulfill:
\\
\\
\textbf{Definition 1}
\begin{align}
\text{B1)~~} (T)_{\boldsymbol{b}} &= 0, \text{for all~} \boldsymbol{b} = (b_1,b_2,...,b_k), ~b_1 > 0 \nonumber \\
\text{B2)~~} (T)_{\boldsymbol{b}} &= 0, \text{for all~} \boldsymbol{b} = (0~,b_2,...,b_k),~ b_2 > 0 \nonumber \\
\text{B3)~~} (T)_{\boldsymbol{b}} &= 0, \text{for all~} \boldsymbol{b} = (0~,0~,b_3,...,b_k),~ b_3 > 0 \nonumber \\
&. \nonumber \\
&. \nonumber \\
\text{B$k$)~~} (T)_{\boldsymbol{b}} &= 0, \text{for all~} \boldsymbol{b} = (0,0,...,0,b_k), ~b_k > 0 \nonumber
\end{align}
\textit{A tensor $T$ possesses the $\textbf{S}_1$ property if there exists a null frame such that condition B1) is satisfied. Furthermore the tensor possesses the $\textbf{S}_i$ property if there exists a null frame such that conditions B1)-B$i$) are fulfilled.}
\\
\\
\textbf{Definition 2}
\\
\textit{A tensor $T$ possesses the $\textbf{N}$ property if there exists a null frame such that conditions B1)-B$k$) are fulfilled and:}
\begin{align}
(T)_{\boldsymbol{b}} = 0, \text{for~} \boldsymbol{b} = (0,0,...0,0). \nonumber
\end{align}
These conditions can be extended \cite{VSI1} in the following manner:
\\
\\
Consider a tensor $T$, which does not necessarily have any of the $\textbf{S}_i$ properties defined above. Since the boost weights $\boldsymbol{b}$ $\in \mathds{Z}^k \subset \mathds{R}^k$, we can utilise a linear transformation that maps the boost weights onto a lattice in $\mathds{R}^k$. More precisely, the transformation $G \in GL(k)$ is a map:
\begin{align}
G:\mathds{Z}^k \rightarrow \Gamma
\end{align}
where $\Gamma$ is a lattice in $\mathds{R}^k$. Now, if there exists a $G \in GL(k)$ such that, after having transformed the boost weights to $G \boldsymbol{b}$, the tensor $T$ satisfies some of the properties above, we say that $T$ possesses the $\textbf{S}_i^G$- or $\textbf{N}^G$ property. If we have two tensors $T$ and $S$, both possessing the $\textbf{S}_i^G$-property, \textit{with the same $G$}, we can form the tensor product:
\begin{align}
(T\otimes S)_{G\boldsymbol{b}}~=~\sum_{G\hat{\boldsymbol{b}} + G\bar{\boldsymbol{b}} = G\boldsymbol{b}} (T)_{G\bar{\boldsymbol{b}}} \otimes (S)_{G\hat{\boldsymbol{b}}}.
\end{align}
So the tensor product also has the $\textbf{S}_i^G$-property. Note also that if $G = I$ then the $\textbf{S}_i^G$-property reduces to the $\textbf{S}_i$-property.
\\
\\
The role of these properties can be given in terms of the following result \cite{VSI2}:
\begin{thm}\label{mainthm}
A tensor $T$ is not characterised by its invariants if and only if it possesses (at least) the ${\bf S}_1^G$-property.
\end{thm}
The crucial point in the proof of this is the existence of a boost, $B_\tau$, of the form eq.(\ref{boost}), such that the components of $T$ under the action of the boost have a well-defined limit as $\tau \rightarrow \infty$. Recalling part of the proof:
if the tensor $T$ is not characterised by its invariants, then there exists an ${\mathcal X}$ in the Lie algebra of the boosts such that \cite{RS}
\beq
\lim_{\tau\rightarrow \infty}\exp(\tau{\mathcal X})(T) = T_\infty,
\label{eq:limit}\eeq
which is finite. Let $\tilde{\mbold b}$ be the boost that represents ${\mathcal X}$. Then, if $(T)_{\mbold b}\neq 0$, we get the requirement $\tilde{\mbold b}\cdot{\mbold b }\leq 0$. In particular,
\beq
\exp(\tau\tilde{\mbold b}\cdot{\mbold b })(T)_{\mbold b} \longrightarrow \begin{cases}\quad (T)_{\mbold b}, \quad& \tilde{\mbold b}\cdot{\mbold b }=0, \\
\quad 0, \quad & \tilde{\mbold b}\cdot{\mbold b }<0,
\end{cases}
\eeq
and all other components $(T)_{\mbold b}$ must be zero (or else the limit would not exist):
\beq
\label{eq:posbw=0}
(T)_{\mbold b}=0, \quad \tilde{\mbold b}\cdot{\mbold b}>0.\eeq
This implies the ${\bf S}_1$-property.
Henceforth, we will refer to the boost that generates the (pointwise) limit, eq. (\ref{eq:limit}), as the \emph{boost vector} and denote it ${\mbold b}$.
\subsection{The $\mathcal{I}$-degenerate metrics}
\label{sect:deg}
We will first prove a result which is useful in the understanding of the relation between different metrics with the same invariants, and to understand how the limit, eq.(\ref{eq:limit}), from the invariant theory point of view, can be achieved.
\begin{lem}\label{Lemma}
Consider a null-frame $\{\mbox{{\mbold\ell}}^i,\mbox{{\bf n}}^i,{\mbox{{\bf m}}}^a\}$ at a point $p$. Given also a boost of the frame as follows at $p$:
\beq \label{eq:frameboost}
\{\mbox{{\mbold\ell}}^i,\mbox{{\bf n}}^i,\mbox{{\bf m}}^a\}\mapsto \{e^{\lambda_i\tau}\mbox{{\mbold\ell}}^i,e^{-\lambda_i\tau}\mbox{{\bf n}}^i,\mbox{{\bf m}}^a\}, \quad \tau\in \mb{R}, (\lambda_i)\in \mb{R}^k.\eeq
Then there exist neighbourhoods $U$ and $\widetilde{U}$ of $p$ and coordinate systems $(u^i,v^i,x^a)$ of $U$ and $(\tilde{u}^i,\tilde{v}^i,\tilde{x}^a)$ of $\widetilde{U}$ where $p$ is the origin of each coordinate system, such that the diffeomorphism $\varphi_\tau:U\rightarrow \widetilde{U}$, given by:
\beq
(\tilde{u}^i,\tilde{v}^i,\tilde{x}^a)=(e^{\lambda_i\tau}{u}^i,e^{-\lambda_i\tau}{v}^i,{x}^a),
\label{diffboost}\eeq
induces the boost $(\ref{eq:frameboost})$ at $p$. Furthermore, the diffeomorphism $\varphi_\tau$ can be considered as a 1-parameter family of diffeomorphisms generated by the vector field:
\beq
X=\sum_i\lambda_i\left(u^i\frac{\partial}{\partial u^i}-v^i\frac{\partial}{\partial v^i}\right).
\eeq
\end{lem}
\begin{proof}
Choose a sufficiently small neighbourhood, $U$, around the point $p$. The boost eq.(\ref{eq:frameboost}) induces a transformation in the tangent space $T_pM$. Let $\phi: U\rightarrow \mb{R}^{2k+m}$ be a smooth map mapping the one-forms $\mbox{{\mbold\ell}}^i$ onto $du^i$ and $\mbox{{\bf n}}^i$ onto $dv^i$, at $p$. This choice amounts to choosing a coordinate system where the coordinate vectors align with the $\mbox{{\mbold\ell}}^i$'s and $\mbox{{\bf n}}^i$'s at $p$. Such a choice can always be made since $p$ is merely a point. The diffeomorphism (\ref{diffboost}) gives now the desired boost. The vector field $X$ can now be found by differentiation of $\varphi_\tau$ w.r.t. $\tau$.
\end{proof}
We will use this boost to get a sufficient criterion of $\mathcal{I}$-degenerate metrics. We construct a metric:
\[ \widetilde{g}_{\tau} =\varphi_\tau^*g.\]
\\
Given $U$, let $\mf{M}$ be the space of smooth metrics on $U$. We will make the following assumption:
\\
\\
{\it There exists a neighbourhood $U$ so that the metric, in the coordinates given, has a finite limit $\lim_{\tau\rightarrow\infty}\varphi_\tau^*g\in \mf{M}$ with a boost with respect to any given point $p\in U$.
}
\\
\\
This assumption places clear constraints on the possible metric. Let us consider these in detail. Now, the boost above is with respect to the origin of the coordinate system. We consider a neighbourhood, $U$, that is sufficiently small to be covered by one coordinate chart. Choose an arbitrary point $p\in U$, which is given by $(u_0^i,v_0^i,x^a_0)$. We shift this point to the origin, by introducing $(\bar{u}^i,\bar{v}^i,\bar{x}^a)=({u}^i-u^i_0,{v}^i-v_0^i,{x}^a-x^a_0)$, and then apply the above boost. The above assumption now implies that the corresponding limit should be finite for all constants $(u_0^i,v_0^i,x^a_0)$ in the neighbourhood $U$.
\\
\\
The metric consists of symmetric components $g_{\mu\nu}(u^i,v^i,x^a)dx^\mu dx^{\nu}$, which, after translation of the point $p$ to the origin of the coordinate system, turn into:
$g_{\mu\nu}(u_0^i+\bar{u}^i,v_0^i+\bar{v}^i,x_0^a+\bar{x}^a)d\bar{x}^\mu d\bar{x}^{\nu}$. With no loss of generality, we can assume the boost given is:
\[ (\bar{u}^i,\bar{v}^i,\bar{x}^a)\mapsto (e^{-\lambda_i\tau}\bar{u}^i,e^{\lambda_i\tau}\bar{v}^i,\bar{x}^a), \qquad \lambda_i>0, \]
(we include all null-directions having $\lambda_j=0$ in $x^a$). First, we note that the components
\[ d\bar{v}^i d\bar{v}^j, \qquad d\bar{v}^id\bar{x}^a, \]
have to vanish on $U$. This can be seen as follows: if there is a point $p$ on $U$ for which we have
\[ f(p)d\bar{v}^i d\bar{x}^a \neq 0, \qquad \text{then} \qquad f(p)e^{\lambda_i\tau}d\bar{v}^i d\bar{x}^a\rightarrow \infty. \]
Clearly, the same argument is valid for $d\bar{v}^i d\bar{v}^j$ as well. Next, consider the metric for the transversal space:
\[ g_{ab}d\bar{x}^ad\bar{x}^b.\]
Since the metric is smooth (as well as the limit), the partial derivative of $g_{ab}$ w.r.t. $\bar{v}^i$ exists for any $\tau$. Then, considering it as a function of $\bar{v}^i$:
\beq
g(v^i_0+e^{\lambda_i\tau}\bar{v}^i)_{ab,\bar{v}^i}=g'(v^i_0+e^{\lambda_i\tau}\bar{v}^i)_{ab} e^{\lambda_i\tau}.
\eeq
Consequently, $g'$ has to be zero (same argument as above), and hence, the components $g_{ab}$ do not depend on the $\bar{v}^i$'s, and therefore, $g_{ab}=g_{ab}(\bar{u}^i,\bar{x}^a)$.
For the components containing one or two $d\bar{u}^i$'s, we note, by taking derivatives with respect to the various $\bar{v}^j$'s, that they must be polynomials in the coordinates $(\bar{v}^i)$, but are arbitrary smooth functions in $(\bar{u}^i, \bar{x}^a)$. The order of the polynomial (in the $\bar{v}^i$'s) depends on the actual boost, as well as on which component we consider.
In the following we will also, for simplicity, introduce some notation. Let $P(v_1,v_2,...,v_k)$ be a polynomial in the $v_i$'s with coefficients being arbitrary functions of $(u^i, x^a)$ (henceforth, we will sometimes write the index of the $v$-coordinates downstairs for more appealing typesetting). Define $\mathcal{P}$ to be the ring of all such polynomials:
\[ \mathcal{P}:=\left\{P(v_1,v_2,...,v_k) ~~| ~~ P ~ \text{polynomial, coefficients depend on}~ (u^i, x^a)\right\} \]
We will define subsets of this set and indicate them with a bracket $[-,..,-]$. The bracket consists of a list of monomials in the $v_i$'s and indicates the highest allowable power of the $v_i$'s. For example, $[v_1^3,v_2v_3^5]\subset \mathcal{P}$ is the subset including the following powers: $v_1^n, n=0,...,3$, and $v_2^mv_3^q$, $m=0,1$ and $q=0,...,5$.
We therefore, end up with the following metric (switching back to non-barred coordinates):
\beq\label{result}
g=2du^i\left(a_{ij}dv^j+A_{ij}du^j+B_{ia}dx^a\right)+g_{ab}dx^adx^b,
\eeq
where $g_{ab}=g_{ab}(u^i,x^a)$ and $a_{ij}$, $A_{ij}$, and $B_{ia}$ are polynomials belonging to some subset of $\mathcal{P}$, with arbitrary smooth coefficients in $(u^i,x^a)$. We therefore conclude that these metrics form a subclass of the metrics of type I considered in section \ref{sect:subclasses}.
\subsection{The polynomial invariants}
These metrics represent $\mathcal{I}$-degenerate metrics in the sense that many non-diffeomorphic metrics have the same invariants. Indeed,
\begin{thm}
\label{thm:varphi}
Consider a point $p\in U$, and assume that there is a one-parameter family of diffeomorphisms $\varphi_\tau$ such that $\varphi_\tau(p)=p$ and $\lim_{\tau\rightarrow\infty}\varphi_\tau^*g=g_0\in \mf{M}$. Then any polynomial curvature invariant of $g$ evaluated at $p$ is identical to the corresponding invariant of $g_0$ at $p$.
\end{thm}
\begin{proof}
At $p$, the diffeomorphism induces a frame transformation on the tangent space. Since any curvature invariant, $I$, does not depend on such a frame choice, the transformation of $I$ under $\varphi_\tau$ is simply:
\[\left.\varphi_{\tau}^*I\right|_{p}=\left.I\right|_{\varphi_\tau(p)}=\left.I\right|_{p}.\]
Since the metric is smooth, including its limit, any derivative of the metric, evaluated at $p$, $\left.\partial^{(n)}g\right|_p$ is well-defined in the limit as well. Consequently, since the invariants are continuous functions in $g$ and its derivatives, the limit implies:
\[ \lim_{\tau\rightarrow \infty}\left.\varphi_{\tau}^*I\right|_{p}=\left.I\right|_{p}=\left.I\left[\lim_{\tau\rightarrow\infty} \varphi_{\tau}^*g\right]\right|_p=\left.I[g_0]\right|_p.
\]
\end{proof}
We also note the following fact about any metric $g_0$ being the limit of such a boost:
\begin{prop}\label{prop:isometry}
Assume that $\lim_{\tau\rightarrow\infty}\varphi_\tau^*g=g_0\in \mf{M}$, where $\varphi_\tau$ is a one-parameter family of diffeomorphisms. Then $\varphi_\tau^*g_0=g_0$ and consequently, $\varphi_\tau$ is an isometry of $g_0$.
\end{prop}
\begin{proof}
We observe that:
\[ \varphi_\tau^*g_0=\varphi_\tau^*\left(\lim_{\lambda\rightarrow\infty}\varphi_\lambda^*g\right)=
\lim_{\lambda\rightarrow\infty}(\varphi_\lambda\circ\varphi_\tau)^*g=\lim_{\lambda\rightarrow\infty}\varphi_{\lambda+\tau}^*g=\lim_{\lambda\rightarrow\infty}\varphi_\lambda^*g=
g_0.\]
\end{proof}
This implies that the limiting metric has extra symmetries compared to $g$. We also note that if $g_0$ turns out to be $g$ (perhaps in some disguise), then $g$ must possess the symmetry $\varphi_\tau$ as well: assume that there is a diffeomorphism $f$ so that $f^*g=g_0$. Then by applying $\varphi_\tau^*$ on each side, we get:
\[ \varphi_\tau^*f^*g=\varphi_{\tau}^*g_0=g_0=f^*g. \]
Then applying $f^{-1}$ on each side we obtain:
\[ (f^{-1})^*\varphi_\tau^*f^*g=(f\circ\varphi_\tau\circ f^{-1})^*g=(f^{-1})^*f^*g=(f\circ f^{-1})^*g=g.\]
Hence, $f^{-1}\circ\varphi_\tau\circ f$ is an isometry of $g$. In this case, $g$ and $g_0$ are diffeomorphic, so we might as well just use the (possibly) simpler metric, $g_0$, to represent our space. Clearly, this means that as long as $\varphi_\tau$ (or $f^{-1}\circ\varphi_\tau\circ f$) is not an isometry of $g$, then $g$ and $g_0$ must necessarily be two non-diffeomorphic metrics. In this sense, the existence of $\varphi_\tau$ implies that the metric $g$ (and its curvature structure) is $\mathcal{I}$-\emph{degenerate}.
\\
\\
We now want to take a closer look at the coefficients of the degenerate metrics. Because of the transformation properties of the $v^i$'s under the boost, the coefficients $a_{ij}$, $A_{ij}$ and $B_{ia}$ cannot be polynomials in the $v^i$'s of arbitrarily high degree.
\section{Constraining the coefficients of $\mathcal{I}$-degen\-erate metrics}
Given the null frame and boost in $(\ref{eq:frameboost})$, the form of the $\mathcal{I}$-degen\-erate metrics is:\\
\beq
g=2du^i\left(a_{ij}dv^j+A_{ij}du^j+B_{ia}dx^a\right)+g_{ab}dx^adx^b,
\eeq
where the coefficients $a_{ij}, A_{ij}$ and $B_{ia}$ are polynomials in the $v^i$'s. In order for the limit, $\lim_{\tau\rightarrow\infty}\varphi_\tau^*g\in \mf{M}$ to exist, the coefficients cannot be polynomials of arbitrarily high order in the $v^i$'s. This is because the $v^i$'s transform as $e^{\tau \lambda_i}v^i$ under boosts and the limit might blow up. We now use the boost-weight decomposition on the metric to get a handle on the $v^i$ dependence of the coefficients.
\\
\\
The vector space decomposition separates the metric into (coordinate)-components that transform as $g_{\mu\nu} \rightarrow e^{(b_1 \lambda_1 + b_2 \lambda_2 + ... + b_k \lambda_k)}g_{\mu\nu}$ under the action of $\varphi_\tau$ (similarly to eq.(\ref{eq:T_b}) but for a coordinate basis). This is therefore a useful point of view when analyzing the limiting behaviour of the components. In addition to the components we must include the behaviour of the differentials $du^idu^j$, $du^idv^j$ and $du^idx^a$ under boosts. This motivates the following definition:
\\
\\
$\boldsymbol{V} = (d_1,d_2,...,d_k) + \boldsymbol{v}_{ij}\in\mathbb{Z}^k.$
\\
\\
The $d_i$'s are non-negative integers corresponding to the subsets $[v_1^{d_1}v_2^{d_2}...v_k^{d_k}]\subset\mathcal{P}$ to which we assume the polynomials $a_{ij}$, $A_{ij}$ or $B_{ia}$ belong. The differentials are accounted for by $\boldsymbol{v}_{ij}$, which for each $i,j$ is the vector given as follows (all other components zero):
\begin{itemize}
\item{} For component $a_{ij}$: $-1$ at the $i$'th place (the $du^i$ differential) and $+1$ at the $j$'th place (the $dv^j$ differential). If $i=j$ then $\bold{v}_{ij}=0$.
\item{} For component $A_{ij}$: $-1$ at the $i$'th place (the $du^i$ differential) and $-1$ at the $j$'th place (the $du^j$ differential). If $i=j$ then $-2$ at the $i$'th place.
\item{} For component $B_{ia}$: $-1$ at the $i$'th place (the $du^i$ differential).
\end{itemize}
The vector $\boldsymbol{V}$ is thus different depending on which components of the metric we are looking at. If we pick out the term $2a_{ij} du^i dv^j$ then:
\\
\\
$\boldsymbol{V} = (d_1,d_2,...,d_i-1,...,d_j+1,...,d_k)$.
\\
\\
Demanding that the limit from section \ref{sect:deg} is finite gives a bound $\boldsymbol{V} \cdot \boldsymbol{b} \leq 0$, where $\boldsymbol{b}$ is the boost representing $\cal{X}$ in eq.(\ref{eq:limit}). Recall that the components of $\boldsymbol{b}$ are zero or positive. The bound $\boldsymbol{V} \cdot \boldsymbol{b} \leq 0$ ensures that we get a well-behaved limit, and the inequality constrains the maximal degree of the polynomials $[v_1^{d_1}v_2^{d_2}...v_k^{d_k}]$.
\\
\\
Before we show a concrete example we will do two things. Firstly, we are only interested in the maximum degree of the $v^i$'s and so we set $\boldsymbol{V} \cdot \boldsymbol{b} = 0$. Secondly, for a given\footnote{It is the $k$ from the dimension of the manifold: $2k +m$ with signature $(k,k+m)$} $k$ we have a freedom of choice of how we would like to specify the boost-vectors.
\\
\\
We write the boost vector $\boldsymbol{b}$ in the form $\boldsymbol{b}=(n_1, n_2,...,n_k) \in \mathds{Z}^k$. By utilising a linear transformation from $\mathds{Z}^k \rightarrow \Gamma$, where $\Gamma$ is a lattice in $\mathds{R}^k$, we will consider the cases where we can put the following conditions on the entries of the boost vectors\footnote{We should point out that there may be other possibilities for the boost vector ${\mbold b}$; for example, we can generalise point 2. to include $n_i<n_{i+1}\leq 2n_i$. A complete list of all non-equivalent boosts is not given here.}:\begin{enumerate}
\item $n_i \leq n_{i+1}$,
\item If $n_i < n_{i+1}$ then $n_{i+1} = 2n_i$,
\item If $n_i = 0$ then $n_{i+1} = 0$ or $1$.
\end{enumerate}
For a given $k$ there are then $2^k$ different boost vectors $\boldsymbol{b}$ in this category.
\\
\\
\subsubsection*{Examples:}
For $k=2$, the complete set of boost vectors $\boldsymbol{b}$ are $(0,0)$, $(0,1)$, $(1,1)$ and $(1,2)$.\\
\\
For $k=3$, the complete set of boost vectors $\boldsymbol{b}$ in this category are $(0,0,0)$, $(0,0,1)$, $(0,1,1)$, $(0,1,2)$, $(1,1,1)$, $(1,1,2)$, $(1,2,2)$, $(1,2,4)$. \\
Furthermore, in the boost-vectors with a leading zero, e.g., $(0,0,1)$, the zeros indicate that the boost does not involve these directions. This implies that there are no constraints on these variables and no degeneracy in these directions. Consequently, these directions can be included in the transverse space (and hence, generalising the transverse space and allowing it to be pseudo-Riemannian as well). In the case where there are only zeros, $(0,...,0)$, this corresponds to the $\mathcal{I}$-non-degenerate case where no boost exists.
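The three conditions above are easy to enumerate mechanically. The following Python sketch is our own aid (the function and variable names are arbitrary): it builds the $2^k$ admissible boost vectors by extending each vector one entry at a time, either repeating the last entry or `doubling' it.
\begin{verbatim}
def boost_vectors(k):
    # Enumerate the 2**k boost vectors (n_1,...,n_k) obeying
    # conditions 1.-3.: each new entry either repeats the previous
    # entry or doubles it (with 0 -> 1 when the previous entry is 0).
    vecs = [(0,), (1,)]
    for _ in range(k - 1):
        new = []
        for vec in vecs:
            n = vec[-1]
            new.append(vec + (n,))                  # n_{i+1} = n_i
            new.append(vec + (2 * n if n else 1,))  # n_{i+1} = 2 n_i
        vecs = new
    return vecs

print(boost_vectors(2))  # [(0,0), (0,1), (1,1), (1,2)]
print(boost_vectors(3))  # the eight k = 3 vectors listed above
\end{verbatim}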
\subsection{A concrete example}
Suppose we set $k = 3$, specify the boost-vector to be $\boldsymbol{b} = (1,2,4)$ and decide to pick out terms in front of $du^1 du^1$. Then $\boldsymbol{V} = (d_1-2,d_2,d_3)$. Writing out the dot product $\boldsymbol{V} \cdot {\mbold b} = 0$, we get:
\[ d_1 + 2d_2 + 4 d_3 = 2\]
From this equality we gather that the $v_i$-dependence in front of $du^1 du^1$ is restricted to the two cases: $(v_1^2,v_2^0,v_3^0)$ or $(v_1^0,v_2^1,v_3^0)$. Hence, $A_{11}\in [v_1^2,v_2]$. If we do this for all the metric components $A_{ij}=A_{ji}$ we get:
\begin{enumerate}[i)]
\item For $A_{11}$:
$d_1 + 2d_2 + 4d_3 = 2 $
\item For $A_{12}$:
$d_1 + 2d_2 + 4d_3 = 3 $
\item For $A_{22}$:
$d_1 + 2d_2 + 4d_3 = 4 $
\item For $A_{13}$:
$d_1 + 2d_2 + 4d_3 = 5 $
\item For $A_{23}$:
$d_1 + 2d_2 + 4d_3 = 6 $
\item For $A_{33}$:
$d_1 + 2d_2 + 4d_3 = 8 $
\end{enumerate}
Finally, we can gather this information and write out the maximal allowed monomials of these metric coefficients in matrix form.
\\
\\
\[ A_{ij} =
\begin{pmatrix}
\left[v_1^2,v_2\right] & \left[v_1^3,v_1v_2\right] & \left[v_1^5,v_1^3v_2,v_1v_2^2,v_3v_1\right] \\
\cdots & \left[v_1^4,v_1^2v_2,v_2^2, v_3\right] & \left[v_1^6,v_1^4v_2,v_1^2v_2^2,v_2^3,v_3v_1^2, v_3v_2\right] \\
\cdots & \cdots & \left[v_1^8,v_1^6v_2,v_1^4v_2^2,v_1^2v_2^3,v_2^4,v_3v_1^4,v_3v_1^2v_2,v_3v_2^2,v_3^2\right]
\end{pmatrix}
\]
\\
\\
For the matrix $a_{ij}$ and the vector $B_{ia}$ the result is:
\[ a_{ij} =
\begin{pmatrix} 1 & 0 & 0 \\
\left[v_1\right] & 1 & 0 \\
\left[v_1^3,v_1v_2\right] & \left[v_1^2,v_2\right] & 1
\end{pmatrix}, \quad B_{ia}=\begin{pmatrix}
\left[v_1\right] \\
\left[v_1^2,v_2\right] \\
\left[v_1^4,v_1^2v_2,v_2^2,v_3\right]
\end{pmatrix}^T
\]
Note, however, we still have some freedom given in eq.(\ref{v-trafo}) to simplify the matrix $a_{ij}$. Using this transformation we can simplify $a_{ij}$ to be:
\[ a_{ij} =
\begin{pmatrix} 1 & 0 & 0 \\
0 & 1 & 0 \\
\left[v_1v_2\right] & \left[v_1^2\right] & 1
\end{pmatrix}.
\]
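These bracket sets can be checked mechanically: the maximal monomials of a given component are exactly the non-negative integer solutions of ${\mbold V}\cdot{\mbold b}=0$. The following Python sketch is our own verification aid (names are arbitrary) for the $A_{ij}$ of this example.
\begin{verbatim}
from itertools import product

b = (1, 2, 4)                        # the boost vector of this example

def max_monomials(c):
    # Non-negative solutions (d1,d2,d3) of d1 + 2*d2 + 4*d3 = c; each
    # solution is a maximal monomial v1^d1 v2^d2 v3^d3 of a bracket.
    ranges = [range(c // bi + 1) for bi in b]
    return [d for d in product(*ranges)
            if sum(di * bi for di, bi in zip(d, b)) == c]

# A_ij multiplies du^i du^j, so V.b = 0 becomes d.b = b_i + b_j:
for i, j in [(1, 1), (1, 2), (2, 2), (1, 3), (2, 3), (3, 3)]:
    c = b[i - 1] + b[j - 1]
    print("A_%d%d: d.b = %d ->" % (i, j, c), max_monomials(c))
\end{verbatim}
Running it reproduces the entries of the matrix $A_{ij}$ above; for instance, $c=8$ returns the nine monomials of $A_{33}$.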
\subsection{General observations}
\label{General observations}
A list of equalities for the matrix $A_{ij}$ and for our class of boost-vectors up to dimension $k=4$ can be found in appendix B. There are a few general observations.
\paragraph{The case $k=1$: Kundt case.} We note that for $k=1$ there is only one possible non-trivial boost vector. All of these cases reduce to Kundt metrics, for which $a_{ij}$ and $A_{ij}$ have only one component each: $a_{11}=1$, $A_{11}\in[v_1^2]$, and $B_{1a}\in[v_1]$. These metrics are therefore consistent with the previous analysis of these metrics \cite{kundt}.
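Written out explicitly, and with coefficient names $H^{(r)}$ and $W^{(r)}_a$ that are ours rather than the paper's, this $k=1$ case therefore reads
\[
ds^2 = 2du\Big(dv + \big(v^2H^{(2)} + vH^{(1)} + H^{(0)}\big)du + \big(vW^{(1)}_a + W^{(0)}_a\big)dx^a\Big) + g_{ab}\, dx^a dx^b,
\]
where all the coefficients, as well as $g_{ab}$, are arbitrary functions of $(u,x^a)$.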
\paragraph{The case $k=2$: Kundt or type II} There are three non-trivial cases here, ${\mbold b}=(0,1), ~(1,1)$ and $(1,2)$. The first case can be considered as a $k=1$ by allowing the transverse metric be pseudo-Riemannian (i.e., $g_{ab}dx^adx^b$ is pseudo-Riemannian). Hence, this is a Kundt case. For the cases $(1,1)$ and $(1,2)$, we note that for both we can reduce the matrix $a_{ij}=\delta_{ij}$. This is the type II case in section \ref{sect:subclasses}.
\\
\\
In 4 dimensions, this is the neutral case where there are no components $B_{ia}$. Consequently, in 4 dimensions both of these cases must also be Walker cases, type III. Hence, this is in agreement with the results found in \cite{VSI2}.
\paragraph{Cases $(1,...,1)$: type II.} In all of the cases where the boost vector is $(1,...,1)$, all the $v_i$'s carry the same boost weight. This means that the matrix $a_{ij}=\delta_{ij}$, and hence the metric is of type II. Furthermore, the matrix $A_{ij}$ can be at most quadratic in the $v_i$'s, and $B_{ia}$ at most linear.
\\
\\
In the special case where the space is neutral of dimension $2k$, then $B_{ia}$ is not present and hence, the space is Walker (type III).
\paragraph{Covariantly constant ${\mbold F}$: type IV.} This case is a subclass of the Walker spaces. If we assume Walker, then this case is equivalent to the additional requirement:
\[ \sum_{i}\partial_iA_{ij}=0. \]
Examples of metrics obeying this condition are plentiful. One example is (all indices are written downstairs to avoid clutter):
\[
ds^2=2du_1(dv_1+v_2du_1)+2du_2(dv_2+v_3du_2)+2du_3(dv_3+v_1^8du_3).
\]
The 3-form:
\[ \boldsymbol{F}=du_1\wedge du_2\wedge du_3 \]
is for this metric covariantly constant (and hence, is a null Killing-Yano tensor).\footnote{The fact that $\boldsymbol{F}$ satisfies Killing-Yano equations for this type of metrics has been spotted by using a systematic algorithm developed in \cite{KYsearch}.}
\subsection{The limiting spaces}
As pointed out, applying the diffeomorphism $\varphi_\tau$ gives a space with identical invariants, including the limiting space as $\tau\rightarrow \infty$. With respect to a point $p$, which we can assume has coordinates $(v_i,u^i, x^a)=(0,u_0^i,x^a_0)$, the subleading powers of $v_i$ will tend to zero as $\tau\rightarrow \infty$. For example, if we consider the ${\mbold b}=(1,2,4)$ case, then choosing the $A_{23}$ component ($A_{23}\in \left[v_1^5,v_1^3v_2,v_1v_2^2,v_3v_1\right]$)
\[ \varphi^*_{\tau}A_{23}du^2du^3 \longrightarrow (av_1^5+bv_1^3v_2+cv_1v_2^2+dv_3v_1)du^2du^3,\]
where $a,b,c,d$ are functions of $(u^i_0,x^a)$. Note that in the limit the coordinates $u^i$ tend towards the constants $u^i_0$, so in evaluating the invariants one needs to keep $u^i_0$ fixed, while the $x^a$ remain unaffected. Note also that the limit itself is symmetric w.r.t. $\varphi_\tau^*$, thereby confirming Prop. \ref{prop:isometry}.
\subsubsection{VSI spaces.} An interesting subclass of these spacetimes is the class where all polynomial curvature invariants vanish. Such spacetimes are those for which there exists a boost ${\mbold b}'$ so that the corresponding diffeomorphism $\varphi'_\tau$ has flat space as a limit: $ {\varphi'}_{\tau}^*g \longrightarrow \text{flat space}. $
This contains all the spaces above where the polynomial terms are all of subleading order. However, it also includes those spaces for which there exists a perturbation ${\mbold \epsilon}$ of ${\mbold b}$ such that the boost ${\mbold b}'={\mbold b}+{\mbold\epsilon}$ gives flat space in the limit.
Note also that if there is a sequence of such boost limits which eventually leads to flat space, then this is sufficient to prove that the space is VSI as well. Therefore, a space is VSI if there exists a sequence of boosts such that:
\[ g\longrightarrow g_1\longrightarrow \cdots \longrightarrow \text{flat space}.\]
Each arrow indicates an infinite boost limit.
As an example of this is the following space:
\[
g=2du_1(dv_1+v_2du_1)+2du_2(dv_2+v_3du_2)+2du_3(dv_3+v_1^7du_3).
\]
By using the boost ${\mbold b}_1=(1,2,4)$, the limiting space is:
\[ g_1=2du_1(dv_1+v_2du_1)+2du_2(dv_2+v_3du_2)+2du_3dv_3.\]
Using next the boost ${\mbold b}_2=(1,1,0)$, gives the limit:
\[ g_2=2du_1dv_1+2du_2dv_2+2du_3dv_3,\]
which is flat space. This proves that the metric $g$ is VSI.
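The two limits can also be verified symbolically. The following \texttt{sympy} sketch is our own check (the differentials are treated as formal commuting symbols, which suffices for tracking how each term scales): it substitutes the boosted coordinates and takes the limit term by term.
\begin{verbatim}
import sympy as sp

u = sp.symbols('u1:4'); v = sp.symbols('v1:4')
du = sp.symbols('du1:4'); dv = sp.symbols('dv1:4')
t = sp.symbols('tau', positive=True)

def boost_limit(g, b):
    # v_i -> exp(b_i*tau) v_i and dv_i -> exp(b_i*tau) dv_i, with the
    # opposite scaling for u_i and du_i; then tau -> oo term by term.
    s = {}
    for i in range(3):
        e = sp.exp(b[i] * t)
        s.update({u[i]: u[i]/e, v[i]: v[i]*e,
                  du[i]: du[i]/e, dv[i]: dv[i]*e})
    expr = sp.expand(g.subs(s))
    return sp.Add(*[sp.limit(term, t, sp.oo)
                    for term in sp.Add.make_args(expr)])

g = (2*du[0]*(dv[0] + v[1]*du[0]) + 2*du[1]*(dv[1] + v[2]*du[1])
     + 2*du[2]*(dv[2] + v[0]**7*du[2]))
g1 = boost_limit(g, (1, 2, 4))   # the v1**7 term decays like exp(-tau)
g2 = boost_limit(g1, (1, 1, 0))  # 2*du1*dv1 + 2*du2*dv2 + 2*du3*dv3
print(g1); print(g2)
\end{verbatim}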
\subsubsection{CSI spaces.} Another subclass consists of metrics whose polynomial invariants are all constants. Such spaces can be found by considering a sequence of limits having a homogeneous space as its end product:
\[ g\longrightarrow g_1\longrightarrow \cdots \longrightarrow \text{homogeneous space}.\]
All spaces having such a sequence of limits will be CSI.
For spaces where $k\geq 2$, a sequence of limits ending at a homogeneous space is sufficient but not necessary for it to be a CSI. For example, the 4-dimensional space (of boost-type (1,2))
\[ 2du_1(dv_1+v_2du_2)+2du_2(dv_2+v_1^4du_2),\]
has only 3 Killing vectors, and hence, is not a homogeneous space. Yet, it is still a CSI space (and cannot be simplified further by a limiting procedure). One would like to have a criterion for a CSI space which states that a (degenerate) space is CSI if and only if there is a sequence:
\[ g\longrightarrow g_1\longrightarrow \cdots \longrightarrow g_{\infty}. \]
However, it is not clear what the metric $g_{\infty}$ is, although there are some conditions that it has to satisfy. Firstly, the transverse space needs to be a homogeneous space. Secondly, the additional null-directions will possess (at least) $k$ translations and one boost as isometries: hence, $g_{\infty}$ will possess a minimum of $(k+m+1)$ Killing vectors (a homogeneous space has at least $(2k+m)$ Killing vectors, hence $g_\infty$ need not be homogeneous).
Examples of such CSI spaces are spaces having a Lie group $G$ as transversal space, equipped with a left-invariant metric. If ${\mbold \omega}^a$ are left-invariant 1-forms on the group $G$, then $g_{ab}{\mbold \omega}^a{\mbold \omega}^b$, where $g_{ab}$ is a constant matrix, is a left-invariant metric on $G$. Furthermore, if the leading-order coefficients of the polynomials in the $v_i$'s (those saturating the bound ${\mbold V}\cdot {\mbold b}=0$) in $a_{ij}$ and $A_{ij}$ are constants, and likewise the coefficients of the matrix $B_{ia}{\mbold \omega}^a$ (using the left-invariant 1-forms as basis 1-forms), then the space is CSI.
As an 8-dimensional example, let
\[ {\mbold\omega}^1=dw, \quad {\mbold \omega}^2=e^{2w}[dx +(1/2)(ydz-zdy)],\quad {\mbold \omega}^3 = e^wdy, \quad {\mbold \omega}^4=e^wdz, \]
(or any left-invariant one-forms on a 4-dimensional Lie group)
and $g_{ab}$ be any constant matrix. Let also
${\mbold\eta}$ be any (constant) linear combination of the ${\mbold \omega}^a$'s, i.e.,
\[ {\mbold \eta}=a{\mbold \omega}^1+b{\mbold \omega}^2+c{\mbold \omega}^3+d{\mbold \omega}^4.\]
So an example of an 8-dimensional CSI is:
\beq
g&=&
2du_1[dv_1+(a_1v_2+a_2(u_i,x_a))du_1+(b_1v_1+b_2(u_i,x_a)){\mbold\eta}_1]\nonumber \\ & +&2du_2[dv_2+(c_1v^4_1+c_2(u_i,x_a)v^3_1)du_2+(d_1v_1^2+d_2(u_i,x_a)v_1+d_3(u_i,x_a)){\mbold\eta}_2]\nonumber \\
&+& g_{ab}{\mbold\omega}^a{\mbold\omega}^b,
\eeq
where all coefficients are constants except where $(u_i,x_a)$-dependence is explicitly indicated (more non-zero metric functions are possible; only some are included here). In this case the limit is:
\beq
2du_1[dv_1+a_1v_2du_1+b_1v_1{\mbold\eta}_1]+2du_2[dv_2+c_1v^4_1du_2+d_1v_1^2{\mbold\eta}_2]\nonumber
+ g_{ab}{\mbold\omega}^a{\mbold\omega}^b,
\eeq
which is a homogeneous space. A plethora of other examples of CSI spaces can be found using the same procedure.
\section{Conclusion}
In this paper we have discussed pseudo-Riemannian spaces with degenerate curvature structure. Under a simple assumption we found a class of $\mathcal{I}$-degenerate metrics, eq. (\ref{result}). This class includes all examples known to date, as well as new families of examples, showing that this class is bigger than previously known.
Examples of VSI and CSI spaces have been given, as well as metrics with more special curvature properties. For example, the class contains the metrics of type IV, which possess a covariantly constant null $k$-form $\boldsymbol{F}$. Clearly, there are also subtypes of the main types listed here which are amenable to further study.
One question still lingers: are these all of the $\mathcal{I}$-degenerate spaces? The answer hinges on the following crucial assumption:
{\it For any point $p\in U$, there exists a (non-trivial) boost so the metric, in the coordinates given, has a finite limit $\lim_{\tau\rightarrow\infty}\varphi_\tau^*g\in \mf{M}$.
}
From invariant theory, we know that such a limit exists point-wise \cite{RS,VSI2}, however, the extension to a neighbourhood, $U$, is unsettled.
\section*{Acknowledgement}
KY would like to thank Tsuyoshi Houri for helpful discussions and for making his work available to us before publication, which eventually led us to the class of spaces considered in this article. The same author is also grateful to the University of Stavanger for its support. This work was partly supported by the JSPS Grant-in-Aid for Scientific Research No. 26$\cdot $1204.
\section{Arithmetic progressions and the Green-Tao Theorem}
Arithmetic progressions are among the most natural and well-studied mathematical objects. They are both aesthetically pleasing and rife with structure, two properties which make it particularly interesting to find them inside other objects which might, at first, seem complicated and unstructured. Such is the beauty of the Green-Tao Theorem, which asserts that the primes - one of the most fundamental, complicated, and subtle objects in mathematics - contain \emph{arbitrarily long} arithmetic progressions. This is a remarkable amount of additive regularity for an inherently multiplicative structure to possess.
An arithmetic progression of \emph{length} $k \geq 1$ and \emph{gap size} $\Delta>0$ is a set of the form
\[
\{x, \, x+\Delta, \, x+2\Delta, \, \dots, \, x+(k-1) \Delta \},
\]
for some $x \in \mathbb{R}$, that is, a collection of $k$ points each separated from the next by a common distance $\Delta$.
\begin{thm}[The Green-Tao Theorem \cite{greentao}]
The primes contain arbitrarily long arithmetic progressions, that is, for all integers $k \geq 1$ one can find an arithmetic progression of length $k$ lying somewhere in the primes.
\end{thm}
We consider the following weakened version of containing arithmetic progressions, introduced and studied in \cite{fraseryu, frasersaitoyu}.
\begin{defn}\label{AAP}
A set of positive integers $X \subseteq \mathbb{Z}^+$ \emph{gets arbitrarily close to arbitrarily long arithmetic progressions} if, for all $k \in \mathbb{N}$ and $\varepsilon>0$, there exists an arithmetic progression $P$ of length $k$ and gap size $\Delta>0$ such that
\[
\sup_{p \in P} \inf_{x \in X} |p-x| \leq \varepsilon \Delta.
\]
\end{defn}
This definition should be understood as saying that, for arbitrarily large $k$ and arbitrarily small $\varepsilon>0$, $X$ gets within $\varepsilon$ of an arithmetic progression of length $k$. The fact that $\varepsilon \Delta$ appears instead of $\varepsilon$ is the necessary normalization, based on the observation that all arithmetic progressions of length $k$ are essentially the same: they are all equal to $\{0,1,2, \dots, k-1\}$ upon rescaling and translation.
\begin{figure}[H]
\centering
\includegraphics[width= 0.45\textwidth]{aplines.png}
\caption{From top row to bottom row: three different approximations to an arithmetic progression of length 5, where $\varepsilon$ is 1/3, 1/10, 1/100, respectively, followed by a genuine arithmetic progression of length $5$. At this resolution the $\varepsilon=1/100$ case is indistinguishable from the genuine arithmetic progression.}
\end{figure}
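Definition \ref{AAP} is straightforward to test numerically for any given finite set and candidate progression. The following Python sketch is our own illustration (the function name is arbitrary); it computes the normalised approximation error $\sup_{p\in P}\inf_{x\in X}|p-x|/\Delta$.
\begin{verbatim}
def ap_error(X, x0, delta, k):
    # Normalised approximation error from the definition above:
    # sup over the progression of the distance to X, divided by the gap.
    return max(min(abs(x0 + j * delta - x) for x in X)
               for j in range(k)) / delta

# The primes below 100, by trial division:
primes = [p for p in range(2, 100)
          if all(p % q for q in range(2, int(p**0.5) + 1))]

# {5, 11, 17, 23, 29} is a genuine progression inside the primes:
print(ap_error(primes, 5, 6, 5))    # 0.0
# Shifting it off the primes gives a positive normalised error:
print(ap_error(primes, 5.5, 6, 5))  # 0.0833... = 0.5/6
\end{verbatim}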
\begin{thm} \label{almostAP}
The primes get arbitrarily close to arbitrarily long arithmetic progressions.
\end{thm}
To the untrained eye this theorem may have the same aesthetic appeal as the Green-Tao Theorem: getting arbitrarily close is good enough, right? However, this theorem is very straightforward to prove, compared with the 60 page epic published in \emph{Annals of Mathematics} which is required to establish the Green-Tao Theorem \cite{greentao}. Of course, it also follows directly from the Green-Tao Theorem. The purpose of this article is to give a simple and self-contained proof of Theorem \ref{almostAP}. It follows from three simple lemmas, which we will discuss in the following section and prove thereafter.
The Green-Tao Theorem is, by virtue of the fact that the sum of the reciprocals of the primes diverges (see Lemma \ref{lemma1} below), a special case of the Erd\H{o}s-Tur\'an conjecture on arithmetic progressions, which is a famous open problem in number theory dating back to 1936 \cite{erdos}. It states that if $X \subseteq \mathbb{Z}^+$ is such that
\[
\sum_{x \in X} 1/x = \infty,
\]
then $X$ should contain arbitrarily long arithmetic progressions. We also provide a straightforward proof of the weakened version of this conjecture using the same approach.
\begin{thm}\label{erdos}
If $X \subseteq \mathbb{Z}^+$ is such that
\[
\sum_{x \in X} 1/x = \infty,
\]
then $X$ gets arbitrarily close to arbitrarily long arithmetic progressions.
\end{thm}
This theorem was proved in \cite[Theorem 2.11]{fraseryu}, but follows directly from Lemmas \ref{lemma2} and \ref{lemma3}, which we prove below. Perhaps the interest of this result lies in the fact that, unlike in the case of the primes, the genuine version of the Erd\H{o}s-Tur\'an conjecture remains open.
\section{Proof by three lemmas}
An obvious necessary condition for containing arbitrarily long arithmetic progressions, or even getting arbitrarily close to arbitrarily long arithmetic progressions, is being infinite. The primes have been known to be infinite for a rather long time, the first proof often attributed to Euclid. However, we need more in order to proceed. Indeed, the positive integer powers of 2 form an infinite set, but it is a short exercise to see that they do \emph{not} get arbitrarily close to arbitrarily long arithmetic progressions; not even arithmetic progressions of length 4! If a sequence grows very quickly, then its reciprocals shrink very quickly, and so the fact that the reciprocals of the powers of 2 form a geometric series summing to 1 is an indication that the powers of 2 grow too fast. The following result, first proved by Euler, is a fundamental result in mathematics and is our first key ingredient.
\begin{lma}[Euler \cite{euler}] \label{lemma1}
The sum of the reciprocals of the primes diverges, that is,
\[
\sum_{p \ \textup{prime}} 1/p = \infty.
\]
\end{lma}
In the interest of being self-contained we present a well-known proof of this result due to Erd\H{o}s \cite{erdosprimes} in Section \ref{proof1}. The next step is to turn the fact that the sum of the reciprocals of the primes diverges into a more quantitative statement about the distribution of the primes. The following result is adapted from \cite[Lemma 2.10]{fraseryu} and we present a self-contained proof in Section \ref{proof2}.
\begin{lma} \label{lemma2}
If the sum of the reciprocals of a set of positive integers diverges, then the set has upper logarithmic density equal to 1, that is, if $X \subseteq \mathbb{Z}^+$ is such that
\[
\sum_{x \in X} 1/x = \infty,
\]
then
\[
\limsup_{n \to \infty} \sup_{m \geq 0}\frac{\log \# X \cap [m+1,m+n ]}{\log n} = 1.
\]
\end{lma}
The final step in establishing Theorem \ref{almostAP} is to show that maximal upper logarithmic density is enough to guarantee arbitrary closeness to arbitrarily long arithmetic progressions. This result follows from \cite[Theorem 2.4]{fraseryu}, see also \cite{frasersaitoyu}, but we present a self-contained and stripped back proof in Section \ref{proof3}.
\begin{lma} \label{lemma3}
If the upper logarithmic density of a set of positive integers is equal to 1, then the set gets arbitrarily close to arbitrarily long arithmetic progressions, that is, if $X \subseteq \mathbb{Z}^+$ is such that
\[
\limsup_{n \to \infty} \sup_{m \geq 0}\frac{\log \# X \cap [m+1,m+n ]}{\log n} = 1,
\]
then $X$ gets arbitrarily close to arbitrarily long arithmetic progressions.
\end{lma}
\section{Proofs}
\subsection{Proof of Lemma \ref{lemma1}} \label{proof1}
This proof is due to Erd\H{o}s \cite{erdosprimes} and is a classic example of proof by contradiction. List the primes in increasing order $p_1, p_2, \dots$ and suppose that
\[
\sum_{k=1}^\infty 1/p_k < \infty
\]
which means we can find a positive integer $L$ such that
\[
\sum_{k=L+1}^\infty 1/p_k \leq 1/2.
\]
Fix a large positive integer $N$ and write $A$ for the number of integers between 1 and $N$ which are not divisible by any primes strictly larger than $p_L$. If $n \leq N$ is such an integer then, writing $n=P^2Q$ where $P$ is an integer and $Q$ is a square-free integer whose prime factors are all at most $p_L$, one sees there are at most $\sqrt{N}$ choices for $P$ and $2^L$ choices for $Q$. Therefore
\[
A \leq \sqrt{N} 2^L.
\]
On the other hand
\[
N-A \leq \sum_{k=L+1}^\infty N/p_k \leq N/2
\]
since there are at most $N/p_k$ positive integers less than $N$ divisible by $p_k$. Therefore
\[
N/2 \leq A \leq \sqrt{N} 2^L
\]
which cannot be true for all $N$, yielding the desired contradiction. \qed
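Lemma \ref{lemma1} can be illustrated numerically, although the divergence is famously slow. The following self-contained Python sketch is our own (the comparison is consistent with Mertens' theorem, which states that $\sum_{p \leq N} 1/p = \log\log N + M + o(1)$ for a constant $M$): it sieves the primes and compares the partial sums with $\log \log N$.
\begin{verbatim}
from math import log

def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b'\x00\x00'
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

for N in (10**3, 10**5, 10**7):
    s = sum(1.0 / p for p in primes_up_to(N))
    print(N, round(s, 4), round(log(log(N)), 4))
# The partial sums track log log N: the series diverges,
# but extremely slowly.
\end{verbatim}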
\subsection{Proof of Lemma \ref{lemma2}} \label{proof2}
This proof is due to Fraser and Yu and is adapted from \cite{fraseryu}. List the elements of $X$ in increasing order $x_1, x_2, \dots$ and suppose, for a contradiction, that the upper logarithmic density of $X$ is strictly less than 1. It follows that there exist $s \in (0,1)$ and $C>0$ such that for all integers $m \geq 0$ and $n \geq 1$ we have
\[
\# X \cap [m+1, m+n] \ \leq \ C n^s.
\]
For integers $N \geq 0$ write $ X_N = X\cap [2^{N}, 2^{N+1})$ and note that by the upper logarithmic density assumption
\[
\# X_N \ \leq \ C 2^{sN}.
\]
Therefore
\[
\sum_{k=1}^\infty 1/x_k \ = \ \sum_{N=0}^\infty \ \sum_{k \, : \, x_k \in X_N} 1/x_k \ \leq \ \sum_{N=0}^\infty \left( \# X_N \right) 2^{-N} \ \leq \ \sum_{N=0}^\infty C 2^{(s-1)N} \ < \ \infty
\]
since $s<1$, which yields the desired contradiction, since we assumed that the sum of the reciprocals of the elements of $X$ diverges. \qed
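As a quick numerical illustration of the contrapositive (our own aside), a set with density exponent $s=1/2$, such as the perfect squares, has a convergent reciprocal sum:
\begin{verbatim}
# partial sum of 1/n^2; the full series converges to pi^2/6 = 1.6449...
print(sum(1.0 / (n * n) for n in range(1, 10 ** 6)))
\end{verbatim}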
\subsection{Proof of Lemma \ref{lemma3}} \label{proof3}
This proof is adapted from \cite{fraseryu} and \cite{frasersaitoyu}. Suppose $X \subseteq \mathbb{Z}^+$ does \emph{not} get arbitrarily close to arbitrarily long arithmetic progressions. That is, there exist $k \geq 2$ and $\varepsilon>0$ such that, given any arithmetic progression $P \subseteq \mathbb{R}$ of length $k$ and gap size $\Delta$,
\begin{equation} \label{avoid}
\sup_{p \in P} \inf_{x \in X} |p-x| > \varepsilon \Delta.
\end{equation}
We may assume for convenience that $1/(2 \varepsilon)$ is an integer, since we can always replace $\varepsilon$ with a smaller value and force this to be true. Fix a compact interval $J \subseteq \mathbb{R}$ of length $|J|>0$. Cut this interval into $k/(2 \varepsilon)$ equal pieces of length $|J|(2 \varepsilon)/k$ and label these from left to right by $1,2, \dots, k/(2 \varepsilon)$. On this set of labels (and associated intervals), form congruence classes modulo $1/(2 \varepsilon)$ and note that the centres of the intervals with labels in the same congruence class form an arithmetic progression of length $k$ and gap size $|J|/k$. It follows from \eqref{avoid} that at least one interval from each congruence class must not intersect $X$. Therefore $X \cap J$ is contained in the union of $(k-1)/(2 \varepsilon)$ intervals of length $|J|(2 \varepsilon)/k$. We apply this observation inductively starting with the interval $J_0= [m+1,m+n]$, where $m,n$ are arbitrary positive integers, and continuing with each of the subintervals of $J_0$ which intersect $X$. After $N$ applications of the inductive argument we find $X \cap J_0$ is contained in the union of at most $\left((k-1)/(2 \varepsilon)\right)^N$ intervals of length $(n-1)\left((2 \varepsilon)/k\right)^N$. Fix $N$ to be the smallest integer such that
\[
(n-1)\left(\frac{2 \varepsilon}{k} \right)^N < 1
\]
and note that, since $X \subseteq \mathbb{Z}$, each interval at the $N$th step contains at most one point from $X$. It follows that
\[
\#\left( X \cap [m+1, m+n] \right) \leq \left(\frac{k-1}{2 \varepsilon} \right)^N \leq \left(\frac{k-1}{2 \varepsilon} \right)^{\frac{\log(n-1)}{\log(k/(2 \varepsilon))}+1} \leq \left(\frac{k-1}{2 \varepsilon} \right) n^{\frac{\log\left((k-1)/(2 \varepsilon)\right)}{\log(k/(2 \varepsilon))}}
\]
which proves that the upper logarithmic density of $X$ is bounded above by
\[
\frac{\log\left(\frac{k-1}{2 \varepsilon}\right)}{\log \left( \frac{k}{2 \varepsilon} \right)} < 1
\]
contradicting our assumption that the upper logarithmic density of $X$ is equal to 1. \qed
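For orientation, the exponent appearing in this bound is easily evaluated for sample parameters; a minimal sketch (our own illustration, with hypothetical values of $k$ and $\varepsilon$) is:
\begin{verbatim}
import math

def density_exponent(k, eps):
    """The bound log((k-1)/(2*eps)) / log(k/(2*eps)) from the proof;
    strictly below 1 whenever 0 < 2*eps < k - 1."""
    return math.log((k - 1) / (2 * eps)) / math.log(k / (2 * eps))

print(density_exponent(k=4, eps=0.05))  # about 0.922, strictly below 1
\end{verbatim}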
\vfill
\begin{center}
\textbf{Acknowledgments}
\end{center}
The author was financially supported by a \emph{Leverhulme Trust Research Fellowship} (RF-2016-500) and an \emph{EPSRC Standard Grant} (EP/R015104/1). He thanks Han Yu for many inspiring conversations related to the topics presented here.
\newpage
\section{Introduction}
Lagrangian algorithms are popular for modeling phenomena where the computational mesh moves with the continuum. They can track material interfaces accurately and are widely used in computational solid mechanics. They are used less frequently in fluid mechanics, however, because the mesh distorts as it follows the fluid particle motion. Eulerian algorithms, on the other hand, have become increasingly popular for fluid mechanics since the computational mesh remains fixed and independent of the fluid motion. When dealing with flows in dynamically changing geometries, however, a fixed computational mesh may not suffice: the mesh must be dynamic in order to accommodate the varied configurations of the flow domain. Neither purely Lagrangian nor purely Eulerian formulations are sufficient for such problems, and Arbitrary Lagrangian-Eulerian (ALE) algorithms have therefore gained wide popularity. These methods compute the fluid flow on a pseudo-Eulerian computational mesh that moves to accommodate the solid displacement (tracking solid-fluid interfaces in a Lagrangian manner), giving rise to a mesh that is dynamic in space and time. Problems in fields such as aerodynamics, marine engineering, and biomedical engineering, which involve dynamically changing geometries and fluid-solid interaction, can be modeled robustly using ALE algorithms. For such algorithms, an additional law of geometric conservation must be introduced to ensure that the mesh velocities are computed in accordance with the changing control volumes of the computational mesh. In this paper an ALE algorithm is incorporated into a Consistent Flux Reconstruction finite volume solution scheme. The scheme is second-order accurate in space, uses a collocated grid arrangement, and is applicable to unstructured two-dimensional dynamic triangular grids.
The problem of flow past a circular cylinder has gained wide popularity over the past few decades, owing to the complex flow features, vortex shedding mechanisms, and lift-drag characteristics that provide valuable insight into the flow behavior. Understanding these phenomena greatly aids engineering design, especially in marine and aerodynamic engineering applications that employ equipment with similar geometries (for example, risers in marine engineering and bluff bodies in aerodynamics), where better power efficiency is desired and structural failure must be avoided. Among these phenomena, vortex shedding remains one of the most important, as it explains the unsteady features of the flow and the cause of the fluctuating lift and drag forces. In certain cases it can even trigger structural failure. To mitigate such adverse effects, many methods of vortex suppression have been explored in the past. These include passive control methods that involve a change in geometry, such as splitter plates \cite{Kwon} and control cylinders \cite{Mittal}, and active control methods such as imposing transverse or rotary oscillations \cite{Choi}.
Rotary oscillation is a popular active flow control method used to alter the wake behind a circular cylinder in order to minimize vortex shedding, resulting in smaller aerodynamic/hydrodynamic force fluctuations, as studied by Williamson \cite{Williamson}.
Many researchers in the past have investigated the effects of imposing forced rotary oscillations on the circular cylinder to study wake structures and the associated aerodynamic coefficients.
Taneda \cite{Taneda} showed that even for low Re in the range $30\leq Re \leq 300$, with rotational oscillation frequencies $St_{f}$ in the range $0\leq St_{f} \leq 55$, vortex shedding disappears completely at the higher frequencies.
Tokumaru and Dimotakis \cite{Tokumaru} experimentally tested the effectiveness of forced rotary oscillations at a high $Re = 15000$, with an amplitude of rotation $0\leq A_{r} \leq 16$ and a forced rotary oscillation frequency $0\leq St_{f} \leq 3.3$, to examine the effect on the unsteady wake.
Shiels and Leonard \cite{Shiels} verified similar findings on drag reduction by carrying out numerical simulations using a 2D high-resolution viscous method for Re between 150 and 15000. According to their study, multi-polar vortices generated in the wake were responsible for the drag reduction at high Re, whereas at low Re this effect was dampened by the strong viscous effects.
It is also known that in these scenarios fluid-structure interaction must be accounted for in order to depict the flow physics accurately. Due to vortex shedding, the cylinder is subjected to periodic lift and drag forces that lead to its transverse oscillatory motion.
A common strategy to simulate this vortex-induced motion is to force the body to oscillate with a predefined motion, as described by Singh \cite{Singh}; this strategy has been adopted in the present study.
Many studies have been carried out on cylinders undergoing vortex-induced transverse oscillations. Ongoren and Rockwell \cite{Ongoren} carried out hydrogen-bubble visualization experiments to locate the position of the vortex switch. They concluded that for forced transverse oscillation frequencies greater and less than the natural Strouhal number ($St_{n}$), the vortices formed on one side shed on the opposite and same sides, respectively, at maximum amplitudes. A numerical study by Anagnostopoulos \cite{Anagnostopoulos} of flow past an oscillating circular cylinder at $Re = 106$ also confirmed the switching in vortex shedding seen in experiments conducted by the same author. Numerical studies by Pham \cite{Pham} on the same problem used the immersed boundary method to illustrate the vortex shedding patterns and aerodynamic characteristics at different transverse oscillation amplitudes and frequencies.
Numerical studies of flow past a cylinder subjected to rotary or transverse oscillations, each imposed separately, have been quite popular in the past. However, to the best knowledge of the authors, there have been few comprehensive studies of the combined effect of rotary and transverse oscillations imposed on a cylinder exposed to open channel flow.
In this study, the effects on vortex shedding and the aerodynamic characteristics arising when both transverse and rotational oscillations are imposed on a circular cylinder at Re = 100 are investigated using the newly developed ALE-CFR finite volume scheme.
\section{Governing Equations}
The primary governing equations for this work are those of mass and momentum conservation for incompressible fluid flow in the ALE frame. In order to compute flows on grids with an ALE kinematic description, an additional equation, known as the Space Conservation Law (SCL) \cite{Demirdzic1,Demirdzic2}, aids in computing the flow over the dynamic mesh. The conservation laws, in their integral non-dimensional forms for incompressible flow without body forces, are given below.
\\Space Conservation Law:
\begin{equation}
\frac{d}{dt} \left(\int\limits_\Omega d\Omega\right) -\int\limits_S\overrightarrow{{v}_{b}}.\overrightarrow{n}dS = 0
\label{eq:SCL}
\end{equation}
\\Mass Conservation Law:
\begin{equation}
\frac{d}{dt} \left(\int\limits_\Omega\rho d\Omega\right) + \int\limits_S\rho\left(\overrightarrow{v}-\overrightarrow{{v}_{b}}\right).\overrightarrow{n}dS = 0
\label{eq:Mass Conservation eq}
\end{equation}
\\Momentum Conservation Law:
\begin{equation}
\frac{d}{dt}\left(\int\limits_\Omega {u}_{i}d\Omega\right) + \int\limits_S {u}_{i}\left(\overrightarrow{v}-\overrightarrow{{v}_{b}}\right).\overrightarrow{n}dS = -\int\limits_S p\overrightarrow{{i}_{i}}.\overrightarrow{n}dS + \frac{1}{Re}\int\limits_S\nabla {u}_{i}.\overrightarrow{n}dS
\label{eq:Momentum Conservation Eq}
\end{equation}
where $i$ takes the values 1, 2 for the $x$, $y$ components of velocity respectively.
The expression $\overrightarrow{{v}_{b}}$ denotes the mesh velocity of the faces of the moving control volume. In the ALE framework, the net mass flux convected into the moving control volume is accounted for by taking the fluid convection velocity relative to the moving control-volume faces in the convective terms of the Navier-Stokes equations. Eq.(\ref{eq:SCL}) ensures that the mesh velocities are consistent with the change in the control volumes as the mesh deforms.
For an incompressible flow, since the density remains constant, Eq.(\ref{eq:Mass Conservation eq}), combined with Eq.(\ref{eq:SCL}), reduces to
\begin{equation}
\int\limits_S \overrightarrow{v}.\overrightarrow{n}dS = 0
\label{eq:Simplified Mass Conservation Eq}
\end{equation}
\newpage
\section{Boundary Conditions}
From the schematic shown in Fig.(\ref{fig:Flow Domain}), Dirichlet boundary conditions have been imposed at the inlet with $u = {U}_{\infty}$ and $v = 0$. A zero-shear boundary condition has been imposed on the top and bottom boundaries with $\frac{\partial u}{\partial y}=0$ and $v=0$. At the outlet, a convective boundary condition has been imposed with $\frac{\partial \phi}{\partial t}+U_{c}\frac{\partial \phi}{\partial x}=0$, where $\phi=u,v$ and $U_{c} = U_{\infty}$. On the cylinder surface, the no-slip boundary condition for forced rotary and transverse oscillations has been imposed with $\dot{\theta}=A_{r}\cos(2\pi St_{r}t)$ as the rotational oscillation speed and $Y=A_{tv}\sin{(2\pi St_{tv}t)}$ as the transverse position at any instant of time, where $St_{r}$ and $St_{tv}$ represent the rotational and transverse oscillation frequencies respectively.
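For concreteness, a minimal sketch (in non-dimensional units, with hypothetical function and parameter names mirroring the expressions above) of the prescribed cylinder motion is:
\begin{verbatim}
import math

def cylinder_motion(t, A_r, St_r, A_tv, St_tv):
    """Prescribed rigid-body motion of the cylinder at time t:
    rotational oscillation speed theta_dot and transverse position Y."""
    theta_dot = A_r * math.cos(2.0 * math.pi * St_r * t)  # rotary oscillation
    Y = A_tv * math.sin(2.0 * math.pi * St_tv * t)        # transverse position
    return theta_dot, Y
\end{verbatim}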
\begin{figure}
\centering
\includegraphics[scale=0.025]{BoundaryConditions.jpg}
\caption{Boundary conditions and flow domain}
\label{fig:Flow Domain}
\end{figure}
\section{Numerical Scheme}
The Consistent Flux Reconstruction (CFR) scheme \cite{Bandhyopadhyay,Harichandan} was adopted in this work to accommodate flows on deformable grids and thereby solve a larger class of problems involving moving bodies, their effect on the fluid flow, and vice-versa. Based on this development, the scheme is hereafter referred to as \textbf{ALE-CFR}, short for Arbitrary Lagrangian-Eulerian Consistent Flux Reconstruction scheme. The scheme employs a collocated grid arrangement and is second-order accurate in space and first-order accurate in time. To achieve second-order accuracy, the momentum equations are solved for both the original and the reconstructed cells. The reconstructed cell consists of the main cell and its neighbor, both sharing the face where the velocity needs to be computed. For each face, the neighboring cell is chosen such that the shared face acts as the center of the reconstructed cell.
The cell-centered flow variables (pressure and velocity) at the $n^{th}$ time level are linearly interpolated to obtain the face-centered flow variables using weighted averaging \cite{Harichandan}, as shown for pressure in Eq.(\ref{eq:Interpolation}).
\begin{equation}
p_{1}=\frac{p_{p}a_{c}+p_{c}a_{p}}{a_{c}+a_{p}}
\label{eq:Interpolation}
\end{equation}
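A minimal sketch of this area-weighted face interpolation (with hypothetical variable names mirroring Eq.(\ref{eq:Interpolation})) reads:
\begin{verbatim}
def face_value(phi_P, area_P, phi_C, area_C):
    """Area-weighted interpolation of a cell-centered quantity to the
    face shared by cells P and C, following Eq. (eq:Interpolation)."""
    return (phi_P * area_C + phi_C * area_P) / (area_C + area_P)
\end{verbatim}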
After grid deformation, the momentum equation is applied explicitly on the reconstructed cell in order to construct expressions for the face-centered velocities at the $(n+1)^{th}$ time level in terms of the face-centered pressures. The convective, diffusive and ALE flux expressions in the momentum equation for the reconstructed cells use the face-centered variables interpolated from the old cell-centered variables. These expressions for the velocities at the $(n+1)^{th}$ time level are substituted into the continuity relation for cell P, given by Eq.(\ref{eq:Discret. Cont. Eq}), to construct the Pressure Poisson Equation (PPE).
The PPE is solved for all cells to compute the cell-centered pressure field on the new mesh.
\\The discretized integral form of the continuity equation for a cell $P$ with vertices $abc$ in the grid shown in Fig.(\ref{fig:Main Grid Stencil}) at the ${n+1}^{th}$ time level is,
\begin{equation}
\begin{aligned}
\int \overrightarrow{v}.\overrightarrow{n}dS =&\left(u_{1}^{n+1}\triangle y_{ab}^{n+1}-v_{1}^{n+1}\triangle x_{ab}^{n+1}\right)+ \left(u_{2}^{n+1}\triangle y_{bc}^{n+1}-v_{2}^{n+1}\triangle x_{bc}^{n+1}\right)+\\ &\left(u_{3}^{n+1}\triangle y_{ca}^{n+1}-v_{3}^{n+1}\triangle x_{ca}^{n+1}\right)=0
\label{eq:Discret. Cont. Eq}
\end{aligned}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[height=4.5 cm,width=9.5 cm]{Main_Stencil.PNG}
\caption{Main Grid Stencil}
\label{fig:Main Grid Stencil}
\end{figure}
where ${\triangle x}_{ab}$ denotes the difference ${x}_{b}-{x}_{a}$ (and analogously for the other length components).
After the velocity expressions are substituted, Eq.(\ref{eq:Discret. Cont. Eq}) is solved for the new pressure field.
The grid deformation scheme used in this work is based on the linear spring analogy \cite{Zheng}. In order to preserve the shear layers near the cylinder surface and capture the flow structures in the wake accurately, the outlined region shown in Fig (\ref{fig:Mesh}) is made to move rigidly with the cylinder. The remaining exterior region contains deformable mesh elements that accommodate the new position of the marked region containing the elements surrounding the cylinder.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.15]{MeshSnipNew1.jpg}
\caption{The Solid ALE Mesh region moves rigidly while the mesh elements outside deform}
\label{fig:Mesh}
\end{figure}
The ALE flux terms are computed after mesh deformation, and the discretized PPE resulting from Eq.(\ref{eq:Discret. Cont. Eq}) is solved for the new cell-centered pressures. The unsteady term is discretized considering the change in both the cell-centered field variable and the dimensions of the control volume:
\begin{equation}
\frac{d}{dt}\left(\int \phi d\Omega\right) \approx \frac{\left(\phi \Omega\right)^{n+1}-\left(\phi \Omega \right)^{n}}{\triangle t}
\end{equation}
The face-centered velocities ${u}_{1}$ and ${v}_{1}$, upon first-order explicit Euler discretization of Eq.(\ref{eq:Momentum Conservation Eq}), are given by
\begin{equation}
\begin{aligned}
u_{1}^{n+1} = u_{1}^{n}\frac{A_{pc}^{n}}{A_{pc}^{n+1}} + \frac{\triangle t}{A_{pc}^{n+1}}(-{XCFLUX_{1}}^{n}-&{XPFLUX_{1}}^{n+1}+\frac{1}{Re}{XDFLUX_{1}}^{n}+ \\ &{XALEFLUX_{1}}^{n})
\label{eq:u1}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
v_{1}^{n+1} = v_{1}^{n}\frac{A_{pc}^{n}}{A_{pc}^{n+1}} + \frac{\triangle t}{A_{pc}^{n+1}}(-YCFLUX_{1}^{n}-&YPFLUX_{1}^{n+1}+\frac{1}{Re}YDFLUX_{1}^{n}+ \\ &YALEFLUX_{1}^{n})
\label{eq:v1}
\end{aligned}
\end{equation}
where ${A_{pc}^{n+1}}$ is the area of the reconstructed cell, consisting of the original cell P and the neighboring cell C, at the $(n+1)^{th}$ level (after grid deformation).
All the above flux terms are computed based on the mesh stencil given in Fig (\ref{fig:Main Grid Stencil}).
The flux terms used in the expressions for $u_{1}$ and $v_{1}$ are as follows.\\
Convective Flux:
\begin{equation}
\begin{aligned}
XCFLUX_{1}^{n} = \int u\overrightarrow{v}.\overrightarrow{n}dS =
u_{2}^{n}\left(u_{2}^{n}\triangle y_{bc}^{n}-v_{2}^{n}\triangle x_{bc}^{n}\right)+u_{3}^{n}\left(u_{3}^{n}\triangle y_{ca}^{n}-v_{3}^{n}\triangle x_{ca}^{n}\right)+\\ u_{12}^{n}\left(u_{12}^{n}\triangle y_{ag}^{n}-v_{12}^{n}\triangle x_{ag}^{n}\right)+u_{11}^{n}\left(u_{11}^{n}\triangle y_{gb}^{n}-v_{11}^{n}\triangle x_{gb}^{n}\right)
\label{eq:XCFLUX1}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
YCFLUX_{1}^{n}= \int v\overrightarrow{v}.\overrightarrow{n}dS =
v_{2}^{n}\left(u_{2}^{n}\triangle y_{bc}^{n}-v_{2}^{n}\triangle x_{bc}^{n}\right)+v_{3}^{n}\left(u_{3}^{n}\triangle y_{ca}^{n}-v_{3}^{n}\triangle x_{ca}^{n}\right)+\\ v_{12}^{n}\left(u_{12}^{n}\triangle y_{ag}^{n}-v_{12}^{n}\triangle x_{ag}^{n}\right)+v_{11}^{n}\left(u_{11}^{n}\triangle y_{gb}^{n}-v_{11}^{n}\triangle x_{gb}^{n}\right)
\label{eq:YCFLUX1}
\end{aligned}
\end{equation}
\\
Diffusive Flux:
$XDFLUX_{1}^{n} = \int\nabla u.\overrightarrow{n}dS$
\begin{equation}
\begin{aligned}
= \left[\left(\frac{ \partial u}{ \partial x}\right)_{2}^{n}\triangle y_{bc}^{n}-\left(\frac{\partial u}{\partial y}\right)_{2}^{n}\triangle x_{bc}^{n}\right]+ \left[\left(\frac{ \partial u}{ \partial x}\right)_{3}^{n}\triangle y_{ca}^{n}-\left(\frac{ \partial u}{ \partial y}\right)_{3}^{n}\triangle x_{ca}^{n}\right]+\\ \left[\left(\frac{ \partial u}{\partial x}\right)_{12}^{n}\triangle y_{ag}^{n}-\left(\frac{ \partial u}{ \partial y}\right)_{12}^{n}\triangle x_{ag}^{n}\right]+\left[\left(\frac{ \partial u}{ \partial x}\right)_{11}^{n}\triangle y_{gb}^{n}-\left(\frac{ \partial u}{ \partial y}\right)_{11}^{n}\triangle x_{gb}^{n}\right] \\
\label{eq:XDFLUX1}
\end{aligned}
\end{equation}
$YDFLUX_{1}^{n}= \int\nabla v.\overrightarrow{n}dS$
\begin{equation}
\begin{aligned}
=\left[\left(\frac{\partial v}{ \partial x}\right)_{2}^{n}\triangle y_{bc}^{n}-\left(\frac{ \partial v}{ \partial y}\right)_{2}^{n}\triangle x_{bc}^{n}\right]+\left[\left(\frac{ \partial v}{ \partial x}\right)_{3}^{n}\triangle y_{ca}^{n}-\left(\frac{ \partial v}{ \partial y}\right)_{3}^{n}\triangle x_{ca}^{n}\right]+ \\ \left[\left(\frac{ \partial v}{ \partial x}\right)_{12}^{n}\triangle y_{ag}^{n}-\left(\frac{ \partial v}{ \partial y}\right)_{12}^{n}\triangle x_{ag}^{n}\right]+\left[\left(\frac{ \partial v}{ \partial x}\right)_{11}^{n}\triangle y_{gb}^{n}-\left(\frac{ \partial v}{ \partial y}\right)_{11}^{n}\triangle x_{gb}^{n}\right]
\label{eq:YDFLUX1}
\end{aligned}
\end{equation}
\\ALE Flux:
\begin{equation}
\begin{aligned}
XALEFLUX_{1}^{n} = \int u\overrightarrow{{v}_{b}}.\overrightarrow{n}dS = u_{2}^{n}\left(\frac{ d \Omega}{ d t}\right)_{2}^{n+1}+u_{3}^{n}\left(\frac{ d \Omega}{ d t}\right)_{3}^{n+1}+ \\ u_{12}^{n}\left(\frac{ d \Omega}{ d t}\right)_{12}^{n+1}+u_{11}^{n}\left(\frac{ d \Omega}{d t}\right)_{11}^{n+1}
\label{eq:XALEFLUX1}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
YALEFLUX_{1}^{n}= \int v\overrightarrow{{v}_{b}}.\overrightarrow{n}dS = v_{2}^{n}\left(\frac{ d \Omega}{ d t}\right)_{2}^{n+1}+v_{3}^{n}\left(\frac{ d \Omega}{ d t}\right)_{3}^{n+1}+ \\ v_{12}^{n}\left(\frac{ d \Omega}{ d t}\right)_{12}^{n+1}+v_{11}^{n}\left(\frac{ d \Omega}{ d t}\right)_{11}^{n+1}
\label{eq:YALEFLUX1}
\end{aligned}
\end{equation}
Pressure Flux:
\begin{equation}
\begin{aligned}
XPFLUX_{1}^{n+1}= \int p\overrightarrow{i}.\overrightarrow{n}dS = p_{2}^{n+1}\triangle y_{bc}^{n+1}+p_{3}^{n+1}\triangle y_{ca}^{n+1}+\\ p_{12}^{n+1}\triangle y_{ag}^{n+1}+p_{11}^{n+1}\triangle y_{gb}^{n+1}
\label{eq:XPFLUX1}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
YPFLUX_{1}^{n+1}= \int p\overrightarrow{j}.\overrightarrow{n}dS = -p_{2}^{n+1}\triangle x_{bc}^{n+1}-p_{3}^{n+1}\triangle x_{ca}^{n+1}- \\ p_{12}^{n+1}\triangle x_{ag}^{n+1}-p_{11}^{n+1}\triangle x_{gb}^{n+1}
\label{eq:YPFLUX1}
\end{aligned}
\end{equation}
where the spatial derivatives appearing in the diffusive fluxes are approximated using a Taylor series expansion, e.g.
\begin{equation}
\left(\frac{ \partial \phi}{ \partial y}\right)_{1} \approx \frac{(\phi_{P}-\phi_{C})\triangle x_{ab}+(\phi_{a}-\phi_{b})\triangle x_{CP}}{\triangle y_{CP}\triangle x_{ab}-\triangle y_{ab}\triangle x_{CP}}
\label{eq:Derivative Approximation}
\end{equation}
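A minimal sketch of this two-point gradient reconstruction (hypothetical variable names following the stencil of Fig (\ref{fig:Main Grid Stencil})) is:
\begin{verbatim}
def dphi_dy_face(phi_P, phi_C, phi_a, phi_b,
                 dx_ab, dy_ab, dx_CP, dy_CP):
    """Approximate d(phi)/dy at face ab shared by cells P and C by
    solving the 2x2 system built from differences along CP and ab."""
    num = (phi_P - phi_C) * dx_ab + (phi_a - phi_b) * dx_CP
    den = dy_CP * dx_ab - dy_ab * dx_CP
    return num / den
\end{verbatim}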
\begin{figure}[h!]
\centering
\includegraphics[scale=0.35]{SCL.PNG}
\caption{Volume traced by face bc during grid deformation}
\label{fig:Swept Volume}
\end{figure}
The derivatives occurring in the ALE flux terms are computed as follows,
\begin{equation}
\begin{aligned}
\left(\frac{ d \Omega}{ d t}\right)_{bc}^{n+1}=\left(\frac{ d \Omega}{ d t}\right)_{2}^{n+1}=\frac{1}{2\triangle t}(x_{c}y_{b}+x_{b}y_{b'}+x_{b'}y_{c'}+x_{c'}y_{c}\\-x_{b}y_{c}-x_{b'}y_{b}-x_{c'}y_{b'}-x_{c}y_{c'})
\label{eq:ALE Derivative}
\end{aligned}
\end{equation}
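Eq.(\ref{eq:ALE Derivative}) is the shoelace (Gauss) area of the quadrilateral traced by face $bc$ as it moves to $b'c'$, divided by $\triangle t$; a minimal sketch (hypothetical names) is:
\begin{verbatim}
def swept_volume_rate(xc, yc, xb, yb, xbp, ybp, xcp, ycp, dt):
    """Rate of volume swept by face bc moving to b'c' during dt,
    i.e. the shoelace area of the quadrilateral (c, b, b', c')
    divided by dt, as in Eq. (eq:ALE Derivative)."""
    twice_area = (xc * yb + xb * ybp + xbp * ycp + xcp * yc
                  - xb * yc - xbp * yb - xcp * ybp - xc * ycp)
    return twice_area / (2.0 * dt)
\end{verbatim}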
A similar discretization procedure is adopted for the remaining faces of the cell. The discretized face-centered velocities (as in Eq.(\ref{eq:u1}) and Eq.(\ref{eq:v1})) are substituted into the discrete continuity equation (\ref{eq:Discret. Cont. Eq}) to obtain the PPE (\ref{eq:PPE}).
\begin{equation}
\begin{aligned}
SOURCE = K_{P}p_{P}+K_{A}p_{A}+K_{B}p_{B}+K_{C}p_{C}+K_{D}p_{D}+K_{G}p_{G}+\\K_{F}p_{F}+K_{I}p_{I}+ K_{J}p_{J}+K_{L}p_{L}
\label{eq:PPE}
\end{aligned}
\end{equation}
where the $K_{i}$ denote geometrical coefficients that arise from discretizing Eq.(\ref{eq:Discret. Cont. Eq}), and the $SOURCE$ term consists of the remaining fluxes and the cell-centered velocities known at the ${n}^{th}$ time level.
\begin{equation}
\begin{aligned}
&SOURCE =\frac{A_{pc}^{n}}{A_{pc}^{n+1}\triangle t}(u_{1}^{n}\triangle y_{ab}^{n+1}-v_{1}^{n}\triangle x_{ab}^{n+1})+\frac{A_{pa}^{n}}{A_{pa}^{n+1}\triangle t}(u_{2}^{n}\triangle y_{bc}^{n+1}-v_{2}^{n}\triangle x_{bc}^{n+1}) + \\ &\frac{A_{pb}^{n}}{A_{pb}^{n+1}\triangle t}(u_{3}^{n}\triangle y_{ca}^{n+1}-v_{3}^{n}\triangle x_{ca}^{n+1})+\\ &\frac{\triangle y_{ab}^{n+1}}{A_{pc}^{n+1}}(-XCFLUX_{1}^n+\frac{1}{Re}XDFLUX_{1}^n+XALEFLUX_{1}^n)+ \\ &\frac{\triangle y_{bc}^{n+1}}{A_{pa}^{n+1}}(-XCFLUX_{2}^n+\frac{1}{Re}XDFLUX_{2}^n+XALEFLUX_{2}^n)+ \\ &\frac{\triangle y_{ca}^{n+1}}{A_{pb}^{n+1}}(-XCFLUX_{3}^n+\frac{1}{Re}XDFLUX_{3}^n+XALEFLUX_{3}^n)- \\ &\frac{\triangle x_{ab}^{n+1}}{A_{pc}^{n+1}}(-YCFLUX_{1}^n+\frac{1}{Re}YDFLUX_{1}^n+YALEFLUX_{1}^n)- \\ &\frac{\triangle x_{bc}^{n+1}}{A_{pa}^{n+1}}(-YCFLUX_{2}^n+\frac{1}{Re}YDFLUX_{2}^n+YALEFLUX_{2}^n)- \\ &\frac{\triangle x_{ca}^{n+1}}{A_{pb}^{n+1}}(-YCFLUX_{3}^n+\frac{1}{Re}YDFLUX_{3}^n+YALEFLUX_{3}^n)
\label{eq:SOURCE}
\end{aligned}
\end{equation}
After solving for the cell-centered pressures, the face-centered velocities are computed using Eq.(\ref{eq:u1}) and Eq.(\ref{eq:v1}).
For each cell, all fluxes are then computed from the newly obtained face-centered velocities and pressures, and the velocity field is computed at the cell centers.
\begin{equation}
\begin{aligned}
u_{iP}^{n+1}=u_{iP}^{n}\frac{a_{P}^{n}}{a_{P}^{n+1}}+\frac{\triangle t}{a_{p}^{n+1}}(-{CFLUX_i}^{n+1}-{PFLUX_i}^{n+1}+\frac{1}{Re}{DFLUX_i}^{n+1}+ \\ {ALEFLUX_i}^{n+1})
\label{eq:CCV}
\end{aligned}
\end{equation}
The flux terms appearing in the momentum equation above (for the velocity components $i=1,2$, i.e. $u,v$) are computed from the new face-centered variables using a similar approach for the original cell P.
\\
\\The various steps involved in the solution procedure can be summarized as follows:
\\
\\(1) The cell-centered velocity and pressure field $u^{n}$, $v^{n}$ and $p^{n}$ are initialized. This could either be available from the past flow data or from the prescribed initial conditions.
\\(2) The face-centered velocity and pressure field $u^{n}$, $v^{n}$ and $p^{n}$ are computed upon interpolation from the cell-centered data at $n^{th}$ level.
\\(3) Grid is deformed to accommodate the new position of the object.
\\(4) Fluxes appearing in the momentum equation (Eq.(\ref{eq:XCFLUX1}) to Eq.(\ref{eq:YPFLUX1})) are computed using the interpolated velocity field at $n^{th}$ level and mesh data at the current level.
\\(5) The new face-centered velocity field expressions are computed by solving momentum equations for the reconstructed cells.
\\(6) These face-centered velocity field expressions are substituted in the continuity equation Eq.(\ref{eq:Discret. Cont. Eq}) and the resulting PPE is solved to compute the cell-centered pressure field $p^{n+1}$.
\\(7) The cell-centered pressure field $p^{n+1}$ is interpolated to compute face-centered pressure field $p^{n+1}$.
\\(8) The pressure flux terms (Eq.(\ref{eq:XPFLUX1}) and Eq.(\ref{eq:YPFLUX1})) are updated and the face-centered velocities $u^{n+1}$ and $v^{n+1}$ are computed by solving the momentum equations for the reconstructed cells using fluxes computed from velocity and pressure fields $u^{n}$, $v^{n}$ and $p^{n+1}$ .
\\(9) The momentum equations are then solved for each cell, with fluxes computed from the face-centered velocities and pressures $u^{n+1}$, $v^{n+1}$ and $p^{n+1}$, to compute the cell-centered velocity field $u^{n+1}$, $v^{n+1}$.
The solution procedure is repeated for the next cycle and is continued until sufficient convergence is achieved; a schematic sketch of this loop is given below.
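The following minimal, schematic sketch is our own outline of the ordering of these steps, not a definitive implementation: every helper routine is a hypothetical placeholder for the corresponding operation described above.
\begin{verbatim}
def _placeholder(*args, **kwargs):
    raise NotImplementedError("solver routine described in the text")

interpolate_to_faces = deform_grid = compute_fluxes = _placeholder
solve_ppe = interpolate_pressure_to_faces = _placeholder
update_face_velocities = update_cell_velocities = _placeholder

def ale_cfr_time_step(mesh, u_c, v_c, p_c, t, dt):
    """One explicit ALE-CFR step; u_c, v_c, p_c are cell-centered fields."""
    # (2) interpolate cell-centered data to faces (Eq. eq:Interpolation)
    u_f, v_f, p_f = interpolate_to_faces(mesh, u_c, v_c, p_c)
    # (3) deform the grid to the new cylinder position
    new_mesh = deform_grid(mesh, t + dt)
    # (4) convective, diffusive and ALE fluxes at level n on the new mesh
    fluxes = compute_fluxes(new_mesh, u_f, v_f)
    # (5)-(6) assemble and solve the PPE for the cell-centered pressure
    p_c = solve_ppe(new_mesh, fluxes, u_f, v_f)
    # (7) face-centered pressure at level n+1
    p_f = interpolate_pressure_to_faces(new_mesh, p_c)
    # (8) face-centered velocities at level n+1 (Eqs. eq:u1, eq:v1)
    u_f, v_f = update_face_velocities(new_mesh, fluxes, p_f, dt)
    # (9) cell-centered velocities at level n+1 (Eq. eq:CCV)
    u_c, v_c = update_cell_velocities(new_mesh, u_f, v_f, p_f, dt)
    return new_mesh, u_c, v_c, p_c
\end{verbatim}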
\section{Results and Discussion}
In the present study, the wake structure and aerodynamic characteristics of flow past a circular cylinder subjected to combined rotary and transverse oscillations have been investigated.
The rotary oscillation parameter space primarily corresponds to four vortex shedding modes \cite{Tokumaru}, summarized in Table \ref{tab:VSModes}.
The amplitude ratios with respect to the cylinder diameter (D) for the rotary and transverse oscillations have been taken as $A_{r} = 2.0$ and $A_{tv} = 0.2$ respectively (except for mode 4, where $A_{r}=0.45$; see Table \ref{tab:VSModes}). The rotary forced oscillation frequency, denoted by $St_{r}$, takes the values listed in Table \ref{tab:VSModes}, and the transverse forced oscillation frequency ratio, denoted by $f_{tr} = St_{tv}/{St_{n}}$, is varied as $0.9\leq f_{tr}\leq 3.0$.
In the following sections, we present the aerodynamic and vortex shedding characteristics of each vortex shedding mode when subjected to transverse oscillations at Re=100.
\begin{table}[h]
\centering
\caption{Distinct vortex shedding modes}
\begin{tabular}{c c c c c}
\hline
& Mode 1 & Mode 2 & Mode 3 & Mode 4\\
\hline
$A_{r}$ & 2.0 & 2.0 & 2.0 & 0.45\\
$St_{r}$ & 0.165 & 0.4 & 0.8 & 0.8\\
\hline
\end{tabular}
\label{tab:VSModes}
\end{table}
\subsection{Validation}
~\\
The solver based on ALE-CFR was validated for both the transverse and the rotational oscillation problems of flow past an oscillating circular cylinder. A systematic grid independence study was carried out to test the grid convergence of the new ALE scheme. The study was performed on flow past a transversely oscillating cylinder at $Re=185$, with the amplitude and frequency of the transverse oscillations taken as 0.5D and $f=0.192$ respectively. The grids differ in the number of nodes on the cylinder surface. The coefficient of drag was chosen for comparison since it is one of the numerical parameters most sensitive to changes in grid size. Table \ref{tab:Grid Independence Study} gives the details of the grid independence test, based on which grid number 3 was chosen; it comprises 57,962 triangular elements and 29,156 nodes, of which 200 nodes lie on the body surface. Using the same grid, the transverse oscillation problem was validated against \cite{Pham} at $Re=185$, and the rotational oscillation problem was validated against \cite{Choi} at $Re=100$. The coefficients $\overline{c}_{d}$ and $c_{l,rms}$ have been plotted for frequency ratios $0.8\leq f_{r} \leq 1.2$ and amplitude ratio $A/D=0.5$ in Fig \ref{fig:Cd,avg_tv} and \ref{fig:Cl,rms_tv} for the transverse oscillation problem. For the rotational oscillation problem, $\overline{c}_{d}$ and $c_{l,amp}$ have been plotted for frequency ratios $0.9\leq f_{r} \leq 1.2$ and amplitude ratio $A/D=2.0$ in Fig \ref{fig:Cd,avg_r} and \ref{fig:Cl,amp_r}.
\begin{table}[h]
\centering
\caption{Grid independence test carried out at $Re=185$ for transversely oscillating cylinder}
\begin{tabular}{c c c}
\hline
Grid no. & Number of nodes on the body & Drag coefficient \\
\hline
1 & 160 & $1.754\pm0.49$\\
2 & 180 & $1.813\pm0.52$\\
3 & 200 & $1.878\pm0.53$\\
4 & 220 & $1.887\pm0.53$\\
\hline
\end{tabular}
\label{tab:Grid Independence Study}
\end{table}
\begin{figure}[h!]
\begin{subfigure}[b]{6.5cm}
\includegraphics[scale=0.04]{Cd_average_Transverse.jpg}
\caption{\normalsize{Time Averaged $C_{D}$ values for $A/D=0.5$}}
\label{fig:Cd,avg_tv}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6cm}
\includegraphics[scale=0.04]{Cl_rms_Transverse.jpg}
\caption{\normalsize{Root-Mean-Squared $C_{L}$ values for $A/D=0.5$}}
\label{fig:Cl,rms_tv}
\end{subfigure}
\caption{\normalsize{Aerodynamic coefficients for transverse oscillation frequency ratios at $Re=185$}}
\end{figure}
\begin{figure}[h!]
\begin{subfigure}[b]{6cm}
\includegraphics[scale=0.04]{Cd_average_Rotational.jpg}
\caption{\normalsize{Time Averaged $C_{D}$ values for A/D = 2.0}}
\label{fig:Cd,avg_r}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6cm}
\includegraphics[scale=0.04]{Cl_amplitude_Rotational.jpg}
\caption{\normalsize{Max Fluctuation Amplitude of $C_{L}$ for A/D = 2.0}}
\label{fig:Cl,amp_r}
\end{subfigure}
\caption{\normalsize{Aerodynamic coefficients for rotational oscillation frequencies at $Re=100$}}
\end{figure}
\newpage
\subsection{Vortex shedding characteristics and contribution of rotational and transverse oscillations to lift generation}
~\\
Vortex shedding mode 1, corresponding to $St_{r}=0.165$ and $A_{r}=2.0$, is characterized by the generation of two like-signed vortices in one half-cycle \cite{Tokumaru}; this was, however, not observed in our numerical study at $Re = 100$ \cite{Choi}. Instead, the characteristic feature of this mode was observed to be the symmetric shedding of oppositely signed vortices, as can be seen in Fig \ref{fig:VSMode1}. The transverse oscillation frequency ratio is varied as $0.9\leq f_{tr}\leq 3.0$ for an amplitude ratio $A_{tv}=0.2$. Since this mode is in lock-in with respect to the rotary oscillations ($St_{r} = St_{n}$), the peak corresponding to $St_{r}$ in the frequency characteristics of the lift was observed to be dominant.
For $f_{tr}=0.9$ and $f_{tr}=0.95$, two nearby peaks are observed in the frequency characteristics (Fig \ref{fig:_1,3_} and \ref{fig:_1,4_}), the primary and secondary ones corresponding to the rotary and transverse oscillation frequencies respectively. The primary reason why only two peaks are visibly dominant in the frequency plot is the lock-in established between the forced rotary oscillation and the vortex shedding frequency ($St_{vs} \approx St_{r}$ due to lock-in at the natural frequency of the system).
At frequencies lower than $f_{tr} = 1.0$ (Fig \ref{fig:_1,3_} and \ref{fig:_1,4_}), the contribution to the lift amplitude (noted from Fig \ref{fig:_1,3_Cl} and \ref{fig:_1,4_Cl}) attributable to the frequency $St_{tv}$ is observed to be roughly $16\%$, and that attributable to $St_{r}$ is close to $83\%$.
For $f_{tr}=1.0$, as seen from Fig \ref{fig:Broad Peak}, a single dominant broad peak appears, comprising all three frequencies (all close to the $St_{n}$ of the system).
At frequencies higher than $f_{tr}=1.0$ (Fig \ref{fig:_1,5_}, \ref{fig:_1,1_} and \ref{fig:_1,2_}), the contribution of the transverse oscillations to the lift was observed to be higher. At $f_{tr}=2.0$ (Fig \ref{fig:_1,1_}), the contribution to the lift due to the transverse oscillations was found to be as high as $36\%$, with the rotary oscillation and vortex shedding frequencies together constituting $52\%$ of the lift amplitude (Fig \ref{fig:_1,1_Cl}). As $f_{tr}$ increases further to 3.0 (Fig \ref{fig:_1,2_}), the transverse oscillations dominate the lift, contributing as much as $60\%$ of the lift amplitude (Fig \ref{fig:_1,2_Cl}).
\afterpage{
\begin{figure}[hp]
\begin{subfigure}[b]{10cm}
\centering
\includegraphics[scale=0.05]{VSMode1.jpg}
\caption{Mode 1}
\label{fig:VSMode1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{10cm}
\centering
\includegraphics[scale=0.05]{VSMode2.jpg}
\caption{Mode 2}
\label{fig:VSMode2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{10cm}
\centering
\includegraphics[scale=0.05]{VSMode3.jpg}
\caption{Mode 3}
\label{fig:VSMode3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{10cm}
\centering
\includegraphics[scale=0.05]{VSMode4.jpg}
\caption{Mode 4}
\label{fig:VSMode4}
\end{subfigure}
\caption{Distinct Vortex Shedding Modes}
\end{figure}
}
Vortex shedding mode 2, with the rotary oscillation parameters $St_{r}=0.4$ and $A_{r}=2.0$, is characterized by the dissipation of vortices downstream without coalescence, as reported by Tokumaru and Dimotakis \cite{Tokumaru}. When transverse oscillations were imposed, the vortex shedding pattern was seen to consist of elliptic vortices attached to circular vortices, which dissipated further downstream, as seen from Fig (\ref{fig:VSMode2}). This mode was found to become less prominent as $f_{tr}$ was increased. At $f_{tr}=2.0$, only partial dissipation of vortices in the far downstream, together with shedding of vortices downstream, was observed, a deviation from the usual characteristics of this vortex shedding mode.
At $f_{tr}=0.9$ and $f_{tr}=0.95$, the frequency peaks for the lift were observed to be prominent for both $St_{tv}$ and $St_{r}$, as observed from Fig \ref{fig:_2,3_} and \ref{fig:_2,4_}, with approximately $50\%$ contribution from each towards the net lift amplitude generated (Fig \ref{fig:_2,3_Cl} and \ref{fig:_2,4_Cl}). These peaks belong to $St_{tv}$ and $St_{r}$ because of the lock-in achieved by $St_{tv}$ with $St_{vs}$: since in both cases $St_{tv}$ and $St_{n}$ lie in close proximity, a common peak representing the two is visible in Fig \ref{fig:_2,3_} and \ref{fig:_2,4_}. The lift amplitude generated in this case is significantly lower than in the earlier mode, because in mode 1 the lock-in was established by $St_{r}$ with $St_{n}$.
As the transverse oscillation frequency ratio was increased to $f_{tr}=1.5$ and $f_{tr}=2.0$ (Fig \ref{fig:_2,5_} and \ref{fig:_2,1_}), the transverse oscillation peaks, as in the previous mode, were seen to account for the majority of the lift amplitude generation (Fig \ref{fig:_2,5_Cl} and \ref{fig:_2,1_Cl}). The contributions towards lift generation (Fig \ref{fig:_2,1_Cl} and \ref{fig:_2,2_Cl}) from the $St_{tv}$ values corresponding to $f_{tr}=2.0$ and $f_{tr}=3.0$ (Fig \ref{fig:_2,1_} and \ref{fig:_2,2_}) were found to be $75-80\%$.
Vortex shedding mode 3 is uniquely characterized by vortex generation synchronous with the imposed rotational forcing. The smaller vortices generated merge with larger ones in the near wake to form a multi-polar vortex structure, owing to the phase lag between the immediate and far wake. With rotary oscillation parameters $St_{r}=0.8$ and $A_{r}=2.0$, $f_{tr}$ was varied from 0.9 to 3.0 with $A_{tv} = 0.2$. This mode was found to become further enhanced when the cylinder was subjected to transverse oscillations, since the vortices generated by the rotary and transverse oscillations merged better due to the greater phase lag between the shear layers and the far wake. At $f_{tr}=0.9$ and $0.95$, the frequencies $St_{tv}$ and $St_{r}$ each contribute $50\%$ to the net lift magnitude (Fig \ref{fig:_3,3_Cl} and \ref{fig:_3,4_Cl}), as seen from the frequency characteristics in Fig \ref{fig:_3,3_} and \ref{fig:_3,4_}, similar to the observations in mode 2.
As $St_{tv}$ increases to higher values at $f_{tr}=1.5$, $2.0$ and $3.0$, the numerical experiments show that its contribution to the lift amplitude (Fig \ref{fig:_3,5_Cl}, \ref{fig:_3,1_Cl} and \ref{fig:_3,2_Cl}) increases up to $70\%$, with the rotary oscillation frequency contributing about $5-6\%$, as seen from Fig \ref{fig:_3,5_}, \ref{fig:_3,1_} and \ref{fig:_3,2_}.
Vortex shedding mode 4, with the rotary oscillation parameters taken as $St_{r}=0.8$ and $A_{r}=0.45$, is characterized by the generation of small-scale vortices in the shear layers near the surface of the cylinder, as seen in Fig. \ref{fig:VSMode4}. The primary difference between vortex shedding modes 3 and 4 is the vortex generation synchronized with the rotary oscillation in the former. The vortex shedding pattern in the downstream wake for mode 4 resembles that of flow past a stationary cylinder. In the numerical experiments, two major peaks emerge in the lift frequency characteristics at $f_{tr}=0.9$ and $0.95$ due to the lock-in between the frequencies $St_{tv}$ and $St_{n}$ (again in close proximity), as seen in Fig. \ref{fig:_4,3_} and \ref{fig:_4,4_}. The contributions of the transverse and rotary oscillations towards lift generation were found to be equally distributed, as seen in the second and third modes earlier.
At high transverse frequency ratios $f_{tr}=2.0$ and $3.0$ (Fig. \ref{fig:_4,1_} and \ref{fig:_4,2_}), the contribution of the transverse oscillations was observed to reach $98\%$. The reason is that $A_{r}$ was taken significantly lower here than in all the other modes, while $A_{tv}$ remained the same.
\newpage
\begin{figure}[hp]
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_1,3_.jpg}
\caption{$f_{tr}=0.9$,$St_{r}=0.165$}
\label{fig:_1,3_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_1,4_.jpg}
\caption{$f_{tr}=0.95$,$St_{r}=0.165$}
\label{fig:_1,4_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{BroadPeakResonance.jpg}
\caption{$f_{tr} = 1.0$,$St_{r} = 0.165$}
\label{fig:Broad Peak}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_1,5_.jpg}
\caption{$f_{tr} = 1.5$,$St_{r} = 0.165$}
\label{fig:_1,5_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_1,1_.jpg}
\caption{$f_{tr} = 2.0$,$St_{r} = 0.165$}
\label{fig:_1,1_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.055]{_1,2_.jpg}
\caption{$f_{tr} = 3.0$,$St_{r} = 0.165$}
\label{fig:_1,2_}
\end{subfigure}
\caption{Frequency Characteristics of Lift at various transverse oscillation frequencies (Mode 1)}
\end{figure}
\begin{figure}[hp]
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_1,3_Cl.jpg}
\caption{$f_{tr}=0.9$,$St_{r}=0.165$}
\label{fig:_1,3_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_1,4_Cl.jpg}
\caption{$f_{tr}=0.95$,$St_{r}=0.165$}
\label{fig:_1,4_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_1,5_Cl.jpg}
\caption{$f_{tr} = 1.5$,$St_{r} = 0.165$}
\label{fig:_1,5_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_1,1_Cl.jpg}
\caption{$f_{tr} = 2.0$,$St_{r} = 0.165$}
\label{fig:_1,1_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.04]{_1,2_Cl.jpg}
\caption{$f_{tr} = 3.0$,$St_{r} = 0.165$}
\label{fig:_1,2_Cl}
\end{subfigure}
\caption{Lift Coefficient at various transverse oscillation frequencies (Mode 1)}
\end{figure}
\begin{figure}[hp]
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_2,3_.jpg}
\caption{$f_{tr}=0.9$,$St_{r}=0.4$}
\label{fig:_2,3_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_2,4_.jpg}
\caption{$f_{tr}=0.95$,$St_{r}=0.4$}
\label{fig:_2,4_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_2,5_.jpg}
\caption{$f_{tr} = 1.5$,$St_{r} = 0.4$}
\label{fig:_2,5_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_2,1_.jpg}
\caption{$f_{tr} = 2.0$,$St_{r} = 0.4$}
\label{fig:_2,1_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.055]{_2,2_.jpg}
\caption{$f_{tr} = 3.0$,$St_{r} = 0.4$}
\label{fig:_2,2_}
\end{subfigure}
\caption{Frequency Characteristics of Lift at various transverse oscillation frequencies (Mode 2)}
\end{figure}
\begin{figure}[hp]
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_2,3_Cl.jpg}
\caption{$f_{tr}=0.9$,$St_{r}=0.4$}
\label{fig:_2,3_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_2,4_Cl.jpg}
\caption{$f_{tr}=0.95$,$St_{r}=0.4$}
\label{fig:_2,4_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_2,5_Cl.jpg}
\caption{$f_{tr} = 1.5$,$St_{r} = 0.4$}
\label{fig:_2,5_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_2,1_Cl.jpg}
\caption{$f_{tr} = 2.0$,$St_{r} = 0.4$}
\label{fig:_2,1_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.04]{_2,2_Cl.jpg}
\caption{$f_{tr} = 3.0$,$St_{r} = 0.4$}
\label{fig:_2,2_Cl}
\end{subfigure}
\caption{Lift Coefficient at various transverse oscillation frequencies (Mode 2)}
\end{figure}
\begin{figure}[hp]
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_3,3_.jpg}
\caption{$f_{tr}=0.9$,$St_{r}=0.8$}
\label{fig:_3,3_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_3,4_.jpg}
\caption{$f_{tr}=0.95$,$St_{r}=0.8$}
\label{fig:_3,4_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_3,5_.jpg}
\caption{$f_{tr} = 1.5$,$St_{r} = 0.8$}
\label{fig:_3,5_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_3,1_.jpg}
\caption{$f_{tr} = 2.0$,$St_{r} = 0.8$}
\label{fig:_3,1_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.055]{_3,2_.jpg}
\caption{$f_{tr} = 3.0$,$St_{r} = 0.8$}
\label{fig:_3,2_}
\end{subfigure}
\caption{Frequency Characteristics of Lift at various transverse oscillation frequencies (Mode 3)}
\end{figure}
\begin{figure}[hp]
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_3,3_Cl.jpg}
\caption{$f_{tr}=0.9$,$St_{r}=0.8$}
\label{fig:_3,3_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_3,4_Cl.jpg}
\caption{$f_{tr}=0.95$,$St_{r}=0.8$}
\label{fig:_3,4_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_3,5_Cl.jpg}
\caption{$f_{tr} = 1.5$,$St_{r} = 0.8$}
\label{fig:_3,5_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_3,1_Cl.jpg}
\caption{$f_{tr} = 2.0$,$St_{r} = 0.8$}
\label{fig:_3,1_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.04]{_3,2_Cl.jpg}
\caption{$f_{tr} = 3.0$,$St_{r} = 0.8$}
\label{fig:_3,2_Cl}
\end{subfigure}
\caption{ Lift Coefficient at various transverse oscillation frequencies (Mode 3)}
\end{figure}
\begin{figure}[hp]
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_4,3_.jpg}
\caption{$f_{tr}=0.9$,$St_{r}=0.8$}
\label{fig:_4,3_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_4,4_.jpg}
\caption{$f_{tr}=0.95$,$St_{r}=0.8$}
\label{fig:_4,4_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_4,5_.jpg}
\caption{$f_{tr} = 1.5$,$St_{r} = 0.8$}
\label{fig:_4,5_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.055]{_4,1_.jpg}
\caption{$f_{tr} = 2.0$,$St_{r} = 0.8$}
\label{fig:_4,1_}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.055]{_4,2_.jpg}
\caption{$f_{tr} = 3.0$,$St_{r} = 0.8$}
\label{fig:_4,2_}
\end{subfigure}
\caption{Frequency Characteristics of Lift at various transverse oscillation frequencies (Mode 4)}
\end{figure}
\begin{figure}[hp]
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_4,3_Cl.jpg}
\caption{$f_{tr}=0.9$,$St_{r}=0.8$}
\label{fig:_4,3_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_4,4_Cl.jpg}
\caption{$f_{tr}=0.95$,$St_{r}=0.8$}
\label{fig:_4,4_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_4,5_Cl.jpg}
\caption{$f_{tr} = 1.5$,$St_{r} = 0.8$}
\label{fig:_4,5_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\includegraphics[scale=0.04]{_4,1_Cl.jpg}
\caption{$f_{tr} = 2.0$,$St_{r} = 0.8$}
\label{fig:_4,1_Cl}
\end{subfigure}
\hfill
\begin{subfigure}[b]{8cm}
\centering
\includegraphics[scale=0.04]{_4,2_Cl.jpg}
\caption{$f_{tr} = 3.0$,$St_{r} = 0.8$}
\label{fig:_4,2_Cl}
\end{subfigure}
\caption{Lift Coefficient at various transverse oscillation frequencies (Mode 4)}
\end{figure}
\clearpage
\subsection{Variation of aerodynamic coefficients due to transverse oscillations for various vortex shedding modes}
~\\
For vortex shedding frequencies belonging to modes 2 and 3, a similar variation of $\overline{c}_{d}$ and $c_{l,rms}$ was observed. From Fig. \ref{fig:Mode2} and \ref{fig:Mode3}, upon reaching transverse frequency ratios near $f_{tr} = 1.0$, $\overline{c}_{d}$ increases and reaches a maximum, as also observed in the works of Singh et al \cite{Singh} and Pham et al \cite{Pham}. For frequencies beyond this synchronization region, $\overline{c}_{d}$ reduces and then increases monotonically for transverse oscillation frequencies past $f_{tr} = 1.25$. For mode 4 (Fig. \ref{fig:Mode4}), the increase in $\overline{c}_{d}$ was observed from $f_{tr} = 1.5$ onwards. This behavior was distinct in the latter due to the dominating effect of the forced transverse oscillations over the rotational oscillations, resulting from the lower rotational oscillation amplitude taken in this mode compared with all the other modes. $c_{l,rms}$ increases monotonically with $f_{tr}$ for modes 2, 3 and 4. For vortex shedding mode 4, the same trend as in modes 2 and 3 was observed, with a descent in $\overline{c}_{d}$ followed by a monotonic increase for frequencies beyond this region. For mode 1 (Fig. \ref{fig:Mode1}), contrary to all the other modes, $\overline{c}_{d}$ and $c_{l,rms}$ decrease in the synchronization region near $f_{tr} = 1.0$, followed by a monotonic increase.
\section{Conclusion}
Upon validation, the developed ALE-CFR scheme yielded good agreement for transverse and rotational oscillations imposed independently on the cylinder. The scheme was then employed to solve the more complex flow past a cylinder oscillating in both modes simultaneously, and interesting phenomena were observed. The vortex shedding modes were found to retain their distinguishing characteristics in the near wake even after forced transverse oscillations were imposed. The vertical extent of the vortex shedding region was observed to increase as a result of the transverse oscillations. Vortex shedding mode 3 appeared to become more prominent as the transverse oscillation frequency was increased, due to a greater phase lag between the near and far wake, as reported in Singh et al \cite{Singh}.
Similar to the phenomenon reported by Cheng et al \cite{Cheng}, lock-in occurred when $St_{r} \approx St_{n}$, as seen for mode 1, and when $St_{tv} \approx St_{n}$ for the transverse frequency ratios $f_{tr}=0.9$ and $0.95$ in all the modes.
The peaks in the frequency characteristics of the lift coefficient show that the dominant frequencies in the non-lock-in regions correspond to the transverse oscillation, rotary oscillation and vortex shedding frequencies, with the first two dominating.
The majority of the lift contribution was accounted for by the transverse oscillations at higher frequencies, even though $A_{tv}$ was only $10\%$ of $A_{r}$. In the lock-in regions, however, the lift was found to remain largely governed by the oscillation frequency chosen close to the natural vortex shedding frequency. For modes 2, 3 and 4, $\overline{c}_{d}$ was found to increase near $f_{tr}=1.0$, then reduce, and remain moderately variable for higher transverse oscillation frequencies. For mode 1, $c_{l,rms}$ and $\overline{c}_{d}$ decreased in the frequency region near $f_{tr}=1.0$ and then continued to increase for higher transverse oscillation frequencies. The reason for this variation remains a topic for future studies.
\afterpage{
\begin{figure}[h]
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.04]{PlotMode1.jpg}
\caption{Mode1}
\label{fig:Mode1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.04]{PlotMode2.jpg}
\caption{Mode2}
\label{fig:Mode2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.04]{PlotMode3.jpg}
\caption{Mode3}
\label{fig:Mode3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{6.2cm}
\centering
\includegraphics[scale=0.04]{PlotMode4.jpg}
\caption{Mode4}
\label{fig:Mode4}
\end{subfigure}
\caption{Variation of aerodynamic coefficients for different transverse oscillation frequencies}
\end{figure}
}
\section{Introduction}
After almost one century of Relativity, many pre-relativistic prejudices
(Baconian ''idola'') still survive. One of the most tenacious is the idea
that, after giving up Newton's absolute space and absolute time (the 3+1
absolute splitting), every observer (or ''reference frame'') possesses in
any case its own private space and its own private time, i.e. its private
{\it extended }3+1 splitting. However, it should be known from the General
Theory of Relativity that this can be true only for the class of the {\it %
extended reference frames defined by a congruence }$\Gamma ${\it \ of
timelike worldlines, living in general Riemannian space-times, for which the
''vortex tensor'' \cite{cattaneo} vanishes}\footnote{%
In this case the reference frame is said to be {\it time-orthogonal }and
{\it geodesic} \cite{cattaneo}. An obvious (but not trivial) example is a
non-accelerated physical frame in a static gravitational field, for instance
in a Schwarzschild space-time.}. Let us point out that this class includes
the important subclass of the {\it extended inertial frames} {\it living in
Minkowskian space-times} (space-times with vanishing Riemann tensor).
As a consequence, in any reference frame for which the vortex tensor differs
from zero, the concept of ''the whole physical space at a given instant''
turns out to be conventional, in the sense that it lacks an operational
meaning because of the impossibility of a symmetrical {\it and transitive}
synchronization procedure at large\footnote{%
Such procedure is possible only locally, according to the principle of local
equivalence between an accelerated observer and an instantaneously comoving
inertial observer. In other words, the topology of spacetime insures the
possibility of a {\it local (but only local) }3+1 splitting.}.
In this paper, we shall deal with a very interesting case, where the naive
assumption of the existence of a ''physical space'' in a reference frame for
which the vortex tensor is different from zero leads to paradoxical results.
This is the case, widely studied in the literature but often plagued by
serious misunderstandings, of a disk uniformly rotating in a Minkowskian
space-time. This problem has been treated by various authors with different
approaches (the difference being essentially in the definition of space and
time on the disk), all of them (with only a few exceptions, like Cantoni
\cite{cantoni}, Anandan\cite{anandan} and Mashhoon\cite{mashhoon}) sharing a
crucial point which, as we shall see, contains a fundamental element of
ambiguity: {\it the circumference of the disk is treated as a geometrically
well defined entity, that possesses a well defined length}\footnote{%
An example is the approach by Landau and Lifshitz who move from the
(apparently trivial) remark: ''Let us consider two reference frames one of
them (K) being inertial, the other (K') uniformly rotating with respect to K
about the common z axis. A circumference on the xy plane of reference K
(centered at the origin of the coordinates) can be considered as a
circumference in the x'y' plane of reference K'.'' (cfr. \cite{landau}, \S\
82).} without worrying about the fact that no transitive synchronism exists
along the said circumference. Further on they diverge: (i) on the measure of
such a length; (ii) on the time unit used to evaluate the velocity of
(massive or massless) particles in uniform motion along the said
circumference (we do not consider here other essentially derived topics,
such as for instance the space metric on the disk).
To fix ideas, let $K$ be an inertial frame, and $K_{o}$ a rigid
circular platform rotating with constant angular velocity $\omega $ with
respect to $K$ (in the following, all quantities valued in $K_o$ will be
indicated by the suffix $_o$; all quantities valued in $K$ will be indicated
without any suffix). Almost all authors consider two circumferences: (i) the
rim of the platform, as seen in $K_o$; (ii) the set of the positions of the
points of the rim in $K$; and assume that these circumferences are
geometrically (of course not kinematically) identical. Let $R_o$, $R$, be
the lengths of the radii of the two circumferences, as seen in $K_o$, $K$
respectively; $L_o$, $L$ the lengths of the circumferences; $t_o$ an
interval of proper time of a clock $C_\Sigma $ at rest on the rim, and $t$
the corresponding interval of time in $K$, as measured by two different
clocks at rest in $K$, and there synchronized.
The relationships between the above quantities have the very general form
\begin{eqnarray}
R &=&R_oF_R(v,a) \label{rlt} \\
L &=&L_oF_L(v,a) \nonumber \\
t &=&t_oF_t(v,a) \nonumber
\end{eqnarray}
where $F_R$, $F_L$, $F_t$ are generic functions of the peripheral velocity $%
v=\omega R$ (which of course is assumed to be lower than $c$) and possibly
of the centrifugal acceleration $a=v^2/R=\omega ^2R$ (although a
dependence on the acceleration is not expected in the standard theory of
Relativity).
According to the Special Theory of Relativity (SRT), all authors agree about
$F_R$, $F_L$ and $F_t$:
\begin{equation}
F_R=1 \label{fl0}
\end{equation}
\begin{equation}
F_L=\gamma ^{-1}(v)\equiv \left( 1-\beta ^2\right) ^{1/2} \label{fl1}
\end{equation}
\begin{equation}
F_t=\gamma (v)\equiv \left( 1-\beta ^2\right) ^{-1/2} \label{fl2}
\end{equation}
($\beta \equiv v/c$).
The assumptions (\ref{fl0}), (\ref{fl1}), (\ref{fl2}) come from the
consideration that: (i) the Lorentz contraction acts only on the periphery
and not on the radius of the disk; (ii) any clock $C_\Sigma $ at rest on the
rim of the disk undergoes the Lorentz dilation of time intervals, as
measured in $K$.
However, not all authors agree about the numerical value of the ratios $%
L_o/R_o$ and $L/R$. As a matter of fact, two different assumptions are found
in the literature:
\begin{equation}
\frac{L_o}{R_o}=2\pi \label{ratio1}
\end{equation}
\begin{equation}
\frac LR=2\pi \label{ratio2}
\end{equation}
The assumption (\ref{ratio1}) - see for instance \cite{cavalleri} and
references therein - comes from the consideration that, since the proper
length $dL_{o}$ of an infinitesimal element of the rotating circumference
does not change when the disk passes from rest to rotation (all proper
quantities are invariant), {\it the same should happen for the entire
circumference: this gives }$L_{o}=2\pi R_{o}=2\pi R${\it , which is
interpreted as the proper length of the circumference.} A puzzling, but
unavoidable, consequence is the following: the ratio $L/R$, as measured in $K
$, is less than $2\pi $, thus violating the Euclidean geometry of $K$
(remember that $K$ is an inertial frame!). This is the well known Ehrenfest
paradox \cite{ehrenfest}. The only way to maintain the Euclidean geometry of
$K$, when $L<2\pi R$, consists in introducing a further ad hoc
hypothesis, namely that the surface of the disk bends in a suitable way
because of the rotation. If such an ad hoc hypothesis is
rejected, on the basis both of kinematical and dynamical considerations%
\footnote{%
The kinematical consideration is the following: the bending of the rotating
disk, obviously not-symmetric with respect to the plane of the disk when it
is not rotating, determines a skew sense in space, thus violating the
spatial parity of $K$ {\it on a purely kinematical basis}.}, the Ehrenfest
paradox cannot be solved, ''from a purely kinematic point of view'': this is
the conclusion of Cavalleri \cite{cavalleri}, who ends up with the statement
that ''the relativistic kinematics for extended bodies is not generally
self-consistent'' (and suggests an ''intrinsically dynamical'' solution of
the paradox such as the one invoked by Dieks\cite{dieks}).
The assumption (\ref{ratio2}) comes from the consideration that the edge $%
\gamma _o$ of the platform, as seen in $K_o$, and the set $\gamma $
of the positions of the points of such edge in $K$, {\it are two
circumferences geometrically }(although of course not kinematically) {\it %
identical}. Moreover: (i) the circumference $\gamma $, as seen in $K$, must
have the length $L=2\pi R$ , according to the Euclidean geometry of $K$;
(ii) since the circumference $\gamma _o$, as seen in $K_o$, is not changed,
but the unit (infinitesimal) rod is changed by a factor $\gamma ^{-1}(v)$
(Lorentz contraction), then the length of the circumference $\gamma _o$,
as measured in $K$, turns out to be increased by a factor of $\gamma $.
Therefore $L_o=2\pi R\gamma (v)>2\pi R$ : which shows that the geometry of
the rotating disk is not Euclidean. This is the most widespread assumption
(see for instance Einstein \cite{einstein}, Arzeli\`{e}s \cite{arzelies},
Landau and Lifshitz \cite{landau}, M\o ller \cite{moller}, etc.).
We are not going to comment further on these topics, but simply remark that
the widespread and apparently innocent assumption that the length of a round
trip along the border of the rotating disk coincides with that of a
univocally defined geometric object, unavoidably leads to the paradox
pointed out by Selleri \cite{selleri}, whose argument we shall summarize in
the next section. A great many authors did not realize this fact.
We shall show that a correct and thorough application of the Special Theory
of Relativity dissipates any ambiguity, by giving up the prejudice of the
uniqueness of the length of a round trip about the axis of a rotating turntable.
\section{Selleri's paradox}
Recently, an interesting paper by Selleri \cite{selleri} shows that:
a) {\it under the only }(apparently obvious and, as a matter of fact,
universally shared){\it \ assumption that a round trip corresponds to a
geometrically well defined entity, whose equally well defined length be }$%
L_o $ (about its measure any hypothesis is anyhow avoided){\it ;}
b) {\it independently of any of the particular assumptions mentioned in the
introduction; }
c) {\it more generally, independently of any particular choice about the
functions }$F_L$ {\it and }$F_t${\it ; }
{\it an unavoidable paradox actually arises in the standard Special
Relativity Theory applied to rotating platforms}.
Selleri considers a light source $\Sigma $ placed in a fixed position near
the clock $C_\Sigma $ at rest on the rim of the rotating disk, and two light
flashes leaving $\Sigma $ in opposite directions at the (proper) time $t_{o1}$
of $C_\Sigma $, grazing a cylindrical mirror on the rim. The
counter-rotating and the co-rotating light signals come back to $\Sigma $
respectively at the (proper) times $t_{o2}$ and $t_{o3}$ of $C_\Sigma $
(these times are different because the light signals, as seen in the
inertial frame $K$, travel with the same velocity $c$ along two paths which
are different because of the rotation of the disk). On the other hand,
according to the previous assumption, the two paths should be identical,
with the same length $L_o$, in the frame $K_o$ of the disk. Then
the velocities $c_{o-}$, $\,c_{o+}$ of the counter-rotating and co-rotating
light signals, as seen in $K_o$, should be different:
\begin{equation}
c_{o-}=\frac{L_o}{t_{o2}-t_{o1}}\quad ;\quad \,\,\,\,c_{o+}=\frac{L_o}{%
t_{o3}-t_{o1}} \label{c}
\end{equation}
(since $K_o$ is not an inertial frame, ''there is no reason to demand that
the speed of light be the same eastward and westward''\cite{peres}). If $%
c_{o-}$, $\,c_{o+}$ are expressed in terms of kinematical quantities on $K$,
the functions $F_L$, $F_t$ appear. But if the ratio $c_{o-}/\,c_{o+}$ is
considered, {\it the functions }$F_L${\it , }$F_t${\it \ disappear}, and the
final result
\begin{equation}
\rho \equiv \frac{c_{o-}}{c_{o+}}=\frac{1+\beta }{1-\beta } \label{ro}
\end{equation}
($\beta \equiv \omega R/c$) is obtained. Now, if we consider the class of
rotating disks having the same peripheral velocity $\omega R$ and arbitrary
centrifugal acceleration $a=\omega ^2R$, the observable quantity $\rho $
given by expression (\ref{ro}) is constant for increasing radius ($%
R\rightarrow \infty $) and decreasing acceleration ($a\rightarrow 0$). Here
the paradox arises. In fact, uniform motion at any speed whatsoever may be
thought of as the limit of the motion on the rim of a disk of infinite
radius and infinitesimal acceleration; this means that, in the limit case of
null acceleration, the neighborhood of the light source $\Sigma $ on the
disk should be indistinguishable from an inertial frame. As a consequence,
such a neighborhood should be related to $K$ by a standard Lorentz
transformation, and the ratio $\rho $ between the speeds of the forward and
backward moving light rays should be exactly 1. Selleri concludes that SRT
gives rise to a discontinuity in the function $\rho (a)$ for $a\rightarrow 0$%
, and claims that such a discontinuity is inadmissible ''because our
empirical knowledge about inertial systems was actually obtained in frames
with small but non zero acceleration, e.g. because of Earth rotation''.
Since the calculations of Selleri are quite careful, the paradox cannot be
avoided as long as the assumption (a), which states that the round trip on the
turntable corresponds to a well defined circumference whose length is
univocally defined, is maintained. Of course, such a paradox would be lethal
for the self-consistency of the SRT; as a consequence, Selleri suggests that the SRT
should be abandoned and that the isotropy of the speed of light exists in
only one privileged reference frame, according to an idea he already
proposed elsewhere \cite{selleri1}.
Once again, let us stress that, according to proposition (b), eq. (\ref{ro})
does not depend on the particular choice of the functions $F_L$ and $F_t$;
that is why eq. (\ref{ro}) coincides with the classical
result, corresponding to the Galilean velocity composition rule (which can
be obtained assuming $F_L=F_t=1$).
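Proposition (b) can be checked directly. The following minimal numerical
sketch (in Python; the particular functional forms chosen for $F_L$ and
$F_t$, as well as the values $c=R=1$ and $\beta =0.3$, are merely
illustrative assumptions) shows that $\rho $ is insensitive to $F_L$ and
$F_t$, as long as a unique length $L_o$ is attributed to both round trips:
\begin{verbatim}
import math

def selleri_ratio(beta, F_L, F_t):
    # Round-trip times in the inertial frame K (units c = R = 1):
    # the co-/counter-rotating rays close their loops after the
    # K-times 2*pi/(1 - beta) and 2*pi/(1 + beta), respectively.
    t_co = 2 * math.pi / (1 - beta)
    t_counter = 2 * math.pi / (1 + beta)
    # Platform quantities for *generic* F_L, F_t:
    # L = L_o F_L  and  t = t_o F_t  (with L = 2*pi*R in K).
    L_o = 2 * math.pi / F_L(beta)
    to_co = t_co / F_t(beta)
    to_counter = t_counter / F_t(beta)
    return (L_o / to_counter) / (L_o / to_co)   # rho = c_o- / c_o+

beta = 0.3
galilean = (lambda b: 1.0, lambda b: 1.0)
srt = (lambda b: math.sqrt(1 - b*b), lambda b: 1/math.sqrt(1 - b*b))
for F_L, F_t in (galilean, srt):
    print(selleri_ratio(beta, F_L, F_t), (1 + beta)/(1 - beta))
\end{verbatim}
Both choices print the same value of $\rho $, equal to
$(1+\beta )/(1-\beta )$: the functions $F_L$ and $F_t$ cancel in the ratio.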
{\it Remark: the anisotropy of the light velocity and the ''hypothesis of
locality''.}
Selleri stresses that eq. (\ref{ro}) ''does not only give the ratio
of the two global light velocities for a full trip around the platform in
the two opposite directions, but also the local ratio as well''.
This is consistent with the assumed symmetries of the disk, in particular
with the assumption of homogeneity of space along the rim; but conflicts
with the ''hypothesis of locality'' \cite{reichenbach},\cite{mashhoon}%
\footnote{%
''Hypothesis of locality'' is the expression used by Mashhoon to name one of
the most important axioms of Relativity Theory, which states the local
equivalence of an accelerated observer with a momentarily comoving inertial
observer (provided standard clocks and rods are used).}, according to which
the speed of light, as measured{\it \ locally} by means of standard rods and
clocks at rest in $K_o$ (in an infinitesimal neighborhood of the
light source $\Sigma $) should be exactly the same as that observed in the
local inertial frame, the latter being $c$ in both directions.
{\it Remark: eq.(\ref{ro}) in some previous relativistic approaches.}
Eq. (\ref{ro}) has actually been obtained by many authors in
apparently relativistic contexts. Landau and Lifshitz \cite{landau}, \S\ 89,
and some other authors - more or less explicitly: see e.g. Arzeli\`{e}s \cite
{arzelies}, \S\ 115 - derived formulas equivalent to (\ref{ro}) at first
order in $\beta $. The underlying idea is that, since no transitive
synchronization procedure exists for the turntable in motion, the best time
to be introduced on the disk is not the (proper) time measured by real
clocks on it, but the time of the inertial frame $K$ (sometimes called
''universal time'' \cite{landau} or, more appropriately, ''central time''
\cite{arzelies}). From an operational point of view, this definition of time
means that any clock on the disk should not show its proper time, but the
time of the clock of $K$ over which it happens to be located at a given
instant \footnote{%
This is equivalent to a rescaling of the proper time at any point on the
disk by a factor of $\gamma ^{-1}$.}. This way the simultaneity criterion in
$K_{o}$ is borrowed from $K$. As a consequence: (i) the spatial section of
the reference frame of the disk is the (Euclidean) 2-plane $x^{0}\equiv
ct=const,\,x^{3}\equiv \,z=const$; (ii) the coordinate transformations
between $K$ and $K_{o}$ take on the following Galilean-type form:
\begin{equation}
\left\{
\begin{array}{c}
x_{o}^{0}=x^{0} \\
x_{o}^{1}=x^{1} \\
x_{o}^{2}=x^{2}+\omega t=x^{2}+\frac{\omega }{c}x^{0} \\
x_{o}^{3}=x^{3}
\end{array}
\right. \label{trasf}
\end{equation}
where $x^1,\,x^2,x^3$ and $\,x_o^1,\,x_o^2,x_o^3$ are the cylindrical
coordinates $r,\,\theta ,z$ and $r_o,\,\theta _o,z_o$ in $K,\,K_o$
respectively.
Since the total round trip times for co-rotating and counter-rotating light
signals are not the same, {\it but the length of the two round trips in }$%
K_o ${\it \ is assumed to be the same (as being related to a univocally
defined geometric object)}, the velocity of light should be different for
the two paths, and eq. (\ref{ro}) follows. Such a velocity seems to be
considered by Landau and Lifshitz \cite{landau} as a physical quantity,
because the physical time $t_o$ of $K_o$ differs from the ''universal time''
$t$ of $K$ only by terms of second order in $\beta $.
The same result (\ref{ro}) is obtained by Peres \cite{peres} in the full
SRT, under the same underlying assumptions. It is unclear whether the
velocity of light is just a coordinate velocity without a physical meaning,
or is actually physical: in the latter case it is tacitly assumed that
$F_L=F_t=1$, and the relativistic approach is only apparent.
Anyway, the point we would stress is that, {\it in any case}, every
(classical or ''relativistic'') approach, based on the hypothesis that the
length of a round trip is related to a univocally defined geometric object,
unavoidably leads to eq. (\ref{ro}); but no author, before Selleri,
realized its paradoxical consequences.
\section{Geometry of motion in 2+1 dimensions}
Let $K_o$ be a rigid circular platform, rotating with constant angular
velocity $\omega $ with respect to an inertial frame $K$, bearing a coaxial
cylindrical mirror with a light source $\Sigma $ just inside the mirror
surface. Two light rays are sent by the source in opposite directions along
the surface of the mirror. All the masses are assumed to be negligible.
What happens is easily described in 2+1 dimensions. For convenience,
we shall use polar coordinates $x^0\equiv ct$, $x^1\equiv r$, $x^2\equiv
\theta $, where $r$ is the distance from the rotation axis and $\theta $
the rotation angle, as measured in $K$. In these coordinates, the metric
tensor takes the simple form
\[
g_{\mu \nu }=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -r^2
\end{array}
\right)
\]
If $R$ is the value of $r$ for the source $\Sigma $ (and also the radius of
the mirror), the world line of $\Sigma $ (which is placed at a fixed point
on the platform) is a timelike helix, say $\gamma _\Sigma $, that wraps
around the cylinder representing the disk in $2+1$ dimensions. The
parametric equations of $\gamma _\Sigma $, in the coordinates $x^0\equiv ct$,
$x^1\equiv r$, $x^2\equiv \theta $ of $K$, are
\[
\left\{
\begin{array}{c}
x^0\equiv ct \\
x^1\equiv r=R \\
x^2\equiv \theta =\omega t
\end{array}
\right.
\]
Eliminating the parameter $t$, they become
\begin{equation}
\left\{
\begin{array}{c}
x^0=\frac c\omega \theta \\
x^1=R
\end{array}
\right. \label{gammasigma}
\end{equation}
Notice that the length of the helix $\gamma _\Sigma $ is an observable
quantity, namely (apart from a factor $c$) the proper time measured by a clock $%
C_\Sigma $ carried by the source $\Sigma $. Its line element, expressed in
units of time, is
\begin{eqnarray}
d\tau &=&\frac 1c\sqrt{g_{\mu \nu }dx^\mu dx^\nu }=\frac 1c\sqrt{%
c^2dt^2-R^2d\theta ^2}= \label{elica} \\
&=&\frac 1c\sqrt{\frac{c^2}{\omega ^2}d\theta ^2-R^2d\theta ^2}=\frac{%
d\theta }\omega \sqrt{1-\beta ^2} \nonumber
\end{eqnarray}
We see the time dilation at work.
Two light beams, emitted at the same time $t=0$ when the source $\Sigma $
(physically realized by means of two instruments, an electromagnetic source
and a beam splitter) is in $(0,R,0)$ (coordinates of $K$), travel along two
world lines which are null helixes, say $\gamma _{L_{\pm }}$ (the $+$ sign
holds for the co-rotating ray, the $-$ sign for the counter-rotating one).
The parametric equations of the helixes $\gamma _{L_{\pm }}$, in the
coordinates $x^0\equiv ct$, $x^1\equiv r$, $x^2\equiv \theta $ of $K$,
are
\[
\left\{
\begin{array}{c}
x^0\equiv ct \\
x^1\equiv r=R \\
x^2\equiv \theta =\pm \frac cRt
\end{array}
\right.
\]
from which, eliminating the parameter $t$:
\begin{equation}
\left\{
\begin{array}{c}
x^0=\pm R\theta \\
x^1=R
\end{array}
\right. \label{gammaluce}
\end{equation}
The two rays meet the source again at two different events which correspond
to the intersections between the null helixes $\gamma _{L_{\pm }}$ and the
timelike helix $\gamma _\Sigma $.
The (first) intersection between $\gamma _{L_{+}}$ and $\gamma _\Sigma $
(''absorption of the co-rotating photon by a detector placed in the same
place of the source $\Sigma $, after a complete round trip'') is found when
\begin{equation}
\frac c\omega \theta =R\left( \theta +2\pi \right) \label{uno}
\end{equation}
Analogously, the (first) intersection between $\gamma _{L_{-}}$ and $\gamma
_\Sigma $ (''absorption of the counter-rotating photon after a complete
round trip'') is found when
\begin{equation}
\frac c\omega \theta =-R\left( \theta -2\pi \right) \label{due}
\end{equation}
(of course the angle $\theta =\omega t$ is the rotation of $\Sigma $ in the
time interval $t$, everything being measured in $K$).
Eqs. (\ref{uno}), (\ref{due}) show that the two photons, emitted from the
source $\Sigma $ at the time $t=0$ and travelling in opposite directions,
are absorbed - after a complete round trip around the rim of the platform -
by the detector, placed in the same place of the source, when the angular
coordinates of $\Sigma $ - as measured in $K$ - are, respectively,
\begin{eqnarray}
\theta _{\Sigma _{+}} &=&\frac{2\pi \beta }{1-\beta } \label{angoli} \\
\theta _{\Sigma _{-}} &=&\frac{2\pi \beta }{1+\beta } \nonumber
\end{eqnarray}
Introducing these results into eq.(\ref{elica}), we see that the two
''absorption events'' happen at the following proper times of $\Sigma $
(times measured by the standard clock $C_\Sigma $, at rest on the platform
in $\Sigma $):
\begin{eqnarray}
\tau _{+} &=&\frac{\theta _{\Sigma _{+}}}\omega \sqrt{1-\beta ^2}=\frac{2\pi
\beta }\omega \sqrt{\frac{1+\beta }{1-\beta }} \label{tempo+-} \\
\tau _{-} &=&\frac{\theta _{\Sigma _{-}}}\omega \sqrt{1-\beta ^2}=\frac{2\pi
\beta }\omega \sqrt{\frac{1-\beta }{1+\beta }} \nonumber
\end{eqnarray}
and are separated by the proper time interval:
\begin{equation}
\delta \tau \equiv \tau _{+}-\tau _{-}=\frac{\theta _{\Sigma _{+}}-\theta
_{\Sigma _{-}}}\omega \sqrt{1-\beta ^2}=\frac{4\pi \beta ^2}{\omega \sqrt{%
1-\beta ^2}} \label{properlag}
\end{equation}
Taking into account the time dilation in the inertial frame $K$, the
corresponding universal time interval $\delta t$ is:
\begin{equation}
\delta t=\gamma \delta \tau =\frac{4\pi \beta ^2}{\omega \left( 1-\beta
^2\right) } \label{lag}
\end{equation}
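As a cross-check of eqs. (\ref{uno})--(\ref{lag}), the intersection
conditions can be solved numerically; a minimal sketch (in Python, with the
illustrative values $c=R=1$, $\beta =0.3$):
\begin{verbatim}
import math

c = R = 1.0
beta = 0.3
omega = beta * c / R

# Intersection angles of the null helixes with gamma_Sigma:
theta_p = 2*math.pi*beta / (1 - beta)  # solves (c/omega) th =  R (th + 2 pi)
theta_m = 2*math.pi*beta / (1 + beta)  # solves (c/omega) th = -R (th - 2 pi)
assert abs((c/omega)*theta_p - R*(theta_p + 2*math.pi)) < 1e-12
assert abs((c/omega)*theta_m + R*(theta_m - 2*math.pi)) < 1e-12

# Proper times of the two absorption events and their difference:
root = math.sqrt(1 - beta**2)
tau_p = (theta_p/omega) * root
tau_m = (theta_m/omega) * root
print(tau_p - tau_m)                       # proper-time lag
print(4*math.pi*beta**2 / (omega*root))    # same number, closed formula
print((tau_p - tau_m) / root)              # universal-time lag delta t
\end{verbatim}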
{\it Remark: rotation angles for the light rays.}
{\it \ }Let $\theta _{L_{+}}$, $\theta _{L_{-}}$ be the rotation angles (as
measured in $K$) of the co-rotating and counter-rotating light beams when
they are absorbed by the detector after a complete round trip. Then from
eqs. (\ref{angoli}):
\begin{eqnarray}
\theta _{L_{+}} &=&\theta _{\Sigma _{+}}+2\pi =\frac{2\pi }{1-\beta }
\label{angoli1} \\
\theta _{L_{-}} &=&\theta _{\Sigma _{-}}-2\pi =-\frac{2\pi }{1+\beta }
\nonumber
\end{eqnarray}
{\it Remark: the Sagnac effect. }
Eqs. (\ref{properlag}), (\ref{lag}) exactly coincide with the formulas which
are at the basis of the Sagnac effect, i.e. the eqs. (23), (22) of Post \cite
{post} (see also Stedman \cite{sagnac}, Dieks \cite{dieks} and, as it will
be more apparent in the next section, Anandan \cite{anandan}).
As known, the Sagnac effect is a shift of the interference fringes appearing
in a suitable interferometer, and is due to the time difference (given
either by eq. (\ref{properlag}) or by eq. (\ref{lag}), according to the
particular choice of the clock) between the arrivals on the detector of the
co-rotating and the counter-rotating light beam.
The classical explanation (see for instance Sagnac \cite{sagnac1913}), but
also many ''relativistic'' explanations (e.g. Peres \cite{peres}), ascribe
such a time difference to the anisotropy of light propagation along the rim
of the platform, due to rotation.
On the other hand, the true relativistic explanation, proposed by Anandan
\cite{anandan} in 1981 and here recovered, with many interesting
implications, ascribes such a time difference to the nonuniformity of time
on the rotating platform, and in particular to the ''time lag'' arising in
synchronizing clocks along the rim (see next section).
\begin{figure}[ph]
\caption{General view of the geometry of the rotating disk in 2+1 dimensions.
A few timelike helixes are drawn, belonging to the congruence which defines
the reference frame $K_o$ of the disk. The dashed line $\gamma _{L+}$
represents the co-rotating light beam.}
\end{figure}
\section{Lengths along the rim of the disk}
In order to compare the speeds of the light beams as seen on board the
platform, the lengths of the different trips are needed. To this end,
consider a given event taking place on the rim of the platform, e.g. the
event ''emission of two photons in opposite directions from the source $%
\Sigma $''. This event will be denoted by the symbol $\Sigma \left( 0\right)
$, partly recalling the one already used for the source/detector (of course
the two meanings of $\Sigma $ should be clear from the context, in order to
avoid any confusion).
The locus of events ''simultaneous to $\Sigma \left( 0\right) $'' is defined
without ambiguities in the class of the time-orthogonal and geodesic
reference frames (see \cite{cattaneo}), which contains the class of the
(extended or local) inertial{\it \ } reference frames.
In particular, the set of simultaneous events is univocally defined: (i) in
the {\it extended} inertial frame $K$ ; (ii) in an infinitesimal region of
the platform, containing $\Sigma $, which differs as little as we want from
the {\it local} inertial frame $K_{o}\left( \Sigma \right) $. When we start
from the event $\Sigma \left( 0\right) $ and move along the rim of the
platform, the simultaneity procedure, as defined in $K_{o}\left( \Sigma
\right) $, is transported, step by step, along the rim. As a result, the set
of events taking place on the rim and simultaneous to $\Sigma $ in $K_{o}$
(i.e. satisfying the condition $x_{o}^{0}=0$) is mapped, in the
(3-dimensional Euclidean) plot of the (2+1) Minkowskian space-time, into a
spacelike helical curve $\gamma _{S}$, starting from $\Sigma \left( 0\right)
$ and everywhere orthogonal to the timelike helix $\gamma _{\Sigma }$
(''history of the point $\Sigma $ at rest on the disk''), whose tangent
vector forms a constant angle $\alpha $ with respect to the $x^{0}$ axis of $%
K$. Orthogonality is shown in the plot (see figure 2) by the fact that the
angle between the tangent vector to $\gamma _{S}$ and a normal section of
the cylinder on which the helixes wrap is again everywhere the same $\alpha $%
\footnote{%
As is well known, a spacetime diagram is a topological map from a
Minkowskian space to a Euclidean space, which changes the metrical
relations (angles and lengths) in a well defined way; in particular, the
M-orthogonality between two directions is depicted in the diagram as an
E-symmetry with respect to the light cone.} :
\[
\alpha =\arctan \beta
\]
Notice that, when the platform does not rotate ($\beta =0$), the world line
of any point on its border is a straight vertical line and the locus of the
events simultaneous to $\Sigma $ is a {\it closed} curve, namely a
circumference (whose length coincides with the usual length of the contour
of the disk); but when the platform is set in motion ($\beta \neq 0$), the
world lines of the points on its border become timelike helixes\footnote{%
If the totality of the points of the platform is considered, then the
totality of their world lines is a congruence of timelike helixes, which
should be assumed as the only unambiguous definition of the reference frame $%
K_o$ (see \cite{cattaneo}).} and the sets of simultaneous events change
completely their topology and become {\it open} curves, namely spacelike
helixes orthogonal to the former ones.
In particular, the helix $\gamma _S$ of the events simultaneous to $\Sigma
\left( 0\right) $ in $K_o$ is easily found by imposing that the tangent unit
vector ${\bf \upsilon }_{\gamma _S}=\left( \upsilon _{\gamma _S}^0,\upsilon
_{\gamma _S}^\theta \right) $ to $\gamma _S$ be normal to the tangent vector
to $\gamma _\Sigma $. One obtains
\begin{eqnarray}
\upsilon _{\gamma _S}^0 &\equiv &c\frac{dt}{ds_{\gamma _S}}=\frac \beta {%
\sqrt{1-\beta ^2}} \label{tg-gammas} \\
\upsilon _{\gamma _S}^\theta &\equiv &\frac{d\theta }{ds_{\gamma _S}}=\frac
1{R\sqrt{1-\beta ^2}} \nonumber
\end{eqnarray}
from which, dividing the second equation by the first and integrating,
the following equations for $\gamma _S$ follow:
\begin{equation}
\left\{
\begin{array}{c}
r=R \\
\theta =\frac{c^2}{\omega R^2}t
\end{array}
\right. \label{gammas}
\end{equation}
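The normalization and the orthogonality to $\gamma _\Sigma $ stated above
can be verified directly from (\ref{tg-gammas}); a minimal numerical check
(in Python, with illustrative values):
\begin{verbatim}
import math

beta, R = 0.3, 1.0
gam = 1 / math.sqrt(1 - beta**2)
# Minkowski product in the coordinates (x^0, theta), metric diag(1, -R^2):
dot = lambda a, b: a[0]*b[0] - R**2 * a[1]*b[1]

u_Sigma = (gam, beta*gam/R)  # unit tangent to the timelike helix gamma_Sigma
u_S = (beta*gam, gam/R)      # unit tangent to the simultaneity helix gamma_S
print(dot(u_Sigma, u_Sigma), dot(u_S, u_S), dot(u_Sigma, u_S))
# -> 1.0, -1.0, 0.0 (up to rounding):
# timelike unit vector, spacelike unit vector, mutually orthogonal.
\end{verbatim}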
Considering (\ref{gammas}), the infinitesimal length along the helix $\gamma
_S$ is
\begin{eqnarray}
ds_{\gamma _S} &=&\sqrt{g_{00}\left( dx^0\right) ^2+g_{\theta \theta }\left(
dx^\theta \right) ^2}=\sqrt{c^2dt^2-R^2d\theta ^2} \label{dsgamma} \\
&=&\sqrt{\omega ^2\frac{R^4}{c^2}d\theta ^2-R^2d\theta ^2}=iRd\theta \sqrt{%
1-\beta ^2} \nonumber
\end{eqnarray}
If valued in $K_{o}$, the line element $ds_{\gamma _{S}}$ should be
interpreted, leaving the imaginary unit $i$ aside\footnote{%
The appearance of the imaginary unit is due to the conventions regarding the
signature of the four dimensional line element and simply means that the
interval is spacelike.}, as the ''proper length'' $\left( dl\right) _{o}$ of
an infinitesimal part of the rim in its locally inertial frame:
\begin{equation}
\left( dl\right) _{o}=Rd\theta \sqrt{1-\beta ^{2}} \label{dlo}
\end{equation}
Of course, eq. (\ref{dlo}) has a univocally well defined interpretation only
for an infinitesimal part of the rim; if we integrate along a finite portion
of it, the interpretation of such integral as the ''proper length of the
considered part of the rim'' is a questionable extrapolation. In fact, a
rotating disk does not admit a well defined ''proper frame''; rather, it
should be regarded as a class of an infinite number of local proper frames,
considered in different points at different times, and glued together
according to some convention. It is well known (\cite{cantoni}, \cite
{anandan}, \cite{arzelies}, \cite{landau}, etc.) that no convention exists
such that a self-consistent synchronization of standard clocks at rest on
the disk can be realized (see later on). In particular, the (first)
intersection $\Sigma \left( \tau _{0}\right) $ (see fig.2) of the helix $%
\gamma _{\Sigma }$ - given by eq.(\ref{gammasigma}) - with the helix $\gamma
_{S}$ of the events ''simultaneous to $\Sigma \left( 0\right) $ in $K_{o}$''
- given by eq.(\ref{gammas}) - takes place when the angular coordinate (in $%
K $) of $\Sigma $ takes on the value $\theta _{\Sigma }$ satisfying the
equation $\frac{\theta _{\Sigma }}{\omega }=\left( 2\pi +\theta _{\Sigma
}\right) \frac{\omega R^{2}}{c^{2}}$, namely
\[
\theta _{\Sigma }=2\pi \beta ^{2}\left( 1-\beta ^{2}\right) ^{-1}
\]
Along the ''simultaneity helix'' $\gamma _{S}$, the intersection $\Sigma
\left( \tau _{0}\right) $ takes place after the rotation angle $\theta
_{0}=2\pi +\theta _{\Sigma }=2\pi \left( 1-\beta ^{2}\right) ^{-1}$. But the
event $\Sigma \left( \tau _{0}\right) $, though ''simultaneous to $\Sigma
\left( 0\right) $ in $K_{o}$'' according to the previous definition, {\it \
belongs to the future of }$\ \Sigma \left( 0\right) $! This is a well known
example which displays the impossibility of a self-consistent definition of
simultaneity {\it at large} on the disk.
Eq. (\ref{dlo}) deserves a further comment. The quantity $Rd\theta $
appearing in it cannot be interpreted as the length of an infinitesimal arc
of the rotating circumference as viewed in the inertial frame $K$, because
to come back to the source on the turntable the rotation angle must increase
by $\theta _{0}=2\pi \left( 1-\beta ^{2}\right) ^{-1}>2\pi $. If we want a
round trip on the platform to correspond to a $2\pi $ rotation, we need a
new angular variable $\theta ^{\prime }$ such that $d\theta ^{\prime
}=d\theta \left( 1-\beta ^{2}\right) $. In terms of this new variable, eq. (%
\ref{dlo}) takes the form
\begin{equation}
\left( dl\right) _{o}=Rd\theta ^{\prime }\left( 1-\beta ^{2}\right) ^{-\frac{%
1}{2}} \label{dlzero}
\end{equation}
Stated in other words, $Rd\theta $ and $Rd\theta ^{\prime }$ are the
projections of $\left( dl\right) _{o}$ onto a plane $t=$const of the
inertial frame $K$, along $x^{0}$ and $\gamma _{\Sigma }$ respectively. As a
consequence, only the latter expression should be interpreted as the length
of an infinitesimal arc of the rotating circumference as viewed in $K$, and
we recognize in the previous expression the Lorentz contraction.
In spite of the impossibility of a self-consistent definition of
simultaneity {\it at large} on the disk, it is interesting to stress that
the length of the rim in $K_o$, as defined by the formula
\begin{equation}
l_o\equiv \int\limits_{\Sigma \left( 0\right) }^{\Sigma \left( \tau
_0\right) }\left( dl\right) _o=\int\limits_0^{\theta _0}Rd\theta \sqrt{%
1-\beta ^2}=2\pi R\left( 1-\beta ^2\right) ^{-\frac 12} \label{lo}
\end{equation}
exactly coincides with the expected relation between the length $l_o$ of the
''circumference''
\begin{equation}
\gamma _o\equiv \left\{ P(\theta )\in \gamma _S:\,\theta \in \left[ 0,\theta
_0\right] \,\right\} \label{gamma00}
\end{equation}
relative to $K_o$, and the length $l=2\pi R$ of the circumference relative
to $K$. However, the interpretation of this result differs from the
traditional one (whose origin can be found in Einstein \cite{einstein}): the
main issue, first remarked by Cantoni \cite{cantoni} and later on by Anandan
\cite{anandan}, is that the ''circumference'' $\gamma _o$ is not a
circumference at all, but an {\it open} spacelike curve, whose end-point $%
\Sigma \left( \tau _0\right) $ belongs to the future of the starting-point%
{\it \ }$\Sigma \left( 0\right) $. The time distance between $\Sigma \left(
0\right) $ and $\Sigma \left( \tau _0\right) \,$along $\gamma _\Sigma $, as
measured by the clock $C_\Sigma $ at rest near $\Sigma $, is the ''time
lag''
\begin{equation}
\tau _0=\frac{\theta _\Sigma }\omega \sqrt{1-\beta ^2}=\frac{2\pi \beta ^2}{\omega
\sqrt{1-\beta ^2}} \label{timelag}
\end{equation}
(see eq. (\ref{elica})) which arises in synchronizing clocks around the rim,
because of the rotation.
Notice that the ''time lag'' (\ref{timelag}) is exactly half of the proper
time interval (\ref{properlag}) between the arrivals on the detector of the
co-rotating and the counter-rotating light beams: this means that the Sagnac
effect can be explained as a consequence of the ''time lag'' due to rotation
(see \cite{anandan}).
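This relation between the time lag and the Sagnac lag can be checked
numerically (a Python sketch, with the illustrative values $c=R=1$,
$\beta =0.3$), solving the intersection condition for $\Sigma \left( \tau
_0\right) $ and comparing with eq. (\ref{properlag}):
\begin{verbatim}
import math

c = R = 1.0
beta = 0.3
omega = beta * c / R
root = math.sqrt(1 - beta**2)

# Intersection of gamma_Sigma with the simultaneity helix gamma_S:
# theta/omega = (theta + 2 pi) * omega * R^2 / c^2
theta_S = 2*math.pi*beta**2 / (1 - beta**2)
assert abs(theta_S/omega - (theta_S + 2*math.pi)*omega*R**2/c**2) < 1e-12

tau_0 = (theta_S/omega) * root                 # time lag, eq. (timelag)
delta_tau = 4*math.pi*beta**2 / (omega*root)   # Sagnac proper-time lag
print(delta_tau, 2*tau_0)                      # equal: delta_tau = 2 tau_0
\end{verbatim}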
But the main conclusion of this paper is the following: the two light beams
come back to the detector (placed near the source $\Sigma $) {\it for
different values of the rotation angle around the cylinder}, as given by eq.
(\ref{angoli1}). Actually, the two end-points $\Sigma \left( \tau
_{+}\right) \equiv \Sigma (\theta _{L_{+}}=\theta _{\Sigma _{+}}+2\pi
),\,\Sigma \left( \tau _{-}\right) \equiv \Sigma (\theta _{L_{-}}=\theta
_{\Sigma _{-}}-2\pi )$ along $\gamma _\Sigma $ are different; more
explicitly, there are two different travelled distances for the two light
rays leaving $\Sigma \left( 0\right) $ in opposite directions, according to
\begin{eqnarray}
l_{+} &\equiv &\int\limits_{\Sigma \left( 0\right) }^{\Sigma \left( \tau
_{+}\right) }\left( dl\right) _o=\int\limits_0^{\theta _{L_{+}}}Rd\theta
\sqrt{1-\beta ^2}=2\pi R\sqrt{\frac{1+\beta }{1-\beta }} \label{lungh-luce}
\\
l_{-} &\equiv &\int\limits_{\Sigma \left( 0\right) }^{\Sigma \left( \tau
_{-}\right) }\left( dl\right) _o=\int\limits_0^{\left| \theta
_{L_{-}}\right| }Rd\theta \sqrt{1-\beta ^2}=2\pi R\sqrt{\frac{1-\beta }{%
1+\beta }} \nonumber
\end{eqnarray}
Now the effective speeds of light on the platform, in the two opposite
directions, are given by the ratios of the two travelled lengths $%
l_{+},\,l_{-}$ to the proper travel times as measured by the clock $C_\Sigma
$ at rest in $\Sigma $, which are given by eqs. (\ref{tempo+-}). The result
is
\begin{eqnarray}
c_{+} &\equiv &\frac{l_{+}}{\tau _{+}}=\frac{R\omega }\beta =c
\label{finale} \\
c_{-} &\equiv &\frac{l_{-}}{\tau _{-}}=\frac{R\omega }\beta =c \nonumber
\end{eqnarray}
Both these velocities, which correspond to what Selleri calls $\widetilde{c}%
\left( 0\right) $ and $\widetilde{c}\left( \pi \right) $, are exactly equal
to $c$; therefore their ratio is simply $1$, just as in inertial reference
frames. As a consequence: (i) no discontinuity is found in passing from
accelerated to uniform motion; (ii) no violation of the ''hypothesis of
locality'' occurs; (iii) the explanation of the Sagnac effect does not
require any anisotropy in the propagation of light along the rim, as often
claimed \cite{peres}, but is {\it totally }due to the time-lag (\ref{timelag}%
) arising in synchronizing clocks around the rim, because of the rotation.
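The cancellation leading to eq. (\ref{finale}) is easily reproduced
numerically (a Python sketch, with the illustrative values $c=R=1$,
$\beta =0.3$):
\begin{verbatim}
import math

c = R = 1.0
beta = 0.3
omega = beta * c / R
k = math.sqrt((1 + beta) / (1 - beta))

l_p, l_m = 2*math.pi*R*k, 2*math.pi*R/k   # travelled lengths, eq. (lungh-luce)
tau_p = (2*math.pi*beta/omega) * k        # proper travel times, eq. (tempo+-)
tau_m = (2*math.pi*beta/omega) / k
print(l_p/tau_p, l_m/tau_m)               # both equal c = R*omega/beta = 1
\end{verbatim}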
\begin{figure}[tbp]
\caption{The figure shows the $2+1$ diagram of the rim of the rotating disk
developed on a plane: vertically the time $t$ of reference $K$ is shown;
horizontally one finds the rotation angles as seen in $K$. Primed and
unprimed lower case letters on opposite sides of the figure mark one and the
same point on the cylinder. The symbols are the same as those used in the
text. Simple geometric properties reproduce the results obtained in the
text. Null lines are plotted at $45^{\circ }$, thus it is immediately seen
that $\tau _{0}$, the distance between $\Sigma \left( \tau _{0}\right) $ and
$\Sigma \left( 0\right) $, is half of $\tau _{+}-\tau _{-}$ (the distance
between $\Sigma \left( \tau _{+}\right) $ and $\Sigma \left( \tau
_{-}\right) $). The length of $\gamma _{o}$ is the length of the line
$\Sigma \left( 0\right) bb^{\prime }\Sigma \left( \tau _{0}\right) $;
$l_{+}$ is $\Sigma \left( 0\right) bb^{\prime }n$; $l_{-}$ is $\Sigma \left(
0\right) a^{\prime }am$, which is the same as $\Sigma \left( 0\right)
bb^{\prime }m^{\prime }$: indeed $m^{\prime }$ is the first point on the
world line of the counter-rotating light ray ''simultaneous'' to $\Sigma
\left( 0\right) $.}
\end{figure}
\section{Comparison with a ''natural'' splitting between space and time}
In 4 dimensions everything is clear and clean, but we perceive the world
locally in 3+1 dimensions; so the results found in the previous sections
could be considered rather abstract, requiring the knowledge and
application of four-dimensional geometry and setting aside what appears to
be a most ''natural'' splitting between space and time. The question can
legitimately be posed whether the same general result could be found on the
basis of actual measurements performed on the disk.
In the present section, we compare the four-dimensional treatment, outlined
in sect. 3 and sect. 4, with the 3+1 global splitting suggested, in an
apparently ''natural'' way, by a pure experimenter living on the platform,
on the basis of actual measurements outside of any four-dimensional
interpretation.
Our observer possesses a great many unit rods and a couple of identical
clocks. The clocks are synchronized in one and the same place, that will be
assumed to be the origin of the observer's reference frame.
Now the observer sets out for an extremely slow (negligible relative
velocity) round trip on the platform in a sense which for us is corotating,
carrying with him the rods and one of the clocks. At each step he lays down
one rod, tail to head of the previous one. When he comes back to the origin
the number of rods he used tells him what reasonably he considers as being
the length $l_0$ of his trip. There is however a curious phenomenon our
experimenter can notice: when coming back to the origin, the clock carried
with him turns out to be desynchronized with respect to the identical one he
left at the origin, and precisely turns out to be {\it late} by a given
amount $\tau _0$, whose numerical value is given by eq. (\ref{timelag}).
If our observer repeats his slow trip in the reverse sense, he will find:
(i) the same length $l_0$ of his new trip, since the number of rods is one
and the same both turning clockwise and counterclockwise; (ii) the same
desynchronization of his clock, which this time turns out to be {\it ahead}
by the same amount $\tau _0$ with respect to the identical one at the
origin. If he compares the results of the two trips, he finds a net
desynchronization $2\tau _0$\ between the two travelling clocks, after their
clockwise and counterclockwise slow round trips.
To solve this puzzling result, our experimenter decides to send light beams
in both directions along the same path he followed and measures the time
they take to reach the origin again, finding two different results $\tau _{+}
$ and $\tau _{-}$. The most interesting thing is that {\it the difference }$%
\tau _{+}-${\it \ }$\tau _{-}${\it \ exactly equals the net
desynchronization }$2\tau _{0}${\it \ between the two (slowly) travelling
clocks!} Excited by this astounding coincidence, our friend decides to send
pairs of material objects (particles or matter waves) in opposite
directions, with the same relative velocity. He finds of course different
times of the total round trips, but the same difference between them for any
pair of travelling objects: $2\tau _{0}$ again and again!\footnote{%
This result, although known in the literature, is not demonstrated in the
present paper, but will be confirmed (and perhaps more clearly established)
in a brief letter in preparation.} So our experimenter, although completely
unaware of any theoretical approach to the problem, in particular of any
four-dimensional geometrical model of space-time, and only confident in his
own measurements realized by means of real rods and real clocks, is forced
to conclude that: (i) the platform on which he lives is rotating, and the
desynchronization $2\tau _{0}$ of a pair of clocks, after their slow round
trips in opposite directions, is a measure of the speed of this rotation;
(ii) the time intervals along the closed path are not uniquely defined and,
to obtain reliable measures of them, the readings of clocks must be
corrected by a quantity $\pm \tau _{0}$ to account for the desynchronization
effect (by the way, this is precisely what is done when considering data
from the GPS\footnote{%
Global Positioning System} satellites \cite{allan}); (iii) as a consequence
of this correction, the speed of light is actually the same both forward and
backward.
It is interesting to note that our pure experimenter discovers the
interdependence between space and time, although unaware of the metric
structure of Minkowski space-time - or, what is the same - of SRT.
\bigskip
To sum up, we see that different approaches to space and time are
possible; however, once a choice has been made about space (time), the
properties of time (space) are entirely determined by the theory and
produce in any case the same result. Different pictures are available,
but the statements about physical quantities and their relationships are
finally the same.
(1) According to our pure experimenter, the circumference of the rotating
disk can be considered as a geometrically well defined entity, with a well
defined length in the reference frame of the disk. But in this case, when
considering durations involving displacements along the rim (i.e. through
points where time flows differently), {\it a correction is unavoidable} {\it %
in order to take into account the phenomenon of desynchronization}. This is
the price to be paid if consistent results are desired.
(2) According to our four-dimensional treatment, the synchronization along
the rim is defined by formally extending the local Einstein criterion of
simultaneity. Then the circumference of the rotating disk, as
measured by infinitesimal rigid rods at rest along the rim, turns out to be
an open curve in space-time, and its length turns out to be
traveller-dependent.
One can choose description (1), which is closer to real measurements on the
platform, or description (2), which is clearer and, at least in our opinion,
quite enlightening; but the conclusions about physical quantities are
exactly the same. The choice of the description is a matter of taste; the
statements about physical quantities and their relationships are a matter of
fact and self-consistency.
\section{Conclusion}
The starting point of this paper was a careful check of eq. (\ref{ro}). We
easily found that that formula, widespread both in non-relativistic and in
relativistic literature, and apparently supported by the experimental
evidence of the Sagnac effect, actually leads to the paradoxical
consequences pointed out by Selleri \cite{selleri}. In particular, eq. (\ref
{ro}) turns out to be incompatible both with the principle of invariance of
the speed of light in the class of the inertial frames and with the
''hypothesis of locality''. As a consequence, eq. (\ref{ro}) would rule out
all the relativistic physics of this century and imply the existence of a
privileged (''absolute'') frame, which could be interpreted as the frame of
the stationary ether. We are not going to comment further on some unexpected
and radical implications, first of all the recovery of absolute
simultaneity; see Sagnac \cite{sagnac1913} for a Galilean-like
interpretation, and Selleri \cite{selleri}, \cite{selleri1} for a
Lorentzian-like interpretation.
Actually we have shown that eq.(\ref{ro}) comes from the (explicit or
implicit) assumption that the length of round trip journeys along the rim of
a turntable is that of a closed curve (namely a circumference) and is unique
for all travellers, independently of the kind of synchronization procedure
adopted. This assumption, evident in eq.(\ref{c}), where the same length $%
L_{o}$ is used for both the counter-rotating and the co-rotating light
signals (as seen in $K_{o}$), comes from a purely three-dimensional approach
(global $3+1$ splitting). However if we look at relativity from a
4-dimensional (in this case $2+1$ dimensional is enough) point of view, one
sees that ''round trips'' correspond in general to open curves (arcs of
helixes) and their proper lengths differ from one traveller to another, in
particular for co-rotating and counter-rotating light beams (the starting
point $\Sigma (0)$ is univocally defined, but the end point $\Sigma (\tau )$
depends on $\tau $, as can be seen on figure 2). When the speed of the
traveller tends to zero (in any direction) the length of the journey tends
to the unique value $l_{0}$ given by eq.(\ref{lo}), which has usually been
considered as the length of the ''circumference'' $\gamma _{0}$ defined in (%
\ref{gamma00}).
The root of the misunderstandings lies in the ambiguity of a self-consistent
definition of simultaneity (in $K_{o}$) of events taking place along the rim
of the disk, in particular in the drastic change of the topology of the line
of simultaneous events, due to rotation, which is evident in a
four-dimensional context only. On this very fact (namely in the time lag of
eq.(\ref{timelag})) stands the correct relativistic explanation of the
Sagnac effect, which does not need any anisotropy of the speed of light
along the rim, contrary to a widely supported idea.
In order to formally recover, in that context, the isotropy of the speed of
light, Peres is forced to introduce an ad hoc time that has nothing to do
with the time of real clocks on the turntable \cite{peres}. Our result, in a
full Minkowskian context, is that the speed of light is actually the same
both ''eastward and westward'' (to use Peres' words), as measured using real
clocks at rest on the platform.
In conclusion, we have shown that the SRT has no flaws when applied to
describe the behaviour of light as seen from a turntable carrying mirrors,
provided we avoid the use of (geometrical or kinematical) quantities {\it %
ambiguously defined}, and stick consistently to its axioms and rules.
\section{Introduction}
Usually the Fermi surface of an interacting electron system respects
the point-group symmetry of the underlying crystal lattice.
However, electron-electron interactions may also lead to Fermi
surface deformations which break the orientational symmetry
spontaneously.
From a Fermi liquid viewpoint this can happen via a \lq\lq Pomeranchuk
instability'', that is, when Pomeranchuk's stability condition
\cite{Pom} for the forward scattering interactions is violated.
Interactions favoring a symmetry-breaking Fermi surface deformation
with a $d$-wave order parameter, where the surface expands along the
$k_x$-axis and shrinks along the $k_y$-axis (or vice versa), are
present in the $t$-$J$,\cite{YK1} Hubbard,\cite{HM,GKW} and extended
Hubbard\cite{VV} model on a square lattice.
These models therefore exhibit enhanced \lq\lq nematic'' correlations,
which also appear in the context of fluctuating stripe
order.\cite{KFE}
Signatures for such correlations have been observed in various
cuprate superconductors.\cite{KBX}
In particular, they provide a natural explanation for the
relatively strong in-plane anisotropy observed in the magnetic
excitation spectrum of $\rm YBa_2Cu_3O_y$.\cite{HPX,YM}
Fermi surface symmetry breaking competes with superconductivity.
In the $t$-$J$ model the $d$-wave Fermi surface deformation instability
is overwhelmed by $d$-wave pairing. This is indicated by slave-boson
mean-field theory\cite{YK1} and has been confirmed recently by a
variational Monte Carlo calculation.\cite{EMG}
However, enhanced nematic correlations remain.\cite{Yam}
The competition of superconductivity and Fermi surface symmetry
breaking is more delicate in the two-dimensional Hubbard model.
Renormalization group calculations in the symmetric phase suggest
that the superconducting instability is always stronger than the
Pomeranchuk instability,\cite{HSR} but
these calculations do not exclude the possibility of \emph{coexistence}
of the two competing order parameters in the symmetry-broken phase.
Indeed, coexistence of $d$-wave superconductivity and $d$-wave
Fermi surface symmetry breaking has been obtained near van Hove filling
from a weak coupling perturbation expansion for the symmetry-broken
ground state of the Hubbard model.\cite{NM}
To elucidate the interplay and competition of Fermi surface symmetry
breaking and superconductivity in a more general setting, and to
classify possible scenarios, we analyze in the present work a mean-field
model allowing for both instabilities with a tunable strength for each.
The model describes itinerant electrons on a square lattice with
two types of interaction: a reduced BCS interaction driving $d$-wave
superconductivity and a purely forward scattering interaction driving
$d$-wave Fermi surface symmetry breaking.
The properties of the mean-field model without BCS interaction, where
the electrons interact only via forward scattering (\lq\lq f-model''), have
been clarified already earlier.\cite{KKC,KCOK,YOM}
The main results can be summarized as follows.
Fermi surface symmetry breaking occurs below a transition temperature
$T_c$ which forms a dome-shaped line as a function of the chemical
potential $\mu$, with a maximal $T_c$ near van Hove filling.
\cite{KKC,KCOK}
The phase transition is usually first order at the edges of the
transition line, and always second order around its center.
\cite{KKC,KCOK,YOM}
The $d$-wave compressibility of the Fermi surface is however strongly
enhanced even near the first order transition down to zero temperature.
\cite{YOM}
Adding a uniform repulsion to the forward scattering interaction, the
two tricritical points at the ends of the second order transition line
are shifted to lower temperatures.
For a favorable choice of hopping and interaction parameters one of the
first order edges can be replaced completely by a second order transition
line, leading to a quantum critical point.\cite{YOM}
Fluctuations at and near the quantum critical point destroy fermionic
quasi-particle excitations, leading to non-Fermi liquid behavior.
\cite{MRA,DM}
Adding an attractive $d$-wave BCS interaction to the f-model leads to
a variety of qualitatively distinct phase diagrams, depending on the
interaction strength.
If the BCS interaction is not too strong, Fermi surface symmetry
breaking is stabilized around van Hove filling, and coexists with
superconductivity at low temperatures.
In the presence of a pairing gap it is easier to realize Fermi surface
symmetry breaking via a continuous phase transition at low temperatures
than without.
In particular, a quantum critical point connecting superconducting
phases with and without Fermi surface symmetry breaking at zero
temperature is obtained for a suitable choice of interactions.
For a relatively strong BCS interaction, Fermi surface symmetry
breaking can be limited to intermediate temperatures, or can be
suppressed completely by pairing.
The article is structured as follows.
In Sec.~II we introduce the mean-field model and describe the
self-consistency equations for the order parameters.
The phase diagrams and other results are presented in Sec.~III.
A conclusion follows in Sec.~IV.
\section{Mean-field model}
We analyze itinerant electrons on a square lattice interacting
via forward scattering and a reduced BCS interaction, described
by a Hamiltonian of the form
\begin{equation}
H = \sum_{{\bf k}} \epsilon_{{\bf k}}^0 n_{{\bf k}} + H_I^f + H_I^c \; ,
\end{equation}
where $n_{{\bf k}} = \sum_{\sigma} n_{{\bf k}\sigma}$ counts the spin-summed
number of electrons with momentum ${\bf k}$.
The kinetic energy is due to hopping between nearest and
next-nearest neighbors on a square lattice,
leading to the bare dispersion relation
\begin{equation}
\epsilon_{{\bf k}}^{0}= -2t (\cos k_{x}+\cos k_{y})
- 4t'\cos k_{x} \cos k_{y} \; .
\end{equation}
The forward scattering interaction reads
\begin{equation}
H_I^f =
\frac{1}{2L} \sum_{{\bf k},{\bf k}'} f_{{\bf k}\bk'} \, n_{{\bf k}} n_{{\bf k}'} \; ,
\end{equation}
where $L$ is the number of lattice sites, and the function
$f_{{\bf k}\bk'}$ has the form
\begin{equation}
f_{{\bf k}\bk'} = u - g_f \, d_{{\bf k}} d_{{\bf k}'} \; ,
\end{equation}
with coupling constants $u \geq 0$ and $g_f \geq 0$, and a function
$d_{{\bf k}}$ with $d_{x^2-y^2}$-wave symmetry such as
$d_{{\bf k}} = \cos k_x - \cos k_y$.
This ansatz mimics the structure of the effective interaction in
the forward scattering channel as obtained for the $t$-$J$\cite{YK1}
and Hubbard\cite{HM} model.
The uniform term originates directly from the repulsion between
electrons and suppresses the (uniform) electronic compressibility
of the system.
The $d$-wave term enhances the $d$-wave compressibility and drives
spontaneous Fermi surface symmetry breaking.
In the Hubbard model it is generated by fluctuations, while in
the $t$-$J$ model the nearest neighbor interaction contributes
directly to a $d$-wave attraction in the forward scattering channel.
The BCS interaction has the form
\begin{equation}
H_I^c = \frac{1}{L} \sum_{{\bf k},{\bf k}'} V_{{\bf k}\bk'} \,
c^{\dag}_{{\bf k}\uparrow} c^{\dag}_{-{\bf k}\downarrow}
c_{-{\bf k}'\downarrow} c_{{\bf k}'\uparrow} \; .
\end{equation}
It is a reduced interaction in the sense that it contributes only
in the Cooper channel, that is, when the total momentum of the
interacting particles vanishes.
For the matrix element $V_{{\bf k}\bk'}$ we choose a separable $d$-wave
attraction
\begin{equation}
V_{{\bf k}\bk'} = - g_c \, d_{{\bf k}} d_{{\bf k}'}
\end{equation}
with $g_c \geq 0$, which corresponds to the dominant term in the
Cooper channel for the two-dimensional Hubbard and $t$-$J$ model.
Inserting $n_{{\bf k}} = \langle n_{{\bf k}} \rangle + \delta n_{{\bf k}}$ into
$H_I^f$, and $c^{\dag}_{{\bf k}\uparrow} c^{\dag}_{-{\bf k}\downarrow} =
\langle c^{\dag}_{{\bf k}\uparrow} c^{\dag}_{-{\bf k}\downarrow} \rangle +
\delta (c^{\dag}_{{\bf k}\uparrow} c^{\dag}_{-{\bf k}\downarrow})$ into $H_I^c$,
and neglecting terms quadratic in the fluctuations, one obtains
the mean-field Hamiltonian
\begin{equation}
H_{\rm MF} = \sum_{{\bf k}} \Big[
\epsilon_{{\bf k}} \, n_{{\bf k}} +
(\Delta_{{\bf k}} \, c^{\dag}_{{\bf k}\uparrow} c^{\dag}_{-{\bf k}\downarrow} + h.c.)
- \frac{\delta\epsilon_{{\bf k}}}{2} \langle n_{{\bf k}} \rangle
- \Delta_{{\bf k}} \,
\langle c^{\dag}_{{\bf k}\uparrow} c^{\dag}_{-{\bf k}\downarrow} \rangle \Big] \; .
\end{equation}
Here $\epsilon_{{\bf k}} = \epsilon^0_{{\bf k}} + \delta\epsilon_{{\bf k}}$ is a
renormalized dispersion relation, which is shifted with
respect to the bare dispersion by
$\delta\epsilon_{{\bf k}} =
L^{-1} \sum_{{\bf k}'} f_{{\bf k}\bk'} \langle n_{{\bf k}'} \rangle =
un + \eta \, d_{{\bf k}} \,$,
where $n = L^{-1} \sum_{{\bf k}} \langle n_{{\bf k}} \rangle$ is the
average particle density, and
\begin{equation}
\eta = - \frac{g_f}{L}
\sum_{{\bf k}} d_{{\bf k}} \langle n_{{\bf k}} \rangle
\end{equation}
is our order parameter for Fermi surface symmetry breaking.
It vanishes as long as the momentum distribution function
$\langle n_{{\bf k}} \rangle$ respects the symmetry of the square
lattice.
The superconducting gap function is given by
$\Delta_{{\bf k}} = \frac{1}{L} \sum_{{\bf k}'} V_{{\bf k}\bk'} \,
\langle c_{-{\bf k}'\downarrow} c_{{\bf k}'\uparrow} \rangle =
\Delta \, d_{{\bf k}} \,$,
where
\begin{equation}
\Delta = - \frac{g_c}{L}
\sum_{{\bf k}} d_{{\bf k}} \langle c_{-{\bf k}\downarrow} c_{{\bf k}\uparrow} \rangle \; .
\end{equation}
For the reduced interactions $H_I^f$ and $H_I^c$ the
mean-field decoupling is exact in the thermodynamic limit.
Feynman diagrams describing contributions beyond mean-field
theory have zero measure for $L \to \infty$.
The mean-field Hamiltonian is quadratic in the Fermi operators
and can be diagonalized by a Bogoliubov transformation.
For the grand canonical potential per lattice site,
$\omega = L^{-1} \Omega$, we obtain
\begin{equation}
\omega(\eta,\Delta) =
- \frac{2}{\beta L} \sum_{{\bf k}} \log[2\cosh(\beta E_{{\bf k}}/2)] +
\frac{\eta^2}{2g_f} + \frac{|\Delta|^2}{g_c} +
un - \frac{un^2}{2} - \mu \; ,
\end{equation}
where $\beta$ is the inverse temperature,
$E_{{\bf k}} = (\xi_{{\bf k}}^2 + |\Delta_{{\bf k}}|^2)^{1/2}$, and
$\xi_{{\bf k}} = \epsilon_{{\bf k}} - \mu$.
The stationarity conditions $\partial\omega/\partial\eta = 0$
and $\partial\omega/\partial\Delta = 0$ yield the
self-consistency equations for the order parameters
\begin{equation}
\eta = \frac{g_f}{L} \sum_{{\bf k}} d_{{\bf k}} \,
\frac{\xi_{{\bf k}}}{E_{{\bf k}}} \, \tanh\frac{\beta E_{{\bf k}}}{2}
\end{equation}
and
\begin{equation}
\Delta = \frac{g_c}{L} \sum_{{\bf k}} d_{{\bf k}} \,
\frac{\Delta_{{\bf k}}}{2E_{{\bf k}}} \, \tanh\frac{\beta E_{{\bf k}}}{2}
\; ,
\end{equation}
respectively. The condition $\partial\omega/\partial n = 0$
(at fixed $\mu$) yields the equation determining the density
\begin{equation}
n = 1 - \frac{1}{L} \sum_{{\bf k}}
\frac{\xi_{{\bf k}}}{E_{{\bf k}}} \, \tanh\frac{\beta E_{{\bf k}}}{2}
\; .
\end{equation}
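These coupled equations can be solved by straightforward fixed-point
iteration on a discretized Brillouin zone. The following sketch
(Python/NumPy) illustrates one such scheme; the grid size, seeds, mixing
parameter, and iteration count are illustrative assumptions, not
necessarily the settings used for the results below.
\begin{verbatim}
import numpy as np

def solve_mean_field(gf=1.0, gc=0.9, u=0.0, mu=-0.7, T=0.05,
                     t=1.0, tp=-1.0/6.0, nk=256, n_iter=3000, mix=0.3):
    """Fixed-point iteration of the self-consistency equations (11)-(13)."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    eps0 = -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)
    d = np.cos(kx) - np.cos(ky)
    beta = 1.0 / T
    eta, Delta, n = 0.05, 0.05, 1.0      # small symmetry-breaking seeds
    for _ in range(n_iter):
        xi = eps0 + u*n + eta*d - mu
        E = np.maximum(np.sqrt(xi**2 + (Delta*d)**2), 1e-12)  # avoid 0/0
        th = np.tanh(0.5*beta*E)
        eta_new = gf * np.mean(d * xi/E * th)               # eq. (11)
        Delta_new = gc * np.mean(d**2 * Delta/(2*E) * th)   # eq. (12)
        n_new = 1.0 - np.mean(xi/E * th)                    # eq. (13)
        eta = (1 - mix)*eta + mix*eta_new                   # linear mixing
        Delta = (1 - mix)*Delta + mix*Delta_new
        n = (1 - mix)*n + mix*n_new
    return eta, Delta, n

print(solve_mean_field())   # e.g. a point near van Hove filling
\end{verbatim}
Scanning $\mu$ and $T$ with such a routine, and comparing the grand
canonical potential of competing solutions, yields phase diagrams of the
type discussed below.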
\section{Results}
We now show results obtained from a numerical solution of the
mean-field equations. For the ratio of the hopping amplitudes we
choose $t'/t = -1/6$. The bare dispersion $\epsilon_{{\bf k}}^0$ has
saddle points at ${\bf k} = (\pi,0)$, $(0,\pi)$, leading to a
logarithmic van Hove singularity in the bare density of states
at $\epsilon = 4t' = -2t/3$.
figures are for $u = 0$ (no uniform contribution to forward
scattering), but we will discuss the effects of a finite
$u$ in the text.
In the following we set $t=1$, that is, all results with
dimension of energy are in units of $t$.
In Fig.~1 we show the transition temperature $T_f(\mu)$ for
Fermi surface symmetry breaking in the absence of pairing
($\Delta = 0$) for $g_f = 1$, and the critical
temperature for superconductivity $T_c(\mu)$ in the absence
of Fermi surface symmetry breaking ($\eta = 0$) for
various choices of $g_c$.
As discussed in detail in Refs.~\onlinecite{KCOK} and
\onlinecite{YOM},
a symmetry-broken Fermi surface is stabilized below a
dome-shaped transition line, with a maximal transition
temperature near van Hove filling. The transition is first
order at the edges of the transition line and second order
around its center.
The critical temperature for superconductivity is also
maximal near van Hove filling, but $T_c(\mu)$ remains
finite for any $\mu$ (as long as the band is partially
filled), and the transition is always of second order.
Near van Hove filling the transition temperatures $T_f$
and $T_c$ are of the same order of magnitude for $g_c$
slightly above $g_f = 1$.
Note that in the weak coupling limit one would obtain
$T_f \ll T_c$ for comparable $g_f$ and $g_c$, since
$\log(1/T_f) \propto g_f^{-1}$ for $g_f \to 0$, \cite{YOM}
while $\log(1/T_c) \propto g_c^{-1/2}$ for $g_c \to 0$
at van Hove filling,
due to the logarithmic divergence of the density of states,
and the additional logarithm in the Cooper channel,
as can be seen from the gap equation (12).
We now discuss results for the full mean-field model,
allowing also for coexistence of the two order parameters.
In Fig.~2 we show the low temperature region of the phase
diagram in the $(\mu,T)$-plane for $g_f = 1$ and a
relatively weak BCS coupling, $g_c = 0.7$.
Fermi surface symmetry breaking suppresses $T_c$ and
remains essentially unaffected by the (relatively small)
superconducting gap. The suppression of $T_c$ occurs
since Fermi surface symmetry breaking splits the van Hove
singularity, reducing thus the density of states at the
Fermi level. However, superconductivity cannot be
eliminated completely, since a logarithmic Cooper
singularity survives for any reflection invariant Fermi
surface. The phase diagram thus exhibits three types of
first order transitions between phases with a symmetric
and a symmetry-broken Fermi surface: between two normal
states, between a superconductor and a normal state, and
between two superconducting states.
Continuing the (then metastable) phase with a symmetric
Fermi surface beyond the first order transition line leads
to a diverging $d$-wave compressibility at the fictitious
second order transition line \lq\lq $T_f^{\rm 2nd}$'' also
shown in the plot.
For larger $g_c$ the energy scale for superconductivity
(gap and $T_c$) increases, and effects of the superconducting
gap on Fermi surface symmetry breaking become more pronounced,
see Fig.~3 (here $g_c = 0.9$). In particular, the first
order transition line $T_f(\mu)$ is shifted toward the
center of the symmetry-broken region, and approaches the
fictitious second order line \lq\lq $T_f^{\rm 2nd}$''.
In Fig.~4 we show the $\Delta$-dependence of the \lq\lq reduced''
Landau energy
$\omega(\Delta) = \omega[\eta_{\rm min}(\Delta),\Delta] -
\omega[\eta_{\rm min}(0),0]$ for two points in the phase
diagram which are close to each other, but on opposite
sides of the first order transition between a superconducting
state with a symmetric Fermi surface and a normal state with
Fermi surface symmetry breaking; $\eta_{\rm min}(\Delta)$
minimizes $\omega(\eta,\Delta)$ for fixed $\Delta$.
Note that $\eta_{\rm min}$ is zero for large $\Delta$;
the kink in $\omega(\Delta)$ is due to the discontinuous onset
of $\eta$ for small $\Delta$.
The $\mu$-dependence of the order parameters $\eta$ and
$\Delta$ is shown for various temperatures in Fig.~5.
The jump of $\eta$ at the first order transition induces a
counter jump of $\Delta$.
For high $T$, superconductivity is suppressed completely
by Fermi surface symmetry breaking (Fig.~5c), while for
lower temperatures coexistence of the order parameters
$\Delta$ and $\eta$ is realized (Figs.~5a and 5b).
The temperature dependence of the order parameters is shown
for $\mu = -0.7$ (near van Hove filling) in Fig.~6.
The increasing superconducting gap $\Delta$ leads to a
decrease of $\eta$ upon lowering the temperature below
$T_c$. Superconductivity smears the single
particle states over an energy range of order $\Delta$,
and thus suppresses the energy gain from a Fermi surface
deformation.
Fermi surface symmetry breaking is thus suppressed by
the superconducting gap.
Although the system is not critical at the first order
transition from a symmetric to a symmetry-broken Fermi
surface, it is close to criticality in the sense that the
$d$-wave compressibility $\kappa_d$ is strongly enhanced
by the forward scattering interaction.
For the case of pure forward scattering (f-model) this
was shown already in Ref.~\onlinecite{YOM}.
In the presence of superconductivity with a gap function
$\Delta_{{\bf k}}$, the $d$-wave compressibility is given by
\begin{equation}
\kappa_d = \frac{\kappa_d^0}{1 - g_f \, \kappa_d^0} \; ,
\end{equation}
where
\begin{equation}
\kappa_d^0 = \frac{1}{L} \sum_{{\bf k}} d_{{\bf k}}^2 \,
\left[
\frac{\beta \xi_{{\bf k}}^2}{2 E_{{\bf k}}^2} \,
\frac{1}{\left(\cosh\frac{\beta E_{{\bf k}}}{2}\right)^2} +
\frac{|\Delta_{{\bf k}}|^2}{E_{{\bf k}}^3} \,
\tanh\frac{\beta E_{{\bf k}}}{2} \right]
\end{equation}
is the $d$-wave compressibility in the superconducting
state in the absence of forward scattering ($g_f = 0$).
The enhancement of $\kappa_d$ due to $g_f$ is thus given
by the \lq\lq Stoner factor'' $S = (1 - g_f \, \kappa_d^0)^{-1}$.
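In the same numerical setup as the sketch at the end of the preceding section
(with the same assumed conventions for $\xi_{{\bf k}}$ and $\Delta_{{\bf k}}$),
the two expressions above reduce to a few lines:
\begin{verbatim}
# Bare d-wave compressibility and Stoner factor, continuing the
# earlier sketch (arrays xi, E, th, d and couplings as defined there).
sech2 = np.cosh(0.5 * beta * E) ** (-2.0)
kappa_d0 = np.mean(d**2 * (0.5 * beta * xi**2 / E**2 * sech2
                           + (Delta * d) ** 2 / E**3 * th))
kappa_d = kappa_d0 / (1.0 - g_f * kappa_d0)
S = 1.0 / (1.0 - g_f * kappa_d0)   # Stoner factor
\end{verbatim}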
In Fig.~7 we plot the inverse Stoner factor $S^{-1}$ along
the right first order transition line (approached from the
symmetric phase) up to the tricritical temperature
$T_f^{\rm tri}$ for various choices of $g_c$.
It becomes clear that $S$ is enhanced significantly by
superconductivity at low temperatures.
In particular, for $g_c = 0.9$ the system is very close
to criticality.
Enhancing $g_c$ beyond $g_c = 0.9$, the first order transition
lines between the states with symmetric and symmetry-broken
Fermi surfaces are successively replaced by continuous phase
transitions.
In Fig.~8 we show the phase diagram for $g_c = 1$. Here Fermi
surface symmetry breaking occurs via a continuous transition
at the lowest temperatures, well below $T_c$.
In particular, there is a continuous quantum phase transition
at $T = 0$.
The first order lines are connected to continuous transition
lines both at the high and low temperature ends.
The low temperature ends are tricritical points, where the
quadratic and the quartic coefficient of the reduced Landau
energy $\omega(\eta) = \omega[\eta,\Delta_{\rm min}(\eta)]$ both
vanish.
By contrast, at the high temperature ends the quartic
coefficient of $\omega(\eta)$ jumps from a negative to a
positive value. This discontinuity is due to the onset of
$\Delta$ below $T_c$.
Note that the high temperature ends are close to the
tricritical points found for smaller $g_c$, such that a
small jump of the quartic coefficient can turn its sign.
For $g_c = 1.12$ the first order transition has disappeared
completely from the phase diagram (see Fig.~9), and the
transition between symmetric and symmetry-broken Fermi surfaces
is always continuous.
The transition lines for Fermi surface symmetry breaking and
superconductivity intersect in tetracritical points, where
both quadratic coefficients of $\omega(\eta,\Delta)$ vanish.
Enhancing $g_c$ further leads to a progressive suppression of
Fermi surface symmetry breaking, in particular at lower
temperatures, where the superconducting gap is getting large.
For $g_c = 1.2$, Fermi surface symmetry breaking is eliminated
completely by superconductivity at low $T$, while it still
survives in a small region at intermediate temperatures, see
Fig.~10.
For even larger $g_c$ the region with a symmetry-broken Fermi
surface shrinks further until it disappears completely from
the phase diagram.
Adding a uniform contribution $u > 0$ to $f_{{\bf k}\bk'}$, Eq.~(4),
leads to a suppression of first order transitions into a phase
with a symmetry-broken Fermi surface, thus making continuous
transitions easier.
This trend was already observed and explained in detail for the
case of pure forward scattering.\cite{YOM}
For small $g_c$, the tricritical points are shifted to lower
temperatures by a finite $u$, and the first order transition
line moves closer to the fictitious second order transition.
The gradual replacement of the first order line by a second
order line with increasing $g_c$ is accelerated for $u > 0$.
For example, for $g_f = 1$, $u = 10$, and $g_c = 0.9$ the
phase diagram looks qualitatively like the one in Fig.~9, with
Fermi surface symmetry breaking always occurring via a
continuous transition.
The (effective) interaction resulting from the Hubbard or
$t$-$J$ model also contains an $s$-wave component in the
Cooper channel.
In case of coexistence of superconductivity with a $d$-wave
Fermi surface deformation, this leads to a small $s$-wave
contribution to the gap function $\Delta_{{\bf k}}$, in addition
to the dominant $d$-wave term.\cite{YK1,NM}
\section{Conclusions}
We have solved a mean-field model for itinerant electrons
moving on a square lattice with two types of interactions:
an interaction in the forward scattering channel favoring
a $d$-wave shaped symmetry-breaking Fermi surface deformation
and a reduced BCS interaction with $d$-wave symmetry.
Making different choices for the interaction parameters,
a rich variety of possible phase diagrams has been found.
For pure forward scattering Fermi surface symmetry breaking
occurs typically via a first order transition at low
temperatures.\cite{KCOK,YOM} The presence of superconductivity
reduces the first order character of this transition and,
if strong enough, can turn it into a continuous one.
This gives rise to a quantum critical point within the
superconducting phase.
The superconducting gap tends to suppress Fermi surface
symmetry breaking. For a certain choice of parameters one
finds reentrant behavior, where Fermi surface symmetry
breaking is stabilized at intermediate temperatures, while
it is suppressed by the pairing gap at low temperatures.
If superconductivity is too strong, Fermi surface symmetry
breaking disappears completely from the phase diagram.
In microscopic models the relative strength of the forward
scattering and pairing interactions is fixed by the underlying
model parameters.
In the $t$-$J$ model slave-boson mean-field\cite{YK1} and
variational Monte Carlo\cite{EMG} calculations show that
pairing prevents Fermi surface symmetry breaking, but there
are strongly enhanced correlations indicating that the model
is close to a $d$-wave Pomeranchuk instability.\cite{Yam}
This corresponds to the case of a relatively large $g_c$
in our phenomenological mean-field model.
In the weakly interacting Hubbard model coexistence of
superconductivity and Fermi surface symmetry breaking
has been found around van Hove filling at $T=0$ within
second order perturbation theory.\cite{NM}
The available numerical results indicate that Fermi surface
symmetry breaking occurs via a continuous transition in
this case, as in the phase diagrams in Figs.~8 or 9.
It would clearly be interesting to analyze how order parameter
fluctuations modify the mean-field results.
A renormalization group calculation by Vojta {\it et al.}\cite{VZS}
suggests that a quantum critical point for orientational
symmetry breaking in a $d$-wave superconductor is
destabilized by fluctuations, leading possibly to a first
order transition.
\acknowledgments
We thank Marijana Kir\'{c}an for a critical reading of the manuscript
and for valuable comments.
\section{Introduction}\label{sec:introduction}
ALICE (A Large Ion Collider Experiment) is a general-purpose heavy-ion
experiment at the CERN LHC (Large Hadron Collider) aimed at studying the
physics of strongly-interacting matter and the quark--gluon plasma. A unique
design has been adopted for the ALICE detector to fulfill tracking and
particle-identification requirements~\cite{ref:ALICEperf}. Thanks to these features the experiment
is able to identify charged hadrons with momenta from about 0.1~GeV/c up
to a few GeV/c by combining different detecting systems, as discussed in
Section~\ref{sec:pid}.
The hot and dense matter produced in ultrarelativistic heavy-ion collisions
evolves through different phases to a freeze-out state where strong interactions
among the hadrons stop. Since produced hadrons carry information about the
evolution of the system, the measurement of the transverse momentum
distributions and yields of identified hadrons is essential to understand the
global properties and dynamics of the later stages. Results on
charged-hadron spectra and yields at mid-rapidity are presented in
Section~\ref{sec:results} for Pb--Pb collisions at
$\sqrt{s_{\rm NN}}~=~2.76$~TeV.
\section{Particle identification}\label{sec:pid}
In this section the particle-identification (PID) detectors relevant for this
analysis are briefly discussed, namely the \emph{Inner Tracking System} (ITS),
the \emph{Time-Projection Chamber} (TPC) and the \emph{Time-Of-Flight} detector
(TOF). A detailed review of the ALICE detector and of its PID capabilities can
be found in~\cite{ref:ALICEperf}. The ITS is a six-layer silicon detector located at radii between 4 and 43
cm. Four of the six layers provide $dE/dx$ measurements and are used for
particle identification in the non-relativistic ($1/\beta^2$)
region. Moreover, using the ITS as a standalone tracker enables one to
reconstruct and identify low-momentum particles not reaching the main tracking
systems. The TPC is the main central-barrel tracking detector of ALICE and
provides three-dimensional hit information and specific energy-loss
measurements with up to 159 samples. With the measured particle momentum and
$\langle dE/dx \rangle$ the particle type can be determined by comparing the
measurements against the Bethe-Bloch expectation. The TOF detector is a
large-area array of Multigap Resistive Plate Chambers (MRPC) and covers the central
pseudorapidity region ($\left| \eta \right| <$~0.9, full azimuth). Particle
identification is performed by matching momentum and trajectory-length
measurements performed by the tracking system with the time-of-flight
information provided by the TOF system. The total time-of-flight resolution is
about 85 ps in Pb--Pb collisions and it is determined by the time resolution
of the detector itself and by the start-time resolution.
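To give a feeling for the numbers, the following back-of-the-envelope estimate
(ours, not part of the ALICE analysis) computes the $K$--$\pi$ time-of-flight
difference for an assumed representative track length of 3.7~m, roughly the TOF
radius, and compares it with the 85~ps resolution quoted above.
\begin{verbatim}
import math

# Rough K-pi separation with a time-of-flight measurement (illustrative).
# t = (L/c) * sqrt(1 + m^2/p^2) for mass m (GeV/c^2) and momentum p (GeV/c).
C = 0.299792458                  # speed of light in m/ns
L = 3.7                          # assumed track length in m (~ TOF radius)
m_pi, m_K = 0.13957, 0.49368

def tof_ns(p, m):
    return L / C * math.sqrt(1.0 + (m / p) ** 2)

for p in (1.0, 2.0, 3.0):        # GeV/c
    dt_ps = (tof_ns(p, m_K) - tof_ns(p, m_pi)) * 1e3
    print(p, round(dt_ps), round(dt_ps / 85.0, 1))  # ~15, 4 and 1.8 sigma
\end{verbatim}
The estimate shows why time-of-flight identification of kaons is comfortable up
to about 2~GeV/c and degrades toward 3~GeV/c, consistent with the momentum
ranges quoted in Section~\ref{sec:results}.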
\section{Results}\label{sec:results}
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{protonspectra.eps}
\vspace{-5mm}
\caption{Transverse momentum spectra of primary $\bar{p}$ and
corresponding fits in several centrality classes.}
\label{fig:protonspectra}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{mub.eps}
\vspace{-5mm}
\caption{Antiparticle/particle production ratios in the 0-5\% most central
collisions.}
\label{fig:mub}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{kapi.eps}
\vspace{-5mm}
\caption{$K^{-}/\pi^{-}$ production ratios as a function of
$dN_{ch}/d\eta$ compared to RHIC data.}
\label{fig:kapi}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{prpi.eps}
\vspace{-5mm}
\caption{$\bar{p}/\pi^{-}$ production ratios as a function of
$dN_{ch}/d\eta$ compared to RHIC data.}
\label{fig:prpi}
\end{minipage}
\end{figure}
The transverse momentum spectra of primary $\pi^{\pm}$, $K^{\pm}$, $p$ and
$\bar{p}$ are measured at mid-rapidity ($\left|y\right|~<~0.5$) combining the
techniques and detectors described in Section~\ref{sec:pid}. Primary particles
are defined as prompt particles produced in the collision and all decay
products, except products from weak decay of strange particles. The
contribution from the feed-down of weakly-decaying particles to $\pi^{\pm}$,
$p$ and $\bar{p}$ and from protons from material are subtracted by fitting the
data using Monte Carlo templates of the DCA\footnote{Distance of Closest
Approach to the reconstructed primary vertex.} distributions. Hadron spectra are
measured in several centrality classes (see~\cite{ref:ALICEpbpb} for details
on centrality selection) from 100~MeV/c up to 3~GeV/c for pions, from
200~MeV/c up to 2~GeV/c for kaons and from 300~MeV/c up to 3~GeV/c for
protons and antiprotons. Individual fits to the data are performed following a
blast-wave parameterization~\cite{ref:blastwave} to extrapolate the
spectra outside the measured $p_{T}$ range. The measured spectra and corresponding fits are shown in
Figure~\ref{fig:protonspectra} for primary $\bar{p}$. Average transverse
momenta $\langle p_{T} \rangle$ and integrated
production yields $dN/dy$ are obtained using the measured data points and the
extrapolation.
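For orientation, the spectral shape behind these fits can be written down
compactly. The sketch below implements the blast-wave form of
Ref.~\cite{ref:blastwave} with a power-law transverse velocity profile; the
profile exponent and the parameter values are illustrative choices of ours, not
the fit results.
\begin{verbatim}
import numpy as np
from scipy.special import i0, k1
from scipy.integrate import quad

# Blast-wave spectral shape: dN/(pT dpT) up to normalization, for a
# particle of mass m, freeze-out temperature T and surface velocity
# beta_s, with velocity profile beta(r) = beta_s * (r/R)**n (assumed).
def blast_wave(pT, m, T, beta_s, n=1.0):
    mT = np.sqrt(pT**2 + m**2)
    def integrand(r):            # r = radial position / R
        rho = np.arctanh(beta_s * r**n)
        return r * mT * i0(pT * np.sinh(rho) / T) * k1(mT * np.cosh(rho) / T)
    return quad(integrand, 0.0, 1.0)[0]

# e.g. an (un-normalized) proton shape for T = 0.10 GeV, beta_s = 0.9:
shape = [blast_wave(pT, 0.938, 0.10, 0.9) for pT in np.arange(0.3, 3.0, 0.3)]
\end{verbatim}
For such a profile the surface and average velocities are related by
$\langle \beta \rangle = 2\beta_s/(n+2)$, which connects the fit parameter to
the $\langle \beta \rangle$ values discussed below.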
Antiparticle/particle integrated production ratios are observed to be
consistent with unity for all particle species in all centralities suggesting
that the baryo-chemical potential $\mu_{B}$ is close to zero as expected at LHC
energies. Figure~\ref{fig:mub} compares ALICE results with
RHIC data in Au--Au
collisions at $\sqrt{s_{\rm NN}}$ = 200 GeV~\cite{ref:RHIC} for the 0-5\%
most central collisions. The $p_{T}$-integrated $K^{-}/\pi^{-}$ and $\bar{p}/\pi^{-}$ ratios are shown in
Figure~\ref{fig:kapi} and~\ref{fig:prpi} as a function of the charged-particle
density $dN_{ch}/d\eta$~\cite{ref:ALICEpbpb} and are compared with RHIC data at
$\sqrt{s_{\rm NN}}$~=~200~GeV and
ALICE proton-proton results at
$\sqrt{s}$~=~7~TeV~\cite{ref:MarekQM}. $K^{-}/\pi^{-}$ production nicely
follows the trend measured by STAR. $\bar{p}/\pi^{-}$ results are similar to
previous measurements performed by PHENIX and BRAHMS where the definition of
the proton sample is close to ours (proton measurements reported by STAR are inclusive). Finally, the $\bar{p}/\pi^{-}$ ratio measured at the LHC ($\sim$~0.05) is
significantly lower than the value expected from statistical model predictions
($\sim$~0.07-0.09) with a chemical freeze-out temperature of $T_{ch} =
160-170$~MeV at the LHC~\cite{ref:statmodels}.
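In the thermal-model picture mentioned above, the connection between the ratios
and $\mu_{B}$ is quantitative: to first approximation
$\bar{p}/p = \exp(-2\mu_{B}/T_{ch})$ (a standard statistical-model relation,
quoted here for orientation), so a ratio compatible with unity at the percent
level for $T_{ch} \simeq 160$~MeV constrains $\mu_{B}$ to at most a few MeV; for
instance, $\bar{p}/p = 0.98$ would correspond to $\mu_{B} \simeq 1.6$~MeV.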
The measured hadron $\langle p_{T} \rangle$'s are shown in
Figure~\ref{fig:meanpt} as a function of $dN_{ch}/d\eta$ for $\pi^{-}$, $K^{-}$ and $\bar{p}$ and
are compared to STAR results in Au--Au collisions at
$\sqrt{s_{\rm NN}}$~=~200~GeV. The spectra are observed to be
harder than at RHIC for similar $dN_{ch}/d\eta$. A detailed study of the
spectral shapes has been done in order to give a quantitative estimate of
the thermal freeze-out temperature $T_{fo}$ and the average transverse flow
$\langle \beta \rangle$. A combined blast-wave fit of the spectra has
been performed in the ranges 0.3-1.0~GeV/c, 0.2-1.5~GeV/c and 0.3-3.0~GeV/c
for pions, kaons and protons, respectively. While the $T_{fo}$ parameter is
slightly sensitive to the pion fit range because of feed-down from
resonances\footnote{This effect will be investigated in detail in the
future.}, the transverse flow $\langle \beta \rangle$ measurement is not,
being dominated by the proton spectral shape. The results obtained on the
thermal freeze-out properties in different centrality bins are compared with
similar measurements performed by the STAR Collaboration at lower energies in
Figure~\ref{fig:blastwave}. A stronger radial flow is observed with respect to
RHIC, being about 10\% larger in the most central collisions at the LHC.
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{meanpt.eps}
\vspace{-5mm}
\caption{Hadron $\langle p_{T} \rangle$ as a function of the
charged-particle density $dN_{ch}/d\eta$.}
\label{fig:meanpt}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{blastwave.eps}
\vspace{-5mm}
\caption{Thermal freeze-out parameters $T_{fo}$ and $\langle \beta \rangle$
from combined blast-wave fits.}
\label{fig:blastwave}
\end{minipage}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
The transverse momentum spectra of $\pi^{\pm}$, $K^{\pm}$, $p$ and $\bar{p}$ have
been measured with ALICE in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$~=~2.76~TeV,
demonstrating the excellent PID capabilities of the
experiment. Antiparticle/particle production ratios are consistent with unity as
expected at LHC energies. The integrated $\bar{p}/\pi^{-}$ ratio is significantly
lower than statistical model predictions with a chemical freeze-out temperature $T_{ch} =
160-170$~MeV. The average transverse momenta and the spectral shapes indicate
a $\sim$10\% stronger radial flow than at RHIC energies.
\section{Introduction \label{sec:intro}}
The hadronic decays of $B$ mesons have provided us with a good place to study
CP violation in particle physics. In particular, the detection of direct CP
violation in a decay process requires that there exist at least two
contributing amplitudes with different weak and strong phases. The direct
CP-violating effect in the $B$ system has finally been observed in the $B^0 \to
K^+ \pi^-$ decay at the $B$-factories \cite{Aubert:2004qm,Chao:2004mn}, proving
the existence of nontrivial strong phases in $B$ decays. It is therefore of
interest to map out the patterns of final-state strong phases for a wider
set of decay modes.
Since the CKM factors involved in charmed $B$ meson decays are purely real to a
good approximation, the phases associated with the decay amplitudes are
of strong-interaction origin. Such final-state rescattering effects have
been noticed from data in these decays \cite{Cheng:2001sc,Chua:2001br}, and
estimated to be at the 15-20\% level \cite{Fayyazuddin:2002pv}. Unfortunately, no
satisfactory first-principle calculations can yield such strong phases
\cite{Wolfenstein:2003pc}. In Ref.~\cite{Chiang:2002tv}, we performed an
analysis based upon the experimental data available at that time. A few
theoretical and experimental questions were left unanswered. As more decay
modes have been observed and others are measured at higher precisions, it
becomes possible for us to look at and answer those questions. In this paper,
flavor SU(3) symmetry is employed to relate different amplitudes and strong
phases of the same topological type. Moreover, we will take a different
approach by fitting theoretical parameters to all available branching ratios
simultaneously. An advantage of this analysis is that the parameters thus
obtained are insensitive to statistical fluctuations of individual modes.
This paper is organized as follows. In Section~\ref{sec:decomp}, we give the
amplitude decomposition of modes under flavor SU(3) symmetry and the current
branching ratio data. Theoretical parameters involved in our analysis are
defined. In Section~\ref{sec:phases}, we consider three sets of charmed decay
modes: $DP$, $D^* P$, and $DV$, where $P$ and $V$ denote charmless pseudoscalar
and vector mesons, respectively. A summary of our findings is given in
Section~\ref{sec:summary}.
\section{Flavor Amplitude Decomposition and Data \label{sec:decomp}}
In the decomposition of decay amplitudes, relevant meson wave functions are
assumed to have the following quark contents, with phases chosen so that
isospin multiplets contain no relative signs:
\begin{itemize}
\item{\it Beauty mesons:} $\overline{B^0} = b \bar d$, $B^- = - b \bar u$,
$\overline{B}_s = b \bar s$.
\item {\it Charmed mesons:} $D^0 = - c \bar u$, $D^+ = c \bar d$, $D_s^+ =c
\bar s$, with corresponding phases for vector mesons.
\item {\it Pseudoscalar mesons $P$:} $\pi^+ = u \bar d$, $\pi^0 = (d \bar d - u
\bar u)/\sqrt{2}$, $\pi^- = - d \bar u$, $K^+ = u \bar s$, $K^0 = d \bar s$,
$\bar K^0 = s \bar d$, $K^- = - s \bar u$, $\eta = (s \bar s - u \bar u - d
\bar d)/\sqrt{3}$, $\eta' = (u \bar u + d \bar d + 2 s \bar s)/\sqrt{6}$,
assuming a specific octet-singlet mixing \cite{Chau:1990ay,eta} in the $\eta$
and $\eta'$ wave functions.
\item {\it Vector mesons $V$:} $\rho^+ = u \bar d$, $\rho^0 = (d \bar d - u
\bar u)/\sqrt{2}$, $\rho^- = - d \bar u$, $\omega = (u \bar u + d \bar
d)/\sqrt{2}$, $K^{*+} = u \bar s$, $K^{*0} = d \bar s$, $\overline{K}^{*0} =
s \bar d$, $K^{*-} = - s \bar u$, $\phi = s \bar s$.
\end{itemize}
The amplitudes contributing to the decays discussed here involve only three
different topologies \cite{Zeppenfeld:1980ex,Chau:1990ay,GHLR,eta}:
\begin{enumerate}
\item {\it Tree amplitude $T$:} This is associated with the transition $b \to c
d \bar u$ (Cabibbo-favored) or $b \to c s \bar u$ (Cabibbo-suppressed) in
which the light (color-singlet) quark-antiquark pair is incorporated into one
meson, while the charmed quark combines with the spectator antiquark to form
the other meson.
\item {\it Color-suppressed amplitude $C$:} The transition is the same as in
the tree amplitudes, namely $b \to c d \bar u$ or $b \to c s \bar u$, except
that the charmed quark and the $\bar u$ combine into one meson while the $d$
or $s$ quark and the spectator antiquark combine into the other meson.
\item{\it Exchange amplitude $E$:} The $b$ quark and spectator antiquark
exchange a $W$ boson to become a $c \bar u$ pair, which then hadronizes into
two mesons by picking up a light quark-antiquark pair out of the vacuum.
\end{enumerate}
After factoring out the CKM factors explicitly, we obtain the flavor amplitude
decomposition of the charmed $B$ decay modes in Tables~\ref{tab:DP},
\ref{tab:DstP}, and \ref{tab:DV}. In these tables, we introduce positive
$\xi$'s to parameterize the flavor SU(3) breaking effects. This symmetry is
respected between strangeness-conserving and strangeness-changing amplitudes
when $\xi$'s are taken to be unity. As we will discuss in the next section,
$\xi$'s will be allowed to change in order to test the assumption. Using the
Wolfenstein parameters \cite{Wolfenstein:1983yz}, the relevant CKM factors are:
\begin{eqnarray}
V_{cb} = A \lambda^2 ~,
\quad
V_{ud} = 1 - \frac{\lambda^2}{2} ~,
\quad \mbox{and} \quad
V_{us} = \lambda ~,
\end{eqnarray}
none of which contain a weak phase to the order we are concerned with. In the
following analysis, we take the central values $\lambda = 0.2272$ and $A =
0.809$ quoted by the CKMfitter group \cite{CKMfitter}.
Since only the relative strong phases are physically measurable, we fix the
tree ($T$, $T_P$, and $T_V$) amplitudes to be real and pointing in the positive
direction. We then associate the color-suppressed and exchange amplitudes with
the corresponding strong phases explicitly as follows:
\begin{eqnarray}
C = |C| e^{i\delta_{C}} ~, && E = |E| e^{i\delta_{E}} ~, \\
C_P = |C_P| e^{i\delta_{C_P}} ~, && E_P = |E_P| e^{i\delta_{E_P}} ~, \\
C_V = |C_V| e^{i\delta_{C_V}} ~, && E_V = |E_V| e^{i\delta_{E_V}} ~.
\end{eqnarray}
The magnitude of the invariant decay amplitude ${\cal A}$ for a decay process $B
\to M_1 \, M_2$ is related to its partial width via the following relation:
\begin{eqnarray}
\Gamma(B \to M_1 \, M_2)
= \frac{p^*}{8\pi m_B^2} |{\cal A}|^2 ~,
\end{eqnarray}
with
\begin{equation}
p^* = \frac{1}{2m_B}
\sqrt{\big\{m_B^2-(m_1+m_2)^2\big\}\big\{m_B^2-(m_1-m_2)^2\big\} }~,
\end{equation}
where $m_{1, 2}$ are the masses of $M_{1, 2}$, respectively. To relate partial
widths to branching ratios, we use the world-average lifetimes $\tau^+ = (1.638
\pm 0.011)$ ps, $\tau^0 = (1.530 \pm 0.009)$ ps, and $\tau_s = (1.466 \pm
0.059)$ ps computed by the Heavy Flavor Averaging Group (HFAG) \cite{HFAG}.
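As a cross-check of the amplitude magnitudes quoted in the tables (our
arithmetic, using the relations above), the conversion from a measured branching
ratio is a one-line computation; for $B^- \to D^0 \pi^-$:
\begin{verbatim}
import math

# |A| from a branching ratio, via Gamma = p*/(8 pi m_B^2) |A|^2
# and Gamma = BR * hbar / tau.
hbar = 6.582e-25                 # GeV s
BR, tau = 47.5e-4, 1.638e-12     # B- -> D0 pi-, tau(B+) in s
m_B, p_star = 5.2791, 2.308      # GeV
Gamma = BR * hbar / tau
A = math.sqrt(Gamma * 8.0 * math.pi * m_B**2 / p_star)
print(A)   # ~7.61e-7 GeV, matching the DP table entry
\end{verbatim}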
\begin{table}[t]
\caption{Branching ratios and flavor amplitude decomposition for $B\to DP$
decays. Data are quoted from Refs. \cite{PDG, Aubert:2006hu,
Ronga:2006hv, Aubert:2006qn, Aubert:2006um, Blyth:2006at, Aubert:2006jc,
Aubert:2006cd, Kuzmin:2006mw,unknown:2006qw}}
\label{tab:DP}
\begin{center}
\begin{tabular}{l c c c c c} \hline \hline
Decay & $m_B$ & Branching ratio & $p^*$ & $|{\cal A}|$ & Representation \\
& (GeV) & (in units of $10^{-4}$) & (GeV) & ($10^{-7}$ GeV) \\
\hline
$B^- \to D^0 \pi^-$ & 5.2791 & $47.5 \pm 2.1 $ & 2.308 & $ 7.61 \pm 0.17 $
& $-V_{cb}V_{ud}^*(T+C)$ \\
$ \to D^0 K^-$ & & $ 4.08 \pm 0.24 $ & 2.281 & $ 2.24 \pm 0.07 $
& $-V_{cb}V_{us}^* (\xi_TT+\xi_CC)$ \\
\hline
$\overline{B}^0 \to D^+ \pi^-$ & 5.2793 & $ 29 \pm 2 $ & 2.306 & $ 6.11 \pm 0.21 $
& $-V_{cb}V_{ud}^*(T+E) $ \\
$ \to D^+ K^-$ & & $2.0 \pm 0.6$ & 2.279 & $1.63 \pm 0.24 $
& $-V_{cb}V_{us}^* \xi_T T$ \\
$ \to D^0 \pi^0$ & & $ 2.61 \pm 0.24 $ & 2.308 & $1.85 \pm 0.09 $
& $V_{cb}V_{ud}^*(E-C)/\sqrt{2}$ \\
$ \to D^0 \eta$ & & $2.0\pm0.2$ & 2.274 & $1.62 \pm 0.08 $
& $V_{cb}V_{ud}^*(C+E)/\sqrt{3}$ \\
$ \to D^0 \eta'$ & & $ 1.25 \pm 0.23 $ & 2.198 & $1.31 \pm 0.12 $
& $-V_{cb}V_{ud}^*(C+E)/\sqrt{6}$ \\
$ \to D^0 \overline{K}^0$ & & $0.52 \pm 0.07$ & 2.280 & $0.83 \pm 0.05 $
& $-V_{cb}V_{us}^* \xi_C C$ \\
$ \to D_s^+ K^-$ & & $0.27 \pm 0.05$ & 2.242 & $0.61 \pm 0.06 $
& $-V_{cb}V_{ud}^*E$ \\
\hline
$\overline{B}^0_s \to D^+ \pi^-$ & 5.3696 & $ $ & 2.357 & $ $
& $-V_{cb}V_{us}^*\xi_E E $ \\
$ \to D^0 \pi^0$ & & $ $ & 2.359 & $ $
& $V_{cb}V_{us}^*\xi_E E/\sqrt{2}$ \\
$ \to D^0 K^0$ & & $ $ & 2.332 & $ $
& $-V_{cb}V_{ud}^*C$\\
$ \to D^0 \eta$ & & $ $ & 2.326 & $$
& $V_{cb}V_{us}^*(\xi_E E-\xi_CC)/\sqrt{3}$ \\
$ \to D^0 \eta'$ & & $ $ & 2.251 & $ $
& $-V_{cb}V_{us}^*(2\xi_CC+\xi_EE)/\sqrt{6}$ \\
$ \to D_s^+ \pi^-$ & & $38\pm3\pm13^{~a} $ & 2.321 & $ 7.30 \pm 1.28 $
& $-V_{cb}V_{ud}^*T$ \\
$ \to D_s^+ K^-$ & & $ $ & 2.294 & $ $
& $-V_{cb}V_{us}^*(\xi_TT+\xi_EE)$ \\
\hline\hline
\end{tabular}
\end{center}
\leftline{$^a$ Ref.~\cite{unknown:2006qw}.}
\end{table}
\begin{table}[t]
\caption{Branching ratios and flavor amplitude decomposition for $B\to D^*P$
decays. Data are quoted from Refs. \cite{PDG, Aubert:2006hu,
Ronga:2006hv, Aubert:2006qn, Aubert:2006um, Blyth:2006at, Aubert:2006jc,
Aubert:2006cd, Kuzmin:2006mw,unknown:2006qw}}
\label{tab:DstP}
\begin{center}
\begin{tabular}{l c c c c c} \hline \hline
Decay & $m_B$ & Branching ratio & $p^*$ & $|{\cal A}|$ & Representation \\
& (GeV) & (in units of $10^{-4}$) & (GeV) & ($10^{-7}$ GeV) \\
\hline
$B^- \to D^{*0} \pi^-$ & 5.2791 & $ 50 \pm 4 $ & 2.256 & $7.87 \pm 0.32 $
& $-V_{cb}V_{ud}^*(T_V+C_P)$ \\
$ \to D^{*0} K^-$ & & $ 3.7 \pm 0.4 $ & 2.227 & $2.16 \pm 0.12 $
& $-V_{cb}V_{us}^* (\xi_{T_V}T_V+\xi_{C_P}C_P)$ \\
\hline
$\overline{B}^0 \to D^{*+} \pi^-$ & 5.2793 & $28.5 \pm 1.7$ & 2.255 & $ 6.17 \pm 0.19 $
& $-V_{cb}V_{ud}^*(T_V+E_P) $ \\
$ \to D^{*+} K^-$ & & $ 2.14 \pm 0.20 $ & 2.226 & $1.70 \pm 0.08 $
& $-V_{cb}V_{us}^* \xi_{T_V} T_V$ \\
$ \to D^{*0} \pi^0$ & & $1.7 \pm 0.3 $ & 2.256 & $1.52 \pm 0.12 $
& $V_{cb}V_{ud}^*(E_P-C_P)/\sqrt{2}$ \\
$ \to D^{*0} \eta$ & & $ 1.8 \pm 0.6 $ & 2.220 & $ 1.55 \pm 0.24 $
& $V_{cb}V_{ud}^*(C_P+E_P)/\sqrt{3}$ \\
$ \to D^{*0} \eta'$ & & $ 1.23 \pm 0.35 $ & 2.141 & $ 1.32 \pm 0.19 $
& $-V_{cb}V_{ud}^*(C_P+E_P)/\sqrt{6}$ \\
$ \to D^{*0} \overline{K}^0$ & & $ 0.36 \pm 0.12^{~b} $ & 2.227 & $ 0.70 \pm 0.12 $
& $-V_{cb}V_{us}^* \xi_{C_P} C_P$ \\
$ \to D_s^{*+} K^-$ & & $ 0.20 \pm 0.05 \pm 0.04^{~c} $ & 2.185 & $ 0.53 \pm 0.08 $
& $-V_{cb}V_{ud}^*E_P$ \\
\hline
$\overline{B}^0_s \to D^{*+} \pi^-$ & 5.3696 & $ $ & 2.306 & $ $
& $-V_{cb}V_{us}^*\xi_{E_P}E_P $ \\
$ \to D^{*0} \pi^0$ & & $ $ & 2.308 & $ $
& $ V_{cb}V_{us}^*\xi_{E_P} E_P/\sqrt{2}$ \\
$ \to D^{*0} K^0$ & & & 2.279 & $ $
& $-V_{cb}V_{ud}^*C_P$ \\
$ \to D^{*0} \eta$ & & & 2.273 & $$
& $V_{cb}V_{us}^*(\xi_{E_P}E_P-\xi_{C_P}C_P)/\sqrt{3}$ \\
$ \to D^{*0} \eta'$ & & $ $ & 2.195 & $ $
& $-V_{cb}V_{us}^*(2\xi_{C_P}C_P+\xi_{E_P}E_P)/\sqrt{6}$ \\
$ \to D_s^{*+} \pi^-$ & & $ $ & 2.267 & $ $
& $-V_{cb}V_{ud}^*T_V$ \\
$ \to D_s^{*+} K^-$ & & $ $ & 2.238 & $ $
& $-V_{cb}V_{us}^*(\xi_{T_V}T_V+\xi_{E_P}E_P)$ \\
\hline\hline
\end{tabular}
\end{center}
\leftline{$^b$ Ref.~\cite{Aubert:2006qn}, $^c$ Ref.~\cite{Aubert:2006hu}.}
\end{table}
\begin{table}
\caption{Branching ratios and flavor amplitude decomposition for $B\to DV$
decays. Data are quoted from Refs. \cite{PDG, Aubert:2006hu,
Ronga:2006hv, Aubert:2006qn, Aubert:2006um, Blyth:2006at, Aubert:2006jc,
Aubert:2006cd, Kuzmin:2006mw,unknown:2006qw}}
\label{tab:DV}
\begin{center}
\begin{tabular}{l c c c c c} \hline \hline
Decay & $m_B$ & Branching ratio & $p^*$ & $|{\cal A}|$ & Representation \\
& (GeV) & (in units of $10^{-4}$) & (GeV) & ($10^{-7}$ GeV) \\
\hline
$B^- \to D^0 \rho^-$ & 5.2791 & $134 \pm 18 $ & 2.237 & $13.0 \pm 0.9 $
& $-V_{cb}V_{ud}^*(T_P+C_V)$ \\
$ \to D^0 K^{*-}$ & & $ 5.3 \pm 0.4 $ & 2.213 & $2.60 \pm 0.11 $
& $-V_{cb}V_{us}^* (\xi_{T_P}T_P+\xi_{C_V}C_V)$ \\
\hline
$\overline{B}^0 \to D^+ \rho^-$ & 5.2793 & $ 75 \pm 12 $ & 2.235 & $10.1 \pm 0.8 $
& $-V_{cb}V_{ud}^*(T_P+E_V) $ \\
$ \to D^+ K^{*-}$ & & $ 4.5 \pm 0.7 $ & 2.211 & $2.48 \pm 0.19 $
& $-V_{cb}V_{us}^* \xi_{T_P} T_P$ \\
$\to D^0 \rho^0$ & & $3.2 \pm 0.5 $ & 2.237 & $2.07 \pm 0.16 $
& $V_{cb}V_{ud}^*(E_V-C_V)/\sqrt{2}$ \\
$ \to D^0 \omega$ & & $2.6 \pm 0.3$ & 2.235 & $1.87 \pm 0.11 $
& $-V_{cb}V_{ud}^*(C_V+E_V)/\sqrt{2}$ \\
$\to D^0 \overline{K}^{*0}$ & & $ 0.42 \pm 0.06 $ & 2.212 & $0.76 \pm 0.06 $
& $-V_{cb}V_{us}^* \xi_{C_V} C_V$ \\
$ \to D_s^+ K^{*-}$ & & $ < 8 $ & 2.172 & $ < 3 $
& $-V_{cb}V_{ud}^*E_V$ \\
\hline
$\overline{B}^0_s \to D^+ \rho^-$ & 5.3696 & $ $ & 2.288 & $ $
& $-V_{cb}V_{us}^*\xi_{E_V} E_V $ \\
$ \to D^+ K^{*-}$ & & $ $ & 2.264 & $ $
& $-V_{cb}V_{us}^*(\xi_{T_P}T_P+\xi_{E_V}E_V)$ \\
$\to D^0 \rho^0$ & & & 2.289 & $ $
& $V_{cb}V_{ud}^*E_V/\sqrt{2}$ \\
$ \to D^0 K^{*0}$ & & & 2.265 & $ $
& $-V_{cb}V_{ud}^*C_V$ \\
$ \to D^0 \omega$ & & & 2.288 & $$
& $-V_{cb}V_{us}^*\xi_{E_V} E_V/\sqrt{2}$ \\
$ \to D^0 \phi$ & & & 2.237 & $$
& $-V_{cb}V_{us}^*\xi_{C_V} C_V$ \\
$ \to D_s^+ \rho^-$ & & & 2.250 & $ $
& $-V_{cb}V_{ud}^*T_P$ \\
$ \to D_s^+ K^{*-}$ & & & 2.226 & $ $
& $-V_{cb}V_{us}^*(\xi_{T_P}T_P+\xi_{E_V}E_V)$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\section{Strong Phases \label{sec:phases}}
In our analysis, we take the amplitude sizes and the strong phases as
theoretical parameters, and perform $\chi^2$ fits to all the branching ratios
in each category ($B_{u, d}\to DP$, $D^*P$, and $DV$). We consider three
schemes to test the flavor SU(3) assumption:
\begin{enumerate}
\item $\xi_T=\xi_C=\xi_{T_V}=\xi_{C_P}=\xi_{T_P}=\xi_{C_V}=1$. This is the
exact flavor SU(3)-symmetric case.
\item $\xi_T=\xi_C=\xi_{T_V} = \xi_{C_P}= f_K / f_\pi \simeq 1.22$, and
$\xi_{T_P}=\xi_{C_V}= f_{K^*} / f_{\rho} \simeq 1.00 $. This takes into
account the difference in the decay constants for the charmless meson in the
final states.
\item All $\xi_T$'s and $\xi_C$'s are taken as free parameters and determined
by the $\chi^2$ fit in each individual category.
\end{enumerate}
Here we have taken the decay constants $f_\pi = 130.7$ MeV, $f_K = 159.8$ MeV
\cite{PDG}, $f_{K^*} = 210.4$ MeV and $f_\rho = 210.4$ MeV
\cite{vecdecayconst}. For Scheme 3 in the $DV$ sector, it turns out that this
scheme does not work well with the presently available experimental data. We will
discuss this issue in Subsection \ref{sec:DV}.
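To make the fitting procedure concrete, the sketch below sets up the Scheme 3
$\chi^2$ for a representative subset of $DP$ modes (illustrative only: the
actual fits use all measured branching ratios of a category, and the minimizer
and starting values here are our own choices).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

lam, A_w = 0.2272, 0.809
Vcb, Vud, Vus = A_w * lam**2, 1.0 - lam**2 / 2.0, lam

data = {  # |A| and error in units of 1e-7 GeV, from the DP table
    "D0pi-": (7.61, 0.17), "D0K-": (2.24, 0.07), "D+pi-": (6.11, 0.21),
    "D0pi0": (1.85, 0.09), "Ds+K-": (0.61, 0.06),
}

def model(p):
    T, C, E, dC, dE, xT, xC = p    # |T|, |C|, |E| in units of 1e-7 GeV
    Cc, Ee = C * np.exp(1j * dC), E * np.exp(1j * dE)
    return {
        "D0pi-": Vcb * Vud * abs(T + Cc),
        "D0K-":  Vcb * Vus * abs(xT * T + xC * Cc),
        "D+pi-": Vcb * Vud * abs(T + Ee),
        "D0pi0": Vcb * Vud * abs(Ee - Cc) / np.sqrt(2.0),
        "Ds+K-": Vcb * Vud * abs(Ee),
    }

def chi2(p):
    th = model(p)
    return sum(((th[m] - v) / e) ** 2 for m, (v, e) in data.items())

x0 = [140.0, 66.0, 15.0, np.radians(-49.0), np.radians(29.0), 1.2, 1.3]
best = minimize(chi2, x0, method="Nelder-Mead")
\end{verbatim}
Parameter uncertainties and the contour plots shown below then follow from
scanning $\Delta\chi^2$ around the minimum.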
Among all $B_{u,d}$ decays considered in this work, no Cabibbo-suppressed decay
involves the exchange diagram. The only place to test this is in $B_s$
decays, of which we know very little at the moment. We thus set all $\xi_E$'s
to unity when predicting the branching ratios of those decays.
The strong phases given in the following results are subject to a two-fold
ambiguity. This is because only the cosines of the relative strong phases are
involved in the branching ratios. Therefore, it is allowed to flip the signs
of all the phases simultaneously without changing the fit quality or our
predictions. In view of this, we will restrict the strong phase associated
with the color-suppressed amplitudes to the $[-180^\circ,0^\circ]$ range in our
analysis.
\subsection{$B \to D P$ decays}
\label{sec:DP}
In Table~\ref{tab:DPfit}, we see that $\chi^2_{\rm min}$ is greatly reduced by
the introduction of the SU(3) breaking factors $\xi_T$ and $\xi_C$. The
smallness of $\chi^2_{\rm min}$ in Schemes~2 and 3 also shows the consistency
of input observables.
\begin{table}[h]
\caption{$B\to DP$ decays. Theoretical parameters are extracted from global
$\chi^2$ fits in different schemes explained in the text. The amplitude
sizes are given in units of $10^{-6}$. Predictions of branching ratios are
made with $\xi_E=1$ and given in units of $10^{-4}$ unless otherwise noted.}
\label{tab:DPfit}
\begin{center}
\begin{tabular}{l c c c} \hline \hline
& Scheme 1 & Scheme 2 & Scheme 3 \\
\hline\hline
$|T|$ & $16.26^{+0.61}_{-0.68} $ & $13.74 \pm 0.45$
& $13.71\pm 0.46 $ \\
$|C|$ & $6.77^{+0.20}_{-0.21} $ & $6.67 \pm 0.20 $
& $6.57 \pm 0.22 $\\
$|E|$ & $1.47^{+0.13}_{-0.15} $ & $1.48^{+0.13}_{-0.15} $
& $1.49^{+0.13}_{-0.15}$\\
$\delta_C$ (degrees) & $-69.0^{+9.2}_{-7.5}$ & $-47.0^{+9.5}_{-8.2}$
& $-48.7^{+9.8}_{-8.5} $ \\
$\delta_E$ (degrees) & $-146.2^{+13.9}_{-12.0}$ & $30.4^{+11.6}_{-11.8}$ &
$28.9^{+11.9}_{-12.1}$ \\
$\xi_T$ &1 (fixed) & $f_K/f_\pi$ (fixed) & $1.24 \pm 0.02 $ \\
$\xi_C$ &1 (fixed) & $f_K/f_\pi$ (fixed) & $1.33 \pm 0.02 $ \\
\hline
$\chi^2_{\rm min}$ & 45.28 & 3.53 & 1.41 \\
$\chi^2_{\rm min}/{\rm dof}$ & 11.32 & 0.88 & 0.71 \\
\hline\hline
$B^- \to D^0 \pi^-$ & $52.8 \pm 5.3 $ & $ 48.6 \pm 3.7 $ & $ 47.5 \pm 3.8 $ \\
$\to D^0 K^-$ & $2.84 \pm 0.28 $ & $ 3.91 \pm 0.30 $ & $ 4.08 \pm 0.34 $ \\
\hline
$\overline{B}^0 \to D^+ \pi^-$ & $ 29 \pm 3 $ & $ 29 \pm 2 $ & $ 29 \pm 2 $ \\
$\to D^+ K^-$ & $ 1.8 \pm 0.1 $ & $ 1.9 \pm 0.1 $ & $ 2.0 \pm 0.1 $ \\
$\to D^0 \pi^0$ & $ 2.76 \pm 0.37 $ & $ 2.68 \pm 0.35 $ & $ 2.61 \pm 0.36 $ \\
$\to D^0 \eta$ & $ 2.2 \pm 0.3 $ & $ 2.1 \pm 0.2 $ & $ 2.1 \pm 0.2 $ \\
$\to D^0 \eta'$ & $ 1.06 \pm 0.12 $ & $ 1.03 \pm 0.11 $ & $ 1.00 \pm 0.12 $ \\
$\to D^0 \overline{K}^0$ & $ 0.31 \pm 0.02 $ & $ 0.45 \pm 0.03 $ & $ 0.52 \pm 0.04 $ \\
$\to D_s^+ K^-$ & $ 0.27 \pm 0.05 $ & $ 0.27 \pm 0.05 $ & $ 0.27 \pm 0.05 $ \\
\hline
$\overline{B}^0_s \to D^+ \pi^-$ (in units of $10^{-6}$) & $ 1.4 \pm 0.3 $ & $ 1.4 \pm 0.3
$ & $ 1.4 \pm 0.3 $ \\
$\to D^0 \pi^0$ (in units of $10^{-6}$) & $ 0.7 \pm 0.1 $ & $ 0.7 \pm 0.1 $ & $ 0.7 \pm 0.1 $ \\
$\to D^0 K^0$ & $ 5.4 \pm 0.3 $ & $ 5.3 \pm 0.3 $ & $ 5.1 \pm 0.3 $ \\
$\to D^0 \eta$ & $ 0.09 \pm 0.01 $ & $ 0.14 \pm 0.01 $ & $ 0.16 \pm 0.02 $ \\
$\to D^0 \eta'$ & $ 0.20 \pm 0.02 $ & $ 0.29 \pm 0.02 $ & $ 0.33 \pm 0.03 $ \\
$\to D_s^+ \pi^-$ & $ 31 \pm 2 $ & $ 22 \pm 1 $ & $ 22 \pm 1 $ \\
$\to D_s^+ K^-$ & $ 1.8 \pm 0.1 $ & $ 2.0 \pm 0.1 $ & $ 2.0 \pm 0.1 $ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
The values of $|T|$ and $|C|$ can be directly obtained from the $\overline{B^0}
\to D^+ K^-$ and $D^0 \overline{K^0}$ decays via the U-spin symmetry, i.e.,
exchange between $d$ quark and $s$ quark. They are respectively $14.0 \pm 2.1$
and $7.17 \pm 0.46$ in units of $10^{-6}$ GeV. Here we take
$\xi_T=\xi_C=f_K/f_\pi$. Likewise, $|E|$ is inferred from the $\overline{B^0}
\to D_s^+ K^-$ mode to be $(1.49 \pm 0.14) \times 10^{-6}$ GeV. These values
directly extracted from individual modes are consistent with those given in
Table~\ref{tab:DPfit} and in general have larger errors except for $|E|$. The
SU(3) breaking parameter $\xi_T$ can also be extracted from $\overline{B^0}\to
D^+K^-$ and $\overline{B_s^0}\to D_s^+\pi^-$. It leads to $\xi_T=0.96\pm0.22$.
This is smaller than the fitted value of $\xi_T$ in Table~\ref{tab:DPfit}.
According to our wave functions for $\eta$ and $\eta'$, the ratio ${\cal
B}(D^0\eta) / {\cal B}(D^0\eta')$ is predicted to be 2, in comparison with
$1.58 \pm 0.33$ given by the current data. From these decays, we determine
$|C+E| = (7.07 \pm 0.30) \times 10^{-6}$ GeV. On the other hand, $|C-E| =
(6.42 \pm 0.30) \times 10^{-6}$ GeV is inferred from the $D^0 \pi^0$ mode.
Therefore, using the parallelogram identity $|C|^2 + |E|^2 = (|C+E|^2 + |C-E|^2)/2$,
one can form the combination $|C|^2 + |E|^2 = (45.6 \pm 2.9) \times
10^{-12}$ GeV$^2$ from these three modes, consistent with $(53.6 \pm 6.6)
\times 10^{-12}$ GeV$^2$ that is derived from the $D^0 \overline{K^0}$ and
$D_s^+ K^-$ modes assuming $\xi_C=f_K/f_\pi$.
\begin{figure}[h]
\includegraphics[height=4.5cm]{DP4delCC.eps}
\hspace{0.3cm}
\includegraphics[height=4.5cm]{DP4delEE.eps}
\hspace{0.3cm}
\includegraphics[height=4.5cm]{DP4EC.eps}
\caption{$\Delta\chi^2$=1 (pink, solid) and 2.30 (blue, dotted) contours on the
$\delta_C$-$|C/T|$, $\delta_E$-$|E/T|$, and $|E/T|$-$|C/T|$ planes in Scheme
3.}
\label{fig:DP2CE}
\end{figure}
In Fig.~\ref{fig:DP2CE}, we show the $\Delta \chi^2 = 1$ and $2.30$ contours on
the $\delta_C$-$|C/T|$, $\delta_E$-$|E/T|$, and $|E/T|$-$|C/T|$ planes in
Scheme 3, respectively, showing the correlations between each pair of
parameters. The projections of the $\Delta \chi^2 = 1$ contours to individual
axes give the 68.3\% confidence level (CL) ranges of the corresponding
quantities. In particular, we find that $|C/T| = 0.48\pm 0.02 $ and $|E/T| =
0.11\pm 0.01$. Our result shows an enhancement in the color-suppressed
amplitude. This can be explained by non-factorizable effects or final state
interactions. The three flavor amplitude sizes fall into a hierarchy: $|T| >
|C| > |E|$, with $|E|$ being about one order of magnitude smaller than $|T|$.
This is the reason why the 1 $\sigma$ bounds on $\delta_E$ are relatively
loose. Moreover, we observe non-trivial strong phases $\delta_C$ and
$\delta_E$. These results are consistent with previous
studies~\cite{Chiang:2002tv,Kim:2004hx,Colangelo:2005hh}.
We note in passing that in our contour plots, the planes are scanned by
minimizing $\chi^2$, keeping all the other parameters free to vary. Therefore,
our results are different from those given in Ref.~\cite{Colangelo:2005hh}. In
addition, their formalism corresponds to our Scheme 1.
Based upon our fit results, we give predictions for all the $B_{u,d,s}$ meson
decays in this category in the lower part of Table~\ref{tab:DPfit}. For the
$B_s$ decays involving the exchange diagram, we take $\xi_E=1$. The predicted
branching ratios of those modes could be changed if we take into account SU(3)
breaking in $E$. Conversely, measurements of those modes can provide useful
information about the magnitude of the SU(3) breaking effect in the exchange
diagram.
$\overline{B_s}^0 \to D_s^+ \pi^-$ is a Cabibbo-favored decay involving the
tree amplitude. Therefore, it has the largest decay rate among the channels in
this group. Our preferred value for its branching ratio is $(22 \pm 1) \times
10^{-4}$. On the other hand, a recent measurement of this mode by CDF gives
$(38 \pm 3 \pm 13) \times 10^{-4}$~\cite{unknown:2006qw}. The discrepancy is
1.2~$\sigma$. Further measurements of this and other $B_s$ decay modes with
better precision will help settle the question of whether flavor SU(3) symmetry
can be reliably extended to the sector of $B_s$ meson decays.
From the naive factorization (NF) approximation, the SU(3) breaking parameters
are given by
\begin{eqnarray}
\xi_T^{\rm NF} = \frac{f_K F_0^{BD}(m_K^2)}{f_\pi F_0^{BD}(m_\pi^2)}
\simeq 1.23~,
&\quad&
\xi_C^{\rm NF} = \frac{(m_B^2 - m_K^2) F_0^{BK}(m_D^2)}
{(m_B^2 - m_\pi^2) F_0^{B\pi}(m_D^2)} \simeq 1.37 ~.
\end{eqnarray}
where the form factors are calculated using the covariant light-front
model~\cite{Cheng:2003sm}:
$F_0^{BD}(m_\pi^2)=0.67,~F_0^{BD}(m_K^2)=0.67,~F_0^{B\pi}(m_D^2)=0.28,
~F_0^{BK}(m_D^2)=0.38.$ These theoretical predictions are very close to our
fitted values: $\xi_T=1.24 \pm 0.02$ and $\xi_C=1.33 \pm 0.02$.
The ratio of the two effective Wilson coefficients $a_{1, 2}^{\rm eff}$ for
these decay processes can be extracted as
\begin{eqnarray}
\left|\frac{a_2^{\rm eff}}{a_1^{\rm eff}}\right|_{DP}
&=& \left|\frac{C}{T}\right|\frac{(m_B^2 - m_D^2) f_\pi F_0^{BD}(m_\pi^2)}
{(m_B^2 - m_\pi^2) f_D F_0^{B\pi}(m_D^2)}
= 0.59 \pm 0.03~,
\end{eqnarray}
where $|C/T|=0.48\pm0.02$ as obtained from the $\chi^2$ analysis in Scheme 3,
and $f_D = 222.6$ MeV~\cite{PDG} is used. In Ref.~\cite{Kim:2004hx},
$|a_2^{\rm eff}/a_1^{\rm eff}|_{DP}$ is found to be $0.54-0.70$ at the 1
$\sigma$ level using the data of $B^-\to D^0\pi^-$, $\overline{B}^0 \to D^+
\pi^-$ and $\overline{B}^0 \to D^0 \pi^0$ modes, which is consistent with our
result. In the pQCD calculation~\cite{Keum:2003js}, it is found that
$|a_2^{\rm eff}/a_1^{\rm eff}|_{DP} =0.42-0.51$, and the relative phase between
$a_1^{\rm eff}$ and $a_2^{\rm eff}$ is estimated to be $-65.3^\circ<\arg(a^{\rm
eff}_2/a^{\rm eff}_1)_{DP} <-61.5^\circ$ without the exchange diagram.
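As a quick arithmetic check of the value $0.59$ obtained above (our evaluation
of the expression for $|a_2^{\rm eff}/a_1^{\rm eff}|_{DP}$):
$(m_B^2 - m_D^2)/(m_B^2 - m_\pi^2) \simeq 0.875$, $f_\pi/f_D \simeq 0.587$ and
$F_0^{BD}(m_\pi^2)/F_0^{B\pi}(m_D^2) \simeq 2.39$, so the kinematic prefactor
multiplying $|C/T|$ is $\simeq 1.23$ and
$|a_2^{\rm eff}/a_1^{\rm eff}|_{DP} \simeq 1.23 \times 0.48 \simeq 0.59$, as
quoted.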
\subsection{$B \to D^* P$ decays}
\label{sec:DstP}
We see again in Table~\ref{tab:DstPfit} that $\chi^2_{\rm min}$ is
significantly lowered by the introduction of the SU(3) breaking factors
$\xi_{T_V} $ and $\xi_{C_P}$.
\begin{table}[h]
\caption{$B\to D^*P$ decays. Theoretical parameters are extracted from global
$\chi^2$ fits in different schemes explained in the text. The amplitude
sizes are given in units of $10^{-6}$. Predictions of branching ratios are
made with $\xi_E=1$ and given in units of $10^{-4}$ unless otherwise noted.}
\label{tab:DstPfit}
\begin{center}
\begin{tabular}{l c c c} \hline \hline
& Scheme 1 & Scheme 2 & Scheme 3 \\
\hline\hline
$|T_V|$ & $16.45^{+0.55}_{-0.61} $ & $14.85^{+0.60}_{-1.00} $
& $15.34^{+0.84}_{-1.70} $ \\
$|C_P|$ & $6.03^{+0.43}_{-0.46} $ & $6.21^{+0.39}_{-0.43} $
& $6.14^{+0.46}_{-0.50} $\\
$|E_P|$ & $1.37^{+0.18}_{-0.20} $ & $1.26^{+0.20}_{-0.23} $
& $1.29^{+0.19}_{-0.22}$\\
$\delta_{C_P}$ (degrees) & $-63.4^{+13.2}_{-10.8} $ & $-54.5^{+24.9}_{-12.0} $
& $-57.3^{+30.0}_{-12.6}$ \\
$\delta_{E_P}$ (degrees) & $-126.8^{+21.4}_{-19.3}$ & $-84.7^{+84.7}_{-27.8}$ &
$-100.9^{+100.9}_{-30.0}$ \\
$\xi_{T_V}$ & 1 (fixed) & $f_K/f_\pi$ (fixed) & $1.17^{+0.03}_{-0.02} $ \\
$\xi_{C_P}$ & 1 (fixed) & $f_K/f_\pi$ (fixed) & $1.20 \pm 0.05 $ \\
\hline
$\chi^2_{\rm min}$ & 12.00 & 1.39 & 0.72 \\
$\chi^2_{\rm min}/{\rm dof}$ & 3.00 & 0.35 & 0.36 \\
\hline\hline
$B^- \to D^{*0} \pi^-$ & $ 52 \pm 6 $ & $ 49 \pm 8 $ & $ 50 \pm 10 $ \\
$\to D^0 K^-$ & $ 2.8 \pm 0.3 $ & $ 3.9 \pm 0.6 $ & $ 3.7 \pm 0.8 $ \\
\hline
$\overline{B}^0 \to D^{*+} \pi^-$ & $ 30.3 \pm 2.8 $ & $ 27.9 \pm 5.4 $ & $ 28.4 \pm 7.3 $ \\
$\to D^{*+} K^-$ & $1.80 \pm 0.13 $ & $ 2.19 \pm 0.24 $ & $ 2.14 \pm 0.37 $ \\
$\to D^{*0} \pi^0$ & $ 1.9 \pm 0.5 $ & $ 1.6 \pm 0.6 $ & $ 1.7 \pm 0.9 $ \\
$\to D^{*0} \eta$ & $ 1.9 \pm 0.4 $ & $ 2.2 \pm 0.4 $ & $ 2.1 \pm 0.6 $ \\
$\to D^{*0} \eta'$ & $0.89 \pm 0.17 $ & $ 1.05 \pm 0.21 $ & $ 1.00 \pm 0.29 $ \\
$\to D^{*0} \overline{K}^0$ & $ 0.24 \pm 0.04 $ & $ 0.38 \pm 0.05 $ & $ 0.36 \pm 0.06 $ \\
$\to D_s^{*+} K^-$ & $ 0.23 \pm 0.06 $ & $ 0.19 \pm 0.06 $ & $ 0.20 \pm 0.06 $ \\
\hline
$\overline{B}^0_s \to D^{*+} \pi^-$ (in units of $10^{-6}$) & $1.2 \pm 0.3 $ & $ 1.0 \pm 0.3
$ & $ 1.1 \pm 0.3 $ \\
$\to D^{*0} \pi^0$ (in units of $10^{-7}$) & $ 6.0 \pm 1.6 $ & $ 5.0 \pm 1.7 $ & $ 5.3 \pm 1.7 $ \\
$\to D^{*0} K^0$ & $ 4.2 \pm 0.6 $ & $ 4.5 \pm 0.6 $ & $ 4.4 \pm 0.7 $ \\
$\to D^{*0 }\eta$ & $ 0.06 \pm 0.02 $ & $ 0.09 \pm 0.02 $ & $ 0.09 \pm 0.04 $ \\
$\to D^{*0} \eta'$ & $ 0.16 \pm 0.03 $ & $ 0.27\pm 0.03 $ & $ 0.25 \pm 0.05 $ \\
$\to D_s^{*+} \pi^-$ & $ 31 \pm 2 $ & $ 25 \pm 3 $ & $ 27 \pm 4 $ \\
$\to D_s^{*+} K^-$ & $ 1.8 \pm 0.1 $ & $ 2.1 \pm 0.3 $ & $ 2.2 \pm 0.5 $ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
In this category, $|T_V| = (14.7 \pm 0.7) \times 10^{-6}$ GeV,
$|C_P|=(6.0\pm1.0)\times 10^{-6}$ GeV and $|E_P| =(1.29 \pm 0.21) \times
10^{-6}$ GeV can be directly extracted from the $D^{*+} K^-$, $D^{*0}
\overline{K^0}$ and $D_s^{*+} K^-$ modes respectively, taking
$\xi_{T_V}=\xi_{C_P}=f_K/f_\pi$. Another way to constrain $|C_P|$ is to deduce
from the $D^{*0} (\pi,\eta,\eta')$ and $D_s^{*+} K^-$ modes. Using this
method, we find $|C_P| = (6.2 \pm 0.5) \times 10^{-6}$ GeV.
\begin{figure}[h]
\includegraphics[height=4.5cm]{DstP4delCC.eps}
\hspace{0.3cm}
\includegraphics[height=4.5cm]{DstP4delEE.eps}
\hspace{0.3cm}
\includegraphics[height=4.5cm]{DstP4EC.eps}
\caption{$\Delta\chi^2$=1 (pink, solid) and 2.30 (blue, dotted) contours on the
$\delta_{C_P}$-$|C_P/T_V|$, $\delta_{E_P}$-$|E_P/T_V|$, and
$|E_P/T_V|$-$|C_P/T_V|$ planes in Scheme 3.}
\label{fig:DstP2CE}
\end{figure}
In Fig.~\ref{fig:DstP2CE}, we show the $\Delta \chi^2 = 1$ and $2.30$ contours
on the $\delta_{C_P}$-$|C_P/T_V|$, $\delta_{E_P}$-$|E_P/T_V|$, and
$|E_P/T_V|$-$|C_P/T_V|$ planes in Scheme 3, respectively. We find that
$|C_P/T_V| = 0.40^{+0.07}_{-0.04}$ and $|E_P/T_V| = 0.08^{+0.02}_{-0.01}$. As
in the $DP$ category, there can be sizable non-factorizable contributions to
the color-suppressed amplitude or final state interactions. The hierarchy among
$|T_V|$, $|C_P|$ and $|E_P|$ is also very similar to those in the $DP$ category
in Section~\ref{sec:DP}. However, the strong phase $\delta_{E_P}$ is consistent
with zero within the 68.3\% CL region.
Predictions for all the $B_{u,d,s}$ meson decays according to our fit results
are listed in the lower part of Table~\ref{tab:DstPfit}. The dominant
$B_s$ mode, $D_s^{*+} \pi^-$, is predicted to have a branching ratio of
$(27 \pm 4) \times 10^{-4}$, similar to that of the $D_s^+ \pi^-$ mode.
From the naive factorization approximation,
the SU(3) breaking parameters are given by
\begin{eqnarray}
\xi_{T_V}^{\rm NF}=\frac{p^*_{D^*K} f_{K} A_0^{BD^*}(m_{K}^2)}
{p^*_{D^*\pi} f_\pi A_0^{BD^*}(m_\pi^2)} \simeq 1.22 ~,
&\quad&
\xi_{C_P}^{\rm NF}=\frac{p^*_{D^*K}F_1^{BK}(m_{D^*}^2)}
{p^*_{D^*\pi}F_1^{B\pi}(m_{D^*}^2)} \simeq 1.36,
\end{eqnarray}
where $A_0^{BD^*}(m_\pi^2)=0.64$, $A_0^{BD^*}(m_{K}^2)=0.65$,
$F_1^{B\pi}(m_{D^*}^2)=0.31$ and
$F_1^{BK}(m_{D^*}^2)=0.43$~\cite{Cheng:2003sm}. Unlike the $DP$ sector, our
fitted SU(3) breaking factors are somewhat smaller than the naive factorization
expectations.
The ratio of the two effective Wilson coefficients can be extracted as
\begin{eqnarray}
\left|\frac{a_2^{\rm eff}}{a_1^{\rm eff}}\right|_{D^*P}
&=& \left|\frac{C_P}{T_V}\right|
\frac{f_\pi A_0^{BD^*}(m_\pi^2)}{f_{D^*}F_1^{B\pi}(m_{D^*}^2)}
= 0.42 \pm 0.04 ~,
\end{eqnarray}
where $|C_P/T_V|=0.40^{+0.07}_{-0.04}$ as obtained from the $\chi^2$ analysis
in Scheme 3, and $f_{D^*}$ = 256.0 MeV~\cite{PDG,Neubert:1997uc} is used. In
the pQCD approach~\cite{Keum:2003js}, $|a_2^{\rm eff}/a_1^{\rm eff}|_{D^*P}$ is
found to be $0.47-0.55$ and their relative phase is estimated to be
$-64.8^\circ<\arg(a^{\rm eff}_2/a^{\rm eff}_1)_{D^*P} <-61.4^\circ$ without the
exchange diagram. These are consistent with our results at the 1~$\sigma$
level.
\subsection{$B \to D V$ decays}
\label{sec:DV}
The decays in this category exhibit a very different pattern from the previous
two in the $\chi^2$ fitting. First, Scheme~1 and Scheme~2 yield the same
result. This is because $f_{K^*}/f_\rho \simeq 1.00$. Furthermore, Scheme 3
does not work well, unlike in the $DP$ and $D^*P$ sectors. If we take
$\xi_{T_P}$ and $\xi_{C_V}$ as free fitting parameters, we find
$\chi_{\rm min}^2=0.045,~\xi_{T_P}=0.83,~\xi_{C_V}=4.58,~|T_P|=31.49,~|C_V|=1.75$ and
$|E_V|=6.64$ in units of $10^{-6}$ GeV. Theoretically, we do not expect $|C_V|<|E_V|$.
This unreasonable result is partly caused by the fact that $|E_V|$ is only weakly
constrained by experiment, through the upper bound on $\overline{B}^0\to D_s^+K^{*-}$. Therefore we
here adopt another prescription in which $\xi_{T_P}$ and $\xi_{C_V}$ are fixed
by the naive factorization calculation, i.e.,
\begin{eqnarray}
\xi_{T_P}=\xi_{T_P}^{\rm NF} = \frac{p^*_{DK^*} f_{K^*} F_1^{BD}(m_{K^*}^2)}
{p^*_{D\rho} f_\rho F_1^{BD}(m_\rho^2)} \simeq 1.00~,
&~&
\xi_{C_V}=\xi_{C_V}^{\rm NF}
= \frac{p^*_{DK^*}A_0^{BK^*}(m_D^2)}{p^*_{D\rho}A_0^{B\rho}(m_D^2)}
\simeq 1.09~,
\end{eqnarray}
where $F_1^{BD}(m_\rho^2)=0.69$, $F_1^{BD}(m_{K^*}^2)=0.69$,
$A_0^{B\rho}(m_D^2)=0.35$ and $A_0^{BK^*}(m_D^2)=0.38$~\cite{Cheng:2003sm}.
\begin{table}[h]
\caption{$B\to DV$ decays. Theoretical parameters are extracted from global
$\chi^2$ fits in different schemes explained in the text. The amplitude
sizes are given in units of $10^{-6}$. Predictions of branching ratios are
made with $\xi_E=1$ and given in units of $10^{-4}$ unless otherwise noted.}
\label{tab:DVfit}
\begin{center}
\begin{tabular}{l c c c} \hline \hline
& Scheme 1 & Scheme 2 & Scheme 3 \\
\hline\hline
$|T_P|$ & $25.60^{+1.56}_{-1.62} $ & $25.60^{+1.56}_{-1.62} $
& $25.87^{+1.61}_{-1.72} $ \\
$|C_V|$ & $7.07^{+0.29}_{-0.33} $ & $7.07^{+0.29}_{-0.33} $
& $6.95^{+0.29}_{-0.37}$\\
$|E_V|$ & $0.57^{+1.32}_{-0.43} $ & $0.57^{+1.32}_{-0.43} $
& $0.77^{+1.53}_{-0.66} $\\
$\delta_{C_V}$ (degrees) & $-75.1^{+19.1}_{-15.8} $ & $-75.1^{+19.1}_{-15.8} $
&$ -79.2^{+18.0}_{-14.9} $ \\
$\delta_{E_V}$ (degrees) & $143.4^{+36.6}_{-108.8}$ & $143.4^{+36.6}_{-108.8}$ &
$158.6^{+21.4}_{-128.5}$ \\
$\xi_{T_P}$ & 1 (fixed) & $f_{K^*}/f_\rho$ (fixed) & $ \xi_{T_P}^{\rm NF}$ (fixed) \\
$\xi_{C_V}$ & 1 (fixed) & $f_{K^*}/f_\rho$ (fixed) & $ \xi_{C_V}^{\rm NF}$ (fixed) \\
\hline
$\chi^2_{\rm min}$ & 5.91 & 5.91 & 4.18 \\
$\chi^2_{\rm min}/{\rm dof}$ & 2.96 & 2.96 & 2.09 \\
\hline\hline
$B^- \to D^0 \rho^-$ & $105 \pm 18 $ & $ 105 \pm 18 $ & $ 103 \pm 18 $ \\
$\to D^0 K^{*-}$ & $ 5.7 \pm 1.0 $ & $ 5.7 \pm 1.0 $ & $ 5.6 \pm 1.0 $ \\
\hline
$\overline{B}^0 \to D^+ \rho^-$ & $ 78 \pm 11 $ & $ 78 \pm 11 $ & $ 78 \pm 12 $ \\
$\to D^+ K^{*-}$ & $ 4.3 \pm 0.5 $ & $ 4.3 \pm 0.5 $ & $ 4.4 \pm 0.6 $ \\
$\to D^0 \rho^0$ & $ 3.5 \pm 0.8 $ & $ 3.5 \pm 0.8 $ & $ 3.4 \pm 1.0 $ \\
$\to D^0 \omega$ & $ 2.7 \pm 0.7 $ & $ 2.7 \pm 0.7 $ & $ 2.7 \pm 0.9 $ \\
$\to D^0 \overline{K}^{*0}$ & $ 0.33 \pm 0.03 $ & $ 0.33 \pm 0.03 $ & $ 0.38 \pm 0.04 $ \\
$\to D_s^+ K^{*-}$ & $ 0.04 \pm 0.12 $ & $ 0.04 \pm 0.12 $ & $ 0.07 \pm 0.20 $ \\
\hline
$\overline{B}^0_s \to D^+ \rho^-$ (in units of $10^{-7}$) & $ 2.1 \pm 6.3 $ & $ 2.1 \pm 6.3
$ & $ 3.8 \pm 10.7 $ \\
$\to D^+ K^{*-}$ & $ 4.2 \pm 0.6 $ & $ 4.2 \pm 0.6 $ & $ 4.2 \pm 0.6 $ \\
$\to D^0 \rho^0$ (in units of $10^{-6}$) & $ 1.9 \pm 5.8 $ & $ 1.9 \pm 5.8 $ & $ 3.5 \pm 9.8 $ \\
$\to D^0 K^{*0}$ & $ 5.8 \pm 0.5 $ & $ 5.8 \pm 0.5 $ & $ 5.6 \pm 0.5 $ \\
$\to D^0 \omega$ (in units of $10^{-7}$) & $ 1.0 \pm 3.2 $ & $ 1.0 \pm 3.2 $ & $ 1.9 \pm 5.4 $ \\
$\to D^0 \phi$ & $ 0.31 \pm 0.03 $ & $ 0.31 \pm 0.03 $ & $ 0.35 \pm 0.03 $ \\
$\to D_s^+ \rho^-$ & $ 75 \pm 9 $ & $ 75 \pm 9 $ & $ 77 \pm 10 $ \\
$\to D_s^+ K^{*-}$ & $ 4.1 \pm 0.6 $ & $ 4.1 \pm 0.6 $ & $ 4.2 \pm 0.6 $ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
$|T_P| = (26.1 \pm 2.0) \times 10^{-6}$ GeV can be extracted from the $D^+
K^{*-}$ mode using the U-spin symmetry and taking $\xi_{T_P}=f_{K^*}/f_\rho$.
This is slightly larger than our fit result in Scheme 2. Directly from the
$\overline{B^0} \to D_s^+ K^{*-}$ mode, we have only a poor upper bound of $8.2
\times 10^{-6}$ GeV on $|E_V|$.
The observable ${\cal B}(D^0 \rho^-)$ has the largest contribution to the total
$\chi^2_{\rm min}$. From Table \ref{tab:DV}, we observe that the area of the
triangle formed from the $B^-\to D^0\rho^-$, $\overline{B}^0 \to D^+ \rho^-$
and $\overline{B}^0 \to D^0 \rho^0$ decays is very small, while that of the
triangle formed from the $B^-\to D^0K^{*-}$, $\overline{B}^0 \to D^+ K^{*-}$
and $\overline{B}^0 \to D^0 \overline{K}^{*0}$ modes is not. This is the
reason why the global $\chi^2$ fits in the $DV$ sector are not as satisfactory
as those in the $DP$ and $D^*P$ sectors.
In Ref.~\cite{Chiang:2002tv}, we noted that $|C_V|$ extracted from
$D^0\overline{K}^{*0}$ was inconsistent with $\sqrt{|C_V|^2+|E_V|^2}$ extracted
from a combination of the $D^0\rho^0$ and $D^0\omega$ modes. Currently, the
former is $(8.01 \pm 0.60) \times 10^{-6}$ GeV if we take
$\xi_{C_V}=f_{K^*}/f_\rho$, and the latter is $(6.86 \pm 0.34) \times 10^{-6}$
GeV. There is still a discrepancy at the 1.7$\;\sigma$ level; alternatively, the
discrepancy implies that the SU(3) breaking factor $\xi_{C_V}$ should be
greater than about $1.17$. A determination of ${\cal B}(D_s^+ K^{*-})$ and
better measurements of related modes will be very useful in providing further
insights into this problem.
\begin{figure}[h]
\includegraphics[height=4.5cm]{DV5delCC.eps}
\hspace{0.3cm}
\includegraphics[height=4.5cm]{DV5delEE.eps}
\hspace{0.3cm}
\includegraphics[height=4.5cm]{DV5EC.eps}
\caption{$\Delta\chi^2$=1 (pink, solid) and 2.30 (blue, dotted) contours on the
$\delta_{C_V}$-$|C_V/T_P|$, $\delta_{E_V}$-$|E_V/T_P|$, and
$|E_V/T_P|$-$|C_V/T_P|$ planes in Scheme 3.}
\label{fig:DV2CE}
\end{figure}
In Fig.~\ref{fig:DV2CE}, we show the $\Delta \chi^2 = 1$ and $2.30$ contours on
the $\delta_{C_V}$-$|C_V/T_P|$, $\delta_{E_V}$-$|E_V/T_P|$, and
$|E_V/T_P|$-$|C_V/T_P|$ planes in Scheme 3, respectively. We find that
$|C_V/T_P| =0.27 \pm 0.02$ and $|E_V/T_P| =0.03^{+0.06}_{-0.03}$. We see that
the magnitude of $T_P$ is larger than those of $T$ and $T_V$, resulting in a more
hierarchical structure among $|T_P|$, $|C_V|$ and $|E_V|$. Another consequence of
the large $T_P$ is the larger predicted branching ratio for the
dominant $\overline{B_s}^0 \to D_s^+ \rho^-$ mode. As in the $D^*P$
sector, the central value of $\delta_{E_V}$ is non-zero, but is still
consistent with zero within the 68.3\% CL region.
The ratio of the two effective Wilson coefficients can be extracted as
\begin{eqnarray}
\left|\frac{a_2^{\rm eff}}{a_1^{\rm eff}}\right|_{DV}
&=& \left|\frac{C_V}{T_P}\right|\frac{f_\rho F_1^{BD}(m_\rho^2)}
{f_D A_0^{B\rho}(m_D^2)}
= 0.50\pm 0.04 ~,
\end{eqnarray}
where $|C_V/T_P| =0.27 \pm 0.02$ as obtained from the $\chi^2$ analysis in
Scheme 3. In Ref.~\cite{Kim:2004hx}, it is estimated that $|a_2^{\rm
eff}/a_1^{\rm eff}|_{DV}=0.24-0.42$ at the 1~$\sigma$ level using the data of
the $B^-\to D^0\rho^-$, $\overline{B}^0 \to D^+ \rho^-$ and $\overline{B}^0 \to
D^0 \rho^0$ modes.
Finally, we summarize our findings in Fig.~\ref{fig:amps}. These diagrams are
constructed by taking the central values of the fitted parameters in each
category using Scheme 3. They illustrate the sizes and relative phases among
the tree, color-suppressed, and exchange amplitudes.
\begin{figure}[h]
\begin{center}
\includegraphics[height=3cm]{triDP3.eps} \\
(a) \\[0.5cm]
\includegraphics[height=3cm]{triDstP3.eps} \\
(b) \\[0.5cm]
\includegraphics[height=3cm]{triDV3.eps} \\
(c) \\[0.5cm]
\caption{Amplitude diagrams of (a): $DP$ decays; (b): $D^*P$ decays; and (c):
$DV$ decays.
\label{fig:amps}}
\end{center}
\end{figure}
\section{Conclusions \label{sec:summary}}
We have used the $\chi^2$ fit approach to re-analyze the two-body charmed $B$
meson decays in the flavor SU(3)-symmetric formalism, taking into account
different symmetry breaking schemes as well. In the $DP$ and $D^* P$ decays,
there is a significant improvement in the $\chi^2$ minimum between Scheme~1 and
Scheme~2, but not much between Scheme~2 and Scheme~3. This shows that the
major SU(3) breaking effect can be accounted for by the decay constant ratio
$f_K/f_\pi$, as demanded for example by naive factorization. The same feature,
however, is not observed in the $DV$ sector, where the corresponding decay
constant ratio is approximately one.
In our analysis, the fit results are generally consistent with those extracted
from individual modes. We have found that the color-suppressed amplitudes are
enhanced in the $DP$ and $D^*P$ sectors, but not in the $DV$ sector. This
strongly suggests that non-factorizable effects or final-state rescattering
effects cannot be neglected in the former two sectors.
In the $DV$ sector, it is observed that the Cabibbo-suppressed $D^0
\overline{K}^{*0}$ mode yields a $|C_V|$ that exceeds the upper bound
$\sqrt{|C_V|^2 + |E_V|^2}$ extracted from a combination of the $D^0 \rho^0$
and $D^0 \omega$ branching ratios at the $1.7~\sigma$ level; alternatively, $\xi_{C_V}$ should be greater than
about $1.17$. We urge the measurement of ${\cal B}(\overline{B}^0\to
D_s^+K^{*-})$ for a direct determination of the exchange amplitude, which may
provide a possible solution to this problem.
Finally, we note that the exchange diagrams are at least an order of magnitude
smaller than the dominant tree topologies in these decays. Consequently, it is
difficult to determine their phases, particularly in the $D^*P$ and $DV$
sectors, unless data precision can be significantly improved in the future.
\begin{acknowledgments}
C.-W.~C. would like to thank the hospitality of the National Center for
Theoretical Sciences in Taiwan and the Institute of Theoretical Physics at
Univ.\ of Oregon during his visit where part of this work was initiated and
carried out. This research was supported in part by the National Science
Council of Taiwan, R.O.C.\ under Grant No.\ NSC 95-2112-M-008-008.
\end{acknowledgments}
\section{$\mathbb{R}_{\textnormal{an,exp}}$-Definable Functions}
\subsection{Log-Analytic Functions and the Exponential \\ Number}
Compare with \cite{8}, Section 1 for a more detailed description of the content in this subsection.
\vs{0.2cm}
Let $m \in \mathbb{N}$ and $X \subset \mathbb{R}^m$ be definable.
\vs{0.3cm}
{\bf1.1 Definition}
\vs{0.1cm}
Let $f:X \to \mathbb{R}$ be a function.\index{log-analytic}
\begin{itemize}
\item [(a)] Let $r \in \mathbb{N}_0$. By induction on $r$ we define that $f$ is \textbf{log-analytic of order at most} $r$.
\vs{0.3cm}
\textbf{Base case}: The function $f$ is log-analytic of order at most $0$ if there is a decomposition $\mathcal{C}$ of $X$ into finitely many definable cells such that for $C \in \mathcal{C}$ there is a globally subanalytic function $F:\mathbb{R}^m \to \mathbb{R}$ such that $f|_C = F|_C$.
\vs{0.3cm}
\textbf{Inductive step}: The function $f$ is log-analytic of order at most $r$ if the following holds: There is a decomposition $\mathcal{C}$ of $X$ into finitely many definable cells such that for $C \in \mathcal{C}$ there are $k,l \in \mathbb{N}_{0}$, a globally subanalytic function $F:\mathbb{R}^{k+l} \to \mathbb{R}$, and log-analytic functions $g_1,...,g_k:C \to \mathbb{R}, h_1,...,h_l:C \to \mathbb{R}_{>0}$ of order at most $r-1$ such that
$$f|_C=F(g_1,...,g_k,\log(h_1),...,\log(h_l)).$$
\item[(b)] Let $r \in \mathbb{N}_0$. We call $f$ \textbf{log-analytic of order}\index{log-analytic} $r$ if $f$ is log-analytic of order at most $r$ but not of order at most $r-1$.
\item[(c)] We call $f$ \textbf{log-analytic}\index{log-analytic} if $f$ is log-analytic of order $r$ for some $r \in \mathbb{N}_0$.
\end{itemize}
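\vs{0.3cm}
For illustration, consider $X:=\{x \in \mathbb{R} \mid x>e\}$. Every globally subanalytic function, for example the identity $x \mapsto x$, is log-analytic of order at most $0$ on $X$. Applying the inductive step with $F(v)=v$ and $h_1(x)=x$ shows that $x \mapsto \log(x)$ is log-analytic of order at most $1$ on $X$, and applying it once more with $h_1(x)=\log(x)$, which is positive on $X$, shows that $x \mapsto \log(\log(x))$ is log-analytic of order at most $2$ on $X$.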
\vs{0.3cm}
{\bf1.2 Definition}
\vs{0.1cm}
Let $f:X \to \mathbb{R}$ be a function. Let $E$ be a set of positive definable functions on $X$.
\begin{itemize}
\item [(a)] By induction on $e \in \mathbb{N}_0$ we define that $f$ has \textbf{exponential number at most $e$ with respect to $E$}\index{exponential!number}.
\vs{0.2cm}
{\bf Base Case}: The function $f$ has exponential number at most $0$ with respect to $E$ if $f$ is log-analytic.
\vs{0.2cm}
{\bf Inductive Step}: The function $f$ has exponential number at most $e$ with respect to $E$ if the following holds: There are $k,l \in \mathbb{N}_0$, functions $g_1,...,g_k:X \to \mathbb{R}$ and $h_1,...,h_l:X \to \mathbb{R}$ with exponential number at most $e-1$ with respect to $E$ and a log-analytic function $F:\mathbb{R}^{k+l} \to \mathbb{R}$ such that
$$f=F(g_1,...,g_k,\exp(h_1),...,\exp(h_l))$$
and $\exp(h_1),...,\exp(h_l) \in E$.
\item [(b)] Let $e \in \mathbb{N}_0$. We say that $f$ has \textbf{exponential number $e$ with respect to $E$}\index{exponential!number} if $f$ has exponential number at most $e$ with respect to $E$ but not at most $e-1$ with respect to $E$.
\item [(c)] We say that $f$ \textbf{can be constructed from $E$}\index{constructed from} if there is $e \in \mathbb{N}_0$ such that $f$ has exponential number $e$ with respect to $E$.
\end{itemize}
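\vs{0.3cm}
For illustration, let $X:=\mathbb{R}$ and $E:=\{\exp\}$. The function $F:\mathbb{R} \to \mathbb{R}$ with $F(y)=\log(1+y)$ for $y>-1$ and $F(y)=0$ otherwise is log-analytic, and therefore $f:\mathbb{R} \to \mathbb{R}, x \mapsto \log(1+\exp(x))$, has exponential number at most $1$ with respect to $E$: take $l=1$ and $h_1(x)=x$ in the inductive step.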
\vs{0.5cm}
{\bf1.3 Remark}
\vs{0.1cm}
Let $e \in \mathbb{N}_0$. Let $E$ be a set of positive definable functions on $X$.
\begin{itemize}
\item[(1)] Let $f:X \to \mathbb{R}$ be a function with exponential number at most $e$ with respect to $E$. Then $\exp(f)$ has exponential number at most $e+1$ with respect to $E \cup \{\exp(f)\}$.
\item[(2)] Let $s \in \mathbb{N}_0$. Let $f_1,...,f_s:X \to \mathbb{R}$ be functions with exponential number at most $e$ with respect to $E$ and let $F:\mathbb{R}^s \to \mathbb{R}$ be log-analytic. Then $F(f_1,...,f_s)$ has exponential number at most $e$ with respect to $E$.
\end{itemize}
\vs{0.3cm}
{\bf1.4 Remark}
\vs{0.1cm}
Let $X_1,X_2 \subset \mathbb{R}^m$ be definable and disjoint. Let $X = X_1 \cup X_2$. For $j \in \{1,2\}$ let $E_j$ be a set of positive definable functions on $X_j$ and $f_j:X_j \to \mathbb{R}$ be a function. Let $e \in \mathbb{N}_0$ be such that $f_j$ has exponential number at most $e$ with respect to $E_j$ for $j \in \{1,2\}$. Let
$$E:=\{g \mid g:X \to \mathbb{R} \textnormal{ is a function with } g|_{X_j} \in E_j \textnormal{ for } j \in \{1,2\}\}.$$
Then
$$f:X \to \mathbb{R}, x \mapsto \left\{\begin{array}{ll} f_1(x) , & x \in X_1, \\
f_2(x), & x \in X_2, \end{array}\right.$$
has exponential number at most $e$ with respect to $E$.
\subsection{A Preparation Theorem for Log-Analytic Functions}
Compare with \cite{8}, Section 2 for a more detailed description of the content in this subsection.
\vs{0.2cm}
Let $n \in \mathbb{N}$. Let $t$ range over $\mathbb{R}^n$ and $x$ over $\mathbb{R}$. We fix a definable set $C \subset \mathbb{R}^n \times \mathbb{R}$.
\vs{0.5cm}
{\bf1.5 Definition} (Opris, \cite{8} Section 2.1)
\vs{0.1cm}
Let $r \in \mathbb{N}_0$. A tuple $\mathcal{Y}:=(y_0,...,y_r)$ of functions on $C$ is called an \textbf{$r$-logarithmic scale} on $C$ with \textbf{center} $\Theta=(\Theta_0,...,\Theta_r)$ if the following holds:
\begin{itemize}
\item[(a)] $y_j>0$ or $y_j<0$ for every $j \in \{0,...,r\}$.
\item[(b)] $\Theta_j$ is a definable function on $\pi(C)$ for every $j \in \{0,...,r\}$.
\item[(c)] It is $y_0(t,x)=x-\Theta_0(t)$ and for every $j \in \{1,...,r\}$ it is $y_j(t,x)=\log(\vert{y_{j-1}(t,x)}\vert) - \Theta_j(t)$ for all $(t,x) \in C$.
\item[(d)] Either there is $\epsilon_0 \in \textnormal{}]0,1[$ such that $0<\vert{y_0(t,x)}\vert < \epsilon_0\vert{x}\vert$ for all $(t,x) \in C$ or $\Theta_0=0$, and for every $j \in \{1,...,r\}$ either there is $\epsilon_j \in \textnormal{}]0,1[$ such that $0<\vert{y_j(t,x)}\vert<\epsilon_j\vert{\log(\vert{y_{j-1}(t,x)}\vert)}\vert$ for all $(t,x) \in C$ or $\Theta_j=0$.
\end{itemize}
\vs{0.2cm}
For a logarithmic scale $(y_0,...,y_r)$ on a definable set $C$ and $q \in \mathbb{R}^{r+1}$ we often write $\vert{\mathcal{Y}(t,x)}\vert^{\otimes q}$ instead of $\prod_{j=0}^r\vert{y_j(t,x)}\vert^{\otimes q_j}$ where $(t,x) \in C$.
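\vs{0.3cm}
For illustration, let $C:=\mathbb{R}^n \times \textnormal{}]0,e^{-1}[$. Then $\mathcal{Y}:=(y_0,y_1,y_2)$ with $y_0(t,x)=x$, $y_1(t,x)=\log(x)$ and $y_2(t,x)=\log(\vert{\log(x)}\vert)$ is a $2$-logarithmic scale on $C$ with center $\Theta=(0,0,0)$: we have $y_0>0$, $y_1<-1<0$ and $y_2>0$ on $C$, and condition (d) holds since all components of the center vanish.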
\vs{0.4cm}
{\bf1.6 Definition} (Opris, \cite{8} Section 2.3)
\vs{0.1cm}
We call $g:\pi(C) \to \mathbb{R}$ \textbf{$C$-heir} if there is $l \in \mathbb{N}_0$, an $l$-logarithmic scale $\hat{\mathcal{Y}}$ with center $(\hat{\Theta}_0,...,\hat{\Theta}_l)$ on $C$, and $j \in \{1,...,l\}$ such that $g=\exp(\hat{\Theta}_j)$.
\vs{0.5cm}
{\bf1.7 Definition} (Opris, \cite{8} Section 2.3)
\vs{0.1cm}
We call $g:\pi(C) \to \mathbb{R}$ \textbf{$C$-nice} if there is a set $E$ of $C$-heirs such that $g$ can be constructed from $E$.
\vs{0.5cm}
Note that the class of log-analytic functions on $\pi(C)$ is a proper subclass of the class of $C$-nice functions.
\vs{0.5cm}
{\bf1.8 Definition} (Opris, \cite{8} Section 2.2)
\vs{0.1cm}
Let $r \in \mathbb{N}_0$. Let $f:C \to \mathbb{R}$ be a function. We say that $f$ is \textbf{$r$-log-analytically prepared in $x$ with center $\Theta$} if
$$f(t,x)=a(t) \vert{\mathcal{Y}(t,x)}\vert^{\otimes q}u(t,x)$$
for all $(t,x) \in C$ where $a$ is a definable function on $\pi(C)$ which vanishes identically or has no zero, $\mathcal{Y}=(y_0,...,y_r)$ is an $r$-logarithmic scale with center $\Theta$ on $C$, $q \in \mathbb{Q}^{r+1}$ and the following holds for $u$. There is $s \in \mathbb{N}$ such that $u=v \circ \phi$ where $v$ is a power series which converges on an open neighbourhood of $[-1,1]^s$ with $v([-1,1]^s) \subset \mathbb{R}_{>0}$ and $\phi:=(\phi_1,...,\phi_s):C \to [-1,1]^s$ is a function of the form
$$\phi_j(t,x):=b_j(t)\vert{\mathcal{Y}(t,x)}\vert^{\otimes p_j}$$
for $j \in \{1,...,s\}$ and $(t,x) \in C$ where $b_j:\pi(C) \to \mathbb{R}$ is definable for $j \in \{1,...,s\}$ and $p_j:=(p_{j0},...,p_{jr}) \in \mathbb{Q}^{r+1}$.
We call $a$ \textbf{coefficient} and $b:=(b_1,...,b_s)$ a tuple of \textbf{base functions} for $f$. An \textbf{LA-preparing tuple} for $f$ is then
$$\mathcal{J}:=(r,\mathcal{Y},a,q,s,v,b,P)$$
where
$$P:=\left(\begin{array}{cccc}
p_{10}&\cdot&\cdot&p_{1r}\\
\cdot&& &\cdot\\
\cdot&& &\cdot\\
p_{s0}&\cdot&\cdot&p_{sr}\\
\end{array}\right)\in M\big(s\times (r+1),\mathbb{Q}).$$
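\vs{0.4cm}
For illustration, let $C:=\mathbb{R}_{>0} \times \textnormal{}]0,\frac{1}{2}[$ and $f:C \to \mathbb{R}, (t,x) \mapsto t\sqrt{x(1+x)}$. With the $0$-logarithmic scale $\mathcal{Y}:=(x)$ with center $(0)$ and the power series $v(w):=\sqrt{1+w/2}$, which converges on an open neighbourhood of $[-1,1]$ and is positive there, we have
$$f(t,x)=t\,x^{1/2}\,v(2x)$$
for $(t,x) \in C$. Hence $f$ is $0$-log-analytically prepared in $x$ with coefficient $a(t)=t$, exponent $q=\frac{1}{2}$, base function $b_1=2$ and $p_1=(1)$.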
\vs{0.5cm}
The following preparation theorem for log-analytic functions could be established in \cite{8}.
\vs{0.5cm}
{\bf1.9 Fact} (Opris, \cite{8}, Theorem A)
\vs{0.1cm}
{\it
Let $m \in \mathbb{N}$, $r \in \mathbb{N}_0$. Let $X \subset \mathbb{R}^n \times \mathbb{R}$ be definable. Let $f_1,...,f_m:X \to \mathbb{R}$ be log-analytic functions of order at most $r$. Then there is a definable cell decomposition $\mathcal{C}$ of $X_{\neq 0}$ such that $f_1|_C,...,f_m|_C$ are $r$-log-analytically prepared in $x$ with $C$-nice coefficient, $C$-nice base functions and common $C$-nice center for $C \in \mathcal{C}$.}
\section{Restricted Log-Exp-Analytic \\
Power Functions}
\subsection{Basic Facts and Definitions}
The main results of this paper are formulated in the parametric setting. So we set up the concept of restricted log-exp-analytic power functions in single variables.
\vs{0.4cm}
Let $t$ range over $\mathbb{R}^n$ and $x$ over $\mathbb{R}^m$. We fix definable sets $C,X \subset \mathbb{R}^n \times \mathbb{R}^m$ with $C \subset X$. Suppose that $X_t$ is open for every $t \in \mathbb{R}^n$. Let $\pi_n:\mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n, (t,x) \mapsto t$.
\vs{0.5cm}
{\bf2.1 Definition}
\vs{0.1cm}
We call a function $f:C \to \mathbb{R}$ a \textbf{definable step function} if there is a decomposition $\mathcal{D}$ of $C$ into finitely many definable cells such that for every $D \in \mathcal{D}$ we have that $f|_D$ is constant.
\vs{0.5cm}
{\bf2.2 Definition}
\vs{0.1cm}
Let $E$ be a set of positive definable functions on $C$. We call $E$ \textbf{$X$-digestible in $x$} if the following holds: Let $e \in \mathbb{N}_0$ and let $g \in \log(E)$ be with exponential number at most $e$ with respect to $E$. Then $g$ is locally bounded in $x$ with reference set $X$ (i.e. there is a definable function $\tilde{g}:X \to \mathbb{R}$ with $\tilde{g}|_C=g$ where $\tilde{g}_t$ is locally bounded for $t \in \pi_n(X)$) or there is a definable step function $\chi:C \to \mathbb{R}$ and a function $h:C \to \mathbb{R}_{>0}$ which has exponential number at most $e$ with respect to $E$ such that $g=\chi \log(h)$.
\vs{0.3cm}
{\bf2.3 Remark}
\vs{0.1cm}
Let $E$ be a set of positive definable functions on $C$. Let $Y \subset \mathbb{R}^n \times \mathbb{R}^m$ be definable with $X \subset Y$ such that $Y_t$ is open for every $t \in \mathbb{R}^n$. Let $E$ be $Y$-digestible in $x$. Then $E$ is $X$-digestible in $x$.
\vs{0.3cm}
{\bf Proof}
\vs{0.1cm}
This follows from the following fact. Let $g:C \to \mathbb{R}$ be locally bounded in $x$ with reference set $Y$. Then $g:C \to \mathbb{R}$ is locally bounded in $x$ with reference set $X$. \hfill$\blacksquare$
\vs{0.5cm}
{\bf2.4 Definition}
\vs{0.1cm}
Let $f:C \to \mathbb{R}$ be a function.
\begin{itemize}
\item [(a)]
Let $e \in \mathbb{N}_0$. We say that $f$ is a \textbf{restricted log-exp-analytic power function (restricted log-exp-analytic function) in $x$ of order (at most) $e$ with reference set $X$} if $f$ has exponential number (at most) $e$ with respect to a set $E$ of positive definable functions on $C$ which is $X$-digestible in $x$ (a set $E$ of positive locally bounded functions in $x$ with reference set $X$ on $C$).
\item [(b)]
We say that $f$ is a \textbf{restricted log-exp-analytic power function (restricted log-exp-analytic function) in $x$ with reference set $X$} if $f$ can be constructed from a set $E$ of positive definable functions on $C$ which is $X$-digestible in $x$ (from a set $E$ of positive locally bounded functions in $x$ with reference set $X$ on $C$), i.e. there is $e \in \mathbb{N}_0$ such that $f$ has exponential number $e$ with respect to such a set $E$.
\end{itemize}
Compare also with Definition 2.5 in \cite{7} for the definition of a restricted log-exp-analytic function. Compare with Section 2 in \cite{7} for elementary properties of this class of functions.
\vs{0.3cm}
{\bf2.5 Remark}
\begin{itemize}
\item [(1)]
The log-analytic functions are precisely the restricted log-exp-analytic power functions in $x$ of order (at most) $0$.
\item [(2)]
A restricted log-exp-analytic function $f:C \to \mathbb{R}$ in $x$ with reference set $X$ is a restricted log-exp-analytic power function in $x$ with reference set $X$.
\end{itemize}
\vs{0.3cm}
{\bf2.6 Example}
\vs{0.1cm}
Let $r \in \mathbb{R} \setminus \mathbb{Q}$. The irrational power function
$$h: \mathbb{R} \to \mathbb{R}, x \mapsto \left\{\begin{array}{ll} x^r, & x > 0, \\
0, & \textnormal{ else, } \end{array}\right.$$
is a restricted log-exp-analytic power function (of order (at most) $1$) in $x$ with reference set $\mathbb{R}$.
\vs{0.4cm}
{\bf Proof}
\vs{0.1cm}
This follows immediately from the fact that $h(x)=\exp(r \log(x))$ for $x \in \mathbb{R}_{>0}$ and $h(x)=0$ otherwise:
Let
\[
f:\mathbb{R} \to \mathbb{R}, x \mapsto \left\{\begin{array}{ll} \exp(r \log(x)), & x > 0, \\
1, & \text{else.} \end{array}\right.
\]
Let $E:=\{f\}$. Then $E$ is $\mathbb{R}$-digestible in $x$. Let
\[
g:\mathbb{R}^2 \to \mathbb{R}, (x_1,x_2) \mapsto \left\{\begin{array}{ll} x_1, & x_2 > 0, \\
0, & \text{else.} \end{array}\right.
\]
Then $g$ is log-analytic (and even globally subanalytic). Since $h(x)=g(f(x),x)$ for $x \in \mathbb{R}$ we see that $h$ is a restricted log-exp-analytic power function of order $1$ in $x$ with reference set $\mathbb{R}$ (note that $h$ is not log-analytic).
\hfill$\blacksquare$
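\vs{0.4cm}
The construction in this proof can also be checked numerically. The following minimal sketch (an illustration only; the function names mirror the proof) evaluates $h$ via the composition $g(f(x),x)$ and compares it with a direct evaluation of $x^r$.
\begin{verbatim}
import math

R = math.sqrt(2)  # an irrational exponent r

def f(x):
    # exp(r*log(x)) for x > 0, extended by 1 elsewhere (as in the proof)
    return math.exp(R * math.log(x)) if x > 0 else 1.0

def g(x1, x2):
    # globally subanalytic switch: x1 for x2 > 0, and 0 otherwise
    return x1 if x2 > 0 else 0.0

def h(x):
    # the irrational power function, realized as g(f(x), x)
    return g(f(x), x)

for x in (-1.0, 0.0, 0.5, 2.0):
    direct = x ** R if x > 0 else 0.0
    assert abs(h(x) - direct) < 1e-12
\end{verbatim}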
\vs{0.5cm}
{\bf2.7 Remark}
\vs{0.1cm}
Let $e \in \mathbb{N}_0$. Let $Y \subset \mathbb{R}^n \times \mathbb{R}^m$ be definable with $X \subset Y$ such that $Y_t$ is open for every $t \in \mathbb{R}^n$. Let $f:C \to \mathbb{R}$ be a restricted log-exp-analytic power function in $x$ of order at most $e$ with reference set $Y$. Then $f$ is a restricted log-exp-analytic power function in $x$ of order at most $e$ with reference set $X$.
\vs{0.3cm}
{\bf Proof}
\vs{0.1cm}
This is directly seen with Remark 2.3. \hfill$\blacksquare$
\vs{0.5cm}
{\bf2.8 Remark}
\vs{0.1cm}
Let $k \in \mathbb{N}$. For $j \in \{1,...,k\}$ let $f_j:C \to \mathbb{R}$ be a restricted log-exp-analytic power function in $x$ with reference set $X$. Let $F:\mathbb{R}^k \to \mathbb{R}$ be log-analytic. Then
$$C \to \mathbb{R}, (t,x) \mapsto F(f_1(t,x),...,f_k(t,x)),$$
is a restricted log-exp-analytic power function in $x$ with reference set $X$.
\vs{0.3cm}
{\bf Proof}
\vs{0.1cm}
Note that $f_j$ can be constructed from a set $E_j$ of positive definable functions which is $X$-digestible in $x$ for $j \in \{1,...,k\}$. Note that $E:=E_1 \cup ... \cup E_k$ is $X$-digestible in $x$ and that $f_j$ can be constructed from $E$ for $j \in \{1,...,k\}$. With Remark 1.8(2) from \cite{8} we are done. \hfill$\blacksquare$
\vs{0.5cm}
{\bf2.9 Remark}
\vs{0.1cm}
Let $C_1,C_2 \subset \mathbb{R}^n \times \mathbb{R}^m$ be disjoint and definable with $C_1 \cup C_2 = C$. For $j \in \{1,2\}$ let $f_j:C_j \to \mathbb{R}$ be a restricted log-exp-analytic power function in $x$ with reference set $X$. Then
$$f:C \to \mathbb{R}, (t,x) \mapsto \left\{\begin{array}{ll} f_1(t,x) , & (t,x) \in C_1, \\
f_2(t,x), & (t,x) \in C_2, \end{array}\right.$$
is a restricted log-exp-analytic power function in $x$ with reference set $X$.
\vs{0.4cm}
{\bf Proof}
\vs{0.1cm}
This follows directly with Remark 2.8 in \cite{7}. \hfill$\blacksquare$
\vs{0.5cm}
{\bf2.10 Definition}
\vs{0.1cm}
A function $f:X \to \mathbb{R}$ is called a \textbf{restricted log-exp-analytic power function in $x$} if $f$ is a restricted log-exp-analytic power function in $x$ with reference set $X$.
\vs{0.5cm}
{\bf2.11 Remark}
\vs{0.1cm}
Let $k \in \mathbb{N}_0$. Let $w:=(w_1,...,w_k)$ range over $\mathbb{R}^k$. Let $g:\mathbb{R}^k \to \mathbb{R}^m$ be log-analytic and continuous. Let
$$V:=\{(t,x,w) \in X \times \mathbb{R}^k \mid (t,x+g(w)) \in X\}.$$
Let $f:X \to \mathbb{R}, (t,x) \mapsto f(t,x),$ be a restricted log-exp-analytic power function in $x$.
Then $F:V \to \mathbb{R}, (t,x,w) \mapsto f(t,x+g(w)),$ is a restricted log-exp-analytic power function in $(x,w)$.
\vs{0.4cm}
{\bf Proof}
\vs{0.1cm}
Note that $V_t$ is open in $\mathbb{R}^m \times \mathbb{R}^k$ for every $t \in \mathbb{R}^n$. Let $E$ be $X$-digestible in $x$ such that $f$ can be constructed from $E$. Consider
$$\tilde{E}:=\{V \to \mathbb{R}_{>0}, (t,x,w) \mapsto h(t,x+g(w)) \mid h \in E\}.$$
Note that $F$ can be constructed from $\tilde{E}$. We show that $\tilde{E}$ is $V$-digestible in $(x,w)$ and we are done. Let $\alpha \in \tilde{E}$. Then there is $h \in E$ with $\alpha(t,x,w)=h(t,x+g(w))$ for $(t,x,w) \in V$. Let $e \in \mathbb{N}$ be such that $h$ has exponential number at most $e$ with respect to $E$.
\vs{0.2cm}
{\bf Case 1}: Let $h$ be locally bounded in $x$ with reference set $X$. Then $\alpha$ is locally bounded in $(x,w)$ by the claim in the proof of Remark 2.10 in \cite{7}.
\vs{0.2cm}
{\bf Case 2}: Let $\chi:X \to \mathbb{R}$ be a definable step function and $\beta:X \to \mathbb{R}_{>0}$ be with exponential number at most $e-1$ with respect to $E$ such that $h=\exp(\chi \log(\beta))$. We obtain $$\alpha(t,x,w)=h(t,x+g(w))=\exp(\chi(t,x+g(w))\log(\beta(t,x+g(w))))$$
for $(t,x,w) \in V$. Note that $V \to \mathbb{R}, (t,x,w) \mapsto \chi(t,x+g(w)),$ defines a definable step function and that $V \to \mathbb{R}_{>0}, (t,x,w) \mapsto \beta(t,x+g(w)),$ has exponential number at most $(e-1)$ with respect to $\tilde{E}$. This finishes the proof. \hfill$\blacksquare$
\subsection{A Preparation Theorem for Restricted \\
Log-Exp-Analytic Power Functions}
In this section we give a preparation theorem for restricted log-exp-analytic power functions. Our considerations start with Theorem B from \cite{8}.
\vs{0.2cm}
From Definition 2.12 to Proposition 2.14 let $n \in \mathbb{N}_0$, $m \in \mathbb{N}$, let $t$ range over $\mathbb{R}^n$ and $x$ over $\mathbb{R}^m$. Furthermore we fix definable sets $C,X \subset \mathbb{R}^n \times \mathbb{R}^m$ with $C \subset X$ such that $X_t$ is open for $t \in \mathbb{R}^n$.
\vs{0.5cm}
{\bf2.12 Definition}
\vs{0.1cm}
Let $f:C \to \mathbb{R}$ be definable. Suppose that $f(x)>0$ for every $x \in C$, $f(x)<0$ for every $x \in C$ or $f=0$. Then $f$ is \textbf{a finite product of powers of definable functions $g_1,...,g_k:C \to \mathbb{R}_{>0}$} for $k \in \mathbb{N}$ if there are $\lambda_1,...,\lambda_k \in \mathbb{R}$ and $\sigma \in \{-1,0,1\}$ such that $f=\sigma \prod_{j=1}^k g_j^{\lambda_j}$.
\vs{0.5cm}
{\bf2.13 Definition}
\vs{0.1cm}
Let $f:C \to \mathbb{R}, (t,x) \mapsto f(t,x),$ be a function. By induction on $e \in \mathbb{N}_0 \cup \{-1\}$ we define that $f$ is $(m,X)$-power-restricted $e$-prepared. To this preparation we assign a corresponding finite set of log-analytic functions $L$ on $C$ which ``occur'' in this preparation.
\vs{0.2cm}
$e=-1$: The function $f$ is $(m,X)$-power-restricted $(-1)$-prepared if $f$ is the zero function. It is $L:=\{0\}$.
\vs{0.2cm}
$e-1 \to e$: The function $f$ is $(m,X)$-power-restricted $e$-prepared if the following holds. There is $s \in \mathbb{N}$ such that
$$f(t,x)=a(t,x)\exp(c(t,x))v(b_1(t,x)\exp(d_1(t,x)),...,b_s(t,x)\exp(d_s(t,x)))$$
for $(t,x) \in C$ where $a,b_1,...,b_s$ are finite products of powers of log-analytic functions, $c,d_1,...,d_s$ are locally bounded in $x$ with reference set $X$ and are $(m,X)$-power-restricted $(e-1)$-prepared. Additionally we have $b_j(t,x)\exp(d_j(t,x)) \in [-1,1]$ for $(t,x) \in C$ and $v$ is a power series which converges on an open neighbourhood of $[-1,1]^s$ with $v([-1,1]^s) \subset \mathbb{R}_{>0}$. Suppose that for $c$ and $d_1,...,d_s$ corresponding sets of log-analytic functions $L_c,L_{d_1},...,L_{d_s}$ have already been defined. Let $b_0:=a$. For $j \in \{0,...,s\}$ let $\sigma_j \in \{-1,0,1\}$ and $\lambda_{j1},...,\lambda_{jk} \in \mathbb{R}$, $h_{j1},...,h_{jk}:C \to \mathbb{R}_{>0}$ be log-analytic with
$$b_j= \sigma_j \prod_{i=1}^k h_{ji}^{\lambda_{ji}}$$
where $k \in \mathbb{N}$. We set
$$L:=L_c \cup L_{d_1} \cup ... \cup L_{d_s} \cup \{h_{ji} \mid j \in \{0,...,s\}, i \in \{1,...,k\}\}.$$
\vs{0.4cm}
\textbf{Convention}
\vs{0.1cm}
For a set $E$ of positive definable functions on $X$ we say that $f:X \to \mathbb{R}$ has exponential number at most $-1$ with respect to $E$ if $f$ is the zero function.
\vs{0.5cm}
{\bf2.14 Proposition}
\vs{0.1cm}
{\it Let $e \in \mathbb{N}_0$. Let $f:X \to \mathbb{R}$ be a restricted log-exp-analytic power function in $x$ of order at most $e$. Then there is a decomposition $\mathcal{C}$ of $X$ into finitely many definable cells such that for every $C \in \mathcal{C}$ the function $f|_C$ is $(m,X)$-power-restricted $e$-prepared.}
\vs{0.3cm}
{\bf Proof}
\vs{0.1cm}
Let $e \in \mathbb{N}_0 \cup \{-1\}$ and $E$ be $X$-digestible in $x$ such that $f$ has exponential number at most $e$ with respect to $E$. We do an induction on $e$. For $e=-1$ the assertion is clear.
\vs{0.1cm}
$e-1 \to e$: There is a decomposition $\mathcal{D}$ of $X$ into finitely many definable cells such that for every $D \in \mathcal{D}$ there is $s \in \mathbb{N}$ such that
$$f(t,x)=a(t,x)\exp(d_0(t,x))v(b_1(t,x)\exp(d_1(t,x)),...,b_s(t,x)\exp(d_s(t,x)))$$
for $(t,x) \in D$ where $a,b_1,...,b_s:D \to \mathbb{R}$ are log-analytic and $d_0,d_1,...,d_s:D \to \mathbb{R}$ are finite $\mathbb{Q}$-linear combinations of functions from $\log(E)$ which have exponential number at most $(e-1)$ with respect to $E$. Additionally $b_j(t,x)\exp(d_j(t,x)) \in [-1,1]$ for $(t,x) \in D$ and $v$ is a power series which converges absolutely on an open neighbourhood of $[-1,1]^s$ with $v([-1,1]^s) \subset \mathbb{R}_{>0}$ (compare with theorem B in \cite{8}). Fix $D \in \mathcal{D}$ with the corresponding preparation for $f|_D$. Note that there are locally bounded $\delta_0,...,\delta_s:D \to \mathbb{R}$ in $x$ with reference set $X$ which have exponential number at most $(e-1)$ with respect to $E$ and functions $\eta_0,...,\eta_s:D \to \mathbb{R}$ such that $d_j=\delta_j+\eta_j$ for $j \in \{0,...,s\}$ and the following holds. There are $k \in \mathbb{N}$ and definable step functions $\chi_{j1},...,\chi_{jk}:D \to \mathbb{R}$ and positive functions $\beta_{j1},...,\beta_{jk}:D \to \mathbb{R}_{>0}$ which have exponential number at most $e-1$ with respect to $E$ such that $\eta_j = \sum_{i=1}^k \chi_{ji} \log(\beta_{ji})$. So we obtain
$$f|_D=a(\prod_{i=1}^k\beta_{0i}^{\chi_{0i}}) \exp(\delta_0)v(b_1(\prod_{i=1}^k\beta_{1i}^{\chi_{1i}})\exp(\delta_1),...,b_s(\prod_{i=1}^k\beta_{si}^{\chi_{si}})\exp(\delta_s)).$$
Now we use the inductive hypothesis on $\beta_{ji}$ and find a decomposition $\mathcal{A}$ of $D$ into finitely many definable cells such that for every $A \in \mathcal{A}$, $j \in \{0,...,s\}$ and $i \in \{1,...,k\}$ the following holds. We have $\chi_{ji}|_A=c_{j,i,A}$ for some $c_{j,i,A} \in \mathbb{R}$ and $\beta_{ji}|_A$ is $(m,X)$-power-restricted $(e-1)$-prepared, i.e. for $(t,x) \in A$
$$\beta_{ji}(t,x)=\hat{a}_{ji}(t,x)\exp(\nu_{j,i,0}(t,x)) \cdot $$ $$\hat{v}_{ji}(\hat{b}_{j,i,1}(t,x)\exp(\nu_{j,i,1}(t,x)),...,\hat{b}_{j,i,\hat{s}}(t,x)\exp(\nu_{j,i,\hat{s}}(t,x)))$$
\vs{0.2cm}
where $\nu_{j,i,0},...,\nu_{j,i,\hat{s}}:A \to \mathbb{R}$ are locally bounded in $x$ with reference set $X$, the functions $\hat{a}_{ji},\hat{b}_{j,i,1},...,\hat{b}_{j,i,\hat{s}}:A \to \mathbb{R}$ are finite products of powers of log-analytic functions and $\hat{v}_{ji}$ is a power series which converges absolutely on an open neighbourhood of $[-1,1]^{\hat{s}}$ with $\hat{v}_{ji}([-1,1]^{\hat{s}}) \subset \mathbb{R}_{>0}$. (By redefining the individual $\hat{v}_{ji}$ we may assume that $\hat{s}$ depends on neither $j$ nor $i$.) Note also that $\hat{a}_{ji}$ is positive.
\vs{0.2cm}
Fix $A \in \mathcal{A}$ and the corresponding preparation for $\beta_{ji}|_A$. Let $$\gamma_{ji}:=\hat{v}_{ji}(\hat{b}_{j,i,1}\exp(\nu_{j,i,1}),...,\hat{b}_{j,i,\hat{s}}\exp(\nu_{j,i,\hat{s}})).$$
For $j \in \{0,...,s\}$ let
$$\omega_j:=\prod_{i=1}^k \gamma_{ji}^{c_{j,i,A}}, \textnormal{ } \kappa_j:=\sum_{i=1}^k c_{j,i,A}\nu_{j,i,0}, \textnormal{ } \mu_j := \prod_{i=1}^{k}\hat{a}_{ji}^{c_{j,i,A}}.$$
Note that $\hat{v}_{ji}^{c_{j,i,A}}$ is a power series which converges absolutely on an open neighbourhood of $[-1,1]^{\hat{s}}$ with $\hat{v}_{ji}^{c_{j,i,A}}([-1,1]^{\hat{s}}) \subset \mathbb{R}_{>0}$ (by using the exponential series, the logarithmic series and the fact that $\hat{v}_{ji}^{c_{j,i,A}} = \exp(c_{j,i,A}\log(\hat{v}_{ji}))$). We obtain
$$f|_A=a\mu_0e^{\delta_0+\kappa_0}\omega_0v(b_1\mu_1e^{\delta_1+\kappa_1}\omega_1,...,b_s\mu_se^{\delta_s+\kappa_s}\omega_s).$$
Note that $a\mu_0$ and $b_j\mu_j$ for $j \in \{1,...,s\}$ are finite products of powers of log-analytic functions. Additionally $\delta_j+\kappa_j$ is locally bounded in $x$ with reference set $X$ and has exponential number at most $e-1$ with respect to $E$ for $j \in \{0,...,s\}$. So by the inductive hypothesis on $\delta_j+\kappa_j$ for $j \in \{0,...,s\}$ we find a decomposition $\mathcal{B}$ of $A$ into finitely many definable cells such that for every $B \in \mathcal{B}$ we have that $(\delta_j+\kappa_j)|_B$ is $(m,X)$-power-restricted $(e-1)$-prepared for $j \in \{0,...,s\}$. We are done by composition of power series. \hfill$\blacksquare$
\vs{0.5cm}
For the rest of Section 2.2 we fix the following setting. Let $n \in \mathbb{N}_0$ and $m,l \in \mathbb{N}_0$ be such that $n=l+m$. Let $w$ range over $\mathbb{R}^l$, $u$ over $\mathbb{R}^m$ and $x$ over $\mathbb{R}$. Here $(u_1,...,u_m,x)$ serves as the tuple of independent variables of families of functions parameterized by $w:=(w_1,...,w_l)$. Let $t:=(w,u)$ and let $\pi:\mathbb{R}^{n+1} \to \mathbb{R}^n, (t,x) \mapsto t$. Fix definable sets $C,X \subset \mathbb{R}^n \times \mathbb{R}$ with $C \subset X$ such that $X_w$ is open for every $w \in \mathbb{R}^l$.
\vs{0.5cm}
{\bf2.15 Definition}
\vs{0.1cm}
Let $e \in \mathbb{N}_0 \cup \{-1\}$ and $r \in \mathbb{N}_0$. By induction on $e \in \mathbb{N}_0 \cup \{-1\}$ we define that $f:C \to \mathbb{R}, (w,u,x) \mapsto f(w,u,x),$ is \textbf{$(m+1,X)$-power-restricted $(e,r)$-prepared in $x$} and assign a \textbf{preparing tuple} to this preparation.
\vs{0.2cm}
$e=-1$: We call $f$ $(m+1,X)$-power-restricted $(-1,r)$-prepared in $x$ if $f$ is the zero function. A preparing tuple for $f$ is then $(0)$.
\vs{0.2cm}
$e-1 \to e$: We call $f$ $(m+1,X)$-power-restricted $(e,r)$-prepared in $x$ if for $(t,x) \in C$
$$f(t,x)=a(t)\vert{\mathcal{Y}(t,x)}\vert^{\otimes q} \exp(d_0(t,x))u(t,x)$$
where $\mathcal{Y}$ is an $r$-logarithmic scale with center $\Theta$ on $C$, $a:\pi(C) \to \mathbb{R}$ is a finite product of powers of $C$-nice functions, $q \in \mathbb{R}^{r+1}$, and $d_0:C \to \mathbb{R}$ is locally bounded in $(u,x)$ with reference set $X$ and is $(m+1,X)$-power-restricted $(e-1,r)$-prepared in $x$. Additionally $u:C \to \mathbb{R}$ is of the following form. There is $s \in \mathbb{N}$ such that $u=v \circ \phi$ where $\phi:=(\phi_1,...,\phi_s):C \to [-1,1]^s$ with
$$\phi_j(t,x)=b_j(t)\vert{\mathcal{Y}(t,x)}\vert^{\otimes p_j}\exp(d_j(t,x))$$
for $j \in \{1,...,s\}$ where $p_j \in \mathbb{R}^{r+1}$, $b_j:\pi(C) \to \mathbb{R}$ is a finite product of powers of $C$-nice functions, $d_j:C \to \mathbb{R}$ is $(m+1,X)$-power-restricted $(e-1,r)$-prepared in $x$ and locally bounded in $(u,x)$ with reference set $X$ and $v$ is a power series which converges absolutely on an open neighbourhood of $[-1,1]^s$ with $v([-1,1]^s) \subset \mathbb{R}_{>0}$. A preparing tuple for $f$ is then
$$(r,\mathcal{Y},a,\exp(d_0),q,s,v,b,\exp(d),P)$$
with $b:=(b_1,...,b_s)$, $\exp(d):=(\exp(d_1),...,\exp(d_s))$ and
$$P:=\left(\begin{array}{cccc}
p_{10}&\cdot&\cdot&p_{1r}\\
\cdot&& &\cdot\\
\cdot&& &\cdot\\
p_{s0}&\cdot&\cdot&p_{sr}\\
\end{array}\right)\in M\big(s\times (r+1),\mathbb{R}).$$
\vs{0.5cm}
A full preparation theorem for restricted log-exp-analytic power functions in $(u,x)$ is the following.
\vs{0.5cm}
{\bf2.16 Proposition}
\vs{0.1cm}
{\it
Let $e \in \mathbb{N}_0$. Let $f:X \to \mathbb{R}, (w,u,x) \mapsto f(w,u,x),$ be a restricted log-exp-analytic power function in $(u,x)$ of order at most $e$. Then there is $r \in \mathbb{N}_0$ and a definable cell decomposition $\mathcal{C}$ of $X_{\neq 0}$ such that for every $C \in \mathcal{C}$ the function $f|_C$ is $(m+1,X)$-power-restricted $(e,r)$-prepared in $x$.}
\vs{0.4cm}
{\bf Proof}
\vs{0.1cm}
By Proposition 2.14 there is a decomposition $\mathcal{D}$ of $X$ into finitely many definable cells such that for every $D \in \mathcal{D}$ we have that $f|_D$ is $(m+1,X)$-power-restricted $e$-prepared. Fix $D \in \mathcal{D}$ and a corresponding finite set $L$ of log-analytic functions on $D$ from Definition 2.13. Let $L:=\{l_1,...,l_\kappa\}$ for $\kappa \in \mathbb{N}$. By Fact 1.9 there is a decomposition $\mathcal{C}$ of $D_{\neq 0}$ into finitely many definable cells such that for every $C \in \mathcal{C}$ the functions $l_1,...,l_\kappa$ are $r$-log-analytically prepared in $x$ with $C$-nice coefficient, $C$-nice base functions and common $C$-nice center. Fix $C \in \mathcal{C}$ and the corresponding center $\Theta:=(\Theta_0,...,\Theta_r)$ for this preparation. We do an induction on $e$. For $e=-1$ there is nothing to show.
\vs{0.2cm}
$e-1 \to e$:
We have
$$f|_C=\sigma_{b_0}b_0\exp(d_0)v(\sigma_{b_1}b_1\exp(d_1),...,\sigma_{b_s}b_s\exp(d_s))$$
where $\sigma_{b_0},...,\sigma_{b_s} \in \{-1,0,1\}$, $b_j=\prod_{i=1}^k h_{ji}^{\lambda_{ji}}$ with $k \in \mathbb{N}$, $\lambda_{ji} \in \mathbb{R}$ and $h_{ji}$ is a positive log-analytic function on $C$ with $h_{ji} \in L$ for $i \in \{1,...,k\}$ and $j \in \{0,...,s\}$. Additionally $d_0,...,d_s:C \to \mathbb{R}$ are locally bounded in $x$ with reference set $X$ and are $(m,X)$-power-restricted $(e-1)$-prepared. We have that $\sigma_{b_j}b_j(t,x)\exp(d_j(t,x)) \in [-1,1]$ for $(t,x) \in C$ and the function $v:[-1,1]^s \to \mathbb{R}$ is a power series which converges absolutely on an open neighbourhood of $[-1,1]^s$ with $v([-1,1]^s) \subset \mathbb{R}_{>0}$. By the inductive hypothesis we have that
$d_0,...,d_s$ are $(m+1,X)$-power-restricted $(e-1,r)$-prepared in $x$ with center $\Theta$. Let $j \in \{0,...,s\}$. Since $h_{ji}$ is $r$-log-analytically prepared in $x$ with $C$-nice coefficient, base functions and center $\Theta$ for $i \in \{1,...,k\}$ one sees immediately that
$$b_j(t,x)=\hat{a}(t)\vert{\mathcal{Y}(t,x)}\vert^{\otimes q} \hat{v}(\hat{b}_1(t)\vert{\mathcal{Y}(t,x)}\vert^{\otimes p_1},..., \hat{b}_{\hat{s}}(t)\vert{\mathcal{Y}(t,x)}\vert^{\otimes p_{\hat{s}}})$$
for $q \in \mathbb{R}^{r+1}$, $p_i \in \mathbb{Q}^{r+1}$ and $\hat{a}:\pi(C) \to \mathbb{R}$ is a finite product of powers of $C$-nice functions, $\hat{b}_1,...,\hat{b}_{\hat{s}}:\pi(C) \to \mathbb{R}$ are $C$-nice functions and $\hat{v}:[-1,1]^{\hat{s}} \to \mathbb{R}$ is a power series which converges absolutely on an open neighbourhood of $[-1,1]^{\hat{s}}$ with $\hat{v}([-1,1]^{\hat{s}}) \subset \mathbb{R}_{>0}$. We are done with composition of power series. \hfill$\blacksquare$
\vs{0.5cm}
{\bf2.17 Definition}
\vs{0.1cm}
Let $e \in \mathbb{N}_0 \cup \{-1\}$ and $r \in \mathbb{N}_0$. By induction on $e \in \mathbb{N}_0 \cup \{-1\}$ we define that $f:C \to \mathbb{R}, (w,u,x) \mapsto f(w,u,x),$ is \textbf{purely $(m+1,X)$-power-restricted $(e,r)$-prepared in $x$} and assign a \textbf{purely preparing tuple} to this preparation.
\vs{0.2cm}
$e=-1$: We call $f$ purely $(m+1,X)$-power-restricted $(-1,r)$-prepared in $x$ if $f$ is the zero function. A purely preparing tuple for $f$ is then $(0)$.
\vs{0.2cm}
$e-1 \to e$: We call $f$ purely $(m+1,X)$-power-restricted $(e,r)$-prepared in $x$ if for $(t,x) \in C$
$$f(t,x)=a(t)\vert{\mathcal{Y}(t,x)}\vert^{\otimes q} \exp(d_0(t,x)) \cdot u(t,x)$$
where $\mathcal{Y}$ is an $r$-logarithmic scale with center $\Theta$ on $C$, $a:\pi(C) \to \mathbb{R}$ is a finite product of powers of log-analytic functions, $q \in \mathbb{R}^{r+1}$, $d_0:C \to \mathbb{R}$ is locally bounded in $(u,x)$ with reference set $X$ and is purely $(m+1,X)$-power-restricted $(e-1,r)$-prepared in $x$ and $u$ is a function on $C$ of the following form. There is $s \in \mathbb{N}$ such that $u=v \circ \phi$ where $\phi:=(\phi_1,...,\phi_s):C \to [-1,1]^s$ with
$$\phi_j(t,x)=b_j(t)\vert{\mathcal{Y}(t,x)}\vert^{\otimes p_j}\exp(d_j(t,x))$$
for $j \in \{1,...,s\}$ where $b_1,...,b_s:\pi(C) \to \mathbb{R}$ are finite products of powers of log-analytic functions, $d_1,...,d_s:C \to \mathbb{R}$ are locally bounded in $(u,x)$ with reference set $X$ and are purely $(m+1,X)$-power-restricted $(e-1,r)$-prepared in $x$, $p_j:=(p_{j0},...,p_{jr}) \in \mathbb{R}^{r+1}$ and $v$ is a power series on $[-1,1]^s$ which converges absolutely on an open neighbourhood of $[-1,1]^s$ and fulfills $v([-1,1]^s) \subset \mathbb{R}_{>0}$. A purely preparing tuple for $f$ is then
$$(r,\mathcal{Y},a,\exp(d_0),q,s,v,b,\exp(d),P)$$
with $b:=(b_1,...,b_s)$, $\exp(d):=(\exp(d_1),...,\exp(d_s))$ and
$$P:=\left(\begin{array}{cccc}
p_{10}&\cdot&\cdot&p_{1r}\\
\cdot&& &\cdot\\
\cdot&& &\cdot\\
p_{s0}&\cdot&\cdot&p_{sr}\\
\end{array}\right)\in M\big(s\times (r+1),\mathbb{R}).$$
\vs{0.2cm}
For Corollary 2.19 and Proposition 2.20 one needs the notion of simple cells (compare also with Definition 3.4 and 3.7 in \cite{7}).
\vs{0.4cm}
{\bf2.18 Definition}
\vs{0.1cm}
Let $C \subset \mathbb{R}^n \times \mathbb{R}_{\neq 0}$ be a definable cell. We call $C$ \textbf{simple} if for every $t \in \pi(C)$ we have $C_t=]0,d_t[$ for some $d_t > 0$ (possibly depending on $t$).
\vs{0.5cm}
{\bf2.19 Corollary}
\vs{0.1cm}
{\it
Let $f:X \to \mathbb{R}, (w,u,x) \mapsto f(w,u,x),$ be a restricted log-exp-analytic power function in $(u,x)$. Then there are $r \in \mathbb{N}_0$, $e \in \mathbb{N}_0 \cup \{-1\}$ and a definable cell decomposition $\mathcal{C}$ of $X$ such that for every simple $C \in \mathcal{C}$ the function $f|_C$ is purely $(m+1,X)$-power-restricted $(e,r)$-prepared in $x$ with center $(0)$.}
\vs{0.4cm}
{\bf Proof}
\vs{0.1cm}
By Proposition 2.16 there are $r \in \mathbb{N}_0$, $e \in \mathbb{N}_0 \cup \{-1\}$ and a definable cell decomposition $\mathcal{C}$ of $X$ such that for every $C \in \mathcal{C}$ the function $f|_C$ is $(m+1,X)$-power-restricted $(e,r)$-prepared in $x$. Fix a simple $C \in \mathcal{C}$. We show by induction on $l \in \{-1,...,e\}$ that every function on $C$ which is $(m+1,X)$-power-restricted $(l,r)$-prepared in $x$ is purely $(m+1,X)$-power-restricted $(l,r)$-prepared in $x$ with center $(0)$; applied to $f|_C$ with $l=e$ this gives the claim. For $l=-1$ there is nothing to show.
\vs{0.2cm}
$l-1 \to l$: Let
$$(r,\mathcal{Y},a,\exp(c),q,s,v,b,\exp(d),P)$$
be a preparing tuple for such a function $f$ with $b:=(b_1,...,b_s)$ and $\exp(d):=(\exp(d_1),...,\exp(d_s))$. Note that $\hat{\Theta}=0$ for every center $\hat{\Theta}$ of a $k$-logarithmic scale on $C$ (where $k \in \mathbb{N}_0$); compare with Fact 3.6 in \cite{7} and, for a proof of this fact, with Proposition 2.19 in \cite{5}. Consequently every $C$-nice function on $\pi(C)$ is log-analytic and the center $\Theta$ of $\mathcal{Y}$ vanishes. So we have that $a$ and $b_1,...,b_s$ are finite products of powers of log-analytic functions on $\pi(C)$. So we see that $f$ is purely $(m+1,X)$-power-restricted $(l,r)$-prepared in $x$ with center $(0)$ by the inductive hypothesis and we are done.
\hfill$\blacksquare$
\vs{0.4cm}
A consequence of this preparation theorem is the following version of Proposition 3.16 from \cite{7} which shows that a restricted log-exp-analytic power function $f:X \to \mathbb{R}, (w,u,x) \mapsto f(w,u,x),$ can be log-analytically prepared in $x$ on simple cells with special coefficient and base functions if $0$ is an interior point of $X_t$ for every $t \in \pi(X)$.
\vs{0.5cm}
{\bf2.20 Proposition}
\vs{0.1cm}
{\it
Suppose that $0$ is an interior point of $X_t$ for every $t \in \pi(X)$. Let $f:X \to \mathbb{R}$ be a restricted log-exp-analytic power function in $(u,x)$. Then there is $r \in \mathbb{N}_0$ and a definable cell decomposition $\mathcal{C}$ of $X$ such that for every simple $C \in \mathcal{C}$ the following holds. The function $f|_C$ is $r$-log-analytically prepared in $x$ with preparing tuple
$$(r,\mathcal{Y},a,q,s,v,b,P)$$
where $\mathcal{Y}$ is an $r$-logarithmic scale with center $(0)$, $P \in M(s \times (r+1),\mathbb{R})$,
$b:=(b_1,...,b_s)$ and $a,b_1,...,b_s$ are restricted log-exp-analytic power functions in $u$ with reference set $\pi(X)$.}
\vs{0.4cm}
{\bf Proof}
\vs{0.1cm}
Compare with the proof of Proposition 3.16 in \cite{7}. \hfill$\blacksquare$
\section{Differentiability Properties of Restricted \\
Log-Exp-Analytic Power Functions}
Starting from Proposition 2.20 we can give some differentiability properties of restricted log-exp-analytic power functions: everything from Proposition 3.17 in \cite{7} can be formulated and proven for this class of functions. Here we give versions of Theorem A, Theorem B and Theorem C from \cite{7} for restricted log-exp-analytic power functions, which are also generalizations of the results from \cite{3}. Since this article is a supplement to \cite{7}, we do not give proofs here.
\vs{0.3cm}
For this section we fix $n,m \in \mathbb{N}$. Let $t$ range over $\mathbb{R}^n$ and $x$ over $\mathbb{R}^m$. Let $X \subset \mathbb{R}^n \times \mathbb{R}^m$ be definable such that $X_t$ is open for every $t \in \mathbb{R}^n$.
\vs{0.5cm}
{\bf 3.1 Theorem} (Closedness under taking derivatives)
\vs{0.1cm}
{\it Let $x:=(x_1,...,x_m)$. Let $f:X \to \mathbb{R}, (t,x) \mapsto f(t,x),$ be a restricted log-exp-analytic power function in $x$. Suppose that $f_t$ is differentiable with respect to $x_m$ on $X_t$ for every $t \in \mathbb{R}^n$. Then $\partial f/\partial x_m$ is a restricted log-exp-analytic power function in $x$.}
\vs{0.5cm}
{\bf3.2 Theorem} (Strong quasianalyticity)
\vs{0.1cm}
{\it Let $X \subset \mathbb{R}^n \times \mathbb{R}^m$ be definable such that $X_t$ is open and connected for every $t \in \mathbb{R}^n$. Let $f:X \to \mathbb{R}, (t,x) \mapsto f(t,x),$ be a restricted log-exp-analytic power function in $x$. Then there is $N \in \mathbb{N}$ with the following property. If for $t \in \mathbb{R}^n$ the function $f_t$ is $C^N$ and if there is $a \in X_t$ such that all derivatives of $f_t$ up to order $N$ vanish at $a$, then $f_t$ vanishes identically.}
\vs{0.5cm}
{\bf 3.3 Theorem} (Tamm's theorem)
\vs{0.1cm}
{\it Let $f:X \to \mathbb{R}, (t,x) \mapsto f(t,x),$ be a restricted log-exp-analytic power function in $x$. Then there is $N \in \mathbb{N}$ such that for all $t \in \mathbb{R}^n$ and all $x \in X_t$, if $f(t,-)$ is $C^N$ at $x$ then $f(t,-)$ is real analytic at $x$.}
\vs{0.5cm}
{\bf3.4 Corollary}
\vs{0.1cm}
{\it Let $f:X \to \mathbb{R}, (t,x) \mapsto f(t,x),$ be a restricted log-exp-analytic power function in $x$. Then the set of all $(t,x) \in X$ such that $f(t,-)$ is real analytic at $x$ is definable.}
\vs{0.5cm}
{\bf3.5 Remark}
\vs{0.1cm}
The function
$$f:\mathbb{R} \to \mathbb{R}, x \mapsto \left\{\begin{array}{cc}
e^{-\frac{1}{x}},&x>0,\\
0,& x \leq 0,
\end{array}\right.$$
is not a restricted log-exp-analytic power function in $x$.
\vs{0.3cm}
{\bf Proof}
\vs{0.1cm}
Note that $f$ is flat at $0$, but not the zero function. So $f$ is not strongly quasianalytic. Furthermore, $f$ is $C^\infty$ at $0$, but not real analytic there. So we see with Theorem 3.2 and Theorem 3.3, respectively, that $f$ is not a restricted log-exp-analytic power function in $x$. \hfill$\blacksquare$
\vs{0.4cm}
\section{Introduction}\label{sec:intro}
Video synthesis is an open and challenging problem in computer vision. As the literature suggests, a deeper understanding of the spatio-temporal behavior of video frame sequences can directly provide insights into choosing priors, future prediction, and feature learning \cite{vondrick2016generating, p2pvg2019}. Much progress has been made in developing a variety of ways to generate videos, which can be broadly classified into two categories: video generation methods that require only random latent vectors without any reference input pixels \cite{vondrick2016generating, saito2017temporal, tulyakov2018mocogan}, and video generation methods that depend on reference input pixels \cite{p2pvg2019, he2018probabilistic, siarohin2019animating}. The current literature mostly contains methods from the second class, which often require some human intervention \cite{p2pvg2019, he2018probabilistic}.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{figures/First_fig.pdf}
\caption{\textbf{Comparison of proposed non-adversarial approach to one representative adversarial approach (MoCoGAN~\cite{tulyakov2018mocogan}) on the Chair-CAD~\cite{aubry2014seeing} dataset.} \textit{Top}: MoCoGAN often generates blurry frames including similar type of chairs for different videos as the time step increases. \textit{Bottom}: Our approach\protect\footnotemark, on the other hand, generates relatively sharper frames, maintaining consistency with the type of chairs unique to each video in the dataset.}
\label{fig:first_fig}
\vspace{-1em}
\end{figure}
\footnotetext{Project page: \textcolor{blue}{https://abhishekaich27.github.io/navsynth.html}}
In general, Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative}
have shown remarkable success in various kinds of video modality problems \cite{liang2017dual, kwon2019predicting, lotter2016deep, saito2017temporal}. Initially, video generation frameworks predominantly used GANs to synthesize videos from latent noise vectors. For example, VGAN \cite{vondrick2016generating} and TGAN \cite{saito2017temporal} proposed generative models that synthesize videos from random latent vectors with deep convolutional GANs. Recently, MoCoGAN \cite{tulyakov2018mocogan} proposed to decompose a video into content and motion parts using a generator guided by two discriminators. During testing, these frameworks generate videos that lie in the range of the trained generator by taking random latent vectors as input. While all these methods have obtained reasonable performance on commonly used benchmark datasets, they utilize adversarial learning to train their models and hence inherit the shortcomings of GANs. Specifically, GANs are often very sensitive to multiple factors such as random network initialization and the type of layers employed to build the network \cite{li2017towards, salimans2016improved}. Some infamous drawbacks of GANs are mode collapse (i.e., being able to generate only some parts of the data distribution; see Fig.~\ref{fig:first_fig} for an example) and/or vanishing generator gradients due to the discriminator becoming far better at distinguishing fake samples from real ones \cite{arjovsky2017wasserstein}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.87\textwidth]{figures/overview_fig.pdf}
\caption{\textbf{Overview of the proposed method.} Videos can be broken down into two main parts: static and transient components. To capture this, we map a video (with an \textit{L}-frame sequence) into two learnable latent spaces. We jointly learn the static latent space and the transient latent space along with the network weights. We then use these learned latent spaces to generate videos at inference time. See Sec.~\ref{sec:prop_method} for more details.}
\label{fig:intro}
\vspace{-1em}
\end{figure*}
Non-adversarial approaches \cite{bojanowski2018optimizing, li2018implicit, hoshen2019non} have recently been explored to tackle these challenges. For example, Generative Latent Optimization (GLO) \cite{bojanowski2018optimizing} and Generative Latent Nearest Neighbor (GLANN) \cite{hoshen2019non} investigate the importance of inductive bias in convolutional networks by disconnecting the discriminator for a non-adversarial learning protocol of GANs. These works show that, without a discriminator, a generator can be trained to map the training images in the given data distribution to a lower-dimensional latent space that is learned in conjunction with the weights of the generative network. Such a procedure not only avoids the mode-collapse problem of the generators, but also provides the user an optimized low-dimensional latent representation (embedding) of the data, in contrast with the random latent space as in GANs. Recently, Video-VAE \cite{he2018probabilistic} proposed to use a Variational Auto-Encoder (VAE) for conditional video synthesis, either by randomly generating or by providing the first frame to the model for synthesizing a video. However, the quality of generated videos using Video-VAE often depends on the provided input frame. Non-adversarial video synthesis without any visual inputs still remains a novel and rarely addressed problem.
\emph{In this paper}, we propose a novel non-adversarial framework to generate videos in a controllable manner without any reference frame. Specifically, we propose to synthesize videos from two optimized latent spaces, one providing control over the static portion of the video (\textit{static latent space}) and the other over the transient portion of the video (\textit{transient latent space}). We propose to jointly optimize these two spaces while optimizing the network (a generative and a recurrent network) weights with the help of regression-based reconstruction loss and a triplet loss.
Our approach works as follows. During training, we jointly optimize over network weights and latent spaces (both static and transient) and obtain a common transient latent space and an individual static latent space dictionary for all videos sharing the same class (see Fig.~\ref{fig:intro}). During testing, we randomly choose a static vector from the dictionary, concatenate it with the transient latent vector and generate a video. This enables us to obtain a controlled environment for diverse video generation from learned latent vectors for each video in the given dataset, while maintaining almost uniform quality. In addition, the proposed approach also allows a concise video data representation in the form of learned vectors, frame interpolation (using a low-rank constraint introduced in \cite{hyder2020low}), and generation of videos unseen during the learning paradigm.
The key contributions of our work are as follows.
\begin{itemize}[leftmargin=*]
\item We propose a novel framework for generating a wide range of diverse videos from learned latent vectors, without any conditional input reference frame, with almost uniform visual quality. Our framework obtains a latent space dictionary on both static and transient portions for the training video dataset, which enables us to generate even unseen videos with almost equal quality by providing combinations of static and transient latent vectors that were not part of the training data.
\item Our extensive experiments on multiple datasets demonstrate that the proposed method, without the adversarial training protocol, performs better than or on par with current state-of-the-art methods \cite{vondrick2016generating, saito2017temporal, tulyakov2018mocogan}. Moreover, we do not need to optimize the (multiple) discriminator networks as in previous methods \cite{vondrick2016generating, saito2017temporal, tulyakov2018mocogan}, which offers a computational advantage.
\end{itemize}
\section{Related Works}\label{sec: rel_work}
Our work relates to two major research directions:
video synthesis and non-adversarial learning. This section focuses on some representative methods closely related to our work.
\subsection{Video Synthesis} Video synthesis has been studied from multiple perspectives \cite{vondrick2016generating, saito2017temporal, tulyakov2018mocogan, he2018probabilistic, siarohin2019animating} (see Tab.~\ref{tab:compare_methods} for a categorization of existing methods). VGAN \cite{vondrick2016generating} demonstrates that a video can be divided into foreground and background using deep neural networks. TGAN \cite{saito2017temporal} proposes to use a generator to capture temporal dynamics by generating correlated latent codes for each video frame and then using an image generator to map each of these latent codes to a single frame for the whole video. MoCoGAN \cite{tulyakov2018mocogan} presents a simple approach to separate content and motion latent codes of a video using adversarial learning. The most relevant work for us is Video-VAE \cite{he2018probabilistic}, which extends the idea of image generation to video generation using a VAE by proposing a structured latent space in conjunction with the VAE architecture for video synthesis. While this method doesn't require a discriminator network, it depends on a reference input frame to generate a video. In contrast, our method proposes an efficient framework for synthesizing videos from learnable latent vectors without any input frame. This gives a controlled environment for video synthesis that even enables us to generate visually good-quality unseen videos by combining static and transient parts.
\begin{table}
\begin{center}\small{
\begin{tabular}{M{1.7cm}|M{1.5cm}|M{1.4cm}|M{1.8cm}}
\toprule[1.2pt]
\multicolumn{1}{c|}{\multirow{2}{*}{Methods}} & \multicolumn{3}{c}{Settings}\\
\cline{2-4}
& \small{Adversarial learning?} & \small{Input frame?} & \small{Input latent vectors?} \\
\midrule
VGAN \cite{vondrick2016generating} & \checkmark & \ding{55} & \checkmark (random)\\
\hline
TGAN \cite{saito2017temporal} & \checkmark & \ding{55} & \checkmark (random) \\
\hline
MoCoGAN \cite{tulyakov2018mocogan} & \checkmark & \ding{55} & \checkmark (random)\\
\hline
Video-VAE \cite{he2018probabilistic} & \ding{55} & \checkmark & \checkmark (random) \\
\hline
\textbf{Ours} & \ding{55} & \ding{55} & \checkmark (learned)\\
\bottomrule[1.2pt]
\end{tabular}}
\end{center}
\caption{\textbf{Categorization of prior works in video synthesis.} Different from existing methods, our model doesn't require a discriminator or any reference input frame. However, since we have learned latent vectors, we have control over the kind of videos the model should generate.}
\label{tab:compare_methods}
\vspace{-1.1em}
\end{table}
\subsection{Non-Adversarial Learning} Generative adversarial networks, as powerful as they are in pixel space synthesis, are also difficult to train. This is owing to the saddle-point based optimization game between the generator and the discriminator. On top of the challenges discussed in the previous section, GANs require careful user-driven configuration tuning which may not guarantee the same performance for every run. Some techniques to make the generator agnostic to the described problems have been discussed in \cite{salimans2016improved}. An alternative direction has given rise to non-adversarial learning of generative networks \cite{bojanowski2018optimizing, hoshen2019non}. Both \cite{bojanowski2018optimizing, hoshen2019non} showed that properties of convolutional GANs can be mimicked using simple reconstruction losses while discarding the discriminator.
While there has been some work on image generation from learned latent vectors \cite{bojanowski2018optimizing, hoshen2019non}, our work significantly differs from these methods as we do not map all the frames of a given video pixel-wise to the same latent distribution. This is because doing so would require a separate latent space (and hence a separate model) for each video in a given dataset, and performing any operation in that space would naturally become video specific. Instead, we divide the latent space of videos sharing the same class into two parts: static and transient. This gives us a dictionary of static latent vectors for all videos and a common transient latent subspace. Hence, any random video of the dataset can now be represented by the combination of one static vector (which remains the same for all frames) and the common transient subspace.
\section{Formulation}\label{sec:prop_method}
Define a video clip $\mathsf{V}$ represented by $L$ frames as
$\mathsf{V} = \begin{bmatrix}v_1, v_2, \cdots, v_L
\end{bmatrix}$. Corresponding to each frame $v_i$, let there be a point $z_i$ in latent space $\mathcal{Z}_\mathsf{V}\in\mathbb{R}^{D\times L}$ such that
\begin{align}\label{2}
\mathcal{Z}_{\mathsf{V}} = \begin{bmatrix}z_1, z_2, \cdots, z_L
\end{bmatrix}
\end{align}
which forms a path of length $L$. We propose to disentangle a video into two parts: a static constituent, which captures the constant portion of the video common to all frames, and a transient constituent, which represents the temporal dynamics between all the frames in the video. Hence, let $\mathcal{Z}_\mathsf{V}$ be decomposed as $\mathcal{Z}_\mathsf{V} = [\mathcal{Z}_\mathsf{s}^\top ,\mathcal{Z}_\mathsf{t}^\top]^\top$ where $\mathcal{Z}_\mathsf{s}\in\mathbb{R}^{D_\mathsf{s}\times L}$ represents the static subspace and $\mathcal{Z}_\mathsf{t}\in\mathbb{R}^{D_\mathsf{t}\times L}$ represents the transient subspace with $D = D_\mathsf{s} + D_\mathsf{t}$. Thus $\{z_i\}_{i=1}^L$ in (\ref{2}) can be expressed as $z_i = \begin{bmatrix}z^{(\mathsf{s})\top}_{i} ,z^{(\mathsf{t})\top}_{i}\end{bmatrix}^\top~\forall~i=1, 2,\cdots,L$. Next, assuming that the video is of short length, we can fix $z^{(\mathsf{s})}_{i} = z^{(\mathsf{s})}$ for all frames after sampling only once. Therefore, (\ref{2}) can be expressed as
\begin{align}\label{3}
\mathcal{Z}_\mathsf{V} = \begin{bmatrix}\begin{bmatrix}z^{(\mathsf{s})} \\ z^{(\mathsf{t})}_{1}\end{bmatrix}, \begin{bmatrix}z^{(\mathsf{s})} \\ z^{(\mathsf{t})}_{2}\end{bmatrix}, \cdots, \begin{bmatrix}z^{(\mathsf{s})} \\ z^{(\mathsf{t})}_{L}\end{bmatrix}
\end{bmatrix}
\end{align}
The transient portion will represent the motion of a given video. Intuitively, the latent vectors corresponding to this transient state should be correlated, or in other words, will form a path between $z^{(\mathsf{t})}_{1}$ and $z^{(\mathsf{t})}_{L}$.
Specifically, the frames in a video are correlated in time, and hence a frame $v_T$ at time $T$ is a function of all previous frames $\{v_i\}_{i=1}^{T-1}$. As a result, their corresponding transient representations should also exhibit such a trajectory. This kind of representation of latent vectors can be obtained by employing a Recurrent Neural Network (RNN), where the output of each cell of the network is a function of its previous state or input. Denote the RNN as $\mathcal{R}$ with weights $\theta$. Then, the RNN output $\mathcal{R}\br{z_i} = \{r^{(\mathsf{t})}_{i}\}~\forall~i = 1, 2, \cdots, L$ is a sequence of correlated variables representing the transient state of the video.
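A minimal sketch of this construction is given below; the use of a GRU, the batch-first tensor layout, and the dimension choices are illustrative assumptions rather than fixed design choices of our model.
\begin{verbatim}
import torch
import torch.nn as nn

L, D_t = 16, 32            # frames per clip, transient dimension (assumed)

class TransientRNN(nn.Module):
    """R: maps the L transient inputs to L correlated codes r_1,...,r_L."""
    def __init__(self, d_t=D_t):
        super().__init__()
        self.gru = nn.GRU(d_t, d_t, batch_first=True)

    def forward(self, z_t):           # z_t: (batch, L, D_t)
        r_t, _ = self.gru(z_t)        # each r_i depends on z_1,...,z_i
        return r_t

z_t = torch.randn(4, L, D_t, requires_grad=True)  # learnable transient codes
r_t = TransientRNN()(z_t)             # correlated transient trajectory
\end{verbatim}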
\subsection{Learning Network Weights}
Define a generative network $\mathcal{G}$ with weights represented by $\bm{\gamma}$. $\mathcal{G}$ takes latent vectors sampled from $\mathcal{Z}_\mathsf{V}$ as input and predicts up to $L$ frames of the video clip. For a set of $N$ videos, initialize a set of \textit{D}-dimensional vectors $\mathcal{Z}_\mathsf{V}$ to form the pairs \Big{\{}\big{(}$\mathcal{Z}_{\mathsf{V}_1}, \mathsf{V}_1$\big{)}, \big{(}$\mathcal{Z}_{\mathsf{V}_2}, \mathsf{V}_2$\big{)}, $\cdots$, \big{(}$\mathcal{Z}_{\mathsf{V}_N}, \mathsf{V}_N$\big{)} \Big{\}}. More specifically from (\ref{3}), defining $\textbf{z}^{(\mathsf{s})} = \begin{bmatrix}z^{(\mathsf{s})}, z^{(\mathsf{s})}, \cdots, z^{(\mathsf{s})}\end{bmatrix}\in\mathbb{R}^{D_\mathsf{s}\times L}$, and $\textbf{z}^{(\mathsf{t})} = \begin{bmatrix}z_1^{(\mathsf{t})}, z_2^{(\mathsf{t})}, \cdots, z_L^{(\mathsf{t})}\end{bmatrix}\in\mathbb{R}^{D_\mathsf{t}\times L}$, we will have the pairs
\begin{align*}
\Bigg{\{}\br{\begin{bmatrix}\textbf{z}^{(\mathsf{s})} \\ \textbf{z}^{(\mathsf{t})}\end{bmatrix}_{1}, \mathsf{V}_1}, \br{\begin{bmatrix}\textbf{z}^{(\mathsf{s})} \\ \textbf{z}^{(\mathsf{t})}\end{bmatrix}_{2}, \mathsf{V}_2}, \cdots, \br{\begin{bmatrix}\textbf{z}^{(\mathsf{s})} \\ \textbf{z}^{(\mathsf{t})}\end{bmatrix}_{N}, \mathsf{V}_N} \Bigg{\}}.
\end{align*}
With these pairs, we propose to optimize the weights $\gamma$, $\theta$, and the input latent vectors $\mathcal{Z}_\mathsf{V}$ (sampled once at the beginning of training) in the following manner. For each video $\mathsf{V}_j$, we jointly optimize $\theta$, $\gamma$, and $\{\mathcal{Z}_{\mathsf{V}_j}\}_{j=1}^N$ in every epoch in two stages:
\begin{subequations}\label{4}
\renewcommand{\theequation}{\theparentequation.\arabic{equation}}
\begin{alignat}{3}
& \text{Stage 1}: \quad && \min_{\gamma}~\ell\br{ \mathsf{V}_j, \mathcal{G}(\mathcal{Z}_{\mathsf{V}_j})\vert\br{\mathcal{Z}_{\mathsf{V}_j},\theta}}\label{4.1}\\
& \text{Stage 2}: \quad && \quad \min_{\mathcal{Z}_\mathsf{V}, \theta}~\ell\br{ \mathsf{V}_j, \mathcal{G}(\mathcal{Z}_{\mathsf{V}_j})\vert\gamma}\label{4.2}
\end{alignat}
\end{subequations}
$\ell(\cdot)$ can be any regression-based loss. For the rest of the paper, we will refer to both (\ref{4.1}) and (\ref{4.2}) together as $\min\limits_{\mathcal{Z}_\mathsf{V}, \theta, \gamma}\ell_{\text{rec}}$.
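To make the two-stage update concrete, the following is a minimal, runnable PyTorch sketch of one pass over (\ref{4.1})--(\ref{4.2}). All sizes, the linear stand-ins for $\mathcal{G}$ and $\mathcal{R}$, and the $\ell_1$ stand-in for $\ell(\cdot)$ are illustrative placeholders, not the architectures used in our experiments.
\begin{verbatim}
import torch, torch.nn as nn

L, Ds, Dt, N = 16, 8, 8, 4                  # frames, dims, #videos
videos = torch.rand(N, L, 3 * 16 * 16)      # toy 16x16 RGB frames
gen = nn.Linear(Ds + Dt, 3 * 16 * 16)       # stand-in generator G
rnn = nn.GRU(Dt, Dt, batch_first=True)      # stand-in RNN R
z_s = [torch.randn(Ds, requires_grad=True) for _ in range(N)]
z_t = [torch.randn(L, Dt, requires_grad=True) for _ in range(N)]
opt_gamma = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_theta_z = torch.optim.Adam(list(rnn.parameters()) + z_s + z_t,
                               lr=1e-3)

def reconstruct(j):
    r, _ = rnn(z_t[j].unsqueeze(0))         # correlate transient codes
    codes = torch.cat([z_s[j].expand(L, -1), r.squeeze(0)], dim=1)
    return gen(codes)                       # L generated frames

for j in range(N):                          # one epoch
    opt_gamma.zero_grad()                   # Stage 1: update gamma
    nn.functional.l1_loss(reconstruct(j), videos[j]).backward()
    opt_gamma.step()
    opt_theta_z.zero_grad()                 # Stage 2: update theta, Z
    nn.functional.l1_loss(reconstruct(j), videos[j]).backward()
    opt_theta_z.step()
\end{verbatim}
In our actual experiments, $\mathcal{G}$ is a convolutional generator and $\mathcal{R}$ a one-layer GRU, as described in the experimental settings.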
\noindent \textbf{Regularized loss function to capture static subspace.}
The transient subspace, along with the RNN, handles the temporal dynamics of the video clip. To equally capture the static portion of the video, we randomly choose a frame from the video and compare it with its corresponding generated frame during training. For this, we update the above loss as follows.
\begin{align}\label{5}
\min_{\mathcal{Z}_\mathsf{V}, \theta, \gamma}~\br{\ell_{\text{rec}} + \lambda_{\text{s}}\ell_{\text{static}}}
\end{align}
where $\ell_{\text{static}} = \ell\br{\hat{v}_k, v_k}$ with $\textit{k}\in\{1, 2, \cdots, L\}$, $v_k$ is the ground truth frame, $\hat{v}_k = \mathcal{G}(\textbf{z}_k)$, and $\lambda_{\text{s}}$ is the regularization constant. $\ell_{\text{static}}$ can also be understood as essentially playing the role of the image discriminator in \cite{tulyakov2018mocogan, wang2018video}, which ensures that the generated frame is close to the ground truth frame.
\subsection{Learning Latent Spaces}\label{sec:Latent_space}
Non-adversarial learning involves joint optimization of the network weights as well as the corresponding input latent space. Apart from the gradients with respect to the loss in (\ref{5}), we propose to further optimize the latent space with the gradient of a loss based on the triplet condition, as follows.
\subsubsection{The Triplet Condition}\label{sec:trip_cond}
\begin{figure}[ht]
\vspace{-1em}
\centering
\includegraphics[width=0.47\textwidth]{figures/triplet_cond.pdf}
\caption{\textbf{Triplet Condition in the transient latent space.} Latent code representation of different frames of short video clips may lie very near to each other in the transient subspace. Using the proposed triplet condition, our model learns to explain the dynamics of similar looking frames and simultaneously map them to distinct latent vectors.}
\label{fig:triplet_condl}
\end{figure}
Short video clips often have indistinguishable dynamics in consecutive frames, which can force the latent code representations to lie very near to each other in the transient subspace. However, an ideal transient space should ensure that the latent vector representation of a frame is closer to that of a similar frame than to that of a dissimilar one \cite{schroff2015facenet, sermanet2018time}. To this end, we add a triplet loss to (\ref{5}) that ensures that a pair of co-occurring frames $v^a_i$ (anchor) and $v^p_i$ (positive) are mapped close to each other, yet remain distinct, in the embedding space, and are closer to each other than to any other frame $v^n_i$ (negative) (see Fig.~\ref{fig:triplet_condl}). In this work, positive frames are randomly sampled within a margin range $\alpha$ of the anchor, and negatives are chosen outside of this margin range. Defining a triplet set with transient latent code vectors \{$z^{(\mathsf{t}),a}_i, z^{(\mathsf{t}),p}_i, z^{(\mathsf{t}),n}_i$\}, we aim to learn the transient embedding space $\textbf{z}^{(\mathsf{t})}$ such that
\begin{align*}
\Vert z^{(\mathsf{t}),a}_i - z^{(\mathsf{t}),p}_i\Vert_2^2 + \alpha < \Vert z^{(\mathsf{t}),a}_i - z^{(\mathsf{t}),n}_i\Vert_2^2
\end{align*}
$\forall~\{z^{(\mathsf{t}),a}_i, z^{(\mathsf{t}),p}_i, z^{(\mathsf{t}),n}_i\}\in\Gamma$, where $\Gamma$ is the set of all possible triplets in $\textbf{z}^{(\mathsf{t})}$. With the above regularization, the loss in (\ref{5}) can be written as
\begin{align}\label{6}
&\qquad\qquad\qquad\min_{\mathcal{Z}_\mathsf{V}, \theta, \gamma}~\br{\ell_{\text{rec}} + \lambda_{\text{s}}\ell_{\text{static}}}\nonumber\\
&\text{s.t.}\quad \Vert z^{(\mathsf{t}),a}_i - z^{(\mathsf{t}),p}_i\Vert_2^2 + \alpha < \Vert z^{(\mathsf{t}),a}_i - z^{(\mathsf{t}),n}_i\Vert_2^2
\end{align}
where $\alpha$ is a hyperparameter that controls the margin while selecting positives and negatives.
\subsection{Full Objective Function}
For any choice of differentiable generator $\mathcal{G}$, the objective (\ref{5}) is differentiable with respect to $\mathcal{Z}_\mathsf{V}$ and $\br{\gamma, \theta}$ \cite{bora2017compressed}. We initialize $\mathcal{Z}_\mathsf{V}$ by sampling the static and transient latent vectors from two different Gaussian distributions. We also ensure that the latent vectors $\mathcal{Z}_\mathsf{V}$ lie on the unit $\ell_2$ sphere, and hence we project $\mathcal{Z}_\mathsf{V}$ after each update by dividing its value by $\mathsf{max}\br{1, \Vert\mathcal{Z}_\mathsf{V}\Vert}$ \cite{bojanowski2018optimizing}, where $\mathsf{max}\br{\cdot}$ returns the maximum of its arguments. Finally, the complete objective function can be written as follows.
\begin{align}\label{6i}
&\min_{\mathcal{Z}_\mathsf{V}, \theta, \gamma}~\br{\ell_{\text{rec}} + \lambda_{\text{s}}\ell_{\text{static}} + \lambda_{\text{t}}\ell_{\text{triplet}}}
\end{align}
where $\ell_{\text{static}} = \ell\br{\hat{v}_k, v_k}$, $\lambda_{\text{t}}$ is a regularization constant for the triplet loss, and $\ell_{\text{triplet}} = \mathsf{max}\br{\Vert z^{(\mathsf{t}),a}_i - z^{(\mathsf{t}),p}_i\Vert_2^2 + \alpha - \Vert z^{(\mathsf{t}),a}_i - z^{(\mathsf{t}),n}_i\Vert_2^2, 0}$. The weights of the generator $\gamma$ and the static latent vector $z^{(\mathsf{s})}$ are updated by gradients of the losses $\ell_{\text{rec}}$ and $\ell_{\text{static}}$. The weights, $\theta$, of the RNN and the transient latent vectors $\textbf{z}^{(\mathsf{t})}$ are updated by gradients of the losses $\ell_{\text{rec}}$, $\ell_{\text{static}}$, and $\ell_{\text{triplet}}$.
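For illustration, the hinge form of $\ell_{\text{triplet}}$ defined above can be computed as in the following sketch, where the shapes and names are hypothetical:
\begin{verbatim}
import torch

def triplet_loss(z_a, z_p, z_n, alpha=0.2):
    # max(||z_a - z_p||^2 + alpha - ||z_a - z_n||^2, 0), averaged
    # over sampled triplets; alpha is the margin hyperparameter.
    d_pos = (z_a - z_p).pow(2).sum(dim=-1)
    d_neg = (z_a - z_n).pow(2).sum(dim=-1)
    return torch.relu(d_pos + alpha - d_neg).mean()
\end{verbatim}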
\vspace{-0.25cm}
\subsubsection{Low Rank Representation for Interpolation}
The objective of video frame interpolation is to synthesize non-existent frames in-between the reference frames. While the triplet condition ensures that similar frames have their transient latent vectors nearby, it does not ensure that they lie on a manifold where simple linear interpolation yields latent vectors that generate frames with plausible motion relative to the preceding and succeeding frames \cite{hyder2020low,bojanowski2018optimizing}. We therefore assume that the transient latent vectors can be represented in a much lower dimensional space than their ambient space. To enforce such a property, we project the latent vectors onto a low dimensional subspace while learning them along with the network weights, as first proposed in \cite{hyder2020low}. Mathematically, the loss in (\ref{6i}) can be written as
\begin{align}\label{7}
&\min_{\mathcal{Z}_\mathsf{V}, \theta, \gamma}~\br{\ell_{\text{rec}} + \lambda_{\text{s}}\ell_{\text{static}} + \lambda_{\text{t}}\ell_{\text{triplet}}}\nonumber\\
&~~\text{s.t.}\qquad~~~~\mathsf{rank}\br{\textbf{z}^{(\mathsf{t})}} = \rho
\end{align}
where $\mathsf{rank}\br{\cdot}$ denotes the rank of the matrix and $\rho$ is a hyper-parameter that sets the dimension of the subspace onto which $\textbf{z}^{(\mathsf{t})}$ is projected.
We achieve this by reconstructing the $\textbf{z}^{(\mathsf{t})}$ matrix from its top $\rho$ singular vectors in each iteration \cite{friedman2001elements}.
Note that we employ this condition only when optimizing the latent space for the frame interpolation experiments in Sec.~\ref{sec:Frame_inter}.
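The rank-$\rho$ projection amounts to a truncated SVD of the $D_\mathsf{t}\times L$ transient matrix in each iteration; a minimal sketch:
\begin{verbatim}
import torch

def project_rank(z_t, rho):
    # Best rank-rho approximation of the D_t x L transient matrix,
    # reconstructed from its top-rho singular vectors.
    U, S, Vh = torch.linalg.svd(z_t, full_matrices=False)
    return U[:, :rho] @ torch.diag(S[:rho]) @ Vh[:rho, :]
\end{verbatim}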
\section{Experiments}
In this section, we present extensive experiments to demonstrate the effectiveness of our proposed approach in generating videos through learned latent spaces.
\subsection{Datasets}\label{sec:dataset}
We evaluate the performance of our approach using three publicly available datasets which have been used in many prior works \cite{he2018probabilistic, vondrick2016generating, tulyakov2018mocogan}.
\textbf{Chair-CAD }\cite{aubry2014seeing}.
This dataset consists of a total of 1393 chair CAD models, out of which we randomly choose 820 chairs for our experiments with the first 16 frames, similar to \cite{he2018probabilistic}. The rendered frames in each video for all the models are center-cropped and then resized to $64 \times 64 \times 3$ pixels. We obtain the transient latent vectors for all the chair models with one static latent vector for the training set.
\textbf{Weizmann Human Action }\cite{gorelick2007actions}. This dataset provides 10 different actions performed by 9 people, amounting to 90 videos. As with Chair-CAD, we center-cropped each frame and then resized it to $64 \times 64 \times 3$ pixels. For this dataset, we train our model to obtain nine static latent vectors (for the nine different identities) and ten transient latent vectors (for the ten different actions) for videos with 16 frames each.
\textbf{Golf scene dataset }\cite{vondrick2016generating}. This dataset contains 20,268 golf videos with $128 \times 128 \times 3$ pixels, which comprise 583,508 short video clips in total. We randomly chose 500 videos with 16 frames each and resized the frames to $64 \times 64 \times 3$ pixels. As with the Chair-CAD dataset, we obtained the transient latent vectors for all the golf scenes and one static latent vector for the training set.
\subsection{Experimental Settings}
We implement our framework in PyTorch \cite{paszke2017automatic}. Please see supplementary material for details on implementation and values of different hyper-parameters ($D_\mathsf{s}, D_\mathsf{t}, \alpha$, etc.).
\textbf{Network Architecture.} We choose DCGAN \cite{radford2015unsupervised} as the generator architecture for the Chair-CAD and Golf scene dataset, and conditional generator architecture from \cite{mirza2014conditional} for the Weizmann Human Action dataset for our experiments. For the RNN, we employ a one-layer gated recurrent unit network with 500 hidden units \cite{chung2014empirical}.
\textbf{Choice of Loss Function for }$\ell_{\text{rec}}$ \textbf{and} $\ell_{\text{static}}$\textbf{.} One straightforward choice is the mean squared loss, but it has been shown in the literature that it leads to the generation of blurry pixels \cite{zhao2015loss}. Moreover, it has been shown empirically that generative functions in adversarial learning focus on edges \cite{bojanowski2018optimizing}. Motivated by this, the loss function for $\ell_{\text{rec}}$ and $\ell_{\text{static}}$ is chosen to be the Laplacian pyramid loss $\mathcal{L}_{\text{Laplacian}}$ \cite{ling2006diffusion} defined as
\begin{align*}
\mathcal{L}_{\text{Laplacian}}\br{v, \hat{v}} = \sum_l2^{2^l}\vert\mathsf{L}^l\br{v}-\mathsf{L}^l\br{\hat{v}}\vert_1
\end{align*}
where $\mathsf{L}^l\br{\cdot}$ is the \textit{l}-th level of the Laplacian pyramid representation of the input.
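A minimal sketch of this loss is given below. Average pooling stands in for the Gaussian pyramid filter for brevity, the level weighting mirrors the expression above, and the spatial size is assumed divisible by $2^{\texttt{levels}}$.
\begin{verbatim}
import torch
import torch.nn.functional as F

def laplacian_loss(v, v_hat, levels=3):
    # v, v_hat: (B, C, H, W) batches.  Each Laplacian band is the
    # image minus an upsampled coarse copy; bands are compared in
    # the l1 norm with weight 2^(2^l), following the text.
    loss = 0.0
    for l in range(levels):
        v_c, vh_c = F.avg_pool2d(v, 2), F.avg_pool2d(v_hat, 2)
        band = (v - F.interpolate(v_c, scale_factor=2.0)) \
             - (v_hat - F.interpolate(vh_c, scale_factor=2.0))
        loss = loss + (2.0 ** (2 ** l)) * band.abs().sum()
        v, v_hat = v_c, vh_c
    return loss
\end{verbatim}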
\textbf{Baselines.} We compare our proposed method with two adversarial methods. For Chair-CAD and Weizmann Human Action, we use MoCoGAN \cite{tulyakov2018mocogan} as the baseline, and for the Golf scene dataset, we use VGAN \cite{vondrick2016generating} as the baseline. We use the publicly available code for MoCoGAN and VGAN, and set the hyper-parameters as recommended in the published work. We also compare two different versions of the proposed method by ablating the proposed loss functions. Note that we could not compare our results with Video-VAE \cite{he2018probabilistic} using our performance measures (described below), as the implementation has not been made available by the authors, and despite our best efforts we could not reproduce the results provided by them.
\textbf{Performance measures.}
Past video generation works have been evaluated quantitatively using the Inception score (IS) \cite{he2018probabilistic}. However, it has been shown that IS is not a good evaluation metric for pixel domain generation, as the maximal IS score can be obtained by synthesizing a video from every class or mode in the given data distribution \cite{lucic2018gans, barratt2018note, theis2016note}. Moreover, a high IS does not guarantee any confidence in the quality of generation, but only in the diversity of generation. Since a generative model trained using our proposed method can generate all videos using the learned latent dictionary\footnote{Direct video comparison is straightforward for our approach as the corresponding one-to-one ground truth is known. However, for \cite{tulyakov2018mocogan, vondrick2016generating}, we do not know which video is being generated (the action may be known, e.g., \cite{tulyakov2018mocogan}), which makes such direct comparison infeasible and unfair.}, and for a fair comparison with baselines, we use the following two measures, similar to the measures provided in \cite{tulyakov2018mocogan}. We also provide relevant bounds computed on real videos for reference. Note that arrows indicate whether higher $\br{\uparrow}$ or lower $\br{\downarrow}$ scores are better.
\vspace{1mm}
\noindent (1) \textbf{Relative Motion Consistency Score (MCS $\downarrow$)}: The difference between consecutive frames captures the moving components, and hence the motion in a video. First, each frame in the generated video, as well as in the ground-truth data, is represented as a feature vector computed using a VGG16 network \cite{simonyan2014very} pre-trained on ImageNet \cite{russakovsky2015imagenet} at the \texttt{relu3\_3} layer. Second, the averaged consecutive frame-feature difference vector is computed for both sets of videos, denoted by $\hat{f}$ and $f$ respectively. Finally, the relative MCS is given by $\log_{10}\br{\Vert f - \hat{f}\Vert_2^2}$.
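A sketch of this computation follows; taking \texttt{relu3\_3} to be the output of the first sixteen layers of torchvision's VGG16 feature stack is our assumption, and input normalization is omitted for brevity.
\begin{verbatim}
import torch
import torchvision

vgg = torchvision.models.vgg16(pretrained=True).features[:16].eval()

def motion_vector(video):            # video: (L, 3, H, W) floats
    with torch.no_grad():
        feats = vgg(video).flatten(start_dim=1)
    return (feats[1:] - feats[:-1]).mean(dim=0)

def relative_mcs(real_videos, fake_videos):
    # Average consecutive frame-feature differences over each set,
    # then take log10 of the squared distance between the averages.
    f = torch.stack([motion_vector(v) for v in real_videos]).mean(0)
    g = torch.stack([motion_vector(v) for v in fake_videos]).mean(0)
    return torch.log10(((f - g) ** 2).sum())
\end{verbatim}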
\vspace{1mm}
\noindent (2) \textbf{Frame Consistency Score (FCS $\uparrow$)}:
This score measures the consistency of the static portion of the generated video frames. We keep the first frame of the generated video as the reference and compute the average structural similarity measure over all frames. The FCS is then given by the average of this measure over all videos.
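A sketch using scikit-image's SSIM implementation (an assumption; any SSIM implementation would do):
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fcs(videos):
    # videos: iterable of (L, H, W, 3) arrays.  The first frame of
    # each video is the reference; SSIM is averaged over the rest,
    # then over all videos.
    per_video = []
    for v in videos:
        ref = v[0]
        per_video.append(np.mean([ssim(ref, f, channel_axis=-1)
                                  for f in v[1:]]))
    return float(np.mean(per_video))
\end{verbatim}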
\subsection{Qualitative Results}
Fig.~\ref{fig:qual_result} shows some examples with randomly selected frames of generated videos for the proposed method and the adversarial approaches MoCoGAN \cite{tulyakov2018mocogan} and VGAN \cite{vondrick2016generating}. For the Chair-CAD \cite{aubry2014seeing} and Weizmann Human Action \cite{gorelick2007actions} datasets, it can be seen that the proposed method is able to generate visually good quality videos with a non-adversarial training protocol, whereas MoCoGAN produces blurry and inconsistent frames. Since we use optimized latent vectors, unlike MoCoGAN (which uses random latent vectors for video generation), our method produces visually more appealing videos. Fig.~\ref{fig:qual_result} presents two particularly important points. As visualized for the Chair-CAD videos, the adversarial approach of MoCoGAN produces not only blurred chair images in the generated video, but the images are also non-uniform in quality. Further, it can be seen that as the time step increases, MoCoGAN tends to generate the same chair for different videos. This shows a major drawback of the adversarial approaches, namely that they fail to learn the diversity of the data distribution. Our approach overcomes this by producing an optimized dictionary of latent vectors which can be used for generating any video in the data distribution easily. To further validate our method for qualitative results, we present the following experiments.
\vspace{-0.25cm}
\subsubsection{Qualitative Ablation Study}
Fig.~\ref{fig:qual_abl_result} qualitatively shows the contribution of the specific parts of the proposed method on Chair-CAD \cite{aubry2014seeing}. First, we investigate the impact of input latent vector optimization. For a fair comparison, we optimize the model for the same number of epochs. It can be observed that the model benefits from the joint optimization of the input latent space to produce better visual results. Next, we validate the contribution of $\ell_{\text{static}}$ and $\ell_{\text{triplet}}$ on a difficult video example whose chair color matches the background. Our method, combined with $\ell_{\text{static}}$ and $\ell_{\text{triplet}}$, is able to distinguish between the white background and the white body of the chair model.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth, height=1in]{qualitative_results/Visual_figures_ablation.pdf}
\caption{\textbf{Qualitative ablation study on Chair-CAD \cite{aubry2014seeing}.} \textit{Left}: When the input latent space is not optimized, the model is not able to generate good quality frames, resulting in poor videos, whereas with latent optimization, the generated frames are sharper. \textit{Right}: The impact of $\ell_{\text{static}}$ and $\ell_{\text{triplet}}$ is indicated by the red bounding boxes. Our method with $\ell_{\text{static}}$ and $\ell_{\text{triplet}}$ captures the difference between the white background and the white chair, whereas without these two loss functions, the chair images are not distinguishable from their background. \texttt{+} and \texttt{-} indicate presence and absence of the terms, respectively.}
\label{fig:qual_abl_result}
\end{figure*}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.31\textwidth}
\hspace{-2em}
\includegraphics[width=1.1\textwidth, height=1.5in]{qualitative_results/Visual_figures_chair.pdf}
\caption{Chair-CAD \cite{aubry2014seeing}}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\hspace{-0.7em}
\includegraphics[width= 1.1\textwidth, height=1.5in]{qualitative_results/Visual_figures_WZ.pdf}
\caption{Weizmann Human Action \cite{gorelick2007actions}}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\hspace{0.3em}
\includegraphics[width= 1.1\textwidth, height=1.5in]{qualitative_results/Visual_figures_golf.pdf}
\caption{Golf \cite{vondrick2016generating}}
\end{subfigure}
\caption{\textbf{Qualitative results comparison with state-of-the-art methods.} We show two generated video sequences for MoCoGAN \cite{tulyakov2018mocogan} (for (a) Chair-CAD \cite{aubry2014seeing}, (b) Weizmann Human Action \cite{gorelick2007actions}), VGAN \cite{vondrick2016generating} (for (c) Golf scene \cite{vondrick2016generating}) (\textit{top}), and the proposed method (\textbf{Ours}, \textit{bottom}). The proposed method produces visually sharper and more consistent videos using the non-adversarial training protocol. More examples are provided in the supplementary material.}
\label{fig:qual_result}
\vspace{-1em}
\end{figure*}
\begin{table*}[ht]
\centering
\begin{subtable}{0.3\textwidth}
\centering \small{
\begin{tabular}{c|c|c}
\toprule[1.2pt]
& MCS $\downarrow$ & FCS $\uparrow$ \\ \midrule
Bound & 0.0 & 0.91\\
\hline
MoCoGAN \cite{tulyakov2018mocogan} & 4.11 & 0.85 \\ \hline
Ours ($\texttt{-} \ell_{\text{triplet}} \texttt{-} \ell_{\text{static}}$) & 3.83 & 0.77 \\ \hline
Ours ($\texttt{+} \ell_{\text{triplet}} \texttt{+} \ell_{\text{static}}$) & \textbf{3.32} & \textbf{0.89} \\ \bottomrule[1.2pt]
\end{tabular}}
\caption{Chair-CAD \cite{aubry2014seeing}}
\label{tab:quant_chair_cad}
\end{subtable}%
\qquad
\begin{subtable}{0.3\textwidth}
\centering \small{
\begin{tabular}{c|c|c}
\toprule[1.2pt]
& MCS $\downarrow$ & FCS $\uparrow$ \\ \midrule
Bound & 0.0 & 0.95\\
\hline
MoCoGAN \cite{tulyakov2018mocogan} & 3.41 & 0.85 \\ \hline
Ours ($\texttt{-} \ell_{\text{triplet}} \texttt{-} \ell_{\text{static}}$) & 3.87 & 0.79 \\ \hline
Ours ($\texttt{+} \ell_{\text{triplet}} \texttt{+} \ell_{\text{static}}$) & \textbf{2.63} & \textbf{0.90}\\ \bottomrule[1.2pt]
\end{tabular}}
\caption{Weizmann Human Action \cite{gorelick2007actions}}
\label{tab:quant_weizmann}
\end{subtable}
\qquad
\begin{subtable}{0.3\textwidth}
\centering \small{
\begin{tabular}{c|c|c}
\toprule[1.2pt]
& MCS $\downarrow$ & FCS $\uparrow$ \\ \midrule
Bound & 0.0 & 0.97 \\
\hline
VGAN \cite{vondrick2016generating} & 3.61 & \textbf{0.88} \\ \hline
Ours ($\texttt{-} \ell_{\text{triplet}} \texttt{-} \ell_{\text{static}}$) & 3.78 & 0.84 \\ \hline
Ours ($\texttt{+} \ell_{\text{triplet}} \texttt{+} \ell_{\text{static}}$) & \textbf{2.71} & 0.84\\
\bottomrule[1.2pt]
\end{tabular}}
\caption{Golf \cite{vondrick2016generating}}
\label{tab:golf}
\end{subtable}
\caption{\textbf{Quantitative results comparison with state-of-the-art methods.} The proposed method obtains better scores on the Chair-CAD \cite{aubry2014seeing}, Weizmann Human Action \cite{gorelick2007actions}, and Golf \cite{vondrick2016generating} datasets, compared to the adversarial approaches (MoCoGAN and VGAN). Best scores are highlighted in bold. }
\label{tab:main-quant}
\vspace{-1em}
\end{table*}
\begin{table}[H]
\centering
\vspace{-1em}
\small{
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
\toprule[1.2pt]
\multicolumn{1}{c|}{\multirow{2}{*}{Actions}} & \multicolumn{9}{c}{Identities}\\
\cline{2-10}
& P1 & P2 & P3 & P4 & P5 & P6 & P7 & P8 & P9 \\ \midrule
run &$\bullet$ & $\bullet$ & $\bullet$ & & \textcolor{red}{ $\bullet$} & & & & \\ \hline
walk & \textcolor{green(ryb)}{ $\bullet$} & $\bullet$ & & & $\bullet$ & & $\bullet$ & & \\ \hline
jump & & $\bullet$ & & $\bullet$ & $\bullet$ & $\bullet$ & \textcolor{blue}{ $\bullet$} & $\bullet$ & $\bullet$ \\ \hline
skip & $\bullet$ & $\bullet$ & $\bullet$ & & & $\bullet$ & $\bullet$ & \textcolor{yellow}{ $\bullet$} & $\bullet$ \\
\bottomrule[1.2pt]
\end{tabular}}
\caption{\textbf{Generating unseen videos by exchanging actions across identities.} Each cell in this table indicates a video in the dataset. Only cells containing the symbol \textcolor{black}{$\bullet$} indicate that the video was part of the training set. We randomly generated videos corresponding to some of the remaining cells, indicated by the symbols \textcolor{red}{$\bullet$}, \textcolor{green(ryb)}{$\bullet$}, \textcolor{yellow}{$\bullet$}, and \textcolor{blue}{$\bullet$}, and visualized in Fig.~\ref{fig:motion_exchange}.}
\label{tab:exchange}
\vspace{-0.7em}
\end{table}
\subsubsection{Action Exchange}
Our non-adversarial approach can effectively separate the static and transient portions of a video, and generate videos unseen during the training protocol. To validate these points, we choose a simple \textit{matrix} completion task over the combinations of identities and actions in the Weizmann Human Action \cite{gorelick2007actions} dataset. For training our model, we created a set of videos (without any cropping, to preserve the complete scale of the frame) represented by the cells marked with $\bullet$ in Tab.~\ref{tab:exchange}.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{qualitative_results/Ours_Visual_figures_motion_exchange.pdf}
\caption{\textbf{Examples of action exchange to generate unseen videos.} This figure shows the generated videos unseen during the training of the model, with colored bounding boxes indicating the colored dots (\textcolor{red}{ $\bullet$}, \textcolor{yellow}{ $\bullet$}, \textcolor{blue}{ $\bullet$}, \textcolor{green(ryb)}{ $\bullet$}) referred to in Tab.~\ref{tab:exchange}. This demonstrates the effectiveness of our method in disentangling the static and transient portions of videos.}
\label{fig:motion_exchange}
\vspace{-1.25em}
\end{figure}
Hence, the unseen videos correspond to the cells not marked with a black $\bullet$. During testing, we randomly generated these unseen videos (marked with \textcolor{red}{ $\bullet$}, \textcolor{yellow}{ $\bullet$}, \textcolor{blue}{ $\bullet$} and \textcolor{green(ryb)}{ $\bullet$} in Tab.~\ref{tab:exchange}), and the visual results are shown in Fig.~\ref{fig:motion_exchange}. This experiment clearly validates our claim of disentangling the static (identity) and transient (action) portions of a video, and of generating unseen videos from combinations of actions and identities \textit{not} part of the training set. Note that the generated videos may not exactly resemble ground truth videos of the said combinations, as we learn $\textbf{z}^{(\mathsf{t})}$ over a class of many videos.
\vspace{-0.4cm}
\subsubsection{Frame Interpolation}\label{sec:Frame_inter}
To show that our methodology can be employed for frame interpolation, we trained our model using the loss (\ref{7}) for $\rho = 2$ and $\rho = 10$. During testing, we generated intermediate frames by interpolating the learned latent variables of two distinct frames. For this, we computed the difference $\Delta z^{(\mathsf{t})}$ between the learned latent vectors of the second ($z^{(\mathsf{t})}_2$) and fifth ($z^{(\mathsf{t})}_5$) frames, and generated $k=3$ unseen frames using $\{z_2^{(\mathsf{t})} + \sfrac{n}{k}\Delta z^{(\mathsf{t})} \}_{n=1}^k$, after concatenating with $z^{(\mathsf{s})}$. Fig.~\ref{fig:interpolation} shows the results of interpolation between the second and fifth frames for two randomly chosen videos. Our method is thus able to produce dynamically consistent frames with respect to the reference frames without any pixel-level cues.
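A sketch of this interpolation step is given below; the names are hypothetical, and the generator is assumed to accept concatenated (static, transient) codes:
\begin{verbatim}
import torch

def interpolate_frames(gen, z_s, z_t2, z_t5, k=3):
    # Generate k unseen frames from z2 + (n/k) * (z5 - z2),
    # n = 1..k, each concatenated with the shared static code.
    delta = z_t5 - z_t2
    return [gen(torch.cat([z_s, z_t2 + (n / k) * delta], dim=-1))
            for n in range(1, k + 1)]
\end{verbatim}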
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth, height = 1.in]{qualitative_results/Visual_figures_interpolation_results.pdf}
\caption{\textbf{Examples of frame interpolation.} An important advantage of our method is that interpolation in the learned latent space translates to interpolation in video space via (\ref{7}). It can be observed that as $\rho$ increases, the interpolation (bounded by color) improves. Note that the adjacent frames are also generated frames, and not ground truth frames.}
\label{fig:interpolation}
\vspace{-0.5cm}
\end{figure}
\subsection{Quantitative Results}
Quantitative comparisons with respect to the baselines are provided in Tab.~\ref{tab:main-quant}. Compared to videos generated by the adversarial method MoCoGAN \cite{tulyakov2018mocogan}, we report a relative decrease of 19.22\% in terms of MCS and a 4.70\% relative increase in terms of FCS for the Chair-CAD dataset \cite{aubry2014seeing}. For the Weizmann Human Action \cite{gorelick2007actions} dataset, the proposed method achieves a relative decrease of 22.87\% in terms of MCS and a 4.61\% relative increase in terms of FCS. Similarly, for the Golf scene dataset \cite{vondrick2016generating}, we perform competitively with VGAN \cite{vondrick2016generating}, with an observed relative decrease of 24.90\% in terms of MCS. An important conclusion from these results is that our proposed method, being non-adversarial in nature, learns to synthesize a diverse set of videos and is able to perform at par with adversarial approaches. It should be noted that a better loss function for $\ell_{\text{rec}}$ and $\ell_{\text{static}}$ would produce stronger results. We leave this for future work.
\vspace{-1em}
\subsubsection{Quantitative Ablation Study}
\begin{table}
\small{
\begin{subtable}[t]{0.45\hsize}
\centering
\begin{tabular}{c|c|c}
\toprule[1.2pt]
& MCS $\downarrow$ & FCS $\uparrow$ \\
\midrule
Bound & 0 & 0.91 \\ \hline
$\texttt{-}\mathsf{Z}$ & 3.96 & 0.75 \\
\hline
$\texttt{+}\mathsf{Z}$ & 3.32 & 0.89 \\
\bottomrule[1.2pt]
\end{tabular}
\caption{With respect to latent space optimization.}
\label{tab:ablation_z}
\end{subtable}}~
\small{
\begin{subtable}[t]{0.5\hsize}
\centering
\begin{tabular}{c|c|c}
\toprule[1.2pt]
& MCS $\downarrow$ & FCS $\uparrow$ \\ \midrule
Bound & 0 & 0.91 \\ \hline
$\texttt{-} \ell_{\text{triplet}} \texttt{-} \ell_{\text{static}}$ & 3.83 & 0.77 \\ \hline
$\texttt{-} \ell_{\text{triplet}} \texttt{+} \ell_{\text{static}}$ & 3.82 & 0.85 \\ \hline
$\texttt{+} \ell_{\text{triplet}} \texttt{-} \ell_{\text{static}}$ & 3.36 & 0.81 \\ \hline
$\texttt{+} \ell_{\text{triplet}} \texttt{+} \ell_{\text{static}}$ & 3.32 & 0.89\\
\bottomrule[1.2pt]
\end{tabular}
\caption{With respect to loss functions}
\label{tab:ablation_loss}
\end{subtable}}
\caption{\textbf{Ablation study of proposed method on Chair-CAD} \cite{aubry2014seeing}. In (a), we evaluate contributions of latent space optimization ($\mathsf{Z}$). In (b), we evaluate contributions of $\ell_{\text{triplet}}$ and $\ell_{\text{static}}$ in four combinations. \texttt{+} and \texttt{-} indicate presence and absence of the terms, respectively.}
\vspace{-1em}
\end{table}
In this section, we demonstrate the contribution of the different components of our proposed methodology on the Chair-CAD \cite{aubry2014seeing} dataset. For all the experiments, we randomly generate 500 videos using our model with the learned latent vector dictionary. We divide the ablation study into two parts. First, we present the results for the impact of the learned latent vectors on the network modules. For this, we generate videos once with the learned latent vectors ($\texttt{+}\mathsf{Z}$), and once with latent vectors randomly sampled from a different distribution ($\texttt{-}\mathsf{Z}$). The inter-dependency of our model weights and the learned latent vectors can be seen in Tab.~\ref{tab:ablation_z}. We observe a relative decrease of 16.16\% in MCS from 3.96 to 3.32, and a relative increase of 18.66\% in FCS. This shows that optimization of the latent space in the proposed method is important for good quality video generation.
Second, we investigate the impact of the proposed losses on the method. Specifically, we look into the four possible combinations of $\ell_{\text{triplet}}$ and $\ell_{\text{static}}$. The results are presented in Tab.~\ref{tab:ablation_loss}. It can be observed that the combination of the triplet loss $\ell_{\text{triplet}}$ and the static loss $\ell_{\text{static}}$ provides the best result when employed together, indicated by the relative decrease of 14.26\% in MCS from 3.83 to 3.32.
\section{Conclusion}
We present a non-adversarial approach for synthesizing videos by jointly optimizing both network weights and input latent space.
Specifically, our model consists of a global static latent variable for content features, frame-specific transient latent variables, a deep convolutional generator, and a recurrent neural network, which are trained using a regression-based reconstruction loss together with a triplet-based loss.
Our approach allows us to generate a diverse set of videos of almost uniform quality, perform frame interpolation, and generate videos unseen during training. Experiments on three standard datasets show the efficacy of our proposed approach over state-of-the-art methods.
\textbf{Acknowledgements.} The work was partially supported by NSF grant 1664172 and ONR grant N00014-19-1-2264.
\FloatBarrier
{\small
\bibliographystyle{ieee_fullname}
\subsection{The betting market for the EPL}
Gambling on soccer is a global industry with revenues between \$700 billion and \$1 trillion a year (see ``Football Betting - the Global Gambling Industry worth Billions,'' BBC Sport). Betting on the result of a soccer match is a rapidly growing market, and online real-time odds exist (Betfair, Bet365, Ladbrokes). Market odds for all possible score outcomes ($0-0, 1-0, 0-1, 2-0, \ldots$) as well as outright win, lose and draw are available in real time. In this paper, we employ a two-parameter probability model based on a Skellam process and a non-linear objective function to extract the expected scoring rates for each team from the odds matrix. The expected scoring rates then define the implied volatility of the game.
A key feature of our analysis is to use the real-time odds to re-calibrate the expected scoring rates instantaneously as events evolve in the game. This allows us to assess how market expectations change according to exogenous events such as corner kicks, goals, and red cards. A plot of the implied volatility provides a diagnostic tool to show how the market reacts to event information. In particular, we study the evolution of the odds implied final score prediction over the course of the game. Our dynamic Skellam model fits the scoring data well in a calibration study of 1520 EPL games from the 2012 - 2016 seasons.
The goal of our study is to show how a parsimonious two-parameter model can flexibly model the evolution of the market odds matrix of final scores. We provide a non-linear objective function to fit our Skellam model to the instantaneous market odds matrix. We then define the implied volatility of an EPL game and use this as a diagnostic to show how the market's expectations change over the course of a game.
One advantage of viewing market odds through the lens of a probability model is the ability to obtain more accurate estimates of winning probabilities. For example, a typical market "vig" (or liquidity premium for bookmakers to make a return) is $5-8\%$ in the win, lose, draw market. Now there is also extra information in the final score odds about the win odds. Our approach helps to extract that information. Another application of the Skellam process is to model final score outcomes as a function of characteristics (see \cite{Karlis:2003ck, Karlis:2009dq}.)
The rest of the paper is outlined as follows. The next subsection provides connections with existing research. Section 2 presents our Skellam process model for representing the difference in goals scored. We then show how to make use of an odds matrix while calibrating the model parameters. We calculate a dynamic implied prediction of any score and hence win, lose and draw outcomes, using real-time online market odds. Section 3 illustrates our methodology using an EPL game between Everton and West Ham during the 2015-2016 season. Finally, Section 4 discusses extensions and concludes with directions for future research.
\subsection{Connections with Existing Work}
There is considerable interest in developing probability models for the evolution of the score of sporting events.
\cite{Stern:1994hj} and \cite{Polson:2015ira} propose a continuous time Brownian motion model for the difference in scores in a sporting event and show how to calculate the implied volatility of a game.
We build on their approach by using a difference of Poisson processes (a.k.a. Skellam process) for the discrete evolution of the scores of an EPL game, see also
\cite{Karlis:2003ck, Karlis:2009dq} and \cite{Koopman2014}.
Early probabilistic models (\citealt{Lee:1997ct}) predicted the outcome of soccer matches using independent Poisson processes. Later models incorporate a correlation between the two scores and model the number of goals scored by each team using bivariate Poisson models (see \cite{Maher:1982hr} and \cite{Dixon:1997jc}). Our approach follows \cite{Stern:1994hj} by modeling the score difference (a.k.a. margin of victory), instead of modeling the number of goals and the correlation between scores directly.
There is also an extensive literature on soccer gambling and market efficiency. For example, \cite{Vecer2009} estimates the scoring intensity in a soccer game from betting markets. \cite{Dixon:2004gj} presents a detailed comparison of odds set by different bookmakers. \cite{Fitt:2009iv} uses market efficiency to analyze the mispricing of cross-sectional odds
and \cite{Fitt:2005bj} models online soccer spread bets.
Another line of research asks whether betting markets are efficient and, if not, how to exploit potential inefficiencies in the betting market. For example, \cite{Levitt2004} discusses the structural difference between the gambling market and financial markets. The study examines whether bookmakers are more skilled at game prediction than bettors and in turn exploit bettor biases by setting prices that deviate from the market clearing price. \cite{Avery:1999jg} examine the hypothesis that sentimental bettors act like noise traders and can affect the path of prices in soccer betting markets.
\section{Skellam Process for EPL scores}
To model the outcome of a soccer game between team A and team B, we let the difference in scores, $N(t)=N_A(t)-N_B(t)$ where
$N_A(t)$ and $N_B(t)$ are the team scores at time point $t$. Negative values of $N(t)$ indicate that team A is behind. The process begins at $N(0) = 0$ and ends at time one, with $N(1)$ representing the final score difference. The probability $\mathbb{P}(N(1)>0)$ represents the ex-ante odds of team A winning.
Half-time score betting, which is common in Europe, is available for the distribution of $N(\frac{1}{2})$.
We develop a probabilistic model for the distribution of $N(1)$ given $N(t)=\ell$ where $\ell$ is the current lead. This model, together with the current market odds can be used to infer the expected scoring rates of the two teams and then to define the implied volatility of the outcome of the match. We let $ \lambda^A$ and $ \lambda^B $ denote the expected scoring rates for the whole game. We allow for the possibility that the scoring abilities (and their market expectations) are time-varying, in which case we denote the expected scoring rates after time $t$ by $ \lambda^A_t $ and $\lambda^B_t$ respectively, instead of $ \lambda^A(1-t) $ and $\lambda^B(1-t)$.
\subsection{Implied Score Prediction from EPL Odds}
The Skellam distribution is defined as the difference between two independent Poisson variables, see \cite{Skellam:1946kb}, \cite{Sellers:2012uy}, \cite{Alzaid:2010ua}, and \cite{BarndorffNielsen:2012tx}. \cite{Karlis:2009dq} show how the Skellam distribution can be extended to a difference of distributions which have a specific trivariate latent variable structure.
Following \cite{Karlis:2003ck}, we decompose the scores of each team as
\begin{equation}
\left\{
\begin{aligned}
N_A(t) &=& W_A(t)+W(t) \\
N_B(t) &=& W_B(t)+W(t)
\end{aligned}
\right.
\end{equation}
where $W_A(t)$, $W_B(t)$ and $W(t)$ are independent processes with
$W_A(t) \sim Poisson (\lambda^A t)$, $W_B(t) \sim Poisson (\lambda^B t) . $
Here $W(t)$ is a non-negative integer-valued process to induce a correlation between the numbers of goals scored.
By modeling the score difference, $N(t)$, we avoid having to specify the distribution of $W(t)$ as the difference in goals scored is independent of $W(t)$. Specifically, we have
a Skellam distribution
\begin{equation}
N(t) = N_A(t) - N_B(t) = W_A(t) - W_B(t) \sim Skellam(\lambda^A t,\lambda^B t).
\label{skellam}
\end{equation}
where $ \lambda^A t $ is the cumulative expected scoring rate on the interval $ [0,t]$.
At time $t$, we have the conditional distributions
\begin{equation}
\left\{
\begin{aligned}
W_A(1) - W_A(t) &\sim& Poisson (\lambda^A(1-t)) \\
W_B(1) - W_B(t) &\sim& Poisson (\lambda^B(1-t)) \\
\end{aligned}
\right.
\end{equation}
Now let $N^*(1-t)$ denote the score difference of the sub-game which starts at time $t$ and ends at time one, so that its duration is $(1-t)$. By construction, $N(1) = N(t) + N^*(1-t)$. Since $N^*(1-t)$ and $N(t)$ are differences of two Poisson processes on two disjoint time periods, by the properties of the Poisson process, $N^*(1-t)$ and $N(t)$ are independent.
Hence, we can re-express equation (\ref{skellam}) in terms of $N^*(1-t)$, and deduce
\begin{equation}
N^*(1-t) = W^*_A(1-t) - W^*_B(1-t) \sim Skellam(\lambda^A_t,\lambda^B_t)
\end{equation}
where $W^*_A(1-t) = W_A(1) - W_A(t)$, $\lambda^A = \lambda^A_0$ and $\lambda^A_t=\lambda^A(1-t)$. A natural interpretation of the expected scoring rates, $\lambda^A_t$ and $\lambda^B_t$, is that they reflect the ``net'' scoring ability of each team from time $t$ to the end of the game. The term $W(t)$
models a common strength due to external factors, such as the weather. The ``net'' scoring abilities of the two teams are assumed to be independent of each other as well as of the common strength factor.
We can calculate the probability of any particular score difference, given by $\mathbb{P}(N(1)=x|\lambda^A,\lambda^B)$, at the end of the game where the $ \lambda$'s are estimated from the matrix of market odds. Team strength and "net" scoring ability can be influenced by various underlying factors, such as the offensive and defensive abilities of the two teams. The goal of our analysis is to only represent these parameters at every instant as a function of the market odds matrix for all scores.
To derive the implied winning probability, we use the law of total probability. The probability mass function of a Skellam random variable is the convolution of two Poisson distributions:
\begin{eqnarray}
\mathbb{P}(N(1)=x|\lambda^A,\lambda^B)
&=&\sum_{k=0}^\infty \mathbb{P}(W_B(1)=k-x|W_A(1)=k, \lambda^B) \mathbb{P}(W_A(1)=k|\lambda^A) \nonumber\\
&=&\sum_{k=max\{0,x\}}^\infty \left\{e^{-\lambda^B}\frac{(\lambda^B)^{k-x}}{(k-x)!}\right\}\left\{e^{-\lambda^A}\frac{(\lambda^A)^k}{k!}\right\}\nonumber\\
&=&e^{-(\lambda^A+\lambda^B)} \sum_{k=max\{0,x\}}^\infty\frac{(\lambda^B)^{k-x}(\lambda^A)^k}{(k-x)!k!}\nonumber \\
&=&e^{-(\lambda^A+\lambda^B)} \left(\frac{\lambda^A}{\lambda^B}\right)^{x/2}I_{|x|}(2\sqrt{\lambda^A\lambda^B})
\end{eqnarray}
where $I_r(x)$ is the modified Bessel function of the first kind (for full details, see \cite{Alzaid:2010ua}), which has the series representation
\[ I_r(x)=\left(\frac{x}{2}\right)^r \sum_{k=0}^{\infty} \frac{(x^2/4)^k}{k!\Gamma(r+k+1)}. \]
The probability of home team A winning is given by
\begin{equation}
\mathbb{P}(N(1)>0|\lambda^A,\lambda^B)=\sum_{x=1}^\infty \mathbb{P}(N(1)=x|\lambda^A,\lambda^B).
\end{equation}
In practice, we truncate the number of possible goals since the probability of an extreme score difference is negligible. Unlike the Brownian motion model for the evolution of the outcome in a sports game (\cite{Stern:1994hj}, \cite{Polson:2015ira}), the probability of a draw in our setting is not zero. Instead, $\mathbb{P}(N(1)=0|\lambda^A,\lambda^B)>0$ depends on the sum and product of two parameters $\lambda^A$ and $\lambda^B$ and thus the odds of a draw are non-zero.
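For illustration, these quantities are straightforward to evaluate numerically, e.g., with SciPy's Skellam distribution and a truncated support (the truncation level below is our choice):
\begin{verbatim}
from scipy.stats import skellam

def outcome_probs(lam_a, lam_b, lead=0, max_diff=15):
    # Win/draw/lose probabilities for team A, truncating the
    # remaining score difference at |x| <= max_diff.
    pmf = {x: skellam.pmf(x, lam_a, lam_b)
           for x in range(-max_diff, max_diff + 1)}
    win = sum(p for x, p in pmf.items() if lead + x > 0)
    draw = pmf.get(-lead, 0.0)
    return win, draw, 1.0 - win - draw

print(outcome_probs(1.5, 1.2))   # pre-game, level scores
\end{verbatim}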
For two evenly matched teams with $\lambda^A=\lambda^B=\lambda$, we have
\begin{equation}
\mathbb{P}(N(1)=0|\lambda^A=\lambda^B=\lambda)
= e^{-2\lambda}I_0(2\lambda)
= \sum_{k=0}^{\infty} \frac{1}{(k!)^2}\left(\frac{\lambda^k}{e^\lambda}\right)^2.
\end{equation}
Figure \ref{draw} shows that this probability is a monotone decreasing function of $\lambda$ and so two evenly matched teams with large $\lambda$'s are less likely to achieve a draw.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{draw.pdf}
\caption{Left: Probability of a draw for two evenly matched teams. Right: Probability of score differences for two evenly matched teams. Lambda values are denoted by different colors.}
\label{draw}
\end{figure}
Another quantity of interest is the conditional probability of winning as the game progresses. If the current lead at time $t$ is $\ell$, i.e., $N(t)=\ell=N_A(t)-N_B(t)$,
the Poisson property implies that the final score difference $(N(1)|N(t)=\ell)$ can be calculated using the fact that $N(1)=N(t)+N^*(1-t)$ where $N(t)$ and $N^*(1-t)$ are independent. Specifically, conditioning on $N(t)=\ell$, we have the identity
\[ N(1)=N(t)+N^*(1-t)=\ell+Skellam(\lambda^A_t,\lambda^B_t).\]
We are now in a position to find the conditional distribution ($N(1)=x|N(t)=\ell$) for every time point $t$ of the game given the current score. Simply put, we have the time homogeneous condition
\begin{eqnarray}
\mathbb{P}(N(1)=x|\lambda^A_t,\lambda^B_t,N(t)=\ell)&=&\mathbb{P}(N(1)-N(t)=x-\ell |\lambda^A_t,\lambda^B_t,N(t)=\ell)\nonumber\\
&=&\mathbb{P}(N^* (1-t)=x-\ell |\lambda^A_t,\lambda^B_t)
\end{eqnarray}
where $\lambda^A_t$, $\lambda^B_t$, $\ell$ are given by market expectations at time $t$.
Two conditional probabilities of interest are the chance that the home team A wins,
\begin{eqnarray}
\mathbb{P}(N(1)>0|\lambda^A_t,\lambda^B_t,N(t)=\ell)&=&\mathbb{P}(\ell+ N^*(1-t)>0|\lambda^A_t,\lambda^B_t)\nonumber\\
&=&\mathbb{P}(Skellam(\lambda^A_t,\lambda^B_t)>-\ell |\lambda^A_t,\lambda^B_t)\nonumber\\
&=&\sum_{x>-\ell}e^{-(\lambda^A_t+\lambda^B_t)}\left(\frac{\lambda^A_t}{\lambda^B_t}\right)^{x/2}I_{|x|}(2\sqrt{\lambda^A_t\lambda^B_t}).
\end{eqnarray}
and the conditional probability of a draw at time $t$ is
\begin{eqnarray}
\mathbb{P}(N(1)=0|\lambda^A_t,\lambda^B_t,N(t)=\ell)&=&\mathbb{P}(\ell+N^*(1-t)=0|\lambda^A_t,\lambda^B_t)\nonumber\\
&=&\mathbb{P}(Skellam(\lambda^A_t,\lambda^B_t)=-\ell |\lambda^A_t,\lambda^B_t)\nonumber\\
&=&e^{-(\lambda^A_t+\lambda^B_t)}\left(\frac{\lambda^A_t}{\lambda^B_t}\right)^{-\ell/2}I_{|\ell |}(2\sqrt{\lambda^A_t\lambda^B_t}).
\end{eqnarray}
\noindent The conditional probability at time $t$ of home team A losing is
$ 1-\mathbb{P}(N(1)>0|\lambda^A_t,\lambda^B_t,N(t)=\ell) $.
We now turn to the calibration of our model from given market odds.
\subsection{Market Calibration}
Our information set at time $t$, denoted by $\mathcal{I}_t$, includes the current lead $N(t) = \ell$ and the market odds for $\left\{Win, Lose, Draw, Score\right\}_t$, where
$Score_t = \{ ( i - j ) : i, j = 0, 1, 2, \ldots\}$. These market odds can be used to calibrate a Skellam distribution, which has only two parameters $\lambda^A_t$ and $\lambda^B_t$. The best fitting Skellam model with parameters $\{\hat\lambda^A_t,\hat\lambda^B_t\}$ will then provide a better estimate of the market's information concerning the outcome of the game than any individual market (such as the win odds), as each individual market is subject to a ``vig'' and liquidity. Suppose that the fractional odds for all possible final score outcomes are given by a bookmaker; for example, suppose the quoted odds on the final score 2-1 are 3/1. In this case, the bookmaker pays out three times the amount staked by the bettor if the outcome is indeed 2-1. Fractional odds are used in the UK, while money-line odds are favored by American bookmakers, with $2:1$ (``two-to-one'') implying that the bettor stands to make a \$200 profit on a \$100 stake. The market implied probability makes the expected winning amount of a bet equal to 0. In our example, the implied probability is $p=1/(1+3)=1/4$ and the expected winning amount is $\mu=-1\cdot(1-1/4)+3\cdot(1/4)=0$. We denote these odds as $odds(2,1)=3$. To convert all the available odds to implied probabilities, we use the identity
\[ \mathbb{P}(N_A(1) = i, N_B(1) = j)=\frac{1}{1+odds(i,j)}. \]
The market odds matrix, $O$, with elements $o_{ij}=odds(i-1,j-1)$, $i,j=1,2,3,\ldots$ provides all possible combinations of final scores. Odds on extreme outcomes are not offered by the bookmakers; since these probabilities are tiny, we set them equal to 0. The sum of the implied probabilities is still larger than 1 (see \cite{Dixon:1997jc} and \cite{Polson:2015ira}). This ``excess'' probability corresponds to a quantity known as the ``market vig.'' For example, if the sum of all the implied probabilities is 1.1, then the expected profit of the bookmaker is 10\%. To account for this phenomenon, we scale the probabilities to sum to 1 before estimation.
To estimate the expected scoring rates, $\lambda^A_t$ and $\lambda^B_t$, for the sub-game $N^*(1-t)$, the odds from a bookmaker should be adjusted by $N_A(t)$ and $N_B(t)$. For example, if $N_A(0.5)=1$, $N_B(0.5)=0$ and $odds(2,1)=3$ at half time, these observations say that the odds for the second-half score being 1-1 are 3 (the outcomes for the whole game and the first half are 2-1 and 1-0 respectively, so the outcome for the second half is 1-1). The adjusted ${odds}^*$ for $N^*(1-t)$ are calculated using the original odds as well as the current scores and given by
\begin{equation}
{odds}^*(x,y)=odds(x+N_A(t),y+N_B(t)).
\end{equation}
At time $t$ $(0\leq t\leq 1)$, we calculate the implied conditional probabilities of score differences using odds information
\begin{equation}
\mathbb{P}(N(1)=k|N(t)=\ell)=\mathbb{P}(N^*(1-t)=k-\ell)=\frac{1}{c}\sum_{i-j=k-\ell}\frac{1}{1+{odds}^*(i,j)}\end{equation}
where $c=\sum_{i,j} \frac{1}{1+{odds}^*(i,j)}$ is a scale factor, $\ell=N_A(t)-N_B(t)$, $i,j\geq 0$ and $k=0,\pm 1,\pm 2\ldots$.
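A minimal sketch of this conversion, from a (possibly incomplete) odds matrix and the current score to normalized implied probabilities of the remaining score difference:
\begin{verbatim}
def implied_diff_probs(odds, n_a=0, n_b=0):
    # odds[i][j]: fractional odds for final score i-j, or None if
    # not offered.  n_a, n_b: current scores N_A(t), N_B(t).
    probs, total = {}, 0.0
    for i, row in enumerate(odds):
        for j, o in enumerate(row):
            if o is None or i < n_a or j < n_b:
                continue
            p = 1.0 / (1.0 + o)        # implied probability
            total += p                 # sums above 1: the "vig"
            k = (i - n_a) - (j - n_b)  # remaining difference
            probs[k] = probs.get(k, 0.0) + p
    return {k: v / total for k, v in probs.items()}
\end{verbatim}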
Moments of the Poisson distribution make it straightforward to derive the moments of a Skellam random variable with parameters $\lambda^A$ and $\lambda^B$. The unconditional mean and variance are given by $$E[N(1)]=E[W_A(1)]-E[W_B(1)]=\lambda^A-\lambda^B,$$
$$V[N(1)]=V[W_A(1)]+V[W_B(1)]=\lambda^A+\lambda^B.$$ Therefore, the conditional moments are given by
\begin{equation}
\left\{
\begin{aligned}
E[N(1)|N(t)=\ell]&=\ell+(\lambda^A_t-\lambda^B_t),\\
V[N(1)|N(t)=\ell]&=\lambda^A_t+\lambda^B_t.
\end{aligned}
\right.
\end{equation}
We also need to ensure that $\vert\hat E[N(1)|N(t)=\ell]-\ell\vert\leq \hat V[N(1)|N(t)=\ell]$, since otherwise a method of moments estimate of the $\lambda$'s, given by the solution to
\begin{equation}
\left\{
\begin{aligned}
\hat E[N(1)|N(t)=\ell]&=\ell+(\lambda^A_t-\lambda^B_t),\\
\hat V[N(1)|N(t)=\ell]&=\lambda^A_t+\lambda^B_t,
\end{aligned}
\right.
\end{equation}
where $\hat E$ and $\hat V$ are the expectation and variance calculated using the market implied conditional probabilities, could be negative. To address this issue, we define the residuals
\begin{equation}
\left\{
\begin{aligned}
D_E&=\hat E[N(1)|N(t)=\ell]-[\ell+(\lambda^A_t-\lambda^B_t)],\\
D_V&=\hat V[N(1)|N(t)=\ell]-(\lambda^A_t+\lambda^B_t).
\end{aligned}
\right.
\end{equation}
We then calibrate parameters by adding the constraints $\lambda^A_t\geq 0$ and $\lambda^B_t\geq 0$ and solving the following equivalent constrained optimization problem.
\begin{eqnarray}
\left(\hat\lambda^A_t,\hat\lambda^B_t\right) &=& \underset{\lambda^A_t,\lambda^B_t}{\arg\min} \left\{D_E^2+D_V^2\right\}\\
&\text{subject to} & \lambda^A_t\geq 0, \lambda^B_t\geq 0 \nonumber
\end{eqnarray}
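A sketch of this calibration step, matching the implied mean and variance under nonnegativity constraints (SciPy's bound-constrained minimizer is one convenient choice):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def calibrate(diff_probs, lead=0.0):
    # diff_probs: {score difference k: implied probability},
    # e.g. the output of implied_diff_probs above.
    ks = np.array(list(diff_probs.keys()), dtype=float)
    ps = np.array(list(diff_probs.values()))
    m = ks @ ps                        # implied mean
    v = ((ks - m) ** 2) @ ps           # implied variance
    obj = lambda lam: ((m - lead - lam[0] + lam[1]) ** 2
                       + (v - lam[0] - lam[1]) ** 2)
    res = minimize(obj, x0=[1.0, 1.0],
                   bounds=[(0.0, None), (0.0, None)])
    return res.x                       # (lam_a_t, lam_b_t)
\end{verbatim}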
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{prob.pdf}
\caption{The Skellam process model for winning margin and game simulations. The top left panel shows the outcome distribution using odds data before the match starts. Each bar represents the probability of a distinct final score difference, with its color corresponding to the result of win/lose/draw. Score differences larger than 5 or smaller than -5 are not shown. The top right panel shows a set of simulated Skellam process paths for the game outcome. The bottom row has the two figures updated using odds data available at half-time.}
\label{prob}
\end{figure}
Figure \ref{prob} illustrates the simulated evolution of an EPL game between Everton and West Ham (March 5th, 2016) with the estimated parameters. It provides a discretized version of Figure 1 in \cite{Polson:2015ira}. The outcome probabilities before the game and at half-time are given in the two left panels. The top right panel illustrates a simulation-based approach to visualizing how the model works in the dynamic evolution of the score difference. In the bottom right panel, from half-time onwards, we also simulate a set of possible Monte Carlo paths to the end of the game. This illustrates the discrete nature of our Skellam process and how the scores evolve.
\subsection{Model Diagnostics}
To assess the performance of our score-difference Skellam model calibration to the market odds, we collected data from {\tt ladbrokes.com} on the correct score odds of 18 EPL games (from October 15th to October 22nd, 2016) and plot the calibration results in Figure \ref{18games}. The Q-Q plot of $\log(odds)$ is also shown. On average, there are 13 different outcomes per game, i.e., $N(1) = -6, -5, \ldots, 0, \ldots, 5, 6$; in total, 238 different outcomes are used. We compare our Skellam implied probabilities with the market implied probabilities for every outcome of the 18 games. If the model calibration is sufficient, all the data points should lie on the diagonal line.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{18games.pdf}
\caption{Left: Market implied probabilities for the score differences versus Skellam implied probabilities. Every data point represents a particular score difference; Right: Market log(odds) quantiles versus Skellam implied log(odds) quantiles. Market odds (from {\tt ladbrokes.com}) of 18 games in EPL 2016-2017 are used (on average 13 score differences per game). The total number of outcomes is 238.}
\label{18games}
\end{figure}
The left panel of Figure \ref{18games} demonstrates that our Skellam model is calibrated to the market odds sufficiently well, except for the underestimated draw probabilities. \cite{Karlis:2009dq} describe this underestimation phenomenon in a Poisson-based model for the number of goals scored. Following their approach, we apply a zero-inflated version of the Skellam distribution to improve the fit on draw probabilities, namely
\begin{equation}
\left\{
\begin{aligned}
\tilde{P}(N(1) = 0) &= p + (1-p) P(N(1) = 0)\\
\tilde{P}(N(1) = x) &= (1-p) P(N(1) = x) \qquad \text{if }x\neq 0.
\end{aligned}
\right.
\end{equation}
Here $0<p<1$ is an inflation factor and $\tilde{P}$ denotes the inflated probabilities. We also consider another type of inflation here
\begin{equation}
\left\{
\begin{aligned}
\tilde{P}(N(1) = 0) &= (1+\theta) P(N(1) = 0)\\
\tilde{P}(N(1) = x) &= (1-\gamma) P(N(1) = x) \qquad \text{if }x\neq 0
\end{aligned}
\right.
\end{equation}
where $\theta$ is the inflation factor and $P(N(1) = 0) = \gamma/(\gamma+\theta)$.
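For instance, the first type of inflation can be evaluated as in the following sketch; in practice $p$ would be fit alongside the $\lambda$'s:
\begin{verbatim}
from scipy.stats import skellam

def inflated_pmf(x, lam_a, lam_b, p):
    # Mix a point mass at zero (weight p) with the Skellam pmf.
    return p * (x == 0) + (1.0 - p) * skellam.pmf(x, lam_a, lam_b)
\end{verbatim}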
Both types of inflation factor have a corresponding interpretation in terms of how bookmakers set odds. With the first type of factor, the bookmakers generate two different sets of probabilities: one specifically for the draw probability (namely the inflation factor $p$), and the other for all the outcomes using the Skellam model. The ``market vig'' for all the outcomes is a constant. With the second type, the bookmakers use the Skellam model to generate the probabilities for all the outcomes and then apply a larger ``market vig'' to draws than to other outcomes. \cite{yates1982} also point out the ``collapsing'' tendency in forecasting behavior, whereby the bookmakers are inclined to report forecasts of 50\% when they feel they know little about the event. In the right panel of Figure \ref{18games}, we see that the Skellam implied $\log(odds)$ has a heavier right tail than the market implied $\log(odds)$. This effect results from the overestimation of extreme outcomes, which in turn is due to a market microstructure effect from the market ``vig''.
\begin{figure}[ht!]
\centering
\includegraphics[width=7in, height=3.5in]{inflation2.pdf}
\caption{Left: Market implied probabilities of win and draw. The fitted curves are Skellam implied probabilities with fixed $\lambda^A\lambda^B = 1.8$. Right: Market odds and result frequency of home team winning. 1520 EPL games from 2012 to 2016 are used. The dashed line represents: Frequency = Market Implied Probability}
\label{inflation}
\end{figure}
To assess the out-of-sample predictive ability of the Skellam model, we analyze the market (win, lose, draw) odds for 1520 EPL games (from 2012 to 2016, 380 games per season). The sample covariance of the end-of-game scores, $N_A(1)$ and $N_B(1)$, is close to 0. If we assume the parameters stay the same across games, the estimates are $\hat\lambda^A=1.5$ and $\hat\lambda^B=1.2$. Since the probabilities of win, lose and draw sum to 1, we only plot the market implied probabilities of win and draw. In the left panel of Figure \ref{inflation}, the draw probability is approximately a non-linear function of the win probability. To illustrate our model, we set the value of $\lambda^A\lambda^B = 1.5 \times 1.2 = 1.8$ and plot the curve of Skellam implied probabilities (red line). We further provide the inflated Skellam probabilities (blue line for the first type and green line for the second type). As expected, the non-inflated Skellam model (red line) underestimates the draw probabilities, while the second type of inflated Skellam model (green line) produces the better fit. We also group games by the market implied winning probability of the home team, $P(N(1)>0)$: (0.05,0.1], (0.1,0.15], $\cdots$, (0.8,0.85], and calculate the frequency of home team wins for each group. In the right panel of Figure \ref{inflation}, the barplot of frequencies (the x-axis shows the scaled odds) shows that the market is efficient, i.e., the frequency is close to the corresponding market implied probability, and our Skellam model is calibrated to the market outcome for this dataset.
\subsection{Time-Varying Extension}
One extension that is clearly warranted is allowing for time-varying $\{\lambda^A_t, \lambda^B_t\}$ where the Skellam model is re-calibrated dynamically through updated market odds during the game. We use the current $\{\lambda^A_t, \lambda^B_t\}$ to project possible results of the match in our Skellam model. Here $\{\lambda^A_t, \lambda^B_t\}$ reveal the market expectation of scoring difference for both teams from time $t$ to the end of the game as the game progresses. Similar to the martingale approach of \cite{Polson:2015ira}, $\{\lambda^A_t, \lambda^B_t\}$ reveal the best prediction of the game result. From another point of view, this approach is the same as assuming homogeneous rates for the rest of the game.
An alternative approach to time-varying $\{\lambda^A_t, \lambda^B_t\}$ is to use a Skellam regression with conditioning information such as possession percentages, shots (on goal), corner kicks, yellow cards, red cards, etc. We would expect jumps in the $\{\lambda^A_t, \lambda^B_t\}$ during the game when some important events happen. A typical structure takes the form
\begin{equation}
\left\{
\begin{aligned}
\log(\lambda^A_t) &= \alpha_A + \beta_A X_{A,t-1} \\
\log(\lambda^B_t) &= \alpha_B + \beta_B X_{B,t-1},
\end{aligned}
\right.
\end{equation}
estimated using standard log-linear regression.
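To make this concrete, a minimal sketch of such a regression-driven Skellam evaluation is given below (our own illustration: the covariate choice, coefficient values and shot counts are hypothetical, and {\tt scipy.stats.skellam} supplies the probability mass function):
\begin{verbatim}
import numpy as np
from scipy.stats import skellam

# Illustrative (not estimated) log-linear coefficients.
alpha_A, beta_A = 0.30, 0.05
alpha_B, beta_B = 0.10, 0.04

def intensities(x_A, x_B):
    # Log-linear link: log(lambda) = alpha + beta * covariate.
    return np.exp(alpha_A + beta_A * x_A), np.exp(alpha_B + beta_B * x_B)

# e.g., 4 vs 2 shots on goal so far in the game.
lam_A, lam_B = intensities(x_A=4, x_B=2)

# Skellam probabilities for the final score difference N_A - N_B.
diffs = np.arange(-4, 6)
probs = skellam.pmf(diffs, lam_A, lam_B)
print(dict(zip(diffs.tolist(), probs.round(4))))
print("P(home win) =", 1 - skellam.cdf(0, lam_A, lam_B))
\end{verbatim}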
Our approach relies on the betting market being efficient, so that the updating odds contain all the information in the game statistics. Using log differences as the dependent variable, within a state space evolution, is another alternative. \cite{Koopman2014} adopt stochastically time-varying densities in modeling the Skellam process. \cite{Barndorff-Nielsen2012a} is another example of the Skellam process, with different integer-valued extensions, in the context of high-frequency financial data. Further analysis is required, and this is a promising area for future research.
\section{Example: Everton vs West Ham (3/5/2016) }
We collect the real-time online betting odds data from {\tt ladbrokes.com} for an EPL game between Everton and West Ham on March 5th, 2016. By collecting real-time online betting data at every 10-minute interval, we can show the evolution of the betting market's prediction of the final result. We do not account for the time added on to either half of the match and focus on a 90-minute game.
\subsection{Implied Skellam Probabilities}
\begin{table}[ht!]
\centering
\begin{tabular}{@{}ccccccc@{}}
\toprule
Everton \textbackslash West Ham & 0 & 1 & 2 & 3 & 4 & 5 \\ \midrule
0 & 11/1 & 12/1 & 28/1 & 66/1 & 200/1 & 450/1 \\
1 & 13/2 & 6/1 & 14/1 & 40/1 & 100/1 & 350/1 \\
2 & 7/1 & 7/1 & 14/1 & 40/1 & 125/1 & 225/1 \\
3 & 11/1 & 11/1 & 20/1 & 50/1 & 125/1 & 275/1 \\
4 & 22/1 & 22/1 & 40/1 & 100/1 & 250/1 & 500/1 \\
5 & 50/1 & 50/1 & 90/1 & 150/1 & 400/1 & \\
6 & 100/1 & 100/1 & 200/1 & 250/1 & & \\
7 & 250/1 & 275/1 & 375/1 & & & \\
8 & 325/1 & 475/1 & & & & \\ \bottomrule
\end{tabular}
\caption{Original odds data from Ladbrokes before the game started\label{Table1}}
\end{table}
Table \ref{Table1} shows the raw odds data right before the game started. We need to transform the odds data into probabilities. For example, for the outcome 0-0, odds of 11/1 are equivalent to a probability of 1/12. Then we can calculate the marginal probability of every score difference from -4 to 5. We neglect the extreme scores with small probabilities and rescale the sum of the event probabilities to one.
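A minimal sketch of this transformation (our own illustration, showing only a subset of the cells in Table \ref{Table1}) is:
\begin{verbatim}
from fractions import Fraction

# Fractional odds "a/b" imply probability b/(a+b); e.g., 11/1 -> 1/12.
raw_odds = {(0, 0): "11/1", (1, 0): "13/2", (1, 1): "6/1",
            (0, 1): "12/1", (2, 1): "7/1"}  # subset of Table 1

probs = {s: float(1 / (Fraction(o) + 1)) for s, o in raw_odds.items()}

# Marginal probability of each score difference, rescaled to sum to one.
diff_prob = {}
for (home, away), p in probs.items():
    diff_prob[home - away] = diff_prob.get(home - away, 0.0) + p
total = sum(diff_prob.values())
diff_prob = {d: p / total for d, p in diff_prob.items()}
print(diff_prob)
\end{verbatim}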
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.6]{comparison.pdf}
\caption{Market implied probabilities versus the probabilities estimated by the model at different time points, using the parameters given in Table \ref{lambda} \label{comparison}.}
\end{figure}
In Figure \ref{comparison}, the probabilities estimated by the model are compared with the market implied probabilities. As we see, during the course of the game, the Skellam assumption suffices to approximate the market expectation of the score-difference distribution. This set of plots is evidence of the goodness-of-fit of the Skellam model.
\begin{table}[ht!]
\centering
\begin{tabular}{c c c c c c c c c c c}
\toprule
Score difference&-4&-3&-2&-1&0&1&2&3&4&5\\
\midrule
Market Prob. (\%)& 1.70 & 2.03 & 4.88 &12.33& 21.93 &22.06 &16.58 &9.82 &4.72 &2.23\\
Skellam Prob.(\%)& 0.78 & 2.50 & 6.47 & 13.02 & 19.50 & 21.08 & 16.96 & 10.61 & 5.37 & 2.27\\
\bottomrule
\end{tabular}
\caption{Market implied probabilities for the score differences versus Skellam implied probabilities before the game. The estimated parameters are $\hat\lambda^A=2.33$, $\hat\lambda^B=1.44.$\label{Table2}}
\end{table}
Table \ref{Table2} shows the model implied probability for the outcome of score differences before the game, compared with the market implied probability. As we see, the Skellam model appears to have longer tails. Unlike the independent Poisson modeling in \cite{Dixon:1997jc}, our model flexibly accommodates the correlation between the two teams' scores. However, the trade-off for this flexibility is that we only know the probability of the score difference instead of the exact scores.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{game2.png}
\caption{The betting market data for Everton and West Ham is from {\tt ladbrokes.com}. Market implied probabilities (expressed as percentages) for three different results (Everton wins, West Ham wins and draw) are marked by three distinct colors, which vary dynamically as the game proceeds. The solid black line shows the evolution of the implied volatility (defined in Section \ref{IV}). The dashed line shows significant events in the game, such as goals and red cards. Five goals in this game are 13' Everton, 56' Everton, 78' West Ham, 81' West Ham and 90' West Ham.\label{Figure2}}
\end{figure}
Finally, we can plot these probability paths in Figure \ref{Figure2} to examine the behavior of the two teams and represent the market predictions of the final result. Notably, we see how the win/draw/loss probabilities change at important events during the game: goals and a red card penalty. In such a dramatic game, the winning probability of Everton rises to 90\% before the first goal of West Ham in the 78th minute. The first two goals scored by West Ham, in the space of 3 minutes, completely reverse the probability of winning. The probability of a draw then rises to 90\% until the last-gasp goal of West Ham decides the game.
\subsection{How the Market Forecast Adapts} \label{IV}
A natural question is how the market odds (win, lose, draw and exact score) adjust as the game evolves. This is similar to option pricing, where the Black-Scholes model uses its implied volatility to show how market participants' beliefs change. Our Skellam model works analogously and shows how the market forecast adapts to changing situations during the game. See \cite{Merton:1976ge} for references on jump models.
Our work builds on \cite{Polson:2015ira} who define the implied volatility of a NFL game. For an EPL game, we simply define the implied volatility as $\sigma_{IV,t} = \sqrt{\lambda^A_t + \lambda^B_t}$. As the market provides real-time information about $\lambda^A_t$ and $\lambda^B_t$, we can dynamically estimate $\sigma_{IV,t}$ as the game proceeds. Any goal scored is a discrete Poisson shock to the expected score difference (Skellam process) between the teams, and our odds implied volatility measure will be updated.
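A minimal sketch of this calibration step is given below (our own illustration: it assumes the win and draw probabilities have already been extracted from the live odds, the least-squares matching is one simple choice among several, and the function names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skellam

def calibrate(p_win, p_draw):
    # Fit (lam_A, lam_B) so the Skellam win/draw probabilities
    # match the market implied ones.
    def loss(theta):
        lam_A, lam_B = np.exp(theta)  # keep intensities positive
        w = 1 - skellam.cdf(0, lam_A, lam_B)
        d = skellam.pmf(0, lam_A, lam_B)
        return (w - p_win) ** 2 + (d - p_draw) ** 2
    res = minimize(loss, x0=np.log([1.5, 1.2]), method="Nelder-Mead")
    return np.exp(res.x)

# e.g., market implied P(win) = 0.50, P(draw) = 0.25 at some time t.
lam_A, lam_B = calibrate(0.50, 0.25)
print("implied volatility:", np.sqrt(lam_A + lam_B))
\end{verbatim}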
Figure \ref{Figure2} plots the path of implied volatility throughout the course of the game. Instead of a downward sloping line, we see changes in the implied volatility as critical moments occur in the game. The implied volatility path provides a visualization of the conditional variation of the market prediction for the score difference. For example, when Everton lost a player to a red card penalty in the 34th minute, our estimates $\hat\lambda^A_t$ and $\hat\lambda^B_t$ change accordingly. There is a jump in implied volatility, and our model captures the market's adjustment of its expectation for the game. The changes in $\hat\lambda^A_t$ and $\hat\lambda^B_t$ are consistent with the findings of \cite{Vecer2009}, where the scoring intensity of the penalized team drops while the scoring intensity of the opposing team increases. When a goal is scored in the 13th minute, we see an increase in $\hat\lambda^B_t$: the market expects that the underdog team will press to come back into the game, an effect that has been well-documented in the literature. Another important effect that we observe at the end of the game is that as goals are scored (in the 78th and 81st minutes), the implied volatility increases again, as one might expect.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{iv2.pdf}
\caption{Red line: the path of implied volatility throughout the game, i.e., $\sigma_{t}^{red} = \sqrt{\hat\lambda^A_t+\hat\lambda^B_t}$. Blue lines: the paths of implied volatility with constant $\lambda^A+\lambda^B$, i.e., $\sigma_{t}^{blue} = \sqrt{(\lambda^A+\lambda^B)(1-t)}$. Here $(\lambda^A+\lambda^B) = 1, 2, \ldots, 8$. \label{ivcompare}}
\end{figure}
\begin{table}[ht!]
\centering
\begin{tabular}{c c c c c c c c c c c c}
\toprule
t & 0 & 0.11 & 0.22 & 0.33 & 0.44 & 0.50 & 0.61 & 0.72 & 0.83 & 0.94 & 1\\
\midrule
$\hat\lambda^A_t/(1-t)$ & 2.33 & 2.51 & 2.53 & 2.46 & 1.89 & 1.85 & 2.12 & 2.12 & 2.61 & 4.61 & 0\\
$\hat\lambda^B_t/(1-t)$ & 1.44 & 1.47 & 1.59 & 1.85 & 2.17 & 2.17 & 2.56 & 2.90 & 3.67 & 5.92 & 0\\
\midrule
$(\hat\lambda^A_t+\hat\lambda^B_t)/(1-t)$ & 3.78 & 3.98 & 4.12 & 4.31 & 4.06 & 4.02 & 4.68 & 5.03 & 6.28 & 10.52 &0\\
\midrule
$\sigma_{IV,t}$ & 1.94 & 1.88 & 1.79 & 1.70 & 1.50 & 1.42 & 1.35 & 1.18 & 1.02 & 0.76 & 0\\
\bottomrule
\end{tabular}
\caption{The calibrated $\{\hat\lambda^A_t, \hat\lambda^B_t\}$ divided by $(1-t)$, and the implied volatility during the game. $\{\lambda^A_t, \lambda^B_t\}$ are the expected goals scored over the rest of the game. The less time remaining, the fewer goals are likely to be scored; thus $\{\hat\lambda^A_t, \hat\lambda^B_t\}$ decrease as $t$ increases to 1. Dividing them by $(1-t)$ produces an updated version of the whole-game $\hat\lambda_{0}$'s, which are in general time-varying (but not necessarily decreasing).\label{lambda}}
\end{table}
Figure \ref{ivcompare} compares the updating implied volatility of the game with the implied volatilities for fixed $(\lambda^A+\lambda^B)$. At the beginning of the game, the red line (updating implied volatility) is under the ``$\lambda^A+\lambda^B=4$'' blue line, while at the end of the game it is above the ``$\lambda^A+\lambda^B=8$'' blue line. As we expect, the value of $(\hat\lambda^A_t + \hat\lambda^B_t)/(1-t)$ in Table \ref{lambda} increases throughout the game, implying that the game became more and more intense and that the market continuously updated its beliefs through the odds.
\section{Discussion}
The goal of our analysis is to provide a probabilistic methodology for calibrating real-time market odds for the evolution of the score difference of a soccer game. Rather than directly using game information, we use the current odds market to calibrate a Skellam model that provides a forecast of the final result. To our knowledge, our study is the first to offer an interpretation of the betting market and to show how it reveals the market expectation of the game result through an implied volatility. One area of future research is index betting. For example, a soccer game includes the total goals scored in the match and the margin of superiority (see \cite{Jackson:1994gj}). The latter is the score difference in our model, and so the Skellam process directly applies.
Our Skellam model is also valid for low-scoring sports, such as baseball, hockey or American football, with a discrete series of scoring events. For NFL score prediction, \cite{baker2013} propose a point process model that performs as well as the betting market. On the one hand, our model has the advantage of implicitly modeling the correlation between goals scored by both teams; on the other hand, it ignores the sum of goals scored. For high-scoring sports, such as basketball, the Brownian motion adopted by \cite{Stern:1994hj} is more applicable. \cite{Rosenfeld:1000a} provides an extension of the model that addresses concerns of non-normality and uses a logistic distribution to estimate the relative contribution of the lead and the remaining advantage. Another avenue for future research is to extend the Skellam model to allow for the dependent jumpiness of scores, which lies somewhere in between these two extremes (see \cite{Glickman:2012dt}, \cite{Polson:2015ira} and \cite{Rosenfeld:1000a} for further examples).
Our model also allows the researcher to test for inefficiency in EPL sports betting from a statistical arbitrage viewpoint. More importantly, we provide a probabilistic approach for calibrating dynamic market-based information. \cite{Camerer:1989dc} shows that the market odds are not well-calibrated and that an ultimate underdog on a long losing streak is underpriced by the market. \cite{Golec:1991cd} test the NFL and college betting markets and find that bets on underdogs or home teams win more often than bets on favorites or visiting teams. \cite{Gray:1997gz} examine the in-sample and out-of-sample performance of different NFL betting strategies using a probit model. They find that the strategy of betting on home team underdogs averages returns of over 4 percent after commissions. In summary, a Skellam process appears to fit the dynamics of EPL soccer betting very well and provides a natural lens through which to view these market efficiency questions.
\newpage
\section{Introduction}
\setcounter{equation}{0}
In this paper, we consider the exact classical string dynamics in the
conformally
invariant background corresponding to the SL(2,R) WZWN model. This
background is locally $2+1$-dimensional Anti de Sitter
spacetime with non-vanishing parallelizing torsion.
Many mathematical aspects of the SL(2,R) WZWN model have been
discussed in
the literature (see for
instance Refs.\cite{bal,pet,nem,hwa}). In particular, it has been
known for
some time that
the SL(2,R) WZWN
model reduces
to Liouville theory (for a review of the different methods, see
\cite{rai}
and references given therein).
However, the physical aspects have not really been
extracted so far. The purpose of
this paper is to investigate directly the physical effects of the
conformal
invariance on the generic exact classical string dynamics.
The conformal invariance of the SL(2,R) WZWN model
is expressed as the torsion becoming parallelizing. Thus we consider
the
string
equations of motion in a
background consisting of the Anti de Sitter (AdS) metric plus an
antisymmetric tensor representing
the parallelizing torsion. Using the reduction method of
\cite{san,all}, we
obtain
directly from the classical
equations of motion a simple differential equation, the Liouville
equation,
for the
fundamental quadratic form
$\alpha(\tau,\sigma)$, which determines the proper string size. By
comparing with analogous results
obtained in AdS but without torsion \cite{san,all,mik,nes},
we can then precisely extract the physical
effects of the conformal invariance
on the {\it exact} dynamics of classical strings. We also compare
with the
results of
\cite{vega} where, among other
things, the effect of the conformal invariance was analysed, but for
particular string
configurations and for perturbative
string solutions.
One essential point in this paper is the parametrization of the
string
equations of motion
and constraints in terms of the proper string size. Then, associated
potentials $V(\alpha)$
can be defined, and {\it generic} properties of the exact string
dynamics
can be extracted
directly from the reduced equations of motion and potentials (without
need
of any solution).
Previously \cite{all}, we have shown that the exact string dynamics
in the
non-conformally
invariant AdS spacetime (without torsion), reduces to three different
equations:
sinh-Gordon, cosh-Gordon and Liouville equation, and all three must
be
considered in order
to cover the generic string evolution.
In this paper we show that this reduction procedure beautifully
generalizes
and {\it
simplifies} in the presence of torsion, corresponding to conformal
invariance. In the
conformally invariant AdS background, the presence of torsion leads
to a
{\it precise}
cancellation of the term $+\mbox{exp}(\alpha)$ in the potentials
$\cosh(\alpha)$,
$\sinh(\alpha)$ and
$\mbox{exp}(\alpha)$ of the reduced equations. Thus, when including
the
torsion, the
original sinh-Gordon and cosh-Gordon sectors reduce to the Liouville
equation (with
different signs of the potential), while the original Liouville
sector
reduces to the free
wave equation (see Figs. 1, 2). Torsion generally produces a
repulsive term
$-\mbox{exp}(\alpha)$, which precisely cancels the dominant
attractive term
arising from
gravity. As a consequence, only the very large classical string size
behaviour is
affected by the torsion. Most of the string behaviour (medium and
small
string size
behaviour) is unchanged.
We also find in this paper illustrative classes of string solutions
in
the conformally
invariant AdS background. The ansatz we have introduced in
Ref.\cite{all}
is applied here to
this case. Dynamical closed strings as well as stationary open
infinitely
long strings are
described. Here in the presence of torsion, the mathematics
simplifies
considerably; the
solutions are expressed in closed form in terms of trigonometric and
hyperbolic functions
(in the non-conformally invariant case, the solutions generally
involved
elliptic functions
\cite{all}). Similarly, we find the string solutions in the 2+1
dimensional
black hole anti de Sitter spacetime (BH-AdS) with torsion, and
compare with the
case of vanishing torsion.
It must be noticed that, in the absence of torsion, stationary
strings in
AdS spacetime are
of "hanging string" type (that is, their shapes are simple
generalizations
of the shape of a
rope hanging in a constant Newtonian potential). In the presence of
torsion, these
configurations become of "spiralling string" type, and
asymptotically, they
are standard
logarithmic spirals (see Figs. 3, 4). The effect of torsion on
stationary
strings in the AdS
background thus appears quite similar to the effect of rotation in
the
Kerr-Newman spacetime
\cite{zel}.
This paper is organized as follows: In Section 2 we perform the
reduction
of the WZWN model
to the Liouville equation, in terms of the string equations of motion
and
constraints and the
proper string size, and we analyse the generic features of the exact
string
dynamics in the
AdS background with torsion. In Section 3 we deal with particularly
illustrative examples of
string configurations and precise effects of the torsion and
conformal
invariance in this
background, using a parametrization corresponding to {\it global} AdS
spacetime. In Section 4, we discuss the analogous results obtained
using a parametrization corresponding to the 2+1 dimensional BH-AdS
spacetime
with torsion. Conclusions and remarks are given in Section 5.
\section{Reduction of the WZWN Model to the Liouville Equation}
\setcounter{equation}{0}
Our starting point is the sigma-model action including the WZWN term
at
level
$k$ \cite{wit}:
\begin{equation}
S_{\sigma}=-\frac{k}{4\pi}\int_{M} d\tau
d\sigma\;\eta^{\alpha\beta}\mbox{Tr}[
g^{-1}\partial_\alpha g\;g^{-1}\partial_\beta g]-
\frac{k}{6\pi}\int_{B} \mbox{Tr}[
g^{-1}dg\wedge g^{-1}dg\wedge g^{-1}dg].
\end{equation}
Here $M$ is the boundary of the manifold $B$, and $g$ is a
group-element of
$SL(2,R)$:
\begin{equation}
g=\left( \begin{array}{cc} a & c \\
-d & b \end{array}\right) ;\;\;\;\;\;\;\;\;\ ab+cd=1.
\end{equation}
Then, the action (2.1) takes the form \cite{hor}:
\begin{equation}
S_{\sigma}=-\frac{k}{2\pi}\int_{M} d\tau
d\sigma\;[\dot{a}\dot{b}-a'b'+
\dot{c}\dot{d}-c'd']-\frac{k}{\pi}\int_{M} d\tau d\sigma\;
\log (c) [\dot{a}b'-a'\dot{b}],
\end{equation}
where dot and prime denote derivative with respect to $\tau$ and
$\sigma$,
respectively. Let us introduce new coordinates
$(X,Y,U,T)$:
\begin{equation}
a=H(U+X),\;\;\;\;\;\;b=H(U-X),\;\;\;\;\;\;c=H(T-Y),\;\;\;\;\;\;d=H(T+Y),
\end{equation}
where $H$ is a constant (the Hubble constant). Then we get from
(2.2):
\begin{equation}
X^2+Y^2-U^2-T^2=-\frac{1}{H^2},
\end{equation}
which is the standard embedding equation for the 2+1 AdS spacetime.
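Explicitly, inserting (2.4) into the SL(2,R) normalization $ab+cd=1$ of (2.2) gives
\begin{displaymath}
ab+cd=H^2(U^2-X^2)+H^2(T^2-Y^2)=1,
\end{displaymath}
which is precisely (2.5).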
Using a Lagrange
multiplier $\lambda$ to
incorporate
the condition (2.5), the action becomes:
\begin{eqnarray}
S_{\sigma}=-\frac{k H^2}{2\pi}\int_{M} d\tau
d\sigma\;[\dot{U}^2-U'^2+\dot{T}^2
-T'^2-\dot{X}^2 +X'^2-\dot{Y}^2 +Y'^2
+4\lambda(-T^2\nonumber\\
-U^2+X^2+Y^2+H^{-2})+4(\dot{X}U'-X'\dot{U})\log
(H(T-Y))].
\nonumber\\
\end{eqnarray}
It is also convenient to introduce the dimensionless 4-vector $q^\mu$
and
metric $\eta_{\mu\nu}$ in the
4-dimensional embedding spacetime:
\begin{equation}
q^\mu=H(T,U,X,Y),\;\;\;\;\;\;\;\;\eta_{\mu\nu}=\mbox{diag}(-1,-1,1,1),
\end{equation}
as well as world-sheet light-cone coordinates:
\begin{equation}
\sigma^\pm=\tau\pm\sigma.
\end{equation}
It is then straightforward to show that the classical equations of
motion
corresponding to the action
(2.6) reduce to:
\begin{equation}
q^\mu_{+-}+e^\alpha q^\mu+\epsilon^\mu_{\;\;\nu\rho\sigma}
q^\nu q^\rho_+
q^\sigma_-=0,\;\;\;\;\;\;\;\;(\mu=0,1,2,3),
\end{equation}
where we introduced the fundamental quadratic form
$\alpha(\tau,\sigma)$:
\begin{equation}
e^\alpha=-\eta_{\mu\nu}q^\mu_+ q^\nu_-,
\end{equation}
and the antisymmetric tensor $\epsilon^\mu_{\;\;\nu\rho\sigma}$
corresponding to the metric (2.7):
\begin{equation}
\epsilon^{0123}=1,\;\;\;\;(\mbox{antisymmetric}).
\end{equation}
The equations of motion (2.9) should, as usual, be supplemented by
the
string constraints:
\begin{equation}
\eta_{\mu\nu}q^\mu_\pm q^\nu_{\pm}=0,
\end{equation}
and the embedding normalization condition (2.5):
\begin{equation}
\eta_{\mu\nu}q^\mu q^\nu=-1.
\end{equation}
Notice that $\alpha$ determines the proper string size, as follows
from the
induced metric on the
world-sheet:
\begin{equation}
dS^2=\frac{1}{H^2}\eta_{\mu\nu}dq^\mu dq^\nu
=\frac{2}{H^2}e^\alpha\;(-d\tau^2+d\sigma^2).
\end{equation}
It is convenient to introduce the basis:
\begin{equation}
{\cal U}=
\{ q^\mu, q^\mu_+,q^\mu_-,l^\mu\};
\;\;\;\;\;\;\;\;\;\;\;\;\;
l^\mu\equiv
e^{-\alpha} \epsilon^\mu_{\;\;\rho\sigma\delta}
q^\rho q^\sigma_+ q^\delta_-,
\end{equation}
\begin{equation}
\eta_{\mu\nu}l^\mu l^\nu=1.
\end{equation}
The second derivatives of $q^\mu,$
expressed in the basis
${\cal U},$ are
given by:
\begin{equation}
q^\mu_{++}=\alpha_+ q^\mu_+ +u
l^\mu,\;\;\;\;
q^\mu_{--}=\alpha_- q^\mu_- +v
l^\mu,\;\;\;\;
q^\mu_{+-}=-e^{\alpha}(q^\mu+l^\mu),
\end{equation}
where the functions $u$ and $v$ are
implicitly
defined by:
\begin{equation}
u\equiv
\eta_{\mu\nu}q^{\mu}_{++}l^\nu,
\;\;\;\;\;\;\;\;\;\;
v\equiv
\eta_{\mu\nu}q^{\mu}_{--}l^\nu,
\end{equation}
and satisfy:
\begin{equation}
u_-=v_+=0
\;\;\;\;\;\;\Longrightarrow\;\;\;\;\;\;u=u(\sigma^+),\;\;\;v=v(\sigma^
-).
\end{equation}
Then, by differentiating equation (2.10) twice, we get:
\begin{equation}
\alpha_{+-}
+u(\sigma_+) v(\sigma_-) e^{-\alpha}=0.
\end{equation}
If the
product $u
(\sigma_+) v(\sigma_-)$ is positive definite, then the following
conformal transformation on the world-sheet metric (2.14):
\begin{eqnarray}
&\alpha(\sigma_+,\sigma_-)=\hat{\alpha}(\hat{\sigma}_+,\hat{\sigma}_-)+\frac{1}{2}
\mbox{log}|u(\sigma_+)||v(\sigma_-)|,&\nonumber\\
&\hat{\sigma}_+=\int\sqrt{|u(\sigma_+)|}\;d\sigma_+,\;\;\;\;\;\;\;\;
\hat{\sigma}_-=\int\sqrt{|v(\sigma_-)|}\;d\sigma_-,&
\end{eqnarray}
reduces equation (2.20) to:
\begin{equation}
\alpha_{+-}+
e^{-\alpha}=0,
\end{equation}
which is just the Liouville equation (we skipped the hats).
It must be noticed, however, that for a generic string world-sheet,
the product $u(\sigma_+) v(\sigma_-)$ is neither positive nor negative
definite. In the case that $u(\sigma_+) v(\sigma_-)$ is negative, the
conformal transformation (2.21) reduces equation (2.20) to:
\begin{equation}
\alpha_{+-}-
e^{-\alpha}=0,
\end{equation}
and including also the case when $u(\sigma_+) v(\sigma_-)=0$, we
conclude that the most general equation fulfilled by the fundamental
quadratic form $\alpha$ is:
\begin{equation}
\alpha_{+-}+
Ke^{-\alpha}=0,
\end{equation}
where:
\begin{equation}
K=\left\{ \begin{array}{l}
+1,\;\;\;\;\;\;u(\sigma_+) v(\sigma_-)>0 \\
-1,\;\;\;\;\;\;u(\sigma_+) v(\sigma_-)<0 \\
\;0,\;\;\;\;\;\;\;\;u(\sigma_+) v(\sigma_-)=0
\end{array}\right.
\end{equation}
Equation (2.24) is either the
Liouville equation ($K=\pm 1$), or the free wave equation $(K=0)$.
Let us
define a potential $V(\alpha)$ by:
\begin{equation}
\alpha_{+-}+\frac{dV
(\alpha)}
{d\alpha}=0,
\end{equation}
so that if $\alpha=\alpha(\tau)$, then
$\frac{1}{2}(\dot{\alpha})^2+V(\alpha)=\mbox{const}$. Then, it
follows that:
\begin{equation}
V(\alpha)=\left\{ \begin{array}{r}
-e^{-\alpha},\;\;\;\;\;\;K=+1 \\
e^{-\alpha},\;\;\;\;\;\;K=-1\\
\;\;\;\;\;0,\;\;\;\;\;\;\;\;\;\;\;K=0 \end{array}\right.
\end{equation}
The results (2.26)-(2.27) are represented in Fig.1, showing the
different potentials.\\
\\
\\
It is interesting to compare these results
with the analogue results obtained in AdS but
without torsion \cite{san,mik,nes,all}.
In that case, instead of eq.(2.27), we have found \cite{all}:
\begin{equation}
\tilde{V}(\alpha)=\left\{ \begin{array}{r}
2\sinh\alpha,\;\;\;\;\;\;K=+1 \\
2\cosh\alpha,\;\;\;\;\;\;K=-1\\
\;\;\;\;\;e^\alpha,\;\;\;\;\;\;\;\;\;\;\;K=0 \end{array}\right.
\end{equation}
which is shown in Fig.2. As discussed in more detail in \cite{all},
it
means that for large proper string
sizes (large
$\alpha$), the potential $\tilde{V}(\alpha)$
is always attractive. The positive increasing potential for positive
$\alpha$
in AdS spacetime
prevents the string from growing indefinetely. That is, gravity as
represented by the metric will (not
surprisingly in AdS) generally tend to contract a large string.
By comparing equations (2.27) and (2.28), we see that the effect of
conformal invariance is to {\it precisely}
cancel the term $e^\alpha$ in the potential. This holds for all three
cases
($K=0,\pm 1$). That is, when including the parallelizing torsion, the
original $\sinh$-Gordon and $\cosh$-Gordon equations reduce to the
Liouville equation (with different signs of the potential), while the
original Liouville equation reduces to the free wave equation.
Thus the physical effect of conformal invariance (represented
via the parallelizing torsion) is to {\it precisely} cancel the
dominant
attractive part of the potential
arising from the metric. In other words, the parallelizing torsion
generally gives rise to a repulsive
term
$-e^\alpha$ in the potential. The combined effect of gravity and
torsion
eventually gives rise to either
attraction or repulsion, but for large proper string size $\alpha$,
the
potential $V(\alpha)$ vanishes exponentially
in all cases, Fig.1.
On the other hand, for small proper string size
$\alpha$, the potential is not
affected by the parallelizing torsion.
These results complete and generalize results obtained in \cite{vega}
for particular string
configurations (circular strings), and in \cite{all} for the
non-conformally invariant case.\\
\\
\\
Finally, it should be noticed that the general solution of the
Liouville
equation (2.24) is known in closed
form (say, $K=1$):
\begin{equation}
\alpha(\sigma^+,\sigma^-)=\log \left\{ \frac
{(f(\sigma^+)+g(\sigma^-))^2}{2f'(\sigma^+)g'(\sigma^-)} \right\}
\end{equation}
where $f(\sigma^+)$ and $g(\sigma^-)$ are arbitrary functions of the
indicated variables.
The proper string size, $S(\tau)$, is then:
\begin{equation}
S(\tau)=\int d\sigma \; s(\tau,\sigma),
\end{equation}
where, using equations (2.14) and (2.29),
\begin{equation}
s(\tau,\sigma)=\frac{\sqrt{2}}{H}e^{\alpha/2}=
\frac{f(\sigma^+)+g(\sigma^-)}
{H\sqrt{f'(\sigma^+)g'(\sigma^-)}}.
\end{equation}
This is the general solution ($K=1$) to the string size in the
conformally
invariant AdS
background. The full string dynamics in this background is exactly
integrable. However, it
is still a highly non-trivial problem to obtain the explicit
expression for
the coordinates
$q^\mu$, taking into account the constraints (2.12) and the
normalization
condition (2.13).
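As a simple illustration (a special case chosen for transparency only, with no particular physical significance), take $f(\sigma^+)=\sigma^+$ and $g(\sigma^-)=\sigma^-$ in (2.29). Then
\begin{displaymath}
\alpha=\log (2\tau^2),\;\;\;\;\;\;\;\;s(\tau,\sigma)=\frac{2|\tau|}{H},
\end{displaymath}
and one checks directly that $\alpha_{+-}=-2f'(\sigma^+)g'(\sigma^-)/(f+g)^2=-e^{-\alpha}$, so that equation (2.24) with $K=1$ is indeed satisfied.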
\section{Examples}
\setcounter{equation}{0}
In this section we consider in detail some illustrative examples of
string
configurations. It
is convenient to first introduce the standard parametrization (see
for
instance \cite{rind})
of
$2+1$ AdS in terms of static coordinates $(t,r,\phi)$:
\begin{eqnarray}
&X=r\cos\phi,\;\;\;\;\;\;U=\frac{1}{H}\sqrt{1+H^2
r^2}\;\cos(Ht),&\nonumber\\
&Y=r\sin\phi,\;\;\;\;\;\;T=\frac{1}{H}\sqrt{1+H^2 r^2}\;\sin(Ht),&
\end{eqnarray}
which automatically fulfils the normalization condition (2.13). Next
we
make the following
ansatz \cite{all}:
\begin{eqnarray}
r&=&r(\xi^1),\nonumber\\
t&=&t(\xi^1)+c_1\xi^2,\\
\phi&=&\phi(\xi^1)+c_2\xi^2,\nonumber
\end{eqnarray}
where $(c_1,c_2)$ are arbitrary constants while $(\xi^1,\xi^2)$ are
the two
world-sheet
coordinates, to be specified later.
The mathematical motivation for this ansatz is that it reduces the
string
equations of motion
(2.9) to ordinary differential equations, as we now show (see also
Ref.\cite{all}). In fact, the equations
(2.9) reduce to:
\begin{eqnarray}
\frac{d^2t}{(d\xi^1)^2}&+&
\frac{2H^2r}{1+H^2r^2}\left(\frac{dt}{d\xi^1}\right)
\left(\frac{dr}{d\xi^1}
\right)
+\frac{2Hr}{1+H^2r^2}\left(\frac{dr}{d\xi^1}\right)c_2=0,\\
\frac{d^2\phi}{(d\xi^1)^2}&+&
\frac{2}{r}\left(\frac{d\phi}{d\xi^1}\right)
\left(\frac{dr}{d\xi^1}\right)
+\frac{2H}{r}\left(\frac{dr}{d\xi^1}\right)c_1=0,\\
\frac{d^2 r}{(d\xi^1)^2}&+&
H^2r(1+H^2r^2)\left(\left(\frac{dt}{d\xi^1}\right)^2-c_1^2\right)-
r(1+H^2r^2)
\left(\left(\frac{d\phi}{d\xi^1}\right)^2-c_2^2\right)\nonumber\\
&-&\frac{H^2r}{1+H^2r^2}\left(\frac{dr}{d\xi^1}\right)^2+
2Hr(1+H^2r^2)
\left(c_2\left(\frac{dt}{d\xi^1}\right)-
c_1\left(\frac{d\phi}{d\xi^1}\right)
\right)=0,\nonumber\\
\end{eqnarray}
while the constraints (2.12) become:
\begin{equation}
(1+H^2r^2)\left(\frac{dt}{d\xi^1}\right)c_1=
r^2\left(\frac{d\phi}{d\xi^1}
\right)c_2,
\end{equation}
\begin{equation}
\frac{1}{1+H^2r^2}\left(\frac{dr}{d\xi^1}\right)^2-
(1+H^2r^2)\left(\left(\frac{dt}{d\xi^1}\right)^2+c_1^2\right)+
r^2\left(\left(\frac{d\phi}{d\xi^1}\right)^2+c_2^2\right)=0.
\end{equation}
The above equations of motion and constraints are consistently
integrated to:
\begin{equation}
\frac{d t}{d\xi^1}=\frac{k_1-Hc_2 r^2}{1+H^2r^2},\;\;\;\;\;\;\;\;
\frac{d \phi}{d\xi^1}=\frac{k_2-Hc_1 r^2}{r^2},
\end{equation}
\begin{equation}
r'^2=\frac{(H^2k_2^2-k_1^2)(c_1^2+2Hc_1k_2)}{r^2k_2^2}
\left(r^2-\frac{k_2^2}{c_1^2+2Hc_1k_2}\right)
\left(r^2+\frac{k_2^2}{H^2k_2^2-k_1^2}\right),
\end{equation}
where the integration constants $(k_1,k_2)$ fulfil:
\begin{equation}
c_1 k_1=c_2 k_2.
\end{equation}
The equations (3.8)-(3.9) can be solved explicitly in closed form in
terms
of trigonometric or
hyperbolic functions. This is a great simplification compared to the
case
without torsion. In that
case, the solution generally involved elliptic functions \cite{all}.
As for the fundamental quadratic form $\alpha$, we get:
\begin{equation}
e^\alpha=\pm\frac{H^2}{2}\left[ r^2 c_2^2-(1+H^2r^2)c_1^2\right],
\end{equation}
where the sign must be chosen in accordance with eq.(2.14).
Then we get from equations (3.3)-(3.10):
\begin{equation}
\frac{d^2\alpha}{(d\xi^1)^2}\pm
\left[H^2\left( c_2^2-H^2 c_1^2\right)
\left( k_1^2-(c_1+Hk_2)^2\right) \right]
e^{-\alpha}=0.
\end{equation}
This corresponds to equation (2.24) after a constant redefinition of
$\xi^1$. The different values
of $K$ will appear depending on the sign of the square bracket in
(3.12).
In the following subsections, we consider some more explicit examples
to
clarify the physics of the
ansatz (3.2).
\subsection{Circular Strings}
Circular strings are obtained from the above general formalism by
setting
$(\xi^1,\xi^2)=(\tau,\sigma)$, as well as:
\begin{equation}
c_1=0,\;\;\;\;\;\;k_2=0,\;\;\;\;\;\;c_2=1,\;\;\;\;\;\;k_1\equiv E.
\end{equation}
Then equations (3.8)-(3.9) become:
\begin{eqnarray}
\phi&=&\sigma,\nonumber\\
\dot{t}&=&\frac{E-Hr^2}{1+H^2r^2},\\
\dot{r}^2&+&(1+2EH)r^2=E^2. \nonumber
\end{eqnarray}
Here we must take $E\geq 0$ to ensure that $\dot{t}\geq 0$ (the
string is
propagating forward in
time). Then (3.14) describes a circular string oscillating between
$r=0$
and $r=r_{\mbox{max}}$:
\begin{equation}
r_{\mbox{max}}=\frac{E}{\sqrt{1+2EH}}.
\end{equation}
The explicit solution of (3.14), in closed form, is:
\begin{eqnarray}
\phi&=&\sigma,\nonumber\\
Ht&=&\arctan\left(
\frac{1+EH}{\sqrt{1+2EH}}\tan(\sqrt{1+2EH}\;\tau)\right)
-\tau,\nonumber\\
r&=&\frac{E}{\sqrt{1+2EH}}\left| \sin(\sqrt{1+2EH}\;\tau)\right| ,
\end{eqnarray}
where we took initial conditions ($r(0)=0,\;t(0)=0$). Here,
\begin{equation}
e^\alpha= \frac{H^2r^2}{2}
\end{equation}
and the string size, equations (2.30)-(2.31), is:
\begin{equation}
S(\tau)=2\pi r(\tau).
\end{equation}
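It follows from (3.16) and (3.18) that the proper string size oscillates with period $\Delta\tau=\pi/\sqrt{1+2EH}$ between $S=0$ and
\begin{displaymath}
S_{\mbox{max}}=2\pi r_{\mbox{max}}=\frac{2\pi E}{\sqrt{1+2EH}},
\end{displaymath}
so that for $EH\gg 1$ the maximal proper size grows only as $\sqrt{E}$.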
These circular
strings have been discussed in
more detail in Ref.\cite{vega}.
\subsection{Stationary Strings}
Stationary strings are obtained from the general formalism by setting
$(\xi^1,\xi^2)=(\sigma,\tau)$, as well as:
\begin{equation}
c_2=0,\;\;\;\;\;\;k_1=0,\;\;\;\;\;\;c_1=1,\;\;\;\;\;\;k_2\equiv L.
\end{equation}
Then equations (3.8)-(3.9) become:
\begin{eqnarray}
t&=&\tau,\nonumber\\
\phi'&=&\frac{L-Hr^2}{r^2},\\
{r'}^2&=&\frac{H^2(1+2HL)}{r^2}\left( r^2+
\frac{1}{H^2}\right) \left(
r^2-\frac{L^2}{1+2HL}\right) .
\nonumber
\end{eqnarray}
It follows that we must have $1+2HL> 0$ to ensure that a region
exists
where ${r'}^2 \geq 0$.
There is also a "turning point" ($r'=0$) at $r=r_{\mbox{min}}$:
\begin{equation}
r_{\mbox{min}}=\frac{|L|}{\sqrt{1+2HL}},
\end{equation}
thus the stationary string stretches out from $r=r_{\mbox{min}}\;$
to
$r=\infty$.
The explicit solution of (3.20), in closed form, is:
\begin{eqnarray}
t&=&\tau,\nonumber\\
\phi&=&\arctan\left(
\frac{\sqrt{1+2HL}}{{HL}}\tanh(H\sqrt{1+2HL}\;\sigma)\right)
-H\sigma,\nonumber\\
r&=&\sqrt{\frac{(1+HL)^2\;\sinh^2(H\sqrt{1+2HL}\;\sigma)+
H^2L^2}{H^2(1+2HL)}},
\end{eqnarray}
where we took initial conditions ($r(0)=r_{\mbox{min}},\;\phi(0)=0$).
Here,
\begin{equation}
e^\alpha=\frac{1}{2}(1+H^2r^2),
\end{equation}
where a factor $H$ has been absorbed in $\tau$ and $\sigma$.
Then the string size, equations (2.30)-(2.31), is:
\begin{equation}
S(\tau)=S=\int_{-\infty}^{+\infty}\sqrt{1+H^2 r^2}\:d\sigma=\infty.
\end{equation}
Notice that:
\begin{equation}
\phi(-\infty)=\infty,\;\;\;\;\;\;\phi(\infty)=-\infty,
\end{equation}
so that the stationary strings are open infinitely long clockwise
spirals.
An example is shown in Fig.3. Asymptotically
($|\sigma|\rightarrow\infty$), the stationary strings
are standard logarithmic spirals:
\begin{equation}
\left( \begin{array}{cc} X \\
Y \end{array}\right) = \left( \begin{array}{cc} r\cos\phi \\
r\sin\phi \end{array}\right) \sim\; \left(
\begin{array}{cc} \cos(H\sigma) \\
\sin(H\sigma) \end{array}\right)e^{\pm H\sqrt{1+2HL}\;\sigma},
\end{equation}
up to a constant scaling and a rotation. The simplest explicit
example is
obtained for $L=0$:
\begin{equation}
\left( \begin{array}{cc} X \\
Y \end{array}\right) = \left(
\begin{array}{cc} \cos(H\sigma) \\
-\sin(H\sigma) \end{array}\right)\sinh(H\sigma),
\end{equation}
which is shown in Fig.4.
Again it is interesting to compare with the case of stationary
strings in AdS
spacetime, but without torsion \cite{all2}. In that case, the
stationary
strings were of the "hanging string" type, that is, the shape of the
stationary strings was a simple generalization of the shape of a rope
hanging in a constant Newtonian potential. Here, in the presence of
torsion,
the stationary strings are instead of the "spiralling string" type.
It follows that the effect of torsion is somewhat similar to the
effect of
rotation in the metric: The effect of rotation in the metric on the
shape of
stationary strings was first investigated in Ref.\cite{zel}. It was
shown
that stationary strings in the Schwarzschild background are of the
"hanging string" type, while in the Kerr background, stationary
strings
could also
be of the "spiralling string" type. Thus, we have seen that
the effect of torsion in AdS spacetime is quite
similar.
Another effect of the torsion on stationary strings in AdS spacetime
concerns the multi-string property: In AdS spacetime without torsion
\cite{all2}, it was shown that the solution corresponding to the
ansatz (3.2), (3.17) actually describes a multi-string, that is, one
single world-sheet, determined by one set of initial conditions,
describes
a finite or even an infinite number of different and independent
stationary
strings. Here, in the presence of torsion, the multi-string property
is lost
for stationary strings: for $\sigma\in\;]-\infty,\;\infty[\;\;$ the
solution (3.20) describes only one stationary string.
\section{Strings in the BH-AdS background with Torsion}
\setcounter{equation}{0}
In Section 3, we have been concerned with {\it global} 2+1 AdS
spacetime.
However, the general results obtained in Section 2
hold for any parametrization of the SL(2,R) WZWN model. That is to
say,
everything in Section 2 is valid also for strings in the 2+1 black
hole
anti de Sitter spacetime (BH-AdS) \cite{ban}. The 2+1 BH-AdS
spacetime
is obtained by
replacing the SL(2,R)
parametrization (3.1) by \cite{ban2}:
\begin{eqnarray}
a&=&\sqrt{\frac{r^2-r_-^2}{r_+^2-r_-^2}}\;
e^{H(r_+\phi-Hr_- t)}\:,\nonumber\\
b&=&\sqrt{\frac{r^2-r_-^2}{r_+^2-r_-^2}}\;
e^{-H(r_+\phi-Hr_- t)}\:,\nonumber\\
c&=&\sqrt{\frac{r^2-r_+^2}{r_+^2-r_-^2}}\;
e^{H(Hr_+ t-r_- \phi)}\:,\nonumber\\
d&=&-\sqrt{\frac{r^2-r_+^2}{r_+^2-r_-^2}}\;
e^{-H(Hr_+ t-r_- \phi)}\:,
\end{eqnarray}
where we used the notation of eq.(2.2). In these expressions $r_\pm$
are the
outer
and inner horizons:
\begin{equation}
r_\pm^2=\frac{M}{2H^2}(1\pm\sqrt{1-H^2J^2/M^2}\;),
\end{equation}
where $(M,J)$ represent the mass and angular momentum of the black
hole,
respectively. Finally we used the notation $H^{-1}=l$ (the length
scale) for
comparison with sections 2, 3.
Notice also that eq.(4.1) is only valid for $r>r_+$, but analogous
expressions
hold in the other regions. For more details about the BH-AdS
spacetime, we
refer the readers to the original papers \cite{ban,ban2}.
It is now straightforward to perform the analysis of circular and
stationary
strings in the background of 2+1 BH-AdS with torsion, c.f. the
analysis of
Section 3, so here we just give the main results.
\subsection{Circular Strings}
The equations of motion and constraints for circular strings are
solved by:
\begin{equation}
t=\int^\tau\frac{E-Hr^2}{H^2r^2-M+J^2/4r^2}\;d\tau\:,
\end{equation}
\begin{equation}
\phi=\sigma+\int^\tau
\frac{J(E-Hr^2)}{2r^2(H^2r^2-M+J^2/4r^2)}\;d\tau\:,
\end{equation}
\begin{equation}
\dot{r}^2+V(r)=0\;\;\;\;\Leftrightarrow\;\;\;\;\tau=\pm \int^r
\frac{dr}{\sqrt{-V(r)}}\:,
\end{equation}
where:
\begin{equation}
V(r)=-(M-2EH)r^2-(E^2-J^2/4),
\end{equation}
and $E$ is an integration constant. Here:
\begin{equation}
e^\alpha=\frac{H^2r^2}{2},
\end{equation}
and we have:
\begin{equation}
\ddot{\alpha}+H^2(E^2-J^2/4)e^{-\alpha}=0.
\end{equation}
Now taking into account that $M>0$, $H>0$,
$|J|\leq M/H$ as well as the {\it physical} requirement that the
string
propagates forward in time (at least outside the horizon), we must
have:
\begin{equation}2EH-M>0,
\end{equation} which also implies that:
\begin{equation}
E^2>J^2/4.
\end{equation}
It follows that the potential is a monotonically increasing function,
and that the circular string contracts from $r=r_{\mbox{max}}$ to
$r=0$, where:
\begin{equation}
r_{\mbox{max}}=\sqrt{\frac{E^2-J^2/4}{2EH-M}}.
\end{equation}
Depending on $E$, the maximal string radius can be larger or smaller
than
$r_+$. In the first case, the string will initially be outside the
horizon, but
will then contract, fall into it and collapse into $r=0$. In the
latter case,
the string is always
inside the horizon and collapses into $r=0$.
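In fact, for $2EH-M>0$ the radial equation (4.5), with the potential (4.6), integrates in closed form to
\begin{displaymath}
r(\tau)=r_{\mbox{max}}\left| \cos (\sqrt{2EH-M}\;\tau)\right| ,
\end{displaymath}
up to a constant shift of $\tau$, so that the collapse from $r=r_{\mbox{max}}$ to $r=0$ takes the finite world-sheet time $\Delta\tau=\pi/(2\sqrt{2EH-M})$.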
In the case without torsion \cite{all3}, the potential $V(r)$ was
quartic in
$r$ and the solutions involved elliptic functions. As a consequence,
the
possibility $E^2<J^2/4$ (Sinh-Gordon sector) also appeared, and a
potential
barrier between the inner horizon $r_-$ and $r=0$, preventing the
string from
collapsing into $r=0$, was present.
\subsection{Stationary Strings}
The equations of motion and constraints for stationary strings are
solved by:
\begin{equation}
t=\tau-\int^\sigma
\frac{J(L-Hr^2)}{2r^2(H^2r^2-M+J^2/4r^2)}\;d\sigma\:,
\end{equation}
\begin{equation}
\phi=\int^\sigma
\frac{(L-Hr^2)(H^2r^2-M)}{r^2(H^2r^2-M+J^2/4r^2)}\;d\sigma\:,
\end{equation}
\begin{equation}
{r'}^2+U(r)=0\;\;\;\;\Leftrightarrow\;\;\;\;\sigma=\pm \int^r
\frac{dr}{\sqrt{-U(r)}}\:,
\end{equation}
where:
\begin{equation}
U(r)=(H^2r^2-M)\left[ \frac{L^2-J^2/4}{r^2}+(M-2LH)\right],
\end{equation}
and $L$ is an integration constant. Here:
\begin{equation}
e^{\alpha}=\frac{1}{2}(H^2r^2-M),
\end{equation}
and we have:
\begin{equation}
\alpha''+H^2\left[ M(2HL-M)-H^2(L^2-J^2/4)\right] e^{-\alpha}=0.
\end{equation}
Now taking into account that $M>0$, $H>0$,
$|J|\leq M/H$ as well as the {\it physical} requirement that the
stationary string (at least a part of it) must be outside the static limit, we
must have:
\begin{equation}
2LH-M>0,
\end{equation}
which also implies that:
\begin{equation}
L^2>J^2/4.
\end{equation}
It follows that the stationary string stretches out to infinity, but
there is a
"turning point"($r'=0$) at $r=r_{\mbox{min}}$:
\begin{equation}
r_{\mbox{min}}=\sqrt{\frac{L^2-J^2/4}{2LH-M}}.
\end{equation}
Depending on $L$, the turning point can be outside or inside the
static limit
$r_{st}=\sqrt{M}/H$. In the first case, the string will be of
"hanging string"
type, with both ends at infinity, while in the latter case it will be
of
"spiralling string" type with one end at infinity, crossing the
static limit
and spiralling into the black hole. In the limiting case, when
$r_{\mbox{min}}$
is equal to $r_{st}$, corresponding to:
\begin{equation}
L=\frac{M}{H}\pm\frac{|J|}{2},
\end{equation}
the solution just fulfills the free wave equation, interpolating
between the
"hanging string" and the "spiralling string" types.
\section{Conclusion}
Using a physical approach, working directly with the classical string
equations
of motion and the proper string size, we reduced the SL(2,R)
WZWN model to Liouville
theory. This allowed us to extract the precise physical
effects of the parallelizing torsion on the generic string dynamics.
We
showed that the parallelizing torsion, corresponding to conformal
invariance, generally led to repulsion. In fact, the parallelizing
torsion
gives rise to a repulsive term that {\it precisely} cancels the
dominant
attractive term arising from the metric. As a consequence, the
sinh-Gordon
and cosh-Gordon
sectors of the non-conformally invariant AdS background reduce to the
Liouville equation
(with different signs of the potential), while the original Liouville
sector reduces to the
free wave equation. Thus, the dynamics of the classical large size
strings
is affected by
the torsion, but most of the string size behaviour (intermediate and
small
sizes) is quite
the same. We also gave the general solution to the proper string
size.
We then analysed in detail the circular and stationary strings in the
AdS
spacetime and in the 2+1 BH-AdS spacetime, both with parallelizing
torsion.
These results confirmed our generic
results (as they should), and we compared with the case of vanishing
torsion. In particular, it was shown that the effect of torsion on
the
stationary strings is quite similar to the effect of rotation in
the metric.
\setcounter{equation}{0}
\newpage
\section{Introduction}
It has long been recognized that, in addition to the conventional
Mikheyev-Smirnov-Wolfenstein (MSW) effect
\cite{Wolfenstein:1977ue,Wolfenstein:1979ni,Mikheyev:1985aa},
neutrino self-coupling can be important for
neutrino flavor evolution when neutrino number densities are large
\cite{Fuller:1987aa,Notzold:1988kx,Pantaleone:1992xh,Sigl:1992fn,%
Fuller:1992aa,Qian:1993dg,Samuel:1993uw,Kostelecky:1994dt,Pastor:2001iu,%
Pastor:2002we,Balantekin:2004ug,Fuller:2005ae}.
Recently two-flavor neutrino oscillations in the core-collapse
supernova environment have been intensively investigated
\cite{Duan:2005cp,Duan:2006an,Duan:2006jv,Hannestad:2006nj,%
Raffelt:2007yz,EstebanPretel:2007ec,Duan:2007mv,Duan:2007bt,Fogli:2007bk}.
These studies show that supernova neutrinos can indeed
experience collective flavor evolution because of neutrino
self-coupling,
even when the neutrino self-coupling is subdominant compared
to the MSW potential \cite{Duan:2005cp}.
An important result is that collective two-flavor neutrino oscillations
can exhibit
``stepwise spectral swaps'' or ``spectral splits''
in the final neutrino energy spectra when the neutrino number density
slowly decreases from a high value, where
neutrinos experience synchronized oscillations \cite{Pastor:2001iu},
towards zero
(see, e.g., Ref.~\cite{Duan:2006jv}). When this occurs,
$\nu_e$'s appear to swap their energy spectra with $\nu_x$'s
at energies below or above (depending on the neutrino mass hierarchy)
a transition energy $E^\mathrm{s}_{2\times2}$ (where the superscript ``s'' can stand either
for ``swapping point'' or ``splitting point'').
Here $\nu_x$ is some linear combination of $\nu_\mu$ and $\nu_\tau$.
The phenomenon of spectral swapping is present
in both the ``single-angle approximation'', where flavor evolution
along various neutrino trajectories is assumed to be the same as
that along a representative trajectory (e.g., the radial trajectory),
and the ``multi-angle approximation'', where flavor evolution
along different trajectories is independently
followed \cite{Duan:2006an}.
For the inverted neutrino mass hierarchy, it also has been found that
stepwise neutrino
spectral swapping is essentially independent of the $2\times2$ effective
vacuum mixing angle when this angle is small
(see, e.g., Ref.~\cite{Duan:2007bt}).
Collective two-flavor neutrino oscillations are best understood
with the help of neutrino flavor polarization vectors \cite{Sigl:1992fn}
or neutrino flavor isospins \cite{Duan:2005cp}. Using the spin analogy,
one can represent the flavor content of a neutrino mode by
a spin vector in flavor isospace. In this analogy,
the matter effect is described as a spin-field coupling, and the
neutrino self-coupling is described as spin-spin coupling.
The phenomenon of spectral swapping is a result of collective
precession of all neutrino flavor isospins with a common angular velocity $\wpr$
at any given neutrino number density \cite{Duan:2006an,Duan:2007mv}.
This collective precession of neutrino flavor isospins
is described the two-flavor adiabatic/precession solution
\cite{Raffelt:2007cb,Raffelt:2007xt}. In this solution, all neutrino
flavor isospins stay aligned or antialigned with a total effective
field in a reference frame that rotates with angular velocity $\wpr$.
Numerical simulations have shown that, in the supernova environment,
neutrinos can first experience
collective MSW-like flavor transformation (in which the MSW effect
is enhanced by neutrino self-coupling) and then subsequently
the adiabatic/precession solution \cite{Duan:2007fw}.
In the real world, however, there are three active neutrino flavors.
Some limited progress has been made on understanding
collective three-flavor neutrino oscillations.
The first fully-coupled simulation of three-flavor neutrino oscillations in the
supernova environment showed a spectral swapping phenomenon similar
to the two-flavor scenario except possibly with two swaps \cite{Duan:2007sh}.
Another simplified numerical study with a single neutrino energy bin
\cite{EstebanPretel:2007yq}
reveals that collective neutrino oscillations can be sensitive
to deviations from maximal 23-mixing
when there is a dominant ``mu-tau'' term arising from
a higher-order contribution from virtual $\mu$'s and $\tau$'s
(e.g., Refs.~\cite{Fuller:1987aa,Botella:1987aa,Roulet:1995qb}).
Ref.~\cite{Dasgupta:2007ws} has extended the neutrino flavor polarization
vector notation to the three-flavor scenario and discussed
the collective three-flavor oscillations as ``factorization''
of two two-flavor oscillations.
In this paper we develop a framework for studying collective
three-flavor neutrino oscillations based on the density matrix
formalism. Using this framework we find a generalized three-flavor
adiabatic/precession solution and show how stepwise
spectral swapping can appear as a natural result of such a solution.
The rest of this paper is organized as follows. In Sec.~\ref{sec:eom}
we develop a framework centered around
a $3\times3$ reduced neutrino flavor density matrix.
This density matrix is equivalent to an 8-component neutrino flavor vector,
a generalization of neutrino flavor isospin and similar to
the Bloch vector used in Ref.~\cite{Dasgupta:2007ws}. We show
how techniques important
in studying collective two-flavor oscillations such as
corotating frames
can be applied in the framework.
In Sec.~\ref{sec:sol} we demonstrate how the three-flavor
adiabatic/precession solution can be found using two simple assumptions
and the conservation of two ``lepton numbers''. We also illustrate
with a numerical example how two spectral swaps can form
from the adiabatic/precession solution when the total neutrino number
density vanishes. In Sec.~\ref{sec:matt} we employ the
corotating frame technique and show that the adiabatic/precession
solution obtains even in the presence of a dominant matter background.
In particular, we show that, in the presence of a large mu-tau
term, neutrino spectral swapping becomes sensitive
to deviations from maximal 23-mixing. In Sec.~\ref{sec:conclusions}
we give our conclusions.
\section{Equations of Motion%
\label{sec:eom}}
\subsection{Density Matrix Description}
The flavor content of a neutrino (antineutrino) mode with momentum $\bfp$
is generally described by density matrix
$\rho_\bfp$ ($\bar\rho_\bfp$) \cite{Sigl:1992fn}.
The diagonal elements of a density matrix are the occupation numbers
of the neutrino eigenstates in a particular basis, and the off-diagonal
elements contain the phase information relating to
neutrino mixing. For a neutrino pure
state described by flavor wavefunction
\begin{equation}
\psi_{\nu_\bfp}=\begin{pmatrix}
a_{\nu_1}\\ a_{\nu_2}\\ a_{\nu_3}
\end{pmatrix},
\end{equation}
the density matrix is
\begin{equation}
\rho_\bfp=n_{\nu_\bfp}\begin{pmatrix}
|a_{\nu_1}|^2 & a_{\nu_1}a_{\nu_2}^* & a_{\nu_1}a_{\nu_3}^* \\
a_{\nu_2}a_{\nu_1}^* & |a_{\nu_2}|^2 & a_{\nu_2}a_{\nu_3}^* \\
a_{\nu_3}a_{\nu_1}^* & a_{\nu_3}a_{\nu_2}^* & |a_{\nu_3}|^2
\end{pmatrix},
\end{equation}
where $n_{\nu_\bfp}$ is the neutrino number density in momentum mode $\bfp$, and
$a_{\nu_1(\nu_2,\nu_3)}$ are the amplitudes for the neutrino
to be in the corresponding vacuum mass eigenstates.
With this notation we have normalization
\begin{equation}
\sum_{i=1,2,3}|a_{\nu_i}|^2=1.
\end{equation}
The density matrix $\bar\rho$
for an antineutrino pure state is defined similarly.
In this paper we will assume that the $CP$-violating phase is $\delta=0$.
A brief discussion of the effect of a nonvanishing $CP$ phase is given in
Sec.~\ref{sec:conclusions}. Because we are only interested in neutrino
flavor transformation, we will assume that neutrinos are free streaming
except for forward scattering on the background medium.
For now we also will assume that there is no ordinary matter background.
For this case it is most convenient to work in the vacuum mass basis.
This basis is implicitly adopted in all the following discussions except for
Sec.~\ref{sec:matt}, where we will discuss the effects of the
ordinary matter potential.
To simplify the problem even further,
we will assume that the neutrino gas is isotropic and uniform.
This corresponds
to the ``single-angle approximation'' in numerical simulations of
flavor oscillations of supernova neutrinos. It has been shown
numerically \cite{Duan:2006an,Duan:2006jv,EstebanPretel:2007ec,Fogli:2007bk}
that both single-angle
and multi-angle calculations produce similar neutrino spectral
swaps.
For an isotropic and uniform neutrino gas confined in a fixed volume,
the equations of motion (e.o.m.) for neutrino density matrix $\rho_\bfp$ are
\begin{equation}
\rmi\dot{\rho}_\bfp=[\sfH_\bfp,\rho_\bfp].
\label{eq:eom-nu}
\end{equation}
Here the Hamiltonian for neutrino mode $\bfp$ is
\begin{equation}
\sfH_\bfp=\sfH^\vac_\bfp+
\sqrt{2}\GF\int\!\frac{\rmd^3\bfq}{(2\pi)^3}
(\rho_\bfq-\bar\rho_\bfq^*),
\label{eq:Ham}
\end{equation}
where $\GF$ is the Fermi constant, and the vacuum term in the Hamiltonian is
\begin{equation}
\sfH^\vac_\bfp=\frac{1}{2|\bfp|}\diag(m_1^2,m_2^2,m_3^2)
\end{equation}
with $m_i^2$ being the mass-squared eigenvalues corresponding to
neutrino vacuum mass eigenstates $|\nu_i\rangle$.
For antineutrino density matrix $\bar\rho_\bfp$ one has
\begin{equation}
\rmi\dot{\bar\rho}_\bfp=[\bar{\sfH}_\bfp,\bar\rho_\bfp],
\label{eq:eom-anu}
\end{equation}
with the Hamiltonian defined as
\begin{equation}
\bar{\sfH}_\bfp=\sfH^\vac_\bfp+
\sqrt{2}\GF\int\!\frac{\rmd^3\bfq}{(2\pi)^3}
(\bar\rho_\bfq-\rho_\bfq^*).
\label{eq:Ham-anti}
\end{equation}
We note that Ref.~\cite{Sigl:1992fn} has defined the antineutrino
density matrix as the
complex conjugate of what one usually writes as a density matrix
(i.e., $\bar\rho_\bfp^*\rightarrow\bar\rho_\bfp$).
This notation leads to a slightly simpler version of the e.o.m.~for
both neutrinos and antineutrinos.
However, for a vanishing $CP$ phase, it is possible to treat
neutrinos and antineutrinos on an equal footing and combine
Eqs.~\eqref{eq:eom-nu} and \eqref{eq:eom-anu} into a single expression,
as has been done in Ref.~\cite{Duan:2005cp}
for the two-flavor mixing scenario.
To see this, we note that a neutrino or antineutrino mode
in an isotropic and uniform gas
is completely characterized by
\begin{equation}
\omega(E)\equiv\mp\Tr(\sfH^\vac_\bfp\lambda_3)
=\pm\frac{\Delta m_{21}^2}{2E},
\label{eq:omega}
\end{equation}
where $E=|\bfp|$ is the energy of the neutrino or antineutrino,
upper (lower) signs are for neutrinos (antineutrinos),
$\Delta m_{21}^2=m_2^2-m_1^2$ is approximately
the solar mass-squared difference $\dmsol$,
and $\lambda_3$ is one of the Gell-Mann matrices
$\lambda_a$ ($a=1,\ldots,8$).
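As a quick check of the sign conventions, with $\sfH^\vac_\bfp$ diagonal in the mass basis and $\lambda_3=\diag(1,-1,0)$ one has
\begin{displaymath}
\Tr(\sfH^\vac_\bfp\lambda_3)=\frac{m_1^2-m_2^2}{2E}
=-\frac{\Delta m_{21}^2}{2E},
\end{displaymath}
so Eq.~\eqref{eq:omega} assigns $\omega>0$ to neutrinos and $\omega<0$ to antineutrinos of the same energy.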
Because the number density for a neutrino (antineutrino) mode
$\nu_\bfp$ ($\bar\nu_\bfp$) is conserved for a neutrino gas
in a fixed volume,
we can define the total neutrino number density
\begin{equation}
n_\nu^\tot\equiv
\int\!\frac{\rmd^3\bfq}{(2\pi)^3}\Tr(\rho_\bfq+\bar\rho_\bfq),
\label{eq:ntot}
\end{equation}
and the normalized distribution function is
\begin{equation}
f_{\omega}\equiv \frac{E^2}{2\pi^2 n_\nu^\tot}
\left|\frac{\rmd E}{\rmd\omega}\right|\times
\left\{\begin{array}{ll}
\Tr(\rho_\bfp)&\text{if }\omega>0,\\
\Tr(\bar\rho_\bfp)&\text{if }\omega<0.
\end{array}\right.
\label{eq:f}
\end{equation}
Using Eqs.~\eqref{eq:ntot} and \eqref{eq:f} we can express the
integral in Eq.~\eqref{eq:eom-nu} as
\begin{equation}
\int\!\frac{\rmd^3\bfq}{(2\pi)^3}
(\rho_\bfq-\bar\rho_\bfq^*)
= n_\nu^\tot
\int_{-\infty}^\infty\!\rmd\omega f_\omega\varrho_\omega.
\label{eq:rho-varrho}
\end{equation}
In Eq.~\eqref{eq:rho-varrho} we have defined the ``reduced
neutrino flavor density matrix'' $\varrho_\omega$ for neutrino mode $\omega$:
\begin{equation}
\varrho_{\omega}\sim\left\{\begin{array}{ll}
\rho_\bfq &\text{if }\omega>0,\\
-\bar\rho_\bfq^* &\text{if }\omega<0
\end{array}\right.
\label{eq:varrho}
\end{equation}
which has normalization
\begin{equation}
\Tr(\varrho_\omega)=\left\{\begin{array}{ll}
+1 &\text{if }\omega>0,\\
-1 &\text{if }\omega<0.
\end{array}\right.
\label{eq:varrho-norm}
\end{equation}
Using Eqs.~\eqref{eq:eom-nu}, \eqref{eq:eom-anu}
and \eqref{eq:rho-varrho} we find the e.o.m.~for $\varrho_\omega$:
\begin{equation}
\rmi\dot\varrho_\omega=[\sfH_\omega, \varrho_\omega].
\label{eq:eom}
\end{equation}
The Hamiltonian for neutrino mode $\omega$ is
\begin{equation}
\sfH_\omega=\sfH_\omega^\vac
+\mu\varrho_\tot,
\label{eq:Ham-omega}
\end{equation}
where in the vacuum mass basis
\begin{equation}
\sfH_\omega^\vac=-\omega\frac{\lambda_3}{2}-\kappa\frac{\lambda_8}{\sqrt{3}}.
\end{equation}
Here we define
\begin{equation}
\mu\equiv\sqrt{2}\GF n_\nu^\tot.
\end{equation}
This parameter dictates the strength of neutrino self-coupling.
The total neutrino flavor density matrix is defined to be
\begin{equation}
\varrho_\tot\equiv\int_{-\infty}^\infty\!\rmd\omega \,f_\omega\varrho_{\omega}.
\label{eq:varrho-tot}
\end{equation}
In Eq.~\eqref{eq:Ham-omega}
we have left out the trace term for $\sfH_\omega$
(which is irrelevant for neutrino oscillations),
and we have defined the oscillation parameter $\kappa$ to be
\begin{subequations}
\label{eq:kappa}
\begin{align}
\kappa(E)&\equiv\mp\frac{\sqrt{3}}{2}\Tr(\sfH^\vac_\bfp\lambda_8),\\
&=\pm\frac{1}{2E}\left[m_3^2-\frac{(m_1^2+m_2^2)}{2}\right],
\end{align}
\end{subequations}
where the upper (lower) signs are for neutrinos (antineutrinos).
Because the atmospheric mass-squared difference $\dmatm$
is much larger than $\dmsol$, one has
\begin{equation}
m_3^2-\frac{(m_1^2+m_2^2)}{2}\simeq\pm\dmatm,
\end{equation}
where the plus (minus) sign is for the normal (inverted)
neutrino mass hierarchy.
We note that $f_\omega$ does not change with time if there
is no inelastic scattering of neutrinos.
We also note that Eq.~\eqref{eq:eom} is actually
more generally valid than Eqs.~\eqref{eq:eom-nu} and \eqref{eq:eom-anu}
so long as the neutrino gas stays isotropic and uniform.
For example, this would be true for a homogeneous and isotropic
early universe, i.e., the Friedmann solution. The ``single-angle
approximation'' (see, e.g., Ref.~\cite{Duan:2006an})
for flavor evolution of supernova neutrinos
is essentially equivalent to this scenario. For this case,
the flavor content of a neutrino propagating along any trajectory
at a given radius is assumed to be identical to that of a neutrino
with the same energy propagating along a radial trajectory at the same radius.
In this approximation the flavor evolution of neutrinos as a function
of time $t$ can be represented as the flavor evolution
of neutrinos propagating along the
radial trajectory as a function of radius $r$. In addition,
one can define the effective total neutrino number density
at each radius as
\begin{equation}
n_\nu^\tot=\frac{D(r/R_\nu)}{2\pi R_\nu^2}
\sum_{\nu}
\frac{L_\nu}{\langle E_\nu\rangle}\int_0^\infty\!\rmd E\, f_\nu(E).
\label{eq:sn-ntot}
\end{equation}
This takes account of
both the geometric dilution and (partly) the anisotropy of
supernova neutrinos.
In Eq.~\eqref{eq:sn-ntot} $R_\nu$ is the radius of the neutrino
sphere, the geometric factor is
\begin{equation}
D(\xi)=\frac{1}{2}(1-\sqrt{1-\xi^{-2}})^2,
\end{equation}
$L_\nu$, $\langle E_\nu\rangle$ and $f_\nu(E)$
are the neutrino luminosity, average energy and normalized
energy distribution function for species $\nu$ at the neutrino sphere,
respectively, and the summation is over all neutrino species
(including both neutrinos
and antineutrinos).
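As a quick numerical illustration (the radii below are arbitrary assumed
values, not taken from any model), the following Python sketch tabulates
the geometric factor and the resulting dilution
$n_\nu^\tot(r)/n_\nu^\tot(R_\nu)=2D(r/R_\nu)$.
Note that $D(1)=1/2$ at the neutrino sphere and
$D(\xi)\simeq1/(8\xi^4)$ far from the source, i.e., the $r^{-2}$
geometric dilution is supplemented by the increasing collinearity of the
neutrino trajectories.
\begin{verbatim}
import numpy as np

def D(xi):
    # geometric factor D(xi) for xi = r/R_nu >= 1
    return 0.5 * (1.0 - np.sqrt(1.0 - xi**-2))**2

R_nu = 30.0  # km, the neutrino-sphere radius assumed in the text
for r in (30.0, 60.0, 150.0, 300.0):
    xi = r / R_nu
    print(f"r = {r:6.1f} km   D = {D(xi):.3e}   "
          f"n/n(R_nu) = {2.0*D(xi):.3e}")
\end{verbatim}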
\subsection{Flavor Vector Description}
The flavor polarization
vector $\mathbf{P}$ \cite{Sigl:1992fn}
and neutrino flavor isospin $\mathbf{s}_\omega$ \cite{Duan:2005cp}
are important techniques that
have been used extensively to describe the flavor mixing of neutrinos
in two-flavor mixing scenarios.
These notations have helped in visualizing
and giving insights into the problem of collective neutrino oscillations.
To generalize these notations to the $3\times3$ case, we note that
a $3\times3$ Hermitian matrix $\mathsf{A}$ can be expressed as
the linear combination of the identity matrix $\mathsf{I}$ and
Gell-Mann matrices $\lambda_a$:
\begin{equation}
\mathsf{A}=\frac{1}{3}\Tr(\mathsf{A})\mathsf{I}
+\sum_a A^{(a)}\frac{\lambda_a}{2},
\end{equation}
where
\begin{equation}
A^{(a)} \equiv \Tr(\mathsf{A}\lambda_a)
\end{equation}
can be viewed as the $a$'th component of an 8-dimensional vector $\mathbf{A}$.
In particular, ``flavor vector''
\begin{equation}
\bmr_\omega=(\varrho_\omega^{(1)},\ldots,\varrho_\omega^{(8)})^\mathrm{T}
\end{equation}
is the generalized version of neutrino flavor isospin $\mathbf{s}_\omega$.%
\footnote{Ref.~\cite{Dasgupta:2007ws} appeared
while this manuscript was in preparation. Ref.~\cite{Dasgupta:2007ws}
has proposed a three-flavor version of the
Bloch vector which is a generalization
of the two-flavor polarization vector defined in Ref.~\cite{Sigl:1992fn}.
The difference between the three-flavor Bloch vector and the flavor
vector defined here is similar to that between the two-flavor polarization
vector and the neutrino flavor isospin defined in Ref.~\cite{Duan:2005cp}.
In the flavor vector description,
the directions of flavor vectors and flavor isospins
for antineutrinos are intentionally reversed
so that the e.o.m.~for flavor vectors and flavor isospins
(for both neutrinos and antineutrinos)
can be written within a single expression.
Likewise, the density matrix defined in Ref.~\cite{Dasgupta:2007ws}
is different from the flavor density matrix in this paper by
a sign for antineutrinos.
This is of course a notation
difference and does not affect the physical results.}
Because $\Tr(\varrho_\omega)$ is fixed by the
normalization condition in Eq.~\eqref{eq:varrho-norm},
flavor vector $\bmr_\omega$ is fully equivalent to the
density matrix $\varrho_\omega$.
In particular, the number densities of $\nu_1$, $\nu_2$ and $\nu_3$
in mode $\omega$
can be expressed in terms of $\varrho_\omega^{(3)}$ and $\varrho_\omega^{(8)}$:
\begin{subequations}
\label{eq:n123}
\begin{align}
n_{\nu_1}(\omega)&=n_\nu^\tot f_\omega
\left(\frac{1}{3}
+\frac{1}{2}\varrho_\omega^{(3)}+\frac{1}{2\sqrt{3}}\varrho_\omega^{(8)}\right),\\
n_{\nu_2}(\omega)&=n_\nu^\tot f_\omega
\left(\frac{1}{3}
-\frac{1}{2}\varrho_\omega^{(3)}+\frac{1}{2\sqrt{3}}\varrho_\omega^{(8)}\right),\\
n_{\nu_3}(\omega)&=n_\nu^\tot f_\omega
\left(\frac{1}{3}-\frac{1}{\sqrt{3}}\varrho_\omega^{(8)}\right).
\end{align}
\end{subequations}
Noting the difference between the definition of $\varrho_\omega$
for neutrinos and antineutrinos [Eq.~\eqref{eq:varrho}], we have
\begin{subequations}
\label{eq:an123}
\begin{align}
n_{\bar\nu_1}(\omega)&=n_\nu^\tot f_\omega
\left(\frac{1}{3}
-\frac{1}{2}\varrho_\omega^{(3)}-\frac{1}{2\sqrt{3}}\varrho_\omega^{(8)}\right),\\
n_{\bar\nu_2}(\omega)&=n_\nu^\tot f_\omega
\left(\frac{1}{3}
+\frac{1}{2}\varrho_\omega^{(3)}-\frac{1}{2\sqrt{3}}\varrho_\omega^{(8)}\right),\\
n_{\bar\nu_3}(\omega)&=n_\nu^\tot f_\omega
\left(\frac{1}{3}+\frac{1}{\sqrt{3}}\varrho_\omega^{(8)}\right).
\end{align}
\end{subequations}
The relation between $\omega$ and the energy of a neutrino or antineutrino
is described in Eq.~\eqref{eq:omega}.
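Eqs.~\eqref{eq:n123} and \eqref{eq:an123} follow from expanding
$\varrho_\omega$ in the identity and the diagonal Gell-Mann matrices.
A short numerical check (in Python, with arbitrary assumed occupation
fractions) is:
\begin{verbatim}
import numpy as np

lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

occ = np.array([0.5, 0.3, 0.2])        # assumed fractions, Tr = 1
varrho = np.diag(occ)                  # a neutrino mode (omega > 0)

r3 = np.trace(varrho @ lam3)           # varrho^(3)
r8 = np.trace(varrho @ lam8)           # varrho^(8)

# fractions of nu_1, nu_2, nu_3; multiply by n_nu^tot f_omega
n1 = 1/3 + r3/2 + r8/(2*np.sqrt(3))
n2 = 1/3 - r3/2 + r8/(2*np.sqrt(3))
n3 = 1/3 - r8/np.sqrt(3)
print(np.allclose([n1, n2, n3], occ))  # True
\end{verbatim}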
One can define the cross and dot products of
two 8-dimensional vectors $\mathbf{A}$ and $\mathbf{B}$ to be
\begin{subequations}
\begin{align}
\mathbf{A}\times\mathbf{B}&\equiv
-\rmi\,\Tr([\mathsf{A},\mathsf{B}]\lambda_a)\hbe_a
=f_{abc}A^{(b)}B^{(c)}\hbe_a,
\label{eq:cross}
\\
\mathbf{A}\cdot\mathbf{B}&\equiv
2\Tr(\mathsf{A}\mathsf{B})-\frac{2}{3}\Tr(\mathsf{A})\Tr(\mathsf{B})
=A^{(a)}B^{(a)},
\end{align}
\end{subequations}
where $\hbe_a$ is the $a$'th unit vector in the
8-dimensional flavor space, and $f_{abc}$ are the antisymmetric
structure constants of SU(3):
\begin{equation}
\left[\frac{\lambda_a}{2}, \frac{\lambda_b}{2}\right]
=\rmi f_{abc}\frac{\lambda_c}{2}.
\end{equation}
Summation over repeated Gell-Mann indices is implicitly
assumed in the above equations, whether the indices appear as
subscripts or superscripts.
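These definitions are easy to verify numerically. The sketch below
(standard Gell-Mann conventions assumed) confirms, for instance, that
$\hbe_3\times\hbe_8=0$ and that the commutator of $\lambda_1/2$ and
$\lambda_2/2$ reproduces $f_{123}=1$:
\begin{verbatim}
import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)   # Gell-Mann matrices
lam[0][0,1] = lam[0][1,0] = 1
lam[1][0,1], lam[1][1,0] = -1j, 1j
lam[2][0,0], lam[2][1,1] = 1, -1
lam[3][0,2] = lam[3][2,0] = 1
lam[4][0,2], lam[4][2,0] = -1j, 1j
lam[5][1,2] = lam[5][2,1] = 1
lam[6][1,2], lam[6][2,1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

def cross(A, B):   # (A x B)^(a) = -i Tr([A,B] lambda_a)
    C = A @ B - B @ A
    return np.array([-1j*np.trace(C @ l) for l in lam]).real

print(np.allclose(cross(lam[2]/2, lam[7]/2), 0))  # [lam3, lam8] = 0
print(cross(lam[0]/2, lam[1]/2))                  # unit e_3: f_123 = 1
\end{verbatim}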
Using Eqs.~\eqref{eq:eom} and \eqref{eq:cross} we can write
the e.o.m.~for flavor vector $\bmr_\omega$ as
\begin{subequations}
\label{eq:eom-vec}
\begin{align}
\frac{\rmd}{\rmd t}\bmr_\omega
&=-\bmr_\omega \times \mathbf{H}_\omega,
\label{eq:eom-vec1}\\
&=\bmr_\omega \times
\left(\omega\hbe_3+\frac{2\kappa}{\sqrt{3}}\hbe_8\right)
-\mu\bmr_\omega\times
\int_{-\infty}^\infty\!\rmd\omega^\prime\,f_{\omega^\prime} \bmr_{\omega^\prime}.
\label{eq:eom-vec2}
\end{align}
\end{subequations}
The first term in Eq.~\eqref{eq:eom-vec2} corresponds to the
precession of $\bmr_\omega$ around an ``external field'', and the second
term corresponds to a ``spin-spin anti-coupling'' with strength $\mu$.
Eq.~\eqref{eq:eom-vec1} makes it clear that the change in
$\bmr_\omega$ is orthogonal to $\bmr_\omega$ itself, and, therefore,
the magnitude of this quantity does not change. We have
\begin{equation}
|\bmr_\omega|^2=\sum_a[\Tr(\varrho_\omega\lambda_a)]^2=\text{const.}
\end{equation}
This is a natural result that also follows from
the fact that Eqs.~\eqref{eq:eom-nu} and \eqref{eq:eom-anu}
maintain the coherence of $\rho_\bfp$ and $\bar\rho_\bfp$.
Following Ref.~\cite{Duan:2005cp} we define the effective energy
of the system to be
\begin{equation}
\begin{split}
\mathcal{E}&\equiv -\int_{-\infty}^\infty\!\rmd\omega\,
f_\omega\bmr_\omega \cdot
\left(\frac{\omega}{2}\hbe_3+\frac{\kappa}{\sqrt{3}}\hbe_8\right)\\
&\quad+\frac{\mu}{4}\int_{-\infty}^\infty\!\rmd\omega
\int_{-\infty}^\infty\!\rmd\omega^\prime\,
f_\omega f_{\omega^\prime} \bmr_\omega\cdot\bmr_{\omega^\prime}.
\end{split}
\label{eq:E}
\end{equation}
Clearly the effective energy $\mathcal{E}$ of the system
is conserved if $n_\nu^\tot$ is constant.
Although flavor vector $\bmr_\omega$ seems to behave in a way similar to flavor
isospin $\mathbf{s}_\omega$ in two-flavor mixing scenarios, there
are fundamental differences between the 8-dimensional flavor
vector space and the 3-dimensional flavor isospace.
For example, two 8-dimensional vectors $\mathbf{A}$
and $\mathbf{B}$ are ``perpendicular'' or ``parallel'' to each other if
$\mathbf{A}\cdot\mathbf{B}=0$ or $\mathbf{A}\times\mathbf{B}=0$, respectively.
Because there are two linearly independent generators of SU(3)
that commute with each other, for any vector $\mathbf{A}$
one can always find another vector $\mathbf{A}^\prime$ which is both
``perpendicular'' and ``parallel'' to $\mathbf{A}$.
Consequently, generally speaking
\begin{equation}
\mathbf{B}\neq\mathbf{A}\frac{\mathbf{A}\cdot\mathbf{B}}{|\mathbf{A}|^2}
\end{equation}
even if $\mathbf{B}$ is ``parallel'' to $\mathbf{A}$.
The existence of two linearly independent and commuting generators of SU(3)
has another important consequence. Because $[\lambda_3,\lambda_8]=0$,
rotations around $\hbe_3$ and $\hbe_8$
can be viewed as independent. In particular,
the first term in Eq.~\eqref{eq:eom-vec2} can be interpreted as
simultaneous but independent precession of flavor vector $\bmr_\omega$
around $\hbe_3$ and $\hbe_8$ with generally
different angular velocities.
Although the density matrix and flavor vector descriptions
are equivalent, the rotation of a flavor vector in the
8-dimensional flavor space is not easily visualizable.
Therefore, we will base our discussions mostly on
the density matrix formalism with intermittent references
to the flavor vector notation where it seems convenient.
\subsection{Conserved ``Lepton Numbers''%
\label{sec:Ls}}
Multiplying Eq.~\eqref{eq:eom} by $f_\omega$ and integrating it over $\omega$
we obtain the e.o.m.~for $\varrho_\tot$:
\begin{equation}
\rmi\dot{\varrho}_\tot=
\frac{1}{2}
\int_{-\infty}^\infty\!\rmd\omega\,f_\omega\omega[\varrho_\omega,\lambda_3]
+\frac{1}{\sqrt{3}}
\int_{-\infty}^\infty\!\rmd\omega\,f_\omega\kappa[\varrho_\omega,\lambda_8].
\label{eq:eom-rhotot}
\end{equation}
Eq.~\eqref{eq:eom-rhotot} is not a closed equation
from which we could solve for $\varrho_\tot$.
However,
because $\lambda_3$ and $\lambda_8$
commute with each other, it is clear
that the two ``lepton numbers''
\begin{subequations}
\label{eq:Ls}
\begin{align}
L_3&\equiv\varrho_\tot^{(3)}=\int_{-\infty}^\infty\!\rmd\omega\,f_\omega
\Tr(\varrho_\omega\lambda_3)\\
\intertext{and}
L_8&\equiv\varrho_\tot^{(8)}=\int_{-\infty}^\infty\!\rmd\omega\,f_\omega
\Tr(\varrho_\omega\lambda_8)
\end{align}
\end{subequations}
are constants of the motion.
Because $\Tr(\varrho_\tot)$ does not change with time,
the lepton number (fraction) in each vacuum
mass eigenstate is individually conserved.
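These conservation laws are easy to check by direct integration of
Eq.~\eqref{eq:eom}. The sketch below (a toy three-mode gas in Python with
arbitrary assumed parameters, not a supernova model) confirms that $L_3$
and $L_8$ stay constant to integrator accuracy while the individual
$\varrho_\omega$ evolve nontrivially:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

# toy modes: one antineutrino (omega < 0), two neutrinos (arb. units)
omegas = np.array([-1.0, 0.5, 1.5])
f_w    = np.array([0.3, 0.4, 0.3])
kappas = 30.0 * np.sign(omegas)        # toy values, |kappa| >> |omega|
mu     = 10.0

H_vac = [-w*lam3/2 - k*lam8/np.sqrt(3) for w, k in zip(omegas, kappas)]

v = np.array([0.8, 0.6, 0.0], dtype=complex)   # assumed initial state
rho0 = np.array([np.sign(w)*np.outer(v, v.conj()) for w in omegas])

def rhs(t, y):
    rho = (y[:27] + 1j*y[27:]).reshape(3, 3, 3)
    rho_tot = np.tensordot(f_w, rho, axes=1)
    drho = np.empty_like(rho)
    for i in range(3):
        H = H_vac[i] + mu*rho_tot
        drho[i] = -1j*(H @ rho[i] - rho[i] @ H)
    return np.concatenate([drho.real.ravel(), drho.imag.ravel()])

y0  = np.concatenate([rho0.real.ravel(), rho0.imag.ravel()])
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12)

rho_f = (sol.y[:27, -1] + 1j*sol.y[27:, -1]).reshape(3, 3, 3)
for name, lam in (("L3", lam3), ("L8", lam8)):
    L_i = sum(f*np.trace(r @ lam).real for f, r in zip(f_w, rho0))
    L_f = sum(f*np.trace(r @ lam).real for f, r in zip(f_w, rho_f))
    print(name, L_i, L_f)              # equal up to integrator error
\end{verbatim}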
\subsection{Corotating Frame%
\label{sec:cr-frame}}
In the density matrix description, changing from the
static frame to a corotating frame
corresponds to a transformation
\begin{equation}
\varrho_\omega\rightarrow
\tilde\varrho_\omega\equiv e^{\rmi\sfH_\mathrm{cor} t}
\varrho_\omega e^{-\rmi\sfH_\mathrm{cor} t},
\end{equation}
where $\sfH_\mathrm{cor}$ is a $3\times3$ Hermitian matrix
that is common to all neutrino modes and does not change with time.
The flavor density matrix $\tilde\varrho_\omega$
in the corotating frame satisfies an e.o.m.~similar to
that in Eq.~\eqref{eq:eom}:
\begin{equation}
\rmi\dot{\tilde\varrho}_\omega=[\tilde{\sfH}_\omega, \tilde\varrho_\omega],
\end{equation}
where
\begin{equation}
\tilde{\sfH}_\omega\equiv e^{\rmi\sfH_\mathrm{cor} t}\sfH_\omega e^{-\rmi\sfH_\mathrm{cor} t}-\sfH_\mathrm{cor}
\end{equation}
is the Hamiltonian for neutrino mode $\omega$ in the corotating
frame associated with $\sfH_\mathrm{cor}$.
A very special set of corotating frames corresponds to
simultaneous rotations around $\hbe_3$ and $\hbe_8$ with
angular velocities $\Omega$ and $2K/\sqrt{3}$, respectively, and
\begin{equation}
\sfH_\mathrm{cor}=-\Omega\frac{\lambda_3}{2}-K\frac{\lambda_8}{\sqrt{3}}.
\end{equation}
Because $\sfH_\omega^\vac$, $\lambda_3$
and $\lambda_8$ commute with each other,
$\tilde{\sfH}_\omega$ in these special corotating frames
takes the same form as $\sfH_\omega$
in Eq.~\eqref{eq:Ham-omega}
except for the replacements
\begin{subequations}
\begin{align}
\omega&\rightarrow\omega-\Omega,\\
\kappa&\rightarrow\kappa-K,\\
\varrho_\tot&\rightarrow\tilde\varrho_\tot
=e^{\rmi\sfH_\mathrm{cor} t}\varrho_\tot e^{-\rmi\sfH_\mathrm{cor} t}.
\end{align}
\end{subequations}
Also, because the occupation numbers of each
vacuum eigenstate in a neutrino mode $\omega$ are
determined only by $\varrho_\omega^{(3)}$ and $\varrho_\omega^{(8)}$,
the probability for the neutrino mode
to be in the $i$'th vacuum mass eigenstate in these special
corotating frames, $|\tilde{a}_{\nu_i}(\omega)|^2$,
is the same as
the probability for the neutrino mode to be in the same eigenstate
in the static frame, $|a_{\nu_i}(\omega)|^2$.
Therefore, lepton numbers $L_3$ and $L_8$ are not
changed by the corotating-frame transformation either.
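Because everything here is diagonal in the vacuum mass basis, the
replacement rules above can be checked in a few lines (toy parameter
values assumed):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

w, k  = 0.7, -2.0    # toy omega and kappa
Om, K = 0.2, -1.5    # corotating-frame angular velocities
t     = 1.3

H_vac = -w*lam3/2 - k*lam8/np.sqrt(3.0)
H_cor = -Om*lam3/2 - K*lam8/np.sqrt(3.0)

U  = expm(1j*H_cor*t)
Ht = U @ H_vac @ U.conj().T - H_cor   # vacuum part, corotating frame
print(np.allclose(Ht, -(w-Om)*lam3/2 - (k-K)*lam8/np.sqrt(3.0)))
\end{verbatim}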
\section{Adiabatic/Precession Solution
and Stepwise Spectral Swapping%
\label{sec:sol}}
We now seek a natural extension to the adiabatic/precession solution
presented in Ref.~\cite{Raffelt:2007cb}.
This solution, like the $2\times2$ case,
will be essentially a quasi-static solution that, for given $n_\nu^\tot$,
is the same as the ``static'' solution which satisfies the
\textit{precession ansatz}. There is a family of static solutions
for each value of $n_\nu^\tot$. So long as $n_\nu^\tot$ changes slowly,
a particular solution in each family parametrized by $n_\nu^\tot$
is uniquely determined by the initial conditions and by the
\textit{adiabatic ansatz}. This gives the adiabatic/precession solution.
We will discuss the precession ansatz and three-flavor
synchronization in Sec.~\ref{sec:prec}.
In Sec.~\ref{sec:adiabatic} we will discuss
the adiabatic ansatz and outline a formal procedure for
obtaining a three-flavor
adiabatic/precession solution.
In Sec.~\ref{sec:swapping} we will illustrate with a
numerical example how stepwise
spectral swapping arises from the three-flavor
adiabatic/precession solution as $n_\nu^\tot\rightarrow0$.
\subsection{The Precession Ansatz and Synchronization%
\label{sec:prec}}
The \textit{precession ansatz} is that, for constant $n_\nu^\tot$,
it is possible to find a Hermitian matrix
\begin{equation}
\sfH_\mathrm{cor}=-\wpr\frac{\lambda_3}{2}-\kpr\frac{\lambda_8}{\sqrt{3}}
\end{equation}
such that the flavor density matrix $\tilde\varrho_\omega$
is static in the corotating frame associated with $\sfH_\mathrm{cor}$, i.e.
\begin{equation}
[\tilde\sfH_\omega,\tilde\varrho_\omega] = 0.
\label{eq:prec-ansatz}
\end{equation}
From the arguments in Sec.~\ref{sec:cr-frame},
if the precession ansatz is satisfied,
$|a_{\nu_i}(\omega)|^2$ does not change with time.
In this sense, a solution that satisfies the precession
ansatz is ``static''.
When the precession ansatz is satisfied,
each flavor vector
$\bmr_\omega$ will precess uniformly around
$\hbe_3$ and $\hbe_8$. In other words, the system
is in a state that is symmetric about $\hbe_3$ and $\hbe_8$.
Because the e.o.m.~for $\bmr_\omega$ [Eqs.~\eqref{eq:eom-vec}] possess
the same symmetry around the $\hbe_3$ and $\hbe_8$ axes,
we expect ``static'' precession solutions
to exist for constant $n_\nu^\tot$
with appropriate initial conditions.
In particular, this symmetry obtains approximately in
dense neutrino gases where
\begin{equation}
\mu=\sqrt{2}\GF n_\nu^\tot\gg|\kappa|\gg|\omega|
\label{eq:large-n}
\end{equation}
for most neutrino modes.
This conclusion can be shown as follows.
When Eq.~\eqref{eq:large-n} is
satisfied, the conserved effective energy of the system is
[Eq.~\eqref{eq:E}]
\begin{equation}
\mathcal{E}\simeq\frac{\mu}{4}|\bmr_\tot|^2
\end{equation}
and, therefore,
\begin{equation}
|\bmr_\tot|\simeq\text{const.}
\end{equation}
Because the two lepton numbers $L_3=\varrho_\tot^{(3)}$
and $L_8=\varrho_\tot^{(8)}$ are conserved (Sec.~\ref{sec:Ls}),
the total flavor vector $\bmr_\tot$ must precess simultaneously
around $\hbe_3$ and $\hbe_8$ according to
\begin{equation}
\frac{\rmd}{\rmd t}\bmr_\tot\simeq
\bmr_\tot\times\left(\wpr^\infty\hbe_3+\frac{2\kpr^\infty}{\sqrt{3}}\hbe_8\right).
\label{eq:eom-ln-tot}
\end{equation}
Also in this limit, Eq.~\eqref{eq:eom-vec} becomes
\begin{equation}
\frac{\rmd}{\rmd t}\bmr_\omega\simeq
-\mu\bmr_\omega\times\bmr_\tot.
\label{eq:eom-ln}
\end{equation}
Eqs.~\eqref{eq:eom-ln-tot} and \eqref{eq:eom-ln} suggest a
simple geometric picture for the flavor evolution in dense
neutrino gases. On short time scales ($\Delta t\sim\mu^{-1}$),
each flavor vector $\bmr_\omega$ precesses rapidly around the
total flavor vector $\bmr_\tot$. On long time scales
($\Delta t\sim|\wpr^\infty|^{-1},|\kpr^\infty|^{-1}$),
all flavor vectors precess slowly around $\hbe_3$
and $\hbe_8$ with angular velocities $\wpr^\infty$ and $2\kpr^\infty/\sqrt{3}$,
respectively. This is analogous to the synchronization phenomenon
in two-flavor mixing scenarios \cite{Pastor:2001iu}.
\subsection{The Adiabatic Ansatz and Adiabatic/Precession Solutions%
\label{sec:adiabatic}}
When the precession ansatz in Eq.~\eqref{eq:prec-ansatz}
is satisfied, it is possible to find a unitary matrix $\sfX_\omega$
that simultaneously diagonalizes both
$\tilde{\sfH}_\omega$ and $\tilde\varrho_\omega$:
\begin{subequations}
\begin{align}
\sfX_\omega\tilde{\sfH}_\omega\sfX_\omega^\dagger
&=\diag(\wtm{L},\wtm{M},\wtm{H}),
\label{eq:diag-H}\\
\sfX_\omega\tilde\varrho_\omega\sfX_\omega^\dagger
&=\pm\diag(|a_{\ntm{L}}|^2,|a_{\ntm{M}}|^2,|a_{\ntm{H}}|^2),
\label{eq:diag-varrho}
\end{align}
\end{subequations}
where $\wtm{L}<\wtm{M}<\wtm{H}$
are the eigenvalues corresponding to
the eigenstates of $\tilde{\sfH}_\omega$,
$|\ntm{L}(\omega)\rangle$, $|\ntm{M}(\omega)\rangle$
and $|\ntm{H}(\omega)\rangle$.
In Eq.~\eqref{eq:diag-varrho}
the plus (minus) sign is for neutrinos (antineutrinos).
The \textit{adiabatic ansatz} is simply that
\begin{equation}
|a_{\tilde\nu_l}(\omega)|^2=\text{const.}\quad(l=\text{L, M, H})
\label{eq:adiabatic-ansatz}
\end{equation}
as $n_\nu^\tot$ slowly varies with time.
As discussed in Ref.~\cite{Duan:2007fw}, this ``adiabaticity''
criterion connects neutrino systems in different corotating frames
at different values of $n_\nu^\tot$.
Note that this adiabaticity criterion
is different from the meaning of adiabaticity usually adopted in
the literature, e.g., when discussing the MSW mechanism,
which is always based on the static frame.
Following Ref.~\cite{Raffelt:2007cb}, we argue here that the
adiabatic ansatz can be satisfied if, for each
neutrino mode $\omega$,
$\tilde{\bfH}_\omega$ rotates at a speed much slower
than the precession rate of $\tilde{\bmr}_\omega$ around $\tilde{\bfH}_\omega$,
i.e.
\begin{equation}
\gamma\equiv
\frac{|\tilde{\bfH}_\omega\times\rmd \tilde{\bfH}_\omega/\rmd t|}%
{|\tilde{\bfH}_\omega|^3}\ll1.
\label{eq:adiabatic}
\end{equation}
The adiabatic/precession solution can be obtained formally by
employing the following procedure:\begin{enumerate}
\item At any $n_\nu^\tot$ find for each neutrino mode $\omega$
a unitary matrix $\sfX_\omega$
that diagonalizes $\tilde{\sfH}_\omega$. Matrix $\sfX_\omega$
is expressed as a function of $(\tilde\varrho_\tot,\wpr,\kpr)$.
\item\label{step:varrho-omega}
For given initial values of $|a_{\tilde\nu_l}(\omega)|^2$, find
$\tilde\varrho_\omega=
\pm\sfX_\omega^\dagger
\diag(|a_{\ntm{L}}|^2,|a_{\ntm{M}}|^2,|a_{\ntm{H}}|^2)\sfX_\omega$
as a function of $(\tilde\varrho_\tot,\wpr,\kpr)$.
\item For given initial lepton numbers $L_3$ and $L_8$
find $\tilde\varrho_\tot^{(3)}=L_3$ and
$\tilde\varrho_\tot^{(8)}=L_8$.
From the definition
$\tilde\varrho_\tot=\int_{-\infty}^\infty\!\rmd\omega f_\omega
\tilde\varrho_\omega(\tilde\varrho_\tot,\wpr,\kpr)$, solve for the
precession angular velocities
$\wpr$ and $\kpr$ and the remaining components of $\tilde\varrho_\tot$.
\item Find $\tilde\varrho_\omega$ for each neutrino mode $\omega$
using the expression $\tilde\varrho_\omega(\tilde\varrho_\tot,\wpr,\kpr)$
obtained in step \ref{step:varrho-omega}.
\end{enumerate}
We note that the above procedure actually gives a set of equivalent
solutions. This is because the precession solution is symmetric
around $\hbe_3$ and $\hbe_8$. If $(\tilde\varrho_\tot,\wpr,\kpr)$
is a solution, $(\tilde\varrho_\tot^\prime,\wpr,\kpr)$ is also a solution,
where $\tilde\varrho_\tot^\prime$ is related to $\tilde\varrho_\tot$
by two arbitrary phases $\phi_3$ and $\phi_8$:
\begin{equation}
\tilde{\varrho}_\tot^\prime=
\exp\left(-\rmi\phi_3\frac{\lambda_3}{2}-\rmi\phi_8\frac{\lambda_8}{2}\right)
\tilde{\varrho}_\tot
\exp\left(\rmi\phi_3\frac{\lambda_3}{2}+\rmi\phi_8\frac{\lambda_8}{2}\right).
\end{equation}
One can fix these two phases by, e.g., choosing
$\tilde{\varrho}_\tot^{(2)}=\tilde{\varrho}_\tot^{(7)}=0$.
\subsection{Stepwise Spectral Swapping%
\label{sec:swapping}}
Neutrino flavor mixing becomes very simple
in the adiabatic/precession solution
presented in Sec.~\ref{sec:adiabatic} when $n_\nu^\tot\rightarrow0$.
In this limit $\tilde{\sfH}_\omega$ is diagonal in the vacuum
mass basis:
\begin{equation}
\tilde{\sfH}_\omega|_{n_\nu^\tot\rightarrow0}
=\diag(\tilde\omega_1,\tilde\omega_2,
\tilde\omega_3)_{n_\nu^\tot\rightarrow0},
\label{eq:H-cr-0n}
\end{equation}
where
\begin{subequations}
\label{eq:omega123-0n}
\begin{align}
\tilde\omega_1|_{n_\nu^\tot\rightarrow0}
&=-\frac{1}{2}(\omega-\wpr^0)-\frac{1}{3}(\kappa-\kpr^0),\\
\tilde\omega_2|_{n_\nu^\tot\rightarrow0}
&=\frac{1}{2}(\omega-\wpr^0)-\frac{1}{3}(\kappa-\kpr^0),\\
\tilde\omega_3|_{n_\nu^\tot\rightarrow0}
&=\frac{2}{3}(\kappa-\kpr^0),
\end{align}
\end{subequations}
with $\wpr^0$ and $\kpr^0$ being the collective precession angular
velocities as $n_\nu^\tot\rightarrow0$. Eq.~\eqref{eq:H-cr-0n}
shows that $\tilde{\sfH}_\omega|_{n_\nu^\tot\rightarrow0}$ has
3 critical values of $\omega$ at which any two of its eigenvalues are equal.
These critical values are
\begin{subequations}
\begin{align}
\omega^\mathrm{s}_1&=\wpr^0,\\
\omega^\mathrm{s}_2&=\frac{\Delta m_{21}^2}{\Delta m_{31}^2}
\left(\kpr^0+\frac{1}{2}\wpr^0\right),\\
\omega^\mathrm{s}_3&=\frac{\Delta m_{21}^2}{\Delta m_{32}^2}
\left(\kpr^0-\frac{1}{2}\wpr^0\right).
\end{align}
\end{subequations}
In practice, however, two of the critical points are usually
indistinguishable
\begin{subequations}
\begin{align}
\omega^\mathrm{s}_1&=\omega^\mathrm{s}_\odot\equiv\wpr^0,\\
\omega^\mathrm{s}_2&\simeq\omega^\mathrm{s}_3\simeq\omega^\mathrm{s}_\mathrm{atm}\equiv\pm\frac{\dmsol}{\dmatm}\kpr^0,
\label{eq:watm}
\end{align}
\end{subequations}
because $\dmatm\gg\dmsol$ and, therefore, $|\kpr^0|\gg|\wpr^0|$.
In Eq.~\eqref{eq:watm}
the positive (negative) sign is for the normal (inverted)
neutrino mass hierarchy.
A critical value $\omega^\mathrm{s}$ corresponds to the energy where
``stepwise spectral swapping'' or a ``spectral split'' occurs.
If $\omega^\mathrm{s}>0$, the spectral swap occurs at neutrino energy
\begin{equation}
E^\mathrm{s}\simeq\frac{\dmsol}{2|\omega^\mathrm{s}|}
\end{equation}
in the neutrino sector.
If $\omega^\mathrm{s}<0$, the swap is located in the antineutrino sector
at energy $E^\mathrm{s}$.
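For orientation, the conversion between $\omega$ and $E$ in convenient
units can be sketched as follows (the choice of units here is our
assumption, not notation from the text):
\begin{verbatim}
hbar_c = 1.9733e-16   # MeV km

def omega_per_km(E_MeV, dm2_eV2=8e-5):
    # |omega| = dm2/(2E), expressed in km^-1
    return dm2_eV2 * 1e-12 / (2.0 * E_MeV * hbar_c)

def E_swap_MeV(omega_s, dm2_eV2=8e-5):
    # E^s = dm2/(2|omega^s|), the inverse relation
    return dm2_eV2 * 1e-12 / (2.0 * abs(omega_s) * hbar_c)

w = omega_per_km(10.0)        # a 10 MeV neutrino
print(w, E_swap_MeV(w))       # ~2.0e-2 km^-1, and 10.0 MeV back
\end{verbatim}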
We can illustrate stepwise spectral swapping by
using the following test example.
We assume a bare, hot, spherical neutron star that
isotropically emits neutrinos directly into the vacuum from its
infinitely thin neutrino sphere. We adopt the
single-angle approximation and take the radius
of the neutrino sphere to be $R_\nu=30$ km, and the luminosity for
each neutrino species to be
$L_\nu=10^{52}\,\mathrm{erg/s}$.
We take the energy spectra of neutrinos to be
of the Fermi-Dirac form:
\begin{equation}
f_\nu(E)=\frac{1}{T_\nu^3 F_2(\eta_\nu)}
\frac{E^2}{\exp(E/T_\nu-\eta_\nu)+1},
\end{equation}
where
\begin{equation}
F_k(\eta)=\int_0^\infty\frac{x^k\rmd x}{\exp(x-\eta)+1}.
\end{equation}
We take degeneracy parameters to be the same for all
neutrino species, $\eta_\nu=3$, and we choose $T_\nu$ to be such
that the average energies for various neutrino species are
$\langle E_{\nu_e}\rangle=11$ MeV, $\langle E_{\bar\nu_e}\rangle=16$ MeV
and $\langle E_{\nu_\mu}\rangle=\langle E_{\bar\nu_\mu}\rangle
=\langle E_{\nu_\tau}\rangle=\langle E_{\bar\nu_\tau}\rangle=25$ MeV,
respectively.
For the neutrino mixing parameters
(see, e.g., Ref.~\cite{PDBook} for our conventions)
we take $\theta_{12}=0.6$,
$\theta_{13}=0.1$, $\theta_{23}=\pi/4$, $\delta=0$,
$\Delta m_{21}^2=8\times10^{-5}\,\mathrm{eV}^2$ and
$\Delta m_{32}^2=-3\times10^{-3}\,\mathrm{eV}^2$
(inverted neutrino mass hierarchy).
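For the Fermi-Dirac form above, $\langle E_\nu\rangle=T_\nu
F_3(\eta_\nu)/F_2(\eta_\nu)$, so the temperatures are fixed by the assumed
mean energies. A minimal sketch of this bookkeeping:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

eta = 3.0

def F(k):
    return quad(lambda x: x**k/(np.exp(x - eta) + 1.0), 0.0, 200.0)[0]

ratio = F(3)/F(2)    # <E>/T for this degeneracy parameter
for name, Eavg in (("nu_e", 11.0), ("bar nu_e", 16.0), ("nu_x", 25.0)):
    print(name, "T =", Eavg/ratio, "MeV")
\end{verbatim}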
\begin{figure*}
\begin{center}
\includegraphics*[width=\textwidth, keepaspectratio]{fig1.eps}
\end{center}
\caption{\label{fig:swap}(Color online)
Conversion probabilities $P(\nu_\alpha\rightarrow\nu_i)$ (left panels)
and $P(\bar\nu_\alpha\rightarrow\bar\nu_i)$ (right panels) as functions of
neutrino and antineutrino energies $E_\nu$ and $E_{\bar\nu}$,
respectively, in the bare, hot neutron star example.
The top, middle and bottom panels are for neutrinos that are
initially in pure $e$, $\mu$ and $\tau$ flavors, respectively.
The solid, dashed and dot-dashed lines are for neutrinos
that end up in the 1st, 2nd and 3rd vacuum mass eigenstates, respectively,
when $n_\nu^\tot\rightarrow0$.}
\end{figure*}
We define $P(\nu_\alpha\rightarrow\nu_i)$ and
$P(\bar\nu_\alpha\rightarrow\bar\nu_i)$ as
the probabilities for neutrinos and antineutrinos that are
initially in pure $\alpha$ flavor state at the neutrino sphere
to end up
in the $i$'th vacuum mass eigenstate
as $n_\nu^\tot\rightarrow0$.
In Fig.~\ref{fig:swap} we show neutrino conversion
probabilities $P(\nu_\alpha\rightarrow\nu_i)$ and
$P(\bar\nu_\alpha\rightarrow\bar\nu_i)$
as functions
of neutrino energies in our test example.
We observe
that while $P(\nu_\alpha\rightarrow\nu_i)$ shows two swaps at
$E^\mathrm{s}_\odot\simeq5.2$ MeV and $E^\mathrm{s}_\mathrm{atm}\simeq8.4$ MeV, respectively,
$P(\bar\nu_\alpha\rightarrow\bar\nu_i)$ shows no swap at all. This
phenomenon can be explained using the adiabatic/precession
solution discussed above.
\begin{table}[ht]
\caption{\label{tab:nu-ini}The correspondence between neutrino states
$|\tilde\nu_l(\omega)\rangle$ and $|\nu_\alpha\rangle$ or
$|\bar\nu_\alpha\rangle$
at $r=R_\nu$ for the inverted neutrino
mass hierarchy case with mixing angles
$\theta_{13}\simeq0$ and $\theta_{23}\simeq\pi/4$.}
\begin{ruledtabular}
\begin{tabular}{c|c|c|c}
& $\omega<0$ & $0<\omega<\omega_\mathrm{atm}^\mathrm{sync}$
& $\omega>\omega_\mathrm{atm}^\mathrm{sync}$\\
\hline
$|\ntm{L}(\omega)\rangle$
& $\frac{1}{\sqrt{2}}(|\bar\nu_\mu\rangle-|\bar\nu_\tau\rangle)$
& $\frac{1}{\sqrt{2}}(|\nu_\mu\rangle-|\nu_\tau\rangle)$
& $\frac{1}{\sqrt{2}}(|\nu_\mu\rangle+|\nu_\tau\rangle)$\\
$|\ntm{M}(\omega)\rangle$
& $\frac{1}{\sqrt{2}}(|\bar\nu_\mu\rangle+|\bar\nu_\tau\rangle)$
& $\frac{1}{\sqrt{2}}(|\nu_\mu\rangle+|\nu_\tau\rangle)$
& $\frac{1}{\sqrt{2}}(|\nu_\mu\rangle-|\nu_\tau\rangle)$\\
$|\ntm{H}(\omega)\rangle$& $|\bar\nu_e\rangle$
& $|\nu_e\rangle$ & $|\nu_e\rangle$
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[ht]
\caption{\label{tab:nu-fin}The correspondence between neutrino states
$|\tilde\nu_l(\omega)\rangle$ and $|\nu_i\rangle$ or $|\bar\nu_i\rangle$
as $n_\nu^\tot\rightarrow0$ for the
inverted mass hierarchy case and for swap points
with hierarchy $\omega^\mathrm{s}_\odot>\omega^\mathrm{s}_\mathrm{atm}>0$.}
\begin{ruledtabular}
\begin{tabular}{c|c|c|c|c}
& $\omega<0$ & $0<\omega<\omega^\mathrm{s}_\mathrm{atm}$ & $\omega^\mathrm{s}_\mathrm{atm}<\omega<\omega^\mathrm{s}_\odot$ & $\omega>\omega^\mathrm{s}_\odot$\\
\hline
$|\ntm{L}(\omega)\rangle$& $|\bar\nu_2\rangle$ & $|\nu_2\rangle$
& $|\nu_3\rangle$ & $|\nu_3\rangle$ \\
$|\ntm{M}(\omega)\rangle$& $|\bar\nu_1\rangle$ & $|\nu_1\rangle$
& $|\nu_2\rangle$ & $|\nu_1\rangle$ \\
$|\ntm{H}(\omega)\rangle$& $|\bar\nu_3\rangle$& $|\nu_3\rangle$
& $|\nu_1\rangle$ & $|\nu_2\rangle$
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[ht]
\caption{\label{tab:P} Nonzero neutrino conversion probabilities
$P(\nu_\alpha\rightarrow\nu_i)$ and $P(\bar\nu_\alpha\rightarrow\bar\nu_i)$
based on Tables \ref{tab:nu-ini} and \ref{tab:nu-fin}
and the adiabaticity ansatz in Eq.~\eqref{eq:equal-as}.}
\begin{ruledtabular}
\begin{tabular}{c|c}
energy range & nonzero conversion probabilities\\
\hline
any $E_{\bar\nu}$ &
$P(\bar\nu_e\rightarrow\bar\nu_3)=
2P(\bar\nu_{\mu,\tau}\rightarrow\bar\nu_{1,2})=1$\\
$E_\nu>E^\mathrm{s}_\mathrm{atm}$ &
$P(\nu_e\rightarrow\nu_3)=
2P(\nu_{\mu,\tau}\rightarrow\nu_{1,2})=1$\\
$E^\mathrm{s}_\odot<E_\nu<E^\mathrm{s}_\mathrm{atm}$ &
$P(\nu_e\rightarrow\nu_1)=
2P(\nu_{\mu,\tau}\rightarrow\nu_{2,3})=1$\\
$E_\nu<E^\mathrm{s}_\odot$ &
$P(\nu_e\rightarrow\nu_2)=
2P(\nu_{\mu,\tau}\rightarrow\nu_{1,3})=1$
\end{tabular}
\end{ruledtabular}
\end{table}
Assuming that the precession ansatz in Eq.~\eqref{eq:prec-ansatz} is satisfied,
we can diagonalize the Hamiltonian $\tilde{\sfH}_\omega|_{r=R_\nu}$
in the corotating frame in which all flavor vectors are static.
We denote by $\wtm{L}<\wtm{M}<\wtm{H}$ the eigenvalues corresponding to the
eigenstates $|\ntm{L}\rangle$, $|\ntm{M}\rangle$ and $|\ntm{H}\rangle$
of $\tilde{\sfH}_\omega|_{r=R_\nu}$, respectively.
Because $n_\nu^\tot$ is very large at the neutrino sphere, we have
\begin{equation}
\sqrt{2}\GF (n_{\nu_e}-n_{\bar\nu_e}) \gg |\omega-\wpr^\infty|, |\kappa-\kpr^\infty|
\end{equation}
for most neutrino modes. Therefore,
\begin{equation}
|\ntm{H}(\omega)\rangle_{r=R_\nu}\simeq\left\{\begin{array}{ll}
|\nu_e\rangle& \text{ if } \omega>0,\\
|\bar\nu_e\rangle& \text{ if } \omega<0.
\end{array}\right.
\end{equation}
We note that $\wpr$ and $\kpr$ are essentially a kind of average
of $\omega$ and $\kappa$ in the system. In our example, neutrinos
(instead of antineutrinos) are the dominant species and
the neutrino mass hierarchy is inverted. So we expect
$\wpr>0>\kpr$ at any value of $n_\nu^\tot$.
We can diagonalize the $\mu\tau$-submatrix of $\tilde{\sfH}_\omega^\vac$
in the flavor basis and obtain
\begin{subequations}
\begin{align}
|\ntm{L}(\omega)\rangle_{r=R_\nu}&\simeq
\frac{1}{\sqrt{2}}(|\bar\nu_\mu\rangle-|\bar\nu_\tau\rangle),\\
|\ntm{M}(\omega)\rangle_{r=R_\nu}&\simeq
\frac{1}{\sqrt{2}}(|\bar\nu_\mu\rangle+|\bar\nu_\tau\rangle),
\end{align}
\end{subequations}
if $\omega<0$, and
\begin{subequations}
\label{eq:ntm-Rn}
\begin{align}
|\ntm{L}(\omega)\rangle_{r=R_\nu}&\simeq
\frac{1}{\sqrt{2}}(|\nu_\mu\rangle\mp|\nu_\tau\rangle),\\
|\ntm{M}(\omega)\rangle_{r=R_\nu}&\simeq
\frac{1}{\sqrt{2}}(|\nu_\mu\rangle\pm|\nu_\tau\rangle),
\end{align}
\end{subequations}
if $\omega>0$. In Eq.~\eqref{eq:ntm-Rn},
the upper and lower signs are for the cases
where $\omega$ is smaller or larger than
\begin{equation}
\omega_\mathrm{atm}^\mathrm{sync}=-\kpr^\infty\frac{\dmsol}{\dmatm},
\end{equation}
respectively.
Here $|\ntm{L}(\omega)\rangle_{r=R_\nu}$ and
$|\ntm{M}(\omega)\rangle_{r=R_\nu}$
are ``equal mixes'' of $|\nu_\mu\rangle$ and $|\nu_\tau\rangle$
or $|\bar\nu_\mu\rangle$ and $|\bar\nu_\tau\rangle$. These
results are summarized in
Table \ref{tab:nu-ini}.
Far away from the neutron star where $n_\nu^\tot\rightarrow0$, we obtain
$|\tilde\nu_l\rangle_{n_\nu^\tot\rightarrow0}$ using
Eqs.~\eqref{eq:H-cr-0n} and \eqref{eq:omega123-0n}.
For $\omega<0$ (antineutrinos) we have
\begin{subequations}
\begin{align}
|\ntm{L}(\omega)\rangle_{n_\nu^\tot\rightarrow0}&=|\bar\nu_2\rangle,\\
|\ntm{M}(\omega)\rangle_{n_\nu^\tot\rightarrow0}&=|\bar\nu_1\rangle,\\
|\ntm{H}(\omega)\rangle_{n_\nu^\tot\rightarrow0}&=|\bar\nu_3\rangle.
\end{align}
\end{subequations}
Assuming that $\omega^\mathrm{s}_\odot>\omega^\mathrm{s}_\mathrm{atm}>0$, we summarize the correspondence
between $|\tilde\nu_l(\omega)\rangle$ and $|\nu_i\rangle$
for $n_\nu^\tot\rightarrow0$ in Table \ref{tab:nu-fin}.
If the adiabatic ansatz in Eq.~\eqref{eq:adiabatic-ansatz} is satisfied,
we have
\begin{equation}
|a_{\tilde\nu_l}(\omega)|^2_{r=R_\nu}
=|a_{\tilde\nu_l}(\omega)|^2_{n_\nu^\tot\rightarrow0}
\label{eq:equal-as}
\end{equation}
for each neutrino mode $\omega$. Using Eq.~\eqref{eq:equal-as}
and Tables \ref{tab:nu-ini} and \ref{tab:nu-fin}, we can obtain
the neutrino conversion probabilities
$P(\nu_\alpha\rightarrow\nu_i)$ and $P(\bar\nu_\alpha\rightarrow\bar\nu_i)$
for each neutrino mode $\omega$.
In Table \ref{tab:P} we summarize the values of
nonzero neutrino conversion probability
when the flavor evolution of the neutrino gas follows the
adiabatic/precession solution.
The results in Table \ref{tab:P} and Fig.~\ref{fig:swap}
are in good agreement.
The neutrino spectral swapping feature in
Fig.~\ref{fig:swap} indeed can be explained by the adiabatic/precession
solution.
As in two-flavor scenarios, the swapping energies $E^\mathrm{s}_\odot$ and $E^\mathrm{s}_\mathrm{atm}$
can be obtained by using the conservation of lepton numbers
and by assuming that
the spectral swaps are infinitely sharp in energy.
From conservation of $L_8=\varrho_\tot^{(8)}$ we have
\begin{widetext}
\begin{subequations}
\label{eq:L8-Ecatm}
\begin{eqnarray}
\sqrt{3}L_8&\stackrel{r=R_\nu}{\simeq}&
\int_0^\infty\!\rmd E\,
(\tilde{f}_{\nu_e}-\tilde{f}_{\bar\nu_e}),\\
&\stackrel{n_\nu^\tot\rightarrow0}{\simeq}&
\int_0^{E^\mathrm{s}_\mathrm{atm}}\!\rmd E\,(\tilde{f}_{\nu_e}-\tilde{f}_{\nu_x})
+\int_{E^\mathrm{s}_\mathrm{atm}}^\infty\!\rmd E\,(2\tilde{f}_{\nu_x}-2\tilde{f}_{\nu_e})
-\int_0^\infty\!\rmd E\,(2\tilde{f}_{\bar\nu_x}-2\tilde{f}_{\bar\nu_e}),
\end{eqnarray}
\end{subequations}
\end{widetext}
where $\tilde{f}_\nu(E)$ is a distribution function which satisfies
\begin{equation}
\tilde{f}_\nu(E)\propto f_\nu(E)
\end{equation}
with normalization condition
\begin{equation}
\sum_\nu\int_0^\infty\!\rmd E\,\tilde{f}_\nu(E)=1,
\end{equation}
and
\begin{equation}
\tilde{f}_{\nu_x}=\tilde{f}_{\nu_\mu}=\tilde{f}_{\bar\nu_\mu}
=\tilde{f}_{\nu_\tau}=\tilde{f}_{\bar\nu_\tau}.
\end{equation}
From Eq.~\eqref{eq:L8-Ecatm} we can determine that $E^\mathrm{s}_\mathrm{atm}\simeq8.4$ MeV.
We also can obtain $E^\mathrm{s}_\odot$ from the conservation of $L_3=\varrho_\tot^{(3)}$:
\begin{subequations}
\label{eq:L3-Ecsol}
\begin{eqnarray}
L_3&\stackrel{r=R_\nu}{\simeq}&
\int_0^\infty\!\rmd E\,
\cos2\theta_{12}(\tilde{f}_{\nu_e}-\tilde{f}_{\bar\nu_e}),\\
&\stackrel{n_\nu^\tot\rightarrow0}{\simeq}&
\int_0^{E^\mathrm{s}_\odot}\!\rmd E\,(\tilde{f}_{\nu_x}-\tilde{f}_{\nu_e})\nonumber\\
&&+\int_{E^\mathrm{s}_\odot}^{E^\mathrm{s}_\mathrm{atm}}\!\rmd E\,(\tilde{f}_{\nu_e}-\tilde{f}_{\nu_x}).
\end{eqnarray}
\end{subequations}
This implies $E^\mathrm{s}_\odot\simeq5.3$ MeV.
The values for $E^\mathrm{s}_\mathrm{atm}$ and $E^\mathrm{s}_\odot$ derived from
the adiabatic/precession solution are also in good
agreement with the numerical results
shown in Fig.~\ref{fig:swap}.
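The two conservation conditions are straightforward to solve
numerically. The sketch below implements Eqs.~\eqref{eq:L8-Ecatm} and
\eqref{eq:L3-Ecsol} for the spectra and parameters of this example; the
species weights $\propto L_\nu/\langle E_\nu\rangle$ follow from
Eq.~\eqref{eq:sn-ntot} with equal luminosities, and the roots should
reproduce $E^\mathrm{s}_\mathrm{atm}\simeq8.4$ MeV and
$E^\mathrm{s}_\odot\simeq5.3$ MeV:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

eta  = 3.0
Eavg = {"nu_e": 11.0, "bnu_e": 16.0, "nu_x": 25.0}   # MeV

def F(k):
    return quad(lambda x: x**k/(np.exp(x - eta) + 1.0), 0.0, 200.0)[0]

F2, F3 = F(2), F(3)
T = {s: E*F2/F3 for s, E in Eavg.items()}    # from <E> = T F_3/F_2

def f(E, s):   # unit-normalized Fermi-Dirac spectrum
    return E**2/(T[s]**3*F2*(np.exp(E/T[s] - eta) + 1.0))

# species weights ~ L/<E>, equal luminosities, 4 nu_x-like species
w = {s: 1.0/Eavg[s] for s in Eavg}
norm = w["nu_e"] + w["bnu_e"] + 4.0*w["nu_x"]
for s in w: w[s] /= norm

def g(A):      # int_0^A (f~_nu_e - f~_nu_x) dE
    return quad(lambda E: w["nu_e"]*f(E, "nu_e")
                        - w["nu_x"]*f(E, "nu_x"), 0.0, A)[0]

sqrt3_L8 = w["nu_e"] - w["bnu_e"]
rhs = lambda A: (3.0*g(A) + 2.0*(w["nu_x"] - w["nu_e"])
                          - 2.0*(w["nu_x"] - w["bnu_e"]))
E_atm = brentq(lambda A: rhs(A) - sqrt3_L8, 0.5, 15.0)

L3 = np.cos(2*0.6)*sqrt3_L8                  # theta_12 = 0.6
E_sol = brentq(lambda A: g(E_atm) - 2.0*g(A) - L3, 0.5, E_atm)
print(E_atm, E_sol)                          # ~8.4 and ~5.3 MeV
\end{verbatim}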
We note that, in this numerical example, neutrinos of the $\mu$
and $\tau$ flavors are equally mixed and have identical energy
spectra initially. In this particular case,
the two-flavor approximation
with effective mixing parameters
$\Delta m^2\simeq-\dmatm$ and $\theta\simeq\theta_{13}$ produces
a similar spectral swapping feature around $E^\mathrm{s}_{2\times2}=E^\mathrm{s}_\mathrm{atm}$.
In the two-flavor mixing scenario,
the value for $E^\mathrm{s}_{2\times2}$ is also determined by the conservation
of a lepton number
\begin{subequations}
\label{eq:L-2x2}
\begin{eqnarray}
L_{2\times2}&\stackrel{r=R_\nu}{\simeq}
&I_1+I_2,\\
&\stackrel{n_\nu^\tot\rightarrow0}{\simeq}
&I_1-I_2,
\end{eqnarray}
\end{subequations}
where
\begin{subequations}
\begin{align}
I_1&=\int_0^{E^\mathrm{s}_{2\times2}}\!\rmd E\,(\tilde{f}_{\nu_e}-\tilde{f}_{\nu_x}),\\
I_2&=\int_{E^\mathrm{s}_{2\times2}}^\infty\!\rmd E\,(\tilde{f}_{\nu_e}-\tilde{f}_{\nu_x})
-\int_0^\infty\!\rmd E\,(\tilde{f}_{\bar\nu_e}-\tilde{f}_{\nu_x}).
\end{align}
\end{subequations}
From the conservation of $L_{2\times2}$ in Eq.~\eqref{eq:L-2x2} it is easy to see that $I_2=0$.
Comparing Eqs.~\eqref{eq:L8-Ecatm}
and \eqref{eq:L-2x2}, we see that a solution for $E^\mathrm{s}_{2\times2}$ in Eq.~\eqref{eq:L-2x2}
is a solution for $E^\mathrm{s}_\mathrm{atm}=E^\mathrm{s}_{2\times2}$ in Eq.~\eqref{eq:L8-Ecatm}.
Therefore, the two-flavor spectral swap phenomenon is completely
consistent with the
three-flavor spectral swap phenomenon at the atmospheric
mass-squared-difference scale.
We also note that, in this example, there would exist only one
spectral swap at energy $E_\nu\simeq E^\mathrm{s}_\mathrm{atm}$ if we had
chosen the other hierarchy for swap points, i.e.
$\omega^\mathrm{s}_\mathrm{atm}>\omega^\mathrm{s}_\odot>0$. It is generally not
possible for a single spectral swap to satisfy the conservation
of both lepton numbers and, therefore, this hierarchy for
swap points is not physical.
The example given here can be generalized into
a generic procedure to predict neutrino spectral swaps:\begin{enumerate}
\item\label{step:nu-ini} Diagonalize $\tilde\sfH_\omega$ when $n_\nu^\tot$ is
large, and find the correspondence between $|\tilde\nu_l(\omega)\rangle$
and $|\nu_\alpha\rangle$ or $|\bar\nu_\alpha\rangle$
as was done in Table \ref{tab:nu-ini}.
\item\label{step:estimate-Es} Estimate
the approximate locations of the swap points
as $n_\nu^\tot\rightarrow0$.
The swap points are expected to be in the neutrino sector
if the system is dominated by neutrinos instead of antineutrinos.
Pick a hierarchy for swap points, i.e., whether $E^\mathrm{s}_\mathrm{atm}>E^\mathrm{s}_\odot$
or $E^\mathrm{s}_\mathrm{atm}<E^\mathrm{s}_\odot$.
\item\label{step:nu-fin} Obtain from Eq.~\eqref{eq:omega123-0n}
the correspondence between $|\tilde\nu_l(\omega)\rangle$
and $|\nu_i\rangle$ or $|\bar\nu_i\rangle$ for $n_\nu^\tot\rightarrow0$,
as was done
in Table \ref{tab:nu-fin}.
\item\label{step:P} Use
the results in step \ref{step:nu-ini} and \ref{step:nu-fin}
and the adiabatic ansatz in Eq.~\eqref{eq:equal-as} to find
neutrino conversion probabilities $P(\nu_\alpha\rightarrow\nu_i)$ and
$P(\bar\nu_\alpha\rightarrow\bar\nu_i)$, as was done in Table \ref{tab:P}.
\item Find lepton numbers $L_3$ and $L_8$ as $n_\nu^\tot\rightarrow0$.
These will be functions of
$E^\mathrm{s}_\odot$ and $E^\mathrm{s}_\mathrm{atm}$ when the initial energy spectra
$f_\nu(E)$ and the results in step \ref{step:P} are used.
\item\label{step:solve-Es} Solve
for $E^\mathrm{s}_\odot$ and $E^\mathrm{s}_\mathrm{atm}$ by using lepton number conservation
$L_3|_{t=0}=L_3|_{n_\nu^\tot\rightarrow0}(E^\mathrm{s}_\odot,E^\mathrm{s}_\mathrm{atm})$
and $L_8|_{t=0}=L_8|_{n_\nu^\tot\rightarrow0}(E^\mathrm{s}_\odot,E^\mathrm{s}_\mathrm{atm})$.
If no consistent solution can be found, pick the other
hierarchy for swap points $E^\mathrm{s}_\odot$ and $E^\mathrm{s}_\mathrm{atm}$ in step \ref{step:estimate-Es}
and repeat steps \ref{step:nu-fin}--\ref{step:solve-Es}.
\end{enumerate}
\section{Adiabatic/Precession Solutions in a Dominant Matter
Background%
\label{sec:matt}}
\subsection{Effects of Neutrino-Electron Forward Scattering}
In the presence of ordinary matter, Eq.~\eqref{eq:eom} is
still valid except that the Hamiltonian for neutrino mode $\omega$
becomes
\begin{equation}
\sfH_\omega=\sfH_\omega^\vac
+\sfH^\matt+\mu\varrho_\tot,
\end{equation}
where $\sfH^\matt$ is the Hamiltonian contribution arising from
neutrino-electron forward scattering. In general this term
will give different
refractive indices for $\nu_e/\bar\nu_e$ and $\nu_{\mu,\tau}/\bar\nu_{\mu,\tau}$
\cite{Wolfenstein:1977ue,Wolfenstein:1979ni,Mikheyev:1985aa}.
Ignoring the trace term, in the flavor basis we can write
\begin{equation}
\sfH^\matt=\sqrt{2}\GF \nb\diag(Y_e,0,0),
\label{eq:Hmatt}
\end{equation}
where $\nb$ is the number density of baryons, and $Y_e$
is the electron fraction. MSW resonances
can occur if $n_e=\nb Y_e$ is small and comparable to $|\omega|/\sqrt{2}\GF$
or $|\kappa|/\sqrt{2}\GF$. In this section, however, we will assume
$n_e$ to be constant and very large for most neutrino modes:
\begin{equation}
n_e\gg\frac{|\kappa|}{\sqrt{2}\GF}.
\end{equation}
Because $\sfH^\matt$ is independent of $\omega$, it vanishes
in the corotating frame picked out by
\begin{equation}
\sfH_\mathrm{cor}=\sfH^\matt.
\label{eq:cr-matt}
\end{equation}
In this corotating frame, the vacuum
Hamiltonian for neutrino mode $\omega$ becomes
\begin{widetext}
\begin{equation}
\tilde{\sfH}_\omega^\vac\simeq
\frac{\omega}{\Delta m_{21}^2}
\begin{pmatrix}
m_1^2c_{12}^2+m_2^2s_{12}^2
& \tilde{h}_{12}(t) &\tilde{h}_{13}(t)\\
\tilde{h}_{12}^*(t)
& \frac{1}{2}(m_3^2+m_1^2s_{12}^2+m_2^2c_{12}^2)
& \frac{1}{2}(m_3^2-m_1^2s_{12}^2-m_2^2c_{12}^2)\\
\tilde{h}_{13}^*(t)
& \frac{1}{2}(m_3^2-m_1^2s_{12}^2-m_2^2c_{12}^2)
& \frac{1}{2}(m_3^2+m_1^2s_{12}^2+m_2^2c_{12}^2)
\end{pmatrix}
\label{eq:HV-cr}
\end{equation}
\end{widetext}
in the flavor basis, where $c_{ij}=\cos\theta_{ij}$,
$s_{ij}=\sin\theta_{ij}$, and $\tilde{h}_{12}(t)$ and $\tilde{h}_{13}(t)$
are functions that oscillate with angular frequency $\sqrt{2}\GF n_e$.
In deriving Eq.~\eqref{eq:HV-cr}
we have taken $\theta_{13}\simeq0$ and $\theta_{23}\simeq\pi/4$.
Because
\begin{equation}
\sqrt{2}\GF n_e\gg|\tilde{h}_{12}(t)|,|\tilde{h}_{13}(t)|,
\end{equation}
we expect $\tilde{h}_{12}(t)$ and $\tilde{h}_{13}(t)$ to average to 0
and, therefore, to have little effect on neutrino flavor evolution.
Setting $\tilde{h}_{12}(t)$ and $\tilde{h}_{13}(t)$ to 0, we can
diagonalize $\tilde\sfH_\omega^\vac$:
\begin{equation}
\tilde\sfH_\omega^\vac\rightarrow
\frac{\omega}{\Delta m_{21}^2}\diag(m_1^{\prime2},m_2^{\prime2},m_3^{\prime2}),
\end{equation}
where
\begin{subequations}
\begin{align}
m_1^{\prime2}&\simeq c_{12}^2m_1^2+s_{12}^2m_2^2,\\
m_2^{\prime2}&\simeq s_{12}^2m_1^2+c_{12}^2m_2^2,\\
m_3^{\prime2}&\simeq m_3^2
\end{align}
\end{subequations}
are the effective mass-squared values for neutrino states
\begin{subequations}
\begin{align}
|\nu_1^\prime\rangle&\simeq|\nu_e\rangle,\\
|\nu_2^\prime\rangle&\simeq\frac{1}{\sqrt{2}}(|\nu_\mu\rangle-|\nu_\tau\rangle),
\\
|\nu_3^\prime\rangle&\simeq\frac{1}{\sqrt{2}}(|\nu_\mu\rangle+|\nu_\tau\rangle).
\end{align}
\end{subequations}
We note that the flavor density matrix $\tilde\varrho_\omega$ in the corotating
frame associated with $\sfH_\mathrm{cor}=\sfH^\matt$ obeys an e.o.m.~similar to that
obeyed by
$\varrho_\omega$ in vacuum [Eq.~\eqref{eq:eom}], except for small
perturbations occurring
on very short time scales [$\Delta t\sim(\sqrt{2}\GF n_e)^{-1}$].
The only difference is that
the presence of a large net electron background breaks
the ``degeneracy'' between
$\nu_e/\bar\nu_e$ and $\nu_{\mu,\tau}/\bar\nu_{\mu,\tau}$.
Therefore, we can obtain adiabatic/precession solutions using the same
procedure listed in Sec.~\ref{sec:adiabatic} but with the replacements
\begin{equation}
|\nu_i\rangle\longrightarrow|\nu_i^\prime\rangle
\quad\text{and}\quad
m_i^2\longrightarrow m_i^{\prime2}.
\end{equation}
Likewise, the conserved lepton numbers should be calculated in the
$|\nu_i^\prime\rangle$ basis instead of the $|\nu_i\rangle$ basis.
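The effective states and mass-squared values above can be checked by
brute-force diagonalization with a dominant (assumed) matter potential:
\begin{verbatim}
import numpy as np

def R(i, j, th):
    M = np.eye(3); c, s = np.cos(th), np.sin(th)
    M[i, i] = M[j, j] = c; M[i, j] = s; M[j, i] = -s
    return M

t12, t13, t23 = 0.6, 0.0, np.pi/4
U  = R(1, 2, t23) @ R(0, 2, t13) @ R(0, 1, t12)  # delta = 0
m2 = np.array([0.0, 8e-5, 8e-5 - 3e-3])          # eV^2, inverted

H_vac = U @ np.diag(m2) @ U.T                    # flavor basis, 2E = 1
A     = 1e5*np.max(np.abs(m2))                   # dominant n_e term
vals  = np.linalg.eigvalsh(H_vac + np.diag([A, 0.0, 0.0]))

c2, s2 = np.cos(t12)**2, np.sin(t12)**2
expected = np.sort([c2*m2[0] + s2*m2[1] + A,     # goes with |nu_e>
                    s2*m2[0] + c2*m2[1],         # (nu_mu - nu_tau)/sqrt2
                    m2[2]])                      # (nu_mu + nu_tau)/sqrt2
print(np.allclose(vals, expected, atol=1e-7))    # True
\end{verbatim}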
\subsection{Effects of Virtual Charged Leptons}
At very large matter density, virtual $\mu$ and $\tau$ states
contribute a higher-order correction to the neutrino refractive indices,
and $\sfH^\matt$ is found to be
\cite{Fuller:1987aa,Botella:1987aa,Roulet:1995qb}
\begin{equation}
\sfH^\matt=\sqrt{2}\GF\nb\diag(Y_e,0,Y_\tau)
\end{equation}
in the flavor basis,
where $\nb Y_\tau$ gives the effective net $\tau$ lepton abundance.
If the matter
density is so large that
\begin{equation}
\sqrt{2}\GF \nb Y_\tau\gg|\kappa|,
\end{equation}
we can again employ the corotating frame as
in Eq.~\eqref{eq:cr-matt}. Ignoring the rapidly oscillating off-diagonal
elements, $\tilde{\sfH}_\omega^\vac$ becomes diagonal in the flavor basis
in this corotating frame:
\begin{equation}
\tilde{\sfH}_\omega^\vac\simeq
\frac{\omega}{\Delta m_{21}^2}
\diag(m_1^{\prime\prime2},m_2^{\prime\prime2},m_3^{\prime\prime2}),
\end{equation}
where
\begin{subequations}
\label{eq:m123-matt}
\begin{align}
m_1^{\prime\prime2}&\simeq c_{12}^2m_1^2+s_{12}^2m_2^2,\\
m_2^{\prime\prime2}&\simeq c_{23}^2(s_{12}^2m_1^2+c_{12}^2m_2^2)+s_{23}^2m_3^2,\\
m_3^{\prime\prime2}&\simeq s_{23}^2(s_{12}^2m_1^2+c_{12}^2m_2^2)+c_{23}^2m_3^2
\end{align}
\end{subequations}
are the effective mass-squared values for the neutrino states
\begin{subequations}
\begin{align}
|\nu_1^{\prime\prime}\rangle&\simeq|\nu_e\rangle,\\
|\nu_2^{\prime\prime}\rangle&\simeq|\nu_\mu\rangle,\\
|\nu_3^{\prime\prime}\rangle&\simeq|\nu_\tau\rangle.
\end{align}
\end{subequations}
Therefore, the adiabatic/precession solution still obtains in the presence
of both large electron and large effective tau abundances.
In this case, however, the degeneracy among
$\nu_e/\bar\nu_e$, $\nu_\mu/\bar\nu_\mu$
and $\nu_\tau/\bar\nu_\tau$ is completely broken,
and the adiabatic/precession solution is best obtained
in the flavor basis.
The conserved lepton numbers should also be calculated in the flavor basis.
Because the absolute
neutrino masses are irrelevant for neutrino oscillations,
in the inverted neutrino mass hierarchy case
we can take $m_1^2\simeq m_2^2\simeq \dmatm>m_3^2=0$.
When $\nb Y_\tau$ is large,
from Eq.~\eqref{eq:m123-matt}
it can be seen that the mass-squared eigenvalue
for $|\nu_1^{\prime\prime}\rangle$
is always larger than the mass-squared eigenvalues
for $|\nu_2^{\prime\prime}\rangle$ and
$|\nu_3^{\prime\prime}\rangle$,
while the effective $23$-mass-hierarchy, or the sign of
\begin{equation}
\Delta m_{32}^{\prime\prime2}\equiv m_3^{\prime\prime2}-m_2^{\prime\prime2},
\end{equation}
depends on whether $\theta_{23}$ is larger or smaller than $\pi/4$.
Similarly, for the normal neutrino mass hierarchy case, we can
take $m_3^2\simeq\dmatm>m_1^2\simeq m_2^2\simeq0$, and again
the effective $23$-mass-hierarchy depends on
whether $\theta_{23}$ is larger or smaller than $\pi/4$,
although in a reversed fashion. According to the discussion
in Sec.~\ref{sec:swapping}, the final energy spectra for
$\nu_2^{\prime\prime}/\bar\nu_2^{\prime\prime}$ and
$\nu_3^{\prime\prime}/\bar\nu_3^{\prime\prime}$
(and, therefore, the spectra for $\nu_\mu/\bar\nu_\mu$ and
$\nu_\tau/\bar\nu_\tau$)
interchange with each other when $\theta_{23}$ rotates
from the first octant to the second octant. In other
words, the final neutrino energy spectra can be
sensitive to deviations from maximal 23-mixing.
This can be illustrated using the toy model discussed in
Ref.~\cite{EstebanPretel:2007yq}.
In this toy model the neutron star emits only $\nu_e$ and
$\bar\nu_e$ with the same energy $E_\nu$ into a
thick matter envelope where $\nb Y_\tau$ is large.
In this model it is also assumed that $n_{\nu_e}/n_{\bar\nu_e}=1+\epsilon$
at the neutrino sphere with $\epsilon>0$.
If the neutrino gas follows the adiabatic/precession
solution, then $P(\nu_e\rightarrow\nu_i^{\prime\prime})$
and $P(\bar\nu_e\rightarrow\bar\nu_i^{\prime\prime})$ must be either
0 or 1 except at swapping points (see Sec.~\ref{sec:swapping}).
However, it is clear from the conservation of lepton numbers
in the flavor basis that $\nu_e$'s cannot have been fully converted
to other flavors. Because neutrinos (rather than antineutrinos)
are the dominant species in the system, the spectral swaps
occur in the neutrino sector.
Therefore, $\bar\nu_e$'s are completely
converted into antineutrinos of another flavor
as $n_\nu^\tot\rightarrow0$.
In this case, the precession ansatz
in Eq.~\eqref{eq:prec-ansatz} is trivially satisfied
as $n_\nu^\tot\rightarrow0$
in the corotating frame defined by
\begin{equation}
\sfH_\mathrm{cor}=\sfH^\vac_{\omega_+},
\end{equation}
where $\sfH^\vac_{\omega_+}$ is the vacuum term in the Hamiltonian
for the neutrino mode with energy $E_\nu$. In other words,
both swapping points have collapsed
into one which is located in the neutrino sector and at energy
$E^\mathrm{s}=E_\nu$. Following the discussions in Sec.~\ref{sec:swapping},
we can use the conservation of
lepton numbers in the flavor basis to find that
the neutrino conversion
probabilities are
\begin{subequations}
\label{eq:P-matt}
\begin{align}
P(\nu_e\rightarrow\nu_e)&=\frac{\epsilon}{1+\epsilon},\\
P(\nu_e\rightarrow\nu_\mu)&=0,\\
P(\nu_e\rightarrow\nu_\tau)&=\frac{1}{1+\epsilon},\\
P(\bar\nu_e\rightarrow\bar\nu_e)&=0,\\
P(\bar\nu_e\rightarrow\bar\nu_\mu)&=0,\\
P(\bar\nu_e\rightarrow\bar\nu_\tau)&=1
\end{align}
\end{subequations}
for the inverted neutrino mass hierarchy and $\theta_{23}<\pi/4$.
For the inverted neutrino mass hierarchy and $\theta_{23}>\pi/4$
we can obtain neutrino conversion probabilities which are similar to
those in Eq.~\eqref{eq:P-matt} but with $\nu_\mu\leftrightarrow\nu_\tau$
and $\bar\nu_\mu\leftrightarrow\bar\nu_\tau$.
This is exactly what has been observed in the numerical calculations
for the toy model at $r\simeq400$ km
(Fig.~2 in Ref.~\cite{EstebanPretel:2007yq}).
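The probabilities in Eq.~\eqref{eq:P-matt} can be read off from
flavor-basis lepton-number conservation alone. A symbolic check (with the
relative $\bar\nu_e$ number density normalized to 1):
\begin{verbatim}
from sympy import symbols, simplify

eps = symbols('epsilon', positive=True)
n_nue, n_bnue = 1 + eps, 1          # relative number densities

# final flavor content implied by Eq. (P-matt):
Le_f   = n_nue*(eps/(1 + eps)) - n_bnue*0   # nu_e minus bar nu_e
Ltau_f = n_nue*(1/(1 + eps))   - n_bnue*1   # nu_tau minus bar nu_tau

print(simplify(Le_f - (n_nue - n_bnue)))    # 0: L_e conserved
print(simplify(Ltau_f))                     # 0: L_tau conserved
\end{verbatim}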
\section{Conclusions%
\label{sec:conclusions}}
We have developed a framework for studying collective
three-flavor neutrino oscillations. Important techniques in
studying collective two-flavor oscillations such as corotating frames
can be applied readily to three-flavor scenarios in
this framework. We have shown that the three-flavor
adiabatic/precession solution obtains when both the precession ansatz and
the adiabatic ansatz are satisfied. If the flavor
evolution of a neutrino gas is described by the adiabatic/precession solution,
the final neutrino energy spectra will exhibit the stepwise swapping
phenomenon. We have shown that stepwise spectral swapping
appears in a numerical example in which neutrinos are directly
emitted from the neutrino sphere of a bare neutron star into vacuum.
For this special example, because the neutrinos in the $\mu$ and $\tau$ flavors
are equally mixed and have identical energy spectra initially,
the adiabatic/precession solutions for both the three-flavor and the two-flavor
scenarios produce the same spectral swapping at the
atmospheric neutrino mass-squared-difference scale.
In more general cases, however, this
may not be the case, and the full $3\times3$
mixing framework should be employed.
Strictly speaking, the adiabatic/precession solution
obtains only when the $CP$-violating phase $\delta=0$. This is because,
if $\delta\neq0$, the unitary transformation matrix $U$ connecting the
flavor states and vacuum mass eigenstates is not real, and
\begin{equation}
U\rho_\bfp^*U^\dagger\neq(U\rho_\bfp U^\dagger)^*
\text{ and }
U\bar\rho_\bfp^*U^\dagger\neq(U\bar\rho_\bfp U^\dagger)^*.
\end{equation}
As a result, Eqs.~\eqref{eq:eom-nu} and \eqref{eq:eom-anu} become
invalid in the vacuum mass basis, and the derivations that follow from them break down.
In practice, however, the adiabatic/precession solution may still be a good
approximation even when $\delta$ is large.
This is because $\theta_{13}\simeq0$ and the transformation
matrix $U$ is almost real. In fact,
numerical simulations in Ref.~\cite{Duan:2007sh} show that,
at least for the parameters employed in those simulations,
varying the $CP$ phase $\delta$ has little effect on the final
neutrino energy spectra except for changing the relative mixing
of $\mu$ and $\tau$ neutrino flavors.
The possible effects of $CP$ violation in stellar collapse
have been discussed in a different context
(see, e.g., Ref.~\cite{Balantekin:2007es}).
We also have demonstrated that the adiabatic/precession solution obtains
even in the presence of a dominant matter background and a large
mu-tau term. When the matter term is much larger than the vacuum
mixing term, the presence of the ordinary matter only reshuffles
the neutrino states in which the spectral swapping has the
most dramatic manifestation. For the supernova environment, this means
that the regime where collective neutrino oscillations occur
is solely determined by the neutrino fluxes and does not
depend sensitively on the matter density profile.
This is in agreement with the previous analysis in
two-flavor scenarios \cite{Duan:2005cp}. The supernova neutrino
signals observed on earth will depend of course on the matter
profile in the supernova.
In part, this is because the MSW effect will
modify the neutrino energy spectra subsequent to
the collective oscillations discussed here.
This dependence of neutrino signals on the matter profile
can provide important information on the
conditions deep in the supernova envelope
\cite{Schirato:2002tg,Duan:2007sh,Lunardini:2007vn}.
\begin{acknowledgments}
This work was supported in part by
DOE grants DE-FG02-00ER41132 at INT,
DE-FG02-87ER40328 at UMN,
NSF grant PHY-04-00359 at UCSD,
and an IGPP/LANL mini-grant.
\end{acknowledgments}
A famous open problem known as the Lvov-Kaplansky conjecture asserts that the image of a multilinear polynomial in noncommutative variables over a field $\mathbb{K}$ on the matrix algebra $M_{n}(\mathbb{K})$ is always a vector space \cite{Dniester}.
Recently, Kanel-Belov, Malev and Rowen \cite{Kanel2} made a major breakthrough and solved the problem for $n=2$.
A special case on polynomials of degree two has been known for a long time (\cite{Shoda} and \cite{Albert}). Recently, Mesyan \cite{Mesyan} and Buzinski and Winstanley \cite{Buzinski} extended this result to nonzero multilinear polynomials of degree three and four, respectively.
We will study the following variation of the Lvov-Kaplansky's conjecture:
\begin{con}\label{c1}
The image of a multilinear polynomial on the upper triangular matrix algebra is a vector space.
\end{con}
In this paper, we will answer Conjecture \ref{c1} for polynomials of degree up to four. We point out that whereas in \cite{Buzinski} and \cite{Mesyan} the results describe conditions under which the image of a multilinear polynomial $p$, $Im(p)$, contains a certain subset of $M_n(\mathbb{K})$, our results give the explicit forms of $Im(p)$ on the upper triangular matrix algebra in each case.
Throughout the paper $UT_{n}$ will denote the algebra of $n\times n$ upper triangular matrices over $\mathbb{K}$. The set of all strictly upper triangular matrices will be denoted by $UT_{n}^{(0)}$. More generally, if $k\geq 0$, the set of all matrices in $UT_{n}$ whose $(i,j)$ entries are zero for $j-i\leq k$ will be denoted by $UT_{n}^{(k)}$. Also if $i,j\in \{1,\dots,n\}$, we denote by $e_{ij}$ the $n\times n$ matrix with 1 in the entry $(i,j)$, and $0$ elsewhere. These will be called matrix units. In particular, $UT_n^{(k)}$ is the vector space spanned by the $e_{ij}$ with $j-i>k$.
Our main goal in this paper is to prove the following:
\begin{teor}
Let $n\geq2$ be an integer.
\begin{itemize}
\item[$(1)$] If $\mathbb{K}$ is any field and $p$ is a multilinear polynomial over $\mathbb{K}$ of degree two, then $Im(p)$ over $UT_{n}$ is $\{0\}, UT_{n}^{(0)}$ or $UT_{n}$;
\item[$(2)$] If $\mathbb{K}$ is a field with at least $n$ elements and $p$ is a multilinear polynomial over $\mathbb{K}$ of degree three, then $Im(p)$ over $UT_{n}$ is $\{0\}, UT_{n}^{(0)}$ or $UT_{n}$;
\item[$(3)$] If $\mathbb{K}$ is a zero characteristic field and $p$ is a multilinear polynomial over $\mathbb{K}$ of degree four, then $Im(p)$ over $UT_{n}$ is $\{0\}, UT_{n}^{(1)}, UT_{n}^{(0)}$ or $UT_{n}$.
\end{itemize}
\end{teor}
To prove statement $(1)$ we use some ideas of Shoda \cite{Shoda} and of Albert and Muckenhoupt \cite{Albert}, and for statements $(2)$ and $(3)$ we use the polynomial reductions of Mesyan \cite{Mesyan}, \v{S}penko \cite{Spela} and Buzinski and Winstanley \cite{Buzinski}.
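Before turning to the proofs, the degree-two case is easy to visualize
numerically. The Python sketch below (an illustration only, not part of
any proof) checks both inclusions for $p(x,y)=xy-yx$ on $UT_{3}(\mathbb{R})$:
every value of $p$ is strictly upper triangular, and every strictly upper
triangular matrix arises as $[D,X]$ with $D=\mathrm{diag}(1,\dots,n)$.
\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(0)

# every value of p(x,y) = xy - yx on UT_n is strictly upper triangular
for _ in range(5):
    A = np.triu(rng.standard_normal((n, n)))
    B = np.triu(rng.standard_normal((n, n)))
    assert np.allclose(np.tril(A @ B - B @ A), 0.0)

# conversely, the (i,j) entry of [D, X] is (i - j) X_ij, so any
# S in UT_n^(0) is hit by taking X_ij = S_ij/(i - j) for i < j
S = np.triu(rng.standard_normal((n, n)), 1)
D = np.diag(np.arange(1.0, n + 1.0))
d = np.subtract.outer(np.arange(1, n + 1), np.arange(1, n + 1))
X = np.where(d != 0, S/np.where(d != 0, d, 1), 0.0)
print(np.allclose(D @ X - X @ D, S))   # True
\end{verbatim}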
\section{The linear span of a multilinear polynomial on $UT_{n}$}
Throughout this section we will denote by $\mathbb{K}$ an arbitrary field and by $p(x_{1},\dots,x_{m})$ a multilinear polynomial in $\mathbb{K}\langle X \rangle$. We will also denote by $\langle Im(p) \rangle$ the linear span of $Im(p)$ on $UT_{n}$.
We start with an analogous result to Lemma $4$ from \cite{Kanel2}, where we analyse the image of a multilinear polynomial $p(x_{1},\dots,x_{m})\in\mathbb{K}\langle X \rangle$ on upper triangular matrix units.
Let $e_{i_{1},j_{1}},\dots,e_{i_{m},j_{m}}$ be upper triangular matrix units. Then $i_{q}\leq j_{q}$ for all $q$. We know that
\begin{eqnarray}\label{e1}
e_{i_{1},j_{1}}\cdots e_{i_{m},j_{m}}
\end{eqnarray}
is nonzero (and equal to $e_{i_{1},j_{m}}$) if and only if $j_{q}=i_{q+1}$, for all $q$.
Hence, if we change the order of the product in (\ref{e1}) we will obtain either $0$ or $e_{i_{1},j_{m}}$. To verify this claim, assume that we get a nonzero matrix unit after changing the order of some terms in (\ref{e1}). It is enough to analyse the cases where the first or the last term is moved. So, if $e_{i_{k},j_{k}}\cdots e_{i_{1},j_{1}}\cdots e_{i_{m},j_{m}}$ is nonzero then $i_{k}\leq i_{1}$, and by the product (\ref{e1}) we also have $i_{1}\leq i_{k}$, which proves that $i_{k}=i_{1}$ and therefore $e_{i_{k},j_{k}}\cdots e_{i_{1},j_{1}}\cdots e_{i_{m},j_{m}}=e_{i_{1},j_{m}}$. Analogously, one proves that if $e_{i_{1},j_{1}}\cdots e_{i_{m},j_{m}}\cdots e_{i_{k},j_{k}}$ is nonzero then this product equals $e_{i_{1},j_{m}}$.
In this way, $p$ evaluated on upper triangular matrix units is equal to zero or to some multiple of an upper triangular matrix unit.
\begin{defi}
Let $A=\displaystyle\sum_{i,j=1}^{n}a_{i,j}e_{i,j}\in UT_{n}$. For each $k\in\{1,\dots,n\}$ the $k$-th diagonal of $A$ is the one with entries in positions $(1,k),(2,k+1),\dots,(n-k+1,n)$. We say that the $k$-th diagonal of $A$ is nonzero if at least one of these entries is nonzero.
\end{defi}
The next lemma shows that if an upper triangular matrix unit can be obtained as an evaluation of a multilinear polynomial on matrix units, then all matrix units in the same diagonal can also be obtained by one such evaluation.
\begin{lema}\label{l2}
Assume that a nonzero multiple of $e_{i,i+k-1}$ can be written as an evaluation of $p$ on upper triangular matrix units, for some $i$ and $k$. Then $e_{1,k},e_{2,k+1},\dots,e_{n-k+1,n}\in Im(p)$.
\end{lema}
\begin{proof}
We write $\alpha e_{i,i+k-1}=p(e_{i_{1},j_{1}},\dots,e_{i_{m},j_{m}})$, for some nonzero $\alpha\in\mathbb{K}$. Hence,
$$\alpha e_{1,k}=p(e_{i_{1}-i+1,j_{1}-i+1},\dots,e_{i_{m}-i+1,j_{m}-i+1}),$$
and since $Im(p)$ is closed under scalar multiplication, $e_{1,k}\in Im(p)$. Analogously, we prove that $e_{2,k+1},\dots,e_{n-k+1,n}\in Im(p)$.
\end{proof}
\begin{lema}\label{l4}
Assume that a nonzero multiple of $e_{i,i+k-1}$ can be written as an evaluation of $p$ on upper triangular matrix units, for some $i$ and $k$. Then $e_{i,i+k}\in Im(p)$.
\end{lema}
\begin{proof}
We write $\alpha e_{i,i+k-1}=p(e_{i_{1},j_{1}},\dots,e_{i_{m},j_{m}})$ for some nonzero $\alpha\in\mathbb{K}$. Hence $i+k-1=j_{l}$ for some indexes $l\in\{1,\dots,m\}$. Replacing for each $l$ the corresponding $j_{l}$ by $j_{l}+1$ we get $$\alpha e_{i,i+k}=p(e_{i_{1},j_{1}},\dots,e_{i_{l},j_{l}+1},\dots,e_{i_{m},j_{m}})$$ which proves that $e_{i,i+k}\in Im(p)$.
\end{proof}
If we also denote $UT_{n}$ by $UT_{n}^{(-1)}$, then we have the main result of this section.
\begin{prp}
Let $p$ be a multilinear polynomial over $\mathbb{K}$. Then $\langle Im(p) \rangle$ is either $\{0\}$ or $UT_{n}^{(k)}$ for some integer $k\geq -1$.
\end{prp}
\begin{proof}
Assume that $Im(p)$ is nonzero. Hence, if $A=\displaystyle\sum_{i,j=1}^{n}a_{ij}e_{ij}\in Im(p)$ is nonzero, writing $A$ as a linear combination of evaluations of $p$ on upper triangular matrix units, we get that a multiple of $e_{ij}$ belongs to $Im(p)$, for each nonzero $(i,j)$ entry of $A$.
Let $k$ be the minimal integer such that the $k$-th diagonal of some matrix $A=\displaystyle\sum_{i,j=1}^{n}a_{ij}e_{ij}\in Im(p)$ is nonzero. Then there exists some $a_{i,i+k-1}\neq0$ and therefore $\alpha e_{i,i+k-1}=p(e_{i_{1},j_{1}},\dots,e_{i_{m},j_{m}})$ for some nonzero $\alpha\in\mathbb{K}$. By Lemma \ref{l2} all the matrix units $e_{1,k},\dots,e_{n-k+1,n}$ belong to $Im(p)$. By Lemma \ref{l4} $e_{i,i+k}\in Im(p)$. Using these both lemmas alternatively, we get that $UT_{n}^{(k-2)}\subset \langle Im(p) \rangle$. By the minimality of $k$ we have $\langle Im(p) \rangle = UT_{n}^{(k-2)}$.
\end{proof}
By the above proposition we can restate Conjecture \ref{c1} as
\begin{con}
The image of a multilinear polynomial on the upper triangular matrix algebra is either $\{0\}$ or $UT_{n}^{(k)}$ for some integer $k\geq-1$.
\end{con}
\section{A technical proposition}
We start with a fact about the image of multilinear polynomials of any degree on $UT_{n}$. We will prove that no subset between $UT_{n}^{(0)}$ and $UT_{n}$ can be the image of a multilinear polynomial over $UT_{n}$.
\begin{prp}\label{p1}
Let $\mathbb{K}$ be any field, $m\geq2$ an integer and $$p(x_{1},\dots,x_{m})=\sum_{\sigma\in S_{m}}\alpha_{\sigma}x_{\sigma(1)}\cdots x_{\sigma(m)},\alpha_{\sigma}\in\mathbb{K},$$ a nonzero multilinear polynomial.
\begin{itemize}
\item[$(1)$] if $\displaystyle\sum_{\sigma\in S_{m}}\alpha_{\sigma}\neq0$, then $Im(p)=UT_{n}$;
\item[$(2)$] if $\displaystyle\sum_{\sigma\in S_{m}}\alpha_{\sigma}=0$ and $UT_{n}^{(0)}\subset Im(p)$, then $Im(p)=UT_{n}^{(0)}$.
\end{itemize}
\end{prp}
\begin{proof}
If $\displaystyle\sum_{\sigma\in S_{m}}\alpha_{\sigma}\neq0$, then replacing $m-1$ variables by $I_{n}$ (the identity matrix) and the last one by $(\displaystyle\sum_{\sigma\in S_{m}}\alpha_{\sigma})^{-1}A$ where $A$ is any matrix in $UT_{n}$, we get $Im(p)=UT_{n}$, from which $(1)$ follows.
If $\displaystyle\sum_{\sigma\in S_{m}}\alpha_{\sigma}=0$ and $UT_{n}^{(0)}\subset Im(p)$, then let $\tau\in S_{n}$ such that $\alpha_{\tau}\neq0$ (there exists such a permutation because $p\neq0$). Then, $\alpha_{\tau}=-\displaystyle\sum_{\sigma \in S_{m}\setminus \{\tau\}}\alpha_{\sigma}.$
So, $$p(x_{1},\dots,x_{m})=\sum_{\sigma\in S_{m}\setminus\{\tau\}} \alpha_{\sigma}(x_{\sigma(1)}\cdots x_{\sigma(m)}-x_{\tau(1)}\cdots x_{\tau(m)}).$$
Therefore, replacing $x_{1},\dots,x_{m}$ by upper triangular matrices we obtain in each term of the sum above a matrix with just zeros in the main diagonal. Indeed, the main diagonal of a product of upper triangular matrices is the same, regardless of the order.
With this, we conclude that $Im(p)\subset UT_{n}^{(0)}$ and by hypothesis, $Im(p)=UT_{n}^{(0)}$.
\end{proof}
\section{The images of multilinear polynomials of degree two}
We consider a multilinear polynomial of degree two, which has the following form: $p(x,y)=\alpha xy+\beta yx$ for some $\alpha,\beta\in\mathbb{K}$. We will divide the study of the image of $p$ in two cases.
Case 1. $\alpha+\beta\neq0$.
In this case we can use Proposition \ref{p1} (1) and get $Im(p)=UT_{n}$.
Case 2. $\alpha+\beta=0$.
If $\alpha=\beta=0$ then $Im(p)=\{0\}$. Otherwise, we may assume that $p(x,y)=xy-yx$. Let $A=(a_{ij})\in UT_{n}^{(0)}$. Take $B=\displaystyle\sum_{k=1}^{n-1}e_{k,k+1}$ and $C=(c_{ij})\in UT_{n}$. So,
\begin{eqnarray}\label{comutador}
BC-CB&=&(\sum_{k=1}^{n-1}e_{k,k+1})(\sum_{i,j=1}^{n}c_{ij}e_{ij})-(\sum_{i,j=1}^{n}c_{ij}e_{ij})(\sum_{k=1}^{n-1}e_{k,k+1})\\\nonumber
&=& \sum_{i=1}^{n-1}\sum_{j=2}^{n}(c_{i+1,j}-c_{i,j-1})e_{ij}
\end{eqnarray}
Using $c_{ij}=0$ for $i>j$ , we note that the diagonal entries of the matrix $BC-CB$ above are all zero.
Now we consider the system defined by the equations $c_{i+1,j}-c_{i,j-1}=a_{ij}$. A solution of this system is $c_{1k}=0,k=1,\dots,n$ and $c_{i+1,j}=a_{ij}+a_{i-1,j-1}+\cdots+a_{1,j-(i-1)}$ where $i<j$ and $i=2,\dots,n-1,j=2,\dots,n$.
So, $Im(p)\supset UT_{n}^{(0)}$ and by Proposition \ref{p1} $(2)$, we have $Im(p)=UT_{n}^{(0)}$.
In resume, we have proved the following
\begin{prp}\label{4}
Let $p(x,y)\in\mathbb{K}\langle X \rangle$ be a multilinear polynomial where $\mathbb{K}$ is any field. Then $Im(p)$ on $UT_{n}$ is $\{0\}, UT_{n}^{(0)}$ or $UT_{n}$.
\end{prp}
\section{The images of multilinear polynomials of degree three}
To start this section we prove the following lemma, which is a an analogous of Lemma 1.2 of \cite{Amitsur}.
\begin{lema}\label{l1}
Let $\mathbb{K}$ be a field with at least $n$ elements and let $d_{11},\dots,d_{nn}\in\mathbb{K}$ be distinct elements. Then for $D=diag(d_{11},\dots,d_{nn})$ and $k\geq 0$, we have
\begin{eqnarray}\nonumber
[UT_{n}^{(k)},D]=UT_{n}^{(k)} \ \mbox{and}\ [UT_{n},D]=UT_{n}^{(0)}.
\end{eqnarray}
\end{lema}
\begin{proof} Clearly, $[UT_{n}^{(k)},D]\subset UT_{n}^{(k)}$.
Now, let $A=\displaystyle\sum_{j-i>k}a_{ij}e_{ij}$ be an arbitrary element of $UT_{n}^{(k)}$. Then,
\begin{eqnarray}\nonumber
[A,D]&=&AD-DA=(\sum_{i,j=1}^{n}a_{ij}e_{ij})(\sum_{l=1}^{n}d_{ll}e_{ll})-(\sum_{l=1}^{n}d_{ll}e_{ll})(\sum_{i,j=1}^{n}a_{ij}e_{ij})\\\nonumber
&=&\sum_{j-i>k}^{n}a_{ij}(d_{jj}-d_{ii})e_{ij}
\end{eqnarray}
Hence, if $B=\displaystyle\sum_{j-i>k}^{n}b_{ij}e_{ij}\in UT_{n}^{(k)}$, we choose $a_{ij}=b_{ij}(d_{jj}-d_{ii})^{-1},$ for $j-i>k$. This proves that $[UT_n^{(k)}, D]\supset UT_n^{(k)}$, and the first equality is proved.
Now we prove the second equality. It is immediate that $[UT_n,D]\subset UT_n^{(0)}$. Also, since $UT_n^{(0)}\subset UT_n$, we have $[UT_n^{(0)},D]\subset [UT_n,D]$. Hence, from the first equation for $k=0$, we have $UT_n^{(0)}=[UT_n^{(0)},D]\subset [UT_n,D]$. And the second equality is proved.
\end{proof}
Following the proof of Theorem 13 of \cite{Mesyan}, we obtain the next theorem, where we determine the image of multilinear polynomials of degree 3 on $UT_{n}$.
\begin{teor}\label{6}
Let $\mathbb{K}$ be a field with at least $n$ elements and let $p(x,y,z)\in \mathbb{K}\langle X \rangle$ be a multilinear polynomial. Then $Im(p)$ is $\{0\}, UT_{n}^{(0)}$ or $UT_{n}$.
\end{teor}
\begin{proof}
Let $p(x,y,z)\in\mathbb{K}\langle X \rangle$ be a nonzero multilinear polynomial. So, $$p(x,y,z)=\alpha_{1}xyz+\alpha_{2}xzy+\alpha_{3}yxz+\alpha_{4}yzx+\alpha_{5}zxy+\alpha_{6}zyx,\alpha_{l}\in\mathbb{K}.$$ If $\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}+\alpha_{5}+\alpha_{6}\neq0$ then using Proposition \ref{p1} (1) we have $Im(p)=UT_{n}$.
Hence, we may assume that $\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}+\alpha_{5}+\alpha_{6}=0$. So, we write $p$ as $$p(x,y,z)=\alpha_{1}(xyz-zyx)+\alpha_{2}(xzy-zyx)+\alpha_{3}(yxz-zyx)+\alpha_{4}(yzx-zyx)+\alpha_{5}(zxy-zyx).$$
If any of $p(1,y,z),p(x,1,z)$ or $p(x,y,1)$ are non-zero, then we have by Proposition \ref{4} that $Im(p)$ contains all upper triangular matrices with zero main diagonal. Then by Proposition \ref{p1} we have that $Im(p)$ is $UT_{n}^{(0)}$ or $UT_{n}$.
Otherwise, the equations $p(1,y,z)=p(x,1,z)=p(x,y,1)=0$ imply that $\alpha_{3}=\alpha_{5},\alpha_{2}=\alpha_{4}$ and $\alpha_{1}=-\alpha_{2}-\alpha_{3}$. Therefore,
\begin{eqnarray}\nonumber
p(x,y,z)&=&(-\alpha_{2}-\alpha_{3})(xyz-zyx)+\alpha_{2}(xzy-zyx+yzx-zyx)\\\nonumber
& & + \ \alpha_{3}(yxz-zyx+yzx-xyz)\\\nonumber
&=& \alpha_{2}(xyz-zyx+yzx-xyz)+\alpha_{3}(yxz-zyx+zxy-xyz)\\\nonumber
&=& \alpha_{2}[x,[z,y]]+\alpha_{3}[z,[x,y]]
\end{eqnarray}
Since $p\neq0$, renaming the variables if necessary, we may assume that $\alpha_{2}\neq0$ and therefore assume $$p(x,y,z)=[x,[z,y]]+\alpha[z,[x,y]],$$ for some $\alpha\in\mathbb{K}$.
By Lemma \ref{l1}, $UT_{n}^{(0)}=[D,UT_{n}^{(0)}]=[D,[UT_{n},D]]$. So, taking $x=y=D$ and $z=A$ any matrix in $UT_{n}$ we get all of $UT_{n}^{(0)}$. So, $Im(p)=UT_{n}^{(0)}$.
\end{proof}
\section{The images of multilinear polynomials of degree four}
In this section we will determinate the image of multilinear polynomials of degree four over a field $\mathbb{K}$ of zero characteristic.
We start with the following lemma.
\begin{lema}\label{l3}
Let $\mathbb{K}$ be any field. Then $[UT_{n}^{(0)},UT_{n}^{(0)}]=UT_{n}^{(1)}$.
\end{lema}
\begin{proof}
Clearly, $[UT_{n}^{(0)},UT_{n}^{(0)}]\subset UT_{n}^{(1)}$.
Now, let $A=\displaystyle\sum_{k=1}^{n}e_{k,k+1}\in UT_{n}^{(0)}$ and $B=\displaystyle\sum_{i,j=1}^{n}b_{ij}e_{ij}\in UT_{n}^{(0)}$. The same computations as in equation (\ref{comutador}) yields
\begin{eqnarray}\nonumber
[A,B]&=
\sum_{i=1}^{n-1}\sum_{j=2}^{n}(b_{i+1,j}-b_{i,j-1})e_{ij}.
\end{eqnarray}
So, for $C=(c_{ij})\in UT_{n}^{(1)}$, the system below has solution
\begin{eqnarray}
\left\{\begin{array}{ccc}\nonumber
b_{23}-b_{12}&=&c_{13}\\
&\vdots &\\
b_{2n}-b_{1,n-1}&=&c_{1n}\\
&\vdots &\\
b_{n-1,n}-b_{n-2,n-1}&=&c_{n-2,n}\\
\end{array}\right..
\end{eqnarray}
Indeed, we may choose $b_{1k}=0, k=2,\dots,n-1$ and $b_{i+1,j}=c_{i,j}+\cdots+c_{1,j-(i-1)},$ $i=1,\dots,n-2,j=3,\dots,n$. Therefore, $C\in [UT_{n}^{(0)},UT_{n}^{(0)}]$.
\end{proof}
Now we prove the main result for polynomials of degree 4. Ou proof is based on the proof of Theorem 1 of \cite{Buzinski}.
\begin{teor}
Let $\mathbb{K}$ be a field of zero characteristic and let $p(x_{1},x_{2},x_{3},x_{4})\in\mathbb{K}\langle X \rangle$ be a multilinear polynomial. Then the image of $p$ on $UT_{n}$ is $\{0\}, UT_{n}^{(1)},UT_{n}^{(0)}$ or $UT_{n}$.
\end{teor}
\begin{proof}
We may assume that $p\neq0$. If any of $p(1,x_2,x_3,x_4)$, $p(x_1,1,x_3,x_4)$, $p(x_1,x_2,1,x_4)$ or $p(x_1,x_2,x_3,1)$ are nonzero, then by Proposition \ref{p1} and Theorem \ref{6}, we have $Im(p)=UT_{n}^{(0)}$ or $UT_{n}$. So, we may assume that $$ p(1,x_2,x_3,x_4)=p(x_1,1,x_3,x_4)=p(x_1,x_2,1,x_4)=p(x_1,x_2,x_3,1)=0.$$ Then by Falk's theorem \cite{Falk} we have
\begin{eqnarray}\nonumber
p(x_{1},x_{2},x_{3},x_{4})&=&L(x_{1},x_{2},x_{3},x_{4})+\alpha_{1234}[x_{1},x_{2}][x_{3},x_{4}]+\alpha_{1324}[x_{1},x_{3}][x_{2},x_{4}]\\\nonumber
& &+\alpha_{1423}[x_{1},x_{4}][x_{2},x_{3}]+\alpha_{2314}[x_{2},x_{3}][x_{1},x_{4}]+\alpha_{2413}[x_{2},x_{4}][x_{1},x_{3}]\\\nonumber
& &+\alpha_{3412}[x_{3},x_{4}][x_{1},x_{2}]
\end{eqnarray}
where $L(x_{1},x_{2},x_{3},x_{4})$ is a Lie polynomial and $\alpha_{1234},\alpha_{1324},\alpha_{1423},\alpha_{2314},\alpha_{2413},\alpha_{3412} \in \mathbb{K}$.
Using Hall basis (see \cite{Hall}) we can write
\begin{eqnarray}\nonumber
L(x_{1},x_{2},x_{3},x_{4})&=&\alpha_{1}[[[x_{2},x_{1}],x_{3}],x_{4}]+\alpha_{2}[[[x_{3},x_{1}],x_{2}],x_{4}]+\alpha_{3}[[[x_{4},x_{1}],x_{2}],x_{3}]\\\nonumber
& & +\alpha_{4}[[x_{4},x_{1}],[x_{3},x_{2}]]+\alpha_{5}[[x_{4},x_{2}],[x_{3},x_{1}]]+\alpha_{6}[[x_{4},x_{3}],[x_{2},x_{1}]],
\end{eqnarray}
where $\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\alpha_{6}\in\mathbb{K}$.
Opening the brackets for the three last terms we can assume $p$ as
\begin{eqnarray}\nonumber
p(x_{1},x_{2},x_{3},x_{4})&=&\alpha_{1}[[[x_{2},x_{1}],x_{3}],x_{4}]+\alpha_{2}[[[x_{3},x_{1}],x_{2}],x_{4}]+\alpha_{3}[[[x_{4},x_{1}],x_{2}],x_{3}]\\\nonumber
& & +\alpha_{1234}[x_{1},x_{2}][x_{3},x_{4}]+\alpha_{1324}[x_{1},x_{3}][x_{2},x_{4}]+\alpha_{1423}[x_{1},x_{4}][x_{2},x_{3}]\\\nonumber
& & +\alpha_{2314}[x_{2},x_{3}][x_{1},x_{4}]+\alpha_{2413}[x_{2},x_{4}][x_{1},x_{3}] +\alpha_{3412}[x_{3},x_{4}][x_{1},x_{2}].
\end{eqnarray}
Now suppose that for some $i=1,2,3$ we have $\alpha_{i}\neq0$. Without loss of generality, we may assume that $\alpha_{1}\neq0$. So, replacing $x_{1},x_{3}$ and $x_{4}$ by $D=diag(d_{11},\dots,d_{nn})$ with $d_{11},\dots,d_{nn}$ distinct elements in $\mathbb{K}$, we get $p(D,x_{2},D,D)=\alpha_{1}[[[x_{2},D],D],D]$. Now, using Lemma \ref{l1} we have $Im(p)=UT_{n}^{(0)}$. So, we may assume that $\alpha_{1}=\alpha_{2}=\alpha_{3}=0$ and then
\begin{eqnarray}\nonumber
p(x_{1},x_{2},x_{3},x_{4})&=&\alpha_{1234}[x_{1},x_{2}][x_{3},x_{4}]+\alpha_{1324}[x_{1},x_{3}][x_{2},x_{4}]+\alpha_{1423}[x_{1},x_{4}][x_{2},x_{3}]\\\nonumber
& & +\alpha_{2314}[x_{2},x_{3}][x_{1},x_{4}]+\alpha_{2413}[x_{2},x_{4}][x_{1},x_{3}]+\alpha_{3412}[x_{3},x_{4}][x_{1},x_{2}].
\end{eqnarray}
Clearly, $Im(p)\subset UT_{n}^{(1)}$.
We will consider two cases.
Case 1. Assume $\alpha_{1234}=\alpha_{2314}=\alpha_{3412}=\alpha_{1423}=-\alpha_{1324}=-\alpha_{2413}$. Then we may assume that
\begin{eqnarray}\nonumber
p(x_{1},x_{2},x_{3},x_{4})&=&[x_{1},x_{2}][x_{3},x_{4}]+[x_{3},x_{4}][x_{1},x_{2}]+[x_{2},x_{3}][x_{1},x_{4}]+[x_{1},x_{4}][x_{2},x_{3}]\\\nonumber
& & -[x_{1},x_{3}][x_{2},x_{4}]-[x_{2},x_{4}][x_{1},x_{3}].
\end{eqnarray}
Consider $A\in UT_{n}^{(1)}$. Let $D=diag(d_{11},\dots,d_{nn})$ where $d_{11},\dots,d_{nn}$ are all distinct elements of $\mathbb{K}$. Then, by Lemma \ref{l1} there exists $G\in UT_{n}^{(1)}$ with $A=[D,G]$. By Lemma \ref{l3} there are $E,F\in UT_{n}^{(0)}$ such that $G=[E,F]$. Again by Lemma \ref{l1} we have $B,C\in UT_{n}$ such that $E=[D,B]$ and $F=[D,C]$. So, $A=[D,[[D,B],[D,C]]]$. Observing that
\begin{eqnarray}\nonumber
p(D,D^{2},B,C)&=&[D,D^{2}][B,C]+[B,C][D,D^{2}]+[D^{2},B][D,C]+[D,C][D^{2},B]\\\nonumber
& &-[D,B][D^{2},C]-[D^{2},C][D,B]\\\nonumber
&=&[D^{2},B][D,C]+[D,C][D^{2},B]-[D,B][D^{2},C]-[D^2,C][D,B]\\\nonumber
&=&[D,[[D,B],[D,C]]],
\end{eqnarray}
we have $A\in Im(p)$, proving in this way that $Im(p)=UT_{n}^{(1)}$.
Case 2. Assume that at least one of following $\alpha_{1234}=\alpha_{2314}=\alpha_{3412}=\alpha_{1423}=-\alpha_{1324}=-\alpha_{2413}$ does not hold. So, there are $A,B,C \in UT_{n}$ such that at least one of the following expressions is not zero:
\begin{eqnarray}\nonumber
p(A,A,B,C)&=&(\alpha_{1324}+\alpha_{2314})[A,B][A,C]+(\alpha_{1423}+\alpha_{2413})[A,C][A,B],\\\nonumber
p(A,B,A,C)&=&(\alpha_{1234}-\alpha_{2314})[A,B][A,C]+(\alpha_{3412}-\alpha_{1423})[A,C][A,B],\\\nonumber
p(A,B,C,A)&=&(-\alpha_{1234}-\alpha_{2413})[A,B][A,C]+(-\alpha_{1324}-\alpha_{3412})[A,C][A,B],\\\nonumber
p(B,A,A,C)&=&(-\alpha_{1234}-\alpha_{1324})[A,B][A,C]+(-\alpha_{2413}-\alpha_{3412})[A,C][A,B],\\\nonumber
p(B,A,C,A)&=&(-\alpha_{1423}+\alpha_{1234})[A,B][A,C]+(\alpha_{3412}+\alpha_{2314})[A,C][A,B],\\\nonumber
p(B,C,A,A)&=&(\alpha_{1324}+\alpha_{1423})[A,B][A,C]+(\alpha_{2314}+\alpha_{2413})[A,C][A,B].
\end{eqnarray}
Therefore, we may reduce the problem to prove that with the expression $$[A,B][A,C]+\lambda [A,C][A,B], \lambda \in \mathbb{K},$$ we get all elements in $UT_{n}^{(1)}$. Using Lemma \ref{l1} and taking $A=diag(a_{11},\dots,a_{nn})$ where all $a_{11},\dots,a_{nn}$ are distinct elements of $\mathbb{K}$, there exist $B\in UT_{n}$ such that $\displaystyle\sum_{k=1}^{n-1}e_{k,k+1}=[A,B]$. Writing $[A,C]=\displaystyle\sum_{i,j=1}^{n}b_{ij}e_{ij}$ we have
\begin{eqnarray}\nonumber
[A,B][A,C]+\lambda [A,C][A,B]&=&(\sum_{k=1}^{n-1}e_{k,k+1})(\sum_{i,j=1}^{n}b_{ij}e_{ij})+\lambda(\sum_{i,j=1}^{n}b_{ij}e_{ij})(\sum_{k=1}^{n-1}e_{k,k+1})\\\nonumber
&=&\sum_{i=1}^{n-1}\sum_{j=2}^{n}(b_{i+1,j}+\lambda b_{i,j-1})e_{ij}
\end{eqnarray}
So, for an arbitrary $M=(c_{ij})\in UT_{n}^{(1)}$,
the system below has solution.
\begin{eqnarray}
\left\{\begin{array}{ccc}\nonumber
b_{23}+\lambda b_{12}&=&c_{13}\\
&\vdots &\\
b_{2n}+\lambda b_{1,n-1}&=&c_{1n}\\
&\vdots &\\
b_{n-1,n}+\lambda b_{n-2,n-1}&=&c_{n-2,n}\\
\end{array}\right..
\end{eqnarray}
Indeed, we may choose $b_{1k}=0, k=2,\dots,n-1$ and $$b_{i+1,j}=c_{i,j}-\lambda c_{i-1,j-1}+\cdots+(-\lambda)^{i-1}c_{1,j-(i-1)},i=1,\dots,n-2,j=3,\dots,n.$$
Therefore, $M\in Im(p)$, proving that $Im(p)=UT_{n}^{(1)}$.
\end{proof}
\section*{Acknowledgments}
This work was completed when the first author visited Kent State University. The first author would like to thank the Department of Mathematical Sciences of Kent State University for its hospitality. The authors would like to thank Dr. Mikhail Chebotar for the helpful comments and the anonymous referee for his/her useful suggestions that much improved the final version of this paper.
\section*{Funding}
The first author was supported by São Paulo Research Foundation (FAPESP), grants \# 2017/16864-5 and \# 2016/09496-7.
The second author was supported by São Paulo Research Foundation (FAPESP), grant \# 2014/09310-5 and by National Council for Scientific and Technological Development (CNPq), grant \# 461820/2014-5.
|
1,941,325,221,048 | arxiv | \section{Introduction and main results}
Let $G$ be an $r$-uniform hypergraph ($r$\emph{-graph} for short) with vertex
set $V$ and edge set $E$. Assume that $V=\left[ n\right] :=\left\{
1,\ldots,n\right\} $ and let $\mathbf{x}:=\left( x_{1},\ldots,x_{n}\right)
\in\mathbb{R}^{n}$. Write $P_{G}\left( \mathbf{x}\right) $ for the
polynomial form of $G$
\[
P_{G}\left( \mathbf{x}\right) :
{\textstyle\sum\limits_{\left\{ i_{1},\ldots,i_{r}\right\} \in E}}
x_{i_{1}}\cdots x_{i_{r}},
\]
and le
\[
\mu\left( G\right) :=\max_{\Delta^{n-1}}P_{G}\left( \mathbf{x}\right) ,
\]
where $\Delta^{n-1}\subset\mathbb{R}^{n}$ is the standard simplex:
\[
\Delta^{n-1}=\left\{ \mu\left( G\right) :\left( x_{1},\ldots,x_{n}\right)
:x_{1}\geq0,\ldots,x_{n}\geq0\text{ and }x_{1}+\cdots+x_{n}=1\right\} .
\]
We call $\mu\left( G\right) $ the \emph{MS-index }of $G$ in honor of Motzkin
and Straus, who introduced and studied $\mu\left( G\right) $ for $2$-graphs
in \cite{MoSt65}\footnote{The MS-index is commonly called the
\emph{Lagrangian}, a misty term that denies credit to Motzkin and Straus,
while Lagrange has nothing to do with the concept. Besides, the term
\emph{Lagrangian} has seven or so other meanings already in use elsewhere.}.
Let us note that the MS-index has a long-standing history in extremal
hypergraph theory (see, e.g., \cite{FrFu89} and \cite{Kee11} for more detailed discussion.)
Now, let
\[
\mu_{r}\left( m\right) :=\max\left\{ \mu\left( G\right) :G\text{ is an
}r\text{-graph with }m\text{ edges}\right\} .
\]
The problem of finding $\mu_{r}\left( m\right) $ was first raised in 1989,
by Frankl and F\"{u}redi \cite{FrFu89}, who conjectured the exact value of
$\mu_{r}\left( m\right) $. During the years, their conjecture proved to be
rather hard: notwithstanding that it has been confirmed for most values of $m$
(see \cite{Tal02},\cite{TPWP16},\cite{Tyo17}), its toughest and most delicate
cases are still open.
However, even if completely solved, Frankl and F\"{u}redi's conjecture does
not provide an easy-to-use, closed-form expression for $\mu_{r}\left(
m\right) $. In this regard, the following conjecture might be of interest:
\begin{conjecture}
\label{con1}Let $r\geq3$ and $G$ be an $r$-graph with $m$ edges. If $t\geq
r-1$ is the unique real number such that $m=\binom{t}{r}$, then $\mu\left(
G\right) \leq mt^{-r},$ with equality if and only if $t$ is an integer.
\end{conjecture}
Note that the value of $\mu_{r}\left( m\right) $ conjectured by Frankl and
F\"{u}redi is quite close to $mt^{-r}$, and moreover, both values coincide if
$t$ is an integer. Tyomkyn \cite{Tyo17} called the latter case the\emph{
principal case\ }of the Frankl-F\"{u}redi's conjecture, and solved it for any
$r\geq4$ and $m$ sufficiently large; prior to that, Talbot \cite{Tal02} had
solved the principal case for $r=3$ and any $m$. Let us note that Talbot and
Tyomkyn contribute much more than the mentioned results, but neither of these
works imply a complete solution to Conjecture \ref{con1} for any $r$.
In this paper, we confirm Conjecture \ref{con1} whenever $3\leq r\leq5,$
thereby completely resolving the principal case of the Frankl-F\"{u}redi's
conjecture for these values of $r$. In addition, we show that Conjecture
\ref{con1} holds whenever $t\geq4\left( r-1\right) \left( r-2\right) $,
thereby giving an alternative proof of Tyomkyn's result and providing an
explicit bound\footnote{This bound is chosen for simplicity, and can be cut at
least by half. In contrast, Tyomkyn's proof of the principal case of the
Frankl-F\"{u}redi's conjecture provides no explicit bounds.}.
Our proofs are based on some seemingly novel bounds on elementary symmetric
functions, which, somewhat surprisingly, are just analytic results with no
relation to hypergraphs whatsoever. Theorems \ref{tub}, \ref{tlb}, and
\ref{tlub} below present the gist of this approach.\medskip
Given a vector $\mathbf{x}:=\left( x_{1},\ldots,x_{n}\right) $, write
$S_{k}\left( \mathbf{x}\right) $ for the $k$th elementary symmetric function
of $x_{1},\ldots,x_{n}$. Set $q\left( \mathbf{x}\right) :=x_{1}^{2
+\cdots+x_{n}^{2},$ and for nonnegative $\mathbf{x}$ set $\sigma\left(
\mathbf{x}\right) :=x_{1}^{x_{1}}\cdots x_{n}^{x_{n}}$, with the caveat that
$0^{0}=1$.
Let us recall the most important case of the celebrated Maclaurin inequality
(\cite{M1729}, see also \cite{HLP88}, Theorem 52), that can be stated as:
\medskip
\emph{If }$k\geq2$\emph{ and }$\mathbf{x}\in\Delta^{n-1}$\emph{, then}
\begin{equation}
S_{k}\left( \mathbf{x}\right) <n^{-k}\binom{n}{k}, \label{MI
\end{equation}
\emph{unless the entries of }$\mathbf{x}$\emph{ are equal.\medskip}
Since $\mathbf{x}\in\Delta^{n-1}$ implies that $\sigma\left( \mathbf{x
\right) \geq1/n$, the following theorem strengthens inequality (\ref{MI})
under a mild restriction on the maximum entry of $\mathbf{x}$ (denoted by
$\left\vert \mathbf{x}\right\vert _{\max}$ hereafter):
\begin{theorem}
\label{tub}Let $k\geq3,$ $\mathbf{x}\in\Delta^{n-1}$, and $\sigma
=\sigma\left( \mathbf{x}\right) .$ If $\left\vert \mathbf{x}\right\vert
_{\max}\leq1/4\left( k-2\right) $, then
\[
S_{k}\left( \mathbf{x}\right) <\sigma^{k}\binom{1/\sigma}{k},
\]
unless the nonzero entries of $\mathbf{x}$ are equal.
\end{theorem}
It turns out that the bound in Theorem \ref{tub} is not an isolated exception,
but one of many interrelated similar bounds that avoid the parameter $n$
altogether (see the closing remarks of Section \ref{sym}.) In particular,
Theorem \ref{tub} can be matched with a very similar lower bound:
\begin{theorem}
\label{tlb}Let $k\geq3,$ $\mathbf{x}\in\Delta^{n-1}$, and $q=q\left(
\mathbf{x}\right) $. If $\left\vert \mathbf{x}\right\vert _{\max}<1/\left(
k-1\right) $, then
\[
S_{k}\left( \mathbf{x}\right) >q^{k}\binom{1/q}{k},
\]
unless the nonzero entries of $\mathbf{x}$ are equal.
\end{theorem}
The following theorem, crucial for tackling Conjecture \ref{con1},
incorporates an additional twist in order to weaken the constraint on
$\left\vert \mathbf{x}\right\vert _{\max}$:
\begin{theorem}
\label{tlub}Let $3\leq k\leq5,$ $\mathbf{x}:=\left( x_{1},\ldots
,x_{n}\right) \in\Delta^{n-1}$, and $\sigma=\sigma\left( \mathbf{x}\right)
.$ If $x_{n}\leq\cdots\leq x_{1}\leq1/k$, the
\begin{equation}
\frac{\partial S_{k}\left( \mathbf{x}\right) }{\partial x_{1}}<k\sigma
^{k}\binom{1/\sigma}{k}, \label{inr3
\end{equation}
unless the nonzero entries of $\mathbf{x}$ are equal.
\end{theorem}
The rest of the paper is structured as follows: in Section \ref{sym}, we give
some results about symmetric functions and prove Theorems \ref{tub},
\ref{tlb}, and \ref{tlub}; at the end of the section we discuss possible
extensions of these theorems. In Section \ref{pfc}, we prove an upper bound on
$\mu\left( G\right) $ and then prove Conjecture \ref{con1} if $3\leq r\leq5$
or $t\geq4\left( r-1\right) \left( r-2\right) $.
\section{\label{sym}Some bounds on elementary symmetric functions}
Let $\mathbf{x}:=\left( x_{1},\ldots,x_{n}\right) \in\Delta^{n-1}$, and
assume hereafter that $x_{1}\geq\cdots\geq x_{n}.$ Set
\[
p\left( \mathbf{x}\right) :=x_{1}^{3}+\cdots+x_{n}^{3}\text{ \ \ and
\ \ }t\left( \mathbf{x}\right) :=x_{1}^{4}+\cdots+x_{n}^{4}.
\]
Whenever $\mathbf{x}$ is understood, we shorten $S_{k}\left( \mathbf{x
\right) ,$ $q\left( \mathbf{x}\right) ,$ $p\left( \mathbf{x}\right) ,$
$t\left( \mathbf{x}\right) ,$ $\sigma\left( \mathbf{x}\right) $ to
$S_{k},$ $q,$ $p,$ $t,$ $\sigma$.
We start with a few basic inequalities about $\sigma,$ $q,$ $p,$ $t,$ and
$\left\vert \mathbf{x}\right\vert _{\max}$.
\begin{proposition}
\label{pmi}Let $\mathbf{x}\in\Delta^{n-1}$, and let $n^{\prime}$ be the number
of nonzero entries of $\mathbf{x}$. The
\[
1/n^{\prime}<\sigma<q<p^{1/2}<t^{1/3}<\left\vert \mathbf{x}\right\vert _{\max
},
\]
unless the nonzero entries of $\mathbf{x}$ are equal, in which case,
equalities hold throughout.
\end{proposition}
\begin{proof}
Without loss of generality, assume that $\mathbf{x}$ is positive, and that the
entries of $\mathbf{x}$ are not all equal. First, the function $x^{c}$ is
strictly convex for $x>0$ and $c>1$; hence, Jensen's inequality implies tha
\
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}x_{i}<\left(
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}x_{i}^{2}\right) ^{1/2}<\left(
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}x_{i}^{3}\right) ^{1/3},
\]
yielding $q<p^{1/2}<t^{1/3}$. Likewise, since the function $\log
x$\footnote{Here and elsewhere $\log$ stands for \textquotedblleft logarithm
base $e$\textquotedblright.} is strictly concave for $x>0$, we see that
\[
\log\sigma
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}\log x_{i}<\log\left(
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}x_{i}\right) =\log q,
\]
yielding $\sigma<q$. Further, since the function $x\log x$ is strictly convex
for $x>0$, we see tha
\[
\log\sigma=x_{1}\log x_{1}+\cdots+x_{n}\log x_{n}>n\left( \frac{1}{n
\log\frac{1}{n}\right) =\log1/n,
\]
yielding $\sigma>1/n$. Finally,
\[
t=x_{1}^{4}+\cdots+x_{n}^{4}<\left\vert \mathbf{x}\right\vert _{\max
^{3}\left( x_{1}+\cdots+x_{n}\right) =\left\vert \mathbf{x}\right\vert
_{\max}^{3},
\]
completing the proof of Proposition \ref{pmi}.
\end{proof}
Further, note that $\partial S_{k}/\partial x_{i}$ is just the sum of all
products in $S_{k-1}$ that do not contain $x_{i}$; thus for every $i\in\left[
n\right] $, we hav
\begin{equation}
\frac{\partial S_{k}}{\partial x_{i}}=S_{k-1}-x_{i}\frac{\partial S_{k-1
}{\partial x_{i}}. \label{MR
\end{equation}
In addition, a short argument shows that
\[
\frac{\partial S_{k}}{\partial x_{i}}-\frac{\partial S_{k}}{\partial x_{j
}=\left( x_{j}-x_{i}\right) \frac{\partial S_{k}}{\partial x_{i}\partial
x_{j}}.
\]
Hence, the assumption $x_{1}\geq\cdots\geq x_{n}$ implies that $\partial
S_{k}/\partial x_{1}\leq\cdots\leq\partial S_{k}/\partial x_{n}$. Moreover, we
see that $x_{i}=x_{j}$ if and only if $\partial S_{k}/\partial x_{i}=\partial
S_{k}/\partial x_{j}.$\medskip
Our proofs crucially rely on the weighted Chebyshev inequality (see, e.g.,
\cite{Bul03}, p. 161):\medskip
\textbf{Chebyshev's inequality. }\emph{Let }$\mathbf{w}:=\left( w_{1
,\ldots,w_{n}\right) \in\Delta^{n-1}$\emph{ be positive, and let }$a_{1
\leq\cdots\leq a_{n\text{ }}$\emph{. If }$b_{1}\geq\cdots\geq b_{n},$\emph{
then
\
{\textstyle\sum\limits_{i=1}^{n}}
w_{i}a_{i}b_{i}\le
{\textstyle\sum\limits_{i=1}^{n}}
w_{i}a_{i
{\textstyle\sum\limits_{i=1}^{n}}
w_{i}b_{i}.
\]
\emph{If }$b_{1}\leq\cdots\leq b_{n}$\emph{, then the opposite inequality
holds. In both cases equality holds if and only if }$a_{1}=\cdots=a_{n}$\emph{
or }$b_{1}=\cdots=b_{n}$\emph{.}
\subsection{Two recurrence inequalities}
The proof of Theorems \ref{tub} and \ref{tlb} reside on two recurrence
inequalities, stated in Propositions \ref{glb} and \ref{gub} below.
\begin{proposition}
\label{glb}If $k\geq2$ and $\mathbf{x}\in\Delta^{n-1}$, the
\begin{equation}
kS_{k}\geq\left( 1-\left( k-1\right) q\right) S_{k-1}. \label{ina
\end{equation}
Equality holds if and only if the nonzero entries of $\mathbf{x}$ are equal.
\end{proposition}
\begin{proof}
Without loss of generality, we assume that $\mathbf{x}$ is positive, for
dropping out its zero entries does not alter $S_{2},\ldots,S_{n},$ and $q$.
First, multiplying equation (\ref{MR}) by $x_{i}$ and summing the results, we
ge
\[
kS_{k}
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}\frac{\partial S_{k}}{\partial x_{i}}
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}S_{k-1}
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{2}\frac{\partial S_{k-1}}{\partial x_{i}}=S_{k-1}
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{2}\frac{\partial S_{k-1}}{\partial x_{i}}.
\]
Now, let $w_{i}=a_{i}=x_{i}$ and $b_{i}=\partial S_{k-1}/\partial x_{i}$ for
all $i\in\left[ n\right] ,$ and note that Chebyshev's inequality implies
that
\begin{equation
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}x_{i}\frac{\partial S_{k-1}}{\partial x_{i}}\le
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{2
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}\frac{\partial S_{k-1}}{\partial x_{i}}=\left( k-1\right) qS_{k-1},
\label{ina1
\end{equation}
so inequality (\ref{ina}) follows.
The sufficiency of the condition for equality in (\ref{ina}) is clear, so we
only prove its necessity. If equality holds in (\ref{ina}), then equality
holds in (\ref{ina1}). Hence, the condition for equality in Chebyshev's
inequality implies that
\[
x_{1}=\cdots=x_{n}\text{ \ \ or \ \ }\frac{\partial S_{k-1}}{\partial x_{1
}=\cdots=\frac{\partial S_{k-1}}{\partial x_{n}}.
\]
As noted above, in either case $x_{1}=\cdots=x_{n}$, completing the proof of
Proposition \ref{glb}.
\end{proof}
\medskip
\begin{proposition}
\label{gub}If $k\geq3$ and $\mathbf{x}\in\Delta^{n-1}$, the
\begin{equation}
kS_{k}\leq S_{k-1}-\left( q-\left( k-2\right) p\right) S_{k-2}.
\label{inb
\end{equation}
Equality holds if and only if the nonzero entries of $\mathbf{x}$ are equal.
\end{proposition}
\begin{proof}
Without loss of generality we assume that $\mathbf{x}$ is positive. As in the
proof of Proposition \ref{glb}, we see that
\[
kS_{k}=S_{k-1}
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{2}\frac{\partial S_{k-1}}{\partial x_{i}}=S_{k-1}
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{2}\left( S_{k-2}-x_{i}\frac{\partial S_{k-2}}{\partial x_{i}}\right)
=S_{k-1}-qS_{k-2}
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{3}\frac{\partial S_{k-2}}{\partial x_{i}}.
\]
Now, let $w_{i}=x_{i},$ $a_{i}=x_{i}^{2}$ and $b_{i}=\partial S_{k-2}/\partial
x_{i}$ for all $i\in\left[ n\right] ,$ and note that Chebyshev's inequality
implies that
\begin{equation
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{3}\frac{\partial S_{k-2}}{\partial x_{i}}
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}x_{i}^{2}\frac{\partial S_{k-2}}{\partial x_{i}}\le
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{3
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}\frac{\partial S_{k-2}}{\partial x_{i}}=\left( k-2\right) pS_{k-2}.
\label{inb1
\end{equation}
so inequality (\ref{inb}) follows.
The sufficiency of the condition for equality in (\ref{inb}) is clear, so we
only prove its necessity. If equality holds in (\ref{inb}), then equality
holds in (\ref{inb1}). Hence, the condition for equality in Chebyshev's
inequality implies that
\[
x_{1}^{2}=\cdots=x_{n}^{2}\text{ \ \ or \ \ }\frac{\partial S_{k-2}}{\partial
x_{1}}=\cdots=\frac{\partial S_{k-2}}{\partial x_{n}}.
\]
In either case $x_{1}=\cdots=x_{n}$, completing the proof of Proposition
\ref{gub}.
\end{proof}
\subsection{Proofs of Theorems \ref{tub} and \ref{tlb}}
\begin{proof}
[\textbf{Proof of Theorem \ref{tub}.}]Set for short $x=x_{1}.$ Our proof
hinges on two claims:\medskip
\textbf{Claim 1. }\emph{If }$x\leq1/4\left( k-2\right) ,$ then for
$i=3,\ldots,k,$ we hav
\[
iS_{i}\leq S_{i-1}-\sigma\left( 1-\left( i-2\right) \sigma\right)
S_{i-2}.
\]
\emph{Proof.} Referring to Proposition \ref{gub}, it is enough to show that
\[
q-\left( i-2\right) p\geq\sigma\left( 1-\left( i-2\right) \sigma\right)
.
\]
To this end, le
\[
f\left( z\right) :=e^{z}-\left( i-2\right) e^{2z},
\]
and note that $f\left( z\right) $ is convex whenever $e^{z}\leq1/4\left(
i-2\right) .$ Hence, in view of $x\leq1/4\left( k-2\right) $
\[
q-\left( i-2\right) p
{\textstyle\sum\limits_{j=1}^{n}}
x_{j}f\left( \log x_{j}\right) \leq f\left(
{\textstyle\sum\limits_{j=1}^{n}}
x_{j}\log x_{j}\right) =\sigma-\left( i-2\right) \sigma^{2},
\]
proving Claim 1.\medskip
\textbf{Claim 2. }\emph{If }$x\leq1/4\left( k-2\right) ,$ then for
$i=3,\ldots,k,$ we have
\begin{equation}
iS_{i}\leq\left( 1-\left( i-1\right) \sigma\right) S_{i-1}. \label{reci
\end{equation}
\emph{Proof. }We use induction on $i$. If $i=3,$ then Claim 1 yields
\[
3S_{3}=S_{2}-\left( \sigma-\sigma^{2}\right) =S_{2}-\sigma\left(
1-q\right) =\left( 1-2\sigma\right) S_{2}\text{;
\]
hence, the statement holds for $i=3.$ If $i>3$, then the induction assumption
implies that
\[
\frac{\left( i-1\right) S_{i-1}}{1-\left( i-2\right) \sigma}\geq\left(
i-1\right) S_{i-1},
\]
because $1-\left( i-2\right) \sigma\geq1-\left( k-2\right) x>0.$ Now,
Claim 1 yields
\begin{align*}
iS_{k} & \leq S_{i-1}-\sigma\left( 1-\left( i-2\right) \sigma\right)
S_{i-2}\leq S_{i-1}-\sigma\left( 1-\left( i-2\right) \sigma\right)
\frac{\left( i-1\right) S_{i-1}}{\left( 1-\left( i-2\right)
\sigma\right) }\\
& =\left( 1-\left( i-1\right) \sigma\right) S_{i-1},
\end{align*}
completing the induction step and the proof of Claim 2.\medskip
To finish the proof of Theorem \ref{tub}, note that $1-\left( i-1\right)
\sigma\geq1-\left( k-1\right) x>0.$ Thus we can multiply inequalities
(\ref{reci}) for $i=3,\ldots,k,$ gettin
\[
\frac{k!}{2}S_{3}\cdots S_{k}\leq\left( 1-2\sigma\right) \cdots\left(
1-\left( k-1\right) \sigma\right) S_{2}\cdots S_{k-1}.
\]
Now, using the fact $2S_{2}=1-q\leq1-\sigma$, we see that
\[
S_{k}\leq\frac{1}{k!}\left( 1-\sigma\right) \left( 1-2\sigma\right)
\cdots\left( 1-\left( k-1\right) \sigma\right) =\sigma^{k}\binom{1/\sigma
}{k},
\]
as desired. The condition for equality in this inequality follows from the
conditions for equality in Proposition \ref{gub}.Theorem \ref{tub} is proved.
\end{proof}
\emph{Remark. }The proof of Theorem \ref{tub} shows that its conclusion can be
strengthened t
\[
S_{k}\leq\frac{1}{k!}\left( 1-q\right) \left( 1-2\sigma\right)
\cdots\left( 1-\left( k-1\right) \sigma\right) .
\]
\medskip
\begin{proof}
[\textbf{Proof of Theorem \ref{tlb}.}]Proposition \ref{glb} implies that
\begin{equation}
S_{i}\geq\frac{1}{i}\left( 1-\left( i-1\right) q\right) S_{i-1}
\label{lin
\end{equation}
for every $i=2,\ldots,k$. Since
\[
1-\left( i-1\right) q\geq1-\left( k-1\right) \left\vert \mathbf{x
\right\vert _{\max}>1-\frac{k-1}{k-1}=0,
\]
we can multiply inequalities (\ref{lin}) for $i=2,\ldots,k,$ obtainin
\[
S_{2}\cdots S_{k}\geq\frac{1}{k!}S_{1}\cdots S_{k-1}\left( 1-q\right)
\cdots\left( 1-\left( k-1\right) q\right) =q^{k}\binom{1/q}{k},
\]
as desired.
The condition for equality in this inequality follows from the conditions for
equality in Proposition \ref{glb}. Theorem \ref{tlb} is proved.
\end{proof}
\subsection{Proof of Theorem \ref{tlub}}
The proof of Theorem \ref{tlub} is the most involved one in this paper,
especially the case $k=5$. We give separate proofs for $k=3,4,5$, because a
compound one would be a harder read.
In all three cases we assume that $\mathbf{x}$ is positive, and set $x=x_{1}$.
The proofs of the conditions for equality are straightforward and are
omitted.\medskip
\begin{proof}
[\textbf{Proof for }$k=3$.]Using equation (\ref{MR}) and Proposition
\ref{pmi}, we find that
\[
\frac{\partial S_{3}}{\partial x_{1}}=S_{2}-x\frac{\partial S_{2}}{\partial
x_{1}}=\frac{1-q}{2}-x\left( 1-x\right) \leq\frac{1-\sigma-2x\left(
1-x\right) }{2}.
\]
Since $x\left( 1-x\right) $ is increasing for $x\leq1/2,$ it follows that
$\sigma\left( 1-\sigma\right) \leq x\left( 1-x\right) ,$ and so
\[
\frac{\partial S_{3}}{\partial x_{1}}\leq\frac{1-\sigma-2x\left( 1-x\right)
}{2}\leq\frac{1-\sigma-2\sigma\left( 1-\sigma\right) }{2}=3\sigma^{3
\binom{1/\sigma}{3}.
\]
Theorem \ref{tlub} is proved for $k=3.$
\end{proof}
\medskip
\begin{proof}
[\textbf{Proof for }$k=4$.]Equation (\ref{MR}) implies tha
\begin{align*}
\frac{\partial S_{4}}{\partial x_{1}} & =S_{3}-x\frac{\partial S_{3
}{\partial x_{1}}=S_{3}-x\left( S_{2}-x\frac{\partial S_{2}}{\partial x_{1
}\right) =\frac{1-3q+2p}{6}-x\left( \frac{1-q}{2}-x+x^{2}\right) \\
& =\frac{1}{6}\left( 1-3q+2p-3x\left( 1-q\right) +6x^{2}-6x^{3}\right)
\end{align*}
To establish (\ref{inr3}), we prove a chain of inequalities, consecutively
eliminating the parameters $p,$ $q,$ and $x$ from the right side of the above equation.
First, note that the function
\[
f\left( z\right) =e^{z}-e^{2z
\]
is convex whenever $e^{z}\leq1/4.$ Hence
\[
q-p
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}f\left( \log x_{i}\right) \geq f\left(
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}\log x_{i}\right) =\sigma-\sigma^{2},
\]
yielding in tur
\begin{align*}
6\frac{\partial S_{4}}{\partial x_{1}} & \leq1-q-2\sigma+2\sigma
^{2}-3x\left( 1-q\right) +6x^{2}-6x^{3}\\
& =1-q\left( 1-3x\right) -2\sigma+2\sigma^{2}-3x+6x^{2}-6x^{3}.
\end{align*}
Since $1-3x>0$ and $q\leq\sigma,$ we ge
\[
6\frac{\partial S_{4}}{\partial x_{1}}\leq1-3x\sigma-3\sigma+2\sigma
^{2}-3x+6x^{2}-6x^{3}.
\]
To finish the proof, we have to show that
\[
1-3x\sigma-3\sigma+2\sigma^{2}-3x+6x^{2}-6x^{3}\leq1-6\sigma+11\sigma
^{2}-6\sigma^{3}=24\sigma^{4}\binom{1/\sigma}{4},
\]
which, after some algebra, turns out to be equivalent to
\[
\left( x-\sigma\right) \left( 3x-2x^{2}+2\sigma-2x\sigma-2\sigma
^{2}\right) \leq\left( x-\sigma\right) .
\]
Using the fact $x\leq\sigma,$ if suffice to prove that
\[
3x-2x^{2}+2\sigma-2x\sigma-2\sigma^{2}<1.
\]
However, $2\sigma-2x\sigma-2\sigma^{2}$ increases in $\sigma$ because
$\sigma\leq x\leq1/4<\left( 1-x\right) /2.$ Hence
\[
3x-2x^{2}+2\sigma-2x\sigma-2\sigma^{2}\leq5x-6x^{2}\leq\frac{5}{4}-\frac
{6}{16}<1.
\]
Theorem \ref{tlub} is proved for $k=4.$
\end{proof}
\medskip
\begin{proof}
[\textbf{Proof for} $k=5$.]Equation (\ref{MR}) implies tha
\begin{align}
\frac{\partial S_{5}}{\partial x_{1}} & =S_{4}-x\frac{\partial S_{4
}{\partial x_{1}}=S_{4}-xS_{3}+x^{2}\frac{\partial S_{3}}{\partial x_{1
}=S_{4}-xS_{3}+x^{2}S_{2}-x^{3}\frac{\partial S_{2}}{\partial x_{1
}\nonumber\\
& =\frac{1-6q+3q^{2}+8p-6t}{24}-x\frac{1-3q+2p}{6}+x^{2}\frac{1-q}{2
-x^{3}\left( 1-x\right) \nonumber\\
& =\frac{1}{24}\left( 1-6q+3q^{2}+8\left( 1-x\right) p-6t+12xq-12x^{2
q-4x+12x^{2}-24x^{3}+24x^{4}\right) . \label{me5
\end{align}
To establish (\ref{inr3}), we prove a chain of inequalities, consecutively
eliminating the parameters $t,p,q,$ and $x$ from the right side of equation
(\ref{me5})\footnote{The reader may find that the tradeoffs between the stages
of the calculations result in weird numbers, but we were not able to spare
much room for elegance.}.
For a start, the following claim is used to eliminate $p$ and $t$:\medskip
\textbf{Claim 1 }\emph{If }$x<1/5,$ \emph{then
\begin{equation}
-\frac{106-160x}{25}q+8\left( 1-x\right) p-6t\leq-\frac{106-160x}{25
\sigma+8\left( 1-x\right) \sigma^{2}-6\sigma^{3}. \label{insig
\end{equation}
\emph{Proof.} Claim 1 follows from the fact that the function
\[
f\left( z\right) :=-\frac{106-160x}{25}e^{z}+8\left( 1-x\right)
e^{2z}-6e^{3z
\]
is concave whenever $e^{z}\leq1/5$. To verify this fact, we show that
\begin{equation}
f^{\prime\prime}\left( z\right) =e^{z}\left( -\frac{106-160x}{25}+32\left(
1-x\right) e^{z}-54e^{2z}\right) \leq0. \label{concf
\end{equation}
Indeed, the expression $32\left( 1-x\right) e^{z}-54e^{2z}$ is quadratic in
$e^{z}$, and thus it increases in $z$ whenever $e^{z}\leq8\left( 1-x\right)
/27$. On the other hand,
\[
e^{z}\leq1/5\leq8\left( 1-x\right) /27,
\]
implying that
\[
32\left( 1-x\right) e^{z}-54e^{2z}\leq\frac{32\left( 1-x\right) }{5
-\frac{54}{25}=\frac{106-160x}{25}.
\]
The latter inequality clearly entails (\ref{concf}); therefore, $f\left(
z\right) $ is concave.
Now, the concavity of $f\left( z\right) $ implies tha
\begin{align*}
-\frac{106-160x}{25}q+8\left( 1-x\right) p-6t & =\sum x_{i}f\left( \log
x_{i}\right) \leq f\left( x_{i}\log x_{i}\right) \\
& =-\frac{106-160x}{25}\sigma+8\left( 1-x\right) \sigma^{2}-6\sigma^{3},
\end{align*}
completing the proof of Claim 1.\medskip
To use Claim 1 we add and subtract the term $\frac{106-160x}{25}q$ in the
right side of (\ref{me5}), and note that
\[
-6q+\frac{106-160x}{25}q+12xq-12x^{2}q=-\frac{44-130x+300x^{2}}{25}q+\frac
{2}{5}xq.
\]
Thus, summarizing the current progress, Claim 1 implies that
\begin{align}
24\frac{\partial S_{5}}{\partial x_{1}} & \leq1-\frac{44-130x+300x^{2}
{25}q+3q^{2}+\frac{2}{5}xq-4x+12x^{2}-24x^{3}+24x^{4}\label{in1}\\
& -\frac{106-160x}{25}\sigma+8\left( 1-x\right) \sigma^{2}-6\sigma
^{3}.\nonumber
\end{align}
Our next goal is to eliminate $q$ in the right side of (\ref{in1}). To this
end, define the function
\[
g\left( z\right) :=-\frac{44-130x+300x^{2}}{25}z+3z^{2
\]
\medskip
\textbf{Claim 2} \emph{If }$x\leq1/5,$ \emph{then }$g\left( q\right) \leq
g\left( \sigma\right) .\medskip$
\emph{Proof. }Claim 2 follows from the fact that $g\left( z\right) $
decreases in $z$ whenever $z\leq x$. To prove this fact note that $g\left(
z\right) $ is quadratic in $z$, and so it decreases whenever
\[
z\leq\frac{44-130x+300x^{2}}{150}.
\]
However, the stipulation $x\leq1/5$ entails that
\[
x\leq\frac{44-130x+300x^{2}}{150}.
\]
Thus, $g\left( z\right) $ decreases in $z$ whenever $z\leq x$. In
particular, the inequalities $\sigma\leq q\leq x$ imply that $g\left(
q\right) \leq g\left( \sigma\right) $, completing the proof of Claim
2.\medskip
Applying Claim 2, we replace $q$ by $\sigma$ in the right side of (\ref{in1}),
and obtain
\begin{align*}
24\frac{\partial S_{5}}{\partial x_{1}} & \leq1-\frac{44-130x+300x^{2}
{25}\sigma+3\sigma^{2}-\frac{106-160x}{25}\sigma+8\left( 1-x\right)
\sigma^{2}-6\sigma^{3}+\frac{2}{5}xq\\
& -4x+12x^{2}-24x^{3}+24x^{4}\\
& =1-\frac{150-290x+300x^{2}}{25}\sigma+11\sigma^{2}-8x\sigma^{2}-6\sigma
^{3}-4x+\frac{62}{5}x^{2}-24x^{3}+24x^{4}\\
& =1-6\sigma+11\sigma^{2}-6\sigma^{3}+\frac{58}{5}x\sigma-12x^{2
\sigma-8x\sigma^{2}-4x+\frac{62}{5}x^{2}-24x^{3}+24x^{4}.
\end{align*}
In the above derivation we also use the inequality $\frac{2}{5}xq\leq\frac
{2}{5}x^{2}$.
Therefore, to finish the proof of (\ref{inr3}), it remains to show that
\begin{align*}
& 1-6\sigma+11\sigma^{2}-6\sigma^{3}+\frac{58}{5}x\sigma-12x^{2
\sigma-8x\sigma^{2}-4x+\frac{62}{5}x^{2}-24x^{3}+24x^{4}\\
& \leq1-10\sigma+35\sigma^{2}-50\sigma^{3}+24\sigma^{4}=120\sigma^{5
\binom{1/\sigma}{5},
\end{align*}
which is equivalent t
\[
\frac{58}{5}x\sigma-12x^{2}\sigma-8x\sigma^{2}-4x+\frac{62}{5}x^{2
-24x^{3}+24x^{4}\leq-4\sigma+24\sigma^{2}-44\sigma^{3}+24\sigma^{4}.
\]
After rearranging and factoring $\left( x-\sigma\right) $ out, we get
\begin{align*}
4\left( x-\sigma\right) & \geq\frac{58}{5}\left( x-\sigma\right)
\sigma+\frac{62}{5}\left( x-\sigma\right) \left( x+\sigma\right) \\
& -12\left( x-\sigma\right) \left( x+\sigma\right) \sigma-8\left(
x-\sigma\right) \sigma^{2}-24\left( x-\sigma\right) \left( x^{2
+x\sigma+\sigma^{2}\right) \\
& +24\left( x-\sigma\right) \left( x^{3}+x^{2}\sigma+x\sigma^{2
+\sigma^{3}\right) .
\end{align*}
Since $x\geq\sigma$, it suffices to show tha
\begin{align}
2 & >\frac{29}{5}\sigma+\frac{31}{5}\left( x+\sigma\right) -6\left(
x+\sigma\right) \sigma-4\sigma^{2}-12\left( x^{2}+x\sigma+\sigma^{2}\right)
+12\left( x^{3}+x^{2}\sigma+x\sigma^{2}+\sigma^{3}\right) \nonumber\\
& =\frac{31}{5}x-12x^{2}+12x^{3}+12\sigma^{3}-\left( 22-12x\right)
\sigma^{2}+\left( 12-18x+12x^{2}\right) \sigma. \label{exp2
\end{align}
To this end, set
\[
h\left( z\right) :=12z^{3}-\left( 22-12x\right) z^{2}+\left(
12-18x+12x^{2}\right) z,
\]
and note that
\[
h^{\prime}\left( z\right) =36z^{2}-2\left( 22-12x\right) z+\left(
12-18x+12x^{2}\right) .
\]
Since $h^{\prime}\left( z\right) $ is quadratic in $z$ and $36>0,$ we see
that $h\left( z\right) $ is increasing if $z\leq z_{\min}$, where $z_{\min}$
is the smaller root of the equation $h^{\prime}\left( z\right) =0.$ However,
the stipulation $x\leq1/5$ easily implies that
\[
z_{\min}=\frac{22-12x-\sqrt{\left( 22-12x\right) ^{2}-36\left(
12-18x+12x^{2}\right) }}{36}>x,
\]
and therefore $h\left( z\right) $ is increasing in $z$ if $z\leq x$.
Thus, in view of $\sigma\leq x\leq1/5,$ we find that
\begin{align*}
12\sigma-18x\sigma-22\sigma^{2}+12x^{2}\sigma+12x\sigma^{2}+12\sigma^{3} &
=h\left( \sigma\right) \\
& \leq h\left( x\right) =12x-40x^{2}+36x^{3}.
\end{align*}
Finally, for the right side of (\ref{exp2}) we obtain
\[
\frac{31}{5}x-12x^{2}+12x^{3}+12\sigma^{3}-\left( 22-12x\right) \sigma
^{2}+\left( 12-18x+12x^{2}\right) \sigma\leq\frac{91}{5}x-52x^{2
+48x^{3}<2.
\]
Theorem \ref{tlub} is proved for $k=5$.
\end{proof}
\subsection{Closing remarks}
The restriction on $\left\vert \mathbf{x}\right\vert _{\max}$ in Theorem
\ref{tub} can be somewhat relaxed. For example, for $r=3$ it is enough to
require that $\left\vert \mathbf{x}\right\vert _{\max}\leq3/8,$ while for
$r=4$ it is enough to have $\left\vert \mathbf{x}\right\vert _{\max}\leq
11/48$. It is challenging to find the weakest possible restriction on
$\left\vert \mathbf{x}\right\vert _{\max}$ for the conclusion of Theorem
\ref{tub} to hold.
It is unlikely that Theorem \ref{tlub} remains valid as is for sufficiently
large $r$; even the case $r=6$ is a challenge. Thus, it is interesting what
alterations are necessary to prove Conjecture \ref{con1} for $r>5$. Here is a
possibility for some progress:
Given $\mathbf{x}\in\Delta^{n-1}$ and real $t\geq-1$, defin
\[
\varphi_{t}\left( \mathbf{x}\right) :=\left\{
\begin{array}
[c]{cc
\left(
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}^{1+t}\right) ^{1/t}\text{,} & \text{if }x\neq0\text{;}\\
\sigma\left( \mathbf{x}\right) , & \text{if }x=0.
\end{array}
\right.
\]
Note that if $\mathbf{x}$ is fixed and $t\in\left[ -1,\infty\right) $, the
function $\varphi_{t}\left( \mathbf{x}\right) $ is continuous and
nondecreasing in $t.$ In addition, assuming that $0^{0}=1,$ we see that
\[
\varphi_{1}\left( \mathbf{x}\right) =q\left( \mathbf{x}\right) \text{;
\ \ \ }\varphi_{-1}\left( \mathbf{x}\right) =1/n\text{; \ \ \ }\lim
{}_{t\rightarrow\infty}\varphi_{t}\left( \mathbf{x}\right) =\left\vert
x\right\vert _{\max}.
\]
It seems possible to extend Theorems \ref{tub}, \ref{tlb}, and \ref{tlub}
using $\varphi_{t}\left( \mathbf{x}\right) $ instead of $q$ and $\sigma$.
\section{\label{pfc}Proof of Conjecture \ref{con1} for $3\leq r\leq5$ or
$t\geq4\left( r-1\right) \left( r-2\right) $}
Let $G$ be an $r$-graph of order $n$. A vector $\mathbf{x}\in\Delta^{n-1}$
such that $P_{G}\left( \mathbf{x}\right) =\mu\left( G\right) $ is called
an \emph{eigenvector} to $\mu\left( G\right) .$
Let $\mathbf{x}:=\left( x_{1},\ldots,x_{n}\right) \in\Delta^{n-1}$ be an
eigenvector to $\mu\left( G\right) $. Using Lagrange
multipliers\footnote{This argument is known from the times of Motzkin and
Straus, so we skip the details.}, one finds that
\begin{equation}
r\mu\left( G\right) =\frac{\partial P_{G}\left( \mathbf{x}\right)
}{\partial x_{i}}
{\textstyle\sum\limits_{\left\{ i_{1},\ldots,i_{r-1},i\right\} \in E}}
x_{i_{1}}\cdots x_{i_{r-1}}, \label{eeq
\end{equation}
for every $i\in\left[ n\right] $ such that $x_{i}>0.$
We start with a simple lemma, valid for any $r\geq2$:
\begin{lemma}
\label{tsin}Let $G$ be an $r$-graph of order $n$, with $m$ edges. If
$\mathbf{x}\in\Delta^{n-1}$ is an eigenvector to $\mu\left( G\right) $, the
\begin{equation}
\mu\left( G\right) \leq m\sigma^{r}. \label{sin
\end{equation}
\end{lemma}
\begin{proof}
Clearly the lemma holds if $m=0$, so suppose that $m>0$. Likewise, without
loss of generality, suppose that $\mathbf{x}$ is positive. Let $\mu=\mu\left(
G\right) $ and note that equations (\ref{eeq}) imply that
\begin{align*}
\mu\log\sigma^{r} & =r\m
{\textstyle\sum\limits_{i=1}^{n}}
x_{i}\log x_{i}
{\textstyle\sum\limits_{i=1}^{n}}
\frac{\partial P\left( \mathbf{x}\right) }{\partial x_{i}}x_{i}\log x_{i}
{\textstyle\sum\limits_{\left\{ i_{1},\ldots,i_{r}\right\} \in E}}
x_{i_{1}}\cdots x_{i_{r}}\left( \log x_{i_{1}}+\cdots+\log x_{i_{r}}\right)
\\
&
{\textstyle\sum\limits_{\left\{ i_{1},\ldots,i_{r}\right\} \in E}}
x_{i_{1}}\cdots x_{i_{r}}\log x_{i_{1}}\cdots x_{i_{r}}.
\end{align*}
Since the function $x\log x$ is convex, we see that
\
{\textstyle\sum\limits_{\left\{ i_{1},\ldots,i_{r}\right\} \in E}}
x_{i_{1}}\cdots x_{i_{r}}\log x_{i_{1}}\cdots x_{i_{r}}\geq m\left( \frac
{\mu}{m}\right) \log\frac{\mu}{m}=\mu\log\frac{\mu}{m}.
\]
Hence
\[
\mu\log\sigma^{r}\geq\mu\log\frac{\mu}{m},
\]
implying that $\mu\leq m\sigma^{r}$, as desired.
\end{proof}
\emph{Remark. }Note that $\mu\left( G\right) \leq mq^{r}$ is a weaker, yet
more usable consequence of bound (\ref{sin}).\medskip
With Theorem \ref{tlub} and Lemma \ref{tsin} in hand, it is not hard to prove
Conjecture \ref{con1} for $3\leq r\leq5$:$\medskip$
\begin{proof}
[\textbf{Proof of Conjecture \ref{con1} for }$3\leq r\leq5.$]Let $G$ be an
$r$-graph of order $n$ with $m$ edges, and let $\mu\left( G\right) =\mu
_{r}\left( m\right) .$ Suppose that $t>r-1$ is a real number satisfying
$m=\binom{t}{r}$. To prove Conjecture \ref{con1}, we have to show that
\begin{equation}
\mu\left( G\right) \leq mt^{-r}=\frac{1}{r!}\left( 1-\frac{1}{t}\right)
\cdots\left( 1-\frac{r-1}{t}\right) .\label{exf
\end{equation}
Let $\mathbf{x}:=\left( x_{1},\ldots,x_{n}\right) \in\Delta^{n-1}$ be an
eigenvector to $\mu\left( G\right) $ and suppose that $x_{1}\geq\cdots\geq
x_{n}$. If $x_{n}=0$, replace $G$ with the subgraph $H\subset G$ induced by
the vertices with nonzero entries in $\mathbf{x}$. Clearly $\mu\left(
H\right) =\mu\left( G\right) $, and $H$ has at most $m$ edges. Since the
right side of (\ref{exf}) decreases with $t$, it is enough to prove
(\ref{exf}) for $H$. Thus, without loss of generality, we assume that
$\mathbf{x}$ is positive.
Set for short $\mu=\mu\left( G\right) $, and assume for a contradiction that
(\ref{exf}) fails, that is, $mt^{-r}<\mu$. Now, Lemma \ref{tsin} implies that
$mt^{-r}<\mu\leq m\sigma^{r}$, and so $\sigma>1/t$.
On the other hand, equation (\ref{eeq}) yields
\[
r\mu x_{1}=\frac{\partial P_{G}\left( \mathbf{x}\right) }{\partial x_{1
}x_{1}\leq P_{G}\left( \mathbf{x}\right) =\mu\text{;
\]
therefore, $x_{1}\leq1/r$. With this provision, and supposing that $3\leq
r\leq5$, Theorem \ref{tlub} gives
\begin{equation}
r\mu=\frac{\partial P_{G}\left( \mathbf{x}\right) }{\partial x_{1}}\leq
\frac{\partial S_{r}\left( \mathbf{x}\right) }{\partial x_{1}}\leq
r\sigma^{r}\binom{1/\sigma}{r}. \label{ine
\end{equation}
Hence, in view of $\sigma>1/t$, we find that
\[
\mu\leq\frac{1}{r!}\left( 1-\sigma\right) \cdots\left( 1-\left(
r-1\right) \sigma\right) <\frac{1}{r!}\left( 1-\frac{1}{t}\right)
\cdots\left( 1-\frac{r-1}{t}\right) =mt^{-r}.
\]
This contradiction shows that $\mu\left( G\right) \leq mt^{-r}$.
It remains to prove the conditions for equality in Conjecture \ref{con1}.
Suppose that $\mu\left( G\right) =mt^{-r}$; thus, equalities hold throughout
in (\ref{ine}), and by Theorem \ref{tlub} the entries of $\mathbf{x}$ are
equal to $1/n$. Therefore, we find that
\[
\frac{1}{r!}\left( 1-\frac{1}{t}\right) \cdots\left( 1-\frac{r-1
{t}\right) =mt^{-r}=\mu\left( G\right) \leq\binom{n}{r}n^{-r}=\frac{1
{r!}\left( 1-\frac{1}{n}\right) \cdots\left( 1-\frac{r-1}{n}\right) ,
\]
yielding in turn $n\geq t$. Now, $mt^{-r}=\mu\left( G\right) \leq mn^{-r}$
implies that $t=n$.
Finally, if $t$ is an integer, then taking $G$ to be the complete $r$-graph of
order $t$ and $\mathbf{x}\in\Delta^{t-1}$ to be the vector with all entries
equal to $1/t$, we see that $\mu_{r}\left( m\right) =mt^{-r}$, completing
the proof of Conjecture \ref{con1} for $3\leq r\leq5$.
\end{proof}
Our proof of Conjecture \ref{con1} for $t\geq4\left( r-1\right) \left(
r-2\right) $ is similar, so we omit a few details.\medskip
\begin{proof}
[\textbf{Proof of Conjecture \ref{con1} for }$t\geq4\left( r-1\right)
\left( r-2\right) .$]Let $G$ be an $r$-graph of order $n$ with $m$ edges,
and let $\mu\left( G\right) =\mu_{r}\left( m\right) .$ Suppose that
$t\geq4\left( r-1\right) \left( r-2\right) $ is a real number satisfying
$m=\binom{t}{r}$. To prove Conjecture \ref{con1}, we show that $\mu\left(
G\right) \leq mt^{-r}$. To this end, select an eigenvector $\mathbf{x
:=\left( x_{1},\ldots,x_{n}\right) $ to $\mu\left( G\right) $ with
$x_{1}\geq\cdots\geq x_{n}>0$.
First note that if $x_{1}>\left( r-1\right) /t,$ then
\[
\frac{\left( 1-x_{1}\right) ^{r-1}}{\left( r-1\right) !}<\frac{1}{\left(
r-1\right) !}\left( 1-\frac{r-1}{t}\right) ^{r-1}<\frac{1}{\left(
r-1\right) !}\left( 1-\frac{1}{t}\right) \cdots\left( 1-\frac{r-1
{t}\right) =rmt^{-r},
\]
and therefor
\[
r\mu\left( G\right) =\frac{\partial P_{G}\left( \mathbf{x}\right)
}{\partial x_{1}}\leq\frac{\partial S_{r}\left( \mathbf{x}\right) }{\partial
x_{1}}<\frac{\left( 1-x_{1}\right) ^{r-1}}{\left( r-1\right) !}<rmt^{-r},
\]
proving that $\mu\left( G\right) <mt^{-r}$.
On the other hand, if $x_{1}\leq\left( r-1\right) /t,$ then the premise
$t\geq4\left( r-1\right) \left( r-2\right) $ implies that $x_{1
\leq1/4\left( r-2\right) $; therefore, Theorem \ref{tub} yields
\[
\mu\left( G\right) =P_{G}\left( \mathbf{x}\right) \leq S_{r}\left(
\mathbf{x}\right) \leq\sigma\binom{1/\sigma}{r}.
\]
This inequality and Lemma \ref{tsin} imply that $\mu\left( G\right) \leq
mt^{-r}$.
The proof of the condition for equality in $\mu\left( G\right) \leq mt^{-r}$
is omitted.
\end{proof}
\bigskip
\textbf{Acknowledgement. }A preliminary version of this work has been reported
at the International workshop on spectral hypergraph theory held in November,
2017 at Anhui University, Hefei, P.R. China. I am grateful to the organizers,
and particularly to Prof. Yi-Zheng Fan, for wonderful experience.
\bigskip
|
1,941,325,221,049 | arxiv | \section{Introduction}
It is well-known that in structural models with
occasionally binding constraints, equilibria may not exist (incoherency) or
there may be multiple equilibria (incompleteness). \cite{GourierouxLaffontMonfort1980} (henceforth GLM) studied this problem in the context of simultaneous equations models with endogenous regime switching, and derived conditions for existence and uniqueness of solutions, which are known as `coherency and completeness' (CC) conditions. \cite{AruobaMlikotaSchorfheideVillalvazo2021} and \cite{Mavroeidis2019} derived these conditions for structural vector autoregressions with occasionally binding constraints. However, to the best of our knowledge, there are no general results about the conditions for existence and uniqueness of equilibria in dynamic forward-looking models with rational expectations when some variables are subject to occasionally binding constraints. This is despite the fact that there is a large and expanding literature on solution algorithms for such models \citep[see][]{FernandezRubioSchorheide2016hdk} applied for example to models with a zero lower bound (ZLB) constraint on the interest rate \citep[see e.g.,][]{FernandezGordonGuerronRubio2015,GuerrieriIacoviello2015, AruobaCubaBordaSchorfheide2018, gustetal2017AER, AruobaCubaBordaHigaFloresSchorfheideVillalvazo2021,eggertsson2021toolkit}.
In this paper, we attempt to fill that gap in the literature. We show that the question of existence of equilibria (coherency) is a nontrivial problem in models with a ZLB constraint on the nominal interest rate. Our main finding is that, under rational expectations, coherency requires restrictions on the support of the distribution of the exogenous shocks, and these restrictions are difficult to interpret.
The intuition for this result can be gauged from a standard New Keynesian (NK) model. Coherency of the model requires that the aggregate demand (AD) and supply (AS) curves intersect for all possible values of the shocks. If the curves are straight lines, then the model is coherent if and only if the curves are not parallel. Therefore, linear models are generically coherent. However, models with a ZLB constraint are at most piecewise linear even if the Euler equations of the agents are linearized. In those models coherency is no longer generic, because the curves may not intersect. This depends on the slope of the curves and their intercept. The former depends on structural parameters, while the latter depends on the shocks.
In fact, many applications in the literature feature parameters and distribution of shocks that place them in the incoherency region (e.g., a monetary policy rule that satisfies the Taylor principle, structural shocks with unbounded support). Given the parameters, coherency can only be restored by restricting the support of the distribution of the shocks, so the AD and AS curves never fail to intersect. In other words, we need to exclude the possibility of sufficiently adverse shocks causing rational expectations to diverge.
We derive our main result first in a simple model that consists of an active Taylor rule with a ZLB constraint and a nonlinear Fisher equation with a single discount factor (AD) shock that can take two values. This setup has been used, amongst others, by \cite{EggertssonWoodford2003} and \cite{AruobaCubaBordaSchorfheide2018}, and it suffices to study the problem analytically and convey the main intuition. The main takeaway from this example is that when the Taylor rule is active, there exist no bounded fundamental or sunspot equilibria unless negative AD shocks are sufficiently small. Because this restriction on the support of the distribution of the shock is asymmetric, this finding is not equivalent to restricting the variance of the shock.
We then turn to (piecewise) linear models, and focus on the question of existence of minimum state variable (MSV) solutions, which are the solutions that most of the literature typically focuses on. A key insight of the paper is that when the support of the distribution of the exogenous variables is discrete, these models can be cast into the class of piecewise linear simultaneous equations models with endogenous regime switching analysed by GLM. We can therefore use the main existence theorem of GLM to study their coherency properties.
Applying this methodology to a prototypical three-equation NK model, we find that the model is not generically coherent both when the Taylor rule is active and when monetary policy is optimal under discretion. The restrictions on the support that are needed to restore an equilibrium depend on the structural parameters as well as the past values of the state variables. When there are multiple shocks, the support restrictions are such that the shocks cannot have `rectangular' support, meaning that they cannot be independent from each other. For example, the range of values that the monetary policy shock can be allowed to take depends on the realizations of the other shocks. So, the assumption of orthogonality of structural shocks is incompatible with coherency.
When the CC condition is violated, imposing the necessary support restrictions to guarantee existence of a solution causes incompleteness, i.e., multiplicity of MSV solutions. We show that there may be up to $2^k$ MSV equilibria, where $k$ is the number of states that the exogenous variables can take.
The literature on the ZLB stressed from the outset the possibility of multiple steady states and/or multiple equilibria, and of sunspots solutions due either to indeterminacy or to belief-driven fluctuations between the two steady states \citep[e.g.,][]{AruobaCubaBordaSchorfheide2018,MertensRavn2014}. Here, we stress a novel source of multiplicity: the multiplicity of MSV solutions.
Finally, we identify possible ways out of the conundrum of incoherency and incompleteness of the NK model. These call for a different modelling of monetary policy. A first possibility would be to assume that monetary policy steps in with a different policy reaction, e.g., unconventional monetary policy (UMP), to catastrophic shocks that cause the economy to collapse. However, this policy response would need to be incorporated in the model, affecting the behavior of the economy also in normal times (i.e., when shocks are small). A more straightforward approach is to assume that UMP can relax the ZLB constraint sufficiently to restore the generic coherency of the model without support restrictions.
This underscores another potentially important role of UMP not emphasized in the literature so far: UMP does not only help take the economy out of a liquidity trap, but it is also useful in ensuring the economy does not collapse in the sense that there is no bounded equilibrium.
A number of theoretical papers provide sufficient conditions for existence of MSV equilibria in NK models \cite[see][]{egg2011nberma, Bonevaetal2016JME, Armenter2018, Christianoetal2018wpuniqueness,Nakata2018,Nakataschmidt2019JME}. Our contribution relative to this literature is to provide both necessary and sufficient conditions that can be applied more generally. \cite{Holden2021} analyses existence under perfect foresight, so his methodology is complementary to ours. \cite{Mendes2011} provides existence conditions on the variance of exogenous shocks in models without endogenous or exogenous dynamics, while
\cite{richthrock2015bej} report similar findings based on simulations. Our analysis provides a theoretical underpinning of these findings and highlights that existence generally requires restrictions on the \textit{support} of the distribution of the shocks rather than their variance.
The structure of the paper is as follows.
Section \ref{s: coherency problem} presents the main findings of the paper regarding the problem of incoherency (i.e., non-existence of equilibria). Section \ref{s: incompleteness} looks at the problem of incompleteness (i.e., multiplicity of MSV solutions).
Section \ref{s: conclusions} concludes. All proofs are given in the Appendix available online.
\section{The incoherency problem}\label{s: coherency problem}
This Section illustrates the main results of the paper that concern coherency, i.e., existence of a solution, in models with a ZLB constraint. Subsection \ref{s: ACS simple example} presents the simplest nonlinear example. Subsection \ref{s: piecewise linear} turns to piecewise (log)linear models, including the three-equation NK model, and introduces a general method for analysing their coherency properties. Subsection \ref{s: support restrictions} highlights the nature of the support restrictions needed for coherency allowing for continuous stochastic shocks, using a convenient forward-looking Taylor rule example. Subsection \ref{s: cc conditions k} derives the conditions on the Taylor rule coefficient for coherency and completeness in the simple NK model. Subsection \ref{s: UMP} shows how unconventional monetary policy can restore coherency in the NK model with an active Taylor rule. Finally, Subsection \ref{s: endog} examines the implications of endogenous dynamics.
\subsection{The incoherency problem in a simple example\label{s: ACS simple example}}
We illustrate the main results of the paper using the simplest possible model that is analytically tractable and suffices to illustrate our point in a straightforward way. It should be clear that the problem that we point out is generic and not confined to this simple setup.
The model is taken from Section 2 in \cite{AruobaCubaBordaSchorfheide2018} (henceforth ACS). It consists of two equations: a consumption Euler equation
\begin{equation}
1=E_{t}\left( M_{t+1}\frac{R_{t}}{\pi_{t+1}}\right) \label{eq: EE}%
\end{equation}
and a simple Taylor rule subject to a ZLB constraint
\begin{equation}
R_{t}=\max\left\{ 1,r\pi_{\ast}\left( \frac{\pi_{t}}{\pi_{\ast}}\right)
^{\psi}\right\} ,\quad\psi> 1,\label{eq: Taylor}%
\end{equation}
where $R_{t}$ is the gross nominal interest rate, $\pi_{t}$ is the gross inflation rate, $\pi_{\ast}$ is the target of the central bank for the gross inflation rate, $M_{t+1}$ is the stochastic discount factor, and $r$ is the steady-state value of $1/M_{t+1}$, which is also the steady-state value of the gross real interest rate $R_t/E_t(\pi_{t+1})$. To complete the specification of the model, we need to specify the law of motion of $M_t$.
\begin{assumption}\label{ass: M nonlinear absorbing}
$M_{t}$ is a 2-state Markov-Chain process with an absorbing state
$r^{-1}$, and a transitory state $r^{-1}e^{-r^{L}}>r^{-1}$ that persists with probability
$p>0$.
\end{assumption}
This is a common assumption in the theoretical literature \citep[see, e.g.,][]{EggertssonWoodford2003,CEE2011JPE, egg2011nberma}. $r^{L}<0$
can be interpreted as a negative real interest rate shock, which captures the
possibility of a temporary liquidity trap.
Substituting for $R_t$ in (\ref{eq: EE}) using (\ref{eq: Taylor}), we obtain
\begin{equation}
1=\max\left\{ 1,r\pi_{\ast}\left( \frac{\pi_{t}}{\pi_{\ast}}\right)
^{\psi}\right\} E_{t}\left( \frac{M_{t+1}}{\pi_{t+1}}\right), \quad \psi>1. \label{eq: ACS nonlinear}%
\end{equation}
Let $\Omega_t$ denote the information set at time $t$, such that $E_t(\cdot):=E(\cdot|\Omega_t)$. In the words of \cite{Blan80}, a solution $\pi_t$ of the model is a sequence of functions of variables in $\Omega_t$ that satisfies (\ref{eq: ACS nonlinear}) for all possible realizations of these variables. Like \cite{Blan80}, we focus on bounded solutions.
The following proposition provides, in the context of the present example, the main message of the paper, that coherency of the model (i.e., existence of a solution) requires restrictions on the support of the distribution of the state variable $M_t$.
\begin{proposition}\label{prop: nonlinear ACS}
Under Assumption \ref{ass: M nonlinear absorbing} and $\psi>1$, a fundamental solution to (\ref{eq: ACS nonlinear}) exists if and only if the exogenous process $M_t$ satisfies the support restrictions
\begin{equation}\label{eq: support restr nonlin ACS}
r^{-1}\leq\pi_{\ast},\quad\text{and\quad}-r^{L}\leq\log\left( \frac
{r\pi_{\ast}-1+p}{p}\right)-\frac{1}{\psi}\log\left( r\pi_{\ast}\right).
\end{equation}
\end{proposition}
Here we sketch graphically the argument for the first of the support restrictions in (\ref{eq: support restr nonlin ACS}) in order to convey the main intuition for why a solution fails to exist when the shocks are sufficiently large. Note that the upper bound on $(-r^L)$ in (\ref{eq: support restr nonlin ACS}) is increasing in the Taylor rule coefficient $\psi$. So, for some values of the shock $(-r^L)$, the model may be coherent with a sufficiently active Taylor rule and incoherent with a less active one. Moreover, both support restrictions in (\ref{eq: support restr nonlin ACS}) become slacker as the inflation target $\pi_*$ increases. The proposition also shows that coherency does not depend on the variance of the exogenous process per se.\footnote{Raising $p$ reduces the variance, but it also reduces the upper bound for coherency on the shock $(-r^L)$ in (\ref{eq: support restr nonlin ACS}). Thus, a model with a higher variance of $M_t$ may be coherent, while a model with a lower variance of $M_t$ may be incoherent.}
Suppose that $M_t$ is in the absorbing state $r^{-1}$. Then, there is no uncertainty in $\pi_{t+1}$ along a fundamental solution, so (\ref{eq: ACS nonlinear}) becomes a deterministic difference equation that can be represented in terms of $\hat{\pi}_t:=\log{(\pi_t/\pi_*)}$ (no approximation is involved) as
\[
\hat{\pi}_{t+1}=\max\left\{-\log{r\pi_*},\psi\hat{\pi}_{t}\right\}.
\]
Figure \ref{fig: coherency} plots the right hand side of the above equation along with a $\ang{45}$ line. It is clear from the graph on the right that if $r\pi_*<1$, $\pi_{t+s}$ diverges for any initial value of $\pi_t$, i.e., there is no bounded solution. This is because the stable point to which $\pi_{t}$ would jump in the absence of the constraint (i.e., the origin in the figure) violates the constraint, so it is infeasible. In contrast, when $r\pi_*\geq 1$, there exist many bounded solutions with $\pi_t \leq \pi_*$, which is a stable manifold in this case. In this simple example, $r$ corresponds to the steady-state value of the gross real interest rate, so it is fairly innocuous to assume $r\geq 1$, and the inflation target is typically nonnegative ($\pi_*\geq1$). But the same basic intuition applies in the transitory state: coherency of the model requires that the transitory shock is such that there exist stable paths to which $\pi_t$ can jump, or in other words, that the curve representing the transitory dynamics intersects the $\ang{45}$ line, see Figure \ref{fig: ACS nonlinear} in Appendix \ref{app: s: nonlinear ACS}.
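The divergence argument can also be checked directly by iterating the piecewise linear map above. The following minimal Python sketch is our own illustration (the function name is ours, and only NumPy is assumed); it reproduces the two cases plotted in Figure \ref{fig: coherency}:
\begin{verbatim}
import numpy as np

def iterate_pi(pi0, psi, r_pi_star, T=20):
    # Iterate pi_{t+1} = max(-log(r*pi_star), psi*pi_t) in log deviations
    path = [pi0]
    for _ in range(T):
        path.append(max(-np.log(r_pi_star), psi * path[-1]))
    return np.array(path)

# r*pi_star < 1: the floor is positive, and psi > 1 amplifies it
print(iterate_pi(0.0, psi=1.5, r_pi_star=0.99)[-1])    # diverges upward
# r*pi_star >= 1: paths starting at or below the floor stay bounded
print(iterate_pi(-0.02, psi=1.5, r_pi_star=1.02)[-1])  # stays at the floor
\end{verbatim}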
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{coherency2.jpg}
\caption{Illustration of coherency restriction $r\pi_*\geq 1$ under the absorbing state in Proposition \ref{prop: nonlinear ACS}. The red line plots $\hat{\pi}_{t+1}=\max\left\{-\log{r\pi_*},\psi\hat{\pi}_{t}\right\}$ with $\psi>1$ for two different values of $r\pi_*$. When $r\pi_*<1$, no bounded solution exists.}
\label{fig: coherency}
\end{figure}
Proposition \ref{prop: nonlinear ACS} focused only on the case $\psi>1$, but it is easy to see from the proof, as well as from the argument in Figure \ref{fig: coherency}, that no support restrictions are needed when $\psi<1$: the model is always coherent when the Taylor rule is passive.
When the coherency condition in Proposition \ref{prop: nonlinear ACS} holds, the stationary solutions of the transition equations represent fundamental solutions at which $\pi_t$ depends only on $M_t$ and not on its lags. Such solutions are also known as minimum state variable (MSV) solutions in the literature, because they involve the smallest number of state variables (in this case, only one). So, for this model, the same coherency condition that is required for existence of bounded fundamental solutions is also necessary and sufficient for the existence of MSV solutions, which is a subset of all fundamental solutions. This is noteworthy because many of the solution methods in the literature focus on MSV solutions, e.g., \cite{FernandezGordonGuerronRubio2015}, \cite{richthrock2015bej}.
We conclude our analysis of this simple example by considering sunspot solutions. For simplicity, we assume there are no fundamental shocks, as in \cite{MertensRavn2014}.
\begin{proposition}\label{prop: nonlinear ACS sunspots}
Suppose $M_t=r^{-1}$ with probability 1 and $\psi>1$, and let $\varsigma_t\in \{0,1\}$ be a first-order Markovian sunspot process that belongs to agents' information set $\Omega_t$. Sunspot solutions to (\ref{eq: ACS nonlinear}) exist if and only if $r^{-1}\leq\pi_{\ast}$.
\end{proposition}
Proposition \ref{prop: nonlinear ACS sunspots} shows that the support restriction for the existence of sunspot solutions is exactly the same as for the existence of fundamental solutions (see the condition corresponding to the absorbing state in Proposition \ref{prop: nonlinear ACS}). Thus allowing for sunspot equilibria does not alter the essence of the coherency problem, as we further show in the next subsection.
\subsection{Checking coherency of piecewise linear models} \label{s: piecewise linear}
Many of the solution methods in the literature apply to (log)linear models, whose only nonlinearity arises from the lower bound constraint on interest rates, e.g., \cite{EggertssonWoodford2003}, \cite{GuerrieriIacoviello2015}, \cite{KulishMorleyRobinson2017}, \cite{Holden2021}.\footnote{These models are often motivated as (log)linear approximations to some originally nonlinear model under the assumption that the equilibria of the linear model are close to the equilibria of the original nonlinear model \citep[see][]{Bonevaetal2016JME,eggsingh19jedc}. This assumption implicitly imposes conditions for the existence of these equilibria. The coherency of the approximating linear model is therefore a necessary precondition that needs to be checked.} Let $Y_{t}$ be an $n\times1$ vector of endogenous variables, $X_{t}$ be an
$n_x\times1$ vector of exogenous state variables, which could include a sunspot shock whose coefficients in the model are zero, $Y_{t+1|t}:=E\left(
Y_{t+1}|\Omega_{t}\right) $, $X_{t+1|t}:=E\left( X_{t+1}|\Omega_{t}\right)
,$ and $s_{t}\in\left\{ 0,1\right\} $ an indicator variable that takes the
value 1 when some inequality constraint is slack and zero otherwise. We
consider models that can be written in the canonical form%
\begin{equation}%
\begin{tabular}
[c]{l}%
$A_{s_{t}}Y_{t}+B_{s_{t}}Y_{t+1|t}+C_{s_{t}}X_{t}+D_{s_{t}}X_{t+1|t}=0$\\
$s_{t}=1_{\left\{ a^{\prime}Y_{t}+b^{\prime}Y_{t+1|t}+c^{\prime}%
X_{t}+d^{\prime}X_{t+1|t}>0\right\} },$%
\end{tabular}
\label{eq: canon}
\end{equation}
where $A_{s},B_{s},C_{s},D_{s}$ are coefficient matrices, $a,b,c,d$ are
coefficient vectors and $1_A$ is the indicator function that takes the value 1 if $A$ holds and zero otherwise.\footnote{Although we focus on a single inequality constraint, the methodology we discuss here readily applies to more than one constraint. An example of a model with an additional ZLB on inflation expectations \citep{GorodnichenkoSergeyev21} is discussed in Appendix
\ref{app: s: ZLB expectations}.}
\paragraph{Example ACS}\label{ex: ACS}Taking a log-linear approximation of
(\ref{eq: EE}) around $M_{t}=r^{-1}$ and $\pi_{t}=\pi_{\ast}$ we obtain
$\hat{\pi}_{t+1|t}=\hat{R}_{t}+\hat{M}_{t+1|t},$ where $\hat{\pi}_{t}%
:=\log\left( \pi_{t}/\pi_{\ast}\right) ,$ $\hat{M}_{t}:=\log\left(
rM_{t}\right) ,$ $\hat{R}_{t}:=\log R_{t}-\mu,$ $\mu:=\log\left( r\pi_{\ast
}\right) .$ Taking logs of (\ref{eq: Taylor}) (no approximation) yields
$\hat{R}_{t}=\max\left\{ -\mu,\psi\hat{\pi}_{t}\right\} $ and combining the
two equations yields $\hat{\pi}_{t+1|t}-\hat{M}_{t+1|t}-\max\left\{ -\mu,\psi\hat{\pi}_{t}\right\}
=0.$
The regime indicator is $s_{t}=1_{\left\{ \psi\hat{\pi}_{t}+\mu>0\right\}
}.$ This model can be put in the canonical form
(\ref{eq: canon}) with $Y_{t}=\hat{\pi}_{t},$ $X_{t}=\left( \hat{M}%
_{t},1\right) ^{\prime},$ $A_{0}=0,$ $A_{1}=-\psi,$ $B_{0}=B_{1}=1,$
$C_{0}=\left( 0,\mu\right) ,$ $C_{1}=\left( 0,0\right) ,$ $D_{0}%
=D_{1}=\left( -1,0\right) $, $a=\psi,$ $b=0,$ $c=\left( 0,\mu\right)
^{\prime}$ and $d=\left( 0,0\right) ^{\prime}$.
\openbox
\paragraph{Example NK-TR}\label{ex: NK}The basic three-equation New Keynesian model, consisting of a Phillips curve, an Euler equation and a Taylor rule, is
\begin{subequations}\label{eq: NK}%
\begin{align}\label{eq: NK NKPC}
\hat{\pi}_{t} & =\beta\hat{\pi}_{t+1|t}+\lambda\hat{x}_{t}+u_{t} \\
\hat{x}_{t} & =\hat{x}_{t+1|t}-\sigma\left( \hat{R}_{t}-\hat{\pi}%
_{t+1|t}\right) +\epsilon_{t} \label{eq: NK EE} \\
\hat{R}_{t} & =\max\left\{ -\mu,\psi\hat{\pi}_{t}+\psi_{x}\hat{x}_{t}%
+\nu_t\right\} \label{eq: NK TR}
\end{align}
\end{subequations}
where $\hat{\pi}_{t},\hat{R}_{t}$ were defined in the previous example and
$\hat{x}_{t}$ is the output gap. It can be put in the canonical form
(\ref{eq: canon}) with $Y_{t}=\left( \hat{\pi}_{t},\hat{x}_{t}\right)
^{\prime},$ $X_{t}=\left( u_{t},\epsilon_{t},\nu_t,1\right)
^{\prime}$, and coefficients given in Appendix \ref{app: s: coeffcanonical}.
\openbox
\paragraph{Example NK-OP}\label{ex: NK-OP}The NK model with optimal discretionary policy replaces (\ref{eq: NK TR}) with
\begin{equation}
\gamma \hat{x}_t + \lambda \hat{\pi}_t = 0, \quad \text{if} \quad \hat{R}_t > -\mu, \quad \text{or} \quad \gamma \hat{x}_t + \lambda \hat{\pi}_t < 0, \quad \text{if} \quad \hat{R}_t = -\mu, \label{eq: NK OP}
\end{equation}
where $\gamma\geq0$ is the weight the monetary authority attaches to output stabilization relative to inflation stabilization, see \cite{Armenter2018}, \cite{Nakata2018} or \cite{Nakataschmidt2019JME} for details. Substituting for $\hat{R}_t=-\mu$ in (\ref{eq: NK EE}) when the ZLB binds, the model can be written in terms of two equations: (\ref{eq: NK NKPC}) and $\hat{x}_t = (1-s_t)\left[\hat{x}_{t+1|t}-\sigma\left(-\mu-\hat{\pi}%
_{t+1|t}\right) +\epsilon_{t} \right]-s_t \frac{\lambda}{\gamma}\hat{\pi}_t$, where $s_{t}=1_{\left\{\hat{\pi}_{t+1|t}+\frac{\hat{x}_{t+1|t} -\hat{x}_{t} +\epsilon_{t}}{\sigma}+\mu>0\right\} }$.
This can be put in the canonical representation (\ref{eq: canon}) with $Y_{t}=\left( \hat{\pi}_{t},\hat{x}_{t}\right)
^{\prime}$, $X_{t}=\left( u_{t},\epsilon_{t},1\right)^{\prime}$, and coefficients given in Appendix \ref{app: s: coeffcanonical}.
\openbox\medskip
A special case of (\ref{eq: canon}) without expectations of the endogenous
variables, i.e., $B_{0}=B_{1}=0$ and $b=0,$ is a piecewise linear simultaneous
equations model with endogenous regime switching, whose coherency was analysed
by GLM. We will now show how (\ref{eq: canon}) with expectations can be cast into the model analysed by GLM when the shocks are Markovian with discrete support. This is a key insight of the paper.
Without much loss of generality, we assume that the state variables
$X_{t}$ are first-order Markovian. We also focus on the existence of MSV solutions that can be represented as
$Y_{t}=f\left( X_{t}\right) $ for some function $f\left( \cdot\right) .$
Therefore, from now on, coherency of the model (\ref{eq: canon}) is understood to mean existence of some function $f\left( \cdot\right) $ such that
$Y_{t}=f\left( X_{t}\right) $ satisfies (\ref{eq: canon}).
Assume that $X_{t}$ can be represented as a $k$-state stationary first-order
Markov chain process with transition matrix $K$, and collect all the possible
states $i=1,...,k$ of $X_{t}$ in a $n_x\times k$ matrix $\mathbf{X}$. Let
$e_{i}$ denote the $i$th column of $I_k$, the identity matrix of dimension $k$, so
that $\mathbf{X}e_{i}$ -- the $i$th column of $\mathbf{X}$ -- is the $i$th
state of $X_{t}$. Note that the elements of the transition kernel are $K_{ij} = \Pr\left(X_{t+1} = \mathbf{X}e_j|X_{t} = \mathbf{X}e_i\right)$ and hence, $E\left( X_{t+1}|X_{t}=\mathbf{X}e_{i}\right)
=\mathbf{X}K^{\prime}e_{i}.$ Let $\mathbf{Y}$ denote the $n\times k$ matrix
whose $i$th column, $\mathbf{Y}e_{i}$, gives the value of $Y_{t}$ that
corresponds to $X_{t}=\mathbf{X}e_{i}$ along a MSV solution. Therefore,
along a MSV solution we have $E\left( Y_{t+1}|Y_{t}=\mathbf{Y}e_{i}\right)
=E\left( Y_{t+1}|X_{t}=\mathbf{X}e_{i}\right) =\mathbf{Y}K^{\prime}e_{i}.$
Substituting into (\ref{eq: canon}), $\mathbf{Y}$ must satisfy the following
system of equations
\begin{align}
0 & =\left( A_{s_{i}}\mathbf{Y}+B_{s_{i}}\mathbf{Y}K^{\prime}+ C_{s_{i}}\mathbf{X}+D_{s_{i}}\mathbf{X}K^{\prime}\right)
e_{i}\label{eq: canon i}\\
s_{i} & =1_{\left\{ \left( a^{\prime}\mathbf{Y}+b^{\prime}\mathbf{Y}%
K^{\prime}+ c^{\prime}\mathbf{X}+d^{\prime}%
\mathbf{X}K^{\prime}\right) e_{i}>0\right\} },\quad i=1,...,k. \nonumber
\end{align}
This system of equations can be expressed in the form $F\left( \mathbf{Y}%
\right) =\kappa\left( \mathbf{X}\right) $, where $\kappa\left(
\cdot\right) $ is some function of $\mathbf{X,}$ and $F\left( \cdot\right)
$ is a piecewise linear continuous function of $\mathbf{Y}$. Specifically, let
$J$ be a subset of $\left\{ 1,...,k\right\} .$ Then, we can write $F\left(
\cdot\right) $ as
\begin{equation}
F\left( \mathbf{Y}\right) =\sum_{J}\mathcal{A}_{J}1_{\mathcal{C}_{J}%
}vec\left( \mathbf{Y}\right) ,\label{eq: F}%
\end{equation}
where $\mathcal{C}_{J}=\left\{ \mathbf{Y}:\mathbf{Y\in\Re}^{n\times k}%
,s_{i}=1_{\left\{ i\in J\right\} }\right\} $ is defined by a particular
configuration of regimes over the $k$ states given by $J$.
If the piecewise linear function $F\left( \cdot\right)$ in (\ref{eq: F}) is invertible, then the system is coherent. This can be checked using Theorem 1 from GLM reproduced below.
\begin{theorem}[GLM]\label{th: GLM}
Suppose that the mapping $F\left( \cdot\right) $ defined in
(\ref{eq: F}) is continuous. A necessary and sufficient condition for
$F\left( \cdot\right) $ to be invertible is that all the determinants
$\det\mathcal{A}_{J},$ $J\subseteq\left\{ 1,...,k\right\} $ have the same
sign.\footnote{We only need to
check the determinants over all $2^{k}$ subsets of $\left\{ 1,...,k\right\}
$ rather than $2^{nk}$ subsets of $\left\{ 1,...,nk\right\} ,$ because the
$\mathcal{A}_{J}$ will be the same for all $n$-dimensional blocks of $vec\left(
\mathbf{Y}\right) $ that belong to the same state $i=1,...,k.$ }
\end{theorem}
The above determinant condition is straightforward to check. If the condition
is satisfied, then the model has a unique MSV solution. If the condition
fails, the model is not generically coherent, meaning that there will be
values of $\mathbf{X}$ for which no MSV solution exists. Since $\mathbf{X}$
represents the support of the distribution of $X_{t}$, violation of the
coherency condition in the \nameref{th: GLM} Theorem means that a MSV solution can only be
found if we impose restrictions on the support of the distribution of the
exogenous variables $X_{t}$.
\paragraph{\nameref{ex: ACS} continued}
Suppose $\hat{M}_{t}$ follows a two-state Markov Chain with transition kernel $K$, where $K_{11}=p$ and $K_{22}=q$; then there are four possible subsets of $\left\{ 1,2\right\}$.
Let PIR refer to a positive interest rate state when the ZLB constraint is slack
and ZIR to a zero interest rate state when the ZLB constraint binds. Given $e_1:=(1,0)'$, $e_2:=(0,1)'$, the coefficients of (\ref{eq: F}) are
\begin{equation}%
\begin{tabular}
[c]{lll}%
$\mathcal{A}_{J_{1}}=A_{1}I_{2}+B_{1}K,$ & $J_{1}=\left\{ 1,2\right\} $ &
$\text{(PIR,PIR)}$\\
$\mathcal{A}_{J_{2}}=%
e_1e_1' (A_0 I_2+B_0K) + e_2 e_2'(A_1 I_2 + B_1 K)
$,
& $J_{2}=\left\{ 2\right\} $ & $\text{(ZIR,PIR)}$\\
$\mathcal{A}_{J_{3}}=%
e_2e_2' (A_0 I_2+B_0K) + e_1 e_1'(A_1 I_2 + B_1 K)
,$ & $J_{3}=\left\{ 1\right\} $ & $\text{(PIR,ZIR)}$\\
$\mathcal{A}_{J_{4}}=A_{0}I_{2}+B_{0}K,$ & $J_{4}=\varnothing$ & (ZIR,ZIR)
\end{tabular}
\label{eq: A matrices ACS}%
\end{equation}
where, as we showed previously, $A_{0}=0,$ $A_{1}=-\psi,$ and $B_{0}=B_{1}=1.$
From (\ref{eq: A matrices ACS}), we obtain $\det\mathcal{A}%
_{J_{1}}=\left( \psi-1\right) ( 1-p\allowbreak-q+\psi) ,$ $\det
\mathcal{A}_{J_{2}}=p\left( 1-\psi\right) +q-1,$ $\det\mathcal{A}_{J_{3}%
}=p-1+q\left( 1-\psi\right) ,$ $\det\mathcal{A}_{J_{4}}\allowbreak=p+q-1.$ We focus on
the case $\psi>1.$ Since $0\leq p,q\leq1,$ it follows immediately that
$\det\mathcal{A}_{J_{1}}$ is positive while $\det\mathcal{A}_{J_{2}}$ and
$\det\mathcal{A}_{J_{3}}$ are both negative, so the coherency condition in the
\nameref{th: GLM} Theorem is violated.
\openbox\bigskip
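The determinant condition is straightforward to check numerically. The following Python sketch is our own illustration (assuming NumPy; all names are ours): it builds each $\mathcal{A}_{J}$ from the $Y$-blocks of the canonical form and the transition kernel, and applies the check to \nameref{ex: ACS}:
\begin{verbatim}
import itertools
import numpy as np

def glm_coherent(A, B, K):
    # A = [A_0, A_1], B = [B_0, B_1]: n x n blocks on Y_t and Y_{t+1|t};
    # K: k x k transition matrix. Returns True iff all 2^k determinants
    # det(A_J) share the same nonzero sign (the GLM Theorem).
    k, n = K.shape[0], A[0].shape[0]
    signs = set()
    for J in itertools.product([0, 1], repeat=k):   # regime s_i per state i
        AJ = np.zeros((n * k, n * k))
        for i, s in enumerate(J):
            for j in range(k):
                AJ[i*n:(i+1)*n, j*n:(j+1)*n] = A[s] * (i == j) + B[s] * K[i, j]
        signs.add(np.sign(np.linalg.det(AJ)))
    return len(signs) == 1 and 0.0 not in signs

# Example ACS: A_0 = 0, A_1 = -psi, B_0 = B_1 = 1, two-state chain
p, q = 0.8, 0.9
K = np.array([[p, 1 - p], [1 - q, q]])
one = np.eye(1)
print(glm_coherent([0 * one, -1.5 * one], [one, one], K))  # False: psi > 1
print(glm_coherent([0 * one, -0.5 * one], [one, one], K))  # True:  psi < 1
\end{verbatim}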
The next proposition states that the conclusion that an active Taylor rule leads to a model that is not generically coherent generalizes to the basic three-equation NK model. The following one states that the same conclusion applies to an NK model with optimal policy.
\begin{proposition} \label{prop: NK-TR CC}
The NK-TR model given by equations (\ref{eq: NK NKPC}) with $u_t=0$, (\ref{eq: NK EE}) with $\epsilon_{t}$ following a two-state Markov chain process, and the active Taylor rule (\ref{eq: NK TR}) with $\psi>1$ and $\psi_{x}=\nu
_t=0$, is not generically coherent.\footnote{The assumption $\psi _{x}=0$
in the Taylor rule is imposed to simplify the exposition. The conclusion that the model is not generically coherent when it satisfies the Taylor
principle can be extended to the case $\psi _{x}\neq 0
$, when the Taylor principle becomes $\psi +\frac{1-\beta }{\lambda }\psi
_{x}>1$, see the proof of the Proposition for further discussion.}
\end{proposition}
\begin{proposition} \label{prop: NK-OP CC}
The NK-OP model given by equations (\ref{eq: NK NKPC}) with $u_t=0$, (\ref{eq: NK EE}) with $\epsilon_{t}$ following a two-state Markov chain process, and the optimal discretionary policy (\ref{eq: NK OP}) is not generically coherent.
\end{proposition}
Proposition \ref{prop: NK-OP CC} proves that there are values of the shocks for which no MSV equilibrium exists, thus formally corroborating the numerical findings in \cite{Armenter2018} about non-existence of Markov-perfect equilibria (which we call MSV solutions) in the NK-OP model.
Analogously to Proposition \ref{prop: nonlinear ACS} in the previous subsection, we can characterize the support restrictions for existence of a solution in the special case given by Assumption \ref{ass: M nonlinear absorbing}, such that $p<1$ (transitory state) and $q=1$ (absorbing state), with support of
$\hat{M}_{t}$ equal to $(-r^{L})$ and $0$, respectively.
\begin{proposition} \label{prop: NK-TR sup res}
Consider the NK-TR model of Proposition \ref{prop: NK-TR CC}. Suppose further that $\epsilon_{t}=-\sigma \hat{M}_{t+1|t}$, where $M_t$ satisfies Assumption \ref{ass: M nonlinear absorbing}, and define $\theta:=\frac{\left( 1-p\right) \left(1-p\beta\right)}{p\sigma\lambda}$. A MSV solution exists if and only if
\begin{subequations}
\label{eq: supp restr NK}
\begin{align}\label{eq: supp restr NK E}
\text{either}\quad & \theta > 1 \text{ and } r^{-1}\leq\pi_{\ast}, \\
\text{or} \qquad & \theta \leq 1, \text{ } r^{-1}\leq\pi_{\ast} \text{ and } -r^L \leq \log(r\pi_*)\left(\frac{\psi-p}{\psi p}+\frac{\theta}{\psi}\right). \label{eq: supp restr NK B}
\end{align}
\end{subequations}
\end{proposition}
Figure \ref{fig: nk_psilargerthanp} helps to grasp the economic intuition. The $AD$ curve is piecewise linear depending on whether the economy is at the ZLB ($AD^{ZLB}$) or monetary policy follows the Taylor rule ($AD^{TR}$). The negative shock shifts the $AD$ curve to the left. In the transitory state, there are four possibilities
depending on the value of $\theta$, and on the equilibrium in the absorbing
state, which can be either a PIR one or a ZIR one (see Appendix
\ref{app: s: prop NK-TR sup res}).
\begin{figure}[h]
\centering
\includegraphics[scale = 0.45]{NKTR_pmore1_all.jpg}
\caption{The temporary state in the NK model when $\psi>1$.}
\label{fig: nk_psilargerthanp}
\end{figure}
When $\theta >1$, the $AS$ is flatter than $AD^{ZLB}$, and the $AS-AD$ system is described by the curves plotted in the left column of Figure \ref{fig: nk_psilargerthanp} for the two cases when the absorbing state is PIR on the top, i.e., panel (a), and when the absorbing state is ZIR on the bottom, i.e., panel (c). Inspection of these two graphs shows there is always a solution in both cases. Hence, when $\theta >1,$ the only necessary support
restriction is $\left( r\pi _{\ast }\right) ^{-1}\leq 1$, which guarantees the existence of an equilibrium in the absorbing state, as stated in (\ref{eq: supp restr NK E}).\footnote{$\theta>1$ exactly corresponds to condition C2 in Proposition 1 of \cite{egg2011nberma}. Figure \ref{fig: nk_psilargerthanp} provides a visual and intuitive interpretation of the coherency condition in these two sub-cases related to the analysis presented in \cite{egg2011nberma} and \cite{Bilbiie2019neofisher} for the NK-TR model.}
Next, turn to the case $\theta \leq 1.$ The $AS$ is steeper than $AD^{ZLB},$ and the $AS-AD$ system is described by the curves plotted in the right column of Figure \ref{fig: nk_psilargerthanp} for the two cases when the absorbing state is PIR on the top, i.e., panel (b), and when the absorbing state is ZIR on the bottom, i.e., panel (d). Clearly, a further support restriction is needed on the value of the shock in the transitory state to avoid the $AD$ curve lying entirely above the $AS$ curve. Intuitively, the negative shock cannot be too large (in absolute value) for an equilibrium (actually two equilibria in this case) to exist. This is what the second condition (\ref{eq: supp restr NK B}) guarantees.
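To get a sense of magnitudes, the threshold in (\ref{eq: supp restr NK B}) can be evaluated numerically. The calibration below is purely illustrative (our own hypothetical values, not taken from the text):
\begin{verbatim}
import numpy as np

beta, sigma, lam, p, psi = 0.99, 1.0, 0.1, 0.8, 1.5   # hypothetical values
r_pi_star = 1.005                   # gross steady-state nominal rate

theta = (1 - p) * (1 - p * beta) / (p * sigma * lam)
bound = np.log(r_pi_star) * ((psi - p) / (psi * p) + theta / psi)
print(theta)   # 0.52 <= 1, so branch (B) of the proposition applies
print(bound)   # ~0.0046: shocks -r_L above ~0.46% preclude an MSV solution
\end{verbatim}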
\begin{proposition} \label{prop: NK-OP sup res}
Consider the NK-OP model of Proposition \ref{prop: NK-OP CC}. Suppose further that $\epsilon_{t}=-\sigma \hat{M}_{t+1|t}$, where $M_t$ satisfies Assumption \ref{ass: M nonlinear absorbing}, and define $\theta:=\frac{\left( 1-p\right) \left(1-p\beta\right)}{p\sigma\lambda}$. A MSV solution exists if and only if
\begin{subequations}
\label{eq: supp restr NK-OP}
\begin{align}\label{eq: supp restr NK-OP E}
\text{either}\quad & \theta > 1 \text{ and } r^{-1}\leq\pi_{\ast}, \\
\text{or} \qquad & \theta \leq 1, \text{ } r^{-1}\leq\pi_{\ast} \text{ and } -r^L \leq \frac{\log(r\pi_*)}{p}. \label{eq: supp restr NK-OP B}
\end{align}
\end{subequations}
\end{proposition}
Figure \ref{fig: nk-OP_temp} helps to grasp the economic intuition. The $AD$ curve is piecewise linear depending on whether the economy is at the ZLB ($AD^{ZLB}$) or monetary policy follows the optimal rule ($AD^{OP}$). The negative shock shifts the $AD^{ZLB}$ curve upward. In the transitory state, there are four possibilities.
When $\theta >1$, the $AS$ is flatter than $AD^{ZLB}$ and the relevant plots are panels (a) and (c) in the left column, depending on whether agents expect the absorbing state to be a PIR or a ZIR. There is always a solution in both cases, so again we conclude that when $\theta >1,$ the only necessary support
restriction is $\left( r\pi _{\ast }\right) ^{-1}\leq 1$, as stated in (\ref{eq: supp restr NK-OP E}). Instead, when $\theta \leq 1$, the $AS$ is steeper than $AD^{ZLB},$ and the relevant plots are the ones in the right column. There exists an equilibrium (actually two equilibria) if and only if $(-r^{L})$ is below a threshold level, which is given by $\frac{\mu}{p},$ as stated by (\ref{eq: supp restr NK-OP B}) (see Appendix
\ref{app: s: prop NK-OP sup res}).
\begin{figure}[htb]
\centering\includegraphics[scale = 0.5]{optpol_temporary_all_cases.jpg}
\caption{The temporary state in the NK-OP model.}
\label{fig: nk-OP_temp}
\end{figure}
The literature on confidence-driven equilibria \citep[e.g.,][]{MertensRavn2014} emphasised the possibility of sunspots when $\theta\leq1$.
Propositions \ref{prop: NK-TR sup res} and \ref{prop: NK-OP sup res} do not consider sunspot equilibria. Existence of sunspot equilibria can be examined by including a sunspot shock in the exogenous state variables $X_t$. For example, analogously to Proposition \ref{prop: nonlinear ACS sunspots}, it can be shown that if the discount factor shock $M_t$ takes a single value $1/r$ and we allow for a binary sunspot process $\varsigma_t$ with transition matrix $K_\varsigma$, the NK-TR model with $\psi>1$ is not generically coherent, and an equilibrium exists if and only if $r^{-1}\leq \pi_*$.\footnote{This is true for any transition matrix $K_\varsigma$, i.e., not confined to the case when one of the states of the sunspot shock is absorbing as in \cite{MertensRavn2014}, see Appendix \ref{app: s: sunspots} for details.}
In the case of the NK-OP model, \cite{Nakata2018} and \cite{Nakataschmidt2019JME} consider only the case when the ZLB always binds in the `low' state, corresponding to $-pr^L>\log (r\pi_*)$. Because this excludes (\ref{eq: supp restr NK-OP B}), the condition for existence of a Markov-perfect (MSV) equilibrium given in \citet[Prop.~1]{Nakataschmidt2019JME} corresponds to (\ref{eq: supp restr NK-OP E}), which they express as a restriction on the transition probabilities, equivalent to $\theta>1$ in (\ref{eq: supp restr NK-OP E}), see Appendix
\ref{app: NS} for details. Therefore, Proposition \ref{prop: NK-OP sup res} corroborates and extends the existence results in \cite{Nakataschmidt2019JME} by highlighting that existence requires restrictions on the values the shocks can take (support) rather than on the transition probabilities.
Finally, note that as $\sigma$ gets large, $\theta$ goes to zero and condition (\ref{eq: supp restr NK}) reduces to
\begin{equation}\label{eq: support restr lin ACS}
r^{-1}\leq\pi_{\ast},\quad\text{and\quad}-r^{L}\leq\log\left( r\pi_{\ast}\right) \frac{\psi-p}{\psi p},
\end{equation}
which is the support restriction for \nameref{ex: ACS}.\footnote{The first inequality in (\ref{eq: support restr lin ACS}) is identical to the corresponding condition in (\ref{eq: support restr nonlin ACS})
in Proposition \ref{prop: nonlinear ACS} for existence of a fundamental solution in the nonlinear ACS model. This is
not surprising because in the absorbing state the two models are identical --
no approximation is involved. The second condition is approximately the same as the
corresponding second inequality in (\ref{eq: support restr nonlin ACS}) when $r\pi_*$ is close to 1. The right hand sides of the two inequalities differ by $\log\left(\frac{r\pi_*-1}{p}+1\right)-\frac{\log(r\pi_*)}{p}$, which is zero to a first-order approximation around $r\pi_*=1$.}
\subsection{More about the nature of the support restrictions\label{s: support restrictions}}
To shed some further light on the nature of the support restrictions, we consider a modification of \nameref{ex: ACS} that allows us to characterize the support restrictions analytically even when there are multiple shocks and the distribution of the shocks is continuous. Specifically, we replace the contemporaneous Taylor rule (\ref{eq: Taylor}) with a purely forward-looking one that also includes a monetary policy shock $\nu_t$. In log-deviations from steady state, the forward-looking Taylor rule is $\hat{R}_t=\max(-\mu,\psi\hat{\pi}_{t+1|t}+\nu_t)$. Substituting for $\hat{\pi}_{t+1|t}=\hat{R}_t+\hat{M}_{t+1|t}$ from the log-linear Fisher equation, we obtain the univariate equation
\begin{equation}
\hat{R}_{t} =\max\left\{ -\mu ,\psi\hat{R}_t + \psi\hat{M}_{t+1|t}+\nu_t\right\}.\label{eq: fig5}
\end{equation}
The advantage of a forward-looking Taylor rule in this very simple model is that it allows us to substitute out the expectations of the endogenous variable $\hat{\pi}_{t+1|t}$, and therefore obtain an equation that is immediately piecewise linear in the remaining endogenous variable $\hat{R}_t$. Application of the \nameref{th: GLM} Theorem then shows that the model is not generically coherent when $\psi>1$. This is shown graphically in Figure \ref{fig: incompleteness}, where the left-hand side (LHS) and right-hand side (RHS) of (\ref{eq: fig5}) are shown in blue and red, respectively. When $\psi>1$, (\ref{eq: fig5}) may have no solution, an example of which is shown on the left graph in Figure \ref{fig: incompleteness}; or it may have two solutions, which is shown in the graph on the right in Figure \ref{fig: incompleteness}. The latter graph also highlights the range of values of the shocks corresponding to incoherency -- when the positively sloped part of the red curve lies in the grey area, and the ones for which two solutions exist -- when the positively sloped part of the red curve lies to the right of the grey area.
The support restrictions required for existence of a solution are $\psi\hat{M}_{t+1|t}+\nu_{t} \leq (\psi-1)\mu$.
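Since (\ref{eq: fig5}) is a scalar fixed-point problem, the number of solutions can be counted in closed form. The helper below is a hypothetical illustration of our own; it makes the incoherency and multiplicity regions in Figure \ref{fig: incompleteness} explicit:
\begin{verbatim}
def num_solutions(c, psi, mu):
    # Count solutions of R = max(-mu, psi*R + c) for psi > 1, where
    # c = psi*E_t[M_{t+1}] + nu_t collects the shocks in eq. (fig5)
    zir = psi * (-mu) + c <= -mu      # R = -mu is consistent with the ZLB
    pir = -c / (psi - 1) > -mu        # interior solution satisfies R > -mu
    return int(zir) + int(pir)

mu, psi = 0.005, 1.5
print(num_solutions(0.004, psi, mu))  # 0: c > (psi-1)*mu, incoherent
print(num_solutions(0.001, psi, mu))  # 2: c < (psi-1)*mu, two solutions
\end{verbatim}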
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{fig5_new.jpg}
\caption{Illustration of the restriction on the support of
$\hat{M}_{t+1|t},\nu_t$ in the model given by the intersection between the LHS of (\ref{eq: fig5}), blue line, and the RHS of (\ref{eq: fig5}), red line.}
\label{fig: incompleteness}
\end{figure}
Suppose further that $\hat{M}_t$ follows the AR(1) process $\hat{M}_{t}=\rho\hat{M}_{t-1}+\sigma \epsilon_{t}$ with $E_{t-1}\epsilon_t = 0$, which is the continuous counterpart to the Markov Chain representation we used previously.
Since $\hat{M}_{t+1|t}=\rho\hat{M}_{t}=\rho^{2}\hat{M}_{t-1}+\rho\sigma\epsilon_{t}$, the support restrictions can then be equivalently rewritten as
\begin{equation}
\nu_{t}\leq-\psi\rho\sigma\epsilon_{t}-\psi\rho^2\hat{M}_{t-1}-\left(
1-\psi\right) \log\left( r\pi_{\ast}\right) ,\text{ when }\psi>1.
\label{eq: sup restr}%
\end{equation}
Condition (\ref{eq: sup restr}) has important implications that have been overlooked in the literature on estimation of DSGE models with a ZLB constraint: the shocks $\epsilon_t$ and $\nu_t$ cannot be independently distributed over time, nor can they be independent of each other.
First, suppose $\nu_t=0$, so the only shock driving the model is $\epsilon_t$. Condition (\ref{eq: sup restr}) says that $\epsilon_t$ cannot be independently and identically distributed over time, since the support of its distribution depends on past $\hat{M}_{t}$, and hence, past $\epsilon_{t}$. The presence of state variables ($\hat{M}_{t}$) will generally cause support restrictions to depend on the past values of the state variables.
Second, condition (\ref{eq: sup restr}) states that
the monetary policy shock cannot be independent of the real shock since the support of their distribution cannot be rectangular. Specifically, the monetary policy shock cannot be too big relative to current and past shocks to the discount factor if we are to rule out incoherency.
If these shocks are structural shocks in a DSGE model, such \textit{necessary} support restrictions are difficult to justify. Structural shocks are generally assumed to be orthogonal. In our opinion, it is hard to make sense of structural shocks whose supports depend on the value of the other shocks in a time-dependent way, as well as on the past values of the state variables.
We believe this is a substantive problem for any DSGE model with a ZLB constraint, and, possibly, more generally, with any kinked constraint.
A possible solution to this problem is to interpret condition (\ref{eq: sup restr}) as a constraint on the monetary policy shock $\nu_{t}$. When a very adverse shock hits the economy, monetary policy has to step in to guarantee the existence of an equilibrium, that is, to avoid the collapse of the economy. In a sense, this can represent what we witnessed after the Great Financial Crisis or after the COVID-19 pandemic: central banks engaged in massive operations through unconventional monetary policy measures (beyond the standard interest rate policy) in response to these large negative shocks. Hence, $\nu_{t}$ could represent what is currently missing in the simple Taylor rule or in the optimal policy problem to describe what monetary policy needs to do to guarantee the existence of an equilibrium facing very negative shocks and a ZLB constraint. This positive interpretation of condition (\ref{eq: sup restr}) calls for going beyond these descriptions of monetary policy behavior to model explicitly monetary policy conduct such that incoherency disappears. An alternative way, presented in Subsection \ref{s: UMP} below, relates to the use of unconventional monetary policy modelled via a shadow rate.
\subsection{The Taylor coefficient and the coherency and completeness conditions}\label{s: cc conditions k}
In the examples above, we saw that active Taylor rules ($\psi>1$) lead to incoherency, so restrictions on the support of the shocks are required for equilibria to exist. More generally, we can use the \nameref{th: GLM} Theorem to find the range of parameters of the models that guarantee coherency without support restrictions. In this subsection, we investigate this question in piecewise linear models with discrete shocks that follow a generic $k$-state Markov Chain.
The main result of this subsection is that the Taylor rule needs to be passive for the coherency and completeness condition in the \nameref{th: GLM} Theorem to be satisfied in the NK model. More generally, there is an upper bound, $\bar{\psi}_k$, on the Taylor rule coefficient $\psi$, which depends on parameters and on the number of states $k$, and it is always less than one.
We start with an analytical result for the special case with two states $k=2$.
\begin{proposition} \label{prop: NK cutoff}
Consider the NK model given by (\ref{eq: NK}) with $\psi_{x}=u_{t}=\nu
_t=0$ and suppose $\epsilon_{t}$ follows a Markov Chain with two states $\epsilon^1,\epsilon^2$ and transition probabilities $p=\Pr(\epsilon_{t+1}=\epsilon^1\allowbreak|\epsilon_{t}=\epsilon^1)$ and $q=\Pr(\epsilon_{t+1}=\epsilon^2\allowbreak|\epsilon_{t}=\epsilon^2)$ and define
\begin{equation}\label{eq: NK cutoff}
\psi_{p,q,\beta,\sigma\lambda}:=p+q-1-\frac{\left( 2-p-q\right) \left( 1-p\beta - q\beta +\beta\right) }{\sigma\lambda}.
\end{equation}
The coherency condition in the \nameref{th: GLM} Theorem holds if and only if
\begin{subequations}\label{eq: CC-NK-2s}
\begin{align}\label{eq: CC-NK-2s E}
\text{either}\quad & \psi_{p,q,\beta,\sigma\lambda}<0 \text{ and } \psi_{p,q,\beta,\sigma\lambda}<\psi<1, \\
\text{or}\qquad & \psi_{p,q,\beta,\sigma\lambda}>0 \text{ and } \psi<\psi_{p,q,\beta,\sigma\lambda}\leq 1. \label{eq: CC-NK-2s B}
\end{align}
\end{subequations}
\end{proposition}
Again, the coherency condition depends on the slopes of the AS (\ref{eq: NK NKPC}) and AD (\ref{eq: NK EE}) curves. However, in all cases, it rules out $\psi>1$, generalizing Proposition \ref{prop: NK-TR CC}.
If one of the states is absorbing, $q=1$, then $\psi_{p,q,\beta,\sigma\lambda} = p-\frac{\left( 1-p\right) \left(1-p\beta\right)}{\sigma\lambda},$ and the condition in (\ref{eq: CC-NK-2s E}) $\psi_{p,q,\beta,\sigma\lambda}<0$ is equivalent to $\theta>1$, as in (\ref{eq: supp restr NK E}) in Proposition \ref{prop: NK-TR sup res}, implying that the slope of the AS curve is flatter than the one of the AD curve under ZLB in the temporary state.
Another important special case is $p=q=(1+\rho)/2$, where $\rho \in (-1,1)$ is the autocorrelation coefficient of the shock $\epsilon_t$. In that case, we obtain $\psi_{p,q,\beta,\sigma\lambda} = \rho-\frac{\left( 1-\rho\right) \left(1-\rho\beta\right)}{\sigma\lambda}$. This can be thought of as a two-state approximation of a continuous AR(1) process for $\epsilon_t$. We can evaluate the coherency condition numerically for a $k$-state \cite{Rouwenhorst1995} approximation of an AR(1) process with $k>2$. Table \ref{t: calibrations} reports the coherency condition for various calibrations of the model found in \cite{MertensRavn2014}, \cite{eggsingh19jedc} and \cite{Bilbiie2019neofisher}.\footnote{Note that in some of the calibrations, the dynamics are driven by a sunspot shock, e.g., the confidence-driven model listed as MR2014 CD. However, the derivation of the coherency condition remains exactly the same when the transition matrix $K$ corresponds to a sunspot shock instead of the fundamental shock $\epsilon_t$.}
For example, when $\rho\sigma\lambda<\left( 1-\rho\right) \left(
1-\rho\beta\right)$, we verified numerically to 6 decimal digit precision that the coherency condition remains $\psi < 1$, that is, (\ref{eq: CC-NK-2s E}) holds for all $k\leq30$. In the opposite case, the coherency condition is (\ref{eq: CC-NK-2s B}), and $\bar{\psi}_k$
can get considerably smaller for large values of $\rho$ and $\sigma\lambda$. For any given values of $\rho$ and $\sigma\lambda$, $\bar{\psi}_k$ seems to converge to some value that is bounded away from zero (see the last column of Table \ref{t: calibrations}). As discussed previously, \nameref{ex: ACS} is a special case that obtains when $\sigma$ is large. In that case, $\bar{\psi}_k$ tends to zero with $k$, which suggests that the coherency condition is not satisfied for any $\psi>0$ in the ACS model with a continuously distributed AR(1) shock.
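The numerical check just described can be sketched in a few lines of Python, reusing the function \texttt{glm\_coherent} from the sketch in Subsection \ref{s: piecewise linear}. The $Y$-blocks below are our own derivation from (\ref{eq: NK}) under $\psi_x=u_t=\nu_t=0$ (they should coincide with the coefficients given in the appendix), and the brute-force enumeration of $2^{k}$ regime configurations is feasible only for small $k$:
\begin{verbatim}
import numpy as np

def rouwenhorst(k, rho):
    # k-state Rouwenhorst (1995) chain for an AR(1), symmetric case
    p = (1 + rho) / 2
    P = np.array([[p, 1 - p], [1 - p, p]])
    for n in range(3, k + 1):
        Q = np.zeros((n, n))
        Q[:-1, :-1] += p * P
        Q[:-1, 1:] += (1 - p) * P
        Q[1:, :-1] += (1 - p) * P
        Q[1:, 1:] += p * P
        Q[1:-1, :] /= 2             # interior rows are counted twice
        P = Q
    return P

def nk_blocks(beta, sigma, lam, psi):
    # Y_t and Y_{t+1|t} blocks of the NK-TR model, Y_t = (pi_t, x_t)'
    A0 = np.array([[-1.0, lam], [0.0, -1.0]])            # ZLB regime
    A1 = np.array([[-1.0, lam], [-sigma * psi, -1.0]])   # Taylor-rule regime
    B = np.array([[beta, 0.0], [sigma, 1.0]])
    return [A0, A1], [B, B]

def psi_bar(beta, sigma, lam, K, tol=1e-6):
    # Bisect for the largest psi satisfying the GLM determinant condition
    lo, hi = 1e-6, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        A, B = nk_blocks(beta, sigma, lam, mid)
        lo, hi = (mid, hi) if glm_coherent(A, B, K) else (lo, mid)
    return lo

# p = q = 0.9 (rho = 0.8); closed form from eq. (NK cutoff) gives 0.592
print(psi_bar(0.99, 1.0, 0.2, rouwenhorst(2, 0.8)))   # ~0.592
\end{verbatim}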
\begin{table}[h!]
\centering
\caption{Coherency condition $\psi<\bar{\psi}$ for different calibrations of the NK model\label{t: calibrations}}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Paper & $\beta $ & $\sigma $ & $\lambda $ & $\mu $ & $\rho$
& $\bar{\psi}$ \\ \hline
MR2014 FD & 0.99 & 1 & 0.4479 & 0.01 & 0.4 & 1 \\
MR2014 CD & 0.99 & 1 & 0.4479 & 0.01 & 0.7 & 0.494 \\ \hline
\cite{Bilbiie2019neofisher} & 0.99 & 1 & 0.02 & 0.01 & 0.8 & 1 \\
& 0.99 & 1 & 0.2 & 0.01 & 0.8 & 0.592 \\
\hline
ES2019 GD & 0.9969 & 0.6868 & 0.0091 & 0.0031 & 0.9035 & 1 \\
ES2019 GR & 0.997 & 0.6202 & 0.0079 & 0.003 & 0.86 & 1 \\ \hline
\end{tabular}
\fnote{Notes: MR2014: \cite{MertensRavn2014}, FD: Fundamental-driven, CD: Confidence-driven; ES2019: \cite{eggsingh19jedc}, GD: Great Depression, GR: Great Recession. These papers assume an absorbing state and $\rho$ corresponds to the persistence probability of the transitory state.}
\end{table}
\subsection{Coherency with unconventional monetary policy}\label{s: UMP}
In this subsection we show that UMP can relax the restrictions for coherency
in the NK model. A UMP channel can be added to the model in \nameref{ex: NK}
using a `shadow rate' $\hat{R}_{t}^{\ast }$ that represents the desired UMP
stance when it is below the ZLB.
Consider a model of bond market
segmentation \citep{ChenCurdiaFerrero2012}, where a fraction of households can only invest in long-term bonds. In such a model, the amount of long-term assets held by the private sector affects the term premium and provides a UMP channel via long-term asset purchases by the central bank. If we assume that asset purchases (quantitative easing) follow a similar policy rule to the Taylor rule, i.e., react to inflation deviations from target, then the IS curve (\ref{eq: NK EE}) can be written as (see Appendix
\ref{s: QE})
\begin{eqnarray}
\hat{x}_{t} &=&\hat{x}_{t+1|t}-\sigma \left((1-\xi) \hat{R}_{t}+\xi \hat{R}_{t}^{\ast }-\hat{\pi}_{t+1|t}\right) +\epsilon _{t},
\label{eq: NK EE UMP} \\
\hat{R}_{t} &=&\max \left\{ -\mu ,\hat{R}_{t}^{\ast }\right\} , \quad \hat{R}_{t}^*=\psi \hat{\pi}_{t}+\psi _{x}\hat{x}_{t}+\nu_{t}, \label{eq: NK TR UMP}
\end{eqnarray}%
where $\xi$ is a function of the fraction of households constrained to invest
in long-term bonds, the elasticity of the term premium with respect to the stock of long term bonds and the intensity of UMP.
The standard NK model (\ref{eq: NK}) arises as a special case with $\xi
=0$.
The conditions for coherency can be derived analytically in the case of a
single AD shock with a two-state support, analogously to Proposition \ref%
{prop: NK cutoff}.
\begin{proposition}
\label{prop: NK cutoff UMP} Consider the NK model given by (\ref{eq: NK NKPC}%
), (\ref{eq: NK EE UMP}) and (\ref{eq: NK TR UMP}) with $\psi _{x}=u_{t}=\nu
_{t}=0$ and suppose $\epsilon _{t}$ follows a Markov Chain with one
absorbing state and one transitory state that persists with probability $p$. Then, the coherency condition
in the \nameref{th: GLM} Theorem holds if and only if
\begin{subequations}
\label{eq: CC-NK-2s-UMP}
\begin{align}
\text{either}\quad & \psi >\max \left( 1,\frac{1}{\xi }\right) ,
\label{eq: CC NK UMP high} \\
\text{or}\qquad & \max \left( \psi _{p,1,\beta ,\sigma \lambda },\frac{\psi
_{p,1,\beta ,\sigma \lambda }}{\xi }\right) <\psi <\min \left( 1,\frac{1}{%
\xi }\right) , \label{eq: CC NK UMP mid} \\
\text{or}\qquad & \psi <\min \left( \psi _{p,1,\beta ,\sigma \lambda },\frac{%
\psi _{p,1,\beta ,\sigma \lambda }}{\xi }\right) , \label{eq: CC NK UMP low}
\end{align}%
\end{subequations}
where $\psi _{p,1,\beta ,\sigma \lambda }\leq 1$ is defined in (\ref{eq: NK
cutoff}).
\end{proposition}
As $\xi $ goes to zero, the model reduces to the standard NK\ model (\ref%
{eq: NK}), and the coherency condition (\ref{eq: CC-NK-2s-UMP}) reduces to (%
\ref{eq: CC-NK-2s}). We already established that in that case there are no values of $\psi
>1$ that lead to coherency, i.e., an active Taylor rule violates the
coherency condition. However, when UMP\ is present and effective, i.e., $\xi>0$, condition (\ref{eq: CC NK UMP high}) shows that an active Taylor
rule can still lead to coherency, i.e., a MSV solution exists without
support restrictions. For example, the value $\psi =1.5$ for the Taylor rule
coefficient used in typical calibrations leads to coherency if $\xi >2/3$. This is consistent
with the estimation results reported in \cite{IkedaLiMavroeidisZanetti2020},
who find the identified set of $\xi$ to be $\left[ 0.74,0.76\right]$ using postwar U.S. data.
\subsection{Endogenous state variables}\label{s: endog}
Up to this point, we analysed models without endogenous dynamics. In this subsection, we describe the challenges posed by endogenous dynamics and discuss various solutions.
We can add endogenous dynamics to the canonical model (\ref{eq: canon}) as follows:
\begin{equation}%
\begin{tabular}
[c]{l}%
$A_{s_{t}}Y_{t}+B_{s_{t}}Y_{t+1|t}+C_{s_{t}}X_{t}+D_{s_{t}}X_{t+1|t}+H_{s_{t}%
}Y_{t-1}=0$\\
$s_{t}=1_{\left\{ a^{\prime}Y_{t}+b^{\prime}Y_{t+1|t}+c^{\prime}%
X_{t}+d^{\prime}X_{t+1|t}+h^{\prime}Y_{t-1}>0\right\} },$%
\end{tabular}
\label{eq: canon endog}%
\end{equation}
where $Y_{t}$ is an $n\times1$ vector of (nonpredetermined) endogenous variables, $X_{t}$ is an
$n_{x}\times1$ vector of exogenous state variables, $H_{s}$ are $n\times n$ coefficient matrices, $h$ is an $n\times1$ coefficient vector, and the
remaining coefficients were already defined above in (\ref{eq: canon}).
\paragraph{Example NK-ITR}\label{ex: NK inert}(NK model with Inertial Taylor rule)
A generalization of the basic three-equation NK model in \nameref{ex: NK} is obtained by replacing (\ref{eq: NK TR}) with
$\hat{R}_{t}=\max\{ -\mu,\allowbreak\phi\hat{R}_{t-1}+\psi\hat{\pi}_{t}%
+\psi_{x}\hat{x}_{t}+\nu_{t}\} $. It can be put in the canonical form
(\ref{eq: canon endog}) with $Y_{t}=\left( \hat{\pi}_{t},\hat{x}_{t},\hat
{R}_{t}\right) ^{\prime},$ $X_{t}=\left( u_{t},\epsilon_{t},\nu
_{t},1\right) ^{\prime}$, and coefficient matrices given in Appendix
\ref{app: s: coeffNKITR}. \hfill \openbox\medskip
As before, we study the existence of MSV solutions, which are of the form
$Y_{t}=f\left( Y_{t-1},X_{t}\right) $. We also assume as before that $X_{t}$
follows a $k$-state Markov chain with transition kernel $K$ and support
$\mathbf{X\in\Re}^{n_{x}\times k}$, so that the $i$th column of $\mathbf{X,}$ i.e., $\mathbf{X}e_{i},$ gives the value of $X_{t}$ in state $i=1,\ldots,k$.
In models without endogenous state variables (\ref{eq: canon}), we can represent the
MSV solution $Y_{t}=f\left( X_{t}\right) $ exactly by a constant $n\times k$ matrix
$\mathbf{Y}$, since $Y_{t}$ has exactly $k$ points of support
corresponding to the $k$ states of $X_{t}$ that are constant over time.
However, with endogenous states the support of $Y_{t}$ will vary endogenously
over time along a MSV solution $Y_{t}=f\left( Y_{t-1},X_{t}\right)$ through the evolution of $Y_{t-1}$, and thus, it cannot be characterized by a constant matrix $\mathbf{Y}$.
That is, the matrix of support points of $Y_t$ -- which we previously defined by the time-invariant matrix $\mathbf{Y}$ because the support of $X_t$ is time-invariant -- must now be a function of $Y_{t-1}$, too. Hence, while without endogenous state variables a MSV solution satisfies $E\left( Y_{t+1}|Y_{t}=\mathbf{Y}e_{i}\right) =E\left( Y_{t+1}|X_{t}=\mathbf{X}e_{i}\right) =\mathbf{Y}K^{\prime}e_{i}$, when there are endogenous state variables we have $E(Y_{t+1}|Y_{t}\allowbreak=\mathbf{Y}_{t} e_{i},X_{t}\allowbreak=\mathbf{X}e_{i})\allowbreak=\mathbf{Y}_{t+1}^{i}K^{\prime}e_{i},$ where $\mathbf{Y}_{t+1}^{i}$ gives the support of $Y_{t+1}$ when $Y_{t}$ is in the $i$th state. The problem is that the support of $Y_{t}$ grows exponentially for any given initial condition $Y_{0},$ so the MSV solution cannot be represented by any finite-dimensional system of piecewise linear equations. This makes the analysis of Subsection \ref{s: piecewise linear} generally inapplicable.
To make progress, we will consider solving the model backwards from some
terminal condition in a way that nests the case of no endogenous dynamics that
we studied earlier. We will assume, for simplicity of notation, that the
endogenous state variable is a scalar, i.e., $H_{s_{t}}Y_{t-1}=h_{s_{t}%
}y_{t-1}$ in (\ref{eq: canon endog}), where $h_{s_{t}}$ is $n\times1$ and
$y_{t}:=g^{\prime}Y_{t}$ is a linear combination of $Y_{t}$, for some known
$n\times1$ vector $g$, as in \nameref{ex: NK inert}, where $g=\left(
0,0,1\right) ^{\prime}$.\footnote{Having multiple endogenous state variables
will increase the number of coefficients we need to solve, but will not
increase the complexity of the problem: the solution $\mathbf{Y}%
_{t}=\mathbf{G}y_{t-1}+\mathbf{Z}$ will need to be replaced by $\mathbf{Y}%
_{t}=\sum_{l=1}^{n}\mathbf{G}_{l}Y_{l,t-1}+\mathbf{Z}$.} Next, suppose there
is a date $T$ such that for all $t\geq T$, the MSV solution $f\left(
y_{t-1},X_{t}\right) $ can be represented in the form
$\mathbf{Y}_{t}=\mathbf{G}y_{t-1}+\mathbf{Z}$, with $n\times k$ matrices
$\mathbf{G}$ and $\mathbf{Z}$. This is a matrix representation of a general possibly nonlinear function $f(y_{t-1},X_t)$. The matrix $\mathbf{Z}$ represents the part of $Y_t$ that depends only on the exogenous variables $X_t$. When there are no endogenous states, we have $\mathbf{G}=0$, so $\mathbf{Y}_{t}=\mathbf{Z}$, which we denoted by $\mathbf{Y}$ in (\ref{eq: canon i}) above. Each column of $\mathbf{G}$ gives the coefficients on $y_{t-1}$ in the MSV solution that correspond to each different state of $X_t$. For example, if $k=2$, then the endogenous dynamics in the low state $i=1$, say, can be different from the high state $i=2$. If $i=1$ were a ZIR and $i=2$ were a PIR, the endogenous dynamics could differ across regimes, as shown analytically in the example below.
In the case of no endogenous dynamics, we analysed the solution of $\mathbf{Y}(=\mathbf{Z}$ here) using the method of undetermined coefficients, see equation (\ref{eq: canon i}) above. The corresponding equation for the model with endogenous dynamics is
\begin{align}\label{eq: equations for G and Z}
0 & =\left( A_{s_{t,i}}\mathbf{G}e_{i}+h_{s_{t,i}}+B_{s_{t,i}}%
\mathbf{G}K^{\prime}e_{i}g^{\prime}\mathbf{G}e_{i}\right) y_{t-1}\\
& +\left( A_{s_{t,i}}\mathbf{Z}+B_{s_{t,i}}\mathbf{G}K^{\prime}%
e_{i}g^{\prime}\mathbf{Z}+B_{s_{t,i}}\mathbf{Z}K^{\prime}+C_{s_{t,i}%
}\mathbf{X}+D_{s_{t,i}}\mathbf{X}K^{\prime}\right) e_{i}, \nonumber
\end{align}
for all $i=1,\ldots,k$, see \ref{app: s: bruteforcealg}.
Note that (\ref{eq: equations for G and Z}) exactly nests (\ref{eq: canon i}) when $h_{s_{t,i}}=0$, $\mathbf{G}=0$ and $\mathbf{Z}=\mathbf{Y}$. Given a particular regime configuration $J\subseteq
\left\{ 1,\ldots,k\right\}$, which determines in which of the states the constraint is slack, $s_{t,i}=1$, (\ref{eq: equations for G and Z}) gives a system of $2nk$ polynomial equations in the $2nk$ unknowns $\mathbf{G}$ and $\mathbf{Z}$ by equating the coefficients on $y_{t-1}$ and the constant terms to zero, respectively. Unfortunately, because this system of equations is not piecewise linear in $\mathbf{G}$ and $\mathbf{Z}$, we cannot use the \nameref{th: GLM} Theorem to check coherency.
Therefore, one would need to resort to a brute force method of going through
all possible $2^{k}$ regime configurations $J$ and checking if there are any
solutions
that satisfy the inequality constraints. Since $y_{t-1}$ is endogenous, one would need
to solve the system backwards subject to some initial condition $y_{0}$, and
then check coherency for all possible values of $y_{0}$. An algorithm for
doing this is given in Appendix
\ref{app: s: bruteforcealg}.\footnote{It is worth noting that solving the model
backwards from $T$ to $1$ requires (up to)\ $2^{k\left( T-1\right) }$
calculations, in order to consider all possible regime paths from $1$ to
$T-1$. This is an NP-hard problem even for fixed $k$. One could drastically
reduce the number of calculations by limiting the possible regime transitions, e.g., as
in \cite{eggertsson2021toolkit}, at the cost of making the conditions for
coherency even stricter.}
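To make the brute-force procedure concrete, the following minimal sketch (in Python) enumerates the $2^{k}$ regime configurations and attempts to solve the polynomial system (\ref{eq: equations for G and Z}) for $\mathbf{G}$ and $\mathbf{Z}$ with a generic root finder. All model matrices below are hypothetical placeholders, not a calibration from the paper, and the sketch omits the verification of the inequality constraints over a grid of initial conditions $y_{0}$, which is part of the full algorithm in Appendix \ref{app: s: bruteforcealg}.
\begin{verbatim}
# Sketch: brute-force enumeration of regime configurations for the
# system of equations determining G and Z. All matrices below are
# hypothetical placeholders.
import itertools
import numpy as np
from scipy.optimize import fsolve

n, k = 2, 2
rng = np.random.default_rng(0)
A = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(2)]
B = [0.5 * rng.standard_normal((n, n)) for _ in range(2)]
C = [0.1 * rng.standard_normal((n, 1)) for _ in range(2)]
D = [0.05 * rng.standard_normal((n, 1)) for _ in range(2)]
h = [0.1 * rng.standard_normal(n) for _ in range(2)]
g = np.array([0.0, 1.0])                  # y_t = g'Y_t
X = np.array([[-0.01, 0.0]])              # support of the exogenous state
K = np.array([[0.8, 0.2], [0.0, 1.0]])    # Markov transition matrix

def residuals(theta, s):
    # s[i] = 1 if the constraint is slack in state i, 0 if binding
    G = theta[:n * k].reshape(n, k)
    Z = theta[n * k:].reshape(n, k)
    out = []
    for i in range(k):
        e = np.zeros(k); e[i] = 1.0
        GKe = G @ (K.T @ e)               # G K' e_i
        # coefficient on y_{t-1} must vanish:
        out.append(A[s[i]] @ G[:, i] + h[s[i]]
                   + (g @ G[:, i]) * (B[s[i]] @ GKe))
        # constant term must vanish:
        out.append(A[s[i]] @ Z[:, i] + (g @ Z[:, i]) * (B[s[i]] @ GKe)
                   + B[s[i]] @ (Z @ (K.T @ e)) + C[s[i]] @ (X @ e)
                   + D[s[i]] @ (X @ (K.T @ e)))
    return np.concatenate(out)

for s in itertools.product([0, 1], repeat=k):
    theta0 = 0.1 * rng.standard_normal(2 * n * k)
    sol, info, flag, msg = fsolve(residuals, theta0, args=(s,),
                                  full_output=True)
    if flag == 1:
        print("configuration", s, "-> candidate (G, Z) found")
\end{verbatim}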
\paragraph{\nameref{ex: NK inert} continued}
Again, it is possible to obtain some analytical results in the special case of the model in \nameref{ex: NK inert} if we assume, as in Proposition \ref{prop: NK-TR sup res}, that $\psi_{x}=u_t=\nu_t=0$ and that $\epsilon_{t}=-\sigma \hat{M}_{t+1|t}$, where $M_t$ satisfies Assumption \ref{ass: M nonlinear absorbing}.
Here, we report results on the existence of a MSV solution such that the economy is in a ZIR in the temporary state where $\hat{M}_{t} =-r^{L}$ and then converges to a PIR in the absorbing state where $\hat{M}_{t} =0$. In other words, we report the support restrictions needed for a ZIR to exist in the temporary state, given that the agents expect to move to the stable manifold of the PIR system as soon as the shock vanishes. Appendix
\ref{app: s: NKITR_analytics} shows that such a solution exists if and only if
\begin{subequations}
\label{eq: supp restr NK inert}
\begin{align}
\text{either}\quad & \theta>1\text{ and }r^{-1}\leq\pi_{\ast} \text{ and }%
-r^{L}\geq-\bar{r}^{L}
,\label{eq: supp restr NK inert E}\\
\text{or}\qquad & \theta\leq1\text{ and }r^{-1}\leq\pi_{\ast}\text{ and }
-r^{L}\leq-\bar{r}^{L},\label{eq: supp restr NK inert B}
\end{align}
\end{subequations}
where\vspace{-36pt}
\begin{equation}
-\bar{r}^{L} =\mu\left( \frac{\psi
-p}{\psi p}+\frac{\theta}{\psi}+\frac{\phi}{\psi}\left( 1-\theta\right) -(1-p)\frac
{\lambda\gamma_{x}+\gamma_{\pi}\left[ \beta(1-p)+\lambda\sigma\right]
}{\lambda\sigma p}\right), \label{eq: rl NK-ITR}
\end{equation}
$\theta:=\frac{\left( 1-p\right) \left(1-p\beta\right)}{p\sigma\lambda}$ and $\gamma_{x},\gamma_{\pi}$ are functions of the model's parameters that define the slopes of the stable manifold of the PIR system: in the absorbing state, $\hat{\pi}_t$ and $\hat{x}_t$ travel to the PIR steady state along the paths $\hat{\pi}_t = \gamma_{\pi} \hat{R}_{t-1}$ and $\hat{x}_t=\gamma_{x}\hat{R}_{t-1}$.
This result nests the corresponding analysis in Proposition \ref{prop: NK-TR sup res}, because if $\phi=0$ then $\gamma_{\pi}=\gamma_{x}=0$ and (\ref{eq: supp restr NK inert B}) collapses to (\ref{eq: supp restr NK B}), where $-r^{L}\leq\log(r\pi_{\ast})\allowbreak\left( \frac{\psi-p}{\psi p}+\frac{\theta
}{\psi}\right)$. (\ref{eq: supp restr NK inert}) shows that, for a ZIR-PIR solution to exist, the negative shock should be large enough in absolute value when $\theta>1$, while it should be small enough in absolute value when $\theta \leq 1.$ Numerical results (not reported) show that, if that condition is not satisfied, then there is a PIR-PIR solution if $\theta>1$, and there is no solution if $\theta\leq 1$, as implied by Proposition \ref{prop: NK-TR sup res} for the case of no inertia.
The effects of inertia on the support restrictions needed for the existence of an equilibrium when $\theta\leq 1$ can be evaluated numerically, since we do not have analytic expressions for $\gamma_\pi$ and $\gamma_x$. Figure \ref{fig: bound} shows the lower bound $\bar{r}^L$ as a function of $\phi$ for the two calibrations given in Table \ref{t: calibrations} where $\theta\leq 1$.
Given that the shock needs to be above the lower bound (i.e., smaller in absolute value) for the equilibrium to exist, the graph reveals that inertia relaxes the support restrictions.
\begin{figure}[h!]
\centering
\includegraphics[width =6in]{boundJME.jpg}
\caption{$\bar{r}^{L}$ as a function of $\phi$, using parameters from \cite{MertensRavn2014} and \cite{Bilbiie2019neofisher} calibrations shown in Table \ref{t: calibrations} where $\theta\leq1$.}
\label{fig: bound}
\end{figure}
\subsubsection{Quasi differencing}\label{s: quasi diff}
In a very special case, we can analyse the coherency of the model using the \nameref{th: GLM} Theorem.
\begin{assumption}\label{ass: ST} Assume
the first $n_{1}$ elements $Y_{1t}$ of $Y_{t}=\left( Y_{1t}^{\prime},Y_{2t}^{\prime
}\right) ^{\prime}$ in model (\ref{eq: canon}) are predetermined,
$B_{0},B_{1}$ are invertible, and there exists a $n\times n$ invertible matrix
$Q$ such that $Q^{-1}B_{s}^{-1}A_{s}Q=\Lambda_{s}$ is upper triangular for
both $s=0,1.$ Let $Q_{1}$ denote the first $n_{1}$ columns of $Q,$ and assume also
that $a^{\prime}Q_{1}=b^{\prime}Q_{1}=0.$
\end{assumption}
The first part of Assumption \ref{ass: ST} is satisfied for commuting pairs of
matrices, but commutation is not necessary.\footnote{For example, a pair of
non-commuting triangular matrices is trivially simultaneously triangularized.
One can check Assumption \ref{ass: ST} using the algorithm of \cite{Dubi2009}.}
This assumption is clearly restrictive, but not empty, see \nameref{ex: ACS inert} below.
If Assumption \ref{ass: ST} holds, then we can remove the predetermined variables $Y_{1t}$ from (\ref{eq: canon}) by premultiplying the equation by
some $n_{2}\times n$ matrix $(-\Gamma,I_{n_2})$, where $\Gamma$ is $n_2\times n_1$
(see Appendix
\ref{app: s: simutri}). Intuitively, we transform the original system to one that does
not involve any predetermined variables by taking a `quasi difference' of
$Y_{2t}$ from the predetermined variables $Y_{1t},$ say $\widetilde{Y}_{t} := Y_{2t}-\Gamma Y_{1t}$.
This is typically possible in linear models, but it is not, in general,
possible in piecewise linear models because the `quasi difference'
coefficient $\Gamma$ needs to be the same across all regimes. Assumption
\ref{ass: ST} ensures this is possible. Since the model in $\widetilde{Y}_{t}$ does not have any predetermined variables and is still of the form (\ref{eq: canon}),
we can analyze its coherency using the \nameref{th: GLM} Theorem as in
Subsection \ref{s: piecewise linear}.
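The following sketch illustrates a simple (sufficient, not necessary) numerical check of the first part of Assumption \ref{ass: ST}: it takes $Q$ from the eigenvectors of $M_{0}=B_{0}^{-1}A_{0}$, which is valid when $M_{0}$ has distinct eigenvalues, and tests whether the same $Q$ also triangularizes $M_{1}=B_{1}^{-1}A_{1}$. It is not the general algorithm of \cite{Dubi2009}, the matrices used in the demonstration are hypothetical, and the condition $a^{\prime}Q_{1}=b^{\prime}Q_{1}=0$ would need to be verified separately.
\begin{verbatim}
# Heuristic check of simultaneous triangularization (Assumption ST).
import numpy as np

def common_triangularizer(M0, M1, tol=1e-10):
    _, Q = np.linalg.eig(M0)           # Q^{-1} M0 Q is diagonal
    L1 = np.linalg.solve(Q, M1 @ Q)    # Q^{-1} M1 Q
    if np.max(np.abs(np.tril(L1, k=-1))) < tol:
        return Q                       # upper triangular in both cases
    return None                        # inconclusive (a Q may still exist)

M0 = np.array([[0.9, 0.3], [0.1, 0.5]])
M1 = 0.5 * M0 + 0.2 * M0 @ M0          # commutes with M0 by construction
print(common_triangularizer(M0, M1) is not None)   # True
\end{verbatim}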
\paragraph{Example ACS-STR}
\label{ex: ACS inert}Consider a generalization of \nameref{ex: ACS} where
the Taylor rule $\hat{R}_t=\max\left\{-\mu, \psi\hat{\pi}_t\right\}$ is replaced with $\hat{R}_{t}=\max\left\{ -\mu,\phi\hat
{R}_{t-1}+\psi\hat{\pi}_{t}\right\} $. Appendix
\ref{app: s: ACS-STR} shows that this
model satisfies Assumption \ref{ass: ST}, applies the \nameref{th: GLM} Theorem and analyzes the solutions. When the exogenous shock satisfies
Assumption \ref{ass: M nonlinear absorbing}, the coherency condition is
$\psi<p-\phi$, which generalizes the one found earlier without
inertia: $\psi<p$. When $\psi>1$, the requisite support restriction on $r^{L}$
is $-r^{L}<\mu\frac{\psi +\phi-p}{p\psi}$. For $\phi>0$, this support restriction is weaker than the one for the noninertial model given in (\ref{eq: support restr lin ACS}). \openbox
\section{The incompleteness problem}\label{s: incompleteness}
This Section explores the multiplicity of the MSV solutions in piecewise linear models of the form (\ref{eq: canon}). The main message is that when the CC condition of the \nameref{th: GLM} Theorem is not satisfied, but the support of the distribution of the shocks is restricted appropriately, there are many more MSV solutions than are typically considered in the literature. This is distinct from the usual issue of indeterminacy in models without occasionally binding constraints.
As we discussed in Subsection \ref{s: piecewise linear}, when the state variables $X_t$ follow a $k$-state Markov chain, these models can be written as $F\left( \mathbf{Y}%
\right) =\kappa\left( \mathbf{X}\right)$, where $F(\cdot)$ is the piecewise linear function (\ref{eq: F}), and all possible solutions correspond to
$\mathbf{Y=}\mathcal{A}_{J}^{-1}\kappa\left( \mathbf{X}\right)$ for each
$J\subseteq\left\{ 1,\ldots,k\right\}$. Thus, there are up to $2^k$ possible MSV solutions.
\begin{figure}
[ptb]
\begin{center}
\centering\includegraphics[width =6in]{4cases_psi1_final.jpg}
\caption{The possible equilibrium outcomes in \nameref{ex: ACS}}
\label{fig: 2states}
\end{center}
\end{figure}
\paragraph{\nameref{ex: ACS} continued}
To start with, consider the special case where $M_t$ satisfies Assumption \ref{ass: M nonlinear absorbing}. A corollary of Proposition \ref{prop: NK cutoff} (with $q=1$ and $\sigma=\infty$) shows that the CC condition is satisfied if and only if $\psi<p$. Hence, for $\psi\geq p$ there will be up to 4 MSV solutions if the support of $M_t$ allows it. These are shown in Figure \ref{fig: 2states} and Table \ref{tab: equilibria simple} for $\psi>1$, see Appendix
\ref{app:simple ex} for the derivation. The equilibrium typically used in the literature is the second one in Table \ref{tab: equilibria simple}: ZIR in the transitory and PIR in the absorbing state. This is a fairly intuitive choice in this simple case, but there is no clear choice in more general scenarios with no absorbing state and $k>2$.
\setlength{\tabcolsep}{0.7em}
\begin{table}[h!]
\centering
\caption{The four possible equilibria in \nameref{ex: ACS} when $\psi>1$}
\label{tab: equilibria simple}
\begin{tabular}{ccc}
\hline \hline
Analytical Solution & & Type of Equilibrium \\ \hline \hline
& & \\
\begin{math}
\hat{\pi}_{t}=\left\{
\begin{array}
[c]{ll}
r^{L}\frac{p}{\psi-p} & \text{if }\hat{M}_{t}=-r^{L}\in\left( 0,\mu
\frac{\psi-p}{\psi p}\right) \\
0 & \text{if }\hat{M}_{t}=0
\end{array}
\right.
\end{math} & & (PIR, PIR) \\ & & \\\hline
& & \\
\begin{math}
\hat{\pi}_{t} =\left\{
\begin{array}
[c]{ll}
-r^{L}-\frac{\mu}{p}, & \text{if }\hat{M}_{t}=-r^{L}\in\left( 0,\mu\frac
{\psi-p}{\psi p}\right) \\
0 & \text{if }\hat{M}_{t}=0
\end{array}
\right.
\end{math} & & (ZIR, PIR) \\ & & \\\hline
& & \\
\begin{math}
\hat{\pi}_{t} =\left\{
\begin{array}
[c]{ll}
\frac{pr^{L}-\left( 1-p\right) \mu}{\psi-p}, & \text{if }\hat{M}_{t}
=-r^{L}\in\left( 0,\mu\frac{\psi-1}{\psi}\right) \\
-\mu & \text{if }\hat{M}_{t}=0
\end{array}
\right.
\end{math} & & (PIR, ZIR) \\ & & \\\hline
& & \\
\begin{math}
\hat{\pi}_{t} =\left\{
\begin{array}
[c]{ll}%
-r^{L}-\mu, & \text{if }\hat{M}_{t}=-r^{L}\in\left( 0,\mu\frac{\psi-1}{\psi
}\right) \\
-\mu, & \text{if }\hat{M}_{t}=0.
\end{array}
\right.
\end{math} & & (ZIR, ZIR) \\ & & \\\hline
& & \\
\end{tabular}
\end{table}
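The four equilibria in Table \ref{tab: equilibria simple} can be verified by direct enumeration. The following minimal sketch fixes, for each configuration $J$, the branch of the $\max$ operator in every state, solves the resulting linear system, and keeps a candidate only if it respects the implied inequalities. Parameter values are illustrative, and the temporary-state shock value $-r^{L}$ is a positive number, following the sign convention of the table.
\begin{verbatim}
# Sketch: enumerate all 2^k candidate MSV solutions of the scalar model
# E_t[pi_{t+1}] = max(-mu, psi*pi_t) + E_t[M_{t+1}].
import itertools
import numpy as np

def msv_solutions(P, m, psi, mu):
    k = len(m)
    Em = P @ m                              # E[M_{t+1} | state i]
    sols = []
    for J in itertools.product([False, True], repeat=k):
        A = P - psi * np.diag(J)            # slack states: R_i = psi*pi_i
        b = Em - mu * (~np.array(J))        # binding states: R_i = -mu
        try:
            pi = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue
        if all((psi * pi[i] >= -mu) == J[i] for i in range(k)):
            sols.append((J, pi))
    return sols

p, psi, mu = 0.8, 1.5, 2 * np.log(1.005)    # illustrative values
P = np.array([[p, 1 - p], [0.0, 1.0]])      # absorbing two-state chain
m = np.array([0.003, 0.0])                  # -r^L in the temporary state
for J, pi in msv_solutions(P, m, psi, mu):
    print(J, pi)
\end{verbatim}
With these values the shock lies inside both intervals of the table, so all four equilibria should be recovered; the same routine applies to any $k$-state chain.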
To demonstrate the problem, we consider the case where $\hat{M}_t$ is described by a $k$-state \cite{Rouwenhorst1995} approximation of an AR(1) process $\hat{M}_t = \rho\hat{M}_{t-1} + \sigma_\varepsilon \varepsilon_t$, with parameter values $\rho=0.9$ and $\sigma_\varepsilon=0.0007$, and we set $\psi = 1.5$ and $\mu=2\log(1.005)$ following the calibration in ACS, so that the CC condition fails. Figure \ref{fig: mult sol k=3} reports the 8 MSV solutions corresponding to $k=3$. We notice that the first solution is at the ZIR for all values of the shock, while the last solution is the opposite, always at PIR. Unsurprisingly, those two solutions are linear in $\hat{M}_{t}.$ The remaining 6 solutions are non-linear and half of them are non-monotonic in $\hat{M}_{t}$. In Appendix
\ref{app: s: kfigures}, we present results for $k>3$, showing that the number of MSV solutions increases with $k$. In all cases, two solutions correspond to ZIR-only and PIR-only equilibria. For any $k$, it is possible to impose restrictions on the support of the distribution of the shocks such that we are always at ZIR or always at PIR. \openbox
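For completeness, a minimal sketch of the \cite{Rouwenhorst1995} discretization used here is given below; the resulting chain $(P,m)$ can be passed directly to the enumeration routine sketched after Table \ref{tab: equilibria simple}. The implementation follows the standard symmetric recursion.
\begin{verbatim}
# Sketch: Rouwenhorst (1995) discretization of an AR(1) process
# M_t = rho*M_{t-1} + sig_eps*eps_t.
import numpy as np

def rouwenhorst(k, rho, sig_eps):
    p = (1.0 + rho) / 2.0
    P = np.array([[p, 1 - p], [1 - p, p]])
    for m in range(3, k + 1):               # standard recursion
        Pn = np.zeros((m, m))
        Pn[:m-1, :m-1] += p * P
        Pn[:m-1, 1:] += (1 - p) * P
        Pn[1:, :m-1] += (1 - p) * P
        Pn[1:, 1:] += p * P
        Pn[1:-1, :] /= 2.0                  # interior rows counted twice
        P = Pn
    sig_y = sig_eps / np.sqrt(1 - rho ** 2) # unconditional st. dev.
    grid = np.linspace(-sig_y * np.sqrt(k - 1),
                       sig_y * np.sqrt(k - 1), k)
    return P, grid

P, m_grid = rouwenhorst(3, rho=0.9, sig_eps=0.0007)
\end{verbatim}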
\begin{figure}[htb]
\centering\includegraphics{MultipleSolutions.png}
\caption{MSV solutions of the model $\hat{\pi}_{t+1|t} = \max(-\mu,\psi \hat{\pi}_t)+\hat{M}_{t+1|t}$, when $\mu=0.01$, $\psi=1.5$ and $\hat{M}_t$ follows a 3-state Markov chain with mean 0, conditional st.~dev.~$\sigma = 0.0007$, and autocorrelation $\rho=0.9$.\label{fig: mult sol k=3}}
\end{figure}
\paragraph{\nameref{ex: NK} continued}
Consider again the NK model with $u_t=\nu_t=\psi_x=0$ and a Rouwenhorst approximation to an AR(1) process for the AD shock $\epsilon_t$.
Figure \ref{fig: MR-NK} plots the decision rules for $\hat{\pi}_t$, $\hat{x}_t$ and $\hat{R}_t$, as functions of $\epsilon_t$,
associated with various MSV equilibria of the model for $k=20,$ using the parameter values from \cite{MertensRavn2014}, i.e., the left panel uses ``MR2014 CD'' and the right panel ``MR2014 FD''.\footnote{The support of the distribution of the shock $\epsilon_t$ has been carefully chosen to avoid incoherency. In this case, because the distribution of the shock is symmetric, the necessary support restrictions can be imposed by manipulating the standard deviation of the shock, denoted by $\sigma_\epsilon$. Larger values yield more dispersion, so when $\sigma_\epsilon$ gets sufficiently large, there are no MSV equilibria.}
The graphs on the left report the four MSV equilibria arising from the calibration in which $\rho\sigma\lambda>\left( 1-\rho\right) \left(
1-\rho\beta\right)$. We notice that two of those equilibria have $\hat{\pi}_t, \hat{x}_t$ respond positively to the AD shock, while the other two equilibria are exactly the opposite. The graphs on the right report the case $\rho\sigma\lambda<\left( 1-\rho\right) \left(
1-\rho\beta\right)$, where now only two MSV equilibria have been found, and they both have the property that the policy functions are increasing in the AD shock. Moreover, changing the parameters of the structural model or of the shocks yields a different number of solutions. For example, with a low variance of the shock and $\sigma=4$, the ``MR2014 CD'' case in Table \ref{t: calibrations} delivers 8 solutions.
\hfill\openbox
\begin{figure}%
\centering
\begin{subfigure}{0.495\textwidth}
\includegraphics[width=\textwidth]{figMR-Ben.png}
\caption{Confidence-driven}
\label{fig: MR-Ben}
\end{subfigure}
\begin{subfigure}{0.495\textwidth}
\includegraphics[width=\textwidth]{figMR-Egg.png}
\caption{Fundamental-driven}
\label{fig: MR-Egg}
\end{subfigure}
\caption{Decision Rules associated with different MSV solutions (equilibria) of the NK model, using parameters from \citetpos{MertensRavn2014} calibration shown in Table \ref{t: calibrations} and $k=20$. The figures on the left correspond to $\rho=0.7$ with $\sigma_\epsilon = 0.0011$, while on the right $\rho=0.4$ and $\sigma_\epsilon=0.0014$.}
\label{fig: MR-NK}
\end{figure}
\section{Conclusions\label{s: conclusions}}
This paper highlights a seemingly overlooked problem in rational expectation models with an occasionally binding constraint. The constraint might make the model incoherent or incomplete.
We propose a method for checking the coherency and completeness (CC) condition, that is, the existence and uniqueness of equilibria in piecewise linear DSGE models with a ZLB constraint based on \cite{GourierouxLaffontMonfort1980}. When applied to the typical NK model, this method shows that the CC condition generally requires a violation of the Taylor principle. Hence, the case typically analysed in the literature is either incoherent or incomplete. This raises two main issues future research should focus on when solving or estimating these models.
First, we have shown that there must be restrictions on the distribution of the shocks to ensure the existence of equilibria. These support restrictions are time-varying and, in the case of multiple shocks, their support is not rectangular, i.e., the shocks cannot be independent of each other. This raises a first question regarding the interpretation of these shocks: in what sense are they structural if they cannot be independent? A second related question regards the estimation of these models: what are the implications of these restrictions for the correct form of the likelihood?
Second, we have shown there are typically (many) more equilibria than currently reported in the literature. These findings raise questions about the properties of existing numerical solution algorithms, for example, which solutions among the many possible ones do they find and why.
We have not found a computationally feasible way to analyse coherency and completeness in forward-looking models in which the variables are continuously distributed. This problem is hard because of the infinite dimensionality induced by the rational expectations operator, and the fact that the computations required for discrete approximations are NP-hard.
This is an important challenge for future research.
Finally, our results highlight the role of unconventional monetary policy in ensuring coherency. An incoherent model cannot be an operational model of the economy. Hence, the need for support restrictions can be positively interpreted as an implicit need for a different policy reaction to catastrophic shocks to ensure the economy does not collapse. This suggests a direction for amending the basic NK model, by modelling monetary policy in such a way that, conditional on bad shocks hitting the economy and conventional interest rate policy being constrained by the ZLB, the use of unconventional monetary policies offers a route to solving the incoherency problem. This route is not only promising, but, even more importantly, realistic: central banks engaged in massive operations through unconventional monetary policy measures (beyond the standard interest rate policy) in response to the large negative shocks causing the Great Financial Crisis and the COVID-19 pandemic. Future work should also study whether it is possible to design fiscal policy to ensure equilibrium existence \citep[e.g.,][]{NakataSchmidt2020}. A takeaway from our paper, therefore, is to warn that considering a ZLB constraint on monetary policy requires an explicit modelling of unconventional monetary policies (or some other mechanisms) to avoid incoherency.
\bibliographystyle{elsarticle-harv}
Atmospheric turbulence has long been a source of distortion in open air imaging applications. Spatial and temporal fluctuations in the physical properties of the atmosphere (e.g., temperature, humidity) give rise to variability in the index of refraction, thereby altering the optical signal. In imaging applications, the end result is degraded image or video data while for free space optical communications, the turbulence corrupts the link causing a higher bit error rate. Efforts to mitigate these errors have been hindered to a large extent by the lack of practical, accurate models for the solution of the wave equation in the presence of atmospheric turbulence.
Here we demonstrate a new solution based on minimization of kinetic energy using optimal transport. The resulting transport model is efficient to compute, invertible, and can be estimated from easily obtained intensity measurements (i.e. images). Moreover, the model is not phenomenological (e.g., convolution \cite{Bertozzi:13}, optical flow \cite{Mitzel:09}) but is shown to be consistent with the physics associated with the image formation. For this reason, we hypothesize the transport-based approach to image modeling might offer improved predictions of imagery collected in a turbulent medium. Indeed, the model is demonstrated here to provide a more accurate, parsimonious model of sequences of turbulence-corrupted imagery than does optical flow \cite{Mitzel:09}. The solution has potentially important implications for any application involving propagation of an electro-magnetic field through a medium with varying refractive index.
\section{Atmospheric propagation as a Transport Problem \label{sec:Maxwell}}
The goal of this section is to describe the propagation of an electromagnetic (EM) field through the atmosphere as a transport problem. As will be shown, transport models are consistent with the problem physics and admit practical, computational solutions.
The starting point for the study of propagating EM radiation is Maxwell's equations for isotropic materials \cite{Ryan:91}
\begin{subequations}
\begin{align}
\nabla\times{\bf E}({\bf x})&=i\omega \mu_0 {\bf H}({\bf x})
\label{eqn:max1}\\
\nabla\times {\bf H}({\bf x})&=-i\omega \epsilon_0{\bm\epsilon}({\bf x}){\bf E}({\bf x})
\label{eqn:max2}\\
\mu_0\nabla\cdot {\bf H}({\bf x})&=0
\label{eqn:max3}\\
\epsilon_0\nabla\cdot \left({\bm\epsilon}({\bf x}) {\bf E}({\bf x})\right)&=0
\label{eqn:max4}
\end{align}
\end{subequations}
where ${\bf E}({\bf x})$ is the electric field intensity vector in (V/m), ${\bf H}({\bf x})$ is the magnetic field intensity vector in (A/m), ${\bf B}({\bf x}) = \mu_0 {\bf H}({\bf x})$ is the magnetic field induction vector in (Wb/m$^2$), ${\bf D}({\bf x}) = {\bm\epsilon({\bf x})}{\bf E}({\bf x})$ is the electric field displacement vector in (C/m$^2$), and $\omega$ is the angular frequency. The radiation is assumed to be mono-chromatic, with time dependence governed by the angular frequency $\omega$ \cite{Ryan:91}. The vector ${\bf x}$ specifies the full 3-dimensional space ${\bf x}\equiv (x_1,x_2,z)$, where $z$ is the direction of propagation.
The quantity ${\bm\epsilon}({\bf x})$ is the relative complex permittivity of the atmosphere while the constants $\epsilon_0,~\mu_0$ are the vacuum dielectric constant and free space (vacuum) permeability respectively. Note also that in forming Eqn. (\ref{eqn:max3}), it is assumed that the relative permeability of the atmosphere is unity, which allows us to further relate the relative complex permittivity to the complex index of refraction via \cite{Ryan:91}
\begin{align}
{\bm\epsilon}({\bf x})&\equiv \left[n({\bf x})+i\kappa({\bf x})\right]^2
\label{eqn:index}
\end{align}
where $n({\bf x})$ is the usual refractive index and $\kappa({\bf x})$ is referred to as the extinction coefficient. In what follows it is assumed that the latter is negligible so that we may write ${\bm\epsilon}({\bf x})=n({\bf x})^2$.
Taking the curl of Eqn. (\ref{eqn:max1}) and then substituting in Eqns. (\ref{eqn:max2}-\ref{eqn:max4}) yields the vector wave equation
\begin{align}
\nabla^2{\bf E}({\bf x})+\nabla\left({\bf E}({\bf x})\cdot\frac{\nabla {\bm\epsilon}({\bf x})}{\bm\epsilon({\bf x})}\right)+k_0^2{\bm\epsilon}({\bf x}){\bf E}({\bf x})&=0
\label{eqn:Helmholtz}
\end{align}
where $k_0=\sqrt{\epsilon_0\mu_0}\,\omega=2\pi/\lambda_0$ is the wavenumber and $\lambda_0$ the associated wavelength. The second term in this expression is a direct result of applying the constitutive relationship, Eqn. (\ref{eqn:max4}), giving
\begin{align}
\nabla\cdot {\bf E}({\bf x})&=-{\bf E}({\bf x})\cdot \frac{\nabla {\bm\epsilon}({\bf x})}{{\bm\epsilon}({\bf x})}.
\label{eqn:constraint}
\end{align}
However, this term is typically neglected as it is assumed that either the atmosphere is homogeneous, or that the relative permittivity is nearly unity ($\frac{\nabla {\bm\epsilon}({\bf x})}{{\bm\epsilon}({\bf x})}=\nabla\log({\bm\epsilon}({\bf x}))\approx 0$ if ${\bm\epsilon}({\bf x})\approx 1$). Indeed, we will show later that if one considers the constitutive equation (\ref{eqn:constraint}), the resulting contribution to the transport-based framework is higher-order in terms of the turbulence-induced perturbations to the refractive index.
\subsection{Transforming the Parabolic Wave Equation}
Leaving out the second term in (\ref{eqn:Helmholtz}), the parabolic wave equation can be derived by replacing the (vector) electric field with the scalar field
\begin{align}
{\bf E}({\bf x})=\Psi(\vec{x},z)e^{ik_0 z}
\label{eqn:wave}
\end{align}
where $\vec{x}=(x_1,x_2)$ defines the plane in the direction transverse to propagation. This representation assumes a wave propagating horizontally (in the $\hat{z}$ direction) in air with wavenumber $k_0$. Note that in making this substitution we are replacing a vector with a complex scalar. This substitution (scalar for a vector) is mathematically justified, since the Laplacian operator in (\ref{eqn:Helmholtz}) is separable in terms of the field components. More importantly, Eqn. (\ref{eqn:wave}) is justified on physical grounds by noting that for a propagating EM plane wave, the electric field vector is confined to the transverse plane (negligible polarization in the ``$z$'' direction). The complex scalar amplitude $\Psi(\vec{x},z)$ is therefore sufficient to capture both the magnitude and polarization direction (i.e., phase angle in the transverse plane associated with real and imaginary parts) of the electric field (see \cite{Saleh:91}, section 5.4). Note also that had we not assumed a negligible extinction coefficient there would be a real portion of the exponent in (\ref{eqn:wave}) governing the decay of the solution. In short, for the application of interest, the vector-to-scalar wavefield transformation is both mathematically convenient and physically meaningful (see e.g., \cite{Born:99}, section 8.4).
Substituting (\ref{eqn:wave}) into (\ref{eqn:Helmholtz}) gives
\begin{align}
i2k_0\frac{\partial{\Psi}(\vec{x},z)}{\partial z}+\nabla_{X}^2\Psi(\vec{x},z)+k_0^2\eta(\vec{x},z)\Psi(\vec{x},z)=0.
\label{eqn:parabolic}
\end{align}
where the operator $\nabla_X^2$ denotes the Laplacian operating in the two transverse coordinates and $\eta(\vec{x},z)\equiv n^2(\vec{x},z)-1$ is the deviation in refractive index from unity. Additionally, we have neglected the second derivative in $z$ (the slowly varying envelope approximation), as is commonly done, i.e., $|\partial_{zz}\Psi(\vec{x},z)|\ll 2k_0|\partial_z{\Psi}(\vec{x},z)|$.
It is important to note that this expression possesses a strong similarity to the Schr\"{o}dinger equation where the last term in (\ref{eqn:parabolic}) plays the role of a potential function \cite{Flatte:86}.
Based on this similarity, one can pursue similar analysis techniques. Here, we use the so-called Madelung transformation \cite{Madelung:27,Tsang:06,Vadasz:16} (also known as the Luneberg-Kline transformation \cite{Ott:97}) and represent the field as $\Psi(\vec{x},z)=\sqrt{\rho(\vec{x},z)}\exp(i\phi(\vec{x},z)/2)$ where it is assumed $\rho(\vec{x},z)\ge 0$. Combined with appropriate re-scaling of the spatial coordinates (see Appendix \ref{sec:appendixA}), Eqn. (\ref{eqn:parabolic}) becomes
\begin{subequations}
\begin{align}
&\frac{\partial \rho(\vec{x},z)}{\partial z}+\nabla_X\cdot \left(\rho(\vec{x},z) v(\vec{x},z)\vphantom{A_1^2}\right)=0
\label{eqn:continuity2}\\
&\frac{\partial v(\vec{x},z)}{\partial z}+(v(\vec{x},z)\cdot\nabla_X)v(\vec{x},z)=2\nabla_X \gamma(\eta(\vec{x},z)).
\label{eqn:momentum2}
\end{align}
\end{subequations}
where $v(\vec{x},z)\equiv \nabla_X \phi(\vec{x},z)$ and the function
\begin{align}
\gamma(\eta(\vec{x},z))&\equiv -\nabla^2_X\log(n^2(\vec{x},z))+(\nabla_X\log(n^2(\vec{x},z)))^2\nonumber \\
&~~~~~~~+\eta(\vec{x},z)
\label{eqn:eta}
\end{align}
is solely a function of the refractive index. The first two terms in (\ref{eqn:eta}) arise due to the ``diffraction term'' \cite{Gureyev:95} (alternatively the ``quantum potential'' \cite{Bohm:84}), which naturally appears as $\frac{\nabla_X^2(\rho(\vec{x},z)^{1/2})}{\rho(\vec{x},z)^{1/2}}$ in (\ref{eqn:momentum2}), but can be re-cast in terms of the refractive index using the constitutive relationship (\ref{eqn:constraint}) (see Appendix \ref{sec:appendixB}).
{\it Thus the parabolic wave equation can be readily interpreted as the familiar continuity and momentum equations from fluid mechanics where the phase gradient $v(\vec{x},z)=\nabla_X \phi(\vec{x},z)$ plays the role of the velocity, the ``density'' $\rho(\vec{x},z)=\Psi(\vec{x},z)\Psi(\vec{x},z)^*$ is the image intensity, and the refractive index creates the potential function $2\gamma(\eta(\vec{x},z))$.}
Now note that Eqn. (\ref{eqn:momentum2}) could also be written solely in terms of the phase variable (see Appendix \ref{sec:appendixA}) as the familiar Hamilton-Jacobi equation, or in fluid mechanics terminology, the unsteady Bernoulli equation
\begin{align}
\frac{\partial\phi(\vec{x},z)}{\partial z}+\frac{1}{2}(\nabla_X\phi(\vec{x},z))^2=2\gamma(\eta(\vec{x},z)).
\label{eqn:momentum3}
\end{align}
Moreover, for small perturbations to the index $\eta\ll 1$ the approximation $\log(1+\delta)\approx \delta$ for $\delta\ll 1$ means we could alternatively have written $\gamma(\eta(\vec{x},z))\approx -\nabla^2_X\eta(\vec{x},z)+(\nabla_X\eta(\vec{x},z))^2+\eta(\vec{x},z)$. We can therefore neglect the first two terms so that $\gamma(\eta(\vec{x},z))\approx \eta(\vec{x},z)$. Based on the genesis of these terms (discussion surrounding Eqn. \ref{eqn:eta}), this approximation is tantamount to the assumption that $\nabla^2_X\rho(\vec{x},z)^{1/2}/\rho(\vec{x},z)^{1/2}\ll 1$, one which is often made in optics \cite{Saleh:91},~\cite{Gureyev:95}.
We will therefore seek an approach to modeling images that is consistent with the physics described by Eqns (\ref{eqn:continuity2},~\ref{eqn:momentum2} \&~\ref{eqn:momentum3}). First, however, we briefly discuss some existing solutions.
\subsection{Prior art}
Some researchers have attempted to solve Eqn. (\ref{eqn:parabolic}) directly via numerical methods (see e.g., \cite{Kuttler:91}). Such methods are known to be computationally intensive \cite{Wheeler:85}, thereby leading to approximate methods (see e.g., \cite{Leland:94}), or by instead focusing only on the statistical properties of the solution (see e.g., Fannjiang and Solna \cite{Fannjiang:05}). None of these approaches are suitable for modeling sequences of images.
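For reference, the standard direct approach is split-step Fourier integration of Eqn. (\ref{eqn:parabolic}); a minimal one-dimensional sketch is given below. The step sizes, wavelength, and index perturbation are illustrative only, and the index screen is frozen in $z$ for brevity.
\begin{verbatim}
# Sketch: split-step Fourier integration of the parabolic equation, 1-D.
import numpy as np

N, L = 512, 0.2                                 # grid points, aperture (m)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular spatial frequency
k0 = 2 * np.pi / 1.55e-6                        # wavenumber (m^-1)
dz, nsteps = 1.0, 100                           # z step (m), number of steps
eta = 1e-6 * np.exp(-(x / 0.05) ** 2)           # toy index perturbation

psi = np.exp(-(x / 0.02) ** 2).astype(complex)  # initial beam profile
half_phase = np.exp(1j * k0 * eta * dz / 4)     # half potential step
diffract = np.exp(-1j * kx ** 2 * dz / (2 * k0))
for _ in range(nsteps):                         # Strang splitting
    psi = half_phase * psi
    psi = np.fft.ifft(diffract * np.fft.fft(psi))
    psi = half_phase * psi

rho, phi = np.abs(psi) ** 2, np.angle(psi)      # Madelung variables
\end{verbatim}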
The ``transport'' form of Eqn. (\ref{eqn:parabolic}) has been leveraged by other research in optics, perhaps most notably as a means of phase retrieval under the heading of ``Transport Intensity Equation'' (TIE) approaches \cite{Gureyev:95},~\cite{Petruccelli:13}. The focus in the TIE method is on (\ref{eqn:continuity2}) as it is assumed that intensity measurements are made over short propagation distances such that (\ref{eqn:momentum2}) can be ignored \cite{Petruccelli:13}, an assumption we cannot make in imaging.
Related applications have used the same basic Madelung transformation followed by the ``Wentzel-Kramers-Brillouin'' (WKB) approximation (high frequency approximation whereby one equates terms of common wavenumber) to analyze equations of the form (\ref{eqn:parabolic}) \cite{Benamou:99}. In the context of the Schr\"{o}dinger equation, WKB analysis also yields the system of equations (\ref{eqn:continuity2}) and (\ref{eqn:momentum3}) (see e.g. \cite{Liu:06} \cite{Jin:11}). A common solution is the method of characteristics, a Lagrangian approach that numerically integrates the spatial coordinates of the phase front (e.g., rays) forward in time (see e.g., \cite{Nazarathy:82},~\cite{Moussa:03}). The main challenges are the problem size (each ray is integrated separately), and the associated numerical errors \cite{Jin:11}. Methods that rely on a fixed grid (so-called Eulerian methods), can overcome the problem size and resolution issues, but tend to suffer from multi-valued solutions arising due to the nonlinearity in (\ref{eqn:momentum3}) \cite{Liu:06} which require other approximations and numerical procedures to alleviate \cite{Benamou:04}, (see also \cite{Blanc:00} and the references therein). Inversion of these numerical methods is similarly challenging.
Moreover, in the context of image propagation the WKB analysis is equivalent to geometric optics, where the first term in (\ref{eqn:momentum3}) is neglected \cite{Saleh:91}\cite{Benamou:99}.
Thus, the WKB approximation does not actually solve the paraxial wave equation, a point that was recently highlighted by Potvin \cite{Potvin:15}. In this work it is important to retain (and solve) the full expression, Eqn. (\ref{eqn:momentum3}), as this allows us to formally connect solutions of the parabolic wave equation to optimal transport theory in Section (\ref{sec:transport}).
Due to the deficiencies of these physics-based models, the typical approach in image processing is to pursue phenomenological models that are practical, yet preserve certain features of the physical process. To this end, by far the most popular approaches to modeling turbulence-corrupted images are convolution and optical flow; both have seen use in turbulence mitigation. A recent discussion of deconvolution methods applied to this problem can be found in \cite{Zhu:13} while an optical flow implementation of turbulence mitigation was explored in \cite{Mao:12}. In section (\ref{sec:test}) we will, in fact, compare our physics-based model to an optical flow model in terms of their respective abilities to predict turbulence-corrupted images.
In section (\ref{sec:transport}) we will derive a solution that is both practical {\it and} consistent with the problem physics by making the connection to optimal transport theory.
By doing so, we can leverage the tremendous progress in optimal transport \cite{Gustavoreview:16} and develop a fast, accurate solution that works for very large problem sizes (e.g., Mega-pixel images), does not require time-marching, and is easily invertible (a pre-requisite for several applications).
\subsection{Model Interpretation}
Before proceeding to the solution, it is helpful to first consider the interpretation of the model (\ref{eqn:continuity2},~\ref{eqn:momentum2}). Figure (\ref{fig:scheme}) depicts an example EM field propagating through an atmosphere governed by a varying index of refraction, quantified by the index perturbations $\eta(\vec{x},z)$. Note that the geometry of the wave propagation here allows us to view the $z$ dimension as time and thus we are able to exchange $z$ for $t$.
\begin{figure}[tb]
\centerline{
\begin{tabular}{c}
\includegraphics[scale=0.375]{Schematic}
\end{tabular}
}
\caption{Illustration of the transport problem. Intensity is transported in the transverse plane as the associated EM field moves through space from $z=0$ to $z=Z$. The transport model described here assumes the intensity is being transported along constant velocity paths, i.e., straight lines. Each point on the source image is therefore mapped to a point on the corrupted image by a linear path. The transverse displacement is denoted $u(\vec{x}_Z,Z)$; an expression for this displacement and its relation to the model (\ref{eqn:continuity2},~\ref{eqn:momentum2}) are given below.}
\label{fig:scheme}
\end{figure}
The structure of Eqns. (\ref{eqn:continuity2}-\ref{eqn:momentum3}) allows us to interpret the movement of an image through space as a transport problem that can be solved using recently developed tools (as will be shown in the next section). The original image intensity $\rho(\vec{x},0)$ located at $\vec{x}$ is moved in directions defined by the phase gradient in the transverse plane. The directions can be different at each transverse location and will change as $z$ (alternatively time) progresses. The changes in direction are due to variations in the refractive index.
For example, in the absence of turbulence or other index fluctuations, the right hand side of Eqn. (\ref{eqn:momentum2}) disappears and the momentum equation becomes simply $Dv(\vec{x},z)/Dz=0$ where $D(\cdot)/Dz$ denotes the ``total derivative''. Thus, in a homogeneous medium, and recalling the equivalence between $z$ and $t$, Eqn. (\ref{eqn:momentum2}) suggests there will be no transport in the transverse direction. This makes sense as our (initially) paraxial rays are not experiencing refraction in this case, hence no intensity is being moved in the transverse plane. Moreover, because the right-hand side is a function of the transverse index {\it gradient}, this statement also holds in the case that the refractive index is varying in $z$ only. The phase will change with $z$ in this case (by Eqn. \ref{eqn:momentum3}), but the intensity will still move from source to destination in horizontal, straight lines (i.e., $Dv(\vec{x},z)/Dz$ is still 0).
Transport therefore occurs when a transverse index gradient causes refraction, at which point the intensity moves in the transverse plane along directions dictated by $\nabla_X\phi(\vec{x},z)$. To illustrate, Figure (\ref{fig:path}) shows an image of a single point being transported in the transverse plane as time progresses. The direction of propagation does not appear explicitly in the lower figure but rather is implicit in defining the transport path. In this example, the index of refraction clearly possesses a series of steps in its transverse gradient, thereby causing the point to move in the transverse plane (absent such a gradient no apparent transverse motion would occur). Assuming we can only observe the first and last images, we are using the constant velocity model $u(\vec{x}_Z,Z)/Z\approx v(\vec{x}_Z,Z)$ where $u(\vec{x}_Z,Z)$ denotes the displacement experienced by the point as it moves from location $\vec{x}_0$ to $\vec{x}_Z$.
\begin{figure}[tb]
\centerline{
\begin{tabular}{c}
\includegraphics[scale=0.375]{path}
\end{tabular}
}
\caption{In the transport modeling approach, one can think of the motion as occurring only in the transverse plane (lower plot) with the direction of propagation implicitly included as a time coordinate. In this example, a single point is being perturbed by a series of step changes in refractive index. Assuming only the first and last images are available, this approach is modeling the transport as constant velocity, linear motion between those two images. The quality of this approximation will clearly depend on the strength of the index fluctuations and the distance $Z$ between the images used in creating the model. As $Z\rightarrow 0$ or $\nabla_X \eta(\vec{x},z)=0$ the model is exact. }
\label{fig:path}
\end{figure}
As implied by the figure, this model will approach the true velocity as $Z\rightarrow 0$. We now address the question of how to obtain the model from observed data.
\section{Solutions via Optimal Transport \label{sec:transport}}
In this section we will demonstrate how to solve for both $\rho(\vec{x},z)$ and $v(\vec{x},z)$ for $z=0\cdots Z$ given a single pair of images $\rho(\vec{x},0),~\rho(\vec{x},Z)$ and absent information about the refractive index profile. The solution is unique under the stated assumptions, computationally efficient and invertible, and can be estimated from intensity measurements (i.e., images) rendering it practically useful. The resulting model can 1) be used to understand and predict the effects of turbulence on the imagery and 2) be inverted so that given an image, $\rho(\vec{x},Z)$, we can solve for $\rho(\vec{x},0)$.
To see how, we first define the kinetic energy associated with moving image intensity over the interval $z\in[0,Z]$ and corresponding time interval $t\in[0,T]$
\begin{align}
\mathcal{A}\equiv Z\int_{\R^2}\int_0^Z \rho(\vec{x},z)|v(\vec{x},z)|^2dzd\vec{x}.
\label{eqn:action}
\end{align}
In continuum mechanics this quantity is often referred to as the {\it action} associated with a non-dissipative dynamical system without external forces or potentials \cite{Meirovitch:97}. Now, of course, there is a potential function associated with this problem corresponding to the last term in (\ref{eqn:parabolic}) and given by $V(\vec{x},z)=2\gamma(\eta(\vec{x},z))$. However, given the modest influence of the potential on the transport (recall $\eta(\vec{x},z)\ll 1$), we neglect this term in forming the action. The consequences of this decision are discussed in what follows, along with results that justify this assumption (see Section \ref{sec:test}).
The principle of action minimization is a familiar one and has been used to derive the equations of motion for many dynamical systems, including Eqns. (\ref{eqn:continuity2},~\ref{eqn:momentum2}). In fact, it has recently been shown that minimization of the specific action (\ref{eqn:action}) given the constraint (\ref{eqn:continuity2}) (intensity is conserved), yields precisely (\ref{eqn:momentum2}) along with the requirement that $v(\vec{x},z)=\nabla_X\phi(\vec{x},z)$ \cite{Dejan:16}, a relationship that came about naturally in our derivation of Eqn. (\ref{eqn:momentum2}). It is therefore appropriate to study (\ref{eqn:action}) in formulating solutions to the parabolic wave equation (equivalently, Eqns \ref{eqn:continuity2} and \ref{eqn:momentum2}) for the case where index fluctuations are small.
Making explicit the analogy between the system (\ref{eqn:continuity2},~\ref{eqn:momentum2}) and the associated action (\ref{eqn:action}) allows us to leverage ``optimal transport'' theory and the associated computational tools to solve for $\rho(\vec{x},z),~v(\vec{x},z)$. The theory of optimal transport has in fact shown that there is only one solution to equation (\ref{eqn:continuity2}) that minimizes (\ref{eqn:action}) and possesses endpoints $\rho(\vec{x},0)$ and $\rho(\vec{x},Z)$ \cite{Villani:08},~\cite{Gustavoreview:16}.
To develop this connection more fully, we take the Lagrangian perspective of the fluid system (\ref{eqn:continuity2},~\ref{eqn:momentum2}). In this view the coordinates defining the transverse plane, $\vec{x}$, are no longer fixed, but change according to the system dynamics. With this in mind, we label the coordinates over which the image is defined according to their location along the direction of propagation, e.g. $\vec{x}_z$ is the support of the image at $z$. The dynamic coordinates are defined by the {\it Lagrangian flow map}, $\vec{x}_z\equiv f(\vec{x}_0,z)$ which evolves the starting coordinates $\vec{x}_0$ forward in space to location $z$. This also means that $\dot{f}(\vec{x}_0,z)=v(f(\vec{x}_0,z),z)$ is the velocity \cite{Brenier:89}.
Returning to the continuity equation (\ref{eqn:continuity2}), we can see this is nothing more than a statement of total intensity conservation. That is to say $\int \rho(\vec{x}_z,z)d\vec{x}_z=\int \rho(\vec{x}_0,0)d\vec{x}_0$. This relationship can be re-written in terms of our previously defined mapping as
\begin{align}
\det(J_f(\vec{x}_0,z))\rho(\vec{x}_z,z)=\rho(\vec{x}_0,0)
\label{eqn:transform}
\end{align}
where $J_f(\vec{x}_0,z)$ denotes the Jacobian of $f(\vec{x}_0,z)$ (see \cite{Brenier:89}, \cite{Gustavoreview:16} or \cite{Dejan:16}) (note that in writing Eqn. \ref{eqn:transform} there is an implicit assumption that the coordinate transformation is smooth). Thus, knowledge of the Lagrangian flow map and its time rate of change are sufficient to define our solution.
Indeed, recent works have demonstrated that one can obtain the unique flow map so that the resulting intensity and velocity fields are consistent with minimization of (\ref{eqn:action}). Specifically, it has been shown that the minimization
\begin{align}
d_p(0,Z)^2&=\inf_{f}\int_{\R^2}\|f(\vec{x}_0,Z)-\vec{x}_0\|^2\rho(\vec{x}_0,0)d\vec{x}\nonumber \\
&=\min_{v} \mathcal{A},
\label{eqn:Kanto}
\end{align}
subject to the constraints imposed by the continuity equation (\ref{eqn:continuity2}), produces a coordinate transformation $f(\vec{x}_0,Z)$ that can be used to solve (\ref{eqn:momentum2}) \cite{Brenier:00},~\cite{Dejan:16}. Note that the displacements being minimized, $u(\vec{x}_Z)\equiv f(\vec{x}_0,Z)-\vec{x}_0$, are in the transverse direction only.
In deriving the relationship (\ref{eqn:Kanto}) it can also be shown that the minimizing solutions possess constant velocity which, in Lagrangian coordinates, is simply $u(\vec{x}_Z)/Z$. Put another way, the turbulence-induced perturbations captured in the image pair $\rho(\vec{x},0),~\rho(\vec{x},Z)$ are modeled as growing linearly as the image moves from $z=0$ to $z=Z$.
This also means we can linearly interpolate the displacement coordinates $f(\vec{x}_0,z)=(1-z/Z)\vec{x}_0+\frac{z}{Z} f(\vec{x}_0,Z)$ to obtain the image at {\it any} point in time via Eqn. (\ref{eqn:transform}). This is consistent with our earlier assertion that, in the absence of index fluctuations, light moves in straight lines. Finally, because the velocity (which is constant in $z$) must be expressed as a phase gradient \cite{Dejan:16}, we have
\begin{align}
v(\vec{x}_z,z)&=(f(\vec{x}_0,Z)-\vec{x}_0)/Z=\nabla_X\phi(\vec{x}_z,z)
\label{eqn:interp2}
\end{align}
thereby completing the solution to (\ref{eqn:continuity2},~\ref{eqn:momentum2}). Note, the phase function in (\ref{eqn:interp2}) is the same as that used in defining the complex field amplitude in (\ref{eqn:parabolic}).
{\it Provided that we accept the physical principle of action minimization we can indeed solve (\ref{eqn:continuity2},~\ref{eqn:momentum2}) and, by extension (\ref{eqn:parabolic}), given a single pair of clean/corrupted images and a means of solving (\ref{eqn:Kanto}). The solution is the coordinate transformation $f(\vec{x}_0,z)$ from which we can obtain the image intensity via (\ref{eqn:transform}) and the velocity via (\ref{eqn:interp2})}. This solution is exact if the index perturbations are zero; in the event that the index is fluctuating, the constant velocity solutions are approximating a wandering path with a straight line (see again Fig. \ref{fig:scheme} and Fig. \ref{fig:path}).
What's more, as reviewed in \cite{Gustavoreview:16}, numerous numerical methods for solving (\ref{eqn:Kanto}) have emerged in recent years and are readily available. The model is simple to invert, handles very large problem sizes, does not require time-marching, and most importantly, is true to the physics of the problem.
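To make the recipe concrete, the sketch below solves the problem in one transverse dimension, where the optimal map has a closed form, $f=F_{Z}^{-1}\circ F_{0}$, with $F_{0},F_{Z}$ the cumulative distributions of the (normalized) intensities; two-dimensional images require the numerical solvers reviewed in \cite{Gustavoreview:16}. Displacement interpolation then gives the intensity at any intermediate $z$ via (\ref{eqn:transform}) and (\ref{eqn:interp2}). The intensity profiles below are synthetic.
\begin{verbatim}
# Sketch: 1-D optimal transport between two intensity profiles and
# displacement interpolation along straight (constant velocity) paths.
import numpy as np

x = np.linspace(0.0, 1.0, 1000)
dx = x[1] - x[0]
rho0 = np.exp(-((x - 0.35) / 0.05) ** 2)        # "clean" intensity
rhoZ = np.exp(-((x - 0.60) / 0.08) ** 2)        # "corrupted" intensity
rho0 /= np.trapz(rho0, x)
rhoZ /= np.trapz(rhoZ, x)

F0 = np.cumsum(rho0); F0 /= F0[-1]              # cumulative distributions
FZ = np.cumsum(rhoZ); FZ /= FZ[-1]
f = np.interp(F0, FZ, x)                        # optimal map: x_0 -> x_Z

def intensity_at(z, Z=1.0, bins=200):
    fz = (1 - z / Z) * x + (z / Z) * f          # linear path interpolation
    mass, edges = np.histogram(fz, bins=bins, range=(0.0, 1.0),
                               weights=rho0 * dx)
    return mass / (edges[1] - edges[0])         # pushed-forward intensity
\end{verbatim}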
In the following section we will demonstrate the efficacy of this modeling approach and draw comparisons to traditional ``optical flow'' methods.
\section{Testing the Model \label{sec:test}}
In this section we test the applicability of the optimal transport-based model for imaging under turbulence developed above using both simulated and real imagery. Because the model is consistent with the problem physics, we hypothesize it will perform well relative to phenomenological models.
\subsection{Simulation}
To demonstrate the validity of our model, we check whether Eqn. \eqref{eqn:interp2} holds in a simulated experiment. By testing whether, under turbulence, intensity travels along straight (constant velocity) paths, we can indirectly assess whether optical flow solutions (all of which occur in straight paths) are compatible with the turbulence phenomenon.
We consider an experiment whereby an image is passed through several ``phase screens'' in order to mimic the effects of the spatially varying refractive index \cite{Lane:92}.
\begin{figure}[tb]
\centerline{
\begin{tabular}{ccc}
\includegraphics[scale=0.25]{RayGrowth1} \\
\includegraphics[scale=0.25]{RayGrowth2}
\end{tabular}
}
\caption{(Top) Simulated propagation of a large number of rays through a pristine atmosphere. The rays diverge linearly in time, consistent with our assumed action, Eqn. (\ref{eqn:action}). (Bottom) As the rays move through a turbulent atmosphere, simulated using 100 phase screens, they fluctuate slightly, blurring the resulting image. Nonetheless, the motion is still clearly dominated by kinetic energy, with the variations in refractive index causing small changes to the motion.}
\label{fig:justify}
\end{figure}
A numerical simulation of this method is shown in Figure (\ref{fig:justify}) in order to demonstrate how a ray-optics description of the EM field is influenced by the turbulence. The upper plot shows a number of different optics rays propagating through a pristine (non-turbulent) atmosphere. As expected the rays move in perfectly straight lines, thereby implying a constant velocity solution consistent with the action given by \eqref{eqn:action}. The lower plot shows the rays moving through a turbulent atmosphere as realized using 100 evenly spaced phase screens, designed to mimic the atmospheric properties of Kolmogorov turbulence. While the rays clearly fluctuate over the path length, those fluctuations are minor relative to the main, linear trend. Thus, we are capturing the turbulence-induced perturbations between the clean and corrupted image, but are modeling them as growing linearly over time in the transverse direction. We conclude, then, that in an approximate sense the deviations from a linear path are mostly local in time, in accordance with the result predicted from the optimal transport model expressed in Eqn. \eqref{eqn:interp2}.
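A minimal version of this experiment is sketched below: paraxial rays accumulate small random angle kicks at each screen and travel in straight lines between screens. For brevity the kicks are i.i.d.\ Gaussian, a simplification of the Kolmogorov screens used to produce Figure (\ref{fig:justify}), and all parameter values are illustrative.
\begin{verbatim}
# Sketch: rays propagating through random phase screens.
import numpy as np

rng = np.random.default_rng(1)
n_rays, n_screens = 200, 100
Z, kick_std = 1000.0, 1e-6                  # path length (m), rms kick (rad)
dz = Z / n_screens

xpos = np.zeros(n_rays)                     # transverse ray positions
angle = np.linspace(-1e-4, 1e-4, n_rays)    # initial divergence (rad)
path = [xpos.copy()]
for _ in range(n_screens):
    xpos = xpos + angle * dz                # straight line between screens
    angle = angle + kick_std * rng.standard_normal(n_rays)  # screen kick
    path.append(xpos.copy())
path = np.asarray(path)                     # (n_screens + 1, n_rays)
\end{verbatim}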
\subsection{Modeling turbulence in image time series}
In this section we analyze video data collected through a turbulent atmosphere and compare different modeling approaches with respect to their ability to describe the observed imagery. A frame from such a video is shown in Figure \ref{fig:video_frame}. The video shows a static scene, imaged through a turbulent atmosphere, and thus contains the effects of noise, diffraction, and turbulence. As is commonly done, the models are compared in terms of 1) the error in the description and 2) the number of terms required of the description. These are the two fundamental ingredients to all ``model selection'' methods we are aware of (see e.g., \cite{Burnham:98}).
\begin{figure}[tb]
\center
\includegraphics[scale=0.45]{turb_video.png}
\caption{Frame from video of a static scene imaged under turbulence due to atmospheric changes.}
\label{fig:video_frame}
\end{figure}
Using the transport model described above, the underlying assumption is that (neglecting the effects of noise) the difference between two frames can be characterized by photon transport due to turbulence. Thus, from Eqn. \eqref{eqn:transform}, we hypothesize that $\det(J_f(\vec{x}_0,z))\rho(f(\vec{x}_0),z)=\rho(\vec{x}_0,0)$ where $\rho(\vec{x}_0,0)$ now represents the first frame of the movie, and $\rho(\vec{x}_0,z)$ is assumed to be the frame at time $t = z$. Taking the first frame as a reference, we seek to recover the information contained in the first frame from any other arbitrary frame using $f$ computed with an optimal transport code as described in \cite{Gustavo:16} that takes as input two images and outputs $f$ such that $\det(J_f(\vec{x}_0,z))\rho(f(\vec{x}_0),z)=\rho(\vec{x}_0,0)$ while simultaneously minimizing the action expressed in Eqn. \eqref{eqn:Kanto}. For comparison purposes we also utilize an optical flow method \cite{Mitzel:09} for computing $g$ such that $\rho(g(\vec{x}_0),z) \sim \rho(\vec{x}_0,0)$, where the estimation is performed utilizing a regularized least squared error procedure. Results showing the mean squared error (MSE) between each reconstruction (using both transport $\det(J_f(\vec{x}_0,z))\rho(f(\vec{x}_0),z)$ and optical flow $\rho(g(\vec{x}_0),z)$ models) and the reference frame $\rho(\vec{x}_0,0)$ appear in Figure \ref{fig:mse_result}.
The plot shows that the transport model is able to better match frames from the movie, which is an unsurprising result given that there exist infinitely many $f$'s that will satisfy $\det(J_f(\vec{x}_0,z))\rho(f(\vec{x}_0),z)=\rho(\vec{x}_0,0)$ for any two normalized input images, while the same cannot be guaranteed for an optical flow (registration) model $\rho(g(\vec{x}_0),z) \sim \rho(\vec{x}_0,0)$.
\begin{figure}[tb]
\centerline{
\includegraphics[scale=0.325]{mse_total_fixed.png}
}
\caption{Comparison between the optical flow and transport models described in this section. Shown is the mean-square error associated with frame-to-frame reconstruction, showing that, as expected, the transport approach is able to obtain better matches between frames.}
\label{fig:mse_result}
\end{figure}
We then sought to characterize the complexity present in the spatial transformation estimates computed via the transport and optical flow methods. Let $f_z$ correspond to the function that matches frame $z$ to frame $0$, that is $f_z$ is computed such that $\det(J_f(\vec{x}_0,z))\rho(f(\vec{x}_0),z)=\rho(\vec{x}_0,0)$. Similarly, we denote $g_z$ as the spatial transformation that matches $\rho(g(\vec{x}_0),z) \sim \rho(\vec{x}_0,0)$ using the optical flow model. Utilizing standard principal component analysis (PCA) techniques, we decompose the sequence of $f_z$, and respectively $g_z$, as a sum of eigen-functions (bases) computed using the PCA method. PCA is a technique that, given a set of vectors, automatically discovers an ordered basis minimizing the average MSE incurred when the dataset is reconstructed using only a subset of components (basis vectors or functions). For comparison purposes, we also compute the eigen-decomposition of the image intensities for all frames (image space) as well. The percent of total variance captured as a function of the number of eigen-functions used in the reconstruction for all three spaces (transport, optical flow, and image) is shown in Fig. \ref{fig:pca_result} and shows that the transport model appears to be the most parsimonious model of all three.
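A sketch of the computation is given below: the estimated transformations are stacked as rows of a matrix, centered, and the explained variance is read off the singular values. The array \texttt{maps} is a hypothetical stand-in for the estimated $f_z$ (or $g_z$) displacement fields.
\begin{verbatim}
# Sketch: PCA (via SVD) on a stack of estimated transformation fields.
import numpy as np

def explained_variance(maps):
    centered = maps - maps.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    return np.cumsum(var) / var.sum()       # cumulative fraction explained

maps = np.random.default_rng(2).standard_normal((50, 2 * 64 * 64))  # demo
curve = explained_variance(maps)
\end{verbatim}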
\begin{figure}[tb]
\centerline{
\includegraphics[scale=0.5]{pca_fixed.png}
}
\caption{Percent of data set variance (normalized) as a function of the number of principal components used to model the input data in image space, optical flow, and transport models. The result shows that the transport model is the most parsimonious.}
\label{fig:pca_result}
\end{figure}
Finally, we combine the MSE measurements described in Fig. \ref{fig:mse_result} with the PCA-derived parsimony measure displayed in Fig. \ref{fig:pca_result}. More specifically, here we investigate the ability of PCA-based representations of both the transport and optical flow models to reconstruct the original frame $\rho(\vec{x}_0,0)$ as a function of the number of components utilized in estimating their respective transformations. Figure \ref{fig:mse_pca} shows the mean squared error between the original frame and the estimate of both transport and optical flow models, each using the same number of PCA components. In short, it is clear that for a fixed model complexity (a certain fixed number of basis functions used to model the transport or optical flows) the transport model more accurately reconstructs the original frame.
\begin{figure}[tb]
\centerline{
\includegraphics[scale=0.5]{mse_pca_fixed.png}
}
\caption{Mean square error of frame reconstruction of individual frames using both optical flow and transport models, as a function of the number of principal components used in each model, respectively.}
\label{fig:mse_pca}
\end{figure}
\section{Summary \& Discussion \label{sec:discussion}}
We have described a new approach for modeling the effects of turbulence in optical images using the principle of least action. In short, given only a pair of images (clean/corrupted) $\rho(\vec{x},0),~\rho(\vec{x},Z)$, and accepting the principle of least action, we can solve Eqn. (\ref{eqn:Kanto}) and use the resulting map $f(\vec{x}_0,z)$ to obtain both the image intensity via (\ref{eqn:transform}) and phase function via (\ref{eqn:interp2}) at any point along the direction of propagation. In doing so, we have effectively replaced explicit knowledge of the index fluctuations $\eta(\vec{x},z)$ with the physical principle of action minimization and a sample pair of images that have been so influenced. We have further demonstrated that in solving (\ref{eqn:Kanto}) we are approximately solving the parabolic wave equation for an image propagating in turbulent media, Eqn. (\ref{eqn:parabolic}).
The solution is exact as the propagation distance shrinks, or in the case that the refractive index does not possess a transverse gradient. Given knowledge of the refractive index profile, however, one can augment the action (\ref{eqn:action}) and attempt to solve the system exactly, even in this more complicated situation. Alternatively, given a sequence of images along the propagation path (see e.g., Fig. \ref{fig:path}), one could infer a piecewise-constant approximation of the refractive index profile. Each of these extensions represents a potentially fruitful area of research.
We believe the physical model described above could inform a new category of computational imaging methods for overcoming the barrier imposed by turbulence in open air imaging and communications. With regards to image enhancement, current algorithms for removing the effects of turbulence use an image registration-based procedure for spatially aligning (warping) sequential frames in a video segment \cite{Zhu:13,Furhad:16}. Our theory suggests that rather than being aligned, consecutive frames should be morphed instead via transport-based modeling. Moreover, the model linking clean and corrupted images should not be linear (e.g., ``deconvolution'' methods, see again \cite{Zhu:13,Furhad:16}), but should instead involve the inversion of optimal transport.
In yet another application, orbital angular momentum has recently been used to develop free-space optical communication strategies that augment the throughput of existing links \cite{Willner:15}. State-of-the-art methods for decoding the nonlinear effects of turbulent channels involve the use of deep convolutional neural networks \cite{Doster:16}, and hence have a limited bandwidth (e.g., $\sim 1$~kbit/s) due to the high computational cost. The modeling described above can potentially be used to inform more computationally efficient decoding methods.
\appendices
\section{Continuity and Momentum from the Parabolic Wave Equation}\label{sec:appendixA}
The parabolic wave equation is written \cite{Fannjiang:05}
\begin{align}
i2k_0\partial_z \Psi(\vec{x},z)+\nabla_X^2 \Psi(\vec{x},z)+k_0^2\eta(\vec{x},z)\Psi(\vec{x},z)=0
\end{align}
where $k_0$ is the wavenumber, $\eta(\vec{x},z)$ is the perturbation to the refractive index, i.e., $n^2(\vec{x},z)=1+\eta(\vec{x},z)$. The EM field $\Psi(\vec{x},z)$ is in $V/m$ and the notation $\nabla_X^2=\partial^2/\partial_{x_1}^2+\partial^2/\partial_{x_2}^2$ is the Laplacian w.r.t. the transverse coordinates $\vec{x}\equiv (x_1,x_2)$ and $z$ is the direction of propagation. Henceforth we will remove the arguments and simply note that the EM field, magnitude, and phase are all functions of the transverse coordinates $\vec{x}$ and $z$. Now rescale the spatial coordinates by the wavelength so that $z'=\frac{k_0}{2}z$, $x_1'=k_0x_1$ and $x_2'=k_0x_2$ in which case the spatially non-dimensionalized wave equation becomes
\begin{align}
i\partial_{z'} \Psi+\nabla_{X'}^2 \Psi+\eta\Psi=0.
\label{eqn:parabolicA}
\end{align}
To transform this expression we can use the so-called Madelung transformation which sets $\Psi(\vec{x}',z')\equiv \rho(\vec{x}',z')^{1/2}e^{i\phi(\vec{x}',z')/2}$.
For ease of notation we drop the $'$ and state explicitly that we are working with non-dimensional lengths. Form the identity
\begin{align}
\frac{\nabla_X \Psi}{\Psi}&=\frac{\frac{1}{2}\rho^{-1/2}\nabla_X\rho e^{i\phi/2}+i\frac{1}{2}e^{i\phi/2}\rho^{1/2}\nabla_X\phi}{\rho^{1/2}e^{i\phi/2}}\nonumber \\
&=\frac{\nabla_X\rho}{2\rho}+i\frac{1}{2}\nabla_X\phi.
\label{eqn:ident1}
\end{align}
Recognizing that $\rho=\Psi\Psi^*$ and substituting into (\ref{eqn:ident1})
\begin{align}
\frac{\nabla_X\Psi}{\Psi}&=\frac{\nabla_X\left(\Psi\Psi^*\right)}{2\Psi\Psi^*}+\frac{i}{2}\nabla_X\phi\nonumber \\
&=\frac{(\nabla_X\Psi)\Psi^*+(\nabla_X\Psi^*)\Psi}{2\Psi\Psi^*}+\frac{i}{2}\nabla_X\phi\nonumber \\
\frac{\nabla_X\Psi}{2\Psi}-\frac{\nabla_X\Psi^*}{2\Psi^*}&=\frac{i}{2}\nabla_X\phi\nonumber
\end{align}
and then multiplying both sides by $\rho=\Psi\Psi^*$
\begin{align}
(\nabla_X\Psi)\Psi^*-(\nabla_X\Psi^*)\Psi&=i\rho\nabla_X\phi\nonumber
\end{align}
and finally taking the divergence of both sides gives
\begin{align}
(\nabla^2_X\Psi)\Psi^*&+\nabla_X\Psi^*\nabla_X\Psi-(\nabla^2_X\Psi^*)\Psi-\nabla_X\Psi\nabla_X\Psi^*\nonumber \\
&=i\nabla_X\cdot(\rho\nabla_X\phi)\nonumber \\
\nabla_X\cdot(\rho\nabla_X\phi)&=-i[(\nabla^2_X\Psi)\Psi^*-(\nabla^2_X\Psi^*)\Psi].
\label{eqn:ident2}
\end{align}
Now, returning to (\ref{eqn:parabolicA}) we note that the complex conjugate of the EM field similarly satisfies
\begin{align}
-i\partial_{z} \Psi^*+\nabla_X^2 \Psi^*+\eta\Psi^*=0.
\label{eqn:parabolicB}
\end{align}
Multiplying (\ref{eqn:parabolicA}) by $-i\Psi^*$ and (\ref{eqn:parabolicB}) by $i\Psi$ and adding gives
\begin{align}
&\left(\partial_{z} \Psi\right)\Psi^*-i(\nabla_X^2 \Psi)\Psi^*-i\rho \eta=0\nonumber \\
+~~~&\left(\partial_{z} \Psi^*\right)\Psi+i(\nabla_X^2 \Psi^*)\Psi+i\rho \eta=0.\nonumber \\
\hline\nonumber \\
&\partial_{z}\rho-i[(\nabla^2_X\Psi)\Psi^*-(\nabla^2_X\Psi^*)\Psi]=0\nonumber
\end{align}
which can be combined with (\ref{eqn:ident2}) to yield
\begin{align}
\partial_{z}\rho+\nabla_X\cdot(\rho\nabla_X\phi)=0
\end{align}
which, after defining $v=\nabla_X\phi$, is exactly the continuity equation. Note that the velocity is dimensionless as are the distances associated with differentiation. The units are thus dictated solely by the units of $\rho$ which are $V^2/m^2$.
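As a sanity check on this derivation, the identity (\ref{eqn:ident2}), and hence the continuity equation, can be verified symbolically. The following Python sketch (using the sympy package, with the conjugate field written out explicitly rather than obtained via assumptions) is one way to do so.
\begin{verbatim}
import sympy as sp

x1, x2, z = sp.symbols('x1 x2 z', real=True)
rho = sp.Function('rho')(x1, x2, z)
phi = sp.Function('phi')(x1, x2, z)

Psi  = sp.sqrt(rho) * sp.exp( sp.I * phi / 2)  # Madelung form
Psis = sp.sqrt(rho) * sp.exp(-sp.I * phi / 2)  # its conjugate

def lap(f):                                    # transverse Laplacian
    return sp.diff(f, x1, 2) + sp.diff(f, x2, 2)

lhs = sum(sp.diff(rho * sp.diff(phi, v), v) for v in (x1, x2))
rhs = -sp.I * (lap(Psi) * Psis - lap(Psis) * Psi)
print(sp.simplify(lhs - rhs))                  # expect 0
\end{verbatim}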
To obtain the momentum equation one again uses the identities $\Psi=\rho^{1/2}e^{i\phi/2}$, $\rho=\Psi\Psi^*$, $v=\nabla_X\phi$ and substitutes directly into (\ref{eqn:parabolic}). Making note of the previous result (the terms of the continuity equation appear and can therefore be set equal to zero), and using the identity
\begin{align}
\frac{\nabla_X^2\rho}{2\rho}-\frac{(\nabla_X\rho)^2}{4\rho^2}=\frac{\nabla_X^2(\rho^{1/2})}{\rho^{1/2}}
\end{align}
one has
\begin{align}
\partial_z\phi+\frac{1}{2}(\nabla_X\phi)^2=2\frac{\nabla_X^2(\rho^{1/2})}{\rho^{1/2}}+2\eta.
\label{eqn:phase5}
\end{align}
The term $\frac{\nabla_X^2(\rho^{1/2})}{\rho^{1/2}}$ is referred to in optics as the ``diffraction term'' \cite{Gureyev:95}, or in the quantum literature, the ``quantum potential'' \cite{Bohm:84}. This term is typically neglected in optics given certain assumptions about the spatial variability in intensity with respect to a wavelength \cite{Saleh:91},~\cite{Gureyev:95}. We too neglect this term, and in Appendix~\ref{sec:appendixB} provide additional justification for its removal.
Finally, take the spatial gradient $\nabla_X$ of both sides and recognize that $\nabla_X\left[(\nabla_X\phi)^2\right]=\nabla_X(v\cdot v)=2(v\times(\nabla_X\times v))+2(v\cdot\nabla_X)v$. Noting that a necessary and sufficient condition for representing the velocity as the gradient of a potential is $\nabla_X\times v=0$ \cite{Panton:84}, we finally obtain the form (\ref{eqn:momentum2}).
\section{Simplification of the Diffraction Term}\label{sec:appendixB}
In the derivation of the wave equation we excluded the divergence of the electric field on the physical reasoning that the fluctuations in the atmosphere were relatively minor. In what follows, however, we show that the constitutive law given by \eqref{eqn:constraint} can be used to relate the diffraction term in \eqref{eqn:phase5} to the refractive index and, by extension, to better understand the conditions under which this term can be safely neglected.
Returning to the vector description of the electric field, for linearly polarized light we may write $\vec{E}(\vec{x})=\{\rho^{1/2}\cos(\gamma)\hat{x}_1,~\rho^{1/2}\sin(\gamma)\hat{x}_2\}$ where $\gamma$ is the polarization angle, measured relative to the $\hat{x}_1$ axis.
Using this representation for the electric field we can expand the relationship expressed in (\ref{eqn:constraint}) as
\begin{align}
\nabla_X\cdot &\rho^{1/2}\left[\cos\left(\gamma\right)\hat{x}_1+\sin\left(\gamma\right)\hat{x}_2\right]=\nonumber \\
&-\rho^{1/2}\left[\cos\left(\gamma\right)\hat{x}_1+\sin\left(\gamma\right)\hat{x}_2\right]\cdot 2\frac{\nabla_X n}{n}.
\label{eqn:test5}
\end{align}
Expanding the first line gives
\small
\begin{align}
&\nabla_X\cdot \rho^{1/2}\left[\cos\left(\gamma\right)\hat{x}_1+\sin\left(\gamma\right)\hat{x}_2\right]=\nonumber \\
&~~\nabla_X \rho^{1/2}\cdot \left[\cos\left(\gamma\right)\hat{x}_1+\sin\left(\gamma\right)\hat{x}_2\right]\nonumber \\
&~~+\rho^{1/2}\left[-\frac{\partial \gamma}{\partial x_1}\hat{x}_1+\frac{\partial \gamma}{\partial x_2}\hat{x}_2\right]\cdot \left[\sin\left(\gamma\right)\hat{x}_1+\cos\left(\gamma\right)\hat{x}_2\right]
\end{align}
\normalsize
so that the entire expression given by \eqref{eqn:test5} can be written
\begin{align}
&\left\{\frac{\nabla_X\rho^{1/2}}{\rho^{1/2}}+2\frac{\nabla_X n}{n}\right\}\cdot \left[\cos\left(\gamma\right)\hat{x}_1,~\sin\left(\gamma\right)\hat{x}_2\right]\nonumber \\
&+\nabla_X\times \left[-\sin\left(\gamma\right)\hat{x}_1,~\cos\left(\gamma\right)\hat{x}_2\right]=0.
\end{align}
For the expression to hold for an arbitrary angle of polarization (which may be different at every spatial location $\vec{x}$ \cite{Born:99}), the term in braces must vanish. Thus, simplifying the intensity term and rearranging we have
\begin{align}
\frac{\nabla_X \rho}{2\rho}&=-2\frac{\nabla_X n}{n}
\label{eqn:constitagain}
\end{align}
This is a vector equation relating the transverse intensity gradient to the refractive index in the transverse plane. The term involving the curl of $\left[-\sin\left(\gamma\right)\hat{x}_1,~\cos\left(\gamma\right)\hat{x}_2\right]$ points in the direction of propagation, hence it can be set equal to zero.
Now, taking the divergence of both sides of the remaining terms in (\ref{eqn:constitagain}) gives
\begin{align}
\nabla_X\cdot \frac{\nabla_X \rho}{2\rho}&=-2\nabla_X\cdot \frac{\nabla_X n}{n}.
\end{align}
Continuing with the divergence operator we have
\begin{align}
\frac{\nabla_X^2\rho}{2\rho}&-\frac{(\nabla_X \rho)^2}{2\rho^2}=-2\nabla_X\cdot \left(\frac{\nabla_X n}{n}\right)\nonumber \\
\frac{\nabla_X^2\rho}{2\rho}&-\frac{(\nabla_X \rho)^2}{2\rho^2}=-2\left[\frac{\nabla_X^2n}{n}-\frac{(\nabla_X n)^2}{n^2}\right]
\label{eqn:constit3}
\end{align}
The term on the left hand side can be split into three terms, two of which we already know how to combine into what we need. Specifically,
\begin{align}
\frac{\nabla_X^2\rho}{2\rho}-\frac{(\nabla_X \rho)^2}{2\rho^2}&=\frac{\nabla_X^2\rho}{2\rho}-\frac{(\nabla_X \rho)^2}{4\rho^2}
-\frac{(\nabla_X \rho)^2}{4\rho^2}\nonumber \\
&=\frac{\nabla_X^2\rho^{1/2}}{\rho^{1/2}}-\frac{(\nabla_X \rho)^2}{4\rho^2}
\end{align}
in which case \eqref{eqn:constit3} becomes
\begin{align}
\frac{\nabla_X^2\rho^{1/2}}{\rho^{1/2}}&=\frac{(\nabla_X \rho)^2}{4\rho^2}-2\left[\frac{\nabla_X^2n}{n}-\frac{(\nabla_X n)^2}{n^2}\right]
\label{eqn:constit4}
\end{align}
However, by squaring both sides of \eqref{eqn:constitagain} we can replace the first term on the right-hand-side of \eqref{eqn:constit4} so that
\begin{align}
&\frac{\nabla_X^2\rho^{1/2}}{\rho^{1/2}}=-2\frac{\nabla_X^2n}{n}+6\left(\frac{\nabla_Xn}{n}\right)^2\nonumber \\
&=-2\left[\nabla_X\cdot \frac{\nabla_Xn}{n}+\left(\frac{\nabla_Xn}{n}\right)^2\right]+6\left(\frac{\nabla_Xn}{n}\right)^2\nonumber \\
&=-2\nabla_X\cdot \frac{\nabla_Xn}{n}+4\left(\frac{\nabla_Xn}{n}\right)^2\nonumber \\
&=-\nabla^2_X\log(n^2)+(\nabla_X\log(n^2))^2.
\end{align}
Thus, for small perturbations to the index $\eta\ll 1$ the approximation $\log(1+\delta)\approx \delta$ for $\delta\ll 1$ means we could alternatively have written the last line above as $-\nabla^2_X\eta+(\nabla_X\eta)^2$. These terms are clearly higher-order in terms of the index perturbations, hence are properly neglected in the analysis.
\bibliographystyle{IEEEtran}
\section{Introduction}
A table is an ordered arrangement of rows and columns used to present a set of facts about some information~\cite{lewandowsky}. Tables are widely used in research articles, data analysis, newspapers, magazines, invoices and financial documents. They present multiple information points for a large number of items in rows and columns that are easy to perceive and analyze, and they structure the information to provide a visual summary of the most valuable content of a document. It is for this reason that table recognition systems have captured the interest of a large number of researchers, who have made contributions in this domain over the past two decades.
Tables have numerous layouts, which makes it very hard for conventional feature engineering approaches to decode table structures generically. These approaches generally rely on visual features like ruling lines, spacing between different columns, the type of data in the table cells, their relationships with overlapping neighbors, or color-encoded cell blocks. They perform reasonably well on tables of a particular layout or business case but fail to scale across multiple domains.
In recent years, researchers have greatly improved the results of computer vision problems by applying deep learning techniques. Schreiber et al.~\cite{schreiber_icdar17} proposed a deep learning based approach for recognizing rows and columns of tables in document images. Their proposed system employs a semantic segmentation model (FCN-Xs architectures) with custom tweaking of the hyper-features as well as skip pooling to enhance the segmentation results. The major limitation of this method is the way the FCN processes the table. Each stride of an FCN filter maps a portion of the input image pixels to an output pixel. This fails to capture the fact that the rows and columns in a table follow a unique repetitive sequence of in-between spacing and data length, as the information of the next and the previous row-column elements is not taken into account. Also, the receptive field of CNN-based models does not process the entire row or column in a single stride. In this paper, we overcome this limitation by using a sequential modeling approach. Specifically, two bi-directional GRUs are used. One bi-directional GRU identifies the row boundaries while the other identifies the column boundaries. Each bi-directional GRU has its own fully connected layer to classify the input as either a row boundary or a column boundary. Our approach successfully overcomes the limitations of a CNN-based model and provides a data-driven approach towards a general, layout-independent table structure extraction system.
We have benchmarked our system on the publicly available UNLV dataset~\cite{unlv}, where it outperformed the T-Recs~\cite{kieninger_das98, kieninger_icdar01} table structure recognition system. It is to be noted that no part of the UNLV dataset has been used in the training process.
The rest of this paper is organized as follows: Section II covers the related work in the table structure recognition domain. Section III elaborates our proposed methodology, which consists of a pre-processing module and a classification module. Section IV presents the evaluation metrics, while the benchmarking and evaluation of the proposed algorithm are detailed in Section V. Section VI provides concluding remarks and guidelines for future work in this domain.
\section{Related Work}
A substantial amount of work has been done to identify the structure of a table, both using heuristic-based methods and using deep learning. Kieninger et al.~\cite{kieninger99,kieninger_das98,kiegner_icapr99} proposed a system called T-Recs, one of the earliest successful attempts at the table structure extraction problem. The input to this system is the word bounding boxes. These boxes are then grouped into rows and columns using a bottom-up approach by evaluating the vertical and horizontal overlaps between the boxes to form a segmentation graph. The major problem with this approach is that the output depends on a large number of parameter values that are heuristically set. Besides, the algorithm fails if the preceding OCR step does not correctly identify the word bounding boxes (for example, if the character recognizer misses dots and commas in numeric data).
Wang et al.~\cite{wang_04} proposed a data-driven approach similar to the X-Y cut algorithm~\cite{shafait_pami08} that is based on probability optimization technique to solve table structure extraction problem. This statistical algorithm uses probabilities that are derived from a large training corpus. This method also takes into account the distances between adjacent words and it works on single column, double column and mixed column layouts.
Shigarov et al.~\cite{shigarov_16} proposed a method that relies on PDF metadata with information including font and text bounding boxes. The algorithm uses ad-hoc heuristics for recovering table cells from text chunks and ruling lines. The algorithm combines these text chunks into text blocks through a text block recovery algorithm and then uses a threshold to configure the block vertically or horizontally.
Zanibbi et al.~\cite{zanibbi_das_04} presented a survey for table recognition systems in terms of interactions of table models, observations, transformations, and inferences. Their survey answers questions about what and when some decisions are made by table structure recognition systems. Furthermore, this survey outlines the dataset used for the training and evaluation of these systems.
Jianying et al.~\cite{jianying01} proposed a general algorithm for table structure extraction from an already detected table region. In their proposed methodology, they have used hierarchical clustering for column detection. Additionally, the system uses lexical and spatial criteria to classify headers of tables. They have used a directed acyclic attribute graph or DAG for evaluation of table structure extraction.
Wang et al.~\cite{wangt_dar01} proposed an automatic ground truth generation system which can generate a large amount of accurate ground truth for table recognition systems. They use novel background analysis table recognition algorithms and an X-Y cut algorithm for identifying table regions in the document images. This system takes line and word segmentation results as input and outputs table cell detection results.
Kasar et al.~\cite{Kasar_icdar_15} proposed a technique for table structure extraction based on query-patterns. This approach is a client-driven approach in which the client will provide the query pattern based on the location of key fields in the document. The input query pattern is then converted into a relational graph in which the nodes represent the features and the edges represent the spatial relationship between these nodes.
Shamilian et al.~\cite{shamilian} proposed a system that reads layout of the tables in machine printed form. They have provided a graphical user interface (GUI) for users to define contextual rules to identify key fields inside a table. The system can also be manually retargeted to new layouts by the user. This system has been applied to more than 400 distinct tabular layouts.
Schreiber et al.~\cite{schreiber_icdar17} proposed a deep learning based approach for table structure recognition. This system uses a semantic segmentation model with FCN-8 architecture and skip pooling features to detect rows and columns of a table. Additionally, they vertically stretched the table images in order to increase the precision of row detection. Furthermore, they used a CRF to improve the results of the semantic segmentation model. Siddiqui et al.~\cite{siddiqui_decnt} also proposed a deep learning based method based on a Deep Deformable Convolutional Neural Network (CNN) for table detection.
In this paper, we have proposed a novel solution for table structure extraction using a sequential model, assuming that the table has already been detected using an existing algorithm (e.g.,~\cite{gilani_icdar17}). In the proposed methodology, the table images are first pre-processed by applying binarization, noise removal, and morphological transformation. These transformed images are then passed to a bi-directional Gated Recurrent Unit (GRU) recurrent neural network that detects rows and columns in the table.
\section{Proposed Methodology}
The proposed method is divided into three modules: Image pre-processing, a row-column classifier and post-processing. The pre-processing step plays a crucial role in converting the table images containing text to natural images that do not contain textual features. These images are then passed to the classifier that uses rows and columns as time steps to classify each row and column. In the post-processing step, the segmentation space generated by the classifier is parsed to give a single line prediction of rows and columns. This section explains each module in greater detail.
\subsection{Image Pre-processing}
The first and the foremost step is pre-processing the table images. This step plays a preliminary role in converting the raw table images to a simpler form so that the layout or structure of the table is more apparent. The goal of this transformation is to increase the efficiency of our classifier by removing unnecessary detail from the input images.
The images are first cleaned up by removing the ruling lines and other non-text foreground objects. The cleaned image is then run through adaptive binarization~\cite{shafait_binarize08} so that the pixel intensities are uniform. Once the images have been binarized, they are resized to a fixed dimension of $1600 \times 512$ as the neural network is designed to process fixed size inputs.
\begin{figure}[ht]
\centering
\includegraphics[scale = 0.190]{row_network.jpeg}
\caption{Neural Architecture for row classification: Passing a $(1600 \times 512)$ pre-processed image to a bi-directional GRU with an input of size $(1600 \times 1)$ at each timestep. The bi-directional GRU outputs a $(512 \times 2)$ vector which is post-processed to get a single regressed row segmentation boundary.}
\label{row}
\end{figure}
After binarization, three iterations of dilation transform are applied to the resized image using a rectangular kernel. In the case of column detection, the dilation kernel is a vertical dilation filter of dimensions $3 \times 5$ and in the case of row detection, it is a horizontal dilation filter of dimensions $5 \times 3$. These dilation operations join the adjacent rows and columns, which helps the model to pick up the pattern of the row and the column separators. The transformed images are then normalized to have values between 0 and 1 to be fed to the subsequent recurrent neural network.
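A minimal sketch of this pre-processing module in Python with OpenCV is given below; the adaptive-threshold parameters are assumptions and the line-removal step is omitted, while the kernel sizes, iteration count, and target dimensions follow the description above (the $3\times 5$ and $5\times 3$ dimensions are read here as width $\times$ height).
\begin{verbatim}
import cv2
import numpy as np

def preprocess(img_gray, mode="column"):
    # img_gray: 8-bit grayscale page crop containing the table
    binr = cv2.adaptiveThreshold(
        img_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 15, 10)      # parameters assumed
    binr = cv2.resize(binr, (1600, 512))    # width x height
    if mode == "column":
        kernel = np.ones((5, 3), np.uint8)  # vertical (3x5) filter
    else:
        kernel = np.ones((3, 5), np.uint8)  # horizontal (5x3) filter
    dil = cv2.dilate(binr, kernel, iterations=3)
    return dil.astype(np.float32) / 255.0   # normalize to [0, 1]
\end{verbatim}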
\subsection{Model}
This section provides details of the proposed methodology and it is further divided into two parts i) Column Classification ii) Row Classification. These two tasks are not very different by nature yet they require different model organization.
The crux of our approach is to identify the segmentation space between the rows and the columns using recurrent neural networks. Different architectures of recurrent neural networks are proposed in the literature. We have selected Gated Recurrent Unit (GRU)~\cite{schuster, chung14} and Long Short-Term Memory (LSTM) networks~\cite{Kasar_icdar_15, shamilian} for our algorithm because of their ability to incorporate contextual information without the vanishing gradient problem. The results demonstrate (see Tables~\ref{table:column} and~\ref{table:row}) that GRUs outperform the LSTMs by a significant margin for both row and column classification. An analysis of the results showed that the LSTM networks, due to their inherent complexity, tend to overfit on the simpler data. The later sections in this paper will only discuss the approach with GRUs for brevity, as the approach with LSTMs is quite similar.
The bi-directional GRU takes rows and columns as timesteps and uses the information of previous row-column elements to predict future ones. This approach provides a significant improvement over CNN-based models because the memory cells in GRUs learn the pattern of inter-row and inter-column spacing and the sequence of repetition of row-column elements effectively. This approach outperformed the table structure extraction system of Schreiber et al.~\cite{schreiber_icdar17}, based on semantic segmentation, by a significant margin. The architectures for row and column classification are detailed in the following two sections.
\begin{figure}[ht]
\centering
\includegraphics[scale = 0.190]{column_network.jpeg}
\caption{Neural Architecture for column classification: Passing a $(1600 \times 512)$ pre-processed image to a bi-directional GRU with an input of size $(512 \times 1)$ at each timestep. The bi-directional GRU outputs a $(1600 \times 2)$ vector which is post-processed to get a single regressed column segmentation boundary.}
\label{column}
\end{figure}
\subsubsection{Column Classification}
The neural architecture for column recognition classifies each column of the image as either a column or a whitespace between two columns. The images are passed one at a time and each image is treated as a batch of size one, as in stochastic gradient descent (SGD). The pre-processed input image of dimension $1600 \times 512$ within a single batch is split into $1600$ sequences (columns), each consisting of $512$ pixel values. We have used a hidden dimension of size $512$. The two-layer GRU is initialized with hidden dimensions $(4 \times 1 \times 512)$, corresponding to (number of directions $\times$ number of layers) $\times$ batch size $\times$ hidden dimension size.
The GRU processes the image as $1600$ “timesteps”, each timestep corresponding to a column with $512$ input pixel values. At each timestep, the GRU has the information about all the columns to the left and the right (if any) of the current column, as well as the pixel values contained within the current column being evaluated. Using this information, the GRU can learn to identify the gap between the columns as those columns containing mostly white pixels and having two column regions on their left and right sides.
The GRU outputs a tensor of shape $1600 \times 512$ corresponding to sequence length times hidden dimension. This tensor is then passed through a fully connected layer which outputs a $1600 \times 2$ shaped tensor. This output is finally passed through a softmax layer which gives the final output of shape $1600 \times 2$, consisting of binary class probabilities for each of the $1600$ columns.
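A PyTorch sketch of this column branch is shown below. Since a bi-directional GRU concatenates both directions, we assume here a hidden size of $256$ per direction so that the concatenated output matches the $512$-dimensional representation stated above; the final softmax is left to the loss function.
\begin{verbatim}
import torch.nn as nn

class ColumnClassifier(nn.Module):
    def __init__(self, height=512, width=1600, hidden=256):
        super().__init__()
        self.gru = nn.GRU(input_size=height, hidden_size=hidden,
                          num_layers=2, batch_first=True,
                          bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)  # column vs. whitespace

    def forward(self, img):           # img: (1, 512, 1600)
        cols = img.transpose(1, 2)    # (1, 1600, 512): columns as timesteps
        out, _ = self.gru(cols)       # (1, 1600, 2 * hidden)
        return self.fc(out)           # (1, 1600, 2) scores per column
\end{verbatim}
The row branch is obtained analogously by transposing the input and adjusting the dimensions.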
\subsubsection{Row Classification}
The neural architecture for row detection is a transpose of the column classifier and it classifies each row of the image as either a row or a whitespace between two rows. The images are fed one at a time and each image is treated as a batch of size one. The pre-processed input image of dimension $1600 \times 512$ within a single batch is split into $512$ sequences (rows), each consisting of $1600$ pixel values. We have used a hidden dimension of size $1024$. The 2-layer GRU is initialized with hidden dimensions $(4 \times 1 \times 1024)$, corresponding to (number of directions $\times$ number of layers) $\times$ batch size $\times$ hidden dimension size.
In the case of row classification, there are $512$ timesteps with $1600$ inputs each. At each timestep, the GRU has information about all the rows above and below the current row as well as the pixel values within the current row.
The GRU outputs a tensor of shape $512 \times 1600$ corresponding to sequence length $\times$ hidden dimension. This tensor is then passed through a fully connected layer which outputs a $512 \times 2$ shaped tensor. This output is finally passed through a softmax layer which gives an output of shape $512 \times 2$, consisting of binary class probabilities for each of the $512$ rows.
The last step in the classification is parsing the segmentation space predicted by the classifier. We take the midpoint of each predicted whitespace run and drop the leftmost and rightmost predictions in the case of columns, and the top and bottom predictions in the case of rows. This step regresses the output to single-line predictions of the row and column separators.
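One plausible implementation of this parsing step is sketched below, where whitespace runs in the binary per-column (or per-row) predictions are reduced to their midpoints and the outermost runs are dropped.
\begin{verbatim}
import numpy as np

def separator_positions(labels):
    # labels: 1-D array of 0/1 predictions (1 = whitespace)
    padded = np.concatenate(([0], labels, [0]))
    starts = np.flatnonzero(np.diff(padded) == 1)
    ends = np.flatnonzero(np.diff(padded) == -1)
    mids = (starts + ends - 1) // 2      # midpoint of each run
    return mids[1:-1] if len(mids) > 2 else mids
\end{verbatim}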
The complete model architectures for row and column classification are shown in Figures~\ref{row} and~\ref{column}.
\subsection{Training}
We used the Adam optimizer paired with a binary cross entropy loss function to train our models. A typical table image contains more rows and columns than whitespace between them. Initial attempts at training resulted in a model that always predicted a row-column element and failed to detect the whitespace due to this class imbalance problem. To reduce this problem, we applied weighting to our loss function to penalize an incorrectly predicted row-column element only $66\%$ as much as an incorrectly predicted whitespace element.
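In PyTorch terms, one way to realize this weighting (assuming class $0$ is a row-column element and class $1$ is whitespace) is the following sketch, reusing the classifier defined earlier:
\begin{verbatim}
import torch
import torch.nn as nn

model = ColumnClassifier()            # sketched above
weights = torch.tensor([0.66, 1.0])   # content vs. whitespace errors
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)
\end{verbatim}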
The dataset used for training consisted of freely available document images downloaded from various sources. The tables, rows and columns were manually labelled using custom tools. With a fixed learning rate of $0.0005$, we trained the column classifier for $10$ iterations over $323$ images and the row classifier for $35$ iterations over $286$ images.
\section{Performance Measures}
Various researchers have used different evaluation metrics, ranging from simple precision and recall to more complex evaluation algorithms. In this paper, we have used the performance evaluation algorithm described in Shahab et al.~\cite{shahab_das10} to evaluate the performance of our model for two main reasons: i) this metric paints a detailed picture of how the algorithm performs using six different measures; ii) it is a general purpose metric that can be applied to any type of segments such as tables, rows, columns and cells.
The methodology proposed by Shahab et al.~\cite{shahab_das10} starts with numbering the ground truth segments and the detected segments. A correspondence matrix is then created with $m$ rows and $n$ columns, where $m$ is the number of ground truth segments and $n$ is the number of detected segments in an image. The [G\textsubscript{i},S\textsubscript{j}] entry in the matrix represents the number of pixels in the $ i^{th} $ ground truth segment that overlap with the $ j^{th} $ detected segment. $ \abs{G_i} $ represents the total number of pixels in the $i^{th}$ ground truth segment and $\abs{ {S_j} }$ represents the total number of pixels in the $ j^{th} $ detection. Once the correspondence matrix for an image has been created, we can define the following measures:
\begin{figure*}[ht]
\centering
\includegraphics[scale = 0.35]{output.jpeg}
\caption{Results of our proposed table structure extraction approach on a few images from UNLV dataset showing examples of correct, over-segmented and under-segmented detections}
\label{results}
\end{figure*}
\subsection{Correct Detections}
This measure shows the total number of ground truth segments that have a large intersection with a detected segment and the detected segment does not have a significant overlap with any other ground truth segment. That is, for a detected segment $S_j$ and ground truth segments $G_i$:
\vspace{1pt}
\begin{center}
$ \frac{\abs{G_i \cap S_j}}{\abs{G_i}} > 0.9 $ and $ \frac{\abs{G_k \cap S_j}}{\abs{S_j}} < 0.1$ \hspace{4pt} $ \forall k \neq i $
\end{center}
\subsection{Partial Detections}
The total number of ground truth segments that have a significant intersection with a single detected segment. However, the intersection is not large enough to be counted as a correct detection. That is,
\vspace{1pt}
\begin{center}
$ 0.1 < \frac{\abs{G_i \cap S_j}}{\abs{G_i}} < 0.9 $ and $ \frac{\abs{G_i \cap S_k}}{\abs{G_i}} < 0.1$ \hspace{4pt} $ \forall k \neq j $
\end{center}
\subsection{Over segmentation}
The total number of ground truth segments that have a large overlap with two or more detected segments. An over-segmented detection means that multiple detected segments span over a single ground truth. Mathematically,
\vspace{1pt}
\begin{center}
$ 0.1 < \frac{\abs{G_i \cap S_j}}{\abs{G_i}} < 0.9 $
\end{center}
holds for more than one detected segments $S_j$ for a particular $G_i$.
\subsection{Under segmentation}
This is the inverse of over-segmentation i.e. the number of detected segments that have a large intersection with more than one ground truth segment. An under-segmented detection means that a single detection spans over multiple ground truth segments. Mathematically,
\begin{center}
$ 0.1 < \frac{\abs{G_i \cap S_j}}{\abs{S_j}} < 0.9 $
\end{center}
holds for more than one ground truth segment $G_i$ for a particular $S_j$.
\subsection{Missed segments}
These are the number of ground truth segments that do not have a large overlap with any of the detected segments. These are segments that our algorithm should have detected but failed to do so.
\begin{center}
$ \frac{\abs{G_i \cap S_j}}{\abs{G_i}} < 0.1 $ \hspace{4pt} $ \forall j $
\end{center}
\subsection{False Positive Detections}
The inverse of missed segments, these are segments detected by the algorithm that are not actually present in the ground truth, i.e., foreground pixels that the algorithm has mistakenly detected as segments.
\begin{center}
$ \frac{\abs{G_i \cap S_j}}{\abs{S_j}} < 0.1 $ \hspace{4pt} $ \forall i $
\end{center}
Figure~\ref{results} shows our model's output on some sample images from the UNLV dataset, including correct, over-segmented and under-segmented detections.
\section{Experiments and Results}
We have used the publicly available UNLV dataset~\cite{unlv} for the evaluation of our approach. The dataset spans a number of documents of varying layouts and domains including newspapers, research articles, magazines, technical reports etc. There are $2,889$ images in the UNLV dataset~\cite{unlv}, out of which $427$ images contain at least one table. The dataset also includes accompanying, manually drawn ground truths for table boundaries. The ground truth for columns, rows and cells was presented in~\cite{shahab_das10}. Since our work is focused on table structure extraction rather than table detection, we cropped the tables from the images using the UNLV ground truth files, resulting in $557$ tables. We then evaluated our model's outputs against the ground truths provided in~\cite{shahab_das10}. We did not use any image from the UNLV dataset in the training and validation process of our model and thus all the images were unseen by the model.
We have benchmarked our approach against the T-Recs system by Kieninger et al.~\cite{kieninger99}, a non-deep-learning algorithm for table structure extraction. Our approach provided a significant improvement in correct detections, from $40.51\%$ to $55.31\%$ in the case of columns and from $54.98\%$ to $58.45\%$ in the case of rows. On the other hand, the number of partial detections has gone down, which is explained by the higher number of over-segmentations and under-segmentations in our approach as compared to T-Recs (see Tables~\ref{table:column} and~\ref{table:row} for comparison results).
\begin{table*}
\centering
\caption{The results of evaluating our system on 427 binary 300-dpi scanned UNLV dataset pages containing table zones. The following benchmark is for column segmentation.}
\label{table:column}
\begin{tabular}{| l | r | r | r |}
\hline
& \multicolumn{3}{c}{\textbf{Accuracy\%}} \vline \\\cline{2-4}
& & \multicolumn{2}{c}{\textbf{Our Approach}} \vline \\\cline{3-4}
\textbf{Performance Measures} & \textbf{T-Recs} & \textbf{Bi-directional LSTM} & \textbf{Bi-directional GRU} \\ \hline
Correct Detections & 40.51 & 49.05 & 55.31 \\ \hline
Partial Detections & 18.57 & 15.13 & 12.13 \\ \hline
Missed Detections & 13.50 & 6.99 & 3.12 \\\hline
Over Segmented Detections & 13.50 & 18.44 & 12.14 \\ \hline
Under Segmented Detections & 5.11 & 20.55 & 16.75 \\ \hline
False Positive Detections & 0.88 & 1.20 & 0.08 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Results of evaluating our system on 427 binary 300-dpi scanned UNLV dataset pages containing table zones. The following benchmark is for row segmentation.}
\label{table:row}
\begin{tabular}{| l | r | r | r |}
\hline
& \multicolumn{3}{c}{\textbf{Accuracy\%}} \vline \\\cline{2-4}
& & \multicolumn{2}{c}{\textbf{Our Approach}} \vline \\\cline{3-4}
\textbf{Performance Measures} & \textbf{T-Recs} & \textbf{Bi-directional LSTM} & \textbf{Bi-directional GRU} \\ \hline
Correct Detections & 54.98 & 51.62 & 58.45 \\ \hline
Partial Detections & 12.45 & 17.13 & 13.35 \\ \hline
Missed Detections & 10.69 & 8.39 & 2.50 \\\hline
Over Segmented Detections & 6.27 & 4.24 & 8.33 \\ \hline
Under Segmented Detections & 7.70 & 5.30 & 14.67 \\ \hline
False Positive Detections & 0.12 & 0.59 & 0.15 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Comaprison with Schreiber et al.~\cite{schreiber_icdar17} on ICDAR 2013 dataset using the same methods for calculating precision, recall and F1 score as described in Schreiber et al.~\cite{schreiber_icdar17}}
\label{table:schreiber}
\begin{tabular}{| l | r | r |}
\hline
& \multicolumn{2}{c}{\textbf{Accuracy\%}} \vline \\\cline{2-3}
\textbf{Performance Measures} & \textbf{Schreiber et al.} & \textbf{Our Approach} \\ \hline
Precision & 95.93 & 96.92 \\ \hline
Recall & 87.36 & 90.12 \\ \hline
F1 Score & 91.44 & 93.39 \\\hline
\end{tabular}
\end{table*}
Our proposed solution is also compared with Schreiber et al.~\cite{schreiber_icdar17} which is the state-of-the-art deep learning based approach towards table structure recognition. For that purpose, we used the publicly available ICDAR 2013 table competition dataset containing $67$ documents with $238$ pages, since this dataset was used in~\cite{schreiber_icdar17}. The results of this comparison are shown in Table~\ref{table:schreiber}.
The benchmarking results exhibit that our approach outperforms the existing approaches by a significant margin. There is an increase in the overall correct detections and a decrease in segmentation errors and missed detections. From Tables~\ref{table:column} and~\ref{table:row}, GRUs outperform LSTMs because of their simpler architecture, which is less prone to overfitting.
\section{Conclusion}
This paper proposed a novel approach for table structure extraction using GRU based sequential models for deep learning. This approach provides a significant improvement over heuristic algorithms and CNN based models~\cite{schreiber_icdar17}, owing to the powerful representation of the sequence models that capture the repetitive row/column structures in tables. In the future, we plan to extend this work to develop a coherent framework for information extraction from table cells.
\bibliographystyle{ieeetr}
\section{Introduction}
A Brain-Computer Interface (BCI) translates brain signals into messages or commands for an interactive task. This enables a wide range of applications from clinic to industry for both patients and healthy users, such as rehabilitation devices for stroke patients \cite{pichiorri15}, controllable wheelchairs and prostheses \cite{zhang15}, new gaming input devices \cite{coyle2013}, to name a few. Among different brain activity monitoring modalities, noninvasive approaches based on electroencephalography (EEG) use multiple electrodes placed on the scalp surface to record the activity of cerebral cortical neurons \cite{britton16} and are widely used in many BCI studies thanks to their ease of implementation, reduced costs and high availability \cite{nicolas12}. The most popular EEG signals used to control BCI systems are P300 evoked potentials, steady-state visual evoked potentials (SSVEP) and motor imagery (MI), which is the focus of our work. Specifically, MI refers to the imagination of moving certain body parts without actual movement \cite{schuster11}. Different MI tasks result in discriminable patterns observed in the oscillatory activities in the sensorimotor
cortex region of the brain \cite{pfurtscheller99}. Imagination of left hand, right hand, foot and tongue movements are the most investigated MI tasks in the BCI literature \cite{lotte18}.
Handcrafted feature extraction methods coupled with conventional classifiers like Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), Bayesian classifiers, and Nearest Neighbor classifiers have been used in a number of studies for MI task recognition \cite{lotte18}. A widely used approach is to extract and combine band power features from different channel(electrode) signals to capture connectivity patterns among different regions of the sensorimotor cortex and, ultimately, their interaction and engagement with each other. This is thought to play a fundamental role in accomplishing movement imaginations \cite{liu16}. Common spatial patterns (\emph{CSP}) were introduced to this end in \cite{ramoser00} and received a large share of research in the field \cite{blankertz07,lotte10,rivet10,samek13,yger15}, but their effectiveness depended on subject-specific frequency bands. This problem was alleviated by the popular filter bank CSP (\emph{FBCSP}) \cite{ang12} that decomposes the EEG into multiple frequency pass bands prior to spatial filtering, feature selection and classification. This method also won the BCI Competition IV \cite{tangermann12} for 4-class motor imagery recognition (Dataset 2a) and was since used as a reference method for comparison.
Given their effectiveness in other fields \cite{he15,silver16}, deep learning methods, and in particular Convolutional Neural Networks (CNNs)\cite{lecun15}, have the potential to learn both
effective features and classifiers simultaneously from raw EEG
data. Several studies have recently explored deep learning for MI
classification \cite{lu16,schirrmeister17,sturm16,tabar16,lawhern18}. Notably, \cite{schirrmeister17} showed that their \emph{Shallow
ConvNet} (one temporal convolution, one spatial convolution, squaring and mean pooling) could outperform their \emph{Deep ConvNet} (temporal convolution, spatial convolution, then three layers of standard convolution) as well as \emph{FBCSP}. A similar result was achieved by \cite{lawhern18} with \emph{EEGNet}, a compact lightweight network (one temporal convolution, one depthwise convolution, one separable convolution, and a fully connected layer) that compared favorably with \emph{Deep ConvNet} and performed on par with \emph{Shallow ConvNet}. These results indicate that shallow networks having a small number of parameters are beneficial for MI applications that are characterized by very small numbers of training examples because of the difficulty in performing millions or even thousands of mental commands during training sessions.
In this paper we propose \emph{Sinc-EEGNet}, a 4-layer CNN architecture that combines the benefits of both EEG frequency band decomposition of classical methods, such as \emph{FBCSP}, and automatic feature learning and extraction of lightweight CNN models, such as \emph{EEGNet}. In particular, the first convolutional layer of our network is restricted to use parameterized sinc functions that implement band pass filters. The subsequent depthwise and separable convolution layers learn a spatial filter and combine the features from the different frequency bands previously selected, which are then inputted to the final classification layer. An overview of the proposed architecture is shown in Fig. \ref{fig:overview}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/overview.pdf}
\caption{An overview of the proposed \emph{Sinc-EEGNet} architecture.}
\label{fig:overview}
\end{figure*}
\section{Sinc layer}
A standard CNN convolution layer applied to a one-dimensional discrete time-domain signal $s[t]$ performs convolutions with $F$ one-dimensional filters $h_1, ..., h_F$, each having $K$ learnable weights. Conversely, the Sinc layer performs convolutions with $F$ parameterized functions $g_1, ..., g_F$, each implementing a learnable bandpass filter $G$ as the difference between two low-pass filters in the frequency domain:
\begin{equation}\label{eq:rectfilter}
G[f] = rect\left(\frac{f}{2f_2}\right)-rect\left(\frac{f}{2f_1}\right)
\end{equation}
where $f_1$ and $f_2>f_1$ are the learnable low and high cutoff frequencies. Using the inverse Fourier transform, the time-domain filter $g$ is obtained as:
\begin{equation}\label{eq:sincfilter}
g[t] = 2f_2sinc(2\pi f_2 t)-2f_1sinc(2\pi f_1 t)
\end{equation}
where the sinc function is defined as $sinc(x)=sin(x)/x$.
The cutoff frequencies are initialized by sampling from a Gaussian distribution with mean and variance equal to $f_s/4$, where $f_s$ represents the
sampling frequency of the input signal. The constraint $f_2>f_1$ is implemented by using in Eq. \ref{eq:sincfilter} the following cutoff frequencies $f_1^{abs}$ and $f_2^{abs}$:
\begin{equation}\label{eq:f1}
f_1^{abs}=|f_1|
\end{equation}
\begin{equation}\label{eq:f2}
f_2^{abs}=f_1 + |f_2-f_1|.
\end{equation}
Because of the discrete approximation of $g$, the resulting bandpass filter is nonideal and may present ripples in the passband and limited
attenuation in the stopband. To alleviate this problem, we multiply $g$ with the popular Hamming window $w$ \cite{dsp05} defined as:
\begin{equation}\label{eq:hamming}
w[t]=0.54-0.46 \cdot \cos\left(\frac{2\pi t}{L}\right)
\end{equation}
where $L$ is the number of discrete samples used to approximate $g$.
The sinc convolutional layer transforming the input signal $s[t]$ into the band-decomposed output signal $o_1,...,o_F$ is then defined by:
\begin{equation}\label{eq:sinc_conv}
o_i[t]=s[t]*\left(g_i[t]\cdot w[t]\right).
\end{equation}
\section{The Sinc-EEGNet architecture}
The proposed Sinc-EEGNet is a combination and adaptation of the Sinc convolution layer originally proposed by \cite{ravanelli18} for speech recognition with \emph{SincNet}, and of \emph{EEGNet} \cite{lawhern18} as regards the spatial filtering implemented with depthwise convolution. Specifically, the architecture of Sinc-EEGNet (see Fig. \ref{fig:overview} and Table \ref{tab:architecture}) consists of four blocks described as follows:
\begin{enumerate}
\item \emph{Sinc Convolution}. The first block takes in input a signal having $C$ channels and $T$ time samples, and performs convolution with $F_1$ sinc filters having $L$ time samples. Compared to the first standard convolution layer used in other CNN architectures such as \emph{EEGNet}, here the sinc filters are explicitly designed to learn the optimal band decomposition for the MI classification task and, when the CNN is trained with data from a single BCI user, this will reflect the peculiarities of the EEG oscillatory activity of that user. Another advantage is the reduced number of parameters, from $K\times F_1$ of the standard convolution to $2\times F_1$ of the sinc convolution. This also implies faster convergence and better generalization capabilities, especially when using small training sets as in the case of MI applications. Computational efficiency is also improved: since the filters are symmetric, the convolution can be performed on one half of the filter and the result mirrored to the other half.
\item \emph{Depthwise Convolution}. Similarly to \emph{EEGNet} \cite{lawhern18}, we use a Depthwise Convolution layer \cite{chollet17} of size $(C, 1)$ to learn $D$ spatial filters for each of the $F_1$ inputted feature maps across the channel dimension, for a total of $F_2=D\times F_1$ filters. Combined with the first layer that performs optimal band decomposition, this two-step sequence can be considered a `learnable' version of the well known \emph{FBCSP} \cite{ang12} approach.
\item \emph{Separable Convolution}. Similarly to \emph{EEGNet}, we summarize each feature map individually using a Depthwise Convolution of size $(1, 16)$, and then merge the outputs using $F_2$ $(1, 1)$ Pointwise Convolutions. This allows optimal combination of the information within and across feature maps.
\item \emph{Classification}. The last layer is a fully connected layer that receives the flattened features from the previous layer and maps them to 4 decision classes (left hand, right hand, foot, tongue).
\end{enumerate}
At the end of blocks 1-3 we apply Average Pooling of size $(1, 4)$ for dimensionality reduction, Layer Normalization \cite{ba16}, Dropout regularization \cite{srivastava14}, and CELU activation \cite{barron17}.
Layer Normalization, as opposed to Batch Normalization \cite{ioffe15} used in other architectures (\emph{EEGNet}, \emph{Deep ConvNet}, \emph{Shallow ConvNet}), calculates the mean and variance across channels rather than across batches. This is especially useful for BCI datasets characterized by a high number of channels(electrodes) and small batch sizes resulting from the scarcity of training data. As to the CELU activation, it is an improvement over the ELU activation \cite{clevert15} used in other architectures (\emph{EEGNet}, \emph{Deep ConvNet}, \emph{Shallow ConvNet}) since its derivative does not diverge and it contains both the linear transfer function and the ReLU \cite{nair10} activation as special cases.
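To make the layer composition concrete, a minimal PyTorch sketch of the depthwise and separable convolutions of blocks 2--3 is given below, assuming $F_1=32$, $D=2$, $C=22$ and omitting pooling, normalization, activation and dropout for brevity; the sinc layer of block 1 was sketched in the previous section.
\begin{verbatim}
import torch.nn as nn

F1, D, C = 32, 2, 22              # temporal filters, depth, channels
F2 = D * F1
depthwise = nn.Conv2d(F1, D * F1, kernel_size=(C, 1),
                      groups=F1, bias=False)   # spatial filters
separable = nn.Sequential(        # depthwise temporal + pointwise mix
    nn.Conv2d(D * F1, D * F1, kernel_size=(1, 16),
              groups=D * F1, padding="same", bias=False),
    nn.Conv2d(D * F1, F2, kernel_size=(1, 1), bias=False),
)
\end{verbatim}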
\ctable[
caption={Sinc-EEGNet architecture, where $C$ = number of channels, $T$ = number of time points, $L$ = number of sinc samples, $F_1$ =
number of temporal filters, $D$ = number of spatial filters, $F_2$ = number of
pointwise filters, and $N$ = number of classes.},
label = tab:architecture,
width = \columnwidth,
pos = !t,
doinside=\scriptsize]{m{1.0cm}m{3.1cm}m{1.2cm}m{1.0cm}XXm{1.3cm}}
{
}
{ \toprule
Block & Layer & filters & size & params & Output & Activation\\ \toprule
\multirow{9}{*}{1} & Input & & & & $(C,T)$ & \\ \cmidrule{2-7}
& Reshape & & & & $(1, C,T)$ & \\ \cmidrule{2-7}
& Sinc Convolution & $F_1$& $(1,L)$ & $2\times F_1$ & $(F_1, C,T)$ & \\ \cmidrule{2-7}
& Average Pooling & & $(1,4)$ & & $(F_1, C,\frac{T}{4})$ & \\ \cmidrule{2-7}
& Layer Normalization & & & $2\times F_1$ & $(F_1, C,\frac{T}{4})$ & CELU \\ \cmidrule{2-7}
& Dropout & & & & $(F_1, C,\frac{T}{4})$ & \\ \cmidrule{1-7}
\multirow{6}{*}{2} & Depthwise Convolution & $D\times F_1$& $(C,1)$ & $C\times D\times F_1$ & $(D\times F_1, 1,\frac{T}{4})$ & \\ \cmidrule{2-7}
& Average Pooling & & $(1,4)$ & & $(D\times F_1, 1,\frac{T}{16})$ & \\ \cmidrule{2-7}
& Layer Normalization & & & $2\times D\times F_1$ & $(D\times F_1, 1,\frac{T}{16})$ & CELU \\ \cmidrule{2-7}
& Dropout & & & & $(D\times F_1, 1,\frac{T}{16})$ & \\ \cmidrule{1-7}
& Depthwise Convolution & $D\times F_1$& $(1,16)$ & $16\times D\times F_1$ & $(D\times F_1, 1,\frac{T}{16})$ & \\ \cmidrule{2-7}
& Layer Normalization & & & $2\times D\times F_1$ & $(D\times F_1, 1,\frac{T}{16})$ & CELU \\ \cmidrule{2-7}
& Dropout & & & & $(D\times F_1, 1,\frac{T}{16})$ & \\ \cmidrule{2-7}
3 & Pointwise Convolution & $F_2$& $(1,1)$ & $F_2\times(D\times F_1)$ & $(F_2, 1,\frac{T}{16})$ & \\ \cmidrule{2-7}
& Average Pooling & & $(1,4)$ & & $(F_2, 1,\frac{T}{64})$ & \\ \cmidrule{2-7}
& Layer Normalization & & & $2\times F_2$ & $(F_2, 1,\frac{T}{64})$ & CELU \\ \cmidrule{2-7}
& Dropout & & & & $(F_2, 1,\frac{T}{64})$ & \\ \cmidrule{1-7}
\multirow{3}{*}{4} & Flatten & & & & $F_2\times \frac{T}{64}$ & \\ \cmidrule{2-7}
& Fully Connected & & & $N\times F_2\times \frac{T}{64}$ & $N$ & Softmax \\
\bottomrule \\
}
\section{Experiments}
The EEG data used in this study comes from the BCI Competition IV Dataset 2A \cite{tangermann12}. The data consists of four classes of imagined movements of left and right hands, feet and tongue recorded from 9 subjects during two separate sessions, each composed by 288 trials. The EEG data were originally recorded using $C=22$ Ag/AgCl electrodes(channels), sampled at 250 Hz and bandpass filtered between
0.5 and 100 Hz. We applied a further bandpass filtering to suppress frequencies above 64 Hz and resampled the timeseries to 128 Hz as in \cite{lawhern18}. Z-score standardization was used to normalize the signals within each trial.
EEG data were split for training and testing according to three different paradigms:
\begin{enumerate}
\item \emph{Competition-based}. The training and test sets were the same as indicated in the BCI Competition. This allowed us to compare our method with reference methods from the literature that reported their results using the same data split, namely \emph{FBCSP} \cite{ang12}, \emph{Deep ConvNet} \cite{schirrmeister17}, and \emph{Shallow ConvNet} \cite{schirrmeister17}, as well as all other participants in the original challenge.
\item \emph{Within-subject}. For each subject, a dedicated experiment was performed using only data from that subject from the BCI Competition training and test sets.
\item \emph{Cross-subject}. For each subject, a dedicated experiment was performed using only data from other subjects from the BCI Competition training set, and only data from that subject from the BCI Competition test set.
\end{enumerate}
In all the experiments, we performed a four-class classification using accuracy as the summary measure. In the within- and cross-subject experiments, we also trained and tested an \emph{EEGNet} with $F_1=8$ and $D=2$, which was the best performing CNN reported in \cite{lawhern18}. As to our \emph{Sinc-EEGNet}, we chose $D=2$ for a fair comparison with \emph{EEGNet}, but we set $F_1=32$ since our Sinc layer is specifically designed for frequency band decomposition and thus can benefit from learning a wide variety of bandpass filters. This can be seen in Fig. \ref{fig:filters}, which shows the 32 distinct filters learnt by \emph{Sinc-EEGNet} in the competition-based experiment. The number of samples $L$ used to discretize the sinc functions was set to $64$, which resulted from a trade-off between approximation precision and computational complexity.
All the CNNs were trained using backpropagation and Adam optimizer \cite{kingma14} with weight updates that proceeded in batches of $20$ samples for $100$ epochs. The base learning rate was set to $10^{-3}$. Momentum and weight decay were set respectively to $0.9$ and $2\times10^{-2}$. Following \cite{lawhern18}, for the Dropout layers we chose $p=0.5$
for within-subject experiments, and $p=0.25$ for competition-based and cross-subject experiments that used more training data and thus required less regularization. The loss function was categorical cross-entropy.
\ctable[
caption={Comparison of classification accuracies between our method and reference methods on the BCI Competition IV-2A.},
label = tab:results,
pos = !t,
doinside=\scriptsize]{m{4.0cm}m{2.0cm}}
{
}
{ \toprule
Method & Accuracy\\ \toprule
\emph{FBCSP} & $68.0\%$ \\ \cmidrule{1-2}
\emph{Deep ConvNet} & $70.9\%$ \\ \cmidrule{1-2}
\emph{Shallow ConvNet} & $73.7\%$ \\ \cmidrule{1-2}
\emph{Sinc-EEGNet} & $75.39\%$ \\
\bottomrule \\
}
\section{Results}
The comparison between \emph{Sinc-EEGNet} and the reference methods from the literature on the competition-based data split is reported in Table \ref{tab:results}. Remarkably, \emph{Sinc-EEGNet} outperforms all other methods in terms of accuracy and sets a new state of the art on the BCI Competition IV-2A with an accuracy of $75.39\%$, which improves upon \emph{FBCSP} by $7.39\%$. As to the within- and cross-subject experiments, \emph{EEGNet} yielded an average accuracy of $60.99\%$ and $58.75\%$, respectively, and \emph{Sinc-EEGNet} of $70.56\%$ and $58.98\%$, respectively. Also in this case, our method exhibited superior performance, with an improvement of almost $10\%$ accuracy in the more practically adopted within-subject classification.
\section{Conclusions}
In this work we proposed \emph{Sinc-EEGNet}, a lightweight convolutional neural network for EEG-BCI-based motor imagery classification that learns optimal band decomposition and spatial filtering, mimicking the behavior of the well-known \emph{FBCSP} but learning the filters directly from the raw EEG data. Our method outperformed reference methods from the literature, including \emph{FBCSP} and \emph{EEGNet}, on the publicly available BCI Competition IV-2A dataset. To the best of our knowledge, this is the first work that validated the use of learnable bandpass filters in the first layer of a CNN for EEG signal classification. Future work will investigate alternative frequency filters, such as Difference of Gaussian (DoG) filter, that are less subject to discrete approximation issues, and architecture variants that explore different spatial filtering and feature map combination approaches.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/filters.pdf}
\caption{The 32 sinc filters learnt by \emph{Sinc-EEGNet} on the BCI Competition IV Dataset 2A.}
\label{fig:filters}
\end{figure*}
\bibliographystyle{splncs04}
\section{Introduction}
Fuzzy spaces are noncommutative geometries which arise from quantizing certain compact K{\"a}hler manifolds. The most prominent such space is the fuzzy sphere, which was f\/irst constructed by Berezin \cite{Berezin:1974du}. In the original construction, the aim was the same as that of geometric quantization, i.e.\ to provide a general quantization prescription for a particle whose phase space is an arbitrary Poisson manifold. Today, fuzzy spaces attract most interest for dif\/ferent reasons: First, fuzzy spaces appear quite naturally in various contexts in string theory where they replace parts of the classical geometry of the target space with an approximate quantum geo\-met\-ry. Closely related is the observation that fuzzy geometries seem to emerge from the dynamics of matrix models and thus they could be crucial in background independent formulations of theo\-ries of gravity. And f\/inally one can regulate quantum f\/ield theories on K{\"a}hler manifolds by putting the theory on the corresponding Berezin-quantized or fuzzy manifold.
The idea of using fuzzy spaces as regulators for quantum f\/ield theories goes back to the early 1990's \cite{Madore:1991bw,Grosse:1995ar}. This approach is very appealing, as the def\/inition of scalar quantum f\/ield theories on fuzzy spaces is under complete control: All functional integrals are automatically well-def\/ined because the algebra of functions on a fuzzy space is f\/inite dimensional. Taking the large volume limit of the fuzzy space, we can even regulate scalar quantum f\/ield theories on f\/lat spaces and thus try to compete with the lattice approach. The main advantage of fuzzy regularization over the latter is that all the isometries of the original K{\"a}hler manifold survive the quantization procedure.
Particularly nice spaces to use in a fuzzy regularization are the complex projective spaces, as they are the Berezin-quantizable manifolds with the largest possible symmetry groups. Furthermore, their quantization is straightforward and can be done completely in terms of group theory. As usual in a ``good'' quantization, real functions are mapped to hermitian operators on a Hilbert space, which is f\/inite dimensional in Berezin quantization. Real scalar f\/ield theories are therefore simply hermitian matrix models.
The most prominent hermitian matrix models are given by a potential consisting of a trace over a polynomial in the matrix variable. One can therefore switch directly to an eigenvalue formulation. In the case of scalar f\/ield theories on fuzzy spaces, this is not possible because the kinetic term yields a coupling to a number of f\/ixed ``external'' matrices.
A f\/irst attempt at gaining an analytical handle on fuzzy scalar f\/ield theories was made in \cite{Steinacker:2005wj}. A new method to overcome the problem of external matrices was then proposed in
\cite{O'Connor:2007ea}. Here, a~high-temperature expansion of the kinetic term in the partition function was performed and the resulting expressions could be evaluated analytically via group theoretic methods. It was shown that the resulting partition function can be rewritten as the partition function of a multitrace matrix model. This partition function can then be computed analytically for both f\/inite and inf\/inite matrix sizes using, e.g., orthogonal polynomials or the saddle point approximation. For~$\CPP^1$, this computation was performed to second order in the inverse temperature $\beta$ in~\cite{O'Connor:2007ea}. In this paper, we continue this work and generalize the results to third order in $\beta$ and to arbitrary~$\CPP^n$.
One of the motivations for this work is to explain the phase diagram for scalar f\/ield theory on fuzzy $\CPP^1$ which has been obtained via numerical methods in~\cite{GarciaFlores:2005xc}, see also~\cite{Panero:2006bx} for a more detailed study as well as~\cite{Panero:2006cs} for a review and further numerical results. The numerical results suggest that the phase diagram is invariant under a particular multiscaling. We can therefore restrict ourselves to the limit of inf\/inite matrix size, in which we can use the saddle point approximation to compute the partition function of our model.
Further reasons to compute the multitrace matrix model to third order in $\beta$ are the possibility to use this result in a similar study of scalar f\/ield theory on $\FR\times \CPP^1$ as well as our intent to discuss the link to (deformed) integrable hierarchies in future work.
In the analysis of the phase diagram, we will focus our attention on the three lowest-dimensional fuzzy spaces $\CPP^1_F$, $\CPP^2_F$ and $\CPP^3_F$. In the f\/irst case, the goal will be to compare the resulting phase diagram with the numerically obtained one. The quantum f\/ield theory on the second space corresponds in the large volume limit to a scalar quantum f\/ield theory of $\phi^4$-type on $\FR^4$. While admittedly it is not clear what the Lagrangian of the f\/ield theory on $\FR^4$ being regularized actually is, this presents an example of both a well-def\/ined and renormalizable four-dimensional noncommutatively deformed $\phi^4$-theory. The theory on $\CPP^3$ could be interpreted as a regularization of a non-renormalizable f\/ield theory, and one might hope for signs of this in the matrix model.
The paper is structured as follows. In Section~\ref{section2}, we review the construction of fuzzy $\CPP^n$ and scalar f\/ield theory on this noncommutative space. Section~\ref{section3} describes the high-temperature expansion in detail and the results are presented to order $\beta^3$. In Section~\ref{section4}, we analyze the thus obtained multitrace matrix model for the three lowest-dimensional fuzzy $\CPP^n$ and we conclude in Section~\ref{section5}. Conventions, rather technical details and helpful intermediate results are given in the appendix.
\section[Scalar field theory on fuzzy CP**n]{Scalar f\/ield theory on fuzzy $\boldsymbol{\CPP^n}$}\label{section2}
The general mathematical framework containing the quantization of complex projective space which is referred to as {\em fuzzy $\CPP^n$} in the physics literature is known as {\em Berezin--Toeplitz quantization}, see e.g.\ \cite{IuliuLazaroiu:2008pk} and references therein for a detailed discussion. In the case of $\CPP^n$, there is a shortcut to the general constructions of Berezin--Toeplitz quantization which originates from the fact that $\CPP^n$ is the coset space $\sU(n+1)/\sU(1)\times\sU(n)$. We will use this group theoretic approach here, as it has the additional advantage of allowing for simple computations of quantities like spectra of quadratic Casimirs and their eigenspaces, which we will need for our further discussion.
\subsection[Berezin quantization of CP**n]{Berezin quantization of $\boldsymbol{\CPP^n}$}
The Hilbert space $\mathscr{H}_\ell$ which we use in Berezin quantizing $\CPP^n$ is the space of global holomorphic sections of the line bundle $\mathcal{O}(\ell)$ over $\CPP^n$ with $\ell\geq 0$. As a vector space, $\mathscr{H}_\ell$ is spanned by the homogeneous polynomials of degree $\ell$ in the homogeneous coordinates $z^0,\ldots,z^n$ on $\CPP^n$. Recall that $\CPP^n\cong\mathsf{SU}(n+1)/\mathsf{S}(\sU(1)\times\sU(n))$, and $\mathscr{H}_\ell$ forms a representation of $\mathsf{SU}(n+1)$ which is given by the totally symmetrized tensor product of $\ell$ fundamental representations. In terms of Dynkin labels, this representation reads as $(\ell,0,\ldots,0)$ and has dimension
\begin{equation*}
N_{n,\ell}:=\dim(\mathscr{H}_\ell)=\dim(\ell,0,\ldots,0)=\frac{(n+\ell)!}{n!\ell!}.
\end{equation*}
We will f\/ind it convenient to map the polynomials to elements of the $\ell$-particle Hilbert space in the Fock space of $n+1$ harmonic oscillators with creation and annihilation operators satisfying the algebra $[\hat{a}_\alpha,\hat{a}_\beta^\dagger]=\delta_{\alpha\beta}$ and $\hat{a}_\alpha |0\rangle=0$ for $\alpha,\beta=0,\ldots,n$. We thus identify
\begin{equation*}
\mathscr{H}_\ell\cong{\rm span}(\hat{a}^\dagger_{\alpha_1}\cdots\hat{a}^\dagger_{\alpha_\ell}|0\rangle).
\end{equation*}
The Berezin symbol map $\sigma_\ell:\mathsf{End}\,(\mathscr{H}_\ell)\rightarrow \mathcal{C}^\infty(\CPP^n)$ is def\/ined as
\begin{equation*}
\sigma_\ell(\hat{f})(z):=\langle z,\ell|\hat{f}| z,\ell\rangle,
\end{equation*}
where $|z,\ell\rangle$ are the Perelomov coherent states,
\begin{equation*}
|z,\ell\rangle:=\frac{\big(\hat{a}^\dagger_\alpha {\bar{z}}^\alpha\big)^\ell}{\sqrt{\ell!}\,|z|^\ell}|0\rangle.
\end{equation*}
The quantization map is given by the inverse of $\sigma_\ell$ on the set $\Sigma_\ell:=\sigma_\ell(\mathsf{End}\,(\mathscr{H}_\ell))\subsetneq \mathcal{C}^\infty(\CPP^n)$ of quantizable functions. Explicitly, we have
\begin{equation*}
\sigma^{-1}_\ell\left(\frac{z_{\alpha_1}\cdots z_{\alpha_\ell}\, {\bar{z}}_{\beta_1}\cdots {\bar{z}}_{\beta_\ell}}{|z|^{2\ell}}\right)=\frac{1}{\ell!}\, \hat{a}^\dagger_{\alpha_1}\cdots\hat{a}^\dagger_{\alpha_\ell}|0\rangle\langle 0|\hat{a}_{\beta_1}\cdots\hat{a}_{\beta_\ell}.
\end{equation*}
Furthermore, $\sigma^{-1}_\ell(1)=\unit$ and real functions are mapped to hermitian operators in $\mathsf{End}\,(\mathscr{H}_\ell)$.
Note that for $\CPP^1$, the real part of $\Sigma_\ell$ is given by the spherical harmonics with maximal angular momentum $\ell$. In general, the endomorphisms $\mathsf{End}\,(\mathscr{H}_\ell)\cong \Sigma_\ell$ split into irreducible representations of $\mathsf{SU}(n+1)$ according to
\begin{equation*}
\underbrace{\yng(6)}_{\ell}\otimes \underbrace{\overline{\yng(6)}}_{\ell} = \mathbf{1} \oplus n
\left\{\phantom{\yng(2,1,1)}\right.\hspace{-1cm}\underbrace{\yng(2,1,1)}_2 \oplus\, n
\left\{\phantom{\yng(2,1,1)}\right.\hspace{-1cm}\underbrace{\yng(4,2,2)}_4 \oplus \cdots
\end{equation*}
or equivalently, written in terms of Dynkin labels:
\begin{equation*}
(\ell,0,\ldots,0)\otimes\overline{(\ell,0,\ldots,0)}=(\ell,0,\ldots,0)\otimes(0,\ldots,0,\ell)
=\oplus_{m=0}^\ell(m,0,\ldots,0,m).
\end{equation*}
The generators of $\mathfrak{su}(n+1)$ are represented on $\mathsf{End}\,(\mathscr{H}_\ell)$ by the adjoint action of hermitian matrices $L_i$, and we introduce the quadratic Casimir operator according to
\begin{equation*}
C_2 \hat{f}:=[L_i,[L_i,\hat{f}]].
\end{equation*}
The eigenvalues of $C_2$ are positive and given on the irreducible subspace with Dynkin labels $(m,0,\ldots,0,m)$ by\footnote{Note that our conventions for $C_2$ dif\/fer from \cite{O'Connor:2007ea} by a factor of~2.} $2m(m+n)$. The degeneracy of each of these eigenspaces is given by
\begin{equation*}
N_{n,m}^2-N_{n,m-1}^2=\frac{n(2m+n)((m+n-1)!)^2}{(m!)^2(n!)^2}.
\end{equation*}
Because of $C_2(\sigma^{-1}_\ell(f))=\sigma^{-1}_\ell(\Delta f)$, where $f\in\Sigma_\ell$ and $\Delta$ is the Laplace operator on $\CPP^n$, it is justif\/ied to identify $C_2$ with the Laplace operator on fuzzy $\CPP^n$.
The matrices $L_i$ represent the generators of $\mathfrak{su}(n+1)$ and thus satisfy the algebra
\begin{equation*}
[L_i,L_j]=:\di f_{ijk}L_k,
\end{equation*}
where the $f_{ijk}$ are the structure constants of $\mathfrak{su}(n+1)$. We choose the $L_i$ such that
\begin{equation*}
\tr(L_i)=0,\qquad L_i^2=c_L \unit\qquad \mbox{and}\qquad \tr(L_iL_j)=\frac{c_L N_{n,\ell}}{(n+1)^2-1}\delta_{ij}.
\end{equation*}
In the fundamental representation $R=(1,0,\ldots,0)$, they satisfy the Fierz identity
\begin{equation}\label{eq:FierzIdentitysun}
L_i^{\alpha\beta}L_i^{\gamma\delta}=\delta^{\alpha\delta}\delta^{\beta\gamma}
-\frac{1}{n+1}\delta^{\alpha\beta}\delta^{\gamma\delta},
\end{equation}
from which we conclude that
\begin{equation*}
\tr_R(L_iL_i)=(n+1)^2-1\qquad \mbox{and}\qquad \tr_R(L_iL_j)=\delta_{ij}.
\end{equation*}
With the above relation, one readily verif\/ies the following identity for the structure constants:
\begin{equation}\label{eq:StructureConstantsIdentity}
f_{ijk}f_{ijl}=2 (n+1)\delta_{kl}.
\end{equation}
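As an illustration, consider $n=1$: the normalization $\tr(L_iL_j)=\delta_{ij}$ f\/ixes $L_i=\frac{1}{\sqrt{2}}\sigma_i$ in terms of the Pauli matrices, so that $f_{ijk}=\sqrt{2}\,\eps_{ijk}$ and indeed
\begin{equation*}
f_{ijk}f_{ijl}=2\,\eps_{ijk}\eps_{ijl}=4\,\delta_{kl}=2(n+1)\delta_{kl}.
\end{equation*}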
Using the overcompleteness relation for the Perelomov coherent states,
\begin{equation*}
\int \dif{\mu}|z,\ell\rangle\langle z,\ell|=\frac{{\rm vol}(\CPP^n)}{N_{n,\ell}}\,\unit,
\end{equation*}
where $\dd\mu=\frac{\omega^n}{n!}$ is the Liouville measure obtained from the K{\"a}hler form $\omega$ yielding the Fubini--Study metric, one readily deduces a formula for integration: Given a function $f\in \Sigma_\ell$, the integral can be written as a trace over the quantized function $\sigma_\ell^{-1}(f)\in\mathsf{End}\,(\mathscr{H}_\ell)$:
\begin{equation*}
\int \dif{\mu}f=\frac{{\rm vol}(\CPP^n)}{N_{n,\ell}}\tr(\sigma_\ell^{-1}(f)).
\end{equation*}
\subsection[Quantum scalar field theory on CP**nF]{Quantum scalar f\/ield theory on $\boldsymbol{\CPP^n_F}$}
As we are interested in matrix models, it is convenient to switch from the label $\ell$ of our representations to the label $N_{n,\ell}$ and drop the subscript. One should, however, keep in mind that only for $\CPP^1$, there is an $\ell$ for every value of~$N$. In the following, we will represent elements of~$\mathsf{End}\,(\mathscr{H}_\ell)$ by hermitian matrices $\Phi$ of dimension~$N\times N$.
In the previous section, we collected all the necessary results for writing down a scalar f\/ield theory on fuzzy $\CPP^n$. Putting everything together, we arrive at the following action functional\footnote{We implicitly reabsorbed all volume factors by a rescaling of the f\/ield $\Phi$ and the couplings.} on $\mathsf{End}\,(\mathscr{H}_\ell)$:
\begin{equation}\label{eq:ActionFieldTheory}
S[\Phi]:=\tr\left(\Phi C_2\Phi+r\,\Phi^2+g\,\Phi^4\right) =
\tr\left(\Phi[L_i,[L_i,\Phi]]+r\,\Phi^2+g\,\Phi^4\right).
\end{equation}
As we work with hermitian generators $L_i$, the quadratic Casimir operator $C_2$ has positive eigenvalues and for $r\in\FR$ and $g>0$, the action is therefore bounded from below. This, together with the f\/inite dimensionality of $\mathsf{End}\,(\mathscr{H}_\ell)$, enables us to introduce the well-def\/ined functional integral
\begin{equation}\label{eq:PartitionFunction}
\mathcal{Z}:=\int \mathscr{D} \Phi~\de^{-\beta S[\Phi]}:=\int \dif{\mu_D(\Phi)}\de^{-\beta S[\Phi]},
\end{equation}
where $\dd \mu_D(\Phi)$ is the Dyson measure on the set of hermitian matrices of dimension $N\times N$.
Recall that we can diagonalize a hermitian matrix $\Phi$ according to $\Phi=\Omega\Lambda\Omega^\dagger$, where $\Omega\in\sU(N)$ and $\Lambda=\diag(\lambda_1,\ldots,\lambda_N)$ is the diagonal matrix of eigenvalues of $\Phi$. Under this decomposition, the Dyson measure splits into an eigenvalue part and an ``angular'' integration over $\sU(N)$:
\begin{equation*}
\int \dd\mu_D(\Phi)=\int\prod_{i=1}^N\dif{\lambda_i}\Delta^2(\Lambda)\int\dd \mu_H(\Omega),
\end{equation*}
where $\dd\mu_H(\Omega)$ is the Haar measure\footnote{That is the unique measure on $\sU(N)$ which is invariant under left and right group multiplication and norma\-li\-zed according to $\int \dd\mu_H(\Omega)=1$.} and $\Delta(\Lambda)$ is the Vandermonde determinant
\begin{equation*}
\Delta(\Lambda) := \det \big([\lambda_i^{j-1}]_{ij}\big) = \prod_{i>j} (\lambda_i-\lambda_j).
\end{equation*}
In the case of simple hermitian matrix models consisting of traces (and multitraces) over polynomials in $\Phi$, the angular integration is trivial, because $\tr(\Phi^n)=\tr(\Lambda^n)$, and reduces to a~constant volume factor. The remaining integral over the eigenvalues can then be computed by standard methods as e.g.\ the saddle point approximation or orthogonal polynomials. Here, however, the kinetic term contains the f\/ixed external matrices $L_i$ which obstruct a straightforward translation to the eigenvalue picture.
\subsection[The toy models N=n+1 on CP**n]{The toy models $\boldsymbol{N=n+1}$ on $\boldsymbol{\CPP^n}$}\label{sec:toymodel}
In the case $N=n+1$, i.e.\ when $\ell=1$ and $\mathsf{End}\,(\mathscr{H}_1)$ forms the adjoint representation of $\mathfrak{su}(n+1)$, the kinetic term of our model \eqref{eq:ActionFieldTheory} can be evaluated explicitly by using the Fierz identity~\eqref{eq:FierzIdentitysun}. We f\/ind here that
\begin{equation}\label{eq:toymodelKineticTerm}
\tr(\Phi C_2\Phi)= \frac{\tr(K)}{N^3-N}\big(N\tr(\Phi^2)-\tr(\Phi)\tr(\Phi)\big),
\end{equation}
where $\tr(K)$ stands for the sum over the eigenvalues of $C_2$ on $\mathsf{End}\,(\mathscr{H}_1)$. Note that, as necessary, the kinetic term vanishes for $\Phi\sim\unit$. We will use this class of toy models for consistency checks of our computations below.
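As a quick illustration, take $n=1$, i.e.\ $N=2$: the triplet in $\mathsf{End}\,(\mathscr{H}_1)$ has $C_2$-eigenvalue $2\cdot1\cdot(1+1)=4$, so that $\tr(K)=3\cdot 4=12$ and the prefactor in \eqref{eq:toymodelKineticTerm} equals $2$. For the traceless matrix $\Phi=\diag(1,-1)$, which lies in the triplet, both sides of \eqref{eq:toymodelKineticTerm} indeed yield
\begin{equation*}
\tr(\Phi C_2\Phi)=4\tr\big(\Phi^2\big)=8=2\big(2\tr\big(\Phi^2\big)-\tr(\Phi)^2\big).
\end{equation*}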
\section{The high-temperature expansion}\label{section3}
As it does not seem possible to compute the partition function \eqref{eq:PartitionFunction} analytically, we perform a~high-temperature expansion as suggested in~\cite{O'Connor:2007ea}. That is, we separate out the kinetic term in the functional integral and Taylor-expand its exponential, assuming $\beta$ to be small. As $\beta$ is usually inversely proportional to the temperature in statistical mechanics models, this expansion is also known as a high-temperature expansion in the literature. For each of the terms appearing in this expansion, the integral over the angular part of the Dyson measure can be performed -- in principle straightforwardly -- using group theoretic methods. The results can be rewritten in terms of multitrace terms, and, after putting them back into the exponential of the functional integral, one ends up with a multitrace matrix model.
\subsection{Setup of the expansion}
Let us consider our model \eqref{eq:ActionFieldTheory} on fuzzy $\CPP^n_F$ with the dimension of the quantum Hilbert space~$\mathscr{H}_\ell$ being~$N$. The space of quantized functions $\mathsf{End}\,(\mathscr{H}_\ell)$ is spanned by the generators\footnote{Cf.\ Appendix~\ref{appendixA} for our Lie algebra conventions.} $\tau_\mu$, $\mu=1,\ldots,N^2$ of $\mathfrak{u}(N)$. We start by rewriting the kinetic term of the action in the following way:
\begin{equation*}
\tr(\Phi C_2 \Phi) = \tr\left(\Phi[L_i,[L_i,\Phi]]\right) = \tr(\tau_\mu[L_i,[L_i,\tau_\nu]])\,\tr(\Phi\,\tau_\mu)\,\tr(\Phi\,\tau_\nu) =: K_{\mu\nu}\Phi_\mu\Phi_\nu.
\end{equation*}
Because of $C_2\unit_N=0$, we have $K_{\mu\nu}\Phi_\mu\Phi_\nu=K_{mn}\Phi_m\Phi_n$, $m,n=1,\ldots,N^2-1$. The expansion of the kinetic term in the action now reads as
\begin{equation*}
\de^{-\beta\tr(\Phi C_2\Phi)}=1-\beta K_{mn}\Phi_m\Phi_n+\frac{\beta^2}{2}(K_{mn}\Phi_m\Phi_n)^2-\frac{\beta^3}{6}(K_{mn}\Phi_m\Phi_n)^3+\mathcal{O}(\beta^4),
\end{equation*}
and we will restrict our attention in the following to the terms up to order $\mathcal{O}(\beta^3)$.
We want to perform the integral over the $\sU(N)$ part of the Dyson measure, i.e.\ to integrate out the angular degrees of freedom in $\Phi$. For this, we decompose the hermitian matrix $\Phi$ according to $\Phi=\Omega\Lambda\Omega^\dagger$, where $\Omega\in\sU(N)$ and $\Lambda=\diag(\lambda_1,\ldots,\lambda_N)$. The integrals we have to evaluate at order $\mathcal{O}(\beta^k)$ are thus of the form
\begin{equation}\label{eq:HaarIntegrals}
\mathscr{I}_k\ :=\ \int \dif{\mu_H(\Omega)}\prod_{i=1}^k K_{m_in_i}\tr(\Omega\Lambda\Omega^\dagger\tau_{m_i})\tr(\Omega\Lambda\Omega^\dagger\tau_{n_i}),
\end{equation}
where the essential part in index notation is given by
\begin{equation}\label{eq:UNintegrals}
\int \dd\mu_H(\Omega)\, \Omega_{\alpha_1\beta_1}\cdots\Omega_{\alpha_{2k}\beta_{2k}}
\Omega^\dagger_{\gamma_1\delta_1}\cdots\Omega^\dagger_{\gamma_{2k}\delta_{2k}}.
\end{equation}
Various algorithms have been proposed in the literature to compute integrals of the type \eqref{eq:UNintegrals}, cf.\ e.g.~\cite{Creutz:1978ub,Maekawa:1986ec,Aubert:2003ak}. The most involved integral of the form \eqref{eq:UNintegrals} which we are interested in is the one for $k=3$, which is already very dif\/f\/icult to handle by the suggested methods. Fortunately, the integrals \eqref{eq:HaarIntegrals} allow for a further simplif\/ication \cite{O'Connor:2007ea}, which is then accessible via group theoretic methods. Using $\tr(A)\tr(B)=\tr(A\otimes B)$ and $AB\otimes CD=(A\otimes C)(B\otimes D)$, we rewrite~\eqref{eq:HaarIntegrals} according to
\begin{gather*}
\mathscr{I}_k = \int \dd\mu_H(\Omega)\,K_{m_1n_1}\cdots K_{m_kn_k}\\
\phantom{\mathscr{I}_k =}{} \times \tr\left((\Omega\otimes\cdots\otimes\Omega)(\Lambda\otimes\cdots\otimes\Lambda)
(\Omega^\dagger\otimes\cdots\otimes\Omega^\dagger)(\tau_{m_1}\otimes\tau_{n_1}\otimes
\cdots\otimes\tau_{m_k}\otimes\tau_{n_k})\right).
\end{gather*}
The idea presented in \cite{O'Connor:2007ea} is now to use the orthogonality relation of the Haar measure \eqref{eq:OrthogonalityRelation} to evaluate these integrals. We thus have
\begin{equation*}
\mathscr{I}_k=K_{m_1n_1}\cdots K_{m_kn_k}\sum_\rho \frac{1}{\dim(\rho)}\tr_\rho(\Lambda\otimes\cdots\otimes\Lambda)
\tr_\rho(\tau_{m_1}\otimes\tau_{n_1}\otimes\cdots\otimes\tau_{m_k}\otimes\tau_{n_k}),
\end{equation*}
where the sum is taken over the irreducible representations contained in the tensor product of $2k$ fundamental representations of $\mathsf{SU}(N)$. The traces $\tr_\rho$ are taken in the representation $\rho$, and we have
\begin{equation*}
\tr_\rho(\Lambda\otimes\cdots\otimes\Lambda)=\chi_\rho(\Lambda),
\end{equation*}
where $\chi_\rho(\Lambda)$ denotes the character of $\Lambda$ in the representation~$\rho$. Characters of representations of $\mathsf{SU}(N)$ can easily be calculated using e.g.\ the formulas in \cite{Bars:1980yy}. The remaining challenge is therefore to evaluate $\tr_\rho(\tau_{m_1}\otimes\tau_{n_1}\otimes\cdots\otimes\tau_{m_k}\otimes\tau_{n_k})$.
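For example, at $k=1$ the sum runs over the symmetric and antisymmetric parts of the twofold tensor product of fundamental representations, i.e.\ over $(2,0,\ldots,0)$ and $(0,1,0,\ldots,0)$ with dimensions $\frac{N(N+1)}{2}$ and $\frac{N(N-1)}{2}$, whose characters are
\begin{equation*}
\chi_{(2,0,\ldots,0)}(\Lambda)=\tfrac{1}{2}\big(\tr(\Lambda)^2+\tr\big(\Lambda^2\big)\big),\qquad
\chi_{(0,1,0,\ldots,0)}(\Lambda)=\tfrac{1}{2}\big(\tr(\Lambda)^2-\tr\big(\Lambda^2\big)\big).
\end{equation*}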
\subsection[The restricted traces tr rho(.)]{The restricted traces $\boldsymbol{{\rm tr}_\rho(\cdot)}$}
Consider again the generators $\tau_m$ of $\mathfrak{su}(N)$, and denote their matrix components by $\tau_m^{\alpha\beta}$, $\alpha,\beta=1,\ldots,N$. The full trace over the tensor products of matrices in index notation is given by
\begin{equation*}
\tr(\tau_{m_1}\otimes\cdots\otimes\tau_{m_{2k}})=\tau_{m_1}^{\alpha_1\beta_1}\cdots
\tau_{m_{2k}}^{\alpha_{2k}\beta_{2k}}\delta_{\alpha_1\beta_1}\cdots\delta_{\alpha_{2k}\beta_{2k}}.
\end{equation*}
To evaluate the restricted traces $\tr_\rho(\cdot)$, we need to project onto the irreducible representations which we do using projectors $\mathcal{P}^{(i,j)}_{2k}$ constructed from Young symmetrizers. The technical details of the construction of these projectors are given in Appendix~\ref{appendixB}. Explicitly, we let the projec\-tor~$\mathcal{P}^{(i,j)}_{2k}$ act onto the indices $\beta$ appearing in the Kronecker deltas to restrict to a representa\-tion~$\rho^{(i,j)}$:
\begin{equation*}
\tr_{\rho^{(i,j)}}(\tau_{m_1}\otimes\cdots\otimes\tau_{m_{2k}})
=\tau_{m_1}^{\alpha_1\beta_1}\cdots\tau_{m_{2k}}^{\alpha_{2k}\beta_{2k}}
\mathcal{P}^{(i,j)}_{2k}\delta_{\alpha_1\beta_1}\cdots\delta_{\alpha_{2k}\beta_{2k}}.
\end{equation*}
The completeness relation \eqref{eq:ProjectorCompleteness} for the projectors $\mathcal{P}^{(i,j)}_{2k}$ translates into the following completeness relation for the restricted traces:
\begin{equation*}
\sum_{i,j}\tr_{\rho^{(i,j)}}(\tau_{m_1}\otimes\cdots\otimes\tau_{m_{2k}})
=\tr(\tau_{m_1}\otimes\cdots\otimes\tau_{m_{2k}}),
\end{equation*}
which can serve as a f\/irst consistency check of the correctness of the calculated projectors $\mathcal{P}^{(i,j)}_{2k}$. A second test is to verify that each individual restricted trace indeed reduces to the corresponding character when all the $\tau_{m}$ are replaced by $\Lambda$:
\begin{equation*}
\tr_{\rho^{(i,j)}}(\Lambda\otimes\cdots\otimes\Lambda)=\chi_{\rho^{(i,j)}}(\Lambda).
\end{equation*}
Let us now compute the combined sums of the restricted traces for each type of Young tableaux and contract the $\tau_{m}$ with the $K_{mn}$ to simplify the results. That is, we compute the following expressions:
\begin{equation*}
K_{m_1m_2}\cdots K_{m_{2k-1}m_{2k}}\tr_{\rho^{(i,j)}}(\tau_{m_1}\otimes\tau_{m_2}\otimes\cdots
\otimes\tau_{m_{2k-1}}\otimes\tau_{m_{2k}}).
\end{equation*}
For $k=1$ and $k=2$, corresponding to the contributions at orders $\mathcal{O}(\beta)$ and $\mathcal{O}(\beta^2)$, these sums have already been calculated in \cite{O'Connor:2007ea}. They are
\begin{equation*}
\begin{aligned}
\tyng(2)&\ :\ +\frac{1}{2}\tr(K),\\
\tyng(1,1)&\ :\ -\frac{1}{2}\tr(K),
\end{aligned}
\end{equation*}
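Combining these two values with the characters and dimensions of the symmetric and antisymmetric parts of the twofold tensor product, the f\/irst of the integrals \eqref{eq:HaarIntegrals} assembles to
\begin{equation*}
\mathscr{I}_1=\frac{\tr(K)}{2}\left(\frac{\tr(\Lambda)^2+\tr\big(\Lambda^2\big)}{N(N+1)}-\frac{\tr(\Lambda)^2-\tr\big(\Lambda^2\big)}{N(N-1)}\right)
=\frac{\tr(K)}{N^2-1}\tr\big(\Lambda^2\big)-\frac{\tr(K)}{N^3-N}\tr(\Lambda)^2,
\end{equation*}
which is the expression for $\mathscr{I}_1$ given in the next subsection.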
\begin{equation*}
\begin{aligned}
\tyng(4) &\ :\ -\frac{20\tr(K)-(4+N)\tr(K)^2-2N\tr(K^2)}{24N},\\
\tyng(2,2) &\ :\ +\frac{\tr(K)^2+2\tr(K^2)}{6},\\
\tyng(3,1) &\ :\ +\frac{20\tr(K)-(4+N)\tr(K)^2-2N\tr(K^2)}{8N},\\
\tyng(2,1,1) &\ :\ -\frac{20\tr(K)+(N-4)\tr(K)^2+2N\tr(K^2)}{8N},\\
\tyng(1,1,1,1) &\ :\ +\frac{20\tr(K)+(N-4)\tr(K)^2+2N\tr(K^2)}{24N}.
\end{aligned}
\end{equation*}
The lengthy result for $k=3$ is given in Appendix~\ref{appendixC}.
In calculating these results, we used many identities which we will brief\/ly comment on now: First of all, using the Fierz identity for the generators of $\mathfrak{u}(N)$ as well as the relations for the~$L_i$, we compute that for arbitrary $A,B\in\mathfrak{u}(N)$,
\begin{gather}
K_{\mu\nu}\tr(\tau_\mu A)\tr(\tau_\nu B) = 2 c_L \tr(AB)-2\tr(L_iAL_iB),\nonumber\\
K_{\mu\nu}\tr(\tau_\mu A\tau_\nu B) = 2 c_L \tr(A)\tr(B)-2\tr(L_iA)\tr(L_iB).\label{eq:K-resolutions}
\end{gather}
Applying these relations to $\tr(K):=K_{\mu\nu}\tr(\tau_\mu\tau_\nu)$ yields
\begin{equation*}
c_L=\frac{\tr(K)}{2N^2}\qquad \mbox{and}\qquad d_g:=(n+1)^2-1=\frac{\tr(K)^2}{N^2\tr(K^2)-\tr(K)^2}.
\end{equation*}
The identities \eqref{eq:K-resolutions} allow us to successively rewrite expressions involving $K_{\mu\nu}$ in terms of traces over products of the $L_i$, which in turn can be reduced using $L_i^2=c_L\unit_N$ and the identity for the structure constants~\eqref{eq:StructureConstantsIdentity}. Some useful intermediate results are collected in Appendix~\ref{appendixC}.
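To see how such reductions work, one can sum the f\/irst of the identities \eqref{eq:K-resolutions} over an orthonormal basis $A=B=\tau_\sigma$ of $\mathfrak{u}(N)$: the left-hand side becomes $\tr(K)$, while the right-hand side reduces via the Fierz identity for $\mathfrak{u}(N)$ and $\tr(L_i)=0$ to
\begin{equation*}
\tr(K)=2c_L\sum_\sigma\tr(\tau_\sigma\tau_\sigma)-2\sum_\sigma\tr(L_i\tau_\sigma L_i\tau_\sigma)=2c_LN^2-2\tr(L_i)\tr(L_i)=2c_LN^2,
\end{equation*}
reproducing the f\/irst relation above.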
\subsection{The multitrace matrix model}
Combining the reduced traces with the characters in the various representations, we arrive at the following expressions for the $\mathscr{I}_k$:
\begin{gather*}
\mathscr{I}_1=\frac{\tr(K)}{N^2-1}\tr(\Lambda^2)-\frac{\tr(K)}{N^3-N}\tr(\Lambda)^2,\\
\mathscr{I}_2= \frac{10 \tr(K) \left(-2 \left(1+N^2\right)+\tr(K)\right)+4 \left(3-2 N^2\right) \tr\left(K^2\right)}{N \left(-36+N^2 \left(-7+N^2\right)^2\right)}\tr(\Lambda^4)\\
\phantom{\mathscr{I}_2=}{}+\frac{40 \left(2+2 N^2-\tr(K)\right) \tr(K)+16 \left(-3+2 N^2\right) \tr\left(K^2\right)}{N^2 \left(-36+N^2 \left(-7+N^2\right)^2\right)}\tr(\Lambda^3)\tr(\Lambda)\\
\phantom{\mathscr{I}_2=}{}+ \frac{20 \left(-3\!+\!2 N^2\right) \tr(K)\!+\!\left(30\!-\!14 N^2\!+\!N^4\right) \tr(K)^2\!+\!2 \left(18\!-\!6 N^2\!+\!N^4\right) \tr\left(K^2\right)}{N^2 \left(-36+N^2 \left(-7+N^2\right)^2\right)}\tr(\Lambda^2)^2\\
\phantom{\mathscr{I}_2=}{}-\frac{2 \left(100 \tr(K)+\left(-14+N^2\right) \tr(K)^2+2 \left(6+N^2\right) \tr\left(K^2\right)\right)}{N \left(-36+N^2 \left(-7+N^2\right)^2\right)}\tr(\Lambda^2)\tr(\Lambda)^2\\
\phantom{\mathscr{I}_2=}{}+\frac{100 \tr(K)+\left(-14+N^2\right) \tr(K)^2+2 \left(6+N^2\right) \tr\left(K^2\right)}{N^2 \left(-36+N^2 \left(-7+N^2\right)^2\right)}\tr(\Lambda)^4.
\end{gather*}
The result for $\mathscr{I}_3$ is lengthy and because it can be easily calculated from the list of restricted traces given in Appendix~\ref{appendixC}, we refrain from presenting it here. Note that all the above integrals pass the f\/irst consistency check: We have $\mathscr{I}_k=0$ if $\Lambda\sim\unit_N$.
To rephrase the perturbative expansion in terms of an ef\/fective action, we re-exponentiate the terms. That is, we write
\begin{equation*}
\de^{-\beta(S_1+S_2+S_3)}=1-\beta\mathscr{I}_1+\frac{\beta^2}{2}\mathscr{I}_2-\frac{\beta^3}{6}\mathscr{I}_3+\mathcal{O}(\beta^4),
\end{equation*}
where we demand that the $S_i$ are polynomials in the eigenvalues of order $2i$. As the same holds by def\/inition for the $\mathscr{I}_i$, we can match both sides order by order and arrive at
\begin{equation*}
S_1=\mathscr{I}_1, \qquad S_2=\frac{\beta}{2}(\mathscr{I}_1^2-\mathscr{I}_2), \qquad S_3=\frac{\beta^2}{6}(2\mathscr{I}_1^3-3\mathscr{I}_1\mathscr{I}_2+\mathscr{I}_3).
\end{equation*}
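For example, at order $\mathcal{O}(\beta^2)$ the matching reads as
\begin{equation*}
-\beta S_2+\frac{\beta^2}{2}S_1^2=\frac{\beta^2}{2}\mathscr{I}_2
\qquad\mbox{with}\qquad S_1=\mathscr{I}_1\qquad\Rightarrow\qquad S_2=\frac{\beta}{2}\big(\mathscr{I}_1^2-\mathscr{I}_2\big),
\end{equation*}
and the expression for $S_3$ follows analogously at order $\mathcal{O}(\beta^3)$.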
We can now perform the second consistency check of our result and compare the re-exponen\-tiated action with the toy model $N=n+1$ from Section~\ref{sec:toymodel}. In the representation $N=n+1$, we have
\begin{gather*}
\tr(K)=2n(1+n)(2+n),\qquad\!\!\! \tr(K^2)=4n(1+n)^2(2+n),\qquad\!\!\! \tr(K^3)=8n(1+n)^3(2+n).\!
\end{gather*}
Plugging this into the expressions for $S_1,S_2$ and $S_3$, we f\/ind that $S_1$ indeed reduces to the kinetic term of the toy model~\eqref{eq:toymodelKineticTerm}. Since the terms $S_2$ and $S_3$ vanish as required, our results pass this consistency check as well.
By re-inserting the integration over the Haar measure, we can return from the eigenvalues to the full hermitian matrices $\Phi$. We thus obtain the multitrace matrix model with action $S=S_1+S_2+S_3$ with
\begin{equation*}
S_1= \frac{\tr(K)}{N^2-1}\tr\big(\Phi^2\big)-\frac{\tr(K)}{N^3-N}\tr(\Phi)^2,
\end{equation*}
etc. The involved expressions for $S_2$ and $S_3$ are again lengthy but easily calculated from the results given above.
Altogether, we obtained a multitrace matrix model whose partition function approximates the partition function of fuzzy scalar f\/ield theory on complex projective space up to order $\mathcal{O}(\beta^3)$. This approximation should be valid in particular for large values of the couplings $r$ and $g$.
\section[Large N solutions of the model]{Large $\boldsymbol{N}$ solutions of the model}\label{section4}
The partition function of the multitrace matrix model we obtained in the previous section can now be evaluated analytically for f\/inite $N$ using the methods of orthogonal polynomials. As we are mainly interested in the phase diagram, we consider instead the large $N$ limit and use the saddle point method to determine the partition function here. We try to be self-contained and present the involved steps in detail.
\subsection[The large N limit]{The large $\boldsymbol{N}$ limit}
The phase diagram determined numerically in \cite{GarciaFlores:2005xc} is invariant under the multiscaling limit
where $N\rightarrow\infty$ and $N^2\beta g$ as well as $N^{3/2}\beta r$ are kept f\/ixed. This justif\/ies solving our model in the large $N$ limit and comparing the result with this phase diagram. In this limit, the discrete set of eigenvalues goes over into a continuous function: We rescale $\lambda_i\rightarrow \lambda(i/N)=:\lambda(x)$, with $0< x \leq 1$. The traces turn correspondingly into integrals: $\tr(\Phi^j)=\sum_i\lambda_i^j\rightarrow N\int_0^1\dd x\,\lambda(x)^j$.
The formulas for general $n$ turn out to be very lengthy and dif\/f\/icult to handle. Therefore we will restrict our attention in the following to the three projective spaces $\CPP^1$, $\CPP^2$ and~$\CPP^3$. The f\/irst case $n=1$ is interesting, as we would like to compare the resulting phase diagram to the one numerically obtained in~\cite{GarciaFlores:2005xc}. The second case $n=2$ is a well-def\/ined, four-dimensional quantum f\/ield theory with quartic potential. The third case $n=3$ is interesting as the corresponding scalar f\/ield theory on $\FR^6$ is not renormalizable.
The eigenvalues of the quadratic Casimir, the degeneracy of the corresponding eigenspaces and the dimension of the representation $\overline{(\ell,0,\ldots,0)}\otimes(\ell,0,\ldots,0)$ for the cases $n=1,2,3$ are listed in the following table:
\begin{center}
\begin{tabular}{r|c|c|c}
& $\CPP^1$ & $\CPP^2$ & $\CPP^3$ \\
\hline
eigenvalues of $C_2$ & $2\ell(\ell+1)$ & $2\ell(\ell+2)$ & $2\ell(\ell+3)$\tsep{2pt}\\
degeneracy of eigenspaces & $1+2\ell$ & $(1+\ell)^3$ & $\tfrac{1}{12}(1+\ell)^2(2+\ell)^2(3+2\ell)$\\
$N_{n,\ell}$ & $\ell+1$ & $\tfrac{1}{2}(\ell+1)(\ell+2)$ & $\tfrac{1}{6}(\ell+1)(\ell+2)(\ell+3)$\\
\end{tabular}
\end{center}
As the function $N_{n,\ell}$ is surjective only for $n=1$, so that we cannot f\/ind an $\ell$ for every value of $N$, we will rewrite the multitrace matrix model in terms of~$\ell$.
From the table above, we easily evaluate the various traces over $K$ appearing in the action of the multitrace matrix model. We have for $\CPP^1$:
\begin{gather*}
\tr(K)=\ell(1+\ell)^2 (2+\ell),\qquad \tr(K^2)=\tfrac{4}{3}\ell^2(1+\ell)^2(2+\ell)^2,\\
\tr(K^3) =\tfrac{2}{3}\ell^2(1+\ell)^2(2+\ell)^2(3\ell^2+6\ell-1),
\end{gather*}
for $\CPP^2$:
\begin{gather*}
\tr(K)=\tfrac{1}{3}\ell(1+\ell)^2 (2+\ell)^2(3+\ell),\qquad
\tr(K^2)=\tfrac{1}{2}\ell^2(1+\ell)^2(2+\ell)^2(3+\ell)^2,\\
\tr(K^3) =\tfrac{1}{5}\ell^2(1+\ell)^2(2+\ell)^2(3+\ell)^2(4\ell^2+12\ell-1),
\end{gather*}
and for $\CPP^3$:
\begin{gather*}
\tr(K) =\tfrac{1}{24}\ell(1+\ell)^2(2+\ell)^2(3+\ell)^2(4+\ell),\\
\tr(K^2) =\tfrac{1}{15}\ell^2(1+\ell)^2(2+\ell)^2(3+\ell)^2(4+\ell)^2,\\
\tr(K^3) =\tfrac{1}{45}\ell^2(1+\ell)^2(2+\ell)^2(3+\ell)^2(4+\ell)^2(5\ell^2+20\ell-1).
\end{gather*}
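As a cross-check against the toy models of Section~\ref{sec:toymodel}, setting $\ell=1$ in these expressions reproduces $\tr(K)=2n(1+n)(2+n)$:
\begin{equation*}
\tr(K)\big|_{\ell=1}=1\cdot2^2\cdot3=12,\qquad \tfrac{1}{3}\cdot1\cdot2^2\cdot3^2\cdot4=48,\qquad \tfrac{1}{24}\cdot1\cdot2^2\cdot3^2\cdot4^2\cdot5=120
\end{equation*}
for $n=1,2,3$, respectively, and analogously for $\tr(K^2)$ and $\tr(K^3)$.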
In the limit $\ell\rightarrow\infty$, the expressions for the various matrix models simplify. Switching to the eigenvalue description and using the moments
\begin{equation*}
c_n:=\int \dd x\, \lambda^n(x),
\end{equation*}
we can write them down explicitly. On the three fuzzy $\CPP^n$s, we have the models
\begin{gather*}
\beta S^{(n=1)}=
\beta\ell^3(c_2-c_1^2)-\beta^2\frac{\ell^4}{3}
\left(c_1^2-c_2\right)^2-\beta^3\frac{4\ell^5}{27}\left(2c_1^3-3c_1c_2+c_3\right)^2\\
\phantom{\beta S^{(n=1)}=}{}
+\beta \ell r c_2+\beta \ell g c_4-\ell^2\int\dd x\,\dd y\,\log|\lambda(x)-\lambda(y)|,\\
\beta S^{(n=2)}= \beta\frac{2\ell^4}{3}(c_2-c_1^2)-\beta^2\frac{2\ell^4}{9}\left(c_1^2-c_2\right)^2\\
\phantom{\beta S^{(n=2)}=}{} - \beta^3\frac{8\ell^4}{405}\left(12c_1^6-36c_1^4c_2+21c_1^2c_2^2+8c_2^3+10c_1(2c_1^2-3c_2)c_3+5c_3^2\right)\\
\phantom{\beta S^{(n=2)}=}{} +\beta\frac{\ell^2}{2}r c_2+\beta\frac{\ell^2}{2}g c_4-\frac{\ell^4}{4}\int\dd x\,\dd y\,\log|\lambda(x)-\lambda(y)|,\\
\beta S^{(n=3)}= \beta\frac{\ell^5}{4}\left(c_2-c_1^2\right)-\beta^2\frac{3\ell^4}{20}\left(c_1^2-c_2\right)^2\\
\phantom{\beta S^{(n=3)}=}{}
- \beta^3\frac{\ell^3}{25}\left(2c_1^6-6c_1^4c_2-3c_1^2c_2^2+10c_2^3+6c_1(2c_1^2-3c_2)c_3+3c_3^2\right)\\
\phantom{\beta S^{(n=3)}=}{}
+\beta\frac{\ell^3}{6} r c_2+\beta\frac{\ell^3}{6} g c_4-\frac{\ell^6}{36}\int\dd x\,\dd y\,\log|\lambda(x)-\lambda(y)|,
\end{gather*}
where the repulsive log-term arises as usual from exponentiating the Vandermonde determinant. The $\ell$-dependence of the log-term is due to the factor $N_{n,\ell}^2$, which in turn originated from rewriting the double sum as a double integral. In the above expressions, subleading terms in $\ell$ have been suppressed in each summand.
As a next step, we have to f\/ind the appropriate multi-scaling behavior of the constants~$\beta$,~$r$,~$g$ and the continuous eigenvalues $\lambda(x)$. The coef\/f\/icients of the log-terms determine the desired scaling behavior of the total action. We f\/ix the remaining scalings by demanding that the whole action scales homogeneously and that $\beta g$ scales with $N^2$, as for an ordinary hermitian matrix model. We thus f\/ind the following rescalings:
\begin{equation*}
\begin{tabular}{c|llll}
$\CPP^1$ & $\beta\rightarrow \ell^{-\frac{1}{2}}\beta$, & $\lambda(x)\rightarrow \ell^{-\frac{1}{4}}\lambda(x)$, & $r\rightarrow \ell^{2}r$, & $g\rightarrow \ell^{\frac{5}{2}}g$\phantom{\Big(} \\
\hline
$\CPP^2$ & $\beta\rightarrow \beta$, & $\lambda(x)\rightarrow \lambda(x)$, & $r\rightarrow \ell^2r$, & $g\rightarrow \ell^2g$\phantom{\Big(} \\
\hline
$\CPP^3$ & $\beta\rightarrow \ell^{\frac{1}{2}}\beta$, & $\lambda(x)\rightarrow \ell^{\frac{1}{4}}\lambda(x)$, & $r\rightarrow \ell^{2}r$, & $g\rightarrow \ell^{\frac{3}{2}}g$\phantom{\Big(} \\
\end{tabular}
\end{equation*}
Note that the scalings for $\CPP^1$ indeed agree with the ones numerically determined in \cite{GarciaFlores:2005xc} as well as the ones calculated in \cite{O'Connor:2007ea}.
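One readily verif\/ies that all terms then scale homogeneously. For $\CPP^1$, e.g., the rescalings yield
\begin{equation*}
\beta\ell^3c_2\ \rightarrow\ \beta\ell^2c_2,\qquad
\beta^2\ell^4c_2^2\ \rightarrow\ \beta^2\ell^2c_2^2,\qquad
\beta\ell\,r\,c_2\ \rightarrow\ \beta\ell^2r\,c_2,\qquad
\beta\ell\,g\,c_4\ \rightarrow\ \beta\ell^2g\,c_4,
\end{equation*}
and similarly for the term cubic in $\beta$, so that the full action scales as $\ell^2$, as does the Vandermonde term. Moreover, $\beta g$ then scales as $\ell^2\sim N^2$ and $\beta r$ as $\ell^{3/2}\sim N^{3/2}$, which is precisely the multiscaling behavior mentioned at the beginning of this section.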
As a f\/inal simplif\/ication, we note that our theory is invariant under $\Phi\rightarrow -\Phi$, as the potential is even. We expect the eigenvalues to respect this symmetry\footnote{As we will see later, this assumption is not correct in a small part of the conf\/iguration space. At this point, it serves as a very useful approximation to keep the terms in the action manageable.}, and therefore we set all the odd moments $c_{2n+1}$, $n\in\NN$, to zero. Moreover, we replace the integral over $x$ by an integral over the eigenvalue density $\rho(\lambda):=\frac{\dd x}{\dd \lambda}$. We thus eventually arrive at the following three models, which we wish to solve:
\begin{gather*}
\beta S^{(n=1)}= \beta \left(1-\frac{\beta}{3}c_2+r\right)c_2+\beta g c_4-\int\dd \lambda\,\dd \mu\,\rho(\lambda)\log|\lambda-\mu|\rho(\mu),\\
\beta S^{(n=2)}= \beta\left(\frac{8}{3}-\frac{8\beta}{9}c_2-\frac{256\beta^2}{405}c_2^2+2r\right)c_2+2\beta g c_4-\int\dd \lambda\,\dd \mu\,\rho(\lambda)\log|\lambda-\mu|\rho(\mu),\\
\beta S^{(n=3)}= \beta\left(9-\frac{27\beta}{5}c_2-\frac{72\beta^2}{5}c_2^2+6r\right)c_2+6\beta g c_4-\int\dd \lambda\,\dd \mu\,\rho(\lambda)\log|\lambda-\mu|\,\rho(\mu).
\end{gather*}
\subsection{Solving the models}
We will now calculate the partition functions of our models using the saddle point method: In the large $\ell$ limit, the path integral localizes on classical solutions, or saddle points, of the action\footnote{Note that we had to switch to the eigenvalue formulation of the matrix models f\/irst, as the zero modes corresponding to the angular degrees of freedom contained in $\Phi$ would have rendered the approximation invalid.}. These solutions, which are valid only for a restricted range of the coupling constants, can be easily obtained using standard methods in random matrix theory. We start from the action\footnote{We have included a Lagrange multiplier $\xi$ to f\/ix the normalization of the eigenvalue density.}
\begin{equation*}
S[\rho(\lambda)]=\int_\mathcal{I}\dd\lambda\,\rho(\lambda)V(\lambda)-\int_{\mathcal{I}\times \mathcal{I}}\dd\lambda\,\dd\mu\,\rho(\lambda)\log|\lambda-\mu|\,\rho(\mu)+\xi\left(\int_\mathcal{I}\dd\lambda\,\rho(\lambda)-1\right),
\end{equation*}
where $\mathcal{I}$ is the union of open intervals on the real line over which $\rho(\lambda)$ has support. The saddle point equation is obtained by varying the above equation with respect to $\rho(\lambda)$:
\begin{equation}\label{eq:presaddlepoint}
V(\lambda)-2\int_\mathcal{I}\dd\mu\,\rho(\mu)\,\log|\lambda-\mu|+\xi=0.
\end{equation}
Note that our potentials satisfy $V(\lambda)=0$ at $\lambda=0$ and we can therefore determine the Lagrange multiplier $\xi$ by solving the saddle point equation at this special point if $0\in\mathcal{I}$:
\begin{equation*}
\xi_{0\in\mathcal{I}}=2\int_\mathcal{I}\dd\mu\,\rho(\mu)\log|\mu|;
\end{equation*}
otherwise, one has to choose a dif\/ferent value of $\lambda$ to obtain $\xi$. We def\/ine the free energy~$F$ as $F:=-\log(\mathcal{Z})$, where $\mathcal{Z}$ is the partition function of our model. In the saddle point approximation, this reduces to $F=\beta S[\rho(\lambda)]$, which we can evaluate using \eqref{eq:presaddlepoint}:
\begin{equation*}
F=\tfrac{1}{2}\int_\mathcal{I} \dd \lambda\, \rho(\lambda) V(\lambda)-\tfrac{1}{2}\xi.
\end{equation*}
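This expression follows directly by inserting the saddle point equation \eqref{eq:presaddlepoint} back into the action: the Vandermonde term reduces to
\begin{equation*}
\int_{\mathcal{I}\times\mathcal{I}}\dd\lambda\,\dd\mu\,\rho(\lambda)\log|\lambda-\mu|\,\rho(\mu)=\tfrac{1}{2}\int_\mathcal{I}\dd\lambda\,\rho(\lambda)\big(V(\lambda)+\xi\big)=\tfrac{1}{2}\int_\mathcal{I}\dd\lambda\,\rho(\lambda)V(\lambda)+\tfrac{1}{2}\xi,
\end{equation*}
where we used the normalization $\int_\mathcal{I}\dd\lambda\,\rho(\lambda)=1$.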
To f\/ind the eigenvalue density $\rho(\lambda)$, it is convenient to replace \eqref{eq:presaddlepoint} with its derivative\footnote{When doing this, one obviously has to vary each moment: $\delta c_2^2=2c_2\delta c_2$, etc.} with respect to $\lambda$:
\begin{equation}\label{saddlepoint}
V'(\lambda) =
2\int_\mathcal{I} \hspace{-0.44cm}-~\dd \mu\,\frac{\rho(\mu)}{\lambda-\mu}.
\end{equation}
This is a singular integral equation, and its general solution can be found e.g.\ in~\cite{9001607004}, see also~\cite{DiFrancesco:1993nw}. First of all, one introduces the {\em resolvent} $W(\lambda)$, which is an analytic function on $\FC\backslash \mathcal{I}$, def\/ined according to
\begin{equation*}
W(\lambda):=\int \dd \mu\, \frac{\rho(\mu)}{\lambda-\mu}.
\end{equation*}
Note that for large $\lambda$, we have $W(\lambda)\sim \frac{1}{\lambda}$. The resolvent is related to the eigenvalue densi\-ty~$\rho(\lambda)$ and the Cauchy principal value appearing in the equation of motion through the Plemelj formula, and we arrive at
\begin{gather*}
\rho(\lambda)=-\frac{1}{2\pi\di}(W(\lambda+\di\eps)-W(\lambda-\di\eps)),\\
V'(\lambda)=W(\lambda+\di\eps)+W(\lambda-\di\eps).
\end{gather*}
The f\/irst equation determines $\rho(\lambda)$ in terms of the resolvent, and the second equation is a much simpler equation than \eqref{saddlepoint}, which f\/ixes the resolvent\footnote{Strictly speaking, it f\/ixes the resolvent only up to regular terms, which, however, are absent as can be seen from the large $\lambda$ behavior $W(\lambda)\sim\frac{1}{\lambda}$.} and thus the eigenvalue density. One can show that the resolvent satisf\/ies the Schwinger--Dyson equation
\begin{equation*}
W^2(\lambda)-V'(\lambda)W(\lambda)+\tfrac{1}{4}R(\lambda)=0,
\end{equation*}
where
\begin{equation*}
R(\lambda)=4\int \dd \mu\, \rho(\mu)\frac{V'(\lambda)-V'(\mu)}{\lambda-\mu}
\end{equation*}
is a polynomial of degree $d-2$. The solution to the above equation reads as
\begin{equation*}
W(\lambda)=\tfrac{1}{2}(V'(\lambda)\pm \underbrace{\sqrt{V'{}^2(\lambda)-R(\lambda)}}_{:=\omega(\lambda)}),
\end{equation*}
where $\omega(\lambda)$ describes the part of $W(\lambda)$ containing the branch cuts.
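As an elementary illustration of this machinery, consider the Gaussian potential $V(\lambda)=\lambda^2$: one f\/inds $R(\lambda)=4\int\dd\mu\,\rho(\mu)\frac{2\lambda-2\mu}{\lambda-\mu}=8$ and therefore
\begin{equation*}
W(\lambda)=\lambda-\sqrt{\lambda^2-2}\qquad\mbox{and}\qquad\rho(\lambda)=\frac{1}{\pi}\sqrt{2-\lambda^2},
\end{equation*}
i.e.\ Wigner's semicircle distribution on $\mathcal{I}=(-\sqrt{2},\sqrt{2})$; the sign of the root is f\/ixed by the asymptotics $W(\lambda)\sim\frac{1}{\lambda}$.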
Explicit solutions are now obtained by making assumptions about the support $\mathcal{I}$ of the eigenvalue density $\rho(\lambda)$.
The simplest assumption is that $\mathcal{I}$ consists of a single interval. This is expected if the potential either consists of one deep well or if the eigenvalue f\/illing is such that all the local minima of the potential are more than f\/illed up. In this case, the resolvent has to have a branch cut over $\mathcal{I}:=(\delta_1,\delta_2)$ and the corresponding solution is therefore known as a~{\em single-cut solution}. The resolvent's singular part has to contain exactly two roots, cf.\ e.g.~\cite{Brezin:1977sv}:
\begin{equation*}
\omega^2(\lambda)=M^2(\lambda)(\lambda-\delta_1)(\lambda-\delta_2)=V'{}^2(\lambda)-R(\lambda).
\end{equation*}
One can now make a general ansatz for the polynomials $M(\lambda)$ and $R(\lambda)$. Together with the self-consistency condition that all the moments $c_n$ satisfy their def\/ining relation
\begin{equation*}
c_n := \int \dd \lambda\, \rho(\lambda) \lambda^n,
\end{equation*}
we can solve for all unknowns and determine $\rho(\lambda)$. Note that the normalization condition on the eigenvalue density $c_0=1$ is equivalent to the less involved condition that the asymptotic behavior of the resolvent is $W(\lambda)=\frac{1}{\lambda}+\mathcal{O}(\frac{1}{\lambda^2})$.
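For instance, for an ef\/fective potential with $V'(\lambda)=2a\lambda+4b\lambda^3$, which is the form relevant for all models below after dropping the odd moments, this ansatz is solved on $\mathcal{I}=(-d,d)$ by
\begin{equation*}
\rho(\lambda)=\frac{1}{2\pi}\big(4b\lambda^2+2a+2bd^2\big)\sqrt{d^2-\lambda^2},\qquad 2ad^2+3bd^4=4,
\end{equation*}
where the algebraic relation encodes the normalization $c_0=1$. The symmetric single-cut solutions presented below are all of this form, with $a$ and $b$ read of\/f from the respective multitrace actions; since $a$ itself contains the moment $c_2$, the latter is determined self-consistently.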
When having a double well potential, we also expect solutions where $\mathcal{I}$ is given by the union of two disjoint intervals $\mathcal{I}=(\delta_1,\delta_2)\cup(\eps_1,\eps_2)$. Correspondingly, the singular part of the potential contains four roots and we make the ansatz:
\begin{equation*}
\omega^2(\lambda)=M^2(\lambda)(\lambda-\delta_1)(\lambda-\delta_2)
(\lambda-\eps_1)(\lambda-\eps_2)=V'{}^2(\lambda)-R(\lambda).
\end{equation*}
This solution is known as a {\em double-cut solution}.
It is important to stress that in general, all solutions will be valid only on a subset of the full parameter space of the model under consideration. This subset is characterized by the condition $\rho(\lambda)\geq 0$ (and therefore $c_{2n}\geq 0$) as well as the condition that $\mathcal{I}$ is of the assumed form. It is called the {\em existence domain} of a solution, and its boundary in parameter space can correspond to a phase transition.
In the following, we will present all the solutions for the various models together with their existence domains. For $\CPP^1$, we also give the explicit expressions for the free energies. We will consider three kinds of solutions: the symmetric single-cut solution, the symmetric double-cut solution and the asymmetric single-cut solution. In the latter case, we should strictly speaking include all the odd moments $c_{2n-1}$, $n\in\NN$, which we dropped in our actions. However, the full action would be very dif\/f\/icult to handle analytically, and we hope to make at least qualitative statements with our truncation.
The solutions for closely related models, in which $r$ is kept f\/ixed, the coef\/f\/icient of $c_2^2$ is a~parameter and the coef\/f\/icient of $c_2^3$ vanishes, have been computed in \cite{Das:1989fq} for the symmetric single-cut type and \cite{Shishanin:2007zz} for the two other types.
\subsection[Solutions of the model on CP**1]{Solutions of the model on $\boldsymbol{\CPP^1}$}
For the symmetric single-cut case, we assume that the eigenvalue density $\rho(\lambda)$ has support on the interval $\mathcal{I}=(-d,+d)$. The solution we obtain from the procedure described above together with the conditions $W(z)\sim \frac{1}{z}$ and $\int_\mathcal{I} \dd\lambda\,\lambda^2\,\rho(\lambda)=c_2$ reads as
\begin{gather}
c_2=\frac{3(2d^2r\beta+3d^4g\beta+2d^2\beta-4)}{4d^2\beta^2} ,\qquad \rho(\lambda)= \frac{\sqrt{d^2-\lambda^2}\big(4-d^2g\beta(d^2-4\lambda^2)\big)}{2d^2\pi} ,\nonumber\\
48+d^2\beta(d^6g\beta^2+4d^2(\beta-9g)- 24(1+r))=0.\label{eq:solCP11CS}
\end{gather}
The free energy of this solution is given by
\begin{equation*}
F=\frac{1}{64} \left(40+d^2 \beta \left(-12 d^2 g+4 (1+r)+d^4 g (1+r) \beta \right)-64 \log\left(\frac{d}{2}\right)\right).
\end{equation*}
The solution exists if both $\rho(\lambda)$ and $c_2$ are nowhere negative. The condition $\rho(\lambda)\geq 0$ amounts to
\begin{equation*}
r>-1+\frac{2(\beta-3g)}{3\sqrt{\beta g}},
\end{equation*}
while $c_2\geq 0$ is always satisf\/ied in the existence domain of the solution~\eqref{eq:solCP11CS}.
Next, we assume a symmetric double-cut support for $\rho(\lambda)$ on $\mathcal{I}=(-\sqrt{s+d},-\sqrt{s-d})\cup(\sqrt{s-d},\sqrt{s+d})$. The solution here reads as
\begin{gather*}
\rho(\lambda)=\frac{2}{\pi}g\beta\lambda\sqrt{(s+d-\lambda^2)(\lambda^2-s+d)} ,\qquad c_2=s,\qquad d=\frac{1}{\sqrt{\beta g}},\qquad s=\frac{3(1+r)}{2(\beta-3g)}.
\end{gather*}
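Note that the normalization of $\rho(\lambda)$ is automatic here: with the substitution $u=\lambda^2$, the integral of $\rho$ over $\mathcal{I}$ collapses to a semicircle integral,
\begin{equation*}
\int_\mathcal{I}\dd\lambda\,\rho(\lambda)=\frac{2g\beta}{\pi}\int_{s-d}^{s+d}\dd u\,\sqrt{(s+d-u)(u-s+d)}=\frac{2g\beta}{\pi}\,\frac{\pi d^2}{2}=g\beta d^2=1,
\end{equation*}
by the quoted value of $d$.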
To evaluate the free energy, we compute $\xi$ at $\lambda=\sqrt{s}$ and use the relation
\begin{equation*}
F=\tfrac{1}{2}\left(\int_\mathcal{I}\dif{\lambda}\rho(\lambda)
\left(V(\lambda)-\log(\lambda-\sqrt{s})-\log(\lambda+\sqrt{s})\right)\right)
+\tfrac{1}{4}V(\sqrt{s})+\tfrac{1}{4}V(-\sqrt{s}).
\end{equation*}
The result is
\begin{equation*}
F=\frac{9 g-3 \left(3+4 r+2 r^2\right) \beta +(6 g-2 \beta ) \log(4 g \beta )}{24 g-8 \beta }.
\end{equation*}
The symmetric double-cut solution exists if $s=c_2>0$, i.e.\ if either $r<-1$ and $3g>\beta$ or $r>-1$ and $3g<\beta$, and if additionally $s>d$. The latter condition yields
\begin{equation*}
r<-1+\frac{2(\beta-3g)}{3\sqrt{\beta g}},
\end{equation*}
and we see that the boundary of the existence domain of the symmetric double-cut solution matches that of the symmetric single-cut solution. We therefore expect a phase transition at this boundary. At the point $(r_0,g_0)=(-1,\frac{\beta}{3})$, something interesting happens: Here, the equation for $s$ becomes trivially satisf\/ied and $s$ is unconstrained. The two cuts can thus be arbitrarily far apart. At this point, the action reduces to
\begin{equation*}
S_{(r_0,g_0)}=-\frac{\beta}{3}c_2^2+\frac{\beta}{3}c_4,
\end{equation*}
and we have a competition of the single-trace and the multitrace potential term.
The asymmetric single-cut solution with support on $\mathcal{I}=(s-d,s+d)$ with $s\neq 0$ is given by
\begin{gather*}
\rho(\lambda)=\frac{2 g \beta \sqrt{d^2-(s-\lambda )^2} \left(\lambda (s+\lambda )-d^2\right)}{\pi } , \qquad c_2=\frac{3(1+3d^2g+r+2gs^2)}{2\beta}, \\
s=\sqrt{\frac{4+3d^4g\beta}{8d^2g\beta}} ,\qquad 4 \beta +g \left(-12-12 d^2 (1+r) \beta +5 d^8 g \beta ^3+d^4 \beta (-45 g+11 \beta )\right)=0.
\end{gather*}
To evaluate the free energy, we determine the Lagrange multiplier $\xi$ at $\lambda=s$. Our def\/initions then yield
\begin{equation*}
F=\frac{8 (1+r)+d^2 g \left(-24+d^2 \beta \left(-3 d^2 g+14 (1+r)+5 d^4 g (1+r) \beta \right)\right)+32 d^2 g \log\left(\frac{d}{2}\right)}{32 d^2 g}.
\end{equation*}
The asymmetric single-cut solution exists if $\rho\geq 0$ and if $d$ is real and positive. The f\/irst condition implies
\begin{equation*}
r<\frac{-135 \sqrt{15} g-135 \sqrt{g} \sqrt{\beta }+41 \sqrt{15} \beta }{135 \sqrt{g} \sqrt{\beta }},
\end{equation*}
and $c_2$ and $s$ are automatically positive. The second condition amounts to $g>\frac{\beta}{3}$.
\subsection[Phase structure on CP**1]{Phase structure on $\boldsymbol{\CPP^1}$}
If existence domains do not overlap, we expect a phase transition at the boundary. If, however, two solutions exist for the same parameters, then the solution with the lowest free energy will be adopted. The resulting phase diagram is depicted in Fig.~\ref{fig1}.
\begin{figure}[h]
\centerline{\includegraphics{Saemann-Fig1}}
\caption{The phase diagram on $\CPP^1$ for $\beta=\frac{1}{2}$. Dashed lines describe phase boundaries, solid lines boundaries of existence domains. See the text for more details.}\label{fig1}
\end{figure}
Here, the symmetric single-cut, the double-cut and the asymmetric single-cut solutions are labelled as I, II and III. The boundary of the existence domain between I and II describes the usual second order phase transition of the hermitian matrix model. The existence domain of III is fully contained in the existence domain of II. There is indeed a region of the parameter space where the asymmetric f\/illing yields a lower value for the free energy than the symmetric f\/illing. This is particularly interesting, as it was very dif\/f\/icult to extract the II--III phase transition from the numerical data in~\cite{GarciaFlores:2005xc}. We can thus conf\/irm the numerical f\/indings. Furthermore, there is the forbidden region in the parameter space for $g<\frac{\beta}{3}$. We expect that higher-order corrections in $\beta$ would deform this boundary.
Altogether, we have obtained the general features of the phase diagram found in \cite{GarciaFlores:2005xc}: We have three distinct phases, which come together at the point $(r_0,g_0)=(-1,1/6)$, which corresponds to $(b,c)=(-0.5,1/12)$ in the conventions of \cite{GarciaFlores:2005xc}. This compares to numerically found values of $(b,c)=(-0.8\pm 0.08,0.15\pm 0.05)$. The discrepancy is due to the fact that the triple point is in a region of the parameter space, where the kinetic term is not small compared to the potential terms.
The ef\/fects due to the asymmetric single-cut region have to be considered only qualitatively, because we have dropped all the odd moments from the action using symmetry arguments. This was done to keep the solutions under analytical control. The critical line found in \cite{GarciaFlores:2005xc} corresponds here to the dashed curve. Including the odd moments would presumably straighten this curve.
The discrepancies compared to \cite{O'Connor:2007ea} arise from the fact that there, a contribution to the action, labelled $K\urcorner K$, was neglected in the large $N$ limit while we included it here. This yielded a~dif\/ferent model with the opposite sign of the $c_2^2$ term.
\subsection[Solutions of the model on CP**2]{Solutions of the model on $\boldsymbol{\CPP^2}$}
Let us now be brief in repeating the analysis for $\CPP^2$: The single-cut solution with support on $\mathcal{I}=(-d,d)$ is given by
\begin{gather*}
\rho(\lambda)=\frac{2 \beta \sqrt{d^2-\lambda ^2} \left(-120 c_2 \beta -128 c_2^2 \beta ^2+45 \left(4+3 d^2 g+3 r+6 g \lambda ^2\right)\right)}{135 \pi } ,\\c_2=\tfrac{1}{8} (2d^2+d^6g\beta) ,\qquad
-270+d^2 \beta (405 d^2 g+270 r-8 (-45+2 c_2 \beta (15+16 c_2 \beta )))=0 ,
\end{gather*}
and the boundary of its existence domain is
\begin{equation*}
r>-\frac{4}{3}-\sqrt{\frac{2g}{\beta }}+\frac{64 \beta +60 \sqrt{2g \beta }}{135 g} .
\end{equation*}
The double-cut solution with support $\mathcal{I}=(-\sqrt{s+d},-\sqrt{s-d})\cup(\sqrt{s-d},\sqrt{s+d})$ reads as:
\begin{gather*}
\rho(\lambda)=\frac{2 \lambda \sqrt{g \beta \big(2-4 g \beta \left(s-\lambda ^2\right)^2\big)}}{\pi } ,\qquad c_2 =s ,\\
d=\frac{1}{\sqrt{2\beta g}} ,\qquad -128 s^2\beta^2+45 (4+3 r+6 g s)-120 s \beta=0 ,
\end{gather*}
and this solution is admissible if $s$ is real and $s>d$. This yields the following bounds:
\begin{equation*}
\frac{-405 g^2+360 g \beta -592 \beta ^2}{384 \beta ^2}<r<-\frac{4}{3}-\sqrt{\frac{2g}{\beta }}+\frac{64 \beta +60 \sqrt{2g \beta }}{135 g}.
\end{equation*}
Note that the left and the right bound touch at a single point in the $r$-$g$-plane.
The asymmetric single-cut solution has support $\mathcal{I}=(s-d,s+d)$ and is given by
\begin{gather*}
\rho(\lambda)=\frac{4 g \beta \sqrt{d^2-(s-\lambda )^2} \left(\lambda (s+\lambda )-d^2\right)}{\pi },\qquad
c_2=\frac{1}{4}d^2g\beta(16s^4+10d^2s^2-d^4),\\
s=\sqrt{\frac{2+3d^4g\beta}{8d^2g\beta}},\qquad 45 \left(4+9 d^2 g+3 r+6 g s^2\right)-120 c_2 \beta -128 c_2^2 \beta ^2=0.
\end{gather*}
This solution is valid for
\begin{equation*}
r>-\frac{4}{3}-\frac{\sqrt{15 g}}{\sqrt{2 \beta }}+\frac{82 \sqrt{2\beta }}{27 \sqrt{15g}}+\frac{26896 \beta }{18225 g}.
\end{equation*}
The moment $c_2$ and the center of the cut $s$ are automatically positive. Contrary to the case of~$\CPP^1$, there is no upper bound on $r$ for this solution.
The expressions for the free energies on~$\CPP^2$ are not presented as they are lengthy but can be calculated quite straightforwardly.
\begin{figure}[h]
\centerline{\includegraphics{Saemann-Fig2}}
\caption{The boundaries of the various existence domains on $\CPP^2$ for $\beta=\frac{1}{2}$. See the text for more details.}\label{fig2}
\end{figure}
The boundaries of the various solutions are depicted in Fig.~\ref{fig2}. The solid line corresponds to the usual matrix model phase transition, i.e.\ the boundary of the existence domain of the symmetric single-cut solution. It is also the upper-right boundary for the existence domain of the double-cut solution, whose lower-left boundary is the dashed line. The dotted line is the lower-left boundary of the existence domain of the asymmetric single-cut solution. The area in the lower left corner is the forbidden region in parameter space.
\subsection[Solutions of the model on CP**3]{Solutions of the model on $\boldsymbol{\CPP^3}$}
We list now the three solutions for the model on $\CPP^3$. This model dif\/fers from that on $\CPP^2$ only in the magnitude of the coef\/f\/icients in the action. The supports are chosen in the same way as on $\CPP^1$ and $\CPP^2$.
Symmetric single-cut solution:
\begin{gather*}
\rho(\lambda)=\frac{3 \beta \sqrt{d^2-\lambda ^2} \left(10 r-3 (-5+6 c_2 \beta (1+4 c_2 \beta ))+10 g \left(d^2+2 \lambda ^2\right)\right)}{5 \pi },\\
c_2=\tfrac{1}{8}(2d^2+3d^6g\beta),\qquad
10-3 d^2 \beta (15 d^2 g+10 r-3 (-5+6 c_2 \beta (1+4 c_2 \beta )))=0,\\
r>\frac{-5 g \left(9+2 \sqrt{\frac{6g}{\beta }}\right)+36 \beta +9 \sqrt{6 g \beta }}{30 g}.
\end{gather*}
Double-cut solution:
\begin{gather*}
\rho(\lambda)=\frac{2 \lambda \sqrt{g \beta \big(6-36 g \beta \left(s-\lambda ^2\right)^2\big)}}{\pi },\\
c_2=s,\qquad d=\frac{1}{\sqrt{6\beta g}},\qquad 10 r+20 g s-3 (-5+6 s \beta (1+4 s \beta ))=0,\\
\frac{-100 g^2+180 g \beta -1161 \beta ^2}{720 \beta ^2}<r<\frac{-5 g \left(9+2 \sqrt{\frac{6g}{\beta }}\right)+36 \beta +9 \sqrt{6g \beta }}{30 g}.
\end{gather*}
Asymmetric single-cut solution:
\begin{gather*}
\rho(\lambda) =\frac{12 g \beta \sqrt{d^2-(s-\lambda )^2} \left(\lambda (s+\lambda )-d^2\right)}{\pi } ,\qquad c_2=-\frac{3}{4} d^2 g \left(d^4-10 d^2 s^2-16 s^4\right) \beta ,\\
s=\frac{\sqrt{2+9 d^4 g \beta }}{2 d \sqrt{6\beta g}}, \qquad 30 d^2 g+10 r+20 g s^2-3 (-5+6 c_2 \beta (1+4 c_2 \beta ))=0,\\
r>-\frac{3}{2}-\frac{\sqrt{5g}}{\sqrt{2\beta }}+\frac{41 \sqrt{\beta }}{10 \sqrt{10g}}+\frac{1681 \beta }{450 g}.
\end{gather*}
\begin{figure}[h]
\centerline{\includegraphics{Saemann-Fig3}}
\caption{The boundaries of the various existence domains on $\CPP^3$ for $\beta=\frac{1}{2}$.}\label{fig3}
\end{figure}
The boundaries for the existence domains are presented in Fig.~\ref{fig3}. The meaning of the lines is the same as for $\CPP^2$. Not surprisingly, the phase diagram is essentially identical to that for~$\CPP^2$. Unfortunately, there is no feature hinting at the non-renormalizability of $\phi^4$-theory on~$\FR^6$.
\section{Conclusions}\label{section5}
In this paper, we computed the partition function of scalar quantum f\/ield theory on fuzzy $\CPP^n$ to third order in the inverse temperature $\beta$, generalizing the results of \cite{O'Connor:2007ea}. As this theory can be interpreted as a noncommutative deformation of scalar quantum f\/ield theory on $\FR^{2n}$ in the large $N$ limit, we also demonstrated the existence of a nontrivial such theory.
We started by expanding the exponential of the kinetic term in the partition function. We then used group theoretic methods to integrate out the zero modes of the action and obtained a multitrace matrix model. This model was then solved via the saddle point approximation in the large $N$ limit. In principle, however, the partition function of the matrix model could have been computed as well at f\/inite $N$ using orthogonal polynomials.
We presented the explicit classical solutions on which the partition function localizes in the large $N$ limit and discussed the arising phase diagrams for $\CPP^1$, $\CPP^2$ and $\CPP^3$. We conf\/irmed the f\/indings of the numerical analysis of \cite{GarciaFlores:2005xc,Panero:2006bx} for $\CPP^1$ and reproduced qualitatively -- and partly quantitatively -- the phase diagram found numerically. That is, we conf\/irmed the existence of three distinct phases and conf\/irmed analytically their properties suggested by the numerical studies. We also found a triple point which agrees to an acceptable degree with the one found numerically.
Here, it was particularly interesting that we found a large region of the parameter space in which an asymmetric single-cut solution was energetically favorable to a symmetric double-cut solution, even though our potential was symmetric. Such situations have been studied in the past, see e.g.~\cite{Cicuta:1986tm}. Physically, the existence of this spontaneous symmetry breaking in the large~$N$ limit can be explained as follows. Consider the matrix model at f\/inite~$N$. Here, we have to introduce explicitly a symmetry breaking term for an asymmetric phase to exist, as tunnelling of the eigenvalues would otherwise restore symmetry in the eigenvalue f\/illing. After taking $N$ to inf\/inity, there is no more tunnelling, and the symmetry breaking term can be safely switched of\/f, preserving the asymmetric conf\/iguration.
It would be interesting to push the analysis of the phase diagrams further and, for example, to include all odd moments and examine the possibility of a smooth transition of the f\/illing fractions between phases II and III. We then might be able to reproduce the slope for the linear phase boundary found in \cite{GarciaFlores:2005xc}. It might also be interesting to compare our results to the f\/indings of \cite{Gubser:2000cd}, where the phase structure of noncommutative f\/ield theories on Moyal space was analyzed and \cite{Das:2008bc}, where questions arising from \cite{Gubser:2000cd} were discussed on the fuzzy sphere. Moreover, we intend to use the results found here to study scalar f\/ield theory on $\FR\times S^2$ as well as the relation of our multitrace matrix model to (deformed) integrable hierarchies in the future. Finally, recall that multitrace matrix models had been proposed as candidates for conformal f\/ield theories with $c>1$ coupled to gravity \cite{Das:1989fq}. One might be able to make sense of our models in this context, as well.
\vspace{-2pt}
\section{Introduction}
This is a continuation of \cite{M1} where we gave a complete reducibility
criterion for a tensor product $V\tp Z$ of two irreducible modules of highest weight over the
(classical or) quantum universal enveloping algebra $U_q(\g)$ of a semi-simple Lie algebra $\g$. It is formulated in terms of a
contravariant symmetric bilinear form on $V\tp Z$, which is the product of the contravariant forms on the tensor factors.
Specifically, $V\tp Z$ is completely reducible if and only if the form is non-degenerate when restricted
to the span $(V\tp Z)^+\subset V\tp Z$ of singular vectors.
In this paper, we develop an efficient computational method for practical use of that criterion. It reveals a close relation
of the form with the extremal projector \cite{AST,KT}, which was pointed out for some special cases in \cite{M1}.
We employ a parametrization of $(V\tp Z)^+$ by a certain subspace in one of the tensor factors, e.g. $V^+_Z\subset V$.
It is isomorphic to $\Hom_{U_q(\g_+)}(Z^*, V)$,
where $U_q(\g_+)$ denotes the positive nilpotent subalgebra in $U_q(\g)$,
and the star designates the dual module of lowest weight. The subspace $V^+_Z$ is identified with the kernel of
the left ideal annihilating the lowest vector in $Z^*$.
We consider the pull-back of the contravariant form from $(V\tp Z)^+$ to $V^+_Z$, which we call the extremal twist.
Regarded as a linear map from $V^+_Z$ to its dual vector space, this pull-back
relates two natural constructions of singular vectors in $V\tp Z$.
The extremal twist can be obtained as a representation of a universal element $\Theta_Z$ from a certain extension
of $U_q(\g)$, which itself can be expressed through a lift of the inverted invariant pairing $Z\tp Z^*\to \C$.
Such a pairing is non-degenerate and unique up to a normalization, thanks to irreducibility of $Z$.
The element $\Theta_Z$ appeared before in the theory of dynamical twist, for $Z$ a parabolic Verma module relative to
a Levi subalgebra $\k\subset \g$, cf. \cite{EV,KM}.
When $\k$ is the Cartan subalgebra $\h\subset \g$ and $Z$ is an ordinary Verma module,
the inverse element $\Theta^{-1}_Z$ participated in the construction of the dynamical Weyl group in \cite{EV}.
It equals the extremal projector $p_\g(\zt)$ of $U_q(\g)$ shifted by the
highest weight $\zt$ of the module $Z$. In this paper we extend that relation to all irreducible $Z$
of highest weight,
provided certain regularity assumptions on the operator $p_\g(\zt)$ as
a trigonometric rational function of $\zt$ are fulfilled.
This finding reduces the problem of semi-simplicity of tensor products to
computing the determinant of $p_\g(\zt)$. The shifted extremal projector
is naturally interpreted as the universal inverse of the contravariant form transferred from $(\>\cdot\> \tp Z)^+$
to $\Hom_{U_q(\g_+)}(Z^*, \>\cdot\>)$.
As an example, we consider a parabolic Verma module $Z$ relative to a Levi subalgebra $U_q(\k)\subset U_q(\g)$.
Such a module is parabolically induced from a finite dimensional $U_q(\k)$-module $X$ of highest weight $\zt$.
The factor $p_\k(\zt)$ entering $p_\g(\zt)$ is invertible on the subspace of concern for every finite dimensional module $V$.
Then $p_\g(\zt)$ essentially reduces to a product $p_{\g/ \k}(\zt)$ of shifted $\s\l(2)$-projectors over the roots
from $\Rm^+_\g-\Rm^+_\k$. The universal extremal twist $\Theta_Z$ coincides with
$p_{\g/ \k}^{-1}(\zt)$ up to an invertible factor which degenerates to $1$ for scalar $X$.
The poles of $p_{\g/ \k}^{-1}(\zt)$ correspond to
reducible $Z$.
As another application, we compute the extremal twist for $Z$ the base module for a quantum sphere $\Sbb^{2n}$, \cite{M3},
and thereby prove that all tensor products $V\tp Z$ with finite dimensional quasi-classical $U_q(\s\o(2n+1))$-modules $V$
are completely reducible.
\section{Quantized universal enveloping algebras}
\label{SecPrelim}
Suppose that $\g$ is a complex semi-simple Lie algebra and $\h\subset \g$ its Cartan subalgebra. Fix
a triangular decomposition $\g=\g_-\op \h\op \g_+$ with nilpotent Lie subalgebras
$\g_\pm$.
Denote by $\Rm$ the root system of $\g$, and by $\Rm^+$ the subset of positive roots with basis $\Pi$
of simple roots.
Choose an inner product $(\>.\>,\>.\>)$ on $\h$ as a multiple of the restricted Killing form
and transfer it to $\h^*$ by duality.
For each $\la\in \h^*$ denote by $h_\la$ an element of $\h$ such that $\mu(h_\la)=(\mu,\la)$, for all $\mu\in \h^*$.
Set $\la^\vee=2\frac{\la}{(\la,\la)}$ for non-zero $\la\in \h^*$.
By $U_q(\g)$ we understand the standard quantum group, cf. \cite{D,CP}. It is a $\C$-algebra
with the set of generators $e_\al$, $f_\al$, and $q^{\pm h_\al}$, $\al \in \Pi$, obeying
$$
q^{h_\al}e_\bt=q^{(\al,\bt)}e_\bt q^{h_\al},
\quad
[e_\al,f_\bt]=\dt_{\al,\bt}\frac{q^{h_\al}-q^{-h_\al}}{q_\al-q_\al^{-1}},
\quad
q^{h_\al}f_\bt=q^{-(\al,\bt)}f_\bt q^{h_\al},\quad \al, \bt \in \Pi,
$$
where $q_\al=q^{\frac{(\al,\al)}{2}}$ and $q^{h_\al}q^{-h_\al}=1=q^{-h_\al}q^{h_\al}$.
The elements $e_\al$ and $e_{-\al}=f_\al$ satisfy the q-Serre relations
$$
\sum_{k=0}^{1-a_{\al \bt}}(-1)^k {1-a_{\al \bt}\choose k}_{q_\al}e_{\pm \al}^{k}e_{\pm \bt}e_{\pm \al}^{1-a_{\al\bt}-k}=0, \quad \al\not =\bt.
$$
We use the notation $a_{\al \bt}=(\bt,\al^\vee)$ for the Cartan matrix, and ${m \choose n}_q=\frac{[m]_q!}{[n]_q![m-n]_q!}$, where $[m]_q!=[1]_q\cdot \ldots \cdot [m]_q$. Here and throughout the paper we write $[z]_q=\frac{q^z-q^{-z}}{q-q^{-1}}$ for $z\in \h+\C$.
The complex parameter $q\not =0$ is assumed not a root of unity.
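For instance, for $\al\not=\bt$ with $a_{\al \bt}=-1$ the q-Serre relation reads
$$
e_{\pm\bt}e_{\pm\al}^{2}-[2]_{q_\al}e_{\pm\al}e_{\pm\bt}e_{\pm\al}+e_{\pm\al}^{2}e_{\pm\bt}=0,
$$
which degenerates to the classical Serre relation $\bigl[e_{\pm\al},[e_{\pm\al},e_{\pm\bt}]\bigr]=0$ at $q=1$.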
Fix the comultiplication in $U_q(\g)$ as
\be
\Delta(f_\al)= f_\al\tp 1+q^{-h_\al}\tp f_\al,\quad \Delta(q^{ h_\al})=q^{h_\al}\tp q^{ h_\al},\quad\Delta(e_\al)= e_\al\tp q^{h_\al}+1\tp e_\al.
\label{coprod}
\ee
Then the antipode $\gm$ acts on the generators by the assignment $\gm( f_\al)=- q^{h_\al}f_\al$, $\gm( q^{h_\al})=q^{- h_\al}$, $\gm( e_\al)=- e_\al q^{-h_\al}$. The counit returns $\eps(e_\al)=\eps(f_\al)=0$, and $\eps(q^{ h_\al})=1$.
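As a consistency check, applying multiplication after $\gm\tp \id$ to the coproduct of $e_\al$ gives
$$
\gm(e_\al)q^{h_\al}+\gm(1)e_\al=-e_\al q^{-h_\al}q^{h_\al}+e_\al=0=\eps(e_\al)1,
$$
in agreement with the antipode axiom.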
Denote by $U_q(\h)$, $U_q(\g_+)$, $U_q(\g_-)$ the subalgebras in $U_q(\g)$ generated by, respectively, $\{q^{\pm h_\al}\}_{\al\in \Pi}$, $\{e_\al\}_{\al\in \Pi}$, and $\{f_\al\}_{\al\in \Pi}$. The algebra $U_q(\g)$ is a free $U_q(\g_-)-U_q(\g_+)$-bimodule
generated by $U_q(\h)$ and features a triangular decomposition $U_q(\g)=U_q(\g_-)U_q(\h)U_q(\g_+)$
as in the classical case.
The quantum Borel subalgebras $U_q(\b_\pm)=U_q(\g_\pm)U_q(\h)$ are Hopf subalgebras in $U_q(\g)$.
We will need the following involutive maps on $U_q(\g)$.
The assignment
\be
\si\colon e_\al\mapsto f_\al, \quad\si\colon f_\al\mapsto e_\al, \quad \si\colon q^{h_\al}\mapsto q^{-h_\al}
\label{sigma}
\ee
extends to an algebra automorphism and a coalgebra anti-automorphism of $U_q(\g)$.
The involution $\omega=\gamma^{-1}\circ \si$ preserves the comultiplication but flips the multiplication.
All $U_q(\g)$-modules are assumed left and diagonalizable over $U_q(\h)$. Given a module $V$, we write
$V[\la]$ for its subspace of weight $\la \in \h^*$. This notation applies to any $U_q(\h)$-module as well.
We denote by $\La(V)\subset \h^*$ the set of weights of a $U_q(\h)$-module $V$.
\subsection{Contravariant form on $V\tp Z$ and extremal twist}
In this section we recall a criterion for a tensor product $V\tp Z$ to be completely reducible, following \cite{M1}.
A symmetric bilinear form $\langle \>.\>,.\>\rangle$ on a module $Z$ is called contravariant with
respect to involution $\omega$ if
$\langle x z, w\rangle=\bigl\langle z,\omega(x)w\bigr\rangle$ for all $z,w\in Z$ and all $x\in U_q(\g)$.
It is known that every highest weight module has a unique, up to a scalar multiplier, contravariant form,
which is non-degenerate if and only if the module is irreducible.
Let us recall its construction.
Let $\wp\colon U_q(\g)\to U_q(\h)$ denote the projection along $\g_-U_q(\g)+U_q(\g)\g_+$ facilitated by
the triangular decomposition.
If $Z$ is the Verma module with highest weight $\zt$ and the highest vector $1_Z$, then the form is defined
by $\langle x 1_Z, y 1_Z\rangle=\zt\left(\wp \bigl(\omega(x)y\bigr)\right)$ for all $x,y\in U_q(\g)$.
Its kernel is the maximal proper submodule, therefore the form
transfers to any quotient module.
Suppose that $X$ is a module of lowest weight $\xi$ and $Z$ is a module of highest weight $\zt$.
We
extend the tensor product $X\tp Z$ to $X\hat \tp Z$ as follows.
For $\bt \in \Z\Pi$, we define $(X\hat \tp Z)[\xi+\zt+\bt]$ as the vector space of formal sums
over $\mu,\nu\in \Z_+\Pi$ subject to $\mu-\nu=\bt$ of tensors from $X[\mu+\xi]\tp Z[-\nu+\zt]$. Then $X\hat \tp Z$
consists of finite linear combinations of elements from $(X\hat \tp Z)[\xi+\zt+\bt]$ with $\bt \in \Z\Pi$.
It is easy to see that the $U_q(\g)$-action on $X\tp Z$ extends to an action on
$X\hat \tp Z$. We also apply this construction to tensor products of diagonalizable
$U_q(\h)$-modules with finite dimensional weight spaces whose weights are bounded from below and, respectively, from above.
The contravariant form on $Z$ is equivalent to an invariant pairing $Z\tp Z'\to \C$, where $Z'$ is
the opposite module of lowest weight $-\zt$. They are related by a linear isomorphism
$\id\tp \si_Z\colon Z\tp Z\to Z\tp Z'$, where $\si_Z(f 1_Z)=\si(f)1_{Z'}$ for $f\in U_q(\g_-)$ and $1_{Z'}$ is
the lowest vector in $Z'$. If the form is non-degenerate, $Z'$ is isomorphic to
$Z^*$, and there exists a $U_q(\g)$-invariant element (the inverse form) $\Sc\in Z'\hat\tp Z$.
The converse is also true.
\begin{propn}
Suppose there exists a $U_q(\b_+)$-invariant element $\Sc\in Z'\hat \tp Z$
such that $\Sc_1\langle 1_Z,\Sc_2\rangle =1_{Z'}$. Then $Z$ is irreducible,
and $\Sc$ is the inverse invariant form.
\label{lift is singular}
\end{propn}
\begin{proof}
For any $f\in U_q(\g_-)$, one has $\omega(f)=(\gm^{-1}\circ \si )(f)\in U_q(\b_+)$. Then
$$
\Sc_1\langle \Sc_2,f1_Z\rangle = \Sc_1\left\langle (\gm^{-1}\circ \si )(f)\Sc_2,1_Z\right\rangle
= \si (f)\Sc_1\langle \Sc_2,1_Z\rangle=\si (f)1_{Z'}=\si_Z(f1_{Z}).
$$
In other words, the map $z\mapsto \si_Z^{-1}(\Sc_1)\langle \Sc_2,z\rangle$
is identical on $Z$. Therefore the contravariant form is non-degenerate,
and $\left \langle z \tp w, (\si_Z^{-1}\tp \id)(\Sc)\right\rangle = \langle z, w\rangle $ for all $z,w\in Z$ as required.
\end{proof}
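To make the completed tensor product and the element $\Sc$ concrete, let $\g=\s\l(2)$ with $e=e_\al$, $f=f_\al$, and let $Z$ be an irreducible Verma module of highest weight $\zt$. All weight spaces of $Z$ and $Z'$ are one-dimensional, so every element of $Z'\hat \tp Z$ of zero weight, and in particular the inverse form, is a formal series
$$
\Sc=\sum_{k=0}^{\infty}a_k\, e^k 1_{Z'}\tp f^k 1_Z, \qquad a_k\in \C,
$$
whose $k$-th summand lies in $Z'[-\zt+k\al]\tp Z[\zt-k\al]$. The normalization of Proposition \ref{lift is singular} forces $a_0=1$, and the remaining coefficients are determined by invariance with respect to $e_\al$.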
Suppose that $Z$ is irreducible and let $V$ be another irreducible module of highest weight.
Denote by $(V\tp Z)^+$ the span of singular vectors in $V\tp Z$, i.e. the space of $U_q(\g_+)$-invariants.
Define the canonical contravariant symmetric bilinear form on $V\tp Z$ as the product of the contravariant forms on $V$ and $Z$.
\begin{thm}[\cite{M1}]
The tensor product $V\tp Z$ is completely reducible if and only if the canonical form is non-degenerate when restricted
to $(V\tp Z)^+$.
\label{canonical}
\end{thm}
\noindent
Note that the form is non-degenerate on the entire $V\tp Z$ but may not be so on $(V\tp Z)^+$.
To compute the restricted form, we parameterize $(V\tp Z)^+$ with
a vector space $V^+_Z=\Hom_{U_q(\g_+)}(Z^*,V)$.
We identify it with a subspace in $V$
annihilated by the left ideal $I^+_Z\subset U_q(\g_+)$ that kills the lowest vector in $Z^*$.
The annihilator $V^\perp_Z$ of $V^+_Z$ with respect to the contravariant form coincides with $\omega(I^+_Z)V$.
The linear map $\bar \dt\colon V\tp Z\to V$, $\bar \dt\colon v\tp z\mapsto v\langle 1_Z,z\rangle$
yields an isomorphism $(V\tp Z)^+\to V^+_Z$. We denote by $\dt$ the inverse isomorphism.
Regard $Z$ as a module over $U_q(\g_-)$ and its right dual module ${}^*\!Z$ as one over $U_q(\g_+)$.
Denote by $\Fc\in U_q(\g_+)\hat \tp U_q(\g_-)$ a lift of $\Sc\in {}^*\!Z\hat \tp Z$
under a linear section of the natural $U_q(\g_+)\tp U_q(\g_-)$-module
homomorphism $U_q(\g_+)\hat \tp U_q(\g_-)\to {}^*\!Z\hat \tp Z$.
The element $\Theta_Z=\gamma^{-1}(\Fc_2)\Fc_1$ belongs to a certain extension of $U_q(\g)$
and gives rise to a linear map $\theta_{V,Z}\colon V^+_Z\to V/V^\perp_Z$ by
$$
\bigl \langle \theta_{V,Z}(v),w \bigr \rangle =\bigl \langle \Theta_Z(v),w \bigr\rangle,
$$
which is independent of the choice of lift $\Fc$ for $\Sc$.
\begin{propn}[\cite{M1}]
\label{V-Z-extr}
The form $\bigl\langle \theta_{V,Z}(\>.\>),\>.\>\bigr\rangle$ is the pullback of the canonical form
under the isomorphism $V^+_Z\to (V\tp Z)^+$.
\end{propn}
The vector space $V/V^\perp_Z$ can be identified with a subspace ${}^+\!V_Z\subset V$ that is transversal to $V^\perp_Z$
since the contravariant form on $V$ is non-degenerate. If it is non-degenerate when restricted to $V^+_Z$, then $V=V^+_Z\op V^\perp_Z$, and one can set ${}^+\!V_Z=V^+_Z$.
Then the linear map $\theta_{V,Z}$ becomes an operator from $\End(V^+_Z)$.
\subsection{Braid group action on $U_q(\g)$ and a Cartan-Weyl basis}
The algebra $U_q(\g)$ admits a Poincar\'{e}-Birkhoff-Witt (PBW)-like basis of ordered monomials in
``root vectors'', which are constructed from the generators via an action
of the braid group, \cite{CP}, Ch.8.1. We need this basis to
write extremal projectors of $U_q(\g)$ in the next sections.
Define
$m_{\al \bt}=2,3,4,6$ for $\al,\bt \in \Pi$ if the entries of the Cartan matrix satisfy $a_{\al \bt}a_{\bt \al}=0,1,2,3$, respectively.
The braid group $\Bc_\g$ associated with $\g$ is generated by
elements $T_\al$, $\al \in \Pi$, subject to the braid relations
$\underbrace{T_\al T_\bt T_\al\cdots}_{m_{\al \bt}\ \mathrm{factors}}=\underbrace{T_\bt T_\al T_\bt\cdots}_{m_{\al \bt}\ \mathrm{factors}}$, $\al \not =\bt$.
The group $\Bc_\g$ admits a homomorphism onto the Weyl group $\Wc$ sending
$T_\al$ to the simple reflections $\si_\al \in \Wc$; its kernel is normally generated by
the elements $T_\al^2$, $\al \in \Pi$.
The length $\ell(T)$ of an element $T\in \Bc_\g$ is defined as the minimal number of generators
in a presentation of $T$, which is called a reduced decomposition of $T$. The length
of an element of a Weyl group is defined similarly, as the number of simple reflections in a reduced
decomposition.
There is a length preserving section of the surjection $\Bc_\g\to \Wc$, which is a map of sets.
Define a $T_\al$-action on generators of the quantum group by
$$
\quad T_\al(f_\al)=-q^{-h_\al}e_\al ,\quad T_\al(q^{h_\bt})=q^{h_\bt-a_{\al\bt}h_\al} , \quad T_\al(e_\al)=-f_\al q^{h_\al},
$$
$$
\quad T_\al(e_\bt)=\sum_{k=0}^{-a_{\al \bt}} \frac{(-1)^{a_{\al \bt}-k}q^{-k}}{[k]_q![-a_{\al \bt}-k]_q!}e_{ \al}^{-a_{\al\bt}-k}e_{\bt}e_{ \al}^{k}, \quad \al \not =\bt,
$$
$$
\quad T_\al(f_\bt)=\sum_{k=0}^{-a_{\al \bt}} \frac{(-1)^{a_{\al \bt}-k}q^{k}}{[k]_q![-a_{\al \bt}-k]_q!}f_{ \al}^{k}f_{\bt}f_{ \al}^{-a_{\al\bt}-k}, \quad \al \not =\bt.
$$
It extends to an algebra automorphism of $U_q(\g)$. The operators $\{T_{\al}\}_{\al \in \Pi}$ amount to
an action of $\Bc_\g$ on $U_q(\g)$.
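For instance, if $a_{\al \bt}=-1$, the above formulas give
$$
T_\al(e_\bt)=-e_\al e_\bt+q^{-1}e_\bt e_\al, \qquad T_\al(f_\bt)=-f_\bt f_\al+q\, f_\al f_\bt,
$$
so on neighboring simple roots the braid group acts by $q$-commutators.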
\begin{propn}[\cite{CP}, Prop. 8.1.6]
Let $w\in \Wc$ be such that $\bt=w(\al)\in \Pi$ for some simple root $\al$.
Then $T_w(e_{\al})=e_{\bt}$ and $T_w(f_{\al})=f_{\bt}$.
\label{braid_simple}
\end{propn}
Let $\si_{i_1}\ldots \si_{i_N}$,
where $\si_i=\si_{\al_i}$ and $N=\#\Rm^+$, be
a reduced decomposition of the longest element of $\Wc$.
Define a sequence of positive roots by
$$
\mu^1=\al_{i_1},\quad \mu^2=\si_1(\al_{i_2}), \quad \ldots \quad \mu^N=\si_1\ldots \si_{N-1}(\al_{i_N}).
$$
This sequence induces a total ordering on $\Rm^+$, called normal, such that any root of the form $\al+\bt$ with $\al,\bt\in \Rm^+$
is between $\al$ and $\bt$.
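For example, for $\g=\s\l(3)$ the reduced decomposition $\si_1\si_2\si_1$ of the longest element yields
$$
\mu^1=\al_1, \qquad \mu^2=\si_1(\al_2)=\al_1+\al_2, \qquad \mu^3=\si_1\si_2(\al_1)=\al_2,
$$
and indeed the composite root $\al_1+\al_2$ lies between $\al_1$ and $\al_2$.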
Any subset $\tilde\Pi \subset \Pi$ generates a root subsystem $\tilde \Rm\subset \Rm$.
There is a normal ordering where all roots from $\tilde \Rm^+$ are on the right of the roots from $\Rm^+\backslash \tilde \Rm^+$,
see e.g. \cite{Zh}, Exercise 1.7.10.
We will use this fact in Section \ref{Sec_Parabolic} in relation with Levi subalgebras.
A Cartan-Weyl basis in $U_q(\g)$ depends on a normal ordering and is defined as follows. The root $\mu^1$ is simple, so $e_{\mu^1}$
and $f_{\mu^1}$ are the corresponding
Chevalley generators. For $k>1$ set
$$
e_{\mu^k}=T_{\al_{i_1}}\circ \ldots \circ T_{\al_{i_{k-1}}}(e_{\al_{i_k}}), \quad
f_{\mu^k}=T_{\al_{i_1}}\circ \ldots \circ T_{\al_{i_{k-1}}}(f_{\al_{i_k}}).
$$
Proposition \ref{braid_simple} guarantees that the simple root generators are included in this set.
It is known that normally ordered monomials in these elements deliver PBW bases of $U_q(\g_+)$ and $U_q(\g_-)$, respectively, when $q$ is not a root
of unity.
Regarding $U_q(\g)$ as a $\C[q,q^{-1}]$-algebra, define an anti-automorphism $\tilde\omega$ by
\be
\tilde \omega\colon e_\al\mapsto f_\al, \quad\tilde \omega\colon f_\al\mapsto e_\al, \quad \tilde \omega\colon q^{h_\al}\mapsto q^{-h_\al},
\quad
\tilde \omega\colon q\mapsto q^{-1}.
\label{tilde-omega}
\ee
It commutes with the action of $\Bc_\g$.
\subsection{Properties of the Cartan-Weyl basis}
For each $\mu\in \Rm^+$, one has
$
[e_\mu,f_\mu]=a_\mu\frac{q^{h_\mu}-q^{-h_\mu}}{q_\mu-q^{-1}_\mu}
$
for some $a_\mu\in \C^\times$. This relation facilitates an embedding $\iota_\mu\colon U_q\bigl(\s\l(2)\bigr)\to U_q(\g)$
determined by the assignment
$$
f\mapsto \frac{1}{a_\mu}f_\mu, \quad e\mapsto e_\mu, \quad q^{h}\mapsto q^{h_\mu}, \quad
q\mapsto q_\mu,
$$
where $e,f,q^{h}$ are the standard generators of $U_q\bigl(\s\l(2)\bigr)$
satisfying $q^h eq^{-h}=q^2 e $, $q^h fq^{-h}=q^{-2} f$, and $[e,f]=[h]_q$.
We denote by $U_q(\g^\mu)$ the image of this embedding.
For $\al, \bt \in \Rm^+$ such that $\al<\bt$, denote by $U^+_{\al,\bt}$ the $U_q(\h)$-submodule in $U_q(\g)$ generated under
multiplication by the monomials $e_{\mu^i}^{k_i}\ldots e_{\mu^j}^{k_j}$
with $\al\leqslant \mu^i<\ldots<\mu^j\leqslant \bt $ and $\sum_{s}k_s>0$.
We set $U^-_{\al, \bt}=\tilde \omega (U^+_{\al, \bt})$ and denote $U^\pm_{\al\leqslant }=U^\pm_{\al,\mu^N}$ and $U^\pm_{\leqslant \al}=U^\pm_{\mu^1,\al}$.
We will also use the obvious notation $U^\pm_{< \al}$ and $U^\pm_{\al <}$
involving only the root vectors starting with the roots next to $\al$.
Given two vector subspaces $A,B\subset U_q(\g)$, we denote $A\bullet B=A+B+AB$, where the last term is the linear span of products of
elements from $A$ and $B$.
\begin{propn}
The $U_q(\h)$-modules $U^\pm_{\leqslant \al}$, $U^\pm_{\al \leqslant}$,
$U^-_{\leqslant \al}\bullet U^+_{\bt \leqslant}$, and $U^-_{\bt \leqslant}\bullet U^+_{\leqslant \al}$ with $\al<\bt$ are associative (non-unital) subalgebras in $U_q(\g)$.
\label{PBW com_rel}
\end{propn}
\begin{proof}
Set $\mu^i=\al$ and $\mu^j=\bt$ with $i<j$ and put $\al'=\mu^{i+1}$ and $\bt'=\mu^{j-1}$, so that $\al<\al'\leqslant \bt'<\bt$.
Then the following relations hold, \cite{KT}:
$$
[e_\al,e_\bt]_{q^{(\al,\bt)}}\in U^+_{\al',\bt'},
\quad
[f_\bt,f_\al]_{\bar q^{(\al,\bt)}}\in U^-_{\al',\bt'},
\quad
[e_\bt,f_\al]\in U^-_{<\al}\bullet U^+_{\bt<},
\quad
[e_\al,f_\bt]\in U^-_{\bt<}\bullet U^+_{<\al}.
$$
Here and further on we use the shortcut $\bar q=q^{-1}$.
Note that the second and fourth inclusions are obtained from the first and third by applying the automorphism $\tilde \omega$,
which flips $U_{\mu,\nu}^+$ and $U_{\mu,\nu}^-$.
Now the proof is straightforward.
\end{proof}
Note that these algebras have trivial subspace of zero weight.
\begin{propn}
\label{M-F}
For all $\mu\in \Rm^+$,
$$
\Delta(e_\mu)\in e_\mu\tp q^{h_\mu} + 1\tp e_\mu + U^+_{\mu< }\tp U^+_{< \mu}, \quad
\Delta(f_\mu)\in f_\mu\tp 1 + q^{-h_\mu}\tp f_\mu+U^-_{<\mu}\tp U^-_{\mu<}.
$$
\end{propn}
\begin{proof}
There is an invertible element $\tilde \Ru_{<\mu}\in 1\tp 1+ U^-_{<\mu}\hat \tp U^+_{<\mu}$ such that
the coproduct $\Delta(e_\mu)$ can be expressed as
$$
\Delta(e_\mu)=\tilde \Ru_{<\mu}(e_\mu \tp q^{h_\mu}+1\tp e_\mu)\tilde \Ru_{<\mu}^{-1}.
$$
This can be found in \cite{KT}, Proposition 8.3 (for a twisted coproduct as compared to ours, so our $\tilde \Ru$ is flipped).
As $\Delta(e_\mu)\in U_q(\g_+)\tp U_q(\b_+)$, it suffices to evaluate both sides of this equality on the tensor product
of the right ``universal Verma modules'', i.e.\ the quotients of $U_q(\g)$ by the right ideal $J$ generated by $f_\al$, $\al \in \Pi$:
$$
\Delta(e_\mu)=(e_\mu \tp q^{h_\mu})\tilde \Ru_{<\mu}^{-1}+1\tp e_\mu \mod J\tp J.
$$
Pushing the left leg of $\tilde \Ru_{<\mu}^{-1}$ to the left with the use of the third inclusion from Proposition \ref{PBW com_rel}, one proves the first formula.
The other is obtained by applying $\tilde \omega$,
which flips the comultiplication.
\end{proof}
\begin{propn}
For any ordered sequence of positive roots $\mu^i<\ldots <\mu^k$
the projection $\wp$ annihilates $U_{\mu^i\leqslant}^-\bullet U_{\leqslant\mu^i}^+\bullet \cdots \bullet U_{\mu^k\leqslant}^-\bullet U_{\leqslant \mu^k}^+$.
\label{absorbtion}
\end{propn}
\begin{proof}
The statement follows from the inclusion
\be
U_{i\leqslant}^-\bullet U_{\leqslant i}^+\bullet \cdots \bullet U_{k\leqslant}^-\bullet U_{\leqslant k}^+\subset
U_{i\leqslant}^-\bullet U_{\leqslant k}^+,
\label{-+}
\ee
where we write $U_{i\leqslant}^-=U_{\mu^i\leqslant}^-$ and $U_{\leqslant i}^+=U_{\leqslant \mu^i}^+$.
It is clearly true if $k=i$. Suppose it is proved for $k\geqslant i$.
Then
$$
U_{i\leqslant}^-\bullet U_{\leqslant i}^+\bullet \cdots \bullet U_{{k+1} \leqslant}^-\bullet U_{\leqslant {k+1}}^+\subset
U_{i\leqslant}^-\bullet U_{\leqslant k}^+\bullet U_{{k+1} \leqslant}^-\bullet U_{\leqslant {k+1}}^+\subset
U_{i\leqslant}^-\bullet U_{{k+1} \leqslant}^-\bullet U_{\leqslant k}^+\bullet U_{\leqslant {k+1}}^+
$$
The right inclusion is due to Proposition \ref{PBW com_rel}. The result is contained in $U_{i \leqslant}^-\bullet U_{\leqslant {k+1}}^+$,
again by Proposition \ref{PBW com_rel}.
Induction on $k$ completes the proof.
\end{proof}
\section{Extremal twist and extremal projector}
\label{Sec_con_form_proj}
We start with the case of $\g=\s\l(2)$ and normalize the inner product so that $(\al,\al)=2$ for its only positive root $\al$.
Set $e=e_\al$, $f=f_\al$, and $q^{\pm h}=q^{\pm h_\al}$ to be the standard generators of $U_q(\g)$.
Extend $U_q(\g)$ to $\hat U_q(\g)$ by including infinite sums of elements from $\C[f]\C[e]$ of the same weight
with coefficients in the field of fractions $\C(q^{\pm h})$.
A similar extension works for general semi-simple $\g$, making $\hat U_q(\g)$
an associative algebra, see e.g. \cite{KT}.
Define an element $p(t)\in \hat U_q\bigl(\s\l(2)\bigl)$ depending on a complex parameter $t$ by
\be
\label{translationed_proj}
p(t)=\sum_{k=0}^\infty f^k e^k \frac{(-1)^{k}q^{k(t-1)}}{[k]_q!\prod_{i=1}^{k}[h+t+i]_q}
\in \hat U_q\bigl(\s\l(2)\bigr).
\ee
It is stable under the involution $\omega$.
For every module $V$ with locally nilpotent action of $e$, the assignment $t\mapsto p(t)$ defines a rational trigonometric
family of endomorphisms of each weight space. On a module of highest weight
$\la$,
it acts by
\be
p(t)v=c\prod_{k=1}^{l}\frac{[t-k]_q}{[t+\xi(h)+k]_q}v,
\label{proj_eigen}
\ee
where $v$ is a vector of weight $\xi=\la-l\al$ and $c=q^{-l \xi(h)+2l^2+l(l-1)}\not =0$.
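As a quick check on the first nontrivial weight space, let $v=f1_\la$ in a Verma module of highest weight $\la$, so that $ef1_\la=[\la(h)]_q1_\la$. Only the terms with $k=0,1$ in (\ref{translationed_proj}) contribute, and a short computation gives
$$
p(t)f1_\la=\Bigl(1-q^{t-1}\frac{[\la(h)]_q}{[\la(h)+t-1]_q}\Bigr)f1_\la=q^{-\la(h)}\frac{[t-1]_q}{[t+\la(h)-1]_q}\,f1_\la,
$$
which reproduces the ratio $\frac{[t-1]_q}{[t+\xi(h)+1]_q}$ in (\ref{proj_eigen}) for $l=1$, $\xi=\la-\al$.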
For general $\g$ and $\mu\in \Rm^+$ let $p_\mu(t)$ denote the image of $p(t)$
in $\hat U_q(\g)$ under the embedding $\iota_\mu\colon \hat U_q(\s\l(2))\to \hat U_q(\g)$.
Put $\la_i=2\frac{(\la,\mu^i)}{(\mu^i,\mu^i)}\in \C$ for $\la\in \h^*$ and $\mu^i\in \Rm^+$. Define
\be
p_\g(\la)=p_{\mu^1}(\rho_1+\la_1)\cdots p_{\mu^N}(\rho_N+\la_N),
\label{factorization}
\ee
assuming $\{\mu^i\}_{i=1}^N$ normally ordered; here $\rho_i=\bigl(\rho,(\mu^i)^\vee\bigr)$, where $\rho$ is the half-sum of positive roots.
It is independent of the normal ordering and turns into the extremal projector $p_\g$ at $\la=0$, \cite{AST,KT}, which is the only element
of $\hat U_q(\g)$ satisfying
$$
p_\g^2=p_\g, \quad e_\al p_\g =0 =p_\g f_\al , \quad \forall \al \in \Pi.
$$
Uniqueness implies that $p_\g$ is $\omega$-invariant.
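For $\g=\s\l(2)$ one has $\rho_1=(\rho,\al^\vee)=1$, so the extremal projector is $p_\g=p(1)$. Consistently, the eigenvalue computed above vanishes at $t=1$ on $f1_\la$ for generic $\la$, as it must for a projector onto the span of highest weight vectors.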
Let $V$ and $W$ be vector spaces.
Suppose that $\C^k\ni \la\mapsto F(\la)\in \Hom(W,V)$ is a rational trigonometric function.
We say that $F(\la_0)$ admits a regularization if there is $\eta\in \C^k$ such that
the function $\C\ni t\mapsto F(\la_0+t\eta)$ is regular at $t=0$.
If its value is independent of $\eta$, then we say that $F(\la_0)$ is well defined.
Furthermore, we say that $p_\g$ admits a regularization on a subspace $W$ of a $U_q(\g)$-module $V$
if so does $p_\g(\la)$ at $\la=0$ and the image of the regularized map $W\to V$ is in $U_q(\g_+)$-invariants.
Suppose that $V$ and $Z$ are irreducible modules of highest weights $\nu$ and, respectively, $\zt$. Let $1_V\in V$, $1_Z\in Z$
be their highest vectors.
\begin{propn}
Suppose that $W\subset V$ is a vector subspace such that $p_\g\colon W\tp 1_Z\to (V\tp Z)^+$ admits a regularization.
Then $p_\g(\zt)\colon W\to V^+_Z$ admits a regularization. Furthermore,
\label{twist-cocycle}
\be
\dt\circ p_\g(\zt)(w)=p_\g(w\tp 1_Z), \quad w\in W.
\label{tw-coc-form}
\ee
\end{propn}
\begin{proof}
By Proposition \ref{M-F}, for all $\al\in \Rm^+$ and all $n\in \Z_+$, the coproducts satisfy
$$
\Delta (f_\al^n)= f_\al^n\tp 1\mod U_q(\g)\tp U^-_{\al\leqslant},
\quad
\Delta (e_\al^n)= e_\al^n\tp q^{n h_\al}\mod U_q(\g)\tp U^+_{\leqslant \al}.
$$
With $\eta\in \h^*$, $t\in \C$, and $w\in W$, evaluation of
$\bar \dt\bigl(p_\g(w\tp 1_Z)\bigr)=p_\g^{(1)}(t\eta)w\, \bigl\langle 1_Z,p_\g^{(2)}(t\eta) 1_Z \bigr\rangle$ reduces to the
replacement
$$
\Delta(q^{h_\al})\to q^{h_\al}\tp q^{h_\al},\quad \Delta(f_\al) \to f_\al\tp 1, \quad \Delta(e_\al)\to e_\al \tp q^{h_\al}
$$
in $\Delta\bigl(p_\g(t\eta)\bigr)$ for each $\al \in \Rm^+$, because the remainder vanishes in view of Proposition \ref{absorbtion}. This calculation returns $p_\g(\zt+t\eta)(w)$,
which proves the assertion.
\end{proof}
From now to the end of the section we assume that the map
\be
\pi\colon v\mapsto p_\g(v\tp 1_Z)\in (V\tp Z)^+
\label{inv_twist}
\ee
admits a regularization on ${}^+\!V_Z$.
Then we have a map
\be
\bar \theta_{V,Z}\colon {}^+\!V_Z\to V^+_Z, \quad \bar \theta_{V,Z}=\bar \dt\circ \pi.
\label{inv_twist1}
\ee
It is an immediate corollary of Proposition \ref{twist-cocycle} that
$\bar \theta_{V,Z}$ defines a symmetric bilinear form $\langle \bar \theta_{V,Z}(\>.\>),\>.\>\rangle$ on ${}^+\!V_Z$,
which is the pull-back of the canonical form on $(V\tp Z)^+$ under (\ref{inv_twist}).
We will prove that this form is essentially the inverse to the form determined by $\theta_{V,Z}$.
\begin{thm}
\label{Shap-proj}
The bilinear forms $\bigl\langle \theta_{V,Z}(\>.\>),\>.\> \bigr\rangle$ and $\bigl\langle\bar \theta_{V,Z}(\>.\>),\>.\> \bigr\rangle$
are non-degenerate simultaneously. In that case, they are inverse to each other.
\end{thm}
\begin{proof}
\vspace{10pt}
\noindent
Suppose that $\theta_{V,Z}$ is invertible and compute $\bigl\langle (\dt \circ \bar \theta_{V,Z}\circ \theta_{V,Z})(v), \dt(w)\bigr\rangle $
for a pair of vectors $v, w\in V^+_Z$
in two different ways as follows (we put $u=\dt(w)$ below for short).\\
i) Applying the definition (\ref{inv_twist1}) we find the matrix element equal to
$\left\langle p_\g\bigl(\theta_{V,Z}(v)\tp 1_Z\bigr),u\right\rangle $.
Presenting $p_\g=1+\sum_{i} \phi_i\psi_i$, where $\phi_i\in U_q(\g_-)$ and $\psi_i\in U_q(\b_+)$
carry non-zero weight,
we continue with
$$
\bigl\langle \theta_{V,Z}( v)\tp 1_\zt, u\bigr\rangle +\sum_{i}\bigl\langle \phi_i\psi_i(\theta_{V,Z}(v)\tp 1_\zt),u\bigr\rangle=
\bigl\langle \theta_{V,Z}(v)\tp 1_\zt,u\bigr\rangle=\bigl\langle \theta_{V,Z}(v),w\bigr\rangle=\bigl\langle v,\theta_{V,Z}(w)\bigr\rangle.
$$
The sum on the left vanishes because
$\bigl\langle \phi_i\psi_i(\ldots), u\bigr\rangle=\bigl\langle\psi_i(\ldots), (\eps\circ \omega )(\phi_i) u\bigr\rangle =0$.\\
ii) By Proposition \ref{V-Z-extr} the matrix element in question is equal to
$\langle \bar \theta_{V,Z}\circ \theta_{V,Z}(v),\theta_{V,Z}(w)\rangle $.
Since $\theta_{V,Z}$ is invertible, the image of $\theta_{V,Z}$ is ${}^+\!V_Z$,
and $\bar \theta_{V,Z}\circ \theta_{V,Z}=\id$ on $V^+_Z$.
\vspace{10pt}
Now suppose that $\bar \theta_{V,Z}$ is invertible
and evaluate $\bigl\langle\dt\circ \bar \theta_{V,Z}(v), \dt\circ\bar \theta_{V,Z}(w)\bigr\rangle$
for $v,w\in {}^+\!V_Z$ in two different ways as follows.
\\
i)
On the one hand, it is equal to
$$
\bigl\langle p_\g(v\tp 1_Z), p_\g(w\tp 1_Z)\bigr \rangle=\bigl\langle v\tp 1_Z, \omega(p_\g)\circ p_\g(w\tp 1_Z)\bigr \rangle
=\Bigl\langle v,\bar \dt\bigl( p_\g(w\tp 1_Z)\bigr) \Bigr\rangle = \bigl\langle v, \bar \theta_{V,Z}(w) \bigr\rangle.
$$
ii) On the other hand, it is equal to $\bigl\langle \theta_{V,Z}\circ \bar \theta_{V,Z}(v),\bar \theta_{V,Z}(w) \bigr\rangle$
by Proposition \ref{V-Z-extr}.
Since the image of $\bar \theta_{V,Z}$ is $V^+_Z$, one arrives at $\theta_{V,Z}\circ \bar \theta_{V,Z}=\id$ on ${}^+\!V_Z$.
\end{proof}
It follows that the regularization $\lim_{t\to 0}p_\g(t\eta)|_{V^+_Z\tp 1_Z}$ may depend on $\eta$ only if the contravariant form is degenerate on $(V\tp Z)^+$.
\subsection{On regularization of extremal projector}
Regularization of the extremal projector is crucial for application of Theorem \ref{Shap-proj} to calculation of the extremal twist.
In this section we point out some facts of practical use.
It is natural to employ decomposition of $p_\g(\la)$ to a product of the root factors (\ref{factorization}).
\begin{propn}
\label{reg_proj}
Let $V$ be a $U_q(\g)$-module and put $W=V[\mu]$ for some weight $\mu\in \La(V)$.
Fix a normal order on $\Rm^+$ and suppose that $p_\al(\rho_\al)$ are well defined
on $W$ for all $\al \in \Rm^+$. Then the operator $p_\g(0)=\prod_{\al \in \Rm^+}^< p_\al(\rho_\al)$ is well defined on $W$ and independent of the normal order.
\end{propn}
\begin{proof}
Each factor in $p_\g(\la)$ corresponding to a root $\al \in \Rm^+$ depends on $\la$ through a regular function $\la\mapsto (\la,\al^\vee)$
and is well defined once it admits a regularization.
This implies the assertion.
\end{proof}
Note that one has to consider the entire weight space for $W$ because it is {\em a priori} invariant under all root factors
in $p_\g(\la)$.
Clearly the statement holds true for $W$ a sum of weight spaces.
\begin{propn}
For any $r\in \C$ the operator $p_\al(r)$, $\al\in \Rm^+$, is well defined on a subspace of weight $\xi$ satisfying
$(\xi,\al^\vee)+r\in \Z_+$,
in any $U_q(\g_+^\al)$-locally finite $U_q(\g^\al)$-module.
\label{easy_case}
\end{propn}
\begin{proof}
Denominators $\prod_{i=1}^{k}[(\xi,\al^\vee)+r+i]_{q_\al}$ in (\ref{translationed_proj}) do not vanish on such weight spaces
($q$ is not a root of unity).
\end{proof}
Although Proposition \ref{easy_case} is rather crude, it proves to be useful.
We also need more delicate criteria, albeit in a more special situation.
\begin{lemma}
\label{reg_fin_dim}
Let $V$ be a finite dimensional $U_q(\g^{\al})$-module, $\al \in \Rm^+$.
For any $r\in \N$ the operator $p_\al(r)$ is well defined on $V$.
\end{lemma}
\begin{proof}
We can assume that $V$ is irreducible.
Let $\mu=\frac{m}{2}\al$, $m \in \Z_+$, be the highest weight of $V$, and $t\in \C$.
The eigenvalue of $p_\al(t+r)$ on the subspace of weight $\mu-l \al$ with $0\leqslant l\leqslant m$ is proportional to $\frac{\prod_{k=1}^{l}[t+r-k]_{q_\al}}{\prod_{k=0}^{l-1}[t+r+m-l-k]_{q_\al}}
$, cf. (\ref{proj_eigen}).
As $t\to 0$, the only factor in the denominator that may vanish is the one corresponding to the non-negative integer $k=m+r -l \leqslant l-1$.
But it
is cancelled by a factor in the numerator unless $r>l$, which contradicts
the previous inequality in view of $l\leqslant m$.
\end{proof}
As a consequence, we obtain the following important special case.
\begin{propn}
The extremal projector $p_\g$ is well defined on every dominant weight space of a locally finite $U_q(\g)$-module $ V$.
\label{reg_loc_fin}
\end{propn}
\begin{proof}
We can assume that $V$ is irreducible. Consider the case $\g=\g^\al\simeq \s\l(2)$ first.
It follows from (\ref{proj_eigen}) that $p_\g(\la)$ is regular on $V$ at $\la=0$.
For all $\xi \in \La(V)$ with $(\xi,\al^\vee)\geqslant 0$ the operator $p_\g(0)$ projects $V[\xi]$
to the space of highest weight.
For general $\g$, all root factors in $p_\g$ are well defined by Lemma \ref{reg_fin_dim} and independent of the normal order
by Proposition \ref{reg_proj}. For each simple $\al$ choose a normal order such that $\al$ is in the left-most position.
Then $p_\g(0)$ has the factor $p_\al(1)$ on the left that maps all $V[\xi]$ with $(\xi,\al^\vee)\geqslant 0$
to $\ker e_\al$.
Therefore the operator $p_\g(0)$ restricted to dominant weight spaces of $V$ takes values in $U_q(\g_+)$-invariants.
\end{proof}
Remark that although $p_\g(0)$ is well defined on every finite dimensional module by Proposition \ref{reg_proj},
non-dominant weight spaces are not generally killed by $p_\g(0)$, so it is not a projector to $U_q(\g_+)$-invariants.
That can already be seen in the example of $\g=\s\l(2)$, by examining (\ref{proj_eigen}) for $\dim V>2$ and $\xi(h)\leqslant -2$.
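For instance, on the lowest weight space of the three-dimensional irreducible $U_q\bigl(\s\l(2)\bigr)$-module one has $l=2$ and $\xi(h)=-2$, so (\ref{proj_eigen}) gives an eigenvalue of $p(t)$ proportional to
$$
\frac{[t-1]_q[t-2]_q}{[t-1]_q[t]_q}=\frac{[t-2]_q}{[t]_q},
$$
which equals $-1$ at $t=\rho_1=1$; hence $p_\g(0)=p(1)$ does not annihilate this vector, although it is not $U_q(\g_+)$-invariant.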
\section{The case of parabolic Verma modules}
\label{Sec_Parabolic}
In this section we compute the extremal twist for tensor products of finite dimensional modules with parabolic Verma modules.
The key issue is regularization of the operator $\Delta(p_\g)$ restricted to a certain subspace in
the tensor product.
We address it while relaxing the assumption that the parabolic modules involved are irreducible.
\subsection{Regularization of extremal projector}
\label{RegExtPr_parabolic}
Let $\k\subset \g$ be a Levi subalgebra, i.e. a reductive Lie algebra of maximal rank
whose basis of simple roots $\Pi_\k$ is a subset in $\Pi_\g=\Pi$.
A weight $\la$ defines a one-dimensional representation $\C_\la$ of $U_q(\k)$ if and only if $q^{\la(h_\al)}=\pm 1$ for
all $\al\in \Pi_\k$.
We can assume plus as the other cases can be covered by tensoring with appropriate one-dimensional
$U_q(\g)$-modules.
Let $\c\subset \h$ denote
the center of $\k$ and $\c^*\subset \h^*$ the subset of weights $\la$ such that $\la(h_\al)=0$ for all $\al\in \Pi_\k$.
Identify the Cartan subalgebra of the semi-simple part of $\k$ with the orthogonal complement of
$\c$ in $\h$. Then the weight lattice $\Lambda_\k$ of $\k$ consists of the weights $\nu\in \h^*$ such that $(\nu, \al^\vee)\in \Z$ for all $\al\in \Pi_\k$, and $(\nu,\la)=0$ for all
$\la\in \c^*$.
Fix $\xi\in \La^+_\k$, $\la \in \c^*$ and set $\zt=\xi+\la$. Denote by $X$ the finite dimensional irreducible $U_q(\k)$-module of highest weight $\xi$.
Fix $Z$ to be the quotient of the Verma module $\hat M_{\zt}$ with the highest vector $1_\zt$ by the sum of the submodules
$U_q(\g)f_\al^{m_\al+1}1_{\zt}$, where $m_\al=(\xi,\al^\vee)\in \Z_+$ for $\al\in \Pi_\k$.
The module $Z$ is locally finite over $U_q(\k)$, \cite{M2}, and $U_q(\k)1_Z\simeq X\tp \C_\la$,
where $1_Z$ is the highest vector of $Z$.
The annihilator $I^-_{Z}\subset U_q(\g_-)$ of the vector $1_Z\in Z$ is independent of $\la$, as is
the left ideal $I^+_Z=\si(I^-_Z)\subset U_q(\g_+)$.
In this section, we set $V^+_Z$ to be the kernel of $I^+_Z$ in $V$.
When $Z$ is irreducible, $V^+_Z$ is the generalized extremal subspace parameterizing singular vectors in $V\tp Z$, as before.
We have $V^+_Z=V^+_X$, where $V$ in the right-hand side is considered as a $U_q(\k)$-module,
so $V^+_X$ is parameterizing singular vectors in $V\tp X$. By ${}^+\!V_Z$ we understand a subspace in $V$
that is dual to $V_Z^+$ with respect to the contravariant form. This notation is also compatible with restriction of
the representation to $U_q(\k)$, that is, ${}^+\!V_Z={}^+\!V_X$.
Denote by $\c^*_{reg}$ the set of weights $\la\in \c^*$
such that $(\la,\al^\vee)\not \in \Z$ for all $\al \in \Rm^+_{\g/\k} =\Rm^+_{\g} - \Rm^+_{\k} $. It is an open subset in $\c^*$
in the Euclidean topology.
Choose a normal ordering on $\Rm_\g^+$ such that $\Rm_{\g/\k}^+<\Rm_\k^+$. Denote by $p_\k(\zt)$ the shifted extremal projector of $U_q(\k)$
and by $p_{\g/ \k}(\zt)$ the product
$$
p_{\g/ \k}(\zt)=\prod_{\mu^i\in \Rm^+_{\g/\k}}^{<}p_{\mu^i}(\rho_i+\zt_i).
$$
Note that $p_\k(\zt)=p_\k(\xi)$ is independent of the summand $\la\in \c^*$.
The factorization
$
p_\g(\zt)=p_{\g/ \k}(\zt)p_{\k}(\xi)
$
facilitates regularization of $p_\g(\zt)$ on
${}^+\!V_Z$ and $(V\tp Z)^+$ as explained next.
\begin{propn}
For any finite dimensional $U_q(\k)$-module $Y$, the operator $p_\k(\xi)$ is well defined and invertible on ${}^+\!Y_X$. Furthermore,
$
\theta_{Y,X}=p_\k^{-1}(\xi)
$.
\end{propn}
\begin{proof}
All weights of ${}^+\!Y_X\tp 1_X$ are dominant when restricted to $\Rm_{\k}$.
Applying Proposition \ref{reg_loc_fin} to (the semi-simple part) of $\k$, we conclude that $p_\k$ is well defined on ${}^+\!Y_X\tp 1_X\subset Y\tp X$
and therefore $p_\k(\xi)$ is well defined on ${}^+\!Y_X$, by Theorem \ref{Shap-proj}.
It is invertible and its inverse equals $\theta_{Y,X}$, since $Y\tp X$ is completely reducible.
\end{proof}
\begin{corollary}
For each $\la \in \c^*_{reg}$ the linear maps
$p_\g(\zt)\colon {}^+\!V_Z\to V$, $v\mapsto p_\g(\zt)v$ and $\pi\colon {}^+\!V_Z\to (V\tp Z)^+$,
$v\mapsto p_\g(v\tp 1_Z)$ are well defined. Furthermore,
$$
p_\g(\zt)v=p_{\g/\k}(\zt)\bar \theta_{V,{X}}v, \quad p_\g(v\tp 1_Z)= p_{\g/\k}(0)p_\k(v\tp 1_Z)
$$
for all $v\in {}^+\!V_Z$.
\label{regularization_parabolic}
\end{corollary}
\begin{proof}
The subspace ${}^+\!V_Z\tp 1_Z$ lies in the finite dimensional $U_q(\k)$-submodule isomorphic to $V\tp X$, so $p_\k$ is well defined on it.
Moreover, $p_{\g/\k}(0)$ is well defined on ${}^+\!V_Z\tp 1_Z$ as none of the denominators in (\ref{translationed_proj}) turns zero.
Now the proof follows from Proposition \ref{twist-cocycle}.
\end{proof}
Remark that in the special case of $\xi=0$ corresponding to a scalar parabolic module $Z$
one can take ${}^+\!V_Z = V^+_Z$. Then $p_\k(\xi)$ is identical on ${}^+\!V_Z$
and drops from the factorization.
\subsection{Extremal twist for parabolic modules}
\label{Sec_Semi-simplicity}
In order to calculate the extremal twist, we first
work out a sufficient condition for parabolic Verma modules to be irreducible, which is fulfilled for generic highest weight.
Note that complete criteria of irreducibility for classical parabolic Verma modules are given in \cite{Jan2}.
We do not appeal to deformation arguments but make use of the relation (\ref{tw-coc-form}) between the inverse invariant pairing
and extremal projector.
The idea of our approach originates from Proposition \ref{lift is singular}.
However, we cannot directly apply the extremal projector to construct a singular vector
in $Z'\tp Z$ ($Z'$ is the opposite parabolic module of lowest weight $-\zt$) since weights of $Z'$ are not bounded from above. We
approximate $Z'$ with a sequence of finite dimensional modules $\{V_\mu\}$ and modify
Proposition \ref{lift is singular} accordingly.
We then construct $\Sc\in Z'\hat \tp Z$ as a projective limit of singular vectors in $V_\mu\tp Z$.
Suppose that $u=u^1\tp u^2\in V\tp Z$ (Sweedler notation) is a singular vector such that $\bar \dt(u)=v\in V$ is not zero.
Define a linear map $\psi_{v}\colon Z\to V$ as $\psi_{v}(z)=u^{1}\langle u^2,z\rangle$, for all $z\in Z$.
It factors through a composition
$Z\to Z^*\to V$, where the first arrow is the contravariant form
regarded as a linear map from $Z$ to its restricted ($U_q(\h)$-locally finite) dual $Z^*$.
\begin{propn}
\label{part_iso}
For any element $f\in U_q(\g_-)$ of weight $-\bt$, $\psi_{v}(f1_Z)=q^{-(\zt+\mu,\bt)}\si(f)v$,
where $\mu$ is the weight of $v$. In particular, $v\in V^+_Z$.
\end{propn}
\begin{proof}
It is sufficient to prove the equality for $f$ a monomial in Chevalley generators.
For simple $\bt \in \Pi$ one has
$$
\bigl(1\tp \omega(f_\bt )\bigr)u=-(1\tp q^{-h_\bt}e_\bt )u=-\bigl(\gm(e_\bt)\tp q^{-h_\bt} \bigr)u=(e_\bt q^{-h_\bt}\tp q^{-h_\bt} )u
=\bigl(\si(f_\bt) \tp 1\bigr)q^{-h_\bt}u.
$$
This implies $\bigl(1\tp \omega(f)\bigr)u=q^{-(\zt+\mu,\bt)}\bigl(\si(f)\tp 1\bigr)u$ for all $\bt$ and all monomials $f$ of weight $-\bt$.
Now the proof is immediate.
\end{proof}
Now regard $\La^+_\k$ as a natural sublattice in $\La^+_\g$ and for fixed $\xi \in \La^+_\k$ define $\c^*_{\xi,\Z}$ as the set of
integral weights
$\xi+\la$ with $\la\in \La_\g\cap \c^*$. In other words, $\c^*_{\xi,\Z}$ is the shift by $\xi$ of the sublattice generated by
the fundamental weights dual to $\Pi_\g^\vee-\Pi_\k^\vee$.
Introduce a partial ordering on $\c^*_{\xi,\Z}$ by setting $\nu\prec \mu$ if $(\nu,\al^\vee)< (\mu,\al^\vee)$ for all $\al\in \Pi_{\g}-\Pi_{\k}$.
Let $\c^*_{\xi,\Z_+} \subset \c^*_{\xi,\Z}$ be the subset of dominant weights.
For $\mu\in \c^*_{\xi,\Z_+}$ set $V_\mu$ to be the finite dimensional $U_q(\g)$-module of lowest weight $-\mu$.
Its highest weight is $-w(\mu)$, where $w$ is the longest element of the Weyl group of $\Rm_\g$.
\begin{propn}
For all $\mu,\nu\in \c^*_{\xi,\Z_+}$, there is an inclusion $Z^+_{V_\mu}\subset Z^+_{V_\nu}$ once $\mu\prec \nu$.
\end{propn}
\begin{proof}
The left ideal $I^+_{V_\mu}$ is generated by $\{e_\al^{m_\al'+1}\}_{\al\in \Pi}$
with $m_\al'=m_{-w(\al)}$, where $m_\bt=(\mu,\bt^\vee)\in \Z_+$, cf. \cite{Jan1}.
Clearly $I^+_{V_\mu}\supset I^+_{V_\nu}$ if $\mu\prec \nu$. Then
$Z^+_{V_\mu}=\ker(I^+_{V_\mu})\subset \ker(I^+_{V_\nu})=Z^+_{V_\nu}$ as required.
\end{proof}
Denote by $J_\mu^+\supset I^+_Z$ the left ideal in $U_q(\g_+)$ annihilating the lowest vector in $V_\mu$. It is generated by $\{e_\al^{m_\al+1}\}_{\al\in \Pi}$
with $m_\al=(\mu,\al^\vee)\in \Z_+$.
There is a $U_q(\g_+)$-equivariant projection
$\wp_{\mu}\colon Z'\to V_{\mu}\simeq U_q(\g_+)/J_\mu^+$.
The following lemma facilitates approximation of $Z'$ with an increasing sequence of $V_\mu$.
\begin{lemma}
\label{weight_spaces}
For each $\bt\in \Z_+\Pi_{\g}$ there exists $\mu \in \c^*_{\xi,\Z_+}$ such that
$\dim V_\mu[-\mu+\bt] = \dim Z'[-\zt+\bt]$.
\end{lemma}
\begin{proof}
It is sufficient to take $\mu$ with $m_\al$ higher than the height of $\bt$ for all $\al\in \Pi_\g-\Pi_\k$.
Then the kernel of $\wp_{\mu}$ has no weight $\bt$.
\end{proof}
\noindent
Since $J_{\nu}^+\subset J_\mu^+$ for $\mu\prec \nu$,
the projection $\wp_\nu$ factorizes to $\wp_\mu=\wp_\nu\circ \wp_{\nu,\mu}$ with
a $U_q(\g_+)$-equivariant projection $\wp_{\nu,\mu} \colon V_{\nu}\to V_{\mu}$.
Lemma \ref{weight_spaces} then implies $\cap_{\mu} J_\mu^+=I^+_Z$, where the intersection
is over $\mu\in \c^*_{\xi,\Z_+}$, and $ Z'$ is a projective limit of $U_q(\g_+)$-modules $V_\mu$.
The lowest vector $v_\mu\in V_\mu$ belongs to $(V_\mu)^+_Z$, and Corollary \ref{regularization_parabolic} implies
that a singular vector $u_\mu= p_{\g}(p_{\g}^{-1}(\zt)v_\mu\tp 1_\zt)$ with $\bar \dt (u_\mu)=v_\mu$ is well defined
for $\la \in \c^*_{reg}$ (it follows from (\ref{proj_eigen}) that $p_{\g/\k}(\zt)$ and therefore $p_{\g}(\zt)$
are invertible for such $\la$).
\begin{corollary}
\label{part_irred}
The module $Z$ is irreducible once $\la \in \c^*_{reg}$.
\end{corollary}
\begin{proof}
Let $\bt \in \Z_+\Pi_{\g}$ be such that $Z[\zt-\bt]\not =\{0\}$. Take $\mu\in \c^*_{\xi,\Z_+}$ sufficiently large so that $V_\mu[-\mu+\bt]\simeq Z'[-\zt+\bt]$ (that is possible in view of Lemma \ref{weight_spaces}).
The map $\psi_{v_\mu}$ is an isomorphism between $Z[\zt-\bt]$ and $V_\mu[-\mu+\bt]$ by Proposition \ref{part_iso}.
Therefore the contravariant form is non-degenerate on $Z[\zt-\bt]$ and hence on all weight subspaces of $Z$.
\end{proof}
\begin{propn}
\label{twist_parab}
Let $Z$ be an irreducible parabolic Verma module of highest weight $\zt=\xi+\la\in \La^+_\k\op \c^*$. For every finite dimensional module $V$,
the extremal twist $\theta_{V,Z}$ is the operator $p_{\k}(\xi)^{-1}p_{\g/\k}(\zt)^{-1}$ restricted to $V^+_Z$.
\end{propn}
\begin{proof}
This is true for $\la\in \c^*_{reg}$ by Corollary \ref{part_irred}.
The operator $\theta_{V,Z}$ is a rational trigonometric function of $\la\in \c^*$ coinciding with $p_{\k}(\xi)^{-1}p_{\g/\k}(\zt)^{-1}$
on an open subset $\c^*_{reg}\subset \c^*$ and therefore on $\c^*$.
\end{proof}
\noindent
As a consequence, we conclude that if $\la\in \c^*$ is a pole of the map
$p_{\g/\k}(\xi+\la)^{-1}\colon V^+_Z\to V$, then the module $Z$
is reducible.
\subsection{Equivariant star product on Levi conjugacy classes}
In this section we give an expression for an equivariant star product on homogeneous spaces with Levi stabilizer subgroup.
Such a space is realized as a conjugacy class, and the Poisson structure is restricted from the Semenov-Tian-Shansky bracket
on the total group. The corresponding star product was constructed in \cite{EEM} with the help of dynamical twist, which
reduces to the inverse contravariant
form, \cite{AL}. While that solves the problem in principle, an explicit expression of the inverse form for a general parabolic
module is unknown. In this section we give an alternative formula for the star product in terms of the extremal projector, which
is completely explicit. The idea of our approach is close to \cite{Khor} (for the special case of $\k=\h$) and
based on relation (\ref{tw-coc-form}).
In this section we assume that $\xi=0$ and $\zt=\la$. For the module $Z$ we take the scalar parabolic Verma module $M_\la$ of highest weight $\la$.
Then $V^+_{M_\la}$ is the space $V^{\k_+}$ of $U_q(\k_+)$-invariants in $V$.
One can check that the contravariant form is non-degenerate when restricted to $V^{\k_+}$, so we can choose
${}^+\!V_{M_\la}=V^{\k_+}$. The projector $p_\k$ reduces to the identity on $V^{\k_+}$, which gives
$p_\g(\la)=p_{\g/\k}(\la)\in \End(V^{\k_+})$.
Let $\Ac$ denote the quantized Hopf algebra of polynomial functions on an algebraic group $G$ with the Lie algebra $\g$.
The quantum group $U_q(\g)$ acts on $\Ac$ by right translations, according to $x\tr a=a^{(1)}(x,a^{(2)})$.
Fix a weight $\la \in \c^*_{reg}$, so that $M_\la$ is irreducible, and let $\Fc\in U_q(\g_+)\hat\tp U_q(\g_-)$ be a lift
of the inverse invariant pairing $M_\la\tp M_\la'\to \C$.
It defines a
bi-differential operator on $\Ac$ by
\be
\label{star_prod0}
\Ac\tp \Ac \stackrel{\Fc }{\longrightarrow} \Ac\tp \Ac \stackrel{\cdot }{\longrightarrow} \Ac,
\ee
where $\cdot$ is the multiplication on $\Ac$. This operation is parameterized by $\la$ and it is known to be associative
when restricted to the subspace $\Ac^\k$ of $U_q(\k)$-invariants in $\Ac$.
Denote by $\Phi$ the composition map
$$
\Ac\tp \Ac\tp M_\la\stackrel{\langle 1_\la, \>. \>\rangle}{\longrightarrow} \Ac\tp \Ac \stackrel{\cdot }{\longrightarrow} \Ac,
$$
where the left arrow is the contravariant pairing of the $M_\la$-factor with the highest vector $1_\la$.
The formula (\ref{tw-coc-form}) in combination with regularization of Section \ref{RegExtPr_parabolic}
gives the following presentation of the star product in terms of the Zhelobenko cocycle.
\begin{propn}
The star-product on $\Ac^\k$ restricted from (\ref{star_prod0}) can be presented as
\be
f\star g= \Phi \Biggl( p_{\g/\k}(0)\Bigl(p_{\g/\k}^{-1}(\la)f\tp p_{\g/\k}(0)
\bigl(p_{\g/\k}^{-1}(\la) g\tp 1_\la \bigr)\Bigr)\Biggr),\quad f,g\in \Ac^\k,
\label{star_prod}
\ee
where the action of $U_q(\g)$ on $\Ac$ is $\tr$.
\end{propn}
\begin{proof}
Given a finite dimensional module $V$ and $v\in V^{\k}\subset V^{\k_+}$, one has
$$\Fc(v\tp 1_\la) = p_{\g/\k}(0)\bigl(p_{\g/\k}^{-1}(\la)v\tp 1_\la\bigr).$$
The vector on the right-hand side is $U_q(\k)$-invariant and generates a submodule isomorphic to $M_\la\subset V\tp M_\la$, so
one can iterate this operation with $w\in W^\k$ for another finite dimensional module $W$ and get a vector in $W\tp V\tp M_\la$.
Pairing of the $M_\la$-factor
with $1_\la$ is $U_q(\k)$-invariant and yields a tensor $\Fc (w\tp v)\in (W\tp V)^\k$.
Now take $f$ and $g$ from $\Ac^\k$, which is a direct sum of finite dimensional modules thanks to the
Peter-Weyl decomposition. Then
$$
\Fc\bigl(f\tp \Fc(g\tp 1_\la )\bigr)
=p_{\g/\k}(0)\Bigl(p_{\g/\k}^{-1}(\la)f\tp p_{\g/\k}(0)\bigl(p_{\g/\k}^{-1}(\la) g\tp 1_\la \bigr)\Bigl).
$$
Applying $\Phi$ yields $f\star g$ on the left-hand side.
\end{proof}
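As a simple consistency check of (\ref{star_prod}), take $g=1\in \Ac^\k$, assuming the normalization $\Fc\in 1\tp 1+\cdots$, where the omitted terms carry non-zero weights. Since $1$ is killed by all $e_\al\tr$ and $f_\al\tr$, the inner factor $p_{\g/\k}(0)\bigl(p_{\g/\k}^{-1}(\la)\,1\tp 1_\la\bigr)$ reduces to $1\tp 1_\la$, and the unit in the middle leg is transparent to the remaining projectors, so that
$$
f\star 1= \Phi \Bigl( p_{\g/\k}(0)\bigl(p_{\g/\k}^{-1}(\la)f\tp 1\tp 1_\la \bigr)\Bigr)
=\Phi\bigl(\Fc_1\tr f\tp 1\tp \Fc_2 1_\la\bigr)
=(\Fc_1\tr f)\,\langle 1_\la,\Fc_2 1_\la\rangle=f,
$$
because $\Fc_2\in U_q(\g_-)$ and only the weight-zero term $1\tp 1$ of $\Fc$ survives the pairing with $1_\la$.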
\section{Application to vector bundles on quantum spheres}
We conclude this presentation by illustrating Theorems \ref{canonical} and \ref{Shap-proj}
with an example relevant to the quantum even sphere, \cite{M2}.
Here $Z$ is fixed to a base module that supports the quantization
of $\C[\Sbb^{2n}]$ as a subalgebra in $\End_\C(Z)$, \cite{M3}. The module $V$ varies over all equivalence classes of finite dimensional quasi-classical irreducible representations of $U_q\bigl(\s\o(2n+1)\bigr)$.
Unlike in Section \ref{Sec_Parabolic}, the subspaces $V^+_Z\subset V$ are hard to evaluate while their reciprocals
$Z^+_V\subset Z$ are known from \cite{M2}, which enables us to compute $\theta_{Z,V}$ via (\ref{proj_eigen}) and (\ref{factorization}).
Thus Theorem \ref{canonical} benefits from alternative parameterizations of singular vectors
that prove to be most convenient for
particular calculations.
In this section, we fix $\g=\s\o(2n+1)$ and $\k=\s\o(2n)\subset \g$. Note that
there is no natural quantization of $U(\k)$ as a subalgebra in $U_q(\g)$, contrary to the case of Levi $\k$.
Let $\{\ve_i\}_{i=1}^n$ denote the orthonormal basis of short roots in $\Rm^+$. We
enumerate the basis of simple positive roots as $\al_n=\ve_n-\ve_{n-1},\ldots, \al_2= \ve_2-\ve_{1}, \al_1=\ve_1$.
We choose $\la\in \h^*$ such that $q^{2(\la,\ve_i)}=-q^{-1}$ for all $i=1,\ldots, n$
and define $Z$ as the module of highest weight $\la$ whose canonical generator $1_Z$ is annihilated by $f_{\al_i}$ with
$i>1$ and by $[[f_{\al_2},f_{\al_1}]_q,f_{\al_1}]_{\bar q}$, $\bar q=q^{-1}$.
Set
$e_{\ve_{1}}=e_{\al_{1}}$ and $f_{\ve_{1}}=f_{\al_{1}}$ and
furthermore
$$
e_{\ve_{i+1}}=[e_{\al_{i+1}},e_{\ve_i}]_{q}, \quad
f_{\ve_{i+1}}=[f_{\ve_i}, f_{\al_{i+1}}]_{\bar q}
$$
for $1\leqslant i\leqslant n-1$.
Weight vectors $ f_{\ve_1}^{m_1}\ldots f_{\ve_{n}}^{m_{n}}1_Z$ with $m_i$ taking all
possible values in $\Z_+$ deliver an orthogonal basis in $Z$, \cite{M3}.
The module $Z$ is a quotient of a parabolic Verma module relative to the Levi subalgebra $\l\subset \g$
with the basis of simple roots $\Pi_\l=\{\al_i\}_{i=2}^n$. Therefore it is locally finite over $U_q(\l)$.
Fix a finite dimensional $U_q(\g)$-module $V$ of highest weight $\nu$ and put $\ell_i=(\nu,\al_i^\vee)\in \Z_+$, $i=1,\ldots, n$.
The ideal $I^+_V$ determining $Z^+_V=\ker I^+_V\subset Z$ is generated by $\{e_{\al_i}^{\ell_i+1}\}_{i=1}^n$.
There is an orthogonal decomposition $Z=Z^+_V\op \omega(I^+_V)Z$
with
$$
Z^+_V =\Span\{ f_{\ve_1}^{m_1}\ldots f_{\ve_{n}}^{m_{n}}1_Z\}_{m_1\leqslant \ell_1, \ldots, m_n\leqslant \ell_n },\quad
\omega(I^+_V)Z =\Span\{ f_{\ve_1}^{k_1}\ldots f_{\ve_{n}}^{k_{n}}1_Z\}_{k_1, \ldots, k_n},
$$
where $k_i>\ell_i$ for some $i=1,\ldots,n$.
The weight $\nu$ is expanded in the orthogonal basis $\{\ve_i\}_{i=1}^n$ as
$$
\nu=\frac{\ell_1}{2}\sum_{i=1}^{n}\ve_i+\sum_{i=2}^{n}\ell_i\sum_{j=i}^n\ve_j,
\quad
(\nu,\ve_k)=\frac{\ell_1}{2}+\sum_{i=2}^{n}\ell_i\sum_{j=i}^n\dt_{j,k}
=\frac{\ell_1}{2}+\sum_{i=2}^{k}\ell_i, \quad k=1,\ldots, n.
$$
\begin{propn}
For any quasi-classical finite dimensional module $V$, the extremal projector $p_\g=p_{\g/\l}p_\l$ is well defined
on $1_V\tp Z^+_V$.
\label{proj_defined_on VM}
\end{propn}
\begin{proof}
Denote by $W=\sum_{\xi\in \La(Z^+_V)} (V\tp Z)[\nu+\xi]$ the sum of weight spaces in $V\tp Z$ of all weights of singular vectors. It contains $1_V\tp Z^+_V$ as a vector subspace.
We will show that all factors $p_\al(t)$, $\al \in \Rm^+$, are well defined at $t=(\rho,\al^\vee)$ on
$W$.
That is true for $\al\in \Rm^+_\l$ since $Z$ is locally finite over $U_q(\l)$. Moreover,
$p_\l(0) W$ is in $U_q(\l_+)$-invariants since all weights of $W$ are
$\Rm_\l$-dominant (by virtue of Proposition \ref{reg_loc_fin} for $\g=\l$). So we can further assume $\al \in \Rm^+_{\g/\l}$.
Present $\xi\in \La(Z^+_V)$ as $\xi=\la-\sum_{i=1}^{n}m_i\ve_i$ with $m_i\leqslant \ell_i$.
For $\al=\ve_i+\ve_j$ with $i<j$ we find
$$
[(\nu+\xi+\rho,\al^\vee)]_{q_\al}=[\ell_1+\sum_{l=2}^{i}\ell_l+\sum_{l=2}^{j}\ell_l-m_i-m_j +i+j-2]_q.
$$
The integer in the square brackets on the right-hand side is positive, hence $p_\al(\rho_\al)$ is well defined, by Proposition \ref{easy_case}.
For short roots $\al=\ve_i$, $i=1,\ldots,n$, the expression $[(\nu+\xi+\rho,\al^\vee)+k]_{q_\al}$
does not turn zero at all $k\in \Z$ as
it is proportional to $q^{\frac{1}{2}+k'}+q^{-\frac{1}{2}-k'}$ for some integer $k'$ ($q$ is not a root of unity). So
the series (\ref{translationed_proj}) for $p_\al(t)$ is regular at $t=(\rho,\al^\vee)$.
This also proves that the extremal projector $p_{\al_1}(\rho_1)$ is well defined on $W$.
Finally, $p_\g(0)W$ is annihilated by each $e_{\al}$ with $\al\in \Pi$ since one can choose a normal order with $\al$ on the left.
\end{proof}
As all weights of $Z$ are multiplicity free, we can write, up to a non-zero factor:
$$\theta_{Z,V} w\propto \prod_{\al\in \Rm_{\g/\l}^+}\prod_{k=1}^{l_{\xi,\al}}\frac{[(\nu+\rho+\xi,\al^\vee)+k]_{q_\al}}{[(\nu+\rho,\al^\vee)-k]_{q_\al}}w,
\quad w\in Z^+_V[\xi],
$$
where $l_{\xi,\al}=\max\{l \in \Z:e_\al^lw\not =0 \}$. This is a corollary of the formula
(\ref{proj_eigen}). In particular, for $\xi=\la-\sum_{i=1}^{n}m_i\ve_i$ we have
$l_{\xi, \ve_{i}}=m_i$ and $l_{\xi, \ve_j+\ve_{i}}=\min(m_j,m_i)$,
where $i\not =j$.
Introduce the shortcuts $\phi_{\xi,\al,k}$ for $\frac{[(\nu+\rho+\xi,\al^\vee)+k]_{q_\al}}{[(\nu+\rho,\al^\vee)-k]_{q_\al}}$.
Then
$$
\det(\theta_{Z,V})\propto\prod_{\xi}\prod_{\al\in \Rm_{\g/\l}^+}\prod_{k=1}^{l_{\xi,\al}}\phi_{\xi,\al,k},
\quad \mbox{where} \quad
\xi\in \{\la-\sum_{i=1}^{n}m_i\ve_i\}_{m_i\leqslant \ell_i}.
$$
Note that factors corresponding to roots $\al \in \Rm_\l^+$ are absent in the product because the operator
$p_\l(\nu)$ is invertible on $Z^+_V$ due to local finiteness of $Z$ with respect to $U_q(\l)$.
\begin{propn}
The operator $\theta_{Z,V}$ is invertible.
\end{propn}
\begin{proof}
We should prove that $\phi_{\xi,\al,k}\not =0$ for all $\al \in \Rm^+_{\g/\l}$.
For short $\al$, neither the denominator nor the numerator in $\phi_{\xi,\al,k}$ turns zero since
they are of the form $[(\la,\al^\vee)+k]_{q^\frac{1}{2}}$ with $k\in \Z$, cf. the proof of Proposition \ref{proj_defined_on VM}. So we have to check it only for $\al=\ve_i+\ve_j\in \Rm^+_{\k/\l}$, $i\not =j$.
Then
$$
\phi_{\xi,\al,k}=\frac{[\ell_1+\sum_{l=2}^{i}\ell_l+\sum_{l=2}^{j}\ell_l-m_i-m_j +i+j-2+k]_q}{[\ell_1+\sum_{l=2}^{i}\ell_l+\sum_{l=2}^{j}\ell_l+i+j-2-k]_q}
$$
does not vanish since $k\leqslant l_{\xi,\al}=\min\{m_i,m_j\}\leqslant \min\{\ell_i,\ell_j\}.$
\end{proof}
\begin{corollary}
For any quasi-classical finite dimensional $U_q(\g)$-module $V$, the tensor product $V\tp Z$ is completely reducible.
\end{corollary}
The irreducible components of $V\tp Z$ are pseudo-parabolic modules described in \cite{M2}.
\section{Introduction}
\label{sec:introduction}
A superconductor in a magnetic field parallel to its surface can be metastable to flux penetration up to a (mis-named) \emph{superheating field} $H_{\text{sh}}$, which is above the field at which magnetism would penetrate in equilibrium ($H_{\text{sh}}>H_c$ and $H_{\text{sh}}>H_{c1}$ for type-I and type-II superconductors, respectively). Radio-frequency cavities used in current particle accelerators routinely operate in this metastable regime, which has prompted recent attention to theoretical calculations of this superheating field~\cite{catelani08, transtrum11}. The first experimental observation of the superheating field dates back to 1952~\cite{garfunkel52}, and a quantitative description was given early on by Ginzburg in the context of Ginzburg-Landau (GL) theory~\cite{ginzburg58}. Since then, there have been many calculations of the superheating field within the realm of GL~\cite{kramer68, gennes65, galaiko66, kramer73, fink69, christiansen69, chapman95, dolgert96}. In particular, Transtrum et al.~\cite{transtrum11} studied the dependence of the superheating field on the GL parameter $\kappa$. Here we use their results and simple re-scaling arguments to study the effects of material anisotropy on the superheating field of layered superconductors.
The layered structure of many unconventional superconductors is not only linked with the usual high critical temperatures of these materials; it has also turned anisotropy effects from small corrections into dominant properties~\cite{tinkham96}. For instance, the critical current of polycrystalline magnesium diboride is known to vanish far below the upper critical field, presumably due to anisotropy of the grains (the boron layers inside each grain start superconducting at different temperatures, depending on the angle between the grain layers and the external field)~\cite{patnaik01, eisterer03}. Cuprates, such as BSCCO, exhibit even more striking anisotropy, with the upper critical field varying by two orders of magnitude depending on the orientation of the crystal with respect to the direction of the applied magnetic field~\cite{tinkham96}.
One would expect that such anisotropic crystals also display strong anisotropy in the superheating field. Here we show that this is typically not true near the critical temperature. Type II superconductors, which often display strongly anisotropic properties, also have a large ratio between penetration depth and coherence length (the GL parameter $\kappa$), which, as we shall see, considerably limits the effects of the Fermi surface anisotropy on the superheating field. At low temperatures, heuristic arguments suggest that crystal anisotropy might be important for the superheating field of multi-band superconductors, such as MgB${}_2$ (section~\ref{sec:mgb2}).
It is usually convenient to characterize crystalline anisotropy by the ratio of the important length scales of superconductors, within Ginzburg-Landau theory,
\begin{eqnarray}
\gamma = \frac{\lambda_c }{ \lambda_a } = \frac{ \xi_a }{ \xi_c } = \sqrt{\frac{m_c}{m_a}},
\label{eq:anisotropy}
\end{eqnarray}
where $\lambda$ is the penetration depth, $\xi$ is the coherence length, $m$ is the effective mass, and the indices $c$ and $a$ are associated with the layer-normal axis $\bm{c}$, and an in-plane axis, respectively. Note that $\lambda_i$ is associated with the screening by supercurrents flowing along the $i$-th axis~\cite{tinkham96}. Hence for a magnetic field parallel to a flat surface of superconductor, $\lambda=\lambda_c$ only when $\bm{c}$ is perpendicular to both the magnetic field and the surface normal; counterintuitively, $\lambda=\lambda_a$ for $\bm{c}$ parallel to the magnetic field or the surface normal. In this paper, we show that the anisotropy of $H_{\text{sh}}$ is larger for larger $\gamma$ and smaller $\kappa_\parallel$, and behaves asymptotically as: $H_{\text{sh}}^\parallel / H_{\text{sh}}^\perp \approx 1$ for $\kappa_\parallel \gg 1 / \gamma$, and $H_{\text{sh}}^\parallel / H_{\text{sh}}^\perp \approx \gamma^{1/2}$, for $\kappa_\parallel \ll 1$. We begin with two simple qualitative calculations that motivate the two limiting regimes intuitively. We shall then turn to the full GL calculation, which we map, using a suitable change of variables and rescaling of the vector potential, onto an isotropic free energy, and discuss the implications of these results for several materials. We then discuss a generalization of our simple estimates for MgB${}_2$ at lower temperatures, using results from a two-gap model, and make some concluding remarks.
\section{Simple estimations of $H_{\text{sh}}$ in the large and small-$\kappa$ regimes}
\label{sec:estimations}
In this section, we discuss two simple arguments to motivate and estimate
the superheating field for both isotropic and anisotropic superconductors.
These complement the systematic calculation within Ginzburg-Landau theory
presented in section~\ref{sec:gl}. Our first estimate applies both to small
and large $\kappa$ superconductors; for large $\kappa$, it discusses the initial
entry of the core of a vortex into the superconductor. The second estimate
(for large $\kappa$, generalizing Bean and Livingston~\cite{bean64}), discusses
the field needed to push the core from near the surface into the bulk of the
superconductor, fighting the attraction of the vortex to the surface.
Both methods yield estimates for the superheating field that
are compatible, up to an overall factor, with the estimates of anisotropic
Ginzburg-Landau theory of section~\ref{sec:gl}. However, we shall discuss
qualitative differences between the sinusoidal modulations at $H_{\text{sh}}$
predicted by linear stability theory and the unsmeared vortices used in
these two simple pictures. Indeed, we shall see in section~\ref{sec:mgb2}
that these two pictures, and a plausible but uncontrolled linear stability
analysis, give {\em different} predictions for the anisotropy in the most
important immediate application, magnesium diboride.
Let the superconductor occupy the half space $x>0$, and the magnetic field $\bm{H}$ be parallel to the $z$ axis. Figure~\ref{fig:vortexNucleation} illustrates vortex nucleation in a type-II superconductor for this configuration. With this choice for the system geometry, we neglect effects of field bending over sample corners, which can play a very important role in the flux penetration of real samples. However, we note that these effects are not appreciable for RF cavities for particle accelerators, which have an approximate cylindrical shape in the regions of high magnetic fields.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{Fig/fig1.eps}
\caption{(Color online) Illustrating vortex nucleation in a type-II superconductor occupying the half space $x>0$, and subject to a magnetic field parallel to the vacuum-superconductor interface.%
\label{fig:vortexNucleation}}
\end{figure}
Let us start with a heuristic estimate of the superheating field for type I superconductors. At an interface between superconductor and insulator (or vacuum), the order parameter $\psi$ is not suppressed; however, if we push a slab of magnetic field into the superconductor thick enough to force the surface to go normal and $\psi\rightarrow 0$, the superconductivity will be destroyed over a depth $\xi$, the coherence length of the SC, with energy cost per unit area $[{H_c}^2 / (8\pi)] \xi_i$, with $i=a$ and $i=c$, for $\bm{c}\parallel z $ and $\bm{c} \perp z$, respectively. The necessary width of the magnetic slab should be set by the Meissner magnetic penetration depth $\lambda$, with approximate energy gain per unit area, given by the magnetic pressure times the depth, or: $[H_{\text{sh}} / (4\pi)] (H_{\text{sh}} \lambda_i)$. Thus $H_{\text{sh}} / (\sqrt{2}H_c) \approx (1/2) (\lambda_i / \xi_i)^{-1/2} = (1/2) {\kappa_i}^{-1/2}$, which is close to the exact result: $H_{\text{sh}} / (\sqrt{2}H_c)(\kappa \ll 1) = 2^{-3/4} \kappa^{-1/2}$ for isotropic Fermi surfaces~\cite{transtrum11}. The anisotropy of the superheating field is then proportional to $\gamma^{1/2}$, assuming $\kappa \ll 1$ for $\bm{c}$ parallel and perpendicular to the magnetic field.
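To make the scaling explicit, note that with $\kappa_\parallel = \lambda_a / \xi_a$ and $\kappa_\perp = \lambda_a / \xi_c = \gamma \, \kappa_\parallel$ (anticipating the notation of section~\ref{sec:gl}), this estimate gives
\begin{eqnarray}
\frac{H_{\text{sh}}^\parallel}{H_{\text{sh}}^\perp} \approx
\frac{(1/2)\, {\kappa_\parallel}^{-1/2}}{(1/2)\, (\gamma \, \kappa_\parallel)^{-1/2}} = \gamma^{1/2}.
\end{eqnarray}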
\begin{figure}[!ht]
(a) \par\smallskip
\centering
\includegraphics[width=\linewidth]{Fig/fig2a.eps}
\\
(b) \par\smallskip
\centering
\includegraphics[width=\linewidth]{Fig/fig2b.eps}
\caption{(Color online) (a) Illustration of the penetration of a vortex core into a type-II anisotropic superconductor with anisotropy axis $\bm{c} \parallel z$ (perpendicular to the plane of the figure). (b) Vortex and vortex core acquire an ellipsoidal shape when $\bm{c}$ lies in the $xy$ plane. Here the superconductor surface lies horizontally and vertically, for $\bm{c}$ parallel to $x$ and $y$, respectively. The magnetic field is parallel to $z$ for both (a) and (b). We can estimate the superheating field from the calculation of the work necessary to push a vortex core into the superconductor, thus destroying the Meissner state. For anisotropic vortices, the superheating field turns out to be proportional to the area of the green (black) boxes for the superconductor boundary surface parallel (perpendicular) to the $\bm{c}$ axis. These estimates simplify the calculations of Bean and Livingston~\cite{bean64}, which consider the vanishing of the surface energy barrier felt by a single penetrating vortex in a type-II superconductor. More generally, the Ginzburg-Landau approach takes into account the cooperative effects due to the penetration of multiple vortices~\cite{transtrum11}. %
\label{fig:HshEstimation}}
\end{figure}
For type II superconductors, consider the penetration of a vortex core into the superconductor, as illustrated in FIG.~\ref{fig:HshEstimation}a. The vortex and vortex core correspond to the blue and red regions, respectively. The magnetic field $\bm{H}$ is again parallel to $z$ (perpendicular to the plane of the figure), and the anisotropy axis $\bm{c}$ is either parallel (FIG.~\ref{fig:HshEstimation}a) or perpendicular (FIG.~\ref{fig:HshEstimation}b) to $z$; the gray region in FIG.~\ref{fig:HshEstimation}a illustrates a superconductor occupying the semi-infinite space $x>0$. Vortex and vortex core acquire an ellipsoidal shape when $\bm{c}$ lies in the $xy$ plane (FIG.~\ref{fig:HshEstimation}b); here the superconductor surface lies horizontally and vertically when $\bm{c} \parallel x$ and $\bm{c}\parallel y$, respectively. We can estimate the superheating field by comparing the work (per unit length) that is necessary to push a vortex core into the superconductor (thus destroying the Meissner state) with the condensation energy:
\begin{eqnarray}
\frac{H_{\text{sh}}}{4\pi} \Delta H \approx \frac{{H_c}^2}{8\pi} S_{\text{vc}},
\label{eq:hshtype2_eq1a}
\end{eqnarray}
where $S_{\text{vc}}$ is the area of the vortex core (red region in FIG.~\ref{fig:HshEstimation}), and $\Delta H$ is given by
\begin{eqnarray}
\Delta H = \frac{\Phi_0}{S_{\text{v}}} S_{\Delta},
\label{eq:hshtype2_eq1b}
\end{eqnarray}
where $\Phi_0$ is the fluxoid quantum~\cite{tinkham96}. $S_{\text{v}}$ is the total sectional area of the vortex; e.g. $S_{\text{v}} = \pi \, \lambda^2$ for isotropic superconductors. $S_\Delta$ is the amount of vortex area that penetrates when the vortex core is pushed into the superconductor; it is approximately equal to the areas of the green, black, and orange dashed rectangles in FIG.~\ref{fig:HshEstimation}, for $\bm{c} \parallel x$, $y$ and $z$, respectively. Table~\ref{tab:vortices} shows equations for $S_{\text{v}}$, $S_{\text{vc}}$ and $S_\Delta$ in terms of the penetration and coherence lengths, with $\bm{c}$ parallel to each cartesian axis. Equations (\ref{eq:hshtype2_eq1a}-\ref{eq:hshtype2_eq1b}) then read:
\begin{eqnarray}
H_{\text{sh}} = \frac{{H_c}^2 \pi^2}{8 \, \Phi_0} \times
\begin{cases}
\lambda_c \, \xi_c, & \text{if } \bm{c} \parallel y, \\
\lambda_a \, \xi_a, & \text{if } \bm{c} \parallel x \text{ or } z.
\end{cases}
\label{eq:estimateHsh}
\end{eqnarray}
Interestingly, for $\bm{c} \parallel y$, the penetrating vortex area is the area of the \emph{black} dashed box (see FIG.~\ref{fig:HshEstimation}b), whereas the superheating field is proportional to the area of the dashed \emph{green} box. Conversely, for $\bm{c} \parallel x$, the penetrating vortex area is the area of the \emph{green} dashed box, whereas the superheating field is proportional to the area of the dashed \emph{black} box. Within GL theory, $\lambda_a \, \xi_a = \lambda_c \, \xi_c$, suggesting that the superheating field is isotropic. Plugging $\Phi_0=2\,\sqrt{2}\, \pi H_c \, \lambda_i \, \xi_i$ into Eq. \eqref{eq:estimateHsh}, we find $H_{\text{sh}} / (\sqrt{2} H_c) \approx 0.1$, which is independent of $\kappa$, as in the exact calculations for isotropic Fermi surfaces~\cite{transtrum11}, but off by an overall factor of five from the linear stability results: $H_{\text{sh}} / (\sqrt{2} H_c) (\kappa \gg 1)\approx 0.5$. In section~\ref{sec:gl}, we show that $H_{\text{sh}}$ is isotropic within GL for $\kappa \gg 1$. In section~\ref{sec:mgb2}, we discuss recent work at lower temperatures using the two-band model for MgB${}_2$, which then suggests a substantial anisotropy.
\begin{table}[h]
\centering
\begin{tabular}{ | l | c | c | c | }
\hline
& $\bm{c} \parallel x$ & $\bm{c} \parallel y$ & $\bm{c} \parallel z$ \\ \hline
$S_{\text{v}}$ & $\pi {\lambda_a} \lambda_c$ & $\pi {\lambda_a} \lambda_c$ & $\pi {\lambda_a}^2$ \\
$S_{\text{vc}}$ & $\pi {\xi_a} \xi_c$ & $\pi {\xi_a} \xi_c$ & $\pi {\xi_a}^2$ \\
$S_{\Delta}$ & $4 \, \lambda_c \, \xi_c$ & $4 \, \lambda_a \, \xi_a$ & $4 \, \lambda_a \, \xi_a$ \\
\hline
\end{tabular}
\footnotesize
\caption{Area of the vortex ($S_{\text{v}}$), area of the vortex core ($S_{\text{vc}}$) and approximate penetrating field area ($S_{\Delta}$; area of the dashed rectangles in FIG.~\ref{fig:HshEstimation}) for $\bm{c}$ parallel to each Cartesian axis.}
\label{tab:vortices}
\end{table}
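As an illustration of how Eq. \eqref{eq:estimateHsh} follows, take $\bm{c} \parallel y$: inserting the corresponding entries of Table~\ref{tab:vortices} into Eqs. (\ref{eq:hshtype2_eq1a}-\ref{eq:hshtype2_eq1b}) gives
\begin{eqnarray}
\Delta H = \frac{4 \, \Phi_0 \, \xi_a}{\pi \, \lambda_c}, \qquad
H_{\text{sh}} = \frac{{H_c}^2 \, S_{\text{vc}}}{2 \, \Delta H} = \frac{{H_c}^2 \pi^2}{8 \, \Phi_0} \, \lambda_c \, \xi_c;
\end{eqnarray}
the other two cases follow in the same way.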
After the vortex core penetrates the superconductor, the vortex is
subject to an attractive force toward the interface due to the
boundary condition (there is no normal current at the surface).
Bean and Livingston~\cite{bean64} used this to give a second
simple, intuitive estimate of the superheating field.
They model this force as an interaction with an `image vortex' of
opposite sign outside the superconductor, starting the vortex center (somewhat
arbitrarily) at a distance $x=\xi$ from the interface -- precisely where
our estimate left the vortex. The superheating field is set by the competition
between magnetic pressure and the attractive long-range force.
This leads to the equation
\begin{equation}
H_\text{sh} = \frac{\Phi_0}{4 \pi} \frac{1}{ \lambda \, \xi}.
\label{eq:BLestimate}
\end{equation}
Using the GL relation: $\Phi_0 = 2 \sqrt{2} \, \pi \, H_c \, \lambda \, \xi$,
one finds $H_{\text{sh}} \approx 0.71 H_c$. How can we incorporate
crystal anisotropy into this simple calculation? If vortex and vortex core
have the same shape, we can use Eq.~\eqref{eq:changeVariables} to map
the anisotropic system into an isotropic one with $\xi_y$ and
$\lambda_x$ replacing $\xi$ and $\lambda$. This mapping preserves
magnetic fields, but not loop areas in the $xy$ plane, so that the
fluxoid quantum $\Phi_0$ rescales to $\tilde{\Phi}_0 = (\xi_y/\xi_x)
\Phi_0$ under this change of coordinates. Thus,
$H_{\text{sh}}=\tilde{\Phi}_0/(4 \pi \lambda_x \, \xi_y) = \Phi_0/(4 \pi
\lambda_x \, \xi_x) \approx 0.71 H_c$, which is isotropic and compatible
with the first simple argument, and the results in the next section for
the large $\kappa$ limit of the anisotropic GL theory.
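Explicitly, combining Eq.~\eqref{eq:BLestimate} with the GL relation for $\Phi_0$ gives
\begin{eqnarray}
H_\text{sh} = \frac{2 \sqrt{2} \, \pi \, H_c \, \lambda \, \xi}{4 \pi \, \lambda \, \xi} = \frac{H_c}{\sqrt{2}} \approx 0.71 \, H_c,
\end{eqnarray}
independently of the crystal orientation.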
It is interesting and convenient that these two fields (condensation
energy associated with vortex core nucleation and attractive force due
to the boundary conditions) are of the same scale. Bean and Livingston's
estimate results in $H_{\text{sh}}/ H_c = 0.71$, of the same form as our
estimate but larger and closer to the true GL calculation
$H_{\text{sh}}/ H_c = 0.75$. However, we should mention that while the
{\em field} needed to push the core into the superconductor is close to
that needed to push the vortex past the attractive force towards the
`image-vortex', the two mechanisms contribute very differently to the
energy barrier. Bean and Livingston's force can act on a scale longer by a
factor $\kappa = \lambda/\xi$ than our core nucleation, and will
dominate the barrier height for $H$ near $H_{c1}$.
How is GL different from these two simple pictures? First, the GL calculation
incorporates both the initial core penetration and the long-range attractive
force. Second, it accounts for the cooperative effects of multiple vortices
entering at the same time. Third, and perhaps most important, the physical
picture near $H_{\text{sh}}$ is quite different. As discussed
in~\cite{transtrum11}, the wavelength of the sinusoidal instability within
GL theory is $2\pi/k_c \propto \kappa^{1/4} \, \xi$. The single vortex in
our model and in Bean and Livingston's has a sharp core of size $\xi$; the correct
linear-stability result has the superconducting order parameter varying
smoothly over a length larger by a factor of $\kappa^{1/4}$. We shall see in
section~\ref{sec:mgb2} that taking these three basic methods outside
the realm where GL theory is valid yields three quite different predictions
for the anisotropy in the superheating field.
\section{Ginzburg-Landau theory of the superheating field anisotropy}
\label{sec:gl}
Let us flesh out these intuitive limits into a full calculation. A phenomenological generalization of GL theory that incorporates the anisotropy of the Fermi surface was initially proposed by Ginzburg~\cite{ginzburg52}, and revisited later, using the microscopic theory, by several authors~\cite{caroli63, gorkov64, tilley65}. In this approach, the gauge-invariant derivative terms are multiplied by an anisotropic effective mass \emph{tensor} that depends on integrals over the Fermi surface (see e.g. Eq. 2 of Ref.~\cite{tilley65}). The mass tensor is a multiple of the identity matrix for cubic crystals, such as Nb, Nb${}_3$Sn and NbN, which are candidates for the next generation of superconducting accelerator cavities. In this case, the dominant effects of the Fermi surface anisotropy are higher-order multipoles, which may be added using, e.g. nonlocal terms of higher gradients~\cite{hohenberg67}. On the other hand, as should be anticipated, mass anisotropy can lead to important effects in layered superconductors, such as MgB${}_2$ and some iron-based superconductors (also considered for RF cavities), at least insofar as the GL formalism is accurate. Simple arguments within GL theory can be used to show that the anisotropy of the upper-critical and lower-critical fields is proportional to $\gamma$; i.e. $H_{c2}^\perp / H_{c2}^\parallel = \gamma = H_{c1}^\parallel / H_{c1}^\perp$, where the perpendicular (parallel) symbol indicates that the applied magnetic field is perpendicular (parallel) to the $\bm{c}$ axis. The effects of Fermi surface anisotropy on the properties of superconductors have been theoretically studied by many authors~\cite{ginzburg52, gorkov64, tilley65, hohenberg67, daams81, kogan03}.
One possible generalization of the Ginzburg-Landau free energy to incorporate anisotropy effects has been written down by Tilley~\cite{tilley65}:
\begin{eqnarray}
f_s - f_n &=& \sum_{i,j \in \{x,y,z\}} \frac{1}{2\,m_{ij}} \left(-\frac{\hbar}{i} \frac{\partial \psi^*}{\partial x_i} - \frac{e^*}{c} A_i \psi^* \right)
\nonumber \\ && \quad
\times \left(\frac{\hbar}{i} \frac{\partial \psi}{\partial x_j} - \frac{e^*}{c} A_j \psi \right)+ \alpha |\psi|^2 + \frac{\beta}{2} |\psi|^4
\nonumber \\ && \quad
+ \frac{\left(\bm{H}_a-\nabla \times \bm{A} \right)^2}{8 \pi},
\label{free_energy_1}
\end{eqnarray}
where $f_s$ and $f_n$ are the free energy densities of the superconducting and normal phases, respectively; $\psi$ is the superconductor order parameter, $\bm{A}$ is the vector potential, and $\bm{H}_a$ is an applied magnetic field. Anisotropy is incorporated in the effective mass tensor $M = ((m_{ij}))$, whose components can be conveniently expressed as a ratio of integrals over the Fermi surface (see Eq. (2) of Ref.~\cite{tilley65}). $e^*$ is the effective charge, $\alpha$ and $\beta$ are energy constants, and $\hbar$ and $c$ are Planck's constant (divided by $2\pi$) and the speed of light, respectively. The thermodynamic critical field is given by~\cite{tinkham96}: $H_c = \sqrt{4 \pi \alpha^2 / \beta}$, independent of mass anisotropy. Eq. \eqref{free_energy_1} can then be written in a more convenient form:
\begin{eqnarray}
\frac{\left(f_s - f_n \right)}{{H_c}^2/(4\pi)} &=& \sum_i \left[\left(\xi_i\frac{\partial f}{\partial x_i}\right)^2 + \left(\xi_i \frac{\partial \phi}{\partial x_i} - \frac{A_i}{\sqrt{2} H_c \lambda_i} \right)^2 f^2 \right]
\nonumber \\ && \quad
+ \frac{1}{2} \left(1- f^2\right)^2 + \frac{1}{2{H_c}^2}\left(\bm{H}_a - \nabla \times \bm{A} \right)^2,
\label{free_energy_2}
\end{eqnarray}
where we have assumed a layered superconductor with the anisotropy axis $\bm{c}$ aligned with one of the three Cartesian axes, so that $i\in \{x,y,z\}$ in the first term of the right-hand side, and we have dropped an irrelevant additive constant $1/2$. Also, we have rewritten the order parameter as $\psi = |\psi_\infty| \, f \, e^{i \phi}$, where $f$ and $\phi$ are scalar fields, and $\psi_\infty=-\alpha/\beta$ is the solution infinitely deep in the interior of the superconductor~\cite{tinkham96}. The anisotropic penetration depth and coherence lengths are given by $\lambda_i = (m_i c^2 / (4\pi |\psi_\infty|^2 {e^*}^2) )^{1/2}$, and $\xi_i = ( \hbar^2 / ( 2m_i (-\alpha) ) )^{1/2}$, respectively.
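Note that the product of the two characteristic lengths along a given axis is independent of the effective mass, since $m_i$ cancels:
\begin{eqnarray}
\lambda_i \, \xi_i = \sqrt{\frac{m_i c^2}{4\pi |\psi_\infty|^2 {e^*}^2}} \, \sqrt{\frac{\hbar^2}{2 m_i (-\alpha)}} = \frac{\hbar c}{e^* \sqrt{8 \pi (-\alpha) |\psi_\infty|^2}}.
\end{eqnarray}
This is the GL relation $\lambda_a \, \xi_a = \lambda_c \, \xi_c$ invoked in section~\ref{sec:estimations}.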
Let the pairs of characteristic lengths $(\lambda_c, \xi_c )$ and $(\lambda_a, \xi_a )$ be associated with the layer-normal and an in-plane axis, respectively. Define:
\begin{eqnarray}
\kappa_\parallel \equiv \frac{\lambda_a}{\xi_a}, \quad \kappa_\perp \equiv \frac{\lambda_c}{\xi_a} = \frac{\lambda_a}{\xi_c} = \gamma \, \kappa_\parallel,
\end{eqnarray}
where the last two relations can be verified using the definition of $\lambda_i$, $\xi_i$, and $\gamma$. Following previous calculations of the superheating field~\cite{catelani08, transtrum11}, we also let $\bm{H}_a$ be parallel to $z$, and the superconductor occupy the half-space region $x>0$, so that symmetry constraints imply that $A_z=0$, and all fields should be independent of $z$. Thus, if the anisotropy axis $\bm{c}$ is parallel to $z$, our GL free energy (Eq. \ref{free_energy_2}) is directly mapped into the isotropic free energy of Transtrum et al.~\cite{transtrum11}, with $\xi$ and $\lambda$ replaced by $\xi_a$ and $\lambda_a$, respectively. In particular, the solution for the superheating field $H_{\text{sh}}$ as a function of $\kappa$ is given in Ref.~\cite{transtrum11} using $\kappa_\parallel$ instead of $\kappa$. If $\bm{c}$ is parallel to $x$ or $y$, there are a number of scaling arguments that can be used to map the anisotropic free energy into the isotropic one~\footnote{For instance, Blatter et al. recognized that by making a change of coordinates and redefining the magnetic field and vector potential, one could make isotropic the derivative term by introducing anisotropy in the magnetic energy terms~\cite{tinkham96, blatter92}.}. Here we consider the change of coordinates and rescaling of the vector potential:
\begin{eqnarray}
\bm{r} = \left(\frac{\xi_x}{\xi_y} \tilde{x}, \tilde{y}, \tilde{z} \right),
\quad
\bm{A}=\left( \tilde{A}_x, \frac{\xi_x}{\xi_y} \tilde{A}_y, \tilde{A}_z\right).
\label{eq:changeVariables}
\end{eqnarray}
Note that this change of variables does not change the magnetic field, since $\bm{H}_a$ is aligned with the $z$ axis, so that the $z$-component of the field is given by: $\partial A_y / \partial x - \partial A_x / \partial y = \partial \tilde{A}_y / \partial \tilde{x} - \partial \tilde{A}_x / \partial \tilde{y}$. This coordinate transformation maps the anisotropic free energy into an isotropic one with $\xi_y$ and $\lambda_x$ replacing $\xi$ and $\lambda$. In particular, now the solution for the superheating field is given in Ref.~\cite{transtrum11} using $\kappa_\perp=\gamma \kappa_\parallel$ instead of $\kappa$. In this paper, we only consider the two representative cases: $\bm{c} \parallel z$ and $\bm{c} \perp z$, as we do not expect appreciable qualitative changes for arbitrary orientations of $\bm{c}$ with respect to the $z$ axis. Notice the interesting fact that a crystal might be a type I superconductor ($\kappa_\parallel < 1 / \sqrt{2}$) when $\bm{c}$ is parallel to $z$, and yet be a type II superconductor if $\gamma \kappa_\parallel> 1/ \sqrt{2}$ when $\bm{c}$ is perpendicular to $z$ (see Fig.~\ref{fig:kappaxgammaDiagram}). This interesting property of anisotropic superconductors has been discussed in Ref.~\cite{kogan14}, and confirmed experimentally in the work of Ref.~\cite{koike80}.
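The mapping can be verified directly on the gauge-invariant terms of Eq.~\eqref{free_energy_2}: using $\lambda_x \, \xi_x = \lambda_y \, \xi_y$, which holds within GL, the change of variables \eqref{eq:changeVariables} gives
\begin{eqnarray}
\xi_x \frac{\partial}{\partial x} = \xi_y \frac{\partial}{\partial \tilde{x}}, \qquad
\xi_y \frac{\partial \phi}{\partial y} - \frac{A_y}{\sqrt{2} H_c \lambda_y} = \xi_y \frac{\partial \phi}{\partial \tilde{y}} - \frac{\tilde{A}_y}{\sqrt{2} H_c \lambda_x},
\end{eqnarray}
so that all derivative terms carry $\xi_y$ and all vector potential terms carry $\lambda_x$.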
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Fig/fig3.eps}
\caption{(Color online) Showing regions in $\kappa_\parallel \times \gamma$ space where the crystal is always type I (region to the left of the blue solid lines), always type II (region to the right of the red solid lines), or might be of either type (region between the red and blue lines), depending on the orientation of the crystal. The shaded blue and orange regions correspond to regions where the Ginzburg-Landau superheating field anisotropy can be approximated by $\gamma^{1/2}$ and $1$, respectively, to within $10\%$ accuracy. %
\label{fig:kappaxgammaDiagram}}
\end{figure}
Now we turn our attention to the anisotropy of the superheating field:
\begin{eqnarray}
\frac{H_{\text{sh}}^\parallel}{H_{\text{sh}}^\perp}=\frac{H_{\text{sh}} (\kappa_\parallel)}{H_{\text{sh}} (\kappa_\perp)}=\frac{H_{\text{sh}} (\kappa_\parallel)}{H_{\text{sh}} (\gamma \, \kappa_\parallel)}.
\label{eq:anisotropyHsh}
\end{eqnarray}
For general $\kappa$, approximate solutions for the superheating field for isotropic systems are given by Eqs. (10) and (11) of Ref.~\cite{transtrum11}, which we reproduce here for convenience:
\begin{eqnarray}
\frac{H_{\text{sh}}}{\sqrt{2}H_c} \approx
2^{-3/4} \kappa^{-1/2} \frac{1+ 4.6825120 \, \kappa + 3.3478315 \, \kappa^2 }{1+ 4.0195994 \, \kappa + 1.0005712 \, \kappa^2},
\label{eq:asymp_small}
\end{eqnarray}
for small $\kappa$, and
\begin{eqnarray}
\frac{H_{\text{sh}}}{\sqrt{2}H_c} \approx
\frac{\sqrt{10}}{6} + 0.3852 \, \kappa^{-1/2},
\label{eq:asymp_large}
\end{eqnarray}
for large $\kappa$. We can use approximations \eqref{eq:asymp_small} and \eqref{eq:asymp_large} to find asymptotic solutions for the superheating field anisotropy:
\begin{eqnarray}
\frac{H_{\text{sh}}^\parallel}{H_{\text{sh}}^\perp} \approx
\left\{\begin{array}{ll}
\gamma^{1/2}, & \text{for } \kappa \ll 1/\gamma, \\
1, & \text{for } \kappa \gg 1,
\end{array}
\right.
\label{eq:limitAnisotropy}
\end{eqnarray}
with $\gamma>1$. These asymptotic solutions span a large region in the phase diagram of Fig. \ref{fig:kappaxgammaDiagram}, with the shaded blue and orange regions corresponding to regions where the superheating field anisotropy can be approximated by $\gamma^{1/2}$ and $1$, respectively. Figure~\ref{fig:gammash} shows a plot of the anisotropy of the superheating field as a function of the mass anisotropy for several values of $\kappa_\parallel$. The dotted lines are asymptotic solutions given by Eq. \eqref{eq:limitAnisotropy}. In order to make this plot we considered the solution for the superheating field to be given by Eqs. \eqref{eq:asymp_small} and \eqref{eq:asymp_large} for $\kappa<\kappa_{\text{th}}$ and $\kappa \ge \kappa_{\text{th}}$, respectively, where the threshold $\kappa_{\text{th}} \approx 0.56$ is found by equating the right-hand sides of the two approximate solutions. It is clear that the combination of large $\gamma$ and small $\kappa$ yields the largest anisotropy of $H_{\text{sh}}$. Notice that the deviation from the simple asymptotic solution at small $\kappa_\parallel$ scales as $(H_{\text{sh}}^\parallel / H_{\text{sh}}^\perp - \gamma^{1/2}) / \gamma^{1/2} = \mathcal{O} (\kappa \gamma)$.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{Fig/fig4.eps}
\caption{(Color online) Anisotropy of the Ginzburg-Landau superheating field as a function of mass anisotropy for several values of $\kappa_\parallel$. The dotted lines are the limiting solutions given by Eq.~\ref{eq:limitAnisotropy}.%
\label{fig:gammash}}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{ | l | c | c | c | }
\hline
\text{Material} & $\kappa_\parallel$ & $\gamma$ & $H_{\text{sh}}^{\parallel}/H_{\text{sh}}^{\perp}$ \\ \hline
Ag${}_5$Pb${}_2$O${}_6$ (Ref.~\cite{mann07}) & $\sim 0.0096$ & $\sim 1.43$ & $\sim 1.2$ \\
C${}_8$K (Ref.~\cite{koike80}) & $\sim 0.32$ & $\sim 6.2$ & $\sim 1.6$ \\
NbSe${}_2$ (Ref.~\cite{trey73}) & $\sim 9$ & $\sim 3.33$ & $\sim 1.1$ \\
MgB${}_2$ (Refs.~\cite{chen01, kogan03}) & $\sim 26$ & $\sim 2.6$ & $\sim 1.05$ \\
BSCCO (Refs.~\cite{stintzing97, tinkham96}) & $\sim 87$ & $\sim 150$ & $\sim 1.07$ \\
YBCO (Refs.~\cite{stintzing97, tinkham96}) & $\sim 99$ & $\sim 7$ & $\sim 1.04$ \\
\hline
\end{tabular}
\footnotesize
\caption{Ginzburg-Landau parameter $\kappa_\parallel$ with $\bm{c}$ parallel to the $z$ axis, mass anisotropy $\gamma$, and superheating field anisotropy for different materials.}
\label{tab:materialAnistropies}
\end{table}
In Table~\ref{tab:materialAnistropies} we compare the anisotropy of the superheating field for different materials; we also present the values that we used for $\kappa_{\parallel}$ and $\gamma$ in each case. As we have stressed before, the superheating field anisotropy is largest for small $\kappa_\parallel$ and large $\gamma$. Note that even though type-I superconductors have small $\kappa$, we have not found anisotropy parameters for elemental superconductors in the literature, probably because anisotropy plays a minor role for most of them. Just a few well-studied non-elemental superconductors are of type I, such as the layered silver oxide Ag${}_5$Pb${}_2$O${}_6$, with a mass anisotropy of about $1.43$, and $\kappa_\parallel \approx 0.01 < 1/\sqrt{2} $. On the other hand, type-II superconductors are known for their large anisotropies. The critical fields of BSCCO, for instance, can vary by two orders of magnitude depending on the orientation of the crystal. Yet the anisotropy effects on the superheating field are suppressed (cf. Eq. \eqref{eq:anisotropyHsh}) by the flat behavior of $H_{\text{sh}}$ at large $\kappa$. These effects are also illustrated in Fig.~\ref{fig:hsh}, where we plot the solution $H_{\text{sh}}/(\sqrt{2}H_c)$ as a function of $\kappa$, using the asymptotic solutions given by Eqs. \eqref{eq:asymp_small} and \eqref{eq:asymp_large} for $\kappa \leq 0.56$ and $\kappa>0.56$, respectively (this approximate solution is remarkably close to the exact result~\cite{transtrum11}). Note that within GL theory $H_{\text{sh}}/H_c$ depends on material properties only through the parameter $\kappa$. The points in FIG. \ref{fig:hsh} correspond to the solutions of the superheating field using $\kappa_\parallel$ and $\kappa_\perp$ for Ag${}_5$Pb${}_2$O${}_6$ (blue), C$_{8}$K (purple), MgB${}_2$ (red), and BSCCO (dark red). Superconductors with $\kappa_\parallel \approx 1$ can have an enormous anisotropy $\gamma$, say $\sim 10^{5}$, and yet the superheating field will be nearly isotropic.
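As a concrete check of the entries of Table~\ref{tab:materialAnistropies}, take C${}_8$K: evaluating Eq. \eqref{eq:asymp_small} at $\kappa_\parallel = 0.32$ and Eq. \eqref{eq:asymp_large} at $\kappa_\perp = \gamma \, \kappa_\parallel \approx 1.98$ gives
\begin{eqnarray}
\frac{H_{\text{sh}}^\parallel}{\sqrt{2} H_c} \approx 1.25, \qquad
\frac{H_{\text{sh}}^\perp}{\sqrt{2} H_c} \approx 0.80, \qquad
\frac{H_{\text{sh}}^\parallel}{H_{\text{sh}}^\perp} \approx 1.6,
\end{eqnarray}
in agreement with the value quoted in the table.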
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\linewidth]{Fig/fig5.eps}
\caption{(Color online) Ginzburg-Landau superheating field $H_{\text{sh}}/(\sqrt{2}H_c)$ as a function of $\kappa$. The points correspond to the solutions using $\kappa_\parallel$ and $\kappa_\perp$ for Ag${}_5$Pb${}_2$O${}_6$ (blue), C${}_8$K (purple), MgB${}_2$ (red), BSCCO (dark red).\label{fig:hsh}}
\end{figure}
One should bear in mind that the GL formalism is accurate only in a narrow range of temperatures near the critical temperature. Beyond this range, one must rely either on generalizations of GL to arbitrary temperatures~\cite{tewordt63, werthamer63}, or on more complex approaches using BCS theory, the Eilenberger semi-classical approximation, or strong-coupling Eliashberg theory. However, note that GL and Eilenberger theories yield similar quantitative results for the temperature dependence of the superheating field in the limit of large $\kappa$ for isotropic Fermi surfaces (see e.g. Ref.~\cite{catelani08}).
\section{Low-temperature anisotropy of the superheating field for MgB${}_2$}
\label{sec:mgb2}
We now turn to MgB$_2$, an anisotropic, layered superconductor which would
likely in practice be used at temperatures $T\ll T_c$ where GL theory is not
a controlled approximation. Here we discover that we get three rather
different estimates for the anisotropy in the superheating field, from our
two simple estimates of section~\ref{sec:estimations} and from an uncontrolled GL-like
linear stability analysis.
The striking qualitative difference between low temperature MgB$_2$ and
GL theory is the violation of the GL anisotropy relation:
$\lambda_c / \lambda_a \neq \xi_a / \xi_c$. Within GL, the equality of these two ratios originates in the mass dependence of the penetration and coherence lengths ($\lambda \sim m^{1/2}$ whereas $\xi \sim m^{-1/2}$). Experiments~\cite{angst02,budko02,cubitt03a,cubitt03b,lyard04,budko15} and theoretical calculations~\cite{kogan02a,kogan02b,kogan03} for MgB${}_2$ suggest that this relation is violated at lower temperatures; the anisotropies of $\lambda$ and $\xi$ exhibit opposite temperature dependences, with
\begin{eqnarray}
\gamma_\lambda = \lambda_c / \lambda_a
\end{eqnarray}
increasing, whereas
\begin{eqnarray}
\gamma_\xi = \xi_a / \xi_c
\end{eqnarray}
decreases~\footnote{Here we assume that the anisotropy $\gamma_\xi = \xi_a / \xi_c$ is equivalent to the anisotropy of the upper-critical field $\gamma_H = H_{c2,a} / H_{c2,c}$, as in the work of Ref.~\cite{kogan02b}.} with temperature. Figure~\ref{fig:lowTmgb2} shows an illustration of a vortex section near $T=0$. Using calculations from Ref.~\cite{kogan03}, $\gamma_\lambda$ and $\gamma_\xi$ become equal only at $T=T_c$.
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\linewidth]{Fig/fig6.eps}
\caption{(Color online) Illustrating a vortex and vortex core (blue and red regions) of MgB${}_2$ near $T=0$ (we increased $\xi_a$ by a factor of $30$ with respect to $\lambda_a$, so that the core features become discernible; the small black region in the center corresponds to the actual scale). Near zero temperature, the field penetration region is calculated to be nearly isotropic ($\lambda_a \approx \lambda_c$), whereas the core shape anisotropy is predicted to reach a maximum ($\xi_a \approx 6\, \xi_c$) (Ref.~\cite{kogan03}).
\label{fig:lowTmgb2}}
\end{figure}
We can use our first method to estimate the low-temperature superheating field anisotropy by relaxing the constraint $\lambda_c \, \xi_c = \lambda_a \, \xi_a $ in Eq.~\eqref{eq:estimateHsh} of section \ref{sec:estimations}, resulting in
\begin{eqnarray}
\frac{H_{\text{sh}}^{c \perp y}}{H_{\text{sh}}^{c \parallel y}} = \frac{\gamma_\xi}{\gamma_\lambda},
\label{eq:anisotropyMgB2}
\end{eqnarray}
where $H_{\text{sh}}^{c \perp y}$ means either $H_{\text{sh}}^{c \parallel x}$ or $H_{\text{sh}}^{c \parallel z}$. $H_{\text{sh}}$ is isotropic near $T=T_c$, since $\gamma_\xi \approx \gamma_\lambda$.
Our other two estimates rely on an uncontrolled approximation---using Eq.~\eqref{free_energy_2} with the low temperature values of $\lambda$ and $\xi$. This is not justified microscopically, unlike the calculations of section~\ref{sec:gl}.
Our second estimate draws on Bean and Livingston to estimate the anisotropy.
For the case
$\bm{c}$ in the $xy$ plane and $\lambda_c/\lambda_a \neq \xi_a / \xi_c$, rather than
using Eq.~\eqref{eq:changeVariables}, let us consider the rescaling:
$\bm{r} = ((\lambda_y/\lambda_x)\, \tilde{x}, \tilde{y}, \tilde{z} )$, and $\bm{A} = (\tilde{A}_x, (\lambda_y / \lambda_x) \, \tilde{A}_y, \tilde{A}_z )$. If we plug these equations into Eq.~\eqref{free_energy_2}, assuming $\gamma_\lambda \neq \gamma_\xi$, we would obtain a GL theory that is isotropic in $\lambda$, but anisotropic in $\xi$, with $\lambda \rightarrow \lambda_x$, $\xi_x \rightarrow (\lambda_x / \lambda_y) \xi_x$, and $\Phi_0 \rightarrow \tilde{\Phi}_0 = (\lambda_x/\lambda_y) \Phi_0$. Now we can plug the new lengths and $\tilde{\Phi}_0$ into Bean and Livingston's calculation to obtain:
\begin{align}
H_{\text{sh}} & = \frac{\tilde{\Phi}_0}{4 \pi} \frac{1}{ \lambda_x \, (\lambda_x / \lambda_y) \xi_x}
= \frac{\Phi_0}{4 \pi}
\begin{cases}
(\lambda_c \, \xi_c)^{-1} & \text{for } \bm{c} \parallel x, \\
(\lambda_a \, \xi_a)^{-1} & \text{for } \bm{c} \parallel y.
\end{cases}
\label{eq:BLaniEstimate}
\end{align}
Note that unlike the GL case, $\Phi_0$ cannot be written as $2 \, \sqrt{2} \, \pi \, H_c \, \lambda \, \xi$, and the superheating field is not isotropic. We find that $H_{\text{sh}}^{c \parallel x} / H_{\text{sh}}^{c \parallel y} = \gamma_\xi / \gamma_\lambda$, as in Eq.~\eqref{eq:anisotropyMgB2}. Unlike our first estimate, where $z$ and $x$ are equivalent directions, in this adaptation of Bean and Livingston's we find that $y$ and $z$ are equivalent directions. On the one hand, the only relevant component of the coherence length is the one that is parallel to the $x$ axis in Bean and Livingston's argument. On the other hand, our estimates assign different energy barriers to vortex core sections with different areas ($\pi \xi_a^2$ for $\bm{c} \parallel z$, and $\pi \xi_a \xi_c$ for $\bm{c} \parallel y$).
Finally, we note that, while the GL free energy of Eq.~\ref{free_energy_1} enforces
the high-temperature anisotropy relation violated by low-temperature MgB$_2$,
when we rewrite it as Eq.~\ref{free_energy_2} we get a legitimate,
albeit uncontrolled, description
of a superconductor with independent anisotropies for $\lambda$ and $\xi$.
A direct numerical calculation using linear stability analysis on Eq.~\ref{free_energy_2} for the parameters of MgB${}_2$ yields an almost isotropic result: $H_{\text{sh}}^{c \parallel x} / H_{\text{sh}}^{c \parallel y} \approx 1$, and $H_{\text{sh}}^{c \parallel z} / H_{\text{sh}}^{c \parallel x} = 1.03$. Analytical calculations in the large-$\kappa$ limit (using the methods developed in the Appendix of Ref.~\cite{transtrum11}) corroborate this result; the anisotropy vanishes in the high $\kappa$ limit of Eq.~\eqref{free_energy_2} for independent $\lambda$ and $\xi$ as well.
What do these estimates suggest for MgB$_2$? Near $T=0$, the theoretical calculations of Ref.~\cite{kogan03} using a two-gap model for MgB${}_2$ suggest that $\gamma_\xi \approx 6$ and $\gamma_\lambda \approx 1$. Experimental results agree with the theoretical predictions near zero temperature, with $\gamma_\lambda$ being almost isotropic~\cite{cubitt03a,cubitt03b}, and $\gamma_\xi \approx 6-7$ (see e.g. Ref.~\cite{budko15}). However, beware that reported experimental results for $\gamma_\xi$ range from $\approx 1$ to $\approx 13$ (see Ref.~\cite{kogan03} and references therein).
\begin{table}[h]
\centering
\begin{tabular}{ | l | c | c | c | c | }
\hline
\multirow{2}{*}{Approach} & \multicolumn{3}{c |}{ $H_{\text{sh}}$ ( Tesla ) } & \multirow{2}{*}{Max. Anis.} \\ \hhline{~---~}
& $\bm{c} \parallel \bm{x}$ & $\bm{c} \parallel \bm{y}$ & $\bm{c} \parallel \bm{z}$ & \\ \hline
1st estimate & $0.04$ & $0.006$ & $0.04$ & $\sim 6$ \\
1st (corrected) & $0.2$ & $0.03$ & $0.2$ & $\sim 6$ \\
2nd estimate (B \& L) & $1.13$ & $0.18$ & $0.18$ & $\sim 6$ \\
``GL'' (Eq.~\eqref{free_energy_2}) & $0.21$ & $0.22$ & $0.22$ & $\sim 1$ \\
\hline
\end{tabular}
\footnotesize
\caption{Estimates of the superheating field and maximum anisotropy of low-temperature MgB$_2$ for the three geometries.}
\label{tab:mgb2Tab}
\end{table}
We summarize our estimates of the superheating field for the three geometries in Table~\ref{tab:mgb2Tab}, using $H_c (0) = 0.26 \,\text{T}$ from Ref.~\cite{wang01}. Recall that our first estimates were off from actual GL calculations by a factor of five. We hence multiply $H_{\text{sh}}$ by this factor at lower temperatures, and use this correction to calculate the results displayed on the second row of the table: ``1st (corrected)''. The last row summarizes the results of the last paragraph, and the last column shows the maximum superheating field anisotropy according to the three methods. In comparison, for Nb the superheating field from Ginzburg-Landau theory extrapolated to low temperature is $0.24$ Tesla~\cite{padamsee09}.
There are several things to note about these estimates.
(1)~All three methods suggest that, perhaps with suitable surface alignment,
MgB$_2$ can have superheating fields comparable to current Nb cavities, with
a much higher transition temperature (and hence much lower Carnot cooling
costs and likely much lower surface resistance). (2)~One of the three
methods suggests that a particular alignment could yield a significantly
higher superheating field than Nb. (3)~It is not a surprise that these
three estimates differ. As discussed in section~\ref{sec:estimations},
the three methods have rather different microscopic pictures of the
superheating instability; the surprise is that they all give roughly the
same estimate within GL theory. (The further agreement within anisotropic
GL can be understood as a consequence of our coordinate transformation,
Eq.~\ref{eq:changeVariables}.)
Before plunging into an intense development
effort for MgB$_2$ cavities, it would be worthwhile to find out whether
there are dangerous surface orientations, or surface orientations that
would provide significant enhancements -- both of which are allowed
by one of our current estimates. Clearly a direct experimental measurement
on oriented single crystal samples would be ideal, although the engineering
challenge of reaching the theoretical maximum superheating field for
a new material could be daunting. Alternatively, it would be challenging
but possible to do a more sophisticated theoretical calculation for the
superheating anisotropy. Eilenberger theory could be solved either
numerically~\cite{transtrum16} or in the high-$\kappa$
limit~\cite{catelani08} to address lower temperatures. Eliashberg
theory~\cite{kortus01,an01,choi02,liu01}, which incorporates realistic modeling of the
two anisotropic gaps and anisotropic electron-phonon couplings, could
be generalized to add a free surface and the resulting system could
be solved using linear stability analysis.
\section{Concluding Remarks}
\label{sec:conclusions}
To conclude, we used a generalized Ginzburg-Landau approach to investigate the effects of Fermi surface anisotropy on the superheating field of layered superconductors. Using simple scaling arguments, we mapped the anisotropic problem into the isotropic one, which has been previously studied by Transtrum et al.~\cite{transtrum11}, and showed that the superheating field anisotropy depends only on two parameters, $\gamma = \lambda_c / \lambda_a$ and $\kappa_\parallel = \lambda_a / \xi_a$. $H_{\text{sh}}^\parallel / H_{\text{sh}}^\perp$ is larger when $\gamma$ is large and $\kappa_\parallel$ is small, and displays the asymptotic behavior $H_{\text{sh}}^\parallel / H_{\text{sh}}^\perp \approx 1$ for $\kappa_\parallel \gg 1 / \gamma$, and $H_{\text{sh}}^\parallel / H_{\text{sh}}^\perp \approx \gamma^{1/2}$, for $\kappa_\parallel \ll 1$, suggesting that the superheating field is typically isotropic for most layered unconventional superconductors, even for very large $\gamma$ (see Table~\ref{tab:materialAnistropies}), when GL is valid. We surmise that the anisotropy of the superheating field is even smaller for cubic crystals, where higher-order and/or non-linear terms have to be included in the GL formalism.
As a practical question, accelerator scientists have explored stamping radio-frequency cavities out of single-crystal samples, to test whether grain boundaries were limiting the performance of particle accelerators. Our study was motivated by the expectation that one could use this expertise to control the surface orientation in the cavity. Such control may well yield benefits through either optimizing anisotropic surface resistance or optimizing growth morphology for deposited compound superconductors (e.g., growing Nb$_3$Sn from a Sn overlayer). Our calculations suggest that, for the high-$T_c$, high-$\kappa$ materials under consideration for the next generation of superconducting accelerator cavities, the theoretical bounds for the maximum sustainable fields will not have a significant anisotropy near $T=T_c$. However, the extension of our intuitive arguments for MgB${}_2$ to low temperatures, using results from a two-gap model within BCS theory (Ref.~\cite{kogan03}), suggests a high value for the anisotropy of $H_{\text{sh}}$ near $T=0$, contrasting with the numerical linear stability analysis of Eq.~\eqref{free_energy_2} using the parameters for low-temperature MgB$_2$, which suggests that the superheating field is still isotropic. This motivates further investigations by means of more sophisticated approaches and experiments controlling surface orientation.
\begin{acknowledgments}
We would like to thank G. Catelani, M. Liepe, S. Posen, and J. She for useful conversations. This work was supported by the U.S. National Science Foundation under Award OIA-1549132, the Center for Bright Beams, and the Grant No. DMR-1312160.
\end{acknowledgments}
\section{Introduction}
The large $N_c$ limit of QCD suggested by 't Hooft \cite{HOOFT}
and the power counting rules of Witten \cite{WITTEN} lead to
a consistent perturbative $1/N_c$ expansion method to study baryon spectroscopy,
which allows one to compute $1/N_c$ corrections in a systematic way.
A perspective on the current research status can be found, for example, in
Ref. \cite{TRENTO}.
The method is based on
the result that baryons satisfy a contracted
spin-flavor algebra in the
large $N_c$ limit of QCD \cite{DM}, which reduces to SU(2$N_f$) for
ground state baryons, where $N_f$ is the number of flavors.
For $N_c \rightarrow \infty $ the baryon masses are degenerate.
At large $N_c$, the mass splitting starts at order $1/N_c$ for
the ground state baryons (N = 0 band). They belong to the
$\bf 56$ representation of SU(6), and have been described with remarkable success
\cite{DM,DJM94,DJM95,CGO94,Jenk1,JL95,DDJM96}.
The applicability of the approach to excited states is a subject of
current investigation.
Although the SU(6) symmetry is broken for excited states, the
experimental facts suggest a small breaking, which then
implies that the $1/N_c$ expansion can still be applied. In this case the splitting starts at
order $N^0_c$, as we shall see below.
The excited states belonging to the $[{\bf 70},1^-]$ multiplet (N = 1 band)
have been studied extensively in SU(4) ($N_f$ = 2)
\cite{CGKM,Goi97,PY1,PY2,CCGL,CaCa98,BCCG,SCHAT,Pirjol:2003ye,cohen1}.
The approach has been extended to $N_f$ = 3 in Ref. \cite{SGS}
and it included first order in SU(3) symmetry breaking. There are also
a few studies of the physically important multiplets
belonging to the N = 2 band. These are related to
$[{\bf 56'},0^+]$ in SU(4) \cite{CC00}, to $[{\bf 56},2^+]$ in
SU(6) \cite{GSS} and to $[{\bf 70},\ell^+]$ in SU(4) \cite{MS2}.
The method has also been applied to highly excited nonstrange and strange
baryons \cite{MS1} belonging to the $[{\bf 56},4^+]$ multiplet (N = 4 band).
So far, configuration mixing has been neglected in the N = 2 band. It would involve
new parameters in the form of mixing angles which, to be well
determined, would generally require much more data than currently exist.
However the power counting for configuration mixing is quite
well established \cite{GOITY05}.
The 35 SU(6) generators are
\begin{equation}\label{SU6}
S^i = \frac{\sigma^i }{2} \otimes\, \mbox{l\hspace{-0.53em}1};~ T^a =\, \mbox{l\hspace{-0.53em}1} \otimes \frac{ \tau^a }{2};
~ G^{ia} = \frac{\sigma^i }{2} \otimes \frac{ \tau^a }{2},
\end{equation}
where $i=1,2,3$ and $a=1,2, \ldots , 8$. For excited states the mass operator is a linear combination of SU(2$N_f$)
and SO(3) scalars with coefficients to be determined from a fit.
They incorporate the dynamics of quarks and it is important to understand
their behaviour. Operators which break SU(2$N_f$),
but are rotational invariant, can also be added to the
mass operator. They embed the SU(3)-flavor breaking, due to the difference
in the mass of the strange and nonstrange quarks.
The general form of an SU(6) $\times$ SO(3) scalar is
\begin{equation}\label{OLFS}
O^{n} = \frac{1}{N^{n-1}_c} O^{(k)}_{\ell} \cdot O^{(k)}_{SF},
\end{equation}
where $O^{(k)}_{\ell}$ is a $k$-rank tensor in SO(3) and $O^{(k)}_{SF}$
a $k$-rank tensor in SU(2), but scalar in SU(3)-flavor.
This implies that $O^{n}$ is a combination
of SO(3) generators $\ell_i$ and of SU(6) generators (see below).
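For example, the simplest case is the $k = 1$, $n = 1$ contraction of the angular momentum with the quark spin,
\begin{equation}
O \sim \ell^i s^i,
\end{equation}
a spin-orbit operator of order $N^0_c$ (cf. the operator $O_2$ of Sec. 3), while $k = 2$ scalars involve the rank-2 tensor built from two angular momentum components, defined in Eq. (\ref{TENSOR}) below.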
In calculating the mass spectrum,
the general procedure is to split the baryon into an excited quark
and a core. The latter is in its ground state for the N = 1 band but
generally carries some excitation for N $>$ 1 (for example the
$[{\bf 70},\ell^+]$ multiplet \cite{MS2}). The excitation is implemented into the
orbital part of the wave function.
The advantage of this method is that the problem is reduced to
the known case of the ground state, because the spin-flavor part
of the core wave function remains always symmetric. But a disadvantage
is that one introduces a large number of operators of type (\ref{OLFS}).
Let us denote the excited quark operators by $\ell^i_q$,
$s^i, t^a$ and $g^{ia}$ and the corresponding core operators by
$\ell^i_c$, $S^i_c, T^a_c$ and $G^{ia}_c$. Then, for example,
for the $[{\bf 70},1^-]$ multiplet with $N_f = 2$ one has 12 linearly independent
operators up to and including order $1/N_c$ \cite{CCGL}.
With this procedure,
the matrix elements of the excited quark are straightforward, since they are
single-particle operators. The matrix elements of the core operators
$S^i_c$ and $T^a_c$ are also simple to calculate, while those of $G^{ia}_c$
are more involved.
Analytic group theory formulas for the matrix elements of all SU(4)
generators have been derived in Ref. \cite{HP}. They are factorized
according to a generalized Wigner-Eckart theorem into a reduced
matrix element and an SU(4) Clebsch-Gordan coefficient. They have
been used in nuclear physics, which is governed by the SU(4) symmetry,
but can
be straightforwardly applied to a system of arbitrary $N_c$ quarks
containing the isodoublet $u, d$. Recently we have extended the approach
of Ref. \cite{HP} to SU(6) \cite{MS3} and obtained matrix elements of
all SU(6) generators between symmetric $[N_c]$ states.
These matrix elements are used below.
The matrix elements of $G^{ia}_c$ with nonzero
strangeness presented in Ref. \cite{PS} are particular cases of the results
of Ref. \cite{MS3}.
We should keep in mind that the excited states are resonances and have
a finite width. Generic large $N_c$ counting rules give widths of order
$N^0_c$ \cite{CGKM,PY1,PY2,CaCa98,cohen1,cohen2}. According to Ref.
\cite{CGKM} the narrowness of the excited states is an artifact of simple
quark model assumptions. Here, as in constituent quark models, we do
ignore the finite width and treat the resonances as bound states.
The paper is organized as follows. In the next section we recall the orbital
structure of the wave functions of the $[{\bf 70},\ell^+]$ baryon multiplet.
Section 3 is devoted to the formalism of the mass operator.
In Sec. 4 we present results for the masses of 47 nonstrange and strange
baryons, most of which are predictions. The last section contains our
conclusions. Appendix A is devoted to the operators $O_3$, $O_4$ and to the isospin operator $O_6$
which are of order $N^0_c$.
The first two are operators for which the matrix elements change the analytic
form as a function of $N_c$, when going
from $N_f = 2$ to $N_f = 3$. Appendix B gives the general formula for
the matrix elements of SU(3)-flavor breaking operators needed to construct $B_1$, $B_2$ and $B_4$.
Appendix C gives the matrix elements of the spin-orbit operator $O_2$.
\section{The wave functions of $[{\bf 70},\ell^+]$ excited states}
For the time being, we adopt the usual practice and divide the system
of $N_c$ quarks into an excited quark and a core, which can be excited or not.
Below we use
the notations given in our previous work \cite{MS2}.
We introduce the quark model indices $\rho$ and
$\lambda$ to distinguish
between the two independent orbital wave functions of the multiplet $[{\bf 70},\ell^+]$.
The first is associated with states which are antisymmetric under
the permutation of the first two particles while the second implies
symmetry under the same permutation.
Then, for $\ell = 0$ the orbital wave function is
\begin{equation}\label{L0}
|{\bf N_c-1,1}, 0^+\rangle_{\rho,\lambda} =
\sqrt{\frac{1}{3}}|[N_c-1,1]_{\rho,\lambda}(0s)^{N_c-1}(1s)\rangle
+\sqrt{\frac{2}{3}}|[N_c-1,1]_{\rho,\lambda}(0s)^{N_c-2}(0p)^2\rangle.
\end{equation}
In the first term $1s$ is the first (single particle)
radially excited state with $n=1$, $\ell = 0$
($N=2n+\ell$). In the second term
the two quarks are excited to the $p$-shell to get $N=2$. They are coupled to
$\ell = 0$. By analogy, for $\ell = 2$ one has
\begin{equation}\label{L2}
|{\bf N_c-1,1}, 2^+\rangle_{\rho,\lambda} =
\sqrt{\frac{1}{3}}|[N_c-1,1]_{\rho,\lambda}(0s)^{N_c-1}(0d)\rangle
+\sqrt{\frac{2}{3}}|[N_c-1,1]_{\rho,\lambda}(0s)^{N_c-2}(0p)^2\rangle,
\end{equation}
where the two quarks in the $p$-shell are coupled to $\ell = 2$.
One can see that
the coefficients of the linear combinations (\ref{L0}) and (\ref{L2})
are independent of $N_c$
so that both terms have to be considered in the
large $N_c$ limit. In Eqs. (\ref{L0}) and (\ref{L2}) the first term can be
treated as in the $[{\bf 70},1^-]$ sector, \emph{i.e.} as an excited quark
coupled to a ground state core
\cite{CGKM,Goi97,PY1,PY2,CCGL,CaCa98,BCCG,SGS,SCHAT,Pirjol:2003ye,cohen1}.
The second term will be treated here as an excited quark coupled to an
excited core. To see this, we rewrite it by using the fractional parentage
technique to get
\begin{eqnarray}\label{CFP}
|[N_c-1,1]_{\rho,\lambda} (0s)^{N_c-2}(0p)^2,\ell^+ \rangle &=&
\sqrt{\frac{N_c-2}{N_c}} \Psi_{[N_c-1]}((0s)^{N_c-2}(0p)) \phi_{[1]}(0p) \nonumber \\
& & -\sqrt{\frac{2}{N_c}} \Psi_{[N_c-1]}((0s)^{N_c-3}(0p)^2)\phi_{[1]}(0s),
\end{eqnarray}
both for $\ell = 0$ and 2. Here all states are normalized.
The first factor in each term in the right-hand side is a symmetric
$(N_c-1)$-particle wave function
and $\phi_{[1]}$ is a one particle wave function associated to
the $N_c$-th particle. One can see that
for large $N_c$ the coefficient of the first term is $\mathcal{O}(1)$ and of the
second $\mathcal{O}(N_c^{-1/2})$.
Then, in the large $N_c$ limit, one can neglect the second term and take into account
only the first term,
where both the core and $N_c$-th particle have an $\ell = 1$ excitation.
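As a simple check, the squares of the two coefficients in Eq. (\ref{CFP}) add up to one,
\begin{equation}
\frac{N_c-2}{N_c} + \frac{2}{N_c} = 1,
\end{equation}
consistent with the normalization of the states on both sides.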
Each of the above configurations $(0s)^{N_c-1}(1s)$, $(0s)^{N_c-1}(0d)$
or $(0s)^{N_c-2}(0p)^2$ represents the orbital part of a given total wave function.
We denote by $\ell_q$ and $\ell_c$ the angular momenta of the excited quark
and of the excited core respectively. They are coupled to a total angular
momentum $\ell$. Then in SU(6) $\times$ SO(3) the most general form of the wave
function is
\begin{eqnarray}\label{EXCORE}
\lefteqn{|\ell S;JJ_3;(\lambda \mu) Y I I_3\rangle =
\sum_{m_c,m_q,m_\ell,S_3}
\left(\begin{array}{cc|c}
\ell_c & \ell_q & \ell \\
m_c & m_q & m_\ell
\end{array}\right)
\left(\begin{array}{cc|c}
\ell & S & J \\
m_\ell & S_3 & J_3
\end{array}\right)}
\nonumber \\
& \times &
\sum_{p p'} c^{[N_c-1,1]}_{p p'}(S)
|S S_3; p \rangle
|(\lambda \mu)Y I I_{3}; p' \rangle
|\ell_qm_q\rangle |\ell_cm_c\rangle,
\end{eqnarray}
where
\begin{equation}
|S S_3; p \rangle = \sum_{m_1,m_2}
\left(\begin{array}{cc|c}
S_c & \frac{1}{2} & S \\
m_1 & m_2 & S_3
\end{array}\right)
|S_cm_1 \rangle |1/2m_2 \rangle,
\end{equation}
with $p = 1$ if $S_c = S - 1/2$ and $p = 2$ if $S_c = S + 1/2$ and
\begin{eqnarray}\label{statessu(3)}
|(\lambda \mu)Y I I_{3}; p' \rangle =
\sum_{Y_c,I_c,I_{c_3},y,i,i_{3}}
\left(\begin{array}{cc|c}
(\lambda_c \mu_c) & (10) & (\lambda \mu) \\
Y_cI_cI_{c_3} & y i i_{3} & YII_{3}
\end{array}\right)
|(\lambda_c \mu_c) Y_cI_cI_{c_3}\rangle
|(10) y i i_{3} \rangle,
\end{eqnarray}
where $p' = 1$ if $(\lambda_c \mu_c) = (\lambda - 1, \mu)$,
$p' = 2$ if $(\lambda_c \mu_c) = (\lambda + 1, \mu - 1)$ and $p' = 3$ if $(\lambda_c \mu_c) = (\lambda, \mu + 1)$.
The spin-flavor part of the wave function (\ref{EXCORE})
of symmetry $[N_c-1,1]$ results from the inner product of the
spin and flavor wave functions.
The indices $p$ and $p'$ represent the row where the last particle
(the excited quark) is located in the Young diagram of SU(2)-spin
and SU(3)-flavor states respectively. Thus the
coefficients $c^{[N_c-1,1]}_{p p'}(S)$ are isoscalar factors
\cite{book,ISOSC} of the
permutation group of ${N_c}$ particles, the expressions of which are \cite{MS3}
\begin{eqnarray}\label{SU2}
c^{[N_c-1,1]}_{11}(S) & = & - \sqrt{\frac{(S + 1)(N_c - 2 S)}{N_c(2 S + 1)}},
\nonumber \\
c^{[N_c-1,1]}_{22}(S) & = & \sqrt{\frac{S[N_c+2(S + 1)]}{N_c(2 S + 1)}},
\nonumber \\
c^{[N_c-1,1]}_{12}(S) & = & c^{[N_c-1,1]}_{21}(S) = 1,
\nonumber \\
c^{[N_c-1,1]}_{13}(S) & = & 1.
\end{eqnarray}
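For the physical case $N_c = 3$ and $S = 1/2$, for example, these reduce to
\begin{equation}
c^{[2,1]}_{11}(1/2) = - \frac{1}{\sqrt{2}}, \qquad
c^{[2,1]}_{22}(1/2) = \frac{1}{\sqrt{2}},
\end{equation}
whose squares sum to one, as required by normalization (cf. Eq. (\ref{octet2})).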
In Eqs. (\ref{decuplet2})-(\ref{singlet2}) below, we illustrate their
application for $N_c$ = 7. In each inner product
the first Young diagram corresponds to spin and the second to flavor. Accordingly,
one can see that Eq. (\ref{decuplet2}) stands for $^210$, Eq. (\ref{octet4})
for $^48$, Eq. (\ref{octet2}) for $^28$ and Eq. (\ref{singlet2}) for $^21$.
Each inner product contains the corresponding isoscalar factors and
the position of the last particle is marked with a cross. On the right-hand side, from the location of the cross one can read off the values of $p$ and $p'$.
The equations are
\begin{eqnarray}
\label{decuplet2}
\raisebox{-9.0pt}{\mbox{\begin{Young}
& & & & & \cr
$\times$ \cr
\end{Young}}}\
& = &
c^{[6,1]}_{21}\! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & \cr
& & $\times$\cr
\end{Young}}} \ \times \! \! \! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & & $\times$\cr
& \cr
\end{Young}}}\ ,
\\ \nonumber
\\
\label{octet4}
\raisebox{-9.0pt}{\mbox{\begin{Young}
& & & & & \cr
$\times$ \cr
\end{Young}}}\
& = &
c^{[6,1]}_{12}\! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & & $\times$\cr
& \cr
\end{Young}}} \ \times \! \! \! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & \cr
& & $\times$\cr
\end{Young}}}\ ,
\\ \nonumber
\\
\label{octet2}
\raisebox{-9.0pt}{\mbox{\begin{Young}
& & & & & \cr
$\times$ \cr
\end{Young}}}\
&=& c^{[6,1]}_{11}\! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & $\times$\cr
& & \cr
\end{Young}}} \ \times \! \! \! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & $\times$\cr
& & \cr
\end{Young}}} \nonumber \\
& & + \ c^{[6,1]}_{22}\! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & \cr
& & $\times$\cr
\end{Young}}} \ \times \! \! \! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & \cr
& & $\times$\cr
\end{Young}}}\ ,
\\ \nonumber
\\
\label{singlet2}
\raisebox{-9.0pt}{\mbox{\begin{Young}
& & & & & \cr
$\times$ \cr
\end{Young}}}\
&=& c^{[6,1]}_{13}\! \! \!
\raisebox{-9.0pt}{\mbox{
\begin{Young}
& & & $\times$\cr
& & \cr
\end{Young}}} \ \times \! \! \! \! \!
\raisebox{-15pt}{\mbox{
\begin{Young}
& & \cr
& & \cr
$\times$ \cr
\end{Young}}}\ .
\end{eqnarray}
For the configurations $(0s)^{N_c-1}(1s)$ and $(0s)^{N_c-1}(0d)$
the expression (\ref{EXCORE}) slightly simplifies because $\ell_c = 0$. Only
for the configuration $(0s)^{N_c-2}(0p)^2$ the core is excited with $\ell_c = 1$,
in agreement with the discussion following Eq. (\ref{CFP}).
\section{The mass operator}
For the $[{\bf 70},\ell^+]$ multiplet the mass operator
can be written as the linear combination
\begin{equation}
\label{massoperator}
M_{[{\bf 70},\ell^+]} = \sum_{i=1}^6 c_i O_i + d_1 B_1 + d_2 B_2 + d_4B_4,
\end{equation}
where the operators $O_i$ are of type (\ref{OLFS}) and $B_i$ are
SU(6) breaking operators defined below.
The values of the coefficients $c_i$ and $d_i$,
which encode the QCD dynamics, are given in Table \ref{operators}.
They were found by a numerical fit described in the next section.
The building blocks of $O_i$ and $B_i$
are the excited core operators $\ell^i_c$,
$S^i_c$, $T^a_c$ and $G^{ia}_c$ and the excited quark operators $\ell^i_q$,
$s^i$, $t^a$ and $g^{ia}$. We also introduce the rank $k=2$ tensor
operator \footnote{The irreducible spherical tensors are defined according to
Ref. \cite{BRINK}.}
\begin{equation}\label{TENSOR}
\ell^{(2),ij}_{ab}=\frac{1}{2}\left\{\ell^i_a,\ell^j_b\right\}-\frac{1}{3}\delta_{i,-j}\vec{\ell}_a\cdot\vec{\ell}_b,
\end{equation}
with $a=c$, $b=q$ or vice versa or $a=b=c$ or $a=b=q$. For simplicity
when $a=b$, we use a single index $c$, for the core, or $q$ for the
excited quark so that the tensor operators become $\ell^{(2),ij}_c$ and
$\ell^{(2),ij}_q$ respectively. The latter case represents the tensor
operator used in the analysis of the $[{\bf 70},1^-]$ multiplet (see \emph{e.g.}
Ref. \cite{CCGL}).
There are many linearly independent operators $O_i$ and $B_i$
which can be constructed from the excited quark and the core operators.
Here, due to lack of data,
we have considered a restricted list containing the most dominant
operators in the mass formula. The selection was determined from the
previous experience of Refs. \cite{CCGL} and \cite{MS2} for $N_f = 2$
and of Ref. \cite{SGS} for $N_f$ = 3. The
operators $O_i$ entering Eq. (\ref{massoperator}) are listed
in Table \ref{operators}. $O_1$ is linear in $N_c$ and is the
dominant term in the mass formula; in the limit $N_c \rightarrow \infty$
it is the only one which survives. $O_2$ is the dominant part
of the spin-orbit operator. It acts on the excited quark and is of
order $N^0_c$. The operator $O_3$ is a composite two-body operator.
It contains the tensor operator (\ref{TENSOR}) which acts on the
excited quark and the generators $g^{ia}$ and $G^{ja}_c$ acting on the
excited quark and on the core, respectively.
The contribution of $G^{ja}_c$ sums coherently, thus it
introduces an extra power of $N_c$,
which implies that the matrix elements of $O_3$ are of order $N^0_c$.
For the same reason the matrix elements of
$O_4$ are also of order $N^0_c$.
As explained in the next section, we could not obtain its coefficient $c_4$,
because of scarcity of data for the $[{\bf 70},\ell^+]$ multiplet.
The spin-spin operator $O_5$ is of order $1/N_c$, but its
contribution dominates over all the other terms of the mass operator
containing spin.
Here we take into account the isospin-isospin operator, denoted by $O_6$,
having matrix elements of order $N^0_c$ due to the
presence of $T_c$, which sums coherently. Up to an overall constant,
it is one of the four independent operators of order $N^0_c$
which, together with $O_1$, are needed to describe the
submultiplet structure of $[{\bf 70},1^-]$ \cite{COLEB}. Incidentally, this operator
has been omitted in the analysis of Ref. \cite{SGS}. Its coefficient $c_6$ is
indicated in Table \ref{operators}.
In Tables \ref{NUCLEON}, \ref{DELTA} and \ref{SINGLET}
we show the diagonal matrix elements
of the operators $O_i$ for octet, decuplet and flavor singlet states
respectively. From these tables one can read off the large-$N_c$ behaviour mentioned
above. Details about $O_3$ are given in Appendix A.
Its matrix elements change the analytic dependence on $N_c$
in going from SU(2) to SU(3). This happens for octet resonances
which can be seen
by comparing the column 3 of Table \ref{NUCLEON} with the corresponding
result from Ref. \cite{MS2}. The change is that the factor $N_c + 1$ in SU(2)
becomes $N_c + 1/3$ in SU(3). The same change takes place for all operators
$O_i$ containing $G^{ja}_c$ as for example the operator $O_4$ also
presented in Appendix A.
The SU(6) breaking operators $B_1$, $B_2$ and $B_4$, in the notation
of Ref. \cite{SGS},
expected to contribute to the mass are listed in Table \ref{operators}.
The operators $B_1$, $B_2$ are the standard breaking operators
while $B_4$ is directly related to the spin-orbit splitting.
They break the SU(3)-flavor symmetry to first order in $\epsilon \simeq 0.3$
where $\epsilon$ is proportional to the mass difference between the strange
and $u, d$ quarks.
Table V gives the matrix elements of the excited quark operator
$t_8$ and of the core operator $T^c_8$, which are necessary to construct
the matrix elements of $B_1$ and $B_2$.
These expressions have been obtained as indicated in Appendix B.
It is interesting to note that they are somewhat different from those
of Ref. \cite{SGS}. However, for all cases with physical quantum numbers, at any $N_c$, our values are
identical to those of Ref. \cite{SGS}, so that for $N_c = 3$ there is
no difference.
For completeness, Table \ref{b3} gives the matrix elements
of $3\ell^ig^{i8}$ needed to construct $B_4$.
They were obtained from the formula (\ref{B4}) derived in Appendix B. As above, they are different from those of Ref. \cite{SGS} except for physical quantum numbers.
Unfortunately none of
the presently known resonances has nonvanishing matrix elements for $B_4$.
By definition all $B_i$ have vanishing matrix elements for nonstrange resonances.
In addition, the matrix elements of $B_4$ vanish for $\ell = 0$ resonances,
and they also vanish for the two remaining experimentally known strange resonances.
For this reason the coefficient $d_4$ could not be determined.
\section{Results}
Comparing Table I with our previous
results of Ref. \cite{MS2} for nonstrange baryons, one can see that the
addition of strange baryons in the fit
has not much changed the values of the coefficients $c_1$ and $c_5$
(previously $c_4$).
The spin-orbit coefficient $c_2$
has changed sign but remains small in absolute value.
The resonance $F_{05} (2100)$ is mostly responsible for this change.
But actually the crucial experimental input for the spin-orbit contribution
should come from $\Lambda$'s, as in the case of the
$[{\bf 70},1^-]$ multiplet \cite{SGS}. Unfortunately data for
the two flavor singlets with $\ell \neq 0$,
$^2\Lambda'[{\bf 70},2^+]5/2$ and $^2\Lambda'[{\bf 70},2^+]3/2$,
which are spin-orbit partners are missing (see Table \ref{MASSES}).
If observed, they
will help to fix the strength and sign of
the spin-orbit terms unambiguously inasmuch as $O_3$, $O_4$ and $O_5$ do
not contribute to their mass.
Presently, due to the large uncertainty obtained from the fit of $c_2$,
there is still some overlap with the value obtained from nonstrange
resonances.
The coefficient $c_3$ is now about a factor of two smaller in absolute value. Interestingly,
the present values of the coefficients $c_1$, $c_2$ and $c_5$ follow the
trend discussed in Ref. \cite{MS2}, namely the spin-spin
and the spin-orbit contributions decrease with the excitation energy,
the dominant part remaining the spin-spin term,
similar to constituent quark model results
with a hyperfine interaction.
Regarding the SU(3) breaking terms, the coefficient
$d_1$ has the opposite sign compared to that of Ref. \cite{SGS} and
is about four times larger in absolute value.
The coefficient $d_2$ has the same sign and about the same order
of magnitude. One can conclude that the
SU(3)-flavor breaking is roughly similar in the $[{\bf 70},1^-]$ and the
$[{\bf 70},\ell^+]$ multiplets.
The resonances belonging to the $[{\bf 70},\ell^+]$ multiplet together with their
calculated masses are presented in Table \ref{MASSES}.
The angular momentum coupling allows for 8 octets, with $J$ ranging from
7/2 to 1/2, three decuplets with $J$ from 5/2 to 1/2 and three flavor singlets
with $J$ = 5/2, 3/2 or 1/2. Ignoring isospin breaking, there are in all 47
resonances
from which 12 are fitted and 35 are predictions.
The best fit gave $\chi^2_{\rm dof} \simeq 1$.
Among the 12 presently fitted resonances only five are new: the strange resonances.
This reflects the fact that the experimental situation is still
rather poor in this energy range. The known resonances are three-, two- and one-star.
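Since the mass formula (\ref{massoperator}) is linear in the unknown coefficients, the fit is a standard weighted linear least-squares problem, with the diagonal matrix elements of Tables \ref{NUCLEON}-\ref{SINGLET} entering as known coefficients. The following schematic Python sketch (ours) illustrates the procedure; the design matrix, masses and errors are synthetic placeholders, not our actual input:
\begin{verbatim}
# Schematic weighted least-squares fit M_k = sum_i c_i <O_i>_k (ours).
# Matrix elements, masses and errors are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_op = 12, 5                      # resonances, operators
X = rng.normal(size=(n_res, n_op))       # <O_i>_k placeholders
X[:, 0] = 7.0                            # O_1 ~ N_c for every state
c_true = np.array([210.0, 15.0, -80.0, 450.0, 25.0])
sigma = np.full(n_res, 20.0)             # experimental errors (MeV)
M_exp = X @ c_true + rng.normal(0.0, sigma)

W = X / sigma[:, None]                   # weighted design matrix
c_fit, *_ = np.linalg.lstsq(W, M_exp / sigma, rcond=None)
chi2 = np.sum(((M_exp - X @ c_fit) / sigma) ** 2)
cov = np.linalg.inv(W.T @ W)             # coefficient covariance
print("c_fit    =", np.round(c_fit, 1))
print("chi2/dof =", chi2 / (n_res - n_op))
print("errors   =", np.round(np.sqrt(np.diag(cov)), 1))
\end{verbatim}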
For all masses the main contribution comes from the operator $O_1$.
In the context of a constituent quark model this corresponds to the
contribution of the spin-independent part of the Hamiltonian, namely
the free mass term plus the kinetic and the confinement energy. A difference
is that this contribution is constant for all resonances here,
while in quark models the mass difference between the strange and the $u, d$
quarks is taken into account explicitly in the free mass term. Here
this difference is embedded into the flavor breaking terms $B_i$.
The spin-orbit operator $O_2$ naturally contributes to states with $\ell \neq 0$ only.
The operator $O_3$ contributes to states with $S = 3/2$ only. For $S=1/2$ states it gives no contribution, either due to the cancellation of a $6j$ coefficient or because the wave function has $S_c=0$, as for example for flavor singlet states.
We have analyzed the role of the operator $O_4$ described in
Appendix B. This is an operator of order $N^0_c$, like
$O_2$, $O_3$ and $O_6$.
As in Refs. \cite{CCGL} and \cite{SGS},
the combination $O_2 + O_4$ is of order $1/N_c$ for octets and decuplets,
but this is no longer valid for flavor singlets. It means that the operators
$O_2$ and $O_4$ are independent in SU(3) and both have to be included in the fit.
However, the inclusion of $O_4$ considerably deteriorated the fit
by abnormally increasing the spin-orbit contribution by one order of
magnitude. Therefore the contribution of $O_4$
cannot be constrained with the present data and we have to wait
until more data become available, especially on strange resonances.
To estimate the role of the isospin-isospin operator $O_6$ we have made
a fit without the contribution of this operator. This fit gave
$\chi^2_{\rm dof} \simeq 0.9$ and about the same values for
$c_i$ and $d_i$ as that with $O_6$ included. This means that the
presence of $O_6$ is not essential at the present stage.
The fitted value of the $N(1990) F_{17}$ resonance slightly deteriorates
with respect to the SU(4) case. The reason is the negative contribution
of the spin-orbit term. Further analysis, based on more data, is
needed in the future to clarify the change of sign in the spin-orbit term.
Of special interest is the fact that the resonance $\Lambda(1810) P_{01}$
gives the best fit when interpreted as a flavor singlet. Such an interpretation
is in agreement with that of Ref. \cite{GR} where the baryon spectra were
derived from a flavor-spin hyperfine interaction, rooted in
pseudo-scalar meson (Goldstone boson) exchange. Thus the
flavor-spin symmetry is common to both calculations. Moreover, the
dynamical origin of the operator $O_3$, which does not directly contribute
to $\Lambda(1810) P_{01}$, but plays an important role
in the total fit, is thought to be related to pseudo-scalar meson exchange
\cite{CCGL}. Hopefully, this study may help
in shedding some light on the QCD dynamics hidden in the coefficients
$c_i$.
\section{Conclusions}
The present results confirm the behaviour of some of the coefficients $c_i$ of the
mass formula at large excitation energy, observed previously \cite{MS2}.
This shows that the importance of the spin-dependent terms of the mass operator
decreases with increasing excitation energy. At any energy, these terms are dominated
by the spin-spin contribution, as in constituent quark model studies.
Thus the $1/N_c$ expansion can provide a deeper understanding of the successes
of the quark models.
We have also found that the SU(3) breaking corrections are comparable in
size with the $1/N_c$ corrections, as for the
$[{\bf 70},1^-]$ multiplet \cite{SGS} which successfully explained the
$\Lambda(1520) - \Lambda(1405)$ splitting.
The analysis of the $[{\bf 70}, \ell^+]$ remains an open problem. It depends
on future experimental data which may help to clarify the role of
various terms contributing to the mass operator and in particular of
$O_2$ and $O_4$. The present approach provides the theoretical
framework to pursue this study.
\section{Introduction}\label{sec:intro}
$\mathrm{MINER}{\nu}\mathrm{A}$ is an on-axis neutrino-nucleus scattering experiment at Fermilab's NuMI (Neutrinos at the Main Injector) beamline that will measure interaction cross-sections and event kinematics in exclusive and inclusive states to high precision.
The experiment will also examine nuclear effects and parton distribution functions (PDFs) using a variety of target materials.
\subsection{Precision Neutrino Oscillation Experiments and Nuclear Effects}\label{subsec:precneut}
Interaction details are crucial for neutrino energy estimation and for background separation in oscillation experiments.
For example, in a $\nu_{\mu}$ disappearance experiment, the two-flavor survival probability is shown in Eq. \ref{eq:numudisp}:
\begin{equation}
P\left( \nu_{\mu} \rightarrow \nu_{\mu} \right) = 1 - \sin^2 \left( 2\theta_{23} \right) \sin^2 \left( \frac{1.27 \Delta m_{23}^2 (eV^2) L(km)}{E_{\nu}(GeV)} \right)
\label{eq:numudisp}
\end{equation}
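As a minimal numerical illustration (ours), Eq. \ref{eq:numudisp} can be evaluated directly; the baseline and oscillation parameters below are representative placeholders rather than measured inputs:
\begin{verbatim}
# Two-flavor nu_mu survival probability of Eq. (numudisp).
# Baseline and oscillation parameters are representative only.
import numpy as np

def p_mumu(E_GeV, L_km=735.0, dm2_eV2=2.4e-3, sin2_2th23=1.0):
    return 1.0 - sin2_2th23 * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

for E in (1.0, 1.5, 2.0, 3.0, 5.0):
    print(E, round(float(p_mumu(E)), 3))
\end{verbatim}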
Experiments cannot directly measure the neutrino energy that appears in this relation, and visible energy is a function of flux, cross-section, and detector response.
Interactions occur in dense nuclear matter, making final state interactions (FSI) significant in producing the observed particles.
Near-to-Far Detector ratios cannot entirely eliminate the associated uncertainties because the energy spectra at the two detectors are different due to beam, oscillation, matter, and even nuclear effects if the detector materials differ.
\begin{figure}[h]
\includegraphics[width=16pc]{./nunubarAsym.pdf}\hspace{2pc}
\begin{minipage}[b]{14pc}\caption{\label{fig:nunubarAsym}Oscillation asymmetry between neutrinos and anti-neutrinos as a function of $\sin^2 2\theta_{13}$ for different CP violating phases. Figure adapted from \cite{cpvio}.}
\end{minipage}
\end{figure}
Understanding these effects is especially important in light of recent ``hints'' of a large $\theta_{13}$ \cite{t2k}. As Figure \ref{fig:nunubarAsym} illustrates, for a fixed value of the CP violating phase, $\delta$, the $\nu$-$\bar{\nu}$ oscillation asymmetries are smaller at larger values of $\theta_{13}$. Therefore, for larger values of $\theta_{13}$, experiments are measuring a smaller difference between two larger numbers and systematic errors increase in relative importance.
In this case, $\mathrm{MINER}{\nu}\mathrm{A}$ cross-section and kinematic measurements increase in value.
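The trend in Figure \ref{fig:nunubarAsym} can be reproduced with a simple vacuum three-flavor calculation. The following sketch (ours) builds the PMNS matrix, computes $P(\nu_{\mu}\rightarrow\nu_{e})$ and its anti-neutrino counterpart, and prints the asymmetry; all mixing parameters and the baseline are representative placeholders, and matter effects are ignored:
\begin{verbatim}
# Vacuum three-flavor P(nu_mu -> nu_e) and the nu/nubar asymmetry
# (ours). Mixing parameters and baseline are placeholders; matter
# effects are neglected.
import numpy as np

def pmns(th12, th13, th23, dcp):
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    e = np.exp(1j * dcp)
    return np.array([
        [c12 * c13, s12 * c13, s13 * np.conj(e)],
        [-s12 * c23 - c12 * s23 * s13 * e,
         c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e,
         -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]])

def p_mue(U, L_km, E_GeV, m2=(0.0, 7.5e-5, 2.5e-3)):
    # amplitude A = sum_i U_{e i} exp(-i m_i^2 L / 2E) U*_{mu i}
    ph = np.exp(-2j * 1.267 * np.asarray(m2) * L_km / E_GeV)
    return abs(np.sum(U[0, :] * ph * np.conj(U[1, :]))) ** 2

th12, th23, L, E = 0.59, np.pi / 4, 295.0, 0.6
for s2_2th13 in (0.05, 0.10):
    th13 = 0.5 * np.arcsin(np.sqrt(s2_2th13))
    for dcp in (0.0, np.pi / 2):
        U = pmns(th12, th13, th23, dcp)
        P, Pbar = p_mue(U, L, E), p_mue(np.conj(U), L, E)
        print(s2_2th13, dcp, round((P - Pbar) / (P + Pbar), 3))
\end{verbatim}
At $\delta=0$ the asymmetry vanishes, and for fixed $\delta$ it shrinks as $\sin^2 2\theta_{13}$ grows, which is the behavior discussed above.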
Additionally, $\mathrm{MINER}{\nu}\mathrm{A}$ supports a nuclear effects program complementary to charged lepton scattering measurements. Many quantities of interest carry large uncertainties: axial form factors as a function of A and momentum transfer ($Q^2$), quark-hadron duality, $x$-dependent nuclear effects, etc.
\section{The $\mathrm{MINER}{\nu}\mathrm{A}$ Detector and Operations}\label{sec:detlive}
$\mathrm{MINER}{\nu}\mathrm{A}$ is a horizontal stack of 120 similar modules weighing approximately two tons each.
Each module has an inner detector (ID) composed of triangular plastic scintillator strips and an outer detector (OD) steel frame instrumented with plastic scintillator bars.
Most modules feature an ID composed of two stereoscopic planes of scintillator, but some in the nuclear target and calorimetric regions of the detector give up one or both scintillator planes for target or absorber materials.
See Figure \ref{fig:detector} for a schematic of the detector.
\begin{figure}
\begin{center}
\includegraphics[width=28pc]{./DetectorGraphic01.pdf}
\end{center}
\caption{\label{fig:detector}The $\mathrm{MINER}{\nu}\mathrm{A}$ detector as of Summer 2011. (The water target is not installed yet.)}
\end{figure}
Figure \ref{fig:potlive} illustrates integrated data collection and livetime.
$\mathrm{MINER}{\nu}\mathrm{A}$ began its physics run in March 2010 in the NuMI beam forward horn current (FHC) mode, focusing $\pi^{+}$ mesons (``neutrino mode''). Reverse horn current (RHC) mode data, focusing $\pi^{-}$ mesons (``anti-neutrino mode''), were taken prior to March 2010 and again in Winter 2010-2011.
\begin{figure}[ht]
\begin{minipage}[b]{0.6\linewidth}
\includegraphics[width=20pc]{./LiveTime.pdf}\hspace{2pc}
\end{minipage}
\begin{minipage}[b]{0.4\linewidth}
\caption{\label{fig:potlive}Accumulated protons on target (P.O.T.) and live-time. Prior to March 2010, we integrated data using a partially constructed version of the detector.}
\end{minipage}
\end{figure}
\section{Particle ID}\label{sec:pid}
There are a number of exciting analyses underway in $\mathrm{MINER}{\nu}\mathrm{A}$, but early results will focus on charged-current reactions. As such, muon identification is important. Figure \ref{fig:MuonEffic} illustrates muon classification and reconstruction efficiency in neutrino Monte Carlo (MC). $\mathrm{MINER}{\nu}\mathrm{A}$ uses GENIE for event generation \cite{genie}. Current reconstruction efforts are focused on muon tracks matched into the MINOS near detector \cite{minos}, and future efforts will begin to emphasize particles stopping in $\mathrm{MINER}{\nu}\mathrm{A}$.
\begin{figure}
\begin{center}
\includegraphics[width=28pc]{./MuonEffic.pdf}
\end{center}
\caption{\label{fig:MuonEffic}Muon reconstruction topologies and efficiency in $\mathrm{MINER}{\nu}\mathrm{A}$. Reconstruction efficiencies for particles matched into MINOS include the effect of reconstruction efficiency in that detector.}
\end{figure}
The first step in a stopped muon analysis is Michel electron identification. Figure \ref{fig:MichelDisplay} provides an illustration of an event candidate from data and Figure \ref{fig:MichelFits} is a data to MC comparison of Michel energies and lifetimes in the $\mathrm{MINER}{\nu}\mathrm{A}$ tracker. Table \ref{tab:michellife} gives the results of fits to the muon lifetime for the plot on the right in Figure \ref{fig:MichelFits}.
\begin{figure}
\begin{center}
\includegraphics[width=20pc]{./MichelDisplay.pdf}
\end{center}
\caption{\label{fig:MichelDisplay}A Michel electron candidate in $\mathrm{MINER}{\nu}\mathrm{A}$ data.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=28pc]{./MichelFits.pdf}
\end{center}
\caption{\label{fig:MichelFits}Michel electron energies and lifetimes in $\mathrm{MINER}{\nu}\mathrm{A}$.}
\end{figure}
\begin{table}
\caption{\label{tab:michellife}Michel electron lifetime fits. MC is background-free $\mu^-$, while data contains a small $\mu^+$ contamination. The nominal $\mu^-$ lifetime in carbon is 2026 ns.}
\begin{center}
\begin{tabular}{lc}
\br
& Muon Lifetime in Plastic (ns) \\
\mr
MC & $2100 \pm 10$ \\
Data & $2120 \pm 20$ \\
\br
\end{tabular}
\end{center}
\end{table}
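For illustration, a binned exponential fit of the kind behind Table \ref{tab:michellife} can be sketched as follows (ours, using synthetic decay times rather than $\mathrm{MINER}{\nu}\mathrm{A}$ data):
\begin{verbatim}
# Sketch of a binned exponential fit for the Michel lifetime.
# Decay times are synthetic; this is not the experiment's fit code.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = rng.exponential(scale=2100.0, size=20000)      # ns, tau_true = 2100

counts, edges = np.histogram(t, bins=60, range=(0.0, 9000.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(t, n0, tau):
    return n0 * np.exp(-t / tau)

popt, pcov = curve_fit(model, centers, counts, p0=(counts[0], 2000.0),
                       sigma=np.sqrt(np.maximum(counts, 1)))
print("tau = %.0f +/- %.0f ns" % (popt[1], np.sqrt(pcov[1, 1])))
\end{verbatim}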
\section{Conclusion}\label{sec:conclusion}
$\mathrm{MINER}{\nu}\mathrm{A}$ is actively collecting data and building software infrastructure for physics analysis. Operations are stable and we are progressing in particle identification and event reconstruction.
\section*{Acknowledgments}
This work was supported by DOE Grant No. DE-FG02-91ER40685.
\section*{References}
\section{Introduction}
\subsection{The goal of this paper}
In the past decade, much effort has been devoted to the theoretical prediction and experimental search for topological superconductors (TSC) and topological insulators (TI) in non-interacting or weakly-interacting systems \cite{KaneRMP,ZhangRMP}. However, in realistic materials, strong electronic interactions typically play a very important role and cannot be neglected or treated as perturbations, especially in low dimensional systems. Therefore, a complete construction and classification of TSC/TI in interacting fermionic systems becomes a very important but challenging problem. It turns out that a large class of TSC/TI require certain symmetry protection and they can be connected to a trivial disordered phase (e.g., an s-wave BCS superconductor or an atomic insulator) in the absence of global symmetry. Such ``integer'' TSC/TI are short-range entangled quantum states and they are actually the simplest examples of symmetry-protected topological (SPT) phases \cite{ZCGu2009}.
Thanks to recent cutting-edge breakthroughs in the classification and construction of SPT phases for interacting bosonic and fermionic systems \cite{pollmann10,chen11a,chen11b,schuch11,XieChenScience,cohomology,invertible1,invertible2,invertible3,Kapustin2014,wen15,ZCGu2012,Kapustin2015,Kapustin2017,general1,general2}, a complete understanding of ``integer'' TSC/TI protected by internal symmetry (e.g., time reversal symmetry or spin rotational symmetry) for interacting electronic systems has been achieved \cite{fidkowski10, fidkowski11,wangc-science,invertible2,ChongWang2014,Witten}. In general, by ``gauging'' the internal (unitary) symmetry~\cite{LevinGu,Gu-Levin,gauging1,threeloop,ran14,wangj15,wangcj15,lin15,gauging3,dimensionalreduction,gauging2,2DFSPT,braiding} and investigating the braiding statistics of the corresponding gauge fluxes/flux lines, different SPT phases can be uniquely identified.
Moreover, gapless edge states or anomalous surface topological orders have also been proposed as another very powerful way to characterize different SPT phases in interacting systems~\cite{Ashvin2013,ChongWang2013,XieChen2015,ChenjieWang2016,XLQi2013,Senthil2013,Lukasz2013,XieChen2014,ChongWang2014}.
In recent years, the notion of SPT phases was further extended to systems with crystalline symmetry protection and the so-called crystalline SPT phases have been intensively studied \cite{TCI,Fu2012,ITCI,reduction,building,correspondence,SET,230,BCSPT,Jiang2017,Kane2017,Shiozaki2018,ZDSong2018,defect,realspace,KenX,rotation,LuX,YMLu2018,Cheng2018,Hermele2018,Po2020}.
Crystalline SPT phases are not only of conceptual importance, but also provide great opportunities towards experimental realization since space group symmetries naturally exist for any realistic material.
Crystalline TI is the simplest example of crystalline SPT phases, and it has already been realized in many different materials \cite{TCIrealization1,TCIrealization2,TCIrealization3,TCIrealization4}. For free fermionic systems, there are two systematic methods for classifying and characterizing the crystalline TI: one is the so-called ``\textit{symmetry indicators}'' \cite{230,indicator3,indicator5,indicator1,indicator2,Po2020}, which classifies and characterizes the crystalline TI by symmetry representations of band structures at high-symmetry momenta; another is a real-space construction based on the concept of topological crystal \cite{reduction,realspace}. Very recently,
boundary modes \cite{surfaceTCI,d-2,rotationsurface} of the so-called higher-order TSC/TI \cite{higher4,higher2,higher3,higher1,higher5,RXZhang2020} protected by crystalline symmetry (with additional time reversal symmetry in certain cases)
have also attracted a lot of interest in both 2D and 3D.
In general, an $n$th-order TSC/TI hosts gapless modes at the
system boundary of codimension $n$. For example, a second-order 3D TI has gapless states on
its hinges, while its surfaces are gapped, and a third-order 3D TI has gapless states on its corners,
while both its surfaces and hinges are gapped. Nevertheless, most of these studies still focus on free fermionic systems and it is not quite clear whether the corresponding gapless boundary modes are stable against interactions.
On the other hand, for interacting bosonic systems, it was pointed out that the classification of crystalline SPT phases is closely related to the SPT phases with internal symmetry. In Ref.~\cite{correspondence},
a ``\textit{crystalline equivalence principle}'' was proposed with a rigorous mathematical proof: i.e., crystalline topological phases with space group symmetry $G$ are in one-to-one correspondence with topological phases protected by the same internal symmetry $G$, but acting in a twisted way, where if an element of $G$ is a mirror reflection (orientation-reversing symmetry), it should be regarded as a time-reversal symmetry (antiunitary symmetry). This principle indicates the profound relationship between crystalline SPT phases and SPT phases protected by internal symmetry. Thus, the classification of crystalline SPT phases for free-fermion and interacting bosonic systems can be computed systematically.
Despite the huge success in understanding crystalline SPT phases for free-fermion and interacting bosonic systems, a systematic understanding of crystalline SPT phases for interacting fermionic systems is still lacking. Although it is believed that the strategy of the classification schemes \cite{correspondence,defect,realspace,KenX} should still work and some simple examples have been studied~\cite{rotation,YMLu2018,dihedral}, most studies focus on systems with point group symmetry only and the generic cases remain unclear. Recent studies on generalizing the ``\textit{crystalline equivalence principle}'' to interacting fermionic systems shed new light towards a complete understanding of crystalline SPT phases for interacting fermions.
In Ref. \onlinecite{dihedral}, by some explicit calculations for both crystalline SPT phases and SPT phases protected by internal symmetry, it has been demonstrated that the \textit{crystalline equivalence principle} is still valid for 2D crystalline SPT phases protected by point group symmetry, but in a twisted way, where spinless (spin-1/2) fermions should be mapped into spin-1/2 (spinless) fermions.
In this paper, we aim at systematically constructing and classifying crystalline TSC/TI for 2D interacting fermionic systems and establishing a general paradigm of real-space construction for interacting fermionic crystalline SPT phases. We will consider both spinless and spin-1/2 fermionic systems. In particular, we obtain an intriguing fermionic TSC that cannot be realized in either free-fermion or interacting bosonic systems: a $p4m$ (\#11 wallpaper group) symmetric 2D system with spinless fermions. These TSC can be realized in systems with co-planar spin order and might have very interesting experimental implementations. Furthermore, we compare all our results with the classifications of 2D fermionic SPT (FSPT) phases protected by corresponding internal symmetries. We confirm the crystalline equivalence principle for generic 2D interacting fermionic systems, where a mirror reflection symmetry action should be mapped onto a time-reversal symmetry action, and that spinless (spin-1/2) fermionic systems should be mapped into spin-1/2 (spinless) fermionic systems.
Our general real-space construction scheme includes following three major steps:
\begin{description}
\item[Cell decomposition] For a specific wallpaper group, firstly we can divide it into an assembly of unit cells; then we divide each unit cell into an assembly of lower-dimensional blocks.
\item[Block-state decoration] For a specific wallpaper group with cell decomposition, we can decorate lower-dimensional block-states on different blocks. A gapped assembly of block-states is called an \textit{obstruction-free} decoration.
\item[Bubble equivalence] For a specific obstruction-free decoration, we need to further examine
whether such a decoration can be trivialized or not. Finally, an obstruction- and trivialization-free block-state decoration corresponds to a 2D fermionic crystalline SPT phase.
\end{description}
\subsection{Space group symmetry for spinless and spin-1/2 systems\label{spinSec}}
Here we would also like to clarify the precise meaning of ``spinless'' and ``spin-1/2'' fermions for systems with and without $U^f(1)$ charge conservation.
For a fermionic system with total symmetry group $G_f$, there is always a subgroup $\mathbb{Z}_2^f=\{1,P_f=(-1)^{F}\}$, where $F$ is the total number of fermions. $\mathbb{Z}_2^f$ is the center of $G_f$ because all physical symmetries commute with $P_f$, i.e., cannot change fermion parity of the system. In particular, for systems without $U^f(1)$ charge conservation, we can define the bosonic (physical) symmetry group by a quotient group $G_b=G_f/\mathbb{Z}_2^f$. In reverse, for a given physical symmetry group $G_b$, there are many different fermionic symmetry groups $G_f$ which are the central extension of $G_b$ by $\mathbb{Z}_2$. It can be expressed by the following short exact sequence:
\begin{align}
0\rightarrow\mathbb{Z}_2^f\rightarrow G_f\rightarrow G_b\rightarrow0
\label{SES1}
\end{align}
and different extensions $G_f$ are characterized by different factor systems of Eq. (\ref{SES1}) that are 2-cocycles $\omega_2\in\mathcal{H}^2(G_b,\mathbb{Z}_2)$. Consequently, we denote $G_f$ as $\mathbb{Z}_2^f\times_{\omega_2}G_b$.
For systems with additional $U^f(1)$ charge conservation, the elements of $U(1)$ are $U_\theta=e^{i\theta F}$. The aforementioned fermion parity operator $P_f=U_\pi$ is the order-2 element of $U(1)$, hence we denote this charge conservation symmetry by $U^f(1)$ with a superscript $f$. It is easy to see that $U^f(1)$ charge conservation is a normal subgroup of the total symmetry group $G_f$, which can be expressed by the following short exact sequence:
\begin{align}
0\rightarrow U^f(1)\rightarrow G_f\rightarrow G\rightarrow0
\label{SES2}
\end{align}
where $G:=G_f/U^f(1)$. In reverse, for a given physical symmetry group $G$, we can define $G_f=U^f(1)\rtimes_{\omega_2}G$.
Here $\omega_2$ is related to the extension of the physical symmetry group $G$. The multiplication of the total symmetry group $G_f$ is defined as:
\begin{align}
(1,g)\times(1,h)=\left(e^{2\pi i\omega_2(g,h)F},gh\right)\in G_f
\end{align}
with $\omega_2\in\mathbb{R}/\mathbb{Z}=[0,1)$ as a $U(1)$ phase, associated with $g,h\in G$. Therefore $\omega_2$ is a 2-cocycle in $\mathcal{H}^2(G,\mathbb{R}/\mathbb{Z})$.
The spin of fermions (spinless or spin-1/2) is characterized by different 2-cocycles $\omega_2$ for both cases, and spinless/spin-1/2 fermions correspond to trivial/nontrivial $\omega_2$. For example, consider the even-fold dihedral group $D_{2n}$ symmetry with two generators $\boldsymbol{R}$ and $\boldsymbol{M}$ satisfying $\boldsymbol{R}^{2n}=\boldsymbol{M}^2=I$ ($n\in\mathbb{Z}$ and $I$ is the identity).
Different extensions of fermion parity are characterized by different 2-cocycles $\omega_2$:
\begin{align}
\omega_2\in\mathcal{H}^2(D_{2n},\mathbb{Z}_2)=\mathbb{Z}_2^3
\end{align}
In particular, spinless fermions correspond to a 2-cocycle $\omega_2$ such that:
\begin{align}
\left\{
\begin{aligned}
&\boldsymbol{R}^{2n}=1\\
&\boldsymbol{M}^2=1\\
\end{aligned}
\right.
\end{align}
Hence we can simply choose the trivial 2-cocycle $\omega_2(a_g,b_h)=1$ for $\forall a_g,b_h\in D_{2n}$ for spinless fermions.
Spin-1/2 fermions correspond to a 2-cocycle $\omega_2$ such that:
\begin{align}
\left\{
\begin{aligned}
&\boldsymbol{R}^{2n}=P_f\\
&\boldsymbol{M}^2=P_f\\
&\boldsymbol{M}\boldsymbol{R}\boldsymbol{M}^{-1}\boldsymbol{R}=1
\end{aligned}
\right.
\end{align}
To satisfy these conditions, we choose the 2-cocycle $\omega_2$ as follows: for all $a_g,b_h\in D_{2n}$, we have:
\begin{align}
\omega_2(a_g,b_h)=&\left\lfloor\frac{\left[(-1)^{g+h}a\right]_{2n}+\left[(-1)^{h}b\right]_{2n}}{2n}\right\rfloor\nonumber\\
&+(1-\delta_a)(a+1)h+g\cdot h
\end{align}
where we define $[x]_n\equiv x(\mathrm{mod}~n)$, $\lfloor x\rfloor$ as the greatest integer less than or equal to $x$, and
\begin{align}
\delta_a=
\left\{
\begin{aligned}
&1~~\mathrm{if}~a=0\\
&0~~\mathrm{otherwise}
\end{aligned}
\right.
\end{align}
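The defining properties of this $\omega_2$ can be checked by brute force. The following Python sketch (ours) assumes the product convention $(\boldsymbol{R}^a\boldsymbol{M}^g)(\boldsymbol{R}^b\boldsymbol{M}^h)=\boldsymbol{R}^{a+(-1)^gb}\boldsymbol{M}^{g+h}$, tests the 2-cocycle condition $\omega_2(b_h,c_k)+\omega_2(a_gb_h,c_k)+\omega_2(a_g,b_hc_k)+\omega_2(a_g,b_h)=0~(\mathrm{mod}~2)$, and reads off $\boldsymbol{R}^{2n}=P_f$ and $\boldsymbol{M}^2=P_f$ by accumulating phases:
\begin{verbatim}
# Brute-force check (ours) of the spin-1/2 cocycle on D_{2n}.
# Elements are pairs (a, g) <-> R^a M^g with the assumed product
# (a, g)(b, h) = ([a + (-1)^g b]_{2n}, [g + h]_2).
from itertools import product

n = 2                                   # D_4 as a small test case
N = 2 * n

def mul(x, y):
    (a, g), (b, h) = x, y
    return ((a + (-1) ** g * b) % N, (g + h) % 2)

def omega(x, y):                        # omega_2 of the spin-1/2 case
    (a, g), (b, h) = x, y
    carry = (((-1) ** (g + h) * a) % N + ((-1) ** h * b) % N) // N
    extra = 0 if a == 0 else (a + 1) * h
    return (carry + extra + g * h) % 2

G = list(product(range(N), range(2)))
bad = [t for t in product(G, repeat=3)
       if (omega(t[1], t[2]) + omega(mul(t[0], t[1]), t[2])
           + omega(t[0], mul(t[1], t[2])) + omega(t[0], t[1])) % 2]
print("cocycle violations:", len(bad))  # 0 => valid 2-cocycle

def word_phase(word):                   # accumulated P_f phase
    acc, ph = (0, 0), 0
    for w in word:
        ph = (ph + omega(acc, w)) % 2
        acc = mul(acc, w)
    return acc, ph

print("R^{2n}:", word_phase([(1, 0)] * N))   # ((0, 0), 1): R^{2n} = P_f
print("M^2   :", word_phase([(0, 1)] * 2))   # ((0, 0), 1): M^2 = P_f
\end{verbatim}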
Here we notice that the translation operations are not relevant to the spin of fermions.
\subsection{Summary of main results}
\begin{table}[t]
\renewcommand\arraystretch{1.2}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$~~G_b~~$&$~~~E_{0}^{\mathrm{1D}}~~~$&$~~~E_{0}^{\mathrm{0D}}~~~$&~~~$\mathcal{G}_0$~~~\\
\hline
$p1$&${\color{red}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2\times\mathbb{Z}_4}$\\
\hline
$p2$&$0$&${\color{red}\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2}$\\
\hline
$pm$&${\color{red}\mathbb{Z}_2^3}$&$~{\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2}~$&${\color{red}\mathbb{Z}_2^5}\times{\color{blue}\mathbb{Z}_2}$\\
\hline
$pg$&${\color{red}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2\times\mathbb{Z}_4}$\\
\hline
$cm$&${\color{red}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2}$\\
\hline
$pmm$&$0$&${\color{red}\mathbb{Z}_2^4}\times{\color{blue}\mathbb{Z}_2^4}$&${\color{red}\mathbb{Z}_2^4}\times{\color{blue}\mathbb{Z}_2^4}$\\
\hline
$pmg$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2^2}$\\
\hline
$pgg$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2}$\\
\hline
$cmm$&$0$&${\color{red}\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2^2}$\\
\hline
$p4$&$0$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_4\times\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_4\times\mathbb{Z}_2}$\\
\hline
$p4m$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2^3}$&${\color{red}\mathbb{Z}_2^4}\times{\color{blue}\mathbb{Z}_2^3}$\\
\hline
$p4g$&$0$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2^2}$\\
\hline
$p3$&$0$&${\color{red}\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_3^3}$&$~{\color{red}\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_3^3}~$\\
\hline
$~~~p3m1~~~$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2}$\\
\hline
$p31m$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2\times\mathbb{Z}_3}$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2\times\mathbb{Z}_3}$\\
\hline
$p6$&$0$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_3^2}$&${\color{red}\mathbb{Z}
_2^2}\times{\color{blue}\mathbb{Z}_3^2}$\\
\hline
$p6m$&$0$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2^2}$\\
\hline
\end{tabular}
\caption{Interacting classification of 2D crystalline TSC for spinless fermionic systems. The results are listed layer by layer, together with their group structures (represented by $\mathcal{G}_0$). We label the classification indices with fermionic/bosonic root phases with red/blue. The fermionic $\mathbb{Z}_4$ indices are obtained from nontrivial extensions between 1D and 0D block-states, thus stacking two root phases gives another fermionic crystalline TSC. In particular, the 1D block-state of the $p4m$ case is an intriguing fermionic SPT phase that cannot be realized in free-fermion or interacting bosonic systems.}
\label{spinless}
\end{table}
\begin{table}[t]
\renewcommand\arraystretch{1.2}
\begin{tabular}{|c|c|c|c|c|}
\hline
$~~G_b~~$&$~~~E_{1/2}^{\mathrm{1D}}~~~$&$~~~E_{1/2}^{\mathrm{0D}}~~~$&$~~~\mathcal{G}_{1/2}~~~$\\
\hline
$p1$&${\color{red}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_4\times\mathbb{Z}_2}$\\
\hline
$p2$&${\color{red}\mathbb{Z}_2^3}$&${\color{red}\mathbb{Z}_4^4}$&${\color{red}\mathbb{Z}_4\times\mathbb{Z}_8^3}$\\
\hline
$pm$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_4^2}$&${\color{red}\mathbb{Z}_4\times\mathbb{Z}_8}$\\
\hline
$pg$&${\color{red}\mathbb{Z}_2^2}$&$~{\color{red}\mathbb{Z}_2}~$&${\color{red}\mathbb{Z}_4\times\mathbb{Z}_2}$\\
\hline
$cm$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_4}$&${\color{red}\mathbb{Z}_2\times\mathbb{Z}_4}$\\
\hline
$pmm$&$~0~$&${\color{blue}\mathbb{Z}_2^8}$&${\color{blue}\mathbb{Z}_2^8}$\\
\hline
$pmg$&${\color{red}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_4^3}$&${\color{red}\mathbb{Z}_4\times\mathbb{Z}_8^2}$\\
\hline
$pgg$&${\color{red}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_4^2}$&${\color{red}\mathbb{Z}_2\times\mathbb{Z}_4\times\mathbb{Z}_8}$\\
\hline
$cmm$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_4}\times{\color{blue}\mathbb{Z}_2^4}$&${\color{red}\mathbb{Z}_8}\times{\color{blue}\mathbb{Z}_2^4}$\\
\hline
$p4$&${\color{red}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_8^2\times\mathbb{Z}_4}$&${\color{red}\mathbb{Z}_2\times\mathbb{Z}_8^3}$\\
\hline
$p4m$&$0$&${\color{blue}\mathbb{Z}_2^6}$&${\color{blue}\mathbb{Z}_2^6}$\\
\hline
$p4g$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_8}\times{\color{blue}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}_2\times\mathbb{Z}_8}\times{\color{blue}\mathbb{Z}_2^2}$\\
\hline
$p3$&$0$&${\color{red}\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_3^3}$&${\color{red}\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_3^3}$\\
\hline
$~~~p3m1~~~$&$0$&${\color{red}\mathbb{Z}_4}$&${\color{red}\mathbb{Z}_4}$\\
\hline
$p31m$&$0$&${\color{red}\mathbb{Z}_4}\times{\color{blue}\mathbb{Z}_3}$&${\color{red}\mathbb{Z}_4}\times{\color{blue}\mathbb{Z}_3}$\\
\hline
$p6$&${\color{red}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}_{12}\times\mathbb{Z}_4}\times{\color{blue}\mathbb{Z}_3}$&${\color{red}\mathbb{Z}_{12}\times\mathbb{Z}_8}\times{\color{blue}\mathbb{Z}_3}$\\
\hline
$p6m$&$0$&${\color{blue}\mathbb{Z}_2^4}$&${\color{blue}\mathbb{Z}_2^4}$\\
\hline
\end{tabular}
\caption{Interacting classification of 2D crystalline TSC for spin-1/2 fermionic systems. The results are listed layer by layer, together with their group structure (represented by $\mathcal{G}_{1/2}$). We label the classification indices with fermionic/bosonic root phases with red/blue. For the $p4$ case, two of the three $\mathbb{Z}_8$ fermionic indices are from 4-fold rotation, thus stacking two root phases yields a bosonic SPT phase; similarly for the fermionic $\mathbb{Z}_8$ index of the $p4g$ case. All other fermionic $\mathbb{Z}_8$ indices are obtained from nontrivial extensions between 1D and 0D block-states, and for these cases, stacking two fermionic root phases gives another fermionic crystalline TSC. In addition, the $\mathbb{Z}_{12}$ index of the $p6$ case is also obtained from 6-fold rotation, and stacking two fermionic root phases leads to a bosonic phase.}
\label{spin-1/2}
\end{table}
We summarize all classification results of 2D crystalline TSC for both spinless and spin-1/2 fermionic systems. We label the classification attributed to $p$-dimensional block-state decorations in spin-$s$ systems by $E_{s}^{p\mathrm{D}}$. For the systems with spinless fermions, the classification results are summarized in Table \ref{spinless}, and the classification data are listed layer by layer, i.e., classification contributed by 0D/1D block-state decorations, respectively. For the systems with spin-1/2 fermions, the classification results are summarized in Table \ref{spin-1/2} layer by layer. Furthermore, we also study the group structure of the classifications by explicitly investigating the possible nontrivial stacking relations between 1D and 0D block-states: for certain cases, a stack of several 1D block-states can be deformed into a 0D block-state; hence the total group can be a nontrivial extension of the 1D block-states by the 0D block-states.
In particular, we label the classification indices with fermionic root phase by red, and the classification indices with bosonic root phase by blue. There is a subtle issue in this terminology: we demonstrate this subtlety using the \#2 wallpaper group $p2$ with spin-1/2 fermions as an example. As illustrated in Fig. \ref{p2}, the on-site symmetry group of an arbitrary 0D block is $\mathbb{Z}_4^f$ as the nontrivial $\mathbb{Z}_2^f$ extension of $\mathbb{Z}_2$. Then the classification data of each 0D block is $\mathbb{Z}_4$, and the physical meaning of this classification index is that there is a nontrivial extension between the fermionic classification data $\mathbb{Z}_2$ and the bosonic classification data $\mathbb{Z}_2$. Therefore, the root phase of this classification index is a complex fermion, which is a fermionic 0D phase, and the corresponding classification index should be labeled in red; however, the stacking of two root phases becomes a bosonic SPT phase.
\begin{table*}[t]
\renewcommand\arraystretch{1.2}
\begin{tabular}{|c|c|c|c|c|}
\hline
$~~G_b~~$&~spinless~~&~spin-1/2~~\\
\hline
$p1$&${\color{red}\mathbb{Z}}$&${\color{red}\mathbb{Z}}$\\
\hline
$p2$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2^4}$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2^3}\times{\color{blue}\mathbb{Z}_2^4}$\\
\hline
$pm$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2^2}$\\
\hline
$pg$&${\color{red}\mathbb{Z}}$&${\color{red}\mathbb{Z}}$\\
\hline
$cm$&${\color{red}\mathbb{Z}}\times{\color{blue}\mathbb{Z}_2}$&${\color{red}\mathbb{Z}}\times{\color{blue}\mathbb{Z}_2}$\\
\hline
$pmm$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2^{3}}\times{\color{blue}\mathbb{Z}_2^7}$&${\color{red}2\mathbb{Z}}\times{\color{blue}\mathbb{Z}_2^{8}}$\\
\hline
$pmg$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2^3}$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2^3}$\\
\hline
$pgg$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2^2}$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2^2}$\\
\hline
$cmm$&${\color{red}\mathbb{Z}\times\mathbb{Z}_2^2}\times{\color{blue}\mathbb{Z}_2^4}$&${\color{red}2\mathbb{Z}\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2^5}$\\
\hline
$p4$&${\color{red}\mathbb{Z}\times\mathbb{Z}_4\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_4^2\times\mathbb{Z}_2}$&${\color{red}\mathbb{Z}\times\mathbb{Z}_4\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_4^2\times\mathbb{Z}_2}$\\
\hline
$p4m$&${\color{red}\mathbb{Z}\times\mathbb{Z}_4\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2^5}$&${\color{red}2\mathbb{Z}\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2^6}$\\
\hline
$p4g$&$~{\color{red}\mathbb{Z}\times\mathbb{Z}_4}\times{\color{blue}\mathbb{Z}_4\times\mathbb{Z}_2}~$&$~{\color{red}\mathbb{Z}\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_4\times\mathbb{Z}_2^2}~$\\
\hline
$p3$&${\color{red}\mathbb{Z}\times\mathbb{Z}_3^2}\times{\color{blue}\mathbb{Z}_3^3}$&${\color{red}\mathbb{Z}\times\mathbb{Z}_3^2}\times{\color{blue}\mathbb{Z}_3^3}$\\
\hline
$~~p3m1~~$&$~{\color{red}\mathbb{Z}\times\mathbb{Z}_3^2}\times{\color{blue}\mathbb{Z}_2}~$&$~{\color{red}\mathbb{Z}\times\mathbb{Z}_3^2}\times{\color{blue}\mathbb{Z}_2}~$\\
\hline
$p31m$&${\color{red}\mathbb{Z}\times\mathbb{Z}_3}\times{\color{blue}\mathbb{Z}_3\times\mathbb{Z}_2}$&${\color{red}\mathbb{Z}\times\mathbb{Z}_3}\times{\color{blue}\mathbb{Z}_3\times\mathbb{Z}_2}$\\
\hline
$p6$&$~{\color{red}\mathbb{Z}\times\mathbb{Z}_3\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_6\times\mathbb{Z}_3\times\mathbb{Z}_2}~$&$~{\color{red}\mathbb{Z}\times\mathbb{Z}_3\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_6\times\mathbb{Z}_3\times\mathbb{Z}_2}~$\\
\hline
$p6m$&${\color{red}\mathbb{Z}\times\mathbb{Z}_3\times\mathbb{Z}_2}\times{\color{blue}\mathbb{Z}_2^3}$&${\color{red}2\mathbb{Z}\times\mathbb{Z}_3}\times{\color{blue}\mathbb{Z}_2^4}$\\
\hline
\end{tabular}
\caption{The interacting classification of crystalline TI for 2D interacting fermionic systems. The results for both spinless and spin-1/2 fermions are summarized together. We note that the classifications are the same for those wallpaper groups with only one reflection axis. We label the classification indices with fermionic/bosonic root phases with red/blue.}
\label{insulator U(1)}
\end{table*}
For 2D crystalline TI protected by both wallpaper group and $U^f(1)$ charge conservation symmetry,
we generalize the procedures of real-space construction highlighted in Sec. \ref{general} to include the internal $U^f(1)$ symmetry.
It turns out that 1D block-state decoration does not contribute any nontrivial crystalline topological phase because of the absence of nontrivial 1D root phase in the presence of $U^f(1)$ symmetry.
All results of classification are summarized in Table \ref{insulator U(1)}, and we label the classification indices with fermionic root phase by red, and the classification indices with bosonic root phase by blue.
The rest of the paper is organized as follows: In Sec. \ref{general}, we introduce the general paradigm of the real-space construction of crystalline SPT phases protected by wallpaper group in 2D interacting fermionic systems. In Sec. \ref{example}, we explicitly show how to construct and classify the wallpaper group SPT phases in 2D interacting fermionic systems for five different crystallographic systems by using real-space construction, for both spinless and spin-1/2 fermions. All classification results are summarized in Tables \ref{spinless} and \ref{spin-1/2}. Furthermore, we also classify the crystalline TI in 2D interacting fermionic systems with additional $U^f(1)$ charge conservation by using a similar real-space construction scheme in Sec. \ref{insulator}, and the results are summarized in Table \ref{insulator U(1)}.
In Sec. \ref{principle}, by comparing these results with the classification results of 2D FSPT phases protected by the corresponding on-site symmetry groups, we verify the \textit{crystalline equivalence principle} for generic 2D interacting fermionic systems.
Finally, conclusions and discussions about further applications of real-space construction and experimental implications are presented in Sec. \ref{conclusion}.
In Supplementary Materials, we first discuss the 2D crystalline TI protected by point group symmetry and compare the results with the classifications of 2D FSPT phases protected by the corresponding internal symmetry, then we discuss the real space construction of TSC and TI for all remaining cases of wallpaper groups \cite{supplementary}.
\section{General paradigm of real-space construction\label{general}}
In this section, we highlight the general paradigm of real-space construction of crystalline SPT phases for 2D interacting fermionic systems. There are three major steps: Firstly, we decompose the whole system into an assembly of unit cells, where each unit cell is composed of several lower-dimensional blocks; secondly, we decorate proper lower-dimensional block-states on them and check their validity (for SPT phases, we require a fully gapped bulk ground state without ground-state degeneracy); if the bulk of a block-state construction cannot be fully gapped, we call such a decoration \textit{obstructed}; finally, we consider the so-called bubble equivalence to investigate all possible \textit{trivializations} (we note that certain block-state decorations actually lead to a trivial crystalline SPT phase). An obstruction-free and trivialization-free decoration corresponds to a nontrivial crystalline SPT phase. Below we demonstrate these procedures in full detail by using the \#14 wallpaper group $p3m1$ as an example.
\subsection{Cell decomposition}
For a 2D system with an arbitrary wallpaper group symmetry, we can divide the whole system into an assembly of unit cells, where different unit cells are identical and related by translation symmetries, as illustrated in the left panel of Fig. \ref{cell}. Therefore, we only need to specify the physics in each unit cell because of the presence of translational symmetry.
Then we decompose a specific unit cell of the wallpaper group $p3m1$ into an assembly of lower-dimensional blocks (see the right panel of Fig. \ref{cell}). Here $\boldsymbol{R}_{\mu_3}$ represents 3-fold rotational symmetry operation centred at the 0D block labeled by $\mu_3$, and $\boldsymbol{M}_{\tau_1}$ represents the reflection symmetry operation with the axis (indicated by the vertical dashed line in right panel of Fig. \ref{cell}) coincided with the 1D block labeled by $\tau_1$.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{p3m1_cell.pdf}
\caption{Cell decomposition of \#14 wallpaper group $p3m1$. Left panel illustrates the inter-cell decomposition that decomposes a lattice to an assembly of unit cells; right panel illustrates the intra-cell decomposition that decompose a unit cell to an assembly of lower-dimensional blocks.}
\label{cell}
\end{figure*}
The physical background of the ``intra-cell'' decomposition is the ``extended trivialization'' in each cell \cite{reduction}. Suppose $|\psi\rangle$ is an SPT state that cannot be trivialized by a symmetric finite-depth local unitary transformation. Due to the translational symmetry, $|\psi\rangle$ can be expressed in terms of a direct product of the wavefunctions of all cells:
\begin{align}
|\psi\rangle=\bigotimes\limits_{c}|\psi_c\rangle
\end{align}
Because of the presence of the translational symmetry, investigation of a specific $|\psi_c\rangle$ in a cell is enough for understanding the SPT state $|\psi\rangle$, since different $|\psi_c\rangle$'s are related by translational symmetries. As a consequence, $|\psi_c\rangle$ will inherit the property that it cannot be trivialized by a symmetric finite-depth local unitary transformation $O^{\mathrm{loc}}$. Nevertheless, we can still define an alternative local unitary to \textit{extensively} trivialize $|\psi_c\rangle$. First we can trivialize the region $\sigma$ (see the right panel of Fig. \ref{cell}): restrict $O^{\mathrm{loc}}$ to $\sigma$ as $O_{\sigma}^{\mathrm{loc}}$ and act with it on $|\psi_c\rangle$:
\begin{align}
O_\sigma^{\mathrm{loc}}|\psi_c\rangle=|T_\sigma\rangle\otimes|\psi_c^{\bar\sigma}\rangle
\end{align}
where the system is in the product state $|T_\sigma\rangle$ in region $\sigma$ and the remainder of the system $\bar\sigma$ is in the state $|\psi_c^{\bar\sigma}\rangle$. To trivialize the system symmetrically, we denote that $V_gO^{\mathrm{loc}}_\sigma V_g^{-1}$ trivializes the region $g\sigma$, where $g$ is the group element of dihedral group $D_3$ generated by 3-fold rotation $\boldsymbol{R}_{\mu_3}$ and reflection $\boldsymbol{M}_{\tau_1}$. Therefore, we act on $|\psi_c\rangle$ with:
\begin{align}
O^{\mathrm{loc}}=\bigotimes_{g\in D_3}V_gO^{\mathrm{loc}}_\sigma V_g^{-1}
\end{align}
which results in an \textit{extensively trivialized} wavefunction:
\begin{align}
&|\psi_c'\rangle=O^{\mathrm{loc}}|\psi_c\rangle\nonumber\\
&=\bigotimes_{g\in D_3}|T_{g\sigma}\rangle\otimes\bigotimes_{j=1,h\in D_3}^3|\psi_{h\tau_j}\rangle\otimes\bigotimes_{k=1,p\in D_3}^3|\psi_{p\mu_k}\rangle
\end{align}
where $\tau_j,j=1,2,3$ and $\mu_k,k=1,2,3$ label the 1D and 0D blocks as illustrated in the right panel of Fig. \ref{cell}. Now all nontrivial topological properties of $|\psi_c\rangle$ are encoded in lower-dimensional block-states $|\psi_{h\tau_j}\rangle$ and $|\psi_{p\mu_k}\rangle$, hence all nontrivial properties of $|\psi\rangle$ are encoded in lower-dimensional blocks in different unit cells.
\subsection{Block-state decoration}
Subsequently, with cell decompositions, we can decorate some proper lower-dimensional block-states on the corresponding lower-dimensional blocks. Some symmetry operations act internally on some lower-dimensional blocks, hence the lower-dimensional block-states should respect the corresponding on-site symmetry of the blocks on which they are decorated. As an example, we again consider the \#14 wallpaper group $p3m1$ with the cell decomposition illustrated in Fig. \ref{cell}: the 3-fold rotational symmetry operations act internally on $g\mu_j$ ($g\in D_3$ and $j=1,2,3$), and the reflection symmetry operations act internally on $h\tau_k$ ($h\in D_3$ and $k=1,2,3$); hence the root phases decorated on 0D and 1D blocks are 0D FSPT phases protected by $\mathbb{Z}_3\rtimes\mathbb{Z}_2$ on-site symmetry and 1D FSPT phases protected by $\mathbb{Z}_2$ on-site symmetry, respectively. All $d$D block-states form the group $G_{d\mathrm{D}}$, and all block-states form the following group:
\begin{align}
\{\mathrm{BS}\}=\bigotimes_{d=0}^2G_{d\mathrm{D}}
\end{align}
Here ``BS'' is the abbreviation of ``block-states''.
Furthermore, the decorated states should respect the \textit{no-open-edge condition}. Once we decorate some lower-dimensional block-states on the corresponding blocks, they might leave several gapless modes on the edges of the corresponding blocks, and several gapless edge modes may coincide near blocks of lower dimension. Consider again the wallpaper group $p3m1$ as an example: if we decorate a Majorana chain on the 1D block labeled by $\tau_1$ (because of the rotational symmetry, there are also two Majorana chains decorated on the 1D blocks labeled by $\boldsymbol{R}_{\mu_3}\tau_1$ and $\boldsymbol{R}_{\mu_3}^2\tau_1$, respectively), it leaves 3 dangling Majorana modes near the 0D block labeled by $\mu_3$.
In order to contribute an SPT state, the bulk of the system should be fully gapped, hence the aforementioned gapless modes should be gapped out (by some proper interactions, mass terms, entanglement pairs, etc.) in a symmetric way. If the bulk of the system cannot be fully gapped (i.e., several aforementioned 0D modes cannot be gapped in a symmetric way), we call the corresponding decoration \textit{obstructed}. Equivalently, an obstruction-free decoration should satisfy the no-open-edge condition. All obstruction-free $d$D block-states form the group $\tilde{G}_{d\mathrm{D}}\subset G_{d\mathrm{D}}$ as a subgroup of $G_{d\mathrm{D}}$, and all obstruction-free block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}=\bigotimes_{d=0}^2\tilde{G}_{d\mathrm{D}}\subset\{\mathrm{BS}\}
\end{align}
Here ``OFBS'' is the abbreviation of ``obstruction-free block-states'', and $\{\mathrm{OFBS}\}$ is a subgroup of $\{\mathrm{BS}\}$.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{antiPBC.png}
\caption{Majorana chain with periodic boundary condition (PBC, left panel) and anti-periodic boundary condition (anti-PBC, right panel). The boundary conditions are indicated by the red arrows in both panels. Here ellipses represent the physical sites, and the solid oriented line from $j$ to $k$ indicates the pairing direction, corresponding to the $i\gamma_j\gamma_k$ term in the parent Hamiltonian. For the Majorana chain with PBC, the graph is not Kasteleyn oriented, and
the ground state has odd fermion parity; for the Majorana chain with anti-PBC, the graph is Kasteleyn oriented, thus
the ground state has even fermion parity.}
\label{anti-PBC}
\end{figure}
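The parity statement in Fig. \ref{anti-PBC} can be verified directly on a small chain. The following sketch (ours) constructs Majorana operators by a standard Jordan-Wigner mapping, builds the fully dimerized parent Hamiltonian with either closure, and evaluates the ground-state fermion parity:
\begin{verbatim}
# Check (ours) of the PBC/anti-PBC parity statement of Fig. (anti-PBC)
# for the dimerized chain with inter-site bonds i*gamma_{2j+1}gamma_{2j+2},
# closed with either sign of the last bond. Majorana operators are
# built via a standard Jordan-Wigner mapping.
import numpy as np

def majoranas(n):
    X = np.array([[0, 1], [1, 0]], complex)
    Y = np.array([[0, -1j], [1j, 0]], complex)
    Z = np.array([[1, 0], [0, -1]], complex)
    I = np.eye(2, dtype=complex)
    def kron_chain(ops):
        out = np.array([[1]], complex)
        for o in ops:
            out = np.kron(out, o)
        return out
    g = []
    for j in range(n):
        g.append(kron_chain([Z] * j + [X] + [I] * (n - j - 1)))
        g.append(kron_chain([Z] * j + [Y] + [I] * (n - j - 1)))
    return g

n = 4
g = majoranas(n)
parity = np.eye(2 ** n, dtype=complex)
for j in range(n):                       # P_f = prod_j (-i g_{2j} g_{2j+1})
    parity = parity @ (-1j * g[2 * j] @ g[2 * j + 1])

for sign, name in ((+1, "PBC"), (-1, "anti-PBC")):
    H = sum(1j * g[2 * j + 1] @ g[2 * j + 2] for j in range(n - 1))
    H = H + sign * 1j * g[2 * n - 1] @ g[0]      # boundary bond
    vals, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]                              # unique gapped GS
    print(name, np.real(gs.conj() @ parity @ gs).round(6))
# Expected: PBC -> -1 (odd parity), anti-PBC -> +1 (even parity).
\end{verbatim}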
\subsection{Bubble equivalence}
In order to obtain a nontrivial SPT state from obstruction-free block-state decorations, we should further consider possible trivializations. For blocks with dimension larger than 0, we can further decorate some codimension-1 degrees of freedom that can be trivialized when they shrink to a point.
\paragraph{2D bubble equivalence}For 2D blocks, we can consider a 1D chain which can be shrunk to a point inside each 2D block, and there is no on-site symmetry on them for all possible cases. In fermionic systems, the only possible state we can decorate is a Majorana chain. There are two distinct boundary conditions: periodic boundary condition (PBC) with odd fermion parity and anti-periodic boundary condition (anti-PBC) with even fermion parity, see Fig. \ref{anti-PBC}. According to the definition of bubble equivalence, we only choose ``Majorana bubbles'' with anti-PBC because they can be trivialized when shrunk to a point: if we decorate a Majorana chain with anti-PBC on a 2D block, we can shrink it to a smaller one by a 2D local unitary (LU) transformation without breaking any symmetry. Repeatedly applying this LU transformation to the ``Majorana'' bubble, we can shrink it to a point and eliminate it (because a Majorana chain with anti-PBC has even fermion parity) by a symmetric finite-depth circuit.
Technically, it is well known that for two Majorana fermions $\gamma_j$ and $\gamma_k$, their entanglement pair $i\gamma_j\gamma_k$ can be created by the following projection operator \cite{general1,general2}:
\begin{align}
P_{j,k}=\frac{1}{2}\left(1-i\gamma_j\gamma_k\right)
\end{align}
and the direction points from $\gamma_j$ to $\gamma_k$. Consequently, the creation operator of a Majorana chain containing $2N$ Majorana fermions with anti-PBC on the 2D block $\sigma$ can be assembled from these projection operators:
\begin{align}
A_\sigma=\prod\limits_{i=1}^{N-1}P_{2i,2i+1}\times\frac{1}{2}\left(1+i\gamma_{2N}\gamma_1\right)
\end{align}
Here the last bracket indicates that the direction of the Majorana entanglement pair $\langle\gamma_1,\gamma_{2N}\rangle$ is from $\gamma_1$ to $\gamma_{2N}$, which makes the anti-PBC of the created Majorana chain explicit. Finally, the operator creating a 2D ``Majorana'' bubble on the entire lattice is:
\begin{align}
A=\bigotimes_{\sigma}A_\sigma
\end{align}
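Before moving on, the parity counting behind this choice can be checked numerically. The following minimal Python sketch (an illustrative aside, not part of the construction; the Jordan-Wigner encoding and the three-site ring size are our own assumptions) builds a dimerized ring of six Majorana operators and shows that flipping the orientation of the boundary bond flips the ground-state fermion parity, in accordance with Fig. \ref{anti-PBC}:
\begin{verbatim}
import numpy as np
from functools import reduce

# Pauli matrices and a Kronecker-product helper
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

N = 3  # three complex-fermion sites, i.e. 2N = 6 Majoranas

def majorana(a):
    # Jordan-Wigner: gamma_{2k+1} = Z^k X, gamma_{2k+2} = Z^k Y
    k, t = divmod(a - 1, 2)
    return kron([Z]*k + [X if t == 0 else Y] + [I2]*(N-k-1))

g = {a: majorana(a) for a in range(1, 2*N + 1)}
parity = kron([Z]*N)  # fermion parity (-1)^F

def ground_state_parity(boundary_sign):
    # dimer bonds (2,3), (4,5) and the boundary bond (6,1)
    H = sum(-1j * g[2*j] @ g[2*j + 1] for j in range(1, N))
    H = H + (-1j) * boundary_sign * g[2*N] @ g[1]
    vals, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0]
    return np.real(psi.conj() @ parity @ psi)

# the two orientations of the boundary bond give opposite parities
print(ground_state_parity(+1), ground_state_parity(-1))
\end{verbatim}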
\paragraph{1D bubble equivalence}For 1D blocks, we can consider two 0D FSPT modes protected by the corresponding on-site symmetry. These two 0D FSPT modes have the following geometry:
\begin{align}
\tikzstyle{xjp}=[rectangle,draw=none]
\begin{tikzpicture}
\path (-0.4,0.4) node [style=xjp] {$a_l^\dag$};
\path (0.4,0.4) node [style=xjp] {$a_r^\dag$};
\filldraw[fill=yellow, draw=yellow] (-0.4,0)circle (2pt);
\filldraw[fill=red, draw=red] (0.4,0) circle (2pt);
\draw (-0.322,0) -- (0.322,0);
\end{tikzpicture}
\label{1D bubble}
\end{align}
where yellow and red dots represent two 0D FSPT modes $a_l^\dag$ and $a_r^\dag$, which should be trivialized when they are fused, i.e., $a_l^\dag a_r^\dag|0\rangle$ should be a trivial state. We demonstrate that this 1D bubble can be shrunk to a point and trivialized by a finite-depth circuit: if we decorate a 1D bubble, we can bring $a_l^\dag$ and $a_r^\dag$ closer by an LU transformation. Repeatedly applying this LU transformation, we can shrink these two modes to a point; equivalently, we can trivialize them by a finite-depth circuit. Therefore, the creation operator of the 1D bubbles on the entire lattice is:
\begin{align}
B_j=\bigotimes_{\tau}(a_l^\tau)^\dag(a_r^\tau)^\dag
\end{align}
Enlarging these bubbles until they approach the nearby lower-dimensional blocks, the FSPT phases decorated on the bubbles can be fused with the original states on those blocks, which leads to possible \textit{trivializations} of lower-dimensional block-state decorations.
Suppose there are $m$ different kinds of 1D bubble constructions, labeled by $B_j$, $j=1,...,m$. With this notation we can label an arbitrary bubble construction by an operator:
\[
A^{n_0}\prod\limits_{j=1}^{m}B_j^{n_j},~~n_0,n_j\in\mathbb{Z}
\]
where $n_0$ and $n_j$ mean that we apply the 2D bubble construction $A$ and the 1D bubble constructions $B_j$, $n_0$ and $n_j$ times, respectively. According to the definition of the bubble construction, applying an arbitrary bubble construction to the trivial state leads to another trivial state, and all these trivial states form the following group:
\begin{align}
\{\mathrm{TBS}\}=\left\{A^{n_0}\prod\limits_{j=1}^{m}B_j^{n_j}|0\rangle\Big|n_0,n_j\in\mathbb{Z}\right\}
\end{align}
Here ``TBS'' is the abbreviation of ``trivial block-states'', and $\{\mathrm{TBS}\}\subset\{\mathrm{OFBS}\}$ because all trivial block-states are obstruction-free. Therefore, an obstruction- and trivialization-free block-state can be labeled by a group element of the following quotient group:
\begin{align}
G=\{\mathrm{OFBS}\}/\{\mathrm{TBS}\}
\end{align}
and no two group elements of $G$ are equivalent because we have already quotiented out all trivial states connected by bubble constructions. Equivalently, the group $G$ gives the classification of the corresponding crystalline topological phases.
In the following, we explicitly apply these procedures to calculate the classification of crystalline TSC and TI, through several representative examples for each crystallographic system.
\section{Construction and classification of crystalline TSC\label{example}}
In this section, we describe the details of real-space construction for crystalline TSC in 2D interacting fermionic systems by analyzing several typical examples. It is well known that all 17 wallpaper groups can be divided into five different crystallographic systems:
\begin{description}
\item[Square lattice]with rotational symmetry of order 4, including $p4$, $p4m$, $p4g$.
\item[Parallelogrammatic lattice]with at most rotational symmetry of order 2, and neither reflection nor glide reflection, including $p1$, $p2$.
\item[Rhombic lattice]with reflection combined with glide reflection, including $cm$, $cmm$.
\item[Rectangle lattice]with reflection or glide reflection, but not both, including $pm$, $pg$, $pmm$, $pmg$, $pgg$.
\item[Hexagonal lattice]with rotational symmetry of order 3 or 6, including $p3$, $p3m1$, $p31m$, $p6$, $p6m$.
\end{description}
The key distinction between different crystallographic systems is that their 0D blocks serve as centers of different point groups.
In particular, we apply the general paradigm of real-space construction highlighted in Sec. \ref{general} to investigate five representative cases, one from each crystallographic system:
\begin{enumerate}
\item square lattice: $p4m$;
\item parallelogrammatic lattice: $p2$;
\item rhombic lattice: $cmm$;
\item rectangle lattice: $pgg$;
\item hexagonal lattice: $p6m$.
\end{enumerate}
All other cases are presented in the Supplementary Materials \cite{supplementary}. The classification results are summarized in Tables \ref{spinless} and \ref{spin-1/2} for spinless and spin-1/2 fermions, respectively. Furthermore, there is an intrinsically interacting fermionic crystalline TSC obtained by real-space construction that can be realized neither by free fermionic systems nor by interacting bosonic systems: a 2D spinless system with $p4m$ wallpaper group symmetry.
\subsection{Square lattice: $p4m$}
For the square lattice, we demonstrate the TSC protected by $p4m$ symmetry as an example. In the remainder of this paper, for brevity we use the same label for $p$-dimensional blocks that are related by symmetry actions. The corresponding point group is the dihedral group $D_4$. For 2D blocks $\sigma$, there is no on-site symmetry group; for 1D blocks $\tau_1,\tau_2,\tau_3$, the on-site symmetry group is $\mathbb{Z}_2$ via the reflection symmetry acting internally; for 0D blocks $\mu_1$ and $\mu_3$, the on-site symmetry group is $\mathbb{Z}_4\rtimes\mathbb{Z}_2$ via the $D_4$ symmetry acting internally; for 0D blocks $\mu_2$, the on-site symmetry group is $\mathbb{Z}_2\rtimes\mathbb{Z}_2$ via the $D_2\subset D_4$ symmetry acting internally, as shown in Fig. \ref{p4m}.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{p4m.png}
\caption{\#11 wallpaper group $p4m$ and its cell decomposition.}
\label{p4m}
\end{figure}
We discuss systems with spinless and spin-1/2 fermions separately. The ``spinless''/``spin-1/2'' fermion means that the point subgroup is extended trivially/nontrivially by fermion parity $\mathbb{Z}_2^f$ \cite{dihedral}.
For spinless systems, we first consider the 0D block-state decoration. For 0D blocks $\mu_1$ and $\mu_3$, the classification data of the corresponding 0D block-states can be characterized by the 1D irreducible representations of the full symmetry group $\mathbb{Z}_2^f\times(\mathbb{Z}_4\rtimes\mathbb{Z}_2)$:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times\left(\mathbb{Z}_4\rtimes\mathbb{Z}_2\right),U(1)\right]=\mathbb{Z}_2^3
\label{p4m classification data}
\end{align}
For 0D blocks $\mu_2$, the classification data of the corresponding 0D block-states can also be characterized by different 1D irreducible representations of the full symmetry group $\mathbb{Z}_2^f\times(\mathbb{Z}_2\rtimes\mathbb{Z}_2)$:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times\left(\mathbb{Z}_2\rtimes\mathbb{Z}_2\right),U(1)\right]=\mathbb{Z}_2^3
\label{mu2}
\end{align}
For an arbitrary 0D block [whose classification data are calculated in Eqs. (\ref{p4m classification data}) and (\ref{mu2})], the three $\mathbb{Z}_2$ factors have different physical meanings: the first $\mathbb{Z}_2$ represents the fermion parity of the complex fermion (even or odd), the second $\mathbb{Z}_2$ represents the rotation eigenvalue $-1$, and the third $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$. So at each 0D block, the block-state can be labeled by $(\pm,\pm,\pm)$, where the three $\pm$'s represent the fermion parity, rotation and reflection eigenvalues, respectively. We should note that an even-fold dihedral group can also be generated by two independent reflection operations: for 0D blocks $\mu_1/\mu_3$, the $D_4$ symmetry can be generated by the reflection operations $\boldsymbol{M}_{\tau_1}/\boldsymbol{M}_{\tau_2}$ and $\boldsymbol{M}_{\tau_3}$ (here $\boldsymbol{M}_{\tau_1}, \boldsymbol{M}_{\tau_2}, \boldsymbol{M}_{\tau_3}$ denote the reflections whose axes coincide with the 1D blocks labeled by $\tau_1, \tau_2, \tau_3$); for 0D blocks $\mu_2$, the $D_2$ symmetry can be generated by the reflection operations $\boldsymbol{M}_{\tau_1}$ and $\boldsymbol{M}_{\tau_2}$. Hence the last two $\pm$'s can also represent the eigenvalues of two independent reflections. According to this notation, the obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p4m,0}^{\mathrm{0D}}=\mathbb{Z}_2^9
\end{align}
where the group elements can be labeled by:
\[
[(\pm,\pm,\pm),(\pm,\pm,\pm),(\pm,\pm,\pm)]
\]
and the three brackets represent the block-states at $\mu_1$, $\mu_2$ and $\mu_3$, respectively.
Subsequently we consider the 1D block-state decoration. For $\tau_1$, $\tau_2$ and $\tau_3$, the total symmetry group is $\mathbb{Z}_2^f\times\mathbb{Z}_2$, so there are two possible 1D block-states: the Majorana chain and the 1D FSPT state, and all 1D block-states form a group:
\begin{align}
\{\mathrm{BS}\}_{p4m,0}^{\mathrm{1D}}=\mathbb{Z}_2^6
\end{align}
Below we discuss the decorations of these two root phases separately.
\paragraph{Majorana chain decoration}Consider the Majorana chain decoration on the 1D blocks labeled by $\tau_1$, which leaves 4 dangling Majorana fermions at each 0D block $\mu_1/\mu_3$ and 2 dangling Majorana fermions at each 0D block $\mu_2$. Near $\mu_1$, the Majorana fermions have the following rotation and reflection symmetry properties (all subscripts are taken modulo 4):
\begin{align}
\boldsymbol{R}_{\mu_1}:~\gamma_j\mapsto\gamma_{j+1},~~\boldsymbol{M}_{\tau_2}:~\gamma_j\mapsto\gamma_{4-j}
\end{align}
The local fermion parity operator and its symmetry properties read:
\begin{align}
P_f=-\prod\limits_{j=1}^4\gamma_j,~~\boldsymbol{R}_{\mu_1},\boldsymbol{M}_{\tau_2}:~P_f\mapsto-P_f
\end{align}
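The minus sign in this transformation simply counts Majorana transpositions; for instance, under the rotation,
\begin{align}
\boldsymbol{R}_{\mu_1}:~P_f=-\gamma_1\gamma_2\gamma_3\gamma_4\mapsto-\gamma_2\gamma_3\gamma_4\gamma_1=-(-1)^3\gamma_1\gamma_2\gamma_3\gamma_4=-P_f
\end{align}
since moving $\gamma_1$ back to the front of the monomial costs three transpositions.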
Hence these four Majorana modes form a projective representation of the symmetry group $(\mathbb{Z}_4\rtimes\mathbb{Z}_2)\times\mathbb{Z}_2^f$ at the 0D block $\mu_1$, and a non-degenerate ground state is forbidden. Thus the Majorana chain decoration on $\tau_1$ does not contribute a nontrivial crystalline TSC because it violates the no-open-edge condition. The same holds for the 1D blocks $\tau_2$ and $\tau_3$, so all Majorana chain decorations are obstructed.
\paragraph{1D FSPT state decoration}The 1D FSPT state decoration on $\tau_1$, $\tau_2$ and $\tau_3$ leaves 8 dangling Majorana fermions ($\xi_j,\xi_j'$, $j=1,2,3,4$) at each 0D block labeled by $\mu_1/\mu_3$ and 4 dangling Majorana fermions ($\eta_j,\eta_j'$, $j=1,2$) at each 0D block labeled by $\mu_2$. At $\mu_1/\mu_3$ (we discuss $\mu_1$ as an example), the corresponding 8 Majorana fermions have the following rotation and reflection symmetry properties (all subscripts are taken modulo 4, e.g., $\xi_5\equiv\xi_1$ and $\xi_5'\equiv\xi_1'$):
\begin{align}
\left.
\begin{aligned}
&\boldsymbol{R}_{\mu_1}:~\xi_j\mapsto\xi_{j+1},~\xi_j'\mapsto\xi_{j+1}'\\
&\boldsymbol{M}
_{\tau_1}:~\xi_j\mapsto\xi_{6-j},~\xi_j'\mapsto-\xi_{6-j}'
\end{aligned}
\right.,~~j=1,2,3,4.
\label{p4m Majorana}
\end{align}
We can define four complex fermions from these eight dangling Majorana fermions:
\begin{align}
c_j^\dag=\frac{1}{2}\left(\xi_j+i\xi_j'\right),~j=1,2,3,4
\end{align}
And from the point group symmetry properties (\ref{p4m Majorana}), we can obtain the point group symmetry properties of the above complex fermions as:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_1}:~\left(c_1^\dag,c_2^\dag,c_3^\dag,c_4^\dag\right)\mapsto\left(c_2^\dag,c_3^\dag,c_4^\dag,c_1^\dag\right)\\
&\boldsymbol{M}
_{\tau_1}:~\left(c_1^\dag,c_2^\dag,c_3^\dag,c_4^\dag\right)\mapsto\Big(c_1,c_4,c_3,c_2\Big)
\end{aligned}
\label{p4m complex}
\end{align}
We denote the fermion number operators by $n_j=c_j^\dag c_j$, $j=1,2,3,4$. First, we consider a Hamiltonian with Hubbard interaction ($U>0$) that can gap out these dangling Majorana fermions:
\begin{align}
H_U=U&\left[\left(n_1-\frac{1}{2}\right)\left(n_3-\frac{1}{2}\right)\right.\nonumber\\
&\left.+\left(n_2-\frac{1}{2}\right)\left(n_4-\frac{1}{2}\right)\right]
\label{HU}
\end{align}
And it can also be expressed in terms of Majorana fermions with symmetry properties as shown in Eq. (\ref{p4m Majorana}):
\begin{align}
H_U=-\frac{U}{4}\left(\xi_1\xi_1'\xi_3\xi_3'+\xi_2\xi_2'\xi_4\xi_4'\right)
\end{align}
It is easy to verify that $H_U$ respects all symmetries. There is a 4-fold ground-state degeneracy from $(n_1,n_3)$ and $(n_2,n_4)$, which can be viewed as two spin-1/2 degrees of freedom:
\begin{align}
\tau_{13}^\mu=\left(c_1^\dag,c_3^\dag\right)\sigma^\mu\left(
\begin{array}{ccc}
c_1\\
c_3
\end{array}
\right)
\end{align}
and
\begin{align}
\tau_{24}^\mu=\left(c_2^\dag,c_4^\dag\right)\sigma^\mu\left(
\begin{array}{ccc}
c_2\\
c_4
\end{array}
\right)
\end{align}
where $\sigma^\mu,\mu=x,y,z$ are Pauli matrices. In order to lift this ground-state degeneracy (GSD), we should further consider the interactions between these two spins. The symmetry properties of these two spins can be easily obtained from (\ref{p4m complex}):
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_1}:~
\begin{aligned}
&\left(\tau_{13}^x,\tau_{13}^y,\tau_{13}^z\right)\mapsto\left(\tau_{24}^x,\tau_{24}^y,\tau_{24}^z\right)\\
&\left(\tau_{24}^x,\tau_{24}^y,\tau_{24}^z\right)\mapsto\left(\tau_{13}^x,-\tau_{13}^y,-\tau_{13}^z\right)
\end{aligned}\\
&\boldsymbol{M}
_{\tau_1}:
\begin{aligned}
&\left(\tau_{13}^x,\tau_{13}^y,\tau_{13}^z\right)\mapsto\left(-\tau_{13}^x,\tau_{13}^y,-\tau_{13}^z\right)\\
&\left(\tau_{24}^x,\tau_{24}^y,\tau_{24}^z\right)\mapsto\left(-\tau_{24}^x,-\tau_{24}^y,\tau_{24}^z\right)
\end{aligned}
\end{aligned}
\label{p4m spin}
\end{align}
Then we can further add a spin Hamiltonian ($J>0$):
\begin{align}
H_J=J\big(\tau_{13}^x\tau_{24}^x+\tau_{13}^y\tau_{24}^z-\tau_{13}^z\tau_{24}^y\big)
\label{HJ}
\end{align}
According to the symmetry properties (\ref{p4m spin}) of the spin operators, we can easily verify that the spin Hamiltonian $H_J$ respects all symmetries. We can also verify this in the Majorana representation by expressing $H_J$ in terms of Majorana fermions:
\begin{align}
H_J=&-\frac{J}{4}\left(\xi_1\xi_3'-\xi_1'\xi_3\right)\left(\xi_2\xi_4'-\xi_2'\xi_4\right)\nonumber\\
&-\frac{J}{4}\left(\xi_1\xi_3+\xi_1'\xi_3'\right)\left(\xi_2\xi_2'-\xi_4\xi_4'\right)\nonumber\\
&+\frac{J}{4}\left(\xi_1\xi_1'-\xi_3\xi_3'\right)\left(\xi_2\xi_4+\xi_2'\xi_4'\right)
\end{align}
and it is invariant under the symmetry transformations defined in Eq. (\ref{p4m Majorana}). The GSD is lifted by the symmetric Hamiltonian $H_U+H_J$, and the non-degenerate ground state is:
\begin{align}
|\psi\rangle_{\mathrm{0D}}=-\frac{1}{2}\left(|\uparrow,\uparrow\rangle+i|\uparrow,\downarrow\rangle-i|\downarrow,\uparrow\rangle-|\downarrow,\downarrow\rangle\right)
\end{align}
where $\uparrow$ and $\downarrow$ represent spin-up and spin-down of the two spin-1/2 degrees of freedom ($\vec{\tau}_{13}$ and $\vec{\tau}_{24}$), and the corresponding eigenvalue of $H_J$ is $-3J$ (the total ground-state energy is $-U/2-3J$). This state is symmetric under all symmetry actions because $|\psi\rangle_{\mathrm{0D}}$ is an eigenstate of the operators $\boldsymbol{R}_{\mu_1}$ and $\boldsymbol{M}_{\tau_1}$, the two generators of the $D_4$ group at each $\mu_1$:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_1}|\psi\rangle_{\mathrm{0D}}=i|\psi\rangle_{\mathrm{0D}}\\
&\boldsymbol{M}_{\tau_1}|\psi\rangle_{\mathrm{0D}}=-|\psi\rangle_{\mathrm{0D}}
\end{aligned}
\end{align}
Thus the corresponding 8 Majorana fermions are gapped out by interactions in a symmetric way.
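The uniqueness of this gapped ground state can also be verified by brute force. The following Python sketch (an illustrative numerical aside; the Jordan-Wigner ordering $c_1,c_2,c_3,c_4$ is our own convention) diagonalizes $H_U+H_J$ [Eqs. (\ref{HU}) and (\ref{HJ})] on the 16-dimensional Fock space and finds a unique ground level at energy $-U/2-3J$:
\begin{verbatim}
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sm = (X + 1j*Y) / 2  # single-site annihilation operator
kron = lambda ops: reduce(np.kron, ops)

# Jordan-Wigner annihilation operators c_1..c_4 (16-dim Fock space)
c = [kron([Z]*k + [sm] + [I2]*(3 - k)) for k in range(4)]
n = [ck.conj().T @ ck for ck in c]

def tau(i, j):
    # spin-1/2 bilinears tau^{x,y,z} built from modes (c_i, c_j)
    tx = c[i].conj().T @ c[j] + c[j].conj().T @ c[i]
    ty = -1j*c[i].conj().T @ c[j] + 1j*c[j].conj().T @ c[i]
    return tx, ty, n[i] - n[j]

U, J, I16 = 4.0, 1.0, np.eye(16)
HU = U * ((n[0] - I16/2) @ (n[2] - I16/2)
          + (n[1] - I16/2) @ (n[3] - I16/2))
t13x, t13y, t13z = tau(0, 2)
t24x, t24y, t24z = tau(1, 3)
HJ = J * (t13x @ t24x + t13y @ t24z - t13z @ t24y)

vals = np.linalg.eigvalsh(HU + HJ)
print(vals[:3])  # -5.0, -1.0, -1.0: unique ground state at -U/2-3J
\end{verbatim}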
Next we consider the dangling Majorana fermions from the 1D FSPT decorations on $\tau_1$ at $\mu_2$ with the rotation and reflection symmetry properties:
\begin{align}
\left.
\begin{aligned}
\boldsymbol{R}_{\mu_2}:~&\left(\eta_1,\eta_1',\eta_2,\eta_2'\right)\mapsto\left(\eta_2,\eta_2',\eta_1,\eta_1'\right)\\
\boldsymbol{M}_{\tau_1}:~&\left(\eta_1,\eta_1',\eta_2,\eta_2'\right)\mapsto\left(\eta_1,-\eta_1',\eta_2,-\eta_2'\right)
\end{aligned}
\right.
\label{p4m mu2 Majorana}
\end{align}
We can define two complex fermions from these four dangling Majorana fermions:
\begin{align}
c^\dag=\frac{1}{2}(\eta_1+i\eta_2),~c'^\dag=\frac{1}{2}(\eta_1'+i\eta_2')
\end{align}
and from the symmetry properties (\ref{p4m mu2 Majorana}), we can obtain the point group symmetry properties of the above complex fermions:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_2}:~\Big(c^\dag,c'^\dag\Big)\mapsto\Big(ic,ic'\Big)\\
&\boldsymbol{M}_{\tau_1}:~\Big(c^\dag,c'^\dag\Big)\mapsto\Big(c^\dag,-c'^\dag\Big)
\end{aligned}
\label{p4m mu2 complex}
\end{align}
We denote the fermion number operators $n=c^\dag c$ and $n'=c'^\dag c'$. First we consider the Hamiltonian with Hubbard interaction ($U'>0$) that can gap out these dangling Majorana fermions:
\begin{align}
H_U'=U'\left(n-\frac{1}{2}\right)\left(n'-\frac{1}{2}\right)
\end{align}
And it is easy to verify that $H_U'$ respects all symmetries according to the symmetry properties of defined complex fermions (\ref{p4m mu2 complex}). There is a 2-fold ground-state degeneracy from $(n,n')$ that can be viewed as a spin-1/2 degree of freedom:
\begin{align}
\tau^\mu=\left(c^\dag,c'^\dag\right)\sigma^\mu\left(
\begin{array}{ccc}
c\\
c'
\end{array}
\right)
\end{align}
In order to investigate whether the degenerate ground states can be gapped out, we concentrate on the projective Hilbert space spanned by the two states $c^\dag|0\rangle$ and $c'^\dag|0\rangle$. In this space, the two generators of the $D_2$ symmetry at each $\mu_2$, $\boldsymbol{R}_{\mu_2}$ and $\boldsymbol{M}_{\tau_1}$, are represented by the $2\times2$ matrices:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_2}=\left(
\begin{array}{ccc}
0 & 1\\
1 & 0
\end{array}
\right)=\sigma^x\\
&\boldsymbol{M}_{\tau_1}=\left(
\begin{array}{ccc}
1 & 0\\
0 & -1
\end{array}
\right)=\sigma^z
\end{aligned}
\end{align}
It is obvious that these two generators anticommute:
$$\boldsymbol{R}_{\mu_2}\boldsymbol{M}_{\tau_1}=-\boldsymbol{M}_{\tau_1}\boldsymbol{R}_{\mu_2}$$
i.e., the degenerate Hilbert space furnishes a projective representation of the symmetry group $D_2$ at each 0D block labeled by $\mu_2$. Hence, the two-fold ground-state degeneracy cannot be lifted.
We now demonstrate this conclusion in the Majorana representation and show that all possible mass terms are incompatible with the symmetries. A mass term is a product of two Majorana operators, and all possible mass terms are:
\[
\eta_1\eta_2,~\eta_1\eta_1',~\eta_1\eta_2',~\eta_2\eta_1',~\eta_2\eta_2',~\eta_1'\eta_2'
\]
and their linear combinations. Under 2-fold rotation $\boldsymbol{R}_{\mu_2}$, these mass terms will be transformed to:
\[
-\eta_1\eta_2,~\eta_2\eta_2',~\eta_2\eta_1',~\eta_1\eta_2',~\eta_1\eta_1',~-\eta_1'\eta_2'
\]
so the only mass terms symmetric under $\boldsymbol{R}_{\mu_2}$ are $\eta_1\eta_1'+\eta_2\eta_2'$ and $\eta_1\eta_2'+\eta_2\eta_1'$, together with their linear combinations. Subsequently, under the reflection $\boldsymbol{M}_{\tau_1}$ these terms are odd:
\[
-(\eta_1\eta_1'+\eta_2\eta_2'),~-(\eta_1\eta_2'+\eta_2\eta_1')
\]
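This exhaustive search can be automated: the following Python sketch (illustrative only; it assumes the Majorana ordering $(\eta_1,\eta_1',\eta_2,\eta_2')$) represents a generic mass term $i\sum_{a<b}A_{ab}\eta_a\eta_b$ by a real antisymmetric matrix $A$ and computes the dimension of the subspace fixed by both symmetry actions of Eq. (\ref{p4m mu2 Majorana}):
\begin{verbatim}
import numpy as np
from itertools import combinations

# Majorana ordering: (eta_1, eta_1', eta_2, eta_2')
O_R = np.array([[0, 0, 1, 0],   # R_{mu_2}: eta_1 <-> eta_2,
                [0, 0, 0, 1],   #           eta_1' <-> eta_2'
                [1, 0, 0, 0],
                [0, 1, 0, 0]], dtype=float)
O_M = np.diag([1.0, -1.0, 1.0, -1.0])  # M_{tau_1}: eta' -> -eta'

pairs = list(combinations(range(4), 2))  # 6 mass terms eta_a eta_b

def basis(a, b):
    E = np.zeros((4, 4))
    E[a, b], E[b, a] = 1.0, -1.0
    return E

def constraint(O):
    # matrix of A -> O^T A O - A acting on mass-term coefficients
    cols = [[(O.T @ basis(a, b) @ O - basis(a, b))[p] for p in pairs]
            for (a, b) in pairs]
    return np.array(cols).T

C = np.vstack([constraint(O_R), constraint(O_M)])
print(6 - np.linalg.matrix_rank(C))  # -> 0: no symmetric mass term
\end{verbatim}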
Therefore, there is no symmetric mass term to lift the GSD. Accordingly, the 1D FSPT state decoration on $\tau_1$ is \textit{obstructed} because of the degenerate ground state; similar arguments hold for the 1D blocks labeled by $\tau_2$ (where the obstruction also arises at the 0D block $\mu_2$, the center of the $D_2$ symmetry). The 1D FSPT state decoration on $\tau_3$ is \textit{obstruction-free} because this decoration leaves eight dangling Majorana fermions at each 0D block labeled by $\mu_1$ and $\mu_3$, both of which are centers of the $D_4$ symmetry.
There is one exception: if we decorate a 1D FSPT phase on each 1D block labeled by $\tau_1$ and $\tau_2$ simultaneously, this leaves eight dangling Majorana fermions at each 0D block $\mu_2$ ($\eta_j,\eta_j',j=1,2,3,4$), with the following rotation and reflection symmetry properties:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_2}:~\left\{\begin{aligned}
&(\eta_1,\eta_1',\eta_2,\eta_2')\mapsto(\eta_2,\eta_2',\eta_1,\eta_1')\\
&(\eta_3,\eta_3',\eta_4,\eta_4')\mapsto(\eta_4,\eta_4',\eta_3,\eta_3')
\end{aligned}\right.\\
&\boldsymbol{M}_{\tau_1}:~\left\{\begin{aligned}
&(\eta_1,\eta_1',\eta_2,\eta_2')\mapsto(\eta_1,-\eta_1',\eta_2,-\eta_2')\\
&(\eta_3,\eta_3',\eta_4,\eta_4')\mapsto(\eta_4,-\eta_4',\eta_3,-\eta_3')
\end{aligned}\right.
\end{aligned}
\label{p4m tau23 Majorana}
\end{align}
This problem is quite similar to the aforementioned gapping problem at each 0D block labeled by $\mu_1$ or $\mu_3$, with a lower point group symmetry ($D_2\subset D_4$). Thus the eight dangling Majorana fermions at each 0D block $\mu_2$, arising from decorating a 1D FSPT state on each $\tau_1$ and $\tau_2$, can be gapped by the previously discussed interactions $H_U+H_J$ [cf. Eqs. (\ref{HU}) and (\ref{HJ})] in a symmetric way, and the simultaneous 1D FSPT state decoration on $\tau_1$ and $\tau_2$ is \textit{obstruction-free}. We should note that this block-state has no free-fermion realization: as discussed above, interactions are required to satisfy the no-open-edge condition, since non-interacting mass terms cannot gap out the corresponding Majorana fermions. Hence the crystalline TSC realized here is an intrinsically interacting fermionic SPT phase. As a consequence, all obstruction-free 1D block-states are:
\begin{itemize}
\item 1D FSPT state decoration on $\tau_1$ and $\tau_2$ simultaneously;
\item 1D FSPT state decoration on $\tau_3$.
\end{itemize}
and they form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p4m,0}^{\mathrm{1D}}=\mathbb{Z}_2^2
\end{align}
where the group elements can be labeled by:
\[
[n_1=n_2,n_3]
\]
where $n_j=0,1$ ($j=1,2,3$) represents the number of decorated 1D FSPT states on $\tau_j$. According to the above discussion, a necessary condition for an obstruction-free block-state is $n_1=n_2$.
So far we have already obtained all obstruction-free block-states, and they form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p4m,0}&=\{\mathrm{OFBS}\}_{p4m,0}^{\mathrm{1D}}\times\{\mathrm{OFBS}\}_{p4m,0}^{\mathrm{0D}}\nonumber\\
&=\mathbb{Z}_2^2\times\mathbb{Z}_2^9=\mathbb{Z}_2^{11}
\end{align}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{antiPBC.pdf}
\caption{``Majorana'' bubble equivalence near each 0D block $\mu_2$. (a): ``Majorana'' bubble construction: decorate a Majorana chain with anti-PBC (indicated by the red arrow) on each 2D block. (b): We cut the Majorana bubbles near each 0D block ($\mu_2$ as illustrated); near each nearby 1D block, the ``Majorana bubbles'' are deformed into double Majorana chains, which is exactly the definition of the nontrivial 1D FSPT state protected by on-site $\mathbb{Z}_2$ symmetry. (c): Alternative way to cut the Majorana bubbles: near each nearby 1D block, the ``Majorana bubbles'' are again deformed into double Majorana chains, while near the 0D block labeled by $\mu_2$, the Majorana fermions surrounded by the green dashed circle are deformed into an enclosed Majorana chain. Nevertheless, the Majorana chain is not compatible with the reflection symmetry for spinless fermions \cite{rotation}, hence the deformation illustrated in this panel is forbidden by symmetry. Here ellipses represent the physical sites, and each solid oriented line represents the entanglement pair of two Majorana fermions.}
\label{antiPBC}
\end{figure*}
With all obstruction-free block-states in hand, we subsequently discuss all possible trivializations. First we consider the 2D bubble equivalences: we decorate a ``Majorana bubble'' on each 2D block $\sigma$ (see Fig. \ref{antiPBC}) and demonstrate that it can be deformed into double Majorana chains on each nearby 1D block. Fig. \ref{antiPBC}(b) shows that if we cut the Majorana bubbles near each 0D block, these bubbles can be deformed into double Majorana chains. For the $p4m$ case, all 1D blocks lie on reflection axes, and the reflection acts on them internally: the reflection (the on-site $\mathbb{Z}_2$ symmetry on the 1D blocks) exchanges the two Majorana chains deformed from the ``Majorana'' bubbles, which is exactly the definition of the nontrivial 1D FSPT phase protected by on-site $\mathbb{Z}_2$ symmetry. Equivalently, the 1D FSPT state decorations on all 1D blocks can be deformed to a trivial state via 2D bubble equivalence. Furthermore, we demonstrate that the 2D bubble equivalence has no effect on the 0D blocks: Fig. \ref{antiPBC}(c) shows an alternative deformation; at each 1D block this deformation is identical to the previous one, but at each 0D block there is an additional Majorana chain with \textit{odd} fermion parity surrounding it. Nevertheless, for spinless fermions the Majorana chain is not compatible with reflection symmetry \cite{rotation}, hence this deformation is forbidden by reflection symmetry. Therefore, the overall effect of the 2D ``Majorana'' bubble equivalence is to deform the 1D FSPT phase (protected by on-site $\mathbb{Z}_2$ symmetry) decorations on all 1D blocks to a trivial state, see Fig. \ref{deformation}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{deformation.pdf}
\caption{Deformation of the ``Majorana'' bubble construction. (a): 1D vacuum block-state in which all 1D blocks are in the vacuum state. Here $\mu_1$, $\mu_2$ and $\mu_3$ label different 0D blocks, and $\sigma$, enclosed by a dashed triangle, labels the 2D blocks in one unit cell. (b): Decorate a ``Majorana'' bubble in each 2D block in a symmetric way; each solid oriented triangle represents a Majorana chain with anti-PBC. (c): Enlarging the ``Majorana'' bubbles, according to Fig. \ref{antiPBC} they can be deformed to the 1D block-state with 1D FSPT states protected by on-site $\mathbb{Z}_2$ symmetry on all 1D blocks. Here each pair of oriented lines represents a 1D FSPT state protected by on-site $\mathbb{Z}_2$ symmetry, constructed from two Majorana chains.}
\label{deformation}
\end{figure*}
Subsequently we consider the 1D bubble equivalences. For instance, we decorate a pair of complex fermions [cf. Eq. (\ref{1D bubble})] on each 1D block $\tau_1$. Near each 0D block $\mu_1$, there are 4 complex fermions forming the following atomic insulator:
\begin{align}
|\psi\rangle_{p4m}^{\mu_1}=c_1^\dag c_2^\dag c_3^\dag c_4^\dag|0\rangle
\end{align}
with two independent reflection properties:
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\psi\rangle_{p4m}^{\mu_1}=c_1^\dag c_4^\dag c_3^\dag c_2^\dag|0\rangle=-|\psi\rangle_{p4m}^{\mu_1}\\
&\boldsymbol{M}_{\tau_3}|\psi\rangle_{p4m}^{\mu_1}=c_3^\dag c_4^\dag c_1^\dag c_2^\dag|0\rangle=|\psi\rangle_{p4m}^{\mu_1}
\end{aligned}
\end{align}
i.e., at the 0D blocks $\mu_1$, the 1D bubble construction on $\tau_1$ changes the reflection eigenvalue of $\boldsymbol{M}_{\tau_1}$ and leaves the reflection eigenvalue of $\boldsymbol{M}_{\tau_3}$ invariant. Near each 0D block $\mu_2$, there are two complex fermions forming another atomic insulator:
\begin{align}
|\psi\rangle_{p4m}^{\mu_2}=c_1'^\dag c_2'^\dag|0\rangle
\end{align}
with two independent reflection properties:
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\psi\rangle_{p4m}^{\mu_2}=c_1'^\dag c_2'^\dag|0\rangle=|\psi\rangle_{p4m}^{\mu_2}\\
&\boldsymbol{M}_{\tau_2}|\psi\rangle_{p4m}^{\mu_2}=c_2'^\dag c_1'^\dag|0\rangle=-|\psi\rangle_{p4m}^{\mu_2}
\end{aligned}
\end{align}
i.e., at the 0D blocks $\mu_2$, the 1D bubble construction on $\tau_1$ changes the reflection eigenvalue of $\boldsymbol{M}_{\tau_2}$ and leaves the reflection eigenvalue of $\boldsymbol{M}_{\tau_1}$ invariant. Similar 1D bubble constructions can be performed on the 1D blocks $\tau_2$ and $\tau_3$, and we summarize the effects of the 1D bubble constructions as follows (a short consistency check of the underlying reordering signs is given after the list):
\begin{enumerate}[1.]
\item 1D bubble construction on $\tau_1$: simultaneously changes the eigenvalues of $\boldsymbol{M}_{\tau_1}$ at $\mu_1$ and $\boldsymbol{M}_{\tau_2}$ at $\mu_2$;
\item 1D bubble construction on $\tau_2$: simultaneously changes the eigenvalues of $\boldsymbol{M}_{\tau_1}$ at $\mu_2$ and $\boldsymbol{M}_{\tau_2}$ at $\mu_3$;
\item 1D bubble construction on $\tau_3$: simultaneously changes the eigenvalues of $\boldsymbol{M}_{\tau_3}$ at $\mu_1$ and $\boldsymbol{M}_{\tau_3}$ at $\mu_3$;
\end{enumerate}
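As noted above, these reflection eigenvalues reduce to the signs of the permutations that reorder the fermion creation operators; a few lines of Python (an illustrative aside) reproduce the values used in this list:
\begin{verbatim}
from itertools import combinations

def perm_sign(p):
    # sign of a permutation = parity of its inversion count
    sign = 1
    for i, j in combinations(range(len(p)), 2):
        if p[i] > p[j]:
            sign = -sign
    return sign

# M_{tau_1} at mu_1: c1 c2 c3 c4 -> c1 c4 c3 c2
print(perm_sign((1, 4, 3, 2)))  # -1
# M_{tau_3} at mu_1: c1 c2 c3 c4 -> c3 c4 c1 c2
print(perm_sign((3, 4, 1, 2)))  # +1
# M_{tau_2} at mu_2: c1' c2' -> c2' c1'
print(perm_sign((2, 1)))        # -1
\end{verbatim}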
With all possible trivializations, we are ready to study the trivial states. Start from the original trivial 0D block-state (nothing decorated on any 0D block):
\[
[(+,+,+),(+,+,+),(+,+,+)]
\]
If we apply the 1D bubble construction on $\tau_j$ $n_j$ times ($j=1,2,3$), the above trivial 0D block-state is transformed to a new 0D block-state labeled by:
\begin{align}
&\left[\left(+,(-1)^{n_1},(-1)^{n_3}\right)\right.,\left(+,(-1)^{n_2},(-1)^{n_1}\right),\nonumber\\
&\left.\left(+,(-1)^{n_2},(-1)^{n_3}\right)\right]
\label{p4m spinless trivial state}
\end{align}
According to the definition of bubble equivalence, all these states should be trivial. It is easy to see that there are only three independent quantities ($n_j$, $j=1,2,3$) in Eq. (\ref{p4m spinless trivial state}). Together with the 2D ``Majorana'' bubble construction, which deforms the vacuum 1D block-state to 1D FSPT states decorated on all 1D blocks, all these trivial states form the group:
\begin{align}
\{\mathrm{TBS}\}_{p4m,0}&=\{\mathrm{TBS}\}_{p4m,0}^{\mathrm{1D}}\times\{\mathrm{TBS}\}_{p4m,0}^{\mathrm{0D}}\nonumber\\
&=\mathbb{Z}_2\times\mathbb{Z}_2^3=\mathbb{Z}_2^4
\end{align}
where $\{\mathrm{TBS}\}_{p4m,0}^{\mathrm{1D}}$ represents the group of trivial states with non-vacuum 1D blocks (i.e., 1D FSPT phase decorations on all 1D blocks), and $\{\mathrm{TBS}\}_{p4m,0}^{\mathrm{0D}}$ represents the group of trivial states with non-vacuum 0D blocks.
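The counting of independent trivializations can be cross-checked by a rank computation over $\mathbb{Z}_2$. In the following Python sketch (an illustrative aside; the \texttt{gf2\_rank} helper is reused for the $p2$ case below), each 1D bubble construction is encoded as a length-9 bit vector listing which 0D eigenvalues it toggles, in the coordinate ordering of Eq. (\ref{p4m spinless trivial state}):
\begin{verbatim}
import numpy as np

def gf2_rank(rows):
    # Gaussian elimination over GF(2)
    M = np.array(rows, dtype=np.uint8) % 2
    rank = 0
    for col in range(M.shape[1]):
        piv = np.nonzero(M[rank:, col])[0]
        if piv.size == 0:
            continue
        p = rank + piv[0]
        M[[rank, p]] = M[[p, rank]]          # move pivot row up
        for r in np.nonzero(M[:, col])[0]:
            if r != rank:
                M[r] ^= M[rank]              # clear the column
        rank += 1
        if rank == M.shape[0]:
            break
    return rank

# coordinates: (P_f, M_{tau_1}, M_{tau_3}) at mu_1,
# (P_f, M_{tau_1}, M_{tau_2}) at mu_2, (P_f, M_{tau_2}, M_{tau_3}) at mu_3
bubbles_p4m = [[0,1,0, 0,0,1, 0,0,0],   # n_1
               [0,0,0, 0,1,0, 0,1,0],   # n_2
               [0,0,1, 0,0,0, 0,0,1]]   # n_3
print(gf2_rank(bubbles_p4m))  # -> 3 independent 0D trivializations
\end{verbatim}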
Therefore, all independent nontrivial block-states are labeled by the group elements of the following quotient group:
\begin{align}
E_{p4m,0}&=\{\mathrm{OFBS}\}_{p4m,0}/\{\mathrm{TBS}\}_{p4m,0}\nonumber\\
&=\mathbb{Z}_2^{11}/\mathbb{Z}_2^{4}=\mathbb{Z}_2^{7}
\label{p4m spinless classification}
\end{align}
here one $\mathbb{Z}_2$ comes from the nontrivial 1D block-state, and the other six $\mathbb{Z}_2$'s come from the nontrivial 0D block-states.
With all nontrivial block-states, we consider the group structure of the ultimate classification. The physical meaning of this investigation is to determine whether the 1D block-states extend the 0D block-states. We argue that there is no nontrivial stacking between block-states of different dimensions, so the group structure of the classification data (\ref{p4m spinless classification}) is already accurate. In order to investigate possible stackings, we consider two identical 1D block-states: for example, we decorate two copies of 1D FSPT states on each 1D block labeled by $\tau_3$, which leaves 16 dangling Majorana fermions at each 0D block labeled by $\mu_1/\mu_3$. It is easy to verify that two copies of the 1D FSPT state form a trivial 1D block-state because the root phase has a $\mathbb{Z}_2$ structure. First of all, according to previous discussions, this decoration cannot be deformed to a Majorana chain surrounding the 0D block (which would change the corresponding fermion parity), because the Majorana chain is not compatible with the reflection symmetry. Subsequently, at each 0D block $\mu_1/\mu_3$, we can treat these 16 Majorana fermions as 8 complex fermions $a_j$ and $a_j'$ ($j=1,2,3,4$) that form two atomic insulators:
\begin{align}
\begin{aligned}
&|\phi\rangle=a_1^\dag a_2^\dag a_3^\dag a_4^\dag|0\rangle\\
&|\phi'\rangle=a_1'^\dag a_2'^\dag a_3'^\dag a_4'^\dag|0\rangle
\end{aligned}
\end{align}
and the wavefunction of these 8 complex fermions is the direct product of $|\phi\rangle$ and $|\phi'\rangle$:
\begin{align}
|\Phi\rangle=|\phi\rangle\otimes|\phi'\rangle
\end{align}
$|\phi\rangle$ and $|\phi'\rangle$ are eigenstates of two generators of $D_4$ symmetry, $\boldsymbol{M}_{\tau_1}$ and $\boldsymbol{M}_{\tau_3}$:
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\phi\rangle=a_2^\dag a_1^\dag a_4^\dag a_3^\dag|0\rangle=|\phi\rangle\\
&\boldsymbol{M}_{\tau_1}|\phi'\rangle=a_2'^\dag a_1'^\dag a_4'^\dag a_3'^\dag|0\rangle=|\phi'\rangle\\
&\boldsymbol{M}_{\tau_3}|\phi\rangle=a_1^\dag a_4^\dag a_3^\dag a_2^\dag|0\rangle=-|\phi\rangle\\
&\boldsymbol{M}_{\tau_3}|\phi'\rangle=a_1'^\dag a_4'^\dag a_3'^\dag a_2'^\dag|0\rangle=-|\phi'\rangle\\
\end{aligned}
\end{align}
Then the eigenvalues of $|\Phi\rangle$ under $\boldsymbol{M}_{\tau_1}$ and $\boldsymbol{M}_{\tau_3}$ are trivial:
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\Phi\rangle=|\Phi\rangle\\
&\boldsymbol{M}_{\tau_3}|\Phi\rangle=|\Phi\rangle
\end{aligned}
\end{align}
Therefore, the 1D block-states do not extend the 0D block-states, and the group structure of the classification data (\ref{p4m spinless classification}) is already accurate.
Now we turn to discuss systems with spin-1/2 fermions. We first consider the 0D block-state decoration. For each 0D block labeled by $\mu_1$ or $\mu_3$, the classification data can also be characterized by different 1D irreducible representations of the full symmetry group $\mathbb{Z}_2^f\times_{\omega_2}(\mathbb{Z}_4\rtimes\mathbb{Z}_2)$ (the symbol ``$\times_{\omega_2}$'' means that the physical symmetry group is nontrivially extended by fermion parity $\mathbb{Z}_2^f$, which is characterized by a 2-cocycle $\omega_2$, see Sec. \ref{spinSec}):
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times_{\omega_2}(\mathbb{Z}_4\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^2
\end{align}
similar for each 0D block $\mu_2$:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times_{\omega_2}(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^2
\end{align}
To calculate this, we first need the following two cohomology groups:
\begin{align}
\left\{
\begin{aligned}
&\mathcal{H}^0(\mathbb{Z}_n\rtimes\mathbb{Z}_2,\mathbb{Z}_2)=\mathbb{Z}_2\\
&\mathcal{H}^1\left[\mathbb{Z}_n\rtimes\mathbb{Z}_2,U(1)\right]=\mathbb{Z}_2^2
\end{aligned}
\right.
\end{align}
But the 0-cocycle $n_0\in\mathcal{H}^0(\mathbb{Z}_n\rtimes\mathbb{Z}_2,\mathbb{Z}_2)$ does not contribute a nontrivial 0D block-state: a specific $n_0$ is obstructed if and only if $(-1)^{\omega_2\smile n_0}\in\mathcal{H}^2[\mathbb{Z}_4\rtimes\mathbb{Z}_2,U(1)]$ is a nontrivial 2-cocycle with $U(1)$ coefficients. From Refs. \cite{general2} and \cite{dihedral} we know that the nontrivial 0-cocycle $n_0=1$ (odd fermion parity) leads to a nontrivial 2-cocycle $(-1)^{\omega_2\smile n_0}\in\mathcal{H}^2[\mathbb{Z}_4\rtimes\mathbb{Z}_2,U(1)]$, so the 0D block-states at $\mu_1$ and $\mu_3$ with odd fermion parity are obstructed; similarly for the 0D block $\mu_2$. Hence the different $\mathbb{Z}_2$'s in the classification data represent the rotation and reflection eigenvalues at each $D_4$ or $D_2$ center. As a consequence, all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p4m,1/2}^{\mathrm{0D}}=\mathbb{Z}_2^6
\end{align}
And it is straightforward to see that there is no further trivialization ($\{\mathrm{TBS}\}_{p4m,1/2}^{\mathrm{0D}}=0$), hence the classification attributed to 0D block-states is:
\begin{align}
E_{p4m,1/2}^{\mathrm{0D}}=\mathbb{Z}_2^6
\end{align}
Subsequently we consider the 1D block-state decoration. For arbitrary 1D blocks, the total symmetry group is $\mathbb{Z}_4^f$, hence there is no nontrivial 1D block-state, because the classification of the corresponding 1D FSPT phases is trivial (i.e., there is no nontrivial obstruction-free 1D block-state), and the classification attributed to 1D block-state decorations is trivial:
\begin{align}
E_{p4m,1/2}^{\mathrm{1D}}=\{\mathrm{OFBS}\}_{p4m,1/2}^{\mathrm{1D}}=0
\end{align}
Therefore it is obvious that there is no stacking between 1D and 0D block-states because of the trivial contribution from 1D block-state. The ultimate classification with accurate group structure is:
\begin{align}
\mathcal{G}_{p4m,1/2}=E_{p4m,1/2}^{\mathrm{0D}}\times E_{p4m,1/2}^{\mathrm{1D}}=\mathbb{Z}_2^6
\end{align}
\subsection{Parallelogrammatic lattice: $p2$}
For the parallelogrammatic lattice, we demonstrate the crystalline TSC protected by $p2$ symmetry as an example. The corresponding point group of $p2$ is the rotation group $C_2$. For 1D and 2D blocks there is no on-site symmetry group, but the rotation subgroup $C_2$ acts internally on each 0D block, identically to an on-site $\mathbb{Z}_2$ symmetry, see Fig. \ref{p2}.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{p2.png}
\caption{\#2 wallpaper group $p2$ and its cell decomposition.}
\label{p2}
\end{figure}
We discuss systems with spinless and spin-1/2 fermions separately. For spinless fermions, consider first the 0D block-state decoration: for the 0D blocks labeled by $\mu_j$, $j=1,2,3,4$, the total symmetry group of each is $\mathbb{Z}_2^f\times\mathbb{Z}_2$, and the classification data can be characterized by the 1D irreducible representations of the symmetry group $\mathbb{Z}_2^f\times\mathbb{Z}_2$:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times\mathbb{Z}_2,U(1)\right]=\mathbb{Z}_2^2
\end{align}
One $\mathbb{Z}_2$ comes from the fermion parity, and the other from the rotation eigenvalue $-1$. So at each 0D block, the block-state can be labeled by $(\pm,\pm)$, where the two $\pm$'s represent the fermion parity and rotation eigenvalue, respectively. According to this notation, the obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p2,0}^{\mathrm{0D}}=\mathbb{Z}_2^8
\end{align}
and the group elements can be labeled by (four brackets represent the block-states at $\mu_j,j=1,2,3,4$):
\[
[(\pm,\pm),(\pm,\pm),(\pm,\pm),(\pm,\pm)]
\]
Then we consider possible trivializations via bubble constructions. First, we consider the 2D bubble equivalence: as illustrated in Fig. \ref{antiPBC p2}, we decorate a Majorana chain with anti-PBC on each 2D block, which can be trivialized when it shrinks to a point. Near each 1D block, these ``Majorana'' bubbles can be deformed into double Majorana chains. But distinct from the $p4m$ case, there is no on-site symmetry on any 1D block, hence the only root phase on each 1D block is the Majorana chain, as the only invertible topological phase, and two copies of it are trivial because of the $\mathbb{Z}_2$ classification of this phase. Hence the ``Majorana bubble'' construction has no effect on the 1D blocks. Near each 0D block ($\mu_2$ as an example, see Fig. \ref{antiPBC p2}), these ``Majorana'' bubbles can be deformed into an additional Majorana chain with \textit{odd} fermion parity surrounding it. Distinct from the $p4m$ case, this Majorana chain respects all symmetry actions of $p2$ because no reflection acts on it, so the ``Majorana'' bubble construction changes the fermion parities of all 0D blocks simultaneously.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{antiPBC_p2.png}
\caption{2D bubble equivalence for \#2 wallpaper group $p2$. Near each 0D block ($\mu_2$ for example), the Majorana fermions surrounded by the green dashed circle are deformed into an enclosed Majorana chain surrounding the 0D block $\mu_2$. The left panel shows the bubble construction, and the right panel the deformed Majorana chain, which is not Kasteleyn oriented, so the state has odd fermion parity.}
\label{antiPBC p2}
\end{figure}
Furthermore, consider the 1D bubble equivalence on $\tau_1$: on each 1D block labeled by $\tau_1$, we decorate a pair of complex fermions [cf. Eq. (\ref{1D bubble})]. Near each 0D block $\mu_2$, there are 2 complex fermions which form an atomic insulator:
\begin{align}
|\psi\rangle_{p2}^{\mu_2}=c_1^\dag c_2^\dag|0\rangle
\end{align}
with the rotation property ($\boldsymbol{R}_{\mu_2}$ denotes the 2-fold rotation centred at $\mu_2$):
\begin{align}
\boldsymbol{R}_{\mu_2}|\psi\rangle_{p2}^{\mu_2}=c_2^\dag c_1^\dag|0\rangle=-|\psi\rangle_{p2}^{\mu_2}
\end{align}
Hence the rotation eigenvalue $-1$ can be trivialized by the atomic insulator $|\psi\rangle_{p2}^{\mu_2}$. The same holds for $\mu_1$, so the rotation eigenvalues at the 0D blocks $\mu_1$ and $\mu_2$ are not independent. Similar bubble equivalences can be performed on the other 1D blocks $\tau_j$, $j=1,2,3$, and as a consequence the rotation eigenvalues at all 0D blocks are not independent.
With all possible bubble constructions, we are ready to study the trivial states. Start from the original trivial state (nothing decorated on any 0D block):
\[
[(+,+),(+,+),(+,+),(+,+)]
\]
if we apply the 2D bubble construction $n_0$ times and the 1D bubble construction on $\tau_j$ $n_j$ times ($j=1,2,3$), the above trivial state is transformed to a new 0D block-state labeled by:
\begin{align}
&\left[((-1)^{n_0},(-1)^{n_1+n_2}),((-1)^{n_0},(-1)^{n_1+n_3}),\right.\nonumber\\
&\left.((-1)^{n_0},(-1)^{n_2}),((-1)^{n_0},(-1)^{n_3})\right]
\label{p2 spinless trivial state}
\end{align}
According to the definition of bubble equivalence, all these states should be trivial. Alternatively, every 0D block-state can be viewed as a vector in an 8-dimensional $\mathbb{Z}_2$-valued vector space, and the trivial 0D block-states of the form of Eq. (\ref{p2 spinless trivial state}) span a subspace of it. The dimension of this subspace can be determined by calculating the rank of the following transformation matrix:
\begin{align}
\mathrm{rank}\left(
\begin{matrix}
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0\\
0 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\
\end{matrix}
\right)=4
\end{align}
Here different rows of this matrix represent different bubble constructions. Hence the subspace containing all trivial 0D block-states has dimension 4, and all trivial 0D block-states form the group:
\begin{align}
\{\mathrm{TBS}\}_{p2,0}^{\mathrm{0D}}=\mathbb{Z}_2^4
\end{align}
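This rank can be reproduced with the \texttt{gf2\_rank} helper defined in the $p4m$ sketch above (again an illustrative aside; rows ordered as $n_0,n_1,n_2,n_3$):
\begin{verbatim}
# columns: (P_f, rotation eigenvalue) at mu_1, ..., mu_4
bubbles_p2 = [[1,0, 1,0, 1,0, 1,0],   # n_0 flips all fermion parities
              [0,1, 0,1, 0,0, 0,0],   # n_1
              [0,1, 0,0, 0,1, 0,0],   # n_2
              [0,0, 0,1, 0,0, 0,1]]   # n_3
print(gf2_rank(bubbles_p2))  # -> 4, reproducing {TBS}^{0D} = Z_2^4
\end{verbatim}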
Therefore, all independent nontrivial 0D block-states are labeled by different group elements of the following quotient group:
\begin{align}
E_{p2,0}^{\mathrm{0D}}=\{\mathrm{OFBS}\}_{p2,0}^{\mathrm{0D}}/\{\mathrm{TBS}\}_{p2,0}^{\mathrm{0D}}=\mathbb{Z}_2^4
\end{align}
Subsequently we consider the 1D block-state decorations. The only possible 1D block-state is the Majorana chain, due to the absence of on-site symmetry on any 1D block, and all 1D block-states form a group:
\begin{align}
\{\mathrm{BS}\}_{p2,0}^{\mathrm{1D}}=\mathbb{Z}_2^3
\end{align}
Then we consider the possible obstruction: the Majorana chain decoration on $\tau_1$ leaves 2 dangling Majorana fermions at each 0D block labeled by $\mu_2$, which can be glued by an entanglement pair $i\gamma_1\gamma_2$. Nevertheless, this entanglement pair breaks the $C_2$ symmetry:
\begin{align}
\boldsymbol{R}_{\mu_2}:~i\gamma_1\gamma_2\mapsto i\gamma_2\gamma_1=-i\gamma_1\gamma_2
\end{align}
hence this decoration is obstructed and does not contribute a nontrivial crystalline TSC because of the violation of the no-open-edge condition. The same holds for all other 1D blocks. As a consequence, 1D block-state decorations do not contribute any nontrivial crystalline topological state because all 1D block-states are obstructed:
\begin{align}
E_{p2,0}^{\mathrm{1D}}=\{\mathrm{OFBS}\}_{p2,0}^{\mathrm{1D}}=0
\end{align}
Hence it is obvious that there is no stacking between 1D and 0D block-states, and the ultimate classification with accurate group structure is:
\begin{align}
\mathcal{G}_{p2,0}=E_{p2,0}^{\mathrm{1D}}\times E_{p2,0}^{\mathrm{0D}}=\mathbb{Z}_2^4
\end{align}
Now we turn to systems with spin-1/2 fermions. First we consider the 0D block-state decoration, whose candidate states can be characterized by the 1D irreducible representations of the symmetry group $\mathbb{Z}_4^f$ (the nontrivial $\mathbb{Z}_2^f$ extension of the on-site $\mathbb{Z}_2$ symmetry):
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_4^f,U(1)\right]=\mathbb{Z}_4
\end{align}
All root phases are characterized by the eigenvalues $\{i,-1,-i,1\}$ of the 2-fold rotation operation combined with fermion parity. So at each 0D block, the block-state can be labeled by $\nu\in\{i,-1,-i,1\}$. According to this notation, the obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p2,1/2}^{\mathrm{0D}}=\mathbb{Z}_4^4
\end{align}
and different group elements can be labeled by:
\[
[\nu_1,\nu_2,\nu_3,\nu_4]
\]
where $\nu_j$ labels the 0D block-state at $\mu_j$ ($j=1,2,3,4$). It is easy to see that there is no trivialization of the 0D block-states (i.e., $\{\mathrm{TBS}\}_{p2,1/2}^{\mathrm{0D}}=0$), so the classification attributed to 0D block-state decorations is:
\begin{align}
E_{p2,1/2}^{\mathrm{0D}}=\{\mathrm{OFBS}\}_{p2,1/2}^{\mathrm{0D}}/\{\mathrm{TBS}\}_{p2,1/2}^{\mathrm{0D}}=\mathbb{Z}_4^4
\label{p2-0}
\end{align}
Subsequently we consider the 1D block-state decorations. The only possible 1D block-state is still the Majorana chain, due to the absence of on-site symmetry on each 1D block. The Majorana chain decoration on $\tau_1$ leaves 2 dangling Majorana fermions at each 0D block labeled by $\mu_2$, which can be glued by an entanglement pair $i\gamma_1\gamma_2$ that now respects the rotation symmetry:
\begin{align}
\boldsymbol{R}_{\mu_2}:~i\gamma_1\gamma_2\mapsto-i\gamma_2\gamma_1=i\gamma_1\gamma_2
\end{align}
Hence the Majorana chain decoration on $\tau_1$ is an obstruction-free block-state, as it satisfies the no-open-edge condition. The same holds for the 1D blocks labeled by $\tau_2$ and $\tau_3$. Hence all obstruction-free 1D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p2,1/2}^{\mathrm{1D}}=\mathbb{Z}_2^3
\end{align}
It is obvious that there is no trivialization (i.e., $\{\mathrm{TBS}\}_{p2,1/2}^{\mathrm{1D}}=0$), so the classification attributed to 1D block-state decorations is:
\begin{align}
E_{p2,1/2}^{\mathrm{1D}}=\{\mathrm{OFBS}\}_{p2,1/2}^{\mathrm{1D}}/\{\mathrm{TBS}\}_{p2,1/2}^{\mathrm{1D}}=\mathbb{Z}_2^3
\label{p2-1}
\end{align}
With the classification data of Eqs. (\ref{p2-0}) and (\ref{p2-1}), we consider the group structure of the corresponding classification; equivalently, we investigate whether the 1D block-states extend the 0D block-states. As an example, we decorate two copies of Majorana chains on each 1D block labeled by $\tau_1$, which leaves four dangling Majorana fermions at each 0D block labeled by $\mu_1/\mu_2$. Similar to Ref. \onlinecite{rotation}, these Majorana chains can be smoothly deformed to another assembly of Majorana chains surrounding the 0D blocks labeled by $\mu_1$ and $\mu_2$, as follows (each yellow ellipse represents a physical site):
\begin{align}
\tikzstyle{xjp}=[rectangle,draw=none]
\begin{tikzpicture}
\filldraw[fill=yellow!80!white, draw=yellow] (-2.4,0.3) ellipse (16pt and 5pt);
\filldraw[fill=yellow!80!white, draw=yellow] (-2.4,-0.3) ellipse (16pt and 5pt);
\filldraw[fill=yellow!80!white, draw=yellow] (-0.8,0.3) ellipse (16pt and 5pt);
\filldraw[fill=yellow!80!white, draw=yellow] (-0.8,-0.3) ellipse (16pt and 5pt);
\filldraw[fill=black, draw=black] (-0.4-1.6,0.3)circle (2pt);
\filldraw[fill=black, draw=black] (0.4-1.6,0.3) circle (2pt);
\draw[->,thick] (-0.322-1.6,0.3) -- (0.08-1.6,0.3);
\draw[thick] (0.06-1.6,0.3) -- (0.322-1.6,0.3);
\filldraw[fill=black, draw=black] (-0.4-1.6,-0.3)circle (2pt);
\filldraw[fill=black, draw=black] (0.4-1.6,-0.3) circle (2pt);
\draw[->,thick] (-0.322-1.6,-0.3) -- (0.08-1.6,-0.3);
\draw[thick] (0.06-1.6,-0.3) -- (0.322-1.6,-0.3);
\path (-0.4-1.6,0.6) node [style=xjp] {$\gamma_1$};
\path (0.4-1.6,0.65) node [style=xjp] {$\gamma_1'$};
\path (-0.4-1.6,-0.65) node [style=xjp] {$\gamma_4$};
\path (0.4-1.6,-0.6) node [style=xjp] {$\gamma_4'$};
\filldraw[fill=black, draw=black] (-0.4-1.6-0.8,0.3)circle (2pt);
\filldraw[fill=black, draw=black] (-0.4-1.6-0.8,-0.3)circle (2pt);
\filldraw[fill=black, draw=black] (0.4-1.6+0.8,0.3)circle (2pt);
\filldraw[fill=black, draw=black] (0.4-1.6+0.8,-0.3)circle (2pt);
\draw[->,thick] (-0.4-1.6-0.8,0.3) -- (-0.4-1.6-0.8,-0.08);
\draw[thick] (-0.4-1.6-0.8,0) -- (-0.4-1.6-0.8,-0.3);
\draw[->,thick] (0.4-1.6+0.8,0.3) -- (0.4-1.6+0.8,-0.08);
\draw[thick] (0.4-1.6+0.8,0) -- (0.4-1.6+0.8,-0.3);
\path (-0.4-1.6-0.8,0.6) node [style=xjp] {$\gamma_2$};
\path (-0.4-1.6-0.8,-0.65) node [style=xjp] {$\gamma_3$};
\path (0.4-1.6+0.8,0.65) node [style=xjp] {$\gamma_2'$};
\path (0.4-1.6+0.8,-0.6) node [style=xjp] {$\gamma_3'$};
\end{tikzpicture}
\label{Majorana stacking}
\end{align}
with the rotational symmetry properties $\gamma_j\mapsto\gamma_j'$ and $\gamma_j'\mapsto-\gamma_j$. The gapped Hamiltonian corresponding to the graph in Eq. (\ref{Majorana stacking}) is:
\begin{align}
H=-i\gamma_1\gamma_1'-i\gamma_4\gamma_4'-i\gamma_2\gamma_3-i\gamma_2'\gamma_3'
\label{Majorana stacking H}
\end{align}
We can further define four complex fermions from the eight Majorana fermions in Eq. (\ref{Majorana stacking}) as follows:
\begin{align}
\left.
\begin{aligned}
&c_1=(\gamma_2+i\gamma_1)/2~~~~~~c_2=(\gamma_3+i\gamma_4)/2\\
&c_1'=(\gamma_2'+i\gamma_1')/2~~~~~~c_2'=(\gamma_3'+i\gamma_4')/2
\end{aligned}
\right.
\end{align}
It is easy to find the ground state of Eq. (\ref{Majorana stacking H}):
\begin{align}
|\phi\rangle_{\mathrm{0D}}=(c_1^\dag-c_2^\dag&-ic_1'^\dag+ic_2'^\dag-c_1^\dag c_1'^\dag c_2'^\dag+c_2^\dag c_1'^\dag c_2'^\dag\nonumber\\
&+ic_1^\dag c_2^\dag c_1'^\dag-ic_1^\dag c_2^\dag c_2'^\dag)|0\rangle
\label{Majorana stacking 0D}
\end{align}
with the 2-fold rotation property:
\begin{align}
\boldsymbol{R}_{\mu_1}|\phi\rangle_{\mathrm{0D}}=i|\phi\rangle_{\mathrm{0D}}
\label{Majorana stacking sym.}
\end{align}
If a 0D block-state with eigenvalue $e^{i\pi q/2}$ under the 2-fold rotation is stacked onto this 1D block-state near each 0D block labeled by $\mu_1$, the rotation eigenvalue $r$ of the resulting 0D block-state becomes:
\begin{align}
r=e^{i\pi/2+i\pi q}
\end{align}
and there is no solution to $r=1$: explicitly, $q=0,1,2,3$ gives $r=i,-i,i,-i$, never $1$. Therefore, near each 0D block labeled by $\mu_1/\mu_2$, the 1D block-states extend the 0D block-states, hence the 0D block-states at $\mu_1/\mu_2$ have the group structure $\mathbb{Z}_8$, the nontrivial extension of $\mathbb{Z}_4$ by $\mathbb{Z}_2$, attributed to the 0D and 1D block-state decorations, respectively.
Similarly for the other 1D and 0D block-states: the 0D block-states at an arbitrary 0D block have the group structure $\mathbb{Z}_8$. Nevertheless, the stackings between 1D and 0D block-states at different 0D blocks are not independent. For instance, if we decorate two copies of the Majorana chain on the 1D blocks $\tau_1$, these Majorana chains extend the 0D block-states at both $\mu_1$ and $\mu_2$. It is not hard to verify that only three 0D blocks carry independent stackings between 1D and 0D block-states, hence the ultimate classification with accurate group structure is:
\begin{align}
\mathcal{G}_{p2,1/2}=E_{p2,1/2}^{\mathrm{1D}}\times_{\omega_2}E_{p2,1/2}^{\mathrm{0D}}=\mathbb{Z}_4\times\mathbb{Z}_8^3
\end{align}
here the symbol $\times_{\omega_2}$ means that the independent nontrivial 1D and 0D block-states $E_{p2,1/2}^{\mathrm{1D}}$ and $E_{p2,1/2}^{\mathrm{0D}}$ are nontrivially extended, as described by the following short exact sequence:
\begin{align}
0\rightarrow E_{p2,1/2}^{\mathrm{1D}}\rightarrow \mathcal{G}_{p2,1/2}\rightarrow E_{p2,1/2}^{\mathrm{0D}}\rightarrow 0
\end{align}
\subsection{Rhombic lattice: $cmm$}
For the rhombic lattice, we demonstrate the crystalline TSC protected by $cmm$ symmetry as an example. The corresponding point group of $cmm$ is the dihedral group $D_2$. For 2D blocks $\sigma$ and 1D blocks $\tau_1$, there is no on-site symmetry; for 1D blocks $\tau_2/\tau_3$, the on-site symmetry is $\mathbb{Z}_2$ via the reflection symmetry acting internally; for 0D blocks $\mu_1$, the on-site symmetry is $\mathbb{Z}_2$ via the 2-fold rotation acting internally; for 0D blocks $\mu_2$ and $\mu_3$, the on-site symmetry group is $\mathbb{Z}_2\rtimes\mathbb{Z}_2$ via the $D_2$ symmetry acting internally. The cell decomposition of $cmm$ is illustrated in Fig. \ref{cmm}.
\begin{figure}
\centering
\includegraphics[width=0.483\textwidth]{cmm.png}
\caption{\#9 wallpaper group $cmm$ and its cell decomposition.}
\label{cmm}
\end{figure}
We discuss systems with spinless and spin-1/2 fermions separately. For spinless fermions, consider the 0D block-state decoration: for 0D blocks $\mu_1$, the total symmetry group of each is $\mathbb{Z}_2^f\times\mathbb{Z}_2$, and the candidate states can be characterized by the 1D irreducible representations of this symmetry group:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times\mathbb{Z}_2,U(1)\right]=\mathbb{Z}_2^2
\end{align}
where the first $\mathbb{Z}_2$ represents the fermion parity of the complex fermion, and the other the eigenvalue $-1$ of the rotation operation. So at each 0D block labeled by $\mu_1$, the block-state can be labeled by $(\pm,\pm)$, where the two $\pm$'s represent the fermion parity and rotation eigenvalue, respectively. For 0D blocks $\mu_2$ and $\mu_3$, the classification data can be characterized by the 1D irreducible representations of the full symmetry group $\mathbb{Z}_2^f\times(\mathbb{Z}_2\rtimes\mathbb{Z}_2)$:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^3
\end{align}
and the three $\mathbb{Z}_2$'s have different physical meanings: the first $\mathbb{Z}_2$ represents the fermion parity of the complex fermion, the second the rotation eigenvalue $-1$, and the third the reflection eigenvalue $-1$. So at each such 0D block, the block-state can be labeled by $(\pm,\pm,\pm)$, where the three $\pm$'s represent the fermion parity, rotation and reflection eigenvalues (or two independent reflection eigenvalues, since an even-fold dihedral group can also be generated by two independent reflections), respectively. According to this notation, the obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{cmm,0}^{\mathrm{0D}}=\mathbb{Z}_2^8
\end{align}
where the group elements can be labeled by:
\begin{align}
[(\pm,\pm),(\pm,\pm,\pm),(\pm,\pm,\pm)]
\end{align}
where the three brackets represent the block-states at $\mu_1$, $\mu_2$ and $\mu_3$, respectively.
Subsequently we consider the 1D block-state decoration. For 1D blocks $\tau_1$, the total symmetry group is just the fermion parity $\mathbb{Z}_2^f$, so the only nontrivial 1D block-state is the Majorana chain; for 1D blocks $\tau_2$ and $\tau_3$, the total symmetry group is $\mathbb{Z}_2^f\times\mathbb{Z}_2$, so there are two possible 1D block-states: the Majorana chain and the 1D FSPT state (composed of double Majorana chains). Hence all 1D block-states form a group:
\begin{align}
\{\mathrm{BS}\}_{cmm,0}^{\mathrm{1D}}=\mathbb{Z}_2^5
\end{align}
Then we discuss the decorations of these two root phases separately.
\paragraph{Majorana chain decoration}First we consider the Majorana chain decoration on the 1D blocks $\tau_1$, which leaves two/four dangling Majorana fermions at each 0D block $\mu_1/\mu_2$. Near $\mu_1$, the Majorana fermions have the following rotation symmetry property:
\begin{align}
\boldsymbol{R}_{\mu_1}:~\gamma_1\leftrightarrow\gamma_2
\end{align}
with local fermion parity and its symmetry property:
\begin{align}
P_f=i\gamma_1\gamma_2,~\boldsymbol{R}_{\mu_1}:~P_f\mapsto-P_f
\end{align}
Hence these two Majorana fermions form a projective representation of the total symmetry group $\mathbb{Z}_2^f\times\mathbb{Z}_2$ at the 0D block $\mu_1$, and a non-degenerate ground state is forbidden. Thus the Majorana chain decoration on the 1D blocks $\tau_1$ is obstructed because of the violation of the no-open-edge condition.
Then we consider the Majorana chain decoration on the 1D blocks $\tau_2$, which leaves two Majorana fermions at each 0D block $\mu_2/\mu_3$. Near $\mu_2$, the Majorana fermions have the following reflection symmetry property:
\begin{align}
\boldsymbol{M}_{\tau_3}:~\gamma_1\leftrightarrow\gamma_2
\end{align}
with local fermion parity and its symmetry property:
\begin{align}
P_f=i\gamma_1\gamma_2,~\boldsymbol{M}_{\tau_3}:~P_f\mapsto-P_f
\end{align}
Hence these two Majorana fermions form a projective representation of the total symmetry group $(\mathbb{Z}_2\rtimes\mathbb{Z}_2)\times\mathbb{Z}_2^f$ at the 0D block $\mu_2$, and a non-degenerate ground state is forbidden. Thus the Majorana chain decoration on $\tau_2$ is obstructed because of the violation of the no-open-edge condition. The Majorana chain decoration on the 1D blocks $\tau_3$ is similar, hence all Majorana chain decorations are obstructed.
\paragraph{1D FSPT state decoration}The 1D FSPT state can only be decorated on the 1D blocks $\tau_2$ and $\tau_3$, because the 1D block $\tau_1$ does not have a $\mathbb{Z}_2$ on-site symmetry. First we consider the 1D FSPT state decoration on the 1D blocks $\tau_2$, which leaves four dangling Majorana fermions ($\gamma_j,\gamma_j'$, $j=1,2$) at each 0D block $\mu_2/\mu_3$. Near $\mu_2/\mu_3$, the corresponding 4 Majorana fermions have the following symmetry properties ($j=1,2$):
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_2}:~\gamma_j\mapsto\gamma_j,~\gamma_j'\mapsto-\gamma_j'\\
&\boldsymbol{M}_{\tau_3}:~\gamma_1\leftrightarrow\gamma_2,~\gamma_1'\leftrightarrow\gamma_2'
\end{aligned}
\end{align}
Similarly to the 1D block-state decorations in the $p4m$ case (the point group of the 0D block $\mu_2$ in the cell decomposition of the $p4m$ symmetry is $D_2$), these 4 Majorana fermions cannot be gapped out: they form a projective representation of the $D_2$ group at each corresponding 0D block, and a non-degenerate ground state is forbidden. Accordingly, the 1D FSPT state decoration on $\tau_2$ or $\tau_3$ alone is obstructed because of the degenerate ground state.
There is one exception: if we decorate 1D FSPT phases on the 1D blocks $\tau_2$ and $\tau_3$ simultaneously, eight dangling Majorana fermions are left at each $\mu_2$ and $\mu_3$. We demonstrate that these Majorana fermions can be gapped out. We have shown that if we decorate a 1D FSPT phase solely on each 1D block $\tau_2$ or $\tau_3$, the four dangling Majorana fermions at each 0D block $\mu_2/\mu_3$ form a projective representation of the $D_2$ symmetry group (acting internally, identical with the $\mathbb{Z}_2\rtimes\mathbb{Z}_2$ on-site symmetry); in the present situation, the eight dangling Majorana fermions at each $\mu_2$ and $\mu_3$ therefore form two projective representations of the $D_2$ symmetry group. Nevertheless, there is only one nontrivial projective representation of $D_2$, which can be obtained from the following 2-cohomology:
\begin{align}
\mathcal{H}^2\left[\mathbb{Z}_2\rtimes\mathbb{Z}_2,U(1)\right]=\mathbb{Z}_2
\end{align}
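The statement that two copies of the nontrivial projective representation stack to a linear representation can be illustrated concretely with Pauli matrices (a minimal sketch; taking $\sigma_x$ and $\sigma_z$ as the images of the two generating reflections is our own illustrative choice):
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

# nontrivial projective representation of Z2 x Z2: generators anticommute
assert np.allclose(X @ Z, -(Z @ X))

# stacking two copies: the doubled generators commute, so the obstruction
# class in H^2 cancels and the representation becomes linear
XX, ZZ = np.kron(X, X), np.kron(Z, Z)
assert np.allclose(XX @ ZZ, ZZ @ XX)
print("the doubled representation is linear")
\end{verbatim}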
So these two projective representations combine into a linear representation of the $D_2$ symmetry group, and the corresponding eight Majorana fermions can be gapped out. As a consequence, the only nontrivial obstruction-free 1D block-state is the 1D FSPT state decoration on $\tau_2$ and $\tau_3$ simultaneously, and all obstruction-free 1D block-states form a group:
\begin{align}
\{\mathrm{OFBS}\}_{cmm,0}^{\mathrm{1D}}=\mathbb{Z}_2
\end{align}
where the group elements can be labeled by $n_2=n_3$ ($n_2/n_3$ represents the number of decorated 1D FSPT states on $\tau_2/\tau_3$). According to the aforementioned discussions, a necessary condition for an obstruction-free block-state is $n_2=n_3$.
So far we have already obtained all obstruction-free block-states, and they form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{cmm,0}&=\{\mathrm{OFBS}\}_{cmm,0}^{\mathrm{1D}}\times\{\mathrm{OFBS}\}_{cmm,0}^{\mathrm{0D}}\nonumber\\
&=\mathbb{Z}_2\times\mathbb{Z}_2^8=\mathbb{Z}_2^9
\end{align}
With all obstruction-free block-states in hand, we subsequently discuss all possible trivializations. First we consider the 2D bubble equivalences: we decorate a Majorana chain with anti-PBC on each 2D block and enlarge all ``Majorana bubbles''. Near each 1D block labeled by $\tau_1$, the bubbles can be deformed to double Majorana chains, which can be trivialized because there is no on-site symmetry on $\tau_1$ and the classification of 1D invertible topological phases (i.e., the Majorana chain) is $\mathbb{Z}_2$. Near each 1D block labeled by $\tau_2/\tau_3$, the bubbles can also be deformed to double Majorana chains; nevertheless, these double Majorana chains cannot be trivialized because there is an on-site $\mathbb{Z}_2$ symmetry on each $\tau_2/\tau_3$ (by the internal action of the reflection symmetry) that exchanges the two Majorana chains, and this is exactly the definition of the nontrivial 1D FSPT phase protected by on-site $\mathbb{Z}_2$ symmetry. Equivalently, the 1D FSPT state decorations on the 1D blocks $\tau_2$ and $\tau_3$ can be deformed to a trivial state via the 2D ``Majorana'' bubble equivalence. Furthermore, similarly to the $p4m$ case, the 2D ``Majorana'' bubble equivalence has no effect on the 0D blocks labeled by $\mu_2$ and $\mu_3$; nevertheless, similarly to the $p2$ case, it changes the fermion parity of each 0D block labeled by $\mu_1$, because there is no reflection operation on the 0D block $\mu_1$ and the Majorana chain with anti-PBC surrounding each $\mu_1$ is compatible with all other symmetry operations.
Subsequently we consider the 1D bubble equivalences. For instance, we decorate a pair of complex fermions [cf. Eq. (\ref{1D bubble})] on each 1D block $\tau_1$: near each 0D block $\mu_1$, there are 2 complex fermions forming the following atomic insulator:
\begin{align}
|\psi\rangle_{cmm}^{\mu_1}=c_1^\dag c_2^\dag|0\rangle
\end{align}
with rotation property:
\begin{align}
\boldsymbol{R}_{\mu_1}|\psi\rangle_{cmm}^{\mu_1}=c_2^\dag c_1^\dag|0\rangle=-|\psi\rangle_{cmm}^{\mu_1}
\end{align}
i.e., 1D bubble construction on $\tau_1$ changes the rotation eigenvalue at each 0D block $\mu_1$. Near each 0D block $\mu_2$, there are 4 complex fermions forming another atomic insulator:
\begin{align}
|\psi\rangle_{cmm}^{\mu_2}=c_1'^\dag c_2'^\dag c_3'^\dag c_4'^\dag|0\rangle
\end{align}
with two independent reflection symmetry properties ($D_2$ symmetry at 0D block $\mu_2$ can also be generated by two independent reflections $\boldsymbol{M}_{\tau_2}$ and $\boldsymbol{M}_{\tau_3}$):
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_2}|\psi\rangle_{cmm}^{\mu_2}=c_3'^\dag c_4'^\dag c_1'^\dag c_2'^\dag|0\rangle=|\psi\rangle_{cmm}^{\mu_2}\\
&\boldsymbol{M}_{\tau_3}|\psi\rangle_{cmm}^{\mu_2}=c_4'^\dag c_3'^\dag c_2'^\dag c_1'^\dag|0\rangle=|\psi\rangle_{cmm}^{\mu_2}
\end{aligned}
\end{align}
i.e., the 1D bubble construction on $\tau_1$ does not change anything on $\mu_2$. Similar 1D bubble constructions can be performed on the 1D blocks $\tau_2$ and $\tau_3$, and we summarize the effects of the 1D bubble constructions as follows (the permutation signs entering this list are checked in a short sketch after it):
\begin{enumerate}[1.]
\item 1D bubble construction on $\tau_1$: changes the eigenvalue of $\boldsymbol{R}_{\mu_1}$ at $\mu_1$;
\item 1D bubble construction on $\tau_2$: simultaneously changes the eigenvalues of $\boldsymbol{M}_{\tau_3}$ at $\mu_2$ and $\mu_3$;
\item 1D bubble construction on $\tau_3$: simultaneously changes the eigenvalues of $\boldsymbol{M}_{\tau_2}$ at $\mu_2$ and $\mu_3$;
\end{enumerate}
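The rotation and reflection eigenvalues appearing in these bubble constructions are simply signs of permutations of fermionic creation operators, and can be checked with a short script (a minimal sketch; the helper fermion_sign is our own):
\begin{verbatim}
from itertools import combinations

def fermion_sign(perm):
    # sign acquired when reordering c^dag_{perm[0]} ... c^dag_{perm[-1]}
    # into ascending order: parity of the number of inversions
    return (-1) ** sum(p > q for p, q in combinations(perm, 2))

print(fermion_sign([1, 0]))        # R_{mu1} on |psi>^{mu1}_{cmm}: -1
print(fermion_sign([2, 3, 0, 1]))  # M_{tau2} on |psi>^{mu2}_{cmm}: +1
print(fermion_sign([3, 2, 1, 0]))  # M_{tau3} on |psi>^{mu2}_{cmm}: +1
\end{verbatim}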
With all possible trivializations, we are ready to study the trivial states. Start from the original trivial 0D block-state (nothing is decorated on any 0D block):
\[
[(+,+),(+,+,+),(+,+,+)]
\]
If we take the 2D ``Majorana bubble'' construction $n_0$ times, and take the 1D bubble equivalences on $\tau_j$ $n_j$ times ($j=1,2,3$), the above trivial state will be deformed to a new 0D block-state labeled by:
\begin{align}
&\left[((-1)^{n_0},(-1)^{n_1}),(+,(-1)^{n_3},(-1)^{n_2}),\right.\nonumber\\
&\left.(+,(-1)^{n_3},(-1)^{n_2})\right]
\label{cmm spinless trivial state}
\end{align}
According to the definition of bubble equivalence, all these states should be trivial. It is easy to see that there are only four independent quantities ($n_j$, $j=0,1,2,3$) in Eq. (\ref{cmm spinless trivial state}), hence all these trivial states form the following group:
\begin{align}
\{\mathrm{TBS}\}_{cmm,0}&=\{\mathrm{TBS}\}_{cmm,0}^{\mathrm{1D}}\times\{\mathrm{TBS}\}_{cmm,0}^{\mathrm{0D}}\nonumber\\
&=\mathbb{Z}_2\times\mathbb{Z}_2^3=\mathbb{Z}_2^4
\end{align}
here $\{\mathrm{TBS}\}_{cmm,0}^{\mathrm{1D}}$ represents the group of trivial states with non-vacuum 1D blocks (i.e., 1D FSPT phase decorations on $\tau_2$ and $\tau_3$ simultaneously that is obtained from 2D ``Majorana'' bubble construction on the vacuum block-state), and $\{\mathrm{TBS}\}_{cmm,0}^{\mathrm{0D}}$ represents the group of trivial states with non-vacuum 0D blocks.
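This counting can be cross-checked by linear algebra over $\mathbb{Z}_2$: encoding the eight labels of Eq. (\ref{cmm spinless trivial state}) as a $\mathbb{Z}_2$-valued vector (the encoding below is our own, for illustration), the four bubble moves act as four linearly independent vectors:
\begin{verbatim}
import numpy as np

# columns: [(Pf,R)@mu1, (Pf,R,M)@mu2, (Pf,R,M)@mu3]; rows: actions of
# n0 (2D Majorana bubble) and n1, n2, n3 (1D bubbles on tau1, tau2, tau3)
rows = np.array([[1,0, 0,0,0, 0,0,0],
                 [0,1, 0,0,0, 0,0,0],
                 [0,0, 0,0,1, 0,0,1],
                 [0,0, 0,1,0, 0,1,0]], dtype=np.uint8)

def gf2_rank(m):
    m, r = m.copy(), 0
    for c in range(m.shape[1]):
        piv = next((i for i in range(r, m.shape[0]) if m[i, c]), None)
        if piv is None:
            continue
        m[[r, piv]] = m[[piv, r]]
        for i in range(m.shape[0]):
            if i != r and m[i, c]:
                m[i] ^= m[r]
        r += 1
    return r

print(gf2_rank(rows))   # 4, so {TBS}_{cmm,0} = (Z2)^4
\end{verbatim}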
Therefore, all independent nontrivial block-states are labeled by the group elements of the following quotient group:
\begin{align}
E_{cmm,0}&=\{\mathrm{OFBS}\}_{cmm,0}/\{\mathrm{TBS}\}_{cmm,0}\nonumber\\
&=\mathbb{Z}_2^9/\mathbb{Z}_2^4=\mathbb{Z}_2^5
\end{align}
here all $\mathbb{Z}_2$'s are from the nontrivial 0D block-states. It is obvious that there is no nontrivial group extension because of the absence of a nontrivial 1D block-state, so the group structure of $E_{cmm,0}$ is already correct.
Now we turn to discuss systems with spin-1/2 fermions. First we consider the 0D block-state decorations. For the 0D blocks labeled by $\mu_1$, the 2-fold rotational symmetry acts on each of them internally, hence the total symmetry is $\mathbb{Z}_4^f$, i.e., the nontrivial $\mathbb{Z}_2^f$ extension of the on-site $\mathbb{Z}_2$ symmetry. All different 0D block-states can be characterized by different 1D irreducible representations of the corresponding symmetry group:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_4^f,U(1)\right]=\mathbb{Z}_4
\end{align}
There is no trivialization on them. Furthermore, for the 0D blocks labeled by $\mu_2$ and $\mu_3$, the dihedral group symmetry $D_2$ acts on each of them internally, and, similarly to the $p4m$ case, the classification of the corresponding 0D block-states can be characterized by different 1D irreducible representations of the full symmetry group:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times_{\omega_2}(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^2
\label{cmm classification data}
\end{align}
Here different $\mathbb{Z}_2$'s represent the rotation and reflection eigenvalues at each $D_2$ center. As a consequence, all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{cmm,1/2}^{\mathrm{0D}}=\mathbb{Z}_4\times\mathbb{Z}_2^4
\end{align}
and there is no trivialization on them ($\{\mathrm{TBS}\}_{cmm,1/2}^{\mathrm{0D}}=0$). As a consequence, the classification attributed to 0D block-state decorations is:
\begin{align}
E_{cmm,1/2}^{\mathrm{0D}}=\mathbb{Z}_4\times\mathbb{Z}_2^4
\label{cmm0}
\end{align}
Subsequently we investigate the 1D block-state decoration. On $\tau_1$, the only possible 1D block-state is the Majorana chain because of the absence of on-site symmetry; on $\tau_2$ and $\tau_3$, the total symmetry group is $\mathbb{Z}_4^f$, hence there is no candidate block-state due to the trivial classification of the corresponding 1D FSPT phases. The Majorana chain decoration on $\tau_1$ leaves 2 dangling Majorana fermions at each $\mu_1$ and 4 dangling Majorana fermions at each $\mu_2$. At $\mu_1$, the 2 dangling Majorana fermions can be gapped out by an entanglement pair without breaking any symmetry, since:
\begin{align}
\boldsymbol{R}_{\mu_1}:~i\gamma_1\gamma_2\mapsto-i\gamma_2\gamma_1=i\gamma_1\gamma_2
\end{align}
at $\mu_2$, the 4 Majorana fermions have the following reflection symmetry properties ($D_2$ symmetry can also be generated by two independent reflections $\boldsymbol{M}_{\tau_2}$ and $\boldsymbol{M}_{\tau_3}$):
\begin{align}
\left.
\begin{aligned}
\boldsymbol{M}_{\tau_2}:~&(\eta_1,\eta_2,\eta_3,\eta_4)\mapsto(\eta_2,-\eta_1,\eta_4,-\eta_3)\\
\boldsymbol{M}_{\tau_3}:~&(\eta_1,\eta_2,\eta_3,\eta_4)\mapsto(\eta_4,\eta_3,-\eta_2,-\eta_1)
\end{aligned}
\right.
\label{cmm spin-1/2 symmetry at mu2}
\end{align}
Consider the following Hamiltonian containing two entanglement pairs of these four Majorana fermions:
\begin{align}
H_{\mu_2}=-i\eta_1\eta_3-i\eta_2\eta_4
\end{align}
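Both the symmetry invariance and the spectral gap of $H_{\mu_2}$ can be verified numerically; the following is a minimal sketch (the Jordan-Wigner matrices are our own illustrative representation of the four Majorana operators):
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
kron = lambda *m: reduce(np.kron, m)

# Jordan-Wigner representation of eta_1..eta_4 on two fermion modes
eta = [kron(X, I2), kron(Y, I2), kron(Z, X), kron(Z, Y)]

def H(e):   # H_{mu2} = -i eta1 eta3 - i eta2 eta4
    return -1j * e[0] @ e[2] - 1j * e[1] @ e[3]

Mt2 = [eta[1], -eta[0], eta[3], -eta[2]]   # M_{tau2} action
Mt3 = [eta[3], eta[2], -eta[1], -eta[0]]   # M_{tau3} action
assert np.allclose(H(Mt2), H(eta)) and np.allclose(H(Mt3), H(eta))
print(np.round(np.linalg.eigvalsh(H(eta)), 6))  # unique ground state at -2
\end{verbatim}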
Indeed, $H_{\mu_2}$ is invariant under the symmetry actions (\ref{cmm spin-1/2 symmetry at mu2}) and gaps out the four Majorana fermions. As a consequence, all obstruction-free 1D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{cmm,1/2}^{\mathrm{1D}}=\mathbb{Z}_2
\end{align}
And it is easy to see that there is no trivialization (i.e., $\{\mathrm{TBS}\}_{cmm,1/2}^{\mathrm{1D}}=0$). So the classification attributed to 1D block-state decorations is:
\begin{align}
E_{cmm,1/2}^{\mathrm{1D}}=\mathbb{Z}_2
\label{cmm1}
\end{align}
With the classification data of Eqs. (\ref{cmm0}) and (\ref{cmm1}), we consider the group structure of the corresponding classification. Equivalently, we investigate whether the 1D block-states extend the 0D block-states. The only possible case of stacking happens on the 1D blocks labeled by $\tau_1$ because, according to the discussions of the $p4m$ and $p2$ cases, the other 1D blocks have no nontrivial block-state. We decorate two copies of Majorana chains on $\tau_1$, which leaves 2 dangling Majorana fermions at each 0D block labeled by $\mu_1$ and 4 dangling Majorana fermions at each 0D block labeled by $\mu_2$. At $\mu_1$, these Majorana chains can be smoothly deformed to the state described by Eqs. (\ref{Majorana stacking}) and (\ref{Majorana stacking 0D}), with the symmetry properties of Eq. (\ref{Majorana stacking sym.}). So, similarly to the $p2$ case, near each 0D block labeled by $\mu_1$ the 1D block-states extend the 0D block-states, and the 0D block-states at $\mu_1$ acquire the group structure $\mathbb{Z}_8$ as a consequence. At $\mu_2$, these Majorana chains can be smoothly deformed to two copies of the state described by Eqs. (\ref{Majorana stacking}) and (\ref{Majorana stacking 0D}), with eigenvalue $-1$ under the 2-fold rotational symmetry. The classification data of the 0D block-states at $\mu_2$ is determined by Eq. (\ref{cmm classification data}); hence, if a 0D block-state with eigenvalue $-1$ under the 2-fold rotation is attached to each 1D block-state near each 0D block labeled by $\mu_2$, the rotation eigenvalue $s$ of the obtained 0D block-state becomes:
\begin{align}
s=(-1)\times(-1)=1
\end{align}
Therefore, near the 0D block $\mu_2$ there is an appropriate 1D block-state which itself forms a $\mathbb{Z}_2$ structure under stacking, and consequently there is no stacking between the 1D and 0D block-states at $\mu_2$. Finally, the ultimate classification with accurate group structure is:
\begin{align}
\mathcal{G}_{cmm,1/2}=E_{cmm,1/2}^{\mathrm{0D}}\times_{\omega_2}E_{cmm,1/2}^{\mathrm{1D}}=\mathbb{Z}_8\times\mathbb{Z}_2^4
\end{align}
here the symbol ``$\times_{\omega_2}$'' means that the 1D and 0D block-states $E_{cmm,1/2}^{\mathrm{1D}}$ and $E_{cmm,1/2}^{\mathrm{0D}}$ have a nontrivial extension, described by the following short exact sequence:
\begin{align}
0\rightarrow E_{cmm,1/2}^{\mathrm{1D}}\rightarrow \mathcal{G}_{cmm,1/2}\rightarrow E_{cmm,1/2}^{\mathrm{0D}}\rightarrow0
\end{align}
\subsection{Rectangle lattice: $pgg$}
For the rectangle lattice, we demonstrate the crystalline TSC protected by $pgg$ symmetry as an example. $pgg$ is a nonsymmorphic wallpaper group, and the corresponding point group is the 2-fold dihedral group $D_2$. For the 2D blocks $\sigma$ and the 1D blocks $\tau_1$ and $\tau_2$, there is no on-site symmetry; for the 0D blocks $\mu_1$ and $\mu_2$, the on-site symmetry is $\mathbb{Z}_2$ because the 2-fold rotational symmetry $C_2$ acts on the 0D blocks internally, see Fig. \ref{pgg}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{pgg.png}
\caption{\#8 wallpaper group $pgg$ and its cell decomposition.}
\label{pgg}
\end{figure}
We investigate the systems with spinless and spin-1/2 fermions separately. For systems with spinless fermions, first we investigate the 0D block-state decoration. For an arbitrary 0D block, the total symmetry group is an on-site $\mathbb{Z}_2$ symmetry (by 2-fold rotation acting internally) together with the fermion parity: $\mathbb{Z}_2^f\times\mathbb{Z}_2$, and the classification data can be characterized by different 1D irreducible representations of the symmetry group:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_2^f\times\mathbb{Z}_2,U(1)\right]=\mathbb{Z}_2^2
\end{align}
these two $\mathbb{Z}_2$'s represent the fermion parity and the eigenvalue of the 2-fold rotational symmetry on each 0D block, respectively. So at each 0D block, the block-state can be labeled by $(\pm,\pm)$, where these two $\pm$'s represent the fermion parity and rotation eigenvalue, respectively. According to this notation, the obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{pgg,0}^{\mathrm{0D}}=\mathbb{Z}_2^4
\end{align}
and the group elements can be labeled by (two brackets represent the block-states at $\mu_1$ and $\mu_2$):
\[
[(\pm,\pm),(\pm,\pm)]
\]
Then we consider possible trivializations via bubble constructions. First of all, we consider the 2D bubble equivalence: we decorate a Majorana chain with anti-PBC on each 2D block, which can be trivialized if it shrinks to a point. Similarly to the $p2$ case, by some proper local unitary transformations, this assembly of bubbles can be deformed to an assembly of Majorana chains with odd fermion parity surrounding each 0D block, and the fermion parities of all 0D blocks are changed simultaneously. Equivalently, the fermion parities of the 0D blocks labeled by $\mu_1$ and $\mu_2$ are not independent.
Then we study the role of the rotational symmetry. Consider the 1D bubble equivalence on $\tau_2$: we decorate a pair of complex fermions [cf. Eq. (\ref{1D bubble})] on each $\tau_2$. Near $\mu_2$, there are 2 complex fermions which form an atomic insulator:
\begin{align}
|\psi\rangle_{pgg}^{\mu_2}=c_1^\dag c_2^\dag|0\rangle
\end{align}
with rotation property as ($\boldsymbol{R}_{\mu_2}$ represents the rotation operation centred at the 0D block labeled by $\mu_2$):
\begin{align}
\boldsymbol{R}_{\mu_2}|\psi\rangle_{pgg}^{\mu_2}=c_2^\dag c_1^\dag|0\rangle=-|\psi\rangle_{pgg}^{\mu_2}
\end{align}
i.e., $|\psi\rangle_{pgg}^{\mu_2}$ can trivialize the rotation eigenvalue $-1$ at each 0D block labeled by $\mu_2$, and similarly for the 0D blocks labeled by $\mu_1$. Hence the rotation eigenvalues at $\mu_1$ and $\mu_2$ are not independent. We further consider the 1D bubble equivalence on $\tau_1$: near each 0D block labeled by $\mu_1$, there are 4 complex fermions which form another atomic insulator:
\begin{align}
|\psi\rangle_{pgg}^{\mu_1}=c_1'^\dag c_2'^\dag c_3'^\dag c_4'^\dag|0\rangle
\end{align}
with rotation property as ($\boldsymbol{R}_{\mu_1}$ represents the rotation operation centred at the 0D block labeled by $\mu_1$):
\begin{align}
\boldsymbol{R}_{\mu_1}|\psi\rangle_{pgg}^{\mu_1}=c_3'^\dag c_4'^\dag c_1'^\dag c_2'^\dag|0\rangle=|\psi\rangle_{pgg}^{\mu_1}
\end{align}
So there is no trivialization from this bubble construction.
With all possible bubble constructions, we are ready to study the trivial states. Start from the original trivial state (nothing is decorated on any 0D block):
\[
[(+,+),(+,+)]
\]
if we take the 2D bubble construction $n_0$ times and the 1D bubble construction on $\tau_2$ $n_2$ times, the above trivial state will be deformed to a new 0D block-state labeled by:
\begin{align}
[((-1)^{n_0},(-1)^{n_2}),((-1)^{n_0},(-1)^{n_2})]
\label{pgg spinless trivial state}
\end{align}
According to the definition of bubble equivalence, all these states should be trivial. It is easy to see that there are only two independent quantities in the state (\ref{pgg spinless trivial state}), hence all trivial states form the group:
\begin{align}
\{\mathrm{TBS}\}_{pgg,0}^{\mathrm{0D}}=\mathbb{Z}_2^2
\end{align}
Therefore, all independent nontrivial 0D block-states are labeled by different group elements of the following quotient group:
\begin{align}
E_{pgg,0}^{\mathrm{0D}}=\{\mathrm{OFBS}\}_{pgg,0}^{\mathrm{0D}}/\{\mathrm{TBS}\}_{pgg,0}^{\mathrm{0D}}=\mathbb{Z}_2^2
\end{align}
Subsequently we investigate the 1D block-state decoration. Due to the absence of the on-site symmetry, the unique possible 1D block-state is Majorana chain. So all 1D block-states form a group:
\begin{align}
\{\mathrm{BS}\}_{pgg,0}^{\mathrm{1D}}=\mathbb{Z}_2^2
\end{align}
Then we discuss the possible obstructions: we discuss the 1D block-state decorations on $\tau_1$ and $\tau_2$ separately.
\paragraph{Majorana chain decoration on $\tau_1$}The Majorana chain decoration on $\tau_1$ leaves 4 dangling Majorana fermions at each corresponding 0D block $\mu_1$, with the following rotational symmetry properties:
\begin{align}
\boldsymbol{R}_{\mu_1}:~\gamma_j\mapsto\gamma_{j+2}
\label{pgg sym}
\end{align}
where $\boldsymbol{R}_{\mu_1}$ is the generator of the $C_2$ rotational symmetry centred at each 0D block labeled by $\mu_1$, and all subscripts are taken modulo 4 (i.e., $\gamma_5$ represents the Majorana mode labeled by $\gamma_1$). Consider the local fermion parity and its symmetry property:
\begin{align}
P_f=-\prod\limits_{j=1}^4\gamma_j,~~\boldsymbol{R}_{\mu_1}:~P_f\mapsto P_f
\end{align}
where the invariance follows because $\boldsymbol{R}_{\mu_1}$ permutes the four Majorana operators by an even permutation. Hence these 4 dangling Majorana fermions can be gapped out by some proper interactions in a symmetric way; equivalently, the no-open-edge condition is satisfied. More precisely, we consider the following Hamiltonian near each 0D block $\mu_1$:
\begin{align}
H=i\gamma_1\gamma_2+i\gamma_3\gamma_4
\end{align}
it is obvious that $H$ is symmetric under (\ref{pgg sym}), and it can gap out the four Majorana fermions at each $\mu_1$.
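A minimal numerical sketch confirming this (the Jordan-Wigner matrices are our own illustrative representation of the four Majorana operators):
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
kron = lambda *m: reduce(np.kron, m)

# four Majorana operators gamma_1..gamma_4 on two fermion modes
g = [kron(X, I2), kron(Y, I2), kron(Z, X), kron(Z, Y)]

H = 1j * g[0] @ g[1] + 1j * g[2] @ g[3]
# R_{mu1}: gamma_j -> gamma_{j+2} merely exchanges the two pairing terms
H_rot = 1j * g[2] @ g[3] + 1j * g[0] @ g[1]
assert np.allclose(H, H_rot)
print(np.round(np.linalg.eigvalsh(H), 6))   # gapped, unique ground state
\end{verbatim}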
\paragraph{Majorana chain decoration on $\tau_2$}Majorana chain decoration on $\tau_2$ leaves 2 dangling Majorana fermions at each corresponding 0D block which can be gapped out by an entanglement pair. Nevertheless this entanglement pair breaks the rotational symmetry, and the no-open-edge condition is violated.
As a consequence, all obstruction-free 1D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{pgg,0}^{\mathrm{1D}}=\mathbb{Z}_2
\end{align}
and it is easy to verify that there is no trivialization (i.e., $\{\mathrm{TBS}\}_{pgg,0}^{\mathrm{1D}}=0$). Therefore, all independent nontrivial 1D block-states are labeled by different group elements of the following group:
\begin{align}
E_{pgg,0}^{\mathrm{1D}}=\{\mathrm{OFBS}\}_{pgg,0}^{\mathrm{1D}}/\{\mathrm{TBS}\}_{pgg,0}^{\mathrm{1D}}=\mathbb{Z}_2
\end{align}
It is straightforward to see that there is no stacking between 1D and 0D block-states, and the ultimate classification with accurate group structure is:
\begin{align}
\mathcal{G}_{pgg,0}=E_{pgg,0}^{\mathrm{0D}}\times E_{pgg,0}^{\mathrm{1D}}=\mathbb{Z}_2^3
\end{align}
Then we turn to the systems with spin-1/2 fermions. First we investigate the 0D block-state decorations. All 0D blocks are 2-fold rotation centers, hence the total symmetry group of each 0D block is $\mathbb{Z}_4^f$, the nontrivial $\mathbb{Z}_2^f$ extension of the on-site symmetry $\mathbb{Z}_2$, and the different 0D block-states can be characterized by different 1D irreducible representations of the corresponding symmetry group:
\begin{align}
\mathcal{H}^1\left[\mathbb{Z}_4^f,U(1)\right]=\mathbb{Z}_4
\end{align}
All root phases at each 0D block are characterized by group elements of $\{1,i,-1,-i\}$. So at each 0D block, the block-state can be labeled by $\nu\in\{1,i,-1,-i\}$. According to this notation, the obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{pgg,1/2}^{\mathrm{0D}}=\mathbb{Z}_4^2
\end{align}
and different group elements can be labeled by:
\[
[\nu_1,\nu_2]
\]
where $\nu_1$ and $\nu_2$ label the 0D block-states at $\mu_1$ and $\mu_2$, respectively. It is easy to see that there is no trivialization on the 0D block-states (i.e., $\{\mathrm{TBS}\}_{pgg,1/2}^{\mathrm{0D}}=0$), so the classification attributed to 0D block-state decorations is:
\begin{align}
E_{pgg,1/2}^{\mathrm{0D}}=\{\mathrm{OFBS}\}_{pgg,1/2}^{\mathrm{0D}}/\{\mathrm{TBS}\}_{pgg,1/2}^{\mathrm{0D}}=\mathbb{Z}_4^2
\end{align}
Subsequently we investigate the 1D block-state decoration. The unique possible 1D block-state is Majorana chain because of the absence of on-site symmetry.
\paragraph{Majorana chain decoration on $\tau_1$}The Majorana chain decoration on $\tau_1$ leaves 4 dangling Majorana fermions at each 0D block labeled by $\mu_1$, with symmetry properties identical to those of the spinless case [cf. Eq. (\ref{pgg sym})]; hence these 4 Majorana fermions can be gapped out by some proper interactions in a symmetric way, and the no-open-edge condition is satisfied.
\paragraph{Majorana chain decoration on $\tau_2$}Majorana chain decoration on $\tau_2$ leaves 2 dangling Majorana fermions at each 0D block $\mu_2$ which can be gapped out by an entanglement pair in a symmetric way. Therefore the no-open-edge condition is satisfied. Consequently, all obstruction-free 1D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{pgg,1/2}^{\mathrm{1D}}=\mathbb{Z}_2^2
\end{align}
and it is obvious that there is no trivialization from bubble constructions (i.e., $\{\mathrm{TBS}\}_{pgg,1/2}^{\mathrm{1D}}=0$). Therefore, the classification of 2D FSPT phases with $pgg$ symmetry attributed to 1D block-state decorations is:
\begin{align}
E_{pgg,1/2}^{\mathrm{1D}}=\{\mathrm{OFBS}\}_{pgg,1/2}^{\mathrm{1D}}/\{\mathrm{TBS}\}_{pgg,1/2}^{\mathrm{1D}}=\mathbb{Z}_2^2
\end{align}
Then we study the possible stacking between 1D and 0D block-states. If we decorate two Majorana chains on each 1D block labeled by $\tau_1$, then, similarly to the $cmm$ case, there is no stacking between 1D and 0D block-states; if we decorate two Majorana chains on each 1D block labeled by $\tau_2$, then, similarly to the $p2$ case, they can be smoothly deformed to an assembly of 0D root phases at the 0D blocks $\mu_2$. Therefore, the ultimate classification with accurate group structure is:
\begin{align}
\mathcal{G}_{pgg,1/2}=E_{pgg,1/2}^{\mathrm{1D}}\times_{\omega_2}E_{pgg,1/2}^{\mathrm{0D}}=\mathbb{Z}_2\times\mathbb{Z}_4\times\mathbb{Z}_8
\end{align}
here the symbol ``$\times_{\omega_2}$'' means that the 1D and 0D block-states $E_{pgg,1/2}^{\mathrm{1D}}$ and $E_{pgg,1/2}^{\mathrm{0D}}$ have a nontrivial extension, described by the following short exact sequence:
\begin{align}
0\rightarrow E_{pgg,1/2}^{\mathrm{1D}}\rightarrow \mathcal{G}_{pgg,1/2}\rightarrow E_{pgg,1/2}^{\mathrm{0D}}\rightarrow0
\end{align}
\subsection{Hexagonal lattice: $p6m$}
For the hexagonal lattice, we demonstrate the crystalline TSC protected by $p6m$ symmetry as an example. The corresponding point group of $p6m$ is the dihedral group $D_6$. For the 2D blocks labeled by $\sigma$, there is no on-site symmetry; for an arbitrary 1D block, the on-site symmetry is $\mathbb{Z}_2$, attributed to the reflection symmetry acting internally; for the 0D blocks $\mu_1$, the on-site symmetry group is $\mathbb{Z}_6\rtimes\mathbb{Z}_2$, attributed to the $D_6$ group acting internally; for the 0D blocks $\mu_2$, the on-site symmetry is $\mathbb{Z}_2\rtimes\mathbb{Z}_2$, attributed to $D_2\subset D_6$ acting internally; for the 0D blocks $\mu_3$, the on-site symmetry is $\mathbb{Z}_3\rtimes\mathbb{Z}_2$, attributed to $D_3\subset D_6$ acting internally. The cell decomposition is shown in Fig. \ref{p6m}.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{p6m.png}
\caption{\#17 wallpaper group $p6m$ and its cell decomposition.}
\label{p6m}
\end{figure}
We discuss systems with spinless and spin-1/2 fermions separately, beginning with spinless fermions. Consider the 0D block-state decorations: for $\mu_j$ ($j=1,2,3$), the classification data can be characterized by different 1D irreducible representations of the corresponding full symmetry groups, respectively:
\begin{align}
\left.
\begin{aligned}
&\mathcal{H}^1\left[\mathbb{Z}_2^f\times(\mathbb{Z}_6\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^3\\
&\mathcal{H}^1\left[\mathbb{Z}_2^f\times(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^3\\
&\mathcal{H}^1\left[\mathbb{Z}_2^f\times(\mathbb{Z}_3\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^2
\end{aligned}
\right.
\label{p6m classification data}
\end{align}
For the 0D blocks labeled by $\mu_1$ and $\mu_2$, the three $\mathbb{Z}_2$'s in the classification data [cf. the first two rows of Eq. (\ref{p6m classification data})] have different physical meanings: the first $\mathbb{Z}_2$ represents the complex fermion, the second $\mathbb{Z}_2$ represents the rotation eigenvalue $-1$, and the third $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$. For the 0D blocks labeled by $\mu_3$, the two $\mathbb{Z}_2$'s in the classification data [cf. the last row of Eq. (\ref{p6m classification data})] have different physical meanings: the first $\mathbb{Z}_2$ represents the complex fermion, and the second $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$ (i.e., the rotational symmetry plays no role in the 0D block-state decorations). So the 0D block-states at $\mu_1$ and $\mu_2$ can be labeled by $(\pm,\pm,\pm)$, where these three $\pm$'s represent the fermion parity, 2-fold rotation and reflection symmetry eigenvalues (alternatively, the last two $\pm$'s can also represent the eigenvalues of two independent reflection operations, because an even-fold dihedral group can also be generated by two independent reflections); the 0D block-states at $\mu_3$ can be labeled by $(\pm,\pm)$, where these two $\pm$'s represent the fermion parity and reflection symmetry eigenvalues. According to this notation, the obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p6m}^{\mathrm{0D}}=\mathbb{Z}_2^8
\end{align}
and the group elements can be labeled by (three brackets represent the block-states at $\mu_1$, $\mu_2$ and $\mu_3$):
\[
[(\pm,\pm,\pm),(\pm,\pm,\pm),(\pm,\pm)]
\]
Subsequently we investigate the 1D block-state decoration. For all 1D blocks, the total symmetry group is $\mathbb{Z}_2^f\times\mathbb{Z}_2$, and the candidate 1D block-states are the Majorana chain and the 1D FSPT state. So all 1D block-states form a group:
\begin{align}
\{\mathrm{BS}\}_{p6m,0}^{\mathrm{1D}}=\mathbb{Z}_2^6
\end{align}
Then we discuss the decorations of these two root phases separately.
\paragraph{Majorana chain decoration}Consider Majorana chain decorations on the 1D blocks labeled by $\tau_1$, which leave 6 dangling Majorana fermions at each $\mu_1$ and 2 dangling Majorana fermions at each $\mu_2$. Near each 0D block $\mu_1$, the six dangling Majorana fermions have the following rotational symmetry properties (all subscripts are taken modulo 6):
\begin{align}
\boldsymbol{R}_{\mu_1}:~\gamma_j\mapsto\gamma_{j+1},~~j=1,...,6.
\end{align}
Then we consider the local fermion parity and its rotational symmetry property:
\begin{align}
P_f=i\prod\limits_{j=1}^6\gamma_j,~~\boldsymbol{R}_{\mu_1}:~P_f\mapsto-P_f
\end{align}
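This sign flip can be confirmed numerically (a minimal sketch; the six Majorana operators are built in our own Jordan-Wigner representation):
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
kron = lambda *m: reduce(np.kron, m)

# six Majorana operators gamma_1..gamma_6 on three fermion modes
g = [kron(X, I2, I2), kron(Y, I2, I2), kron(Z, X, I2),
     kron(Z, Y, I2), kron(Z, Z, X), kron(Z, Z, Y)]
prod = lambda ops: reduce(np.matmul, ops)

Pf = 1j * prod(g)
Pf_rot = 1j * prod(g[1:] + g[:1])   # R_{mu1}: gamma_j -> gamma_{j+1}
assert np.allclose(Pf_rot, -Pf)     # the rotation flips the local parity
print("P_f -> -P_f under R_{mu1}")
\end{verbatim}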
Thus these 6 dangling Majorana fermions form a projective representation of the symmetry group $p6m\times\mathbb{Z}_2^f$, and a non-degenerate ground state is forbidden; hence the Majorana chain decoration on the 1D blocks $\tau_1$ is obstructed because of the violation of the no-open-edge condition. On $\tau_2$, the Majorana chain decoration leaves 6 dangling Majorana fermions at each $\mu_1$ and 3 dangling Majorana fermions at each $\mu_3$. It is well known that an odd number of Majorana fermions cannot be gapped out, hence the Majorana chain decoration on $\tau_2$ is obstructed. On $\tau_3$, the Majorana chain decoration leaves 2 dangling Majorana fermions at each $\mu_2$ and 3 dangling Majorana fermions at each $\mu_3$; similarly to the $\tau_2$ case, it is obstructed. Note that if we consider all 1D blocks together and decorate a Majorana chain on each, 12 dangling Majorana fermions are left at each $\mu_1$, 4 at each $\mu_2$ and 6 at each $\mu_3$. Consider the 4 Majorana fermions at each $\mu_2$, viewed as edge modes of the decorated Majorana chains, with the following rotation and reflection symmetry properties (all subscripts are taken modulo 4):
\begin{align}
\boldsymbol{R}_{\mu_2}:~\gamma_j'\mapsto\gamma_{j+2}',~~\boldsymbol{M}_{\tau_3}:~\gamma_j'\mapsto\gamma_{6-j}'
\end{align}
Then we consider the local fermion parity and its rotation and reflection symmetry properties:
\begin{align}
P_f'=-\prod\limits_{j=1}^4\gamma_j',~~
\left\{
\begin{aligned}
&\boldsymbol{R}_{\mu_2}:~P_f'\mapsto P_f'\\
&\boldsymbol{M}_{\tau_3}:~P_f'\mapsto-P_f'
\end{aligned}
\right.
\end{align}
Thus these Majorana fermions cannot be gapped out in a symmetric way, and the Majorana chain decoration on all 1D blocks is obstructed. As a consequence, Majorana chain decorations do not contribute any nontrivial crystalline TSC.
\paragraph{1D FSPT state decoration}The 1D FSPT state decoration on $\tau_1$ leaves 12 dangling Majorana fermions at each $\mu_1$ and 4 dangling Majorana fermions at each $\mu_2$. Similarly to the $p4m$ and $cmm$ cases, the four Majorana fermions at each $\mu_2$ form a projective representation of the $D_2$ symmetry group, and a non-degenerate ground state is forbidden. Thus the 1D FSPT state decoration on $\tau_1$ is \textit{obstructed}.
The 1D FSPT state decoration on $\tau_2$ leaves 12 dangling Majorana fermions at each $\mu_1$ and 6 dangling Majorana fermions at each $\mu_3$. Consider the Majorana fermions at each $\mu_3$, with the following rotation and reflection symmetry properties (all subscripts are taken modulo 3):
\begin{align}
\left.
\begin{aligned}
\boldsymbol{R}_{\mu_3}:~&\eta_j\mapsto\eta_{j+1},~\eta_j'\mapsto\eta_{j+1}'\\
\boldsymbol{M}_{\tau_3}:~&\eta_j\mapsto-\eta_{5-j},~\eta_j'\mapsto\eta_{5-j}'
\end{aligned}
\right.,~j=1,2,3.
\end{align}
Then we consider the local fermion parity with its rotation and reflection symmetry properties:
\begin{align}
P_f^{\tau_2}=i\prod\limits_{j=1}^3\eta_j\eta_j',~~
\left\{
\begin{aligned}
&\boldsymbol{R}_{\mu_3}:~P_f^{\tau_2}\mapsto P_f^{\tau_2}\\
&\boldsymbol{M}_{\tau_3}:~P_f^{\tau_2}\mapsto-P_f^{\tau_2}
\end{aligned}
\right.
\end{align}
Hence these 6 Majorana fermions form a projective representation of the symmetry group $\mathbb{Z}_2^f\times p6m$ that cannot be gapped out in a symmetric way, and the corresponding 1D FSPT state decoration is \textit{obstructed} because of the violation of the no-open-edge condition.
The 1D FSPT state decoration on $\tau_3$ leaves 4 dangling Majorana fermions at each $\mu_2$ and 6 dangling Majorana fermions at each $\mu_3$. Similarly to the decoration on $\tau_2$, the 6 Majorana fermions at each $\mu_3$ cannot be gapped out in a symmetric way: consider the Majorana fermions as the edge modes of the decorated 1D FSPT states on $\tau_3$ at each $\mu_3$, with the following rotation and reflection symmetry properties (all subscripts are taken modulo 3):
\begin{align}
\boldsymbol{R}_{\mu_3}:~&\zeta_j\mapsto\zeta_{j+1},~\zeta_j'\mapsto\zeta_{j+1}'\\
\boldsymbol{M}_{\tau_3}:~&\zeta_j\mapsto-\zeta_{5-j},~\zeta_j'\mapsto\zeta_{5-j}'
\end{align}
with the local fermion parity and its rotation and reflection symmetry properties:
\begin{align}
P_f^{\tau_3}=i\prod\limits_{j=1}^3\zeta_j\zeta_j',~~
\left\{
\begin{aligned}
&\boldsymbol{R}_{\mu_3}:~P_f^{\tau_3}\mapsto P_f^{\tau_3}\\
&\boldsymbol{M}_{\tau_3}:~P_f^{\tau_3}\mapsto-P_f^{\tau_3}
\end{aligned}
\right.
\end{align}
Thus 1D FSPT state decoration on $\tau_3$ is \textit{obstructed}, and it does not contribute nontrivial crystalline TSC because of the violation of the no-open-edge condition.
Note that if we consider the 1D blocks labeled by $\tau_2$ and $\tau_3$ together and decorate a 1D FSPT state on each of them, this decoration leaves 12 dangling Majorana fermions at each $\mu_1$ and $\mu_3$, and 4 dangling Majorana fermions at each $\mu_2$. The Majorana fermions serving as the edge modes of the decorated 1D FSPT states at each $\mu_2$, as aforementioned, cannot be gapped out in a symmetric way; for the Majorana fermions serving as the edge modes of the decorated 1D FSPT states at each $\mu_1/\mu_3$, the local fermion parity is the product of $P_f^{\tau_2}$ and $P_f^{\tau_3}$, with the following symmetry properties:
\begin{align}
P_f''=P_f^{\tau_2}P_f^{\tau_3},~~
\left\{
\begin{aligned}
&\boldsymbol{R}_{\mu_3}:~P_f''\mapsto P_f''\\
&\boldsymbol{M}_{\tau_3}:~P_f''\mapsto P_f''
\end{aligned}
\right.
\end{align}
Hence all symmetry operations commute with the local fermion parity. Furthermore, there is no nontrivial projective representation of the $D_3$ group acting internally (identical with the on-site symmetry group $\mathbb{Z}_3\rtimes\mathbb{Z}_2$), as can be seen by calculating the following 2-cohomology of the symmetry group:
\begin{align}
\mathcal{H}^2\left[\mathbb{Z}_3\rtimes\mathbb{Z}_2,U(1)\right]=0
\end{align}
Therefore, these 12 dangling Majorana fermions form a linear representation of the symmetry group, and can be gapped out by some proper interactions in a symmetric way. Nevertheless, the four Majorana fermions at each 0D block labeled by $\mu_2$ form a projective representation of the $D_2$ symmetry group, which forbids a non-degenerate ground state, so the 1D FSPT state decoration on $\tau_2$ and $\tau_3$ is still \textit{obstructed} because of the violation of the no-open-edge condition at each 0D block $\mu_2$.
There is one exception: if we decorate a 1D FSPT phase on each 1D block (including $\tau_j$, $j=1,2,3$), the dangling Majorana fermions at each 0D block can be gapped out in a symmetric way. In the aforementioned discussions we have elucidated that at each $\mu_3$, the 12 dangling Majorana fermions from the 1D FSPT state decorations can be gapped in a symmetric way; and at each $\mu_2$, there are 8 dangling Majorana fermions which, similarly to the $p4m$ and $cmm$ cases, can be gapped out in a symmetric way because they form a linear representation of the corresponding symmetry group. Near each 0D block labeled by $\mu_1$, this decoration leaves 24 dangling Majorana fermions as the edge modes of the decorated 1D FSPT phases. Consider half of them, from the 1D FSPT state decorations on $\tau_1$, with the following rotation and reflection symmetry properties (all subscripts are taken modulo 6):
\begin{align}
\left.
\begin{aligned}
\boldsymbol{R}_{\mu_1}:~&\gamma_j\mapsto\gamma_{j+1},~\gamma_j'\mapsto\gamma_{j+1}'\\
\boldsymbol{M}_{\tau_1}:~&\gamma_j\mapsto\gamma_{8-j},~\gamma_j'\mapsto\gamma_{8-j}'
\end{aligned}
\right.
\end{align}
Then we consider the local fermion parity and its rotation and reflection symmetry properties:
\begin{align}
P_f^{\tau_1}=-\prod\limits_{j=1}^6\gamma_j\gamma_j',~~\boldsymbol{R}_{\mu_1},\boldsymbol{M}_{\tau_1}:~P_f^{\tau_1}\mapsto P_f^{\tau_1}
\end{align}
Hence arbitrary symmetry actions commute with the fermion parity of these 12 Majorana fermions, and they form either a linear or a projective representation of the $D_6$ symmetry. Similar arguments hold for the other 12 Majorana fermions. We should note that there is only one nontrivial projective representation of the $D_6$ symmetry group acting internally (i.e., the $\mathbb{Z}_6\rtimes\mathbb{Z}_2$ on-site symmetry), as can easily be verified by the following 2-cohomology:
\begin{align}
\mathcal{H}^2\left[\mathbb{Z}_6\rtimes\mathbb{Z}_2,U(1)\right]=\mathbb{Z}_2
\end{align}
So these 24 Majorana fermions together form a linear representation of the $D_6$ symmetry at each 0D block labeled by $\mu_1$, and they can be gapped out in a symmetric way. Thus the 1D FSPT state decoration on all 1D blocks simultaneously is \textit{obstruction-free}, and all obstruction-free 1D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p6m,0}^{\mathrm{1D}}=\mathbb{Z}_2
\end{align}
and the group elements can be labeled by $n_1=n_2=n_3$. Here $n_j=0,1$ ($j=1,2,3$) represents the number of decorated 1D FSPT states on $\tau_j$, respectively. According to the aforementioned discussions, a necessary condition for an obstruction-free block-state is $n_1=n_2=n_3$.
So far we have already obtained all obstruction-free block-states, and they form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p6m,0}&=\{\mathrm{OFBS}\}_{p6m,0}^{\mathrm{1D}}\times\{\mathrm{OFBS}\}_{p6m,0}^{\mathrm{0D}}\nonumber\\
&=\mathbb{Z}_2\times\mathbb{Z}_2^8=\mathbb{Z}_2^{9}
\end{align}
With all obstruction-free block-states in hand, we subsequently discuss all possible trivializations. First we consider the 2D bubble equivalence: similarly to the $p4m$ case, the ``Majorana bubbles'' can be deformed to double Majorana chains at each nearby 1D block, and this is exactly the definition of the nontrivial 1D FSPT phase protected by the on-site $\mathbb{Z}_2$ symmetry (reflection symmetry acting internally). As a consequence, the 1D FSPT state decorations on all 1D blocks can be deformed to a trivial state via 2D ``Majorana'' bubble equivalences. Furthermore, again as in the $p4m$ case, the ``Majorana bubble'' construction has no effect on the 0D blocks.
Subsequently we consider the 1D bubble equivalences. For example, on each 1D block labeled by $\tau_2$, we decorate a pair of complex fermions [cf. Eq. (\ref{1D bubble})]: Near each 0D block labeled by $\mu_1$, there are 6 complex fermions which form an atomic insulator with even fermion parity:
\begin{align}
|\psi\rangle_{p6m}^{\mu_1}=\prod\limits_{j=1}^6c_j^\dag|0\rangle
\end{align}
hence $|\psi\rangle_{p6m}^{\mu_1}$ cannot change the fermion parity of the 0D block labeled by $\mu_1$; Near each 0D block labeled by $\mu_3$, there are 3 complex fermions which form another atomic insulator with odd fermion parity:
\begin{align}
|\psi\rangle_{p6m}^{\mu_3}=c_1'^\dag c_2'^\dag c_3'^\dag|0\rangle
\end{align}
and it can change the fermion parity at each 0D block labeled by $\mu_3$. Then we consider the symmetry properties of these atomic insulators: the eigenvalues of $|\psi\rangle_{p6m}^{\mu_1}$ at $\mu_1$ under the two independent reflection operations are:
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\psi\rangle_{p6m}^{\mu_1}=c_6^\dag c_5^\dag c_4^\dag c_3^\dag c_2^\dag c_1^\dag|0\rangle=-|\psi\rangle_{p6m}^{\mu_1}\\
&\boldsymbol{M}_{\tau_2}|\psi\rangle_{p6m}^{\mu_1}=c_1^\dag c_6^\dag c_5^\dag c_4^\dag c_3^\dag c_2^\dag|0\rangle=|\psi\rangle_{p6m}^{\mu_1}
\end{aligned}
\end{align}
i.e., the 1D bubble construction on $\tau_2$ changes the eigenvalue of $\boldsymbol{M}_{\tau_1}$ at $\mu_1$ and leaves the eigenvalue of $\boldsymbol{M}_{\tau_2}$ invariant. The eigenvalue of $|\psi\rangle_{p6m}^{\mu_3}$ at $\mu_3$ under the reflection $\boldsymbol{M}_{\tau_2}$ is:
\begin{align}
\boldsymbol{M}_{\tau_2}|\psi\rangle_{p6m}^{\mu_3}=c_1'^\dag c_3'^\dag c_2'^\dag|0\rangle=-|\psi\rangle_{p6m}^{\mu_3}
\end{align}
i.e., the 1D bubble construction on $\tau_2$ also changes the eigenvalue of $\boldsymbol{M}_{\tau_2}$ at $\mu_3$. Similar 1D bubble constructions can be performed on the other 1D blocks, and we summarize the effects of the 1D bubble constructions as follows (the permutation signs entering this list are checked in a short sketch after it):
\begin{enumerate}[1.]
\item 1D bubble construction on $\tau_1$: simultaneously changes the eigenvalue of $\boldsymbol{M}_{\tau_2}$ at $\mu_1$ and $\boldsymbol{M}_{\tau_3}$ at $\mu_2$;
\item 1D bubble construction on $\tau_2$: simultaneously changes the eigenvalue of $\boldsymbol{M}_{\tau_1}$ at $\mu_1$, $\boldsymbol{M}_{\tau_2}$ at $\mu_3$ and the fermion parity of $\mu_3$;
\item 1D bubble construction on $\tau_3$: simultaneously changes the eigenvalues of $\boldsymbol{M}_{\tau_1}$ at $\mu_2$, $\boldsymbol{M}_{\tau_2}$ at $\mu_3$ and the fermion parity of $\mu_3$.
\end{enumerate}
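As in the $cmm$ case, the eigenvalues entering this list reduce to signs of permutations of fermionic creation operators; a minimal sketch (the helper fermion_sign is our own):
\begin{verbatim}
from itertools import combinations

def fermion_sign(perm):
    # parity of the number of inversions in the reordering of the c^dag's
    return (-1) ** sum(p > q for p, q in combinations(perm, 2))

print(fermion_sign([5, 4, 3, 2, 1, 0]))   # M_{tau1} on |psi>^{mu1}: -1
print(fermion_sign([0, 5, 4, 3, 2, 1]))   # M_{tau2} on |psi>^{mu1}: +1
print(fermion_sign([0, 2, 1]))            # M_{tau2} on |psi>^{mu3}: -1
\end{verbatim}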
There is another type of 1D bubble construction on $\tau_2$ and $\tau_3$ (we denote the above 1D bubble construction as ``type-\text{\uppercase\expandafter{\romannumeral1}}'' and this one as ``type-\text{\uppercase\expandafter{\romannumeral2}}''): we decorate a 1D bubble [cf. Eq. (\ref{1D bubble})] on each $\tau_2$, where now both yellow and red dots represent the 0D FSPT mode with reflection eigenvalue $-1$. Near each $\mu_1$, there are six 0D FSPT modes with reflection eigenvalue $-1$, which change nothing; near each $\mu_3$, there are three 0D FSPT modes with reflection eigenvalue $-1$, which change the reflection eigenvalue at $\mu_3$. Similarly for the type-\text{\uppercase\expandafter{\romannumeral2}} 1D bubble constructions on $\tau_3$.
With all possible bubble constructions, we are ready to investigate the trivial states. Start from the original trivial 0D block-state (nothing is decorated on any 0D block):
\[
[(+,+,+),(+,+,+),(+,+)]
\]
if we take the type-\text{\uppercase\expandafter{\romannumeral1}} 1D bubble constructions on $\tau_j$ $n_j$ times ($j=1,2,3$), and the type-\text{\uppercase\expandafter{\romannumeral2}} 1D bubble constructions on $\tau_2$ and $\tau_3$ $n_2'$ and $n_3'$ times, the above trivial state will be deformed to a new block-state labeled by:
\begin{align}
&\left[(+,(-1)^{n_2},(-1)^{n_1}),(+,(-1)^{n_3},(-1)^{n_1}),\right.\nonumber\\
&\left.((-1)^{n_2'+n_3'},(-1)^{n_2+n_3})\right]
\label{p6m spinless trivial state}
\end{align}
According to the definition of bubble equivalence, all these states should be trivial. Alternatively, all 0D block-states can be viewed as vectors of an 8-dimensional $\mathbb{Z}_2$-valued vector space $V$, and all trivial 0D block-states with the form as Eq. (\ref{p6m spinless trivial state}) can be viewed as vectors of the subspace of $V$. The dimensionality of this subspace can be determined by calculating the rank of the following transformation matrix:
\begin{align}
\mathrm{rank}\left(
\begin{matrix}
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{matrix}
\right)=4
\end{align}
Here the five rows of this matrix represent the five bubble constructions ($n_1$, $n_2$, $n_3$, $n_2'$ and $n_3'$, respectively). Hence the dimensionality of the vector subspace containing all trivial 0D block-states is 4. Together with the 2D bubble equivalence, all trivial states form the group:
\begin{align}
\{\mathrm{TBS}\}_{p6m,0}&=\{\mathrm{TBS}\}_{p6m,0}^{\mathrm{1D}}\times\{\mathrm{TBS}\}_{p6m,0}^{\mathrm{0D}}\nonumber\\
&=\mathbb{Z}_2\times\mathbb{Z}_2^4=\mathbb{Z}_2^5
\end{align}
here $\{\mathrm{TBS}\}_{p6m,0}^{\mathrm{1D}}$ represents the group of trivial states with non-vacuum 1D blocks (i.e., 1D FSPT phase decorations on all 1D blocks simultaneously), and $\{\mathrm{TBS}\}_{p6m,0}^{\mathrm{0D}}$ represents the group of trivial states with non-vacuum 0D blocks.
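The rank computation above can be reproduced directly over $\mathbb{Z}_2$; a minimal sketch (rows ordered as the bubble constructions $n_1$, $n_2$, $n_3$, $n_2'$, $n_3'$):
\begin{verbatim}
import numpy as np

M = np.array([[0,0,1,0,0,1,0,0],
              [0,1,0,0,0,0,0,1],
              [0,0,0,0,1,0,0,1],
              [0,0,0,0,0,0,1,0],
              [0,0,0,0,0,0,1,0]], dtype=np.uint8)

def gf2_rank(m):
    m, r = m.copy(), 0
    for c in range(m.shape[1]):
        piv = next((i for i in range(r, m.shape[0]) if m[i, c]), None)
        if piv is None:
            continue
        m[[r, piv]] = m[[piv, r]]
        for i in range(m.shape[0]):
            if i != r and m[i, c]:
                m[i] ^= m[r]
        r += 1
    return r

print(gf2_rank(M))   # 4
\end{verbatim}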
Therefore, all independent nontrivial block-states are labeled by the group elements of the following quotient group:
\begin{align}
E_{p6m,0}&=\{\mathrm{OFBS}\}_{p6m,0}/\{\mathrm{TBS}\}_{p6m,0}\nonumber\\
&=\mathbb{Z}_2^9/\mathbb{Z}_2^5=\mathbb{Z}_2^4
\end{align}
here all $\mathbb{Z}_2$'s are from the nontrivial 0D block-states. It is obvious that there is no nontrivial group extension because of the absence of a nontrivial 1D block-state, so the group structure of $E_{p6m,0}$ is already correct.
Now we turn to discuss systems with spin-1/2 fermions. Consider the 0D block-state decorations; similarly to the $p4m$ case, the classification data can also be characterized by different 1D irreducible representations of the corresponding symmetry groups:
\begin{align}
\left.
\begin{aligned}
&\mathcal{H}^1\left[\mathbb{Z}_2^f\times_{\omega_2}(\mathbb{Z}_6\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^2\\
&\mathcal{H}^1\left[\mathbb{Z}_2^f\times_{\omega_2}(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^2\\
&\mathcal{H}^1\left[\mathbb{Z}_2^f\times_{\omega_2}(\mathbb{Z}_3\rtimes\mathbb{Z}_2),U(1)\right]=\mathbb{Z}_2^2
\end{aligned}
\right.
\end{align}
For the $D_6$ and $D_2$ centers, the physical meanings of the two $\mathbb{Z}_2$'s in the classification data are the rotation and reflection eigenvalues, respectively. Furthermore, the group structure of the classification of 0D FSPT phases protected by the $\mathbb{Z}_3\rtimes\mathbb{Z}_2$ on-site symmetry for systems with spin-1/2 fermions is $\mathbb{Z}_4$. Equivalently, we can label different 0D block-states by the group elements of the 4-fold cyclic group:
\begin{align}
\mathbb{Z}_4=\left\{1,i,-1,-i\right\}
\label{Z4}
\end{align}
So the 0D block-states at $\mu_1$ and $\mu_2$ can be labeled by $(\pm,\pm)$, here these two $\pm$'s represent the 2-fold rotation and reflection symmetry eigenvalues (alternatively, they can also represent the eigenvalues of two independent reflection operations because even-fold dihedral group can also be generated by two independent reflections); the 0D block-states at $\mu_3$ can be labeled by $\nu\in\left\{1,i,-1,-i\right\}$ as the eigenvalues of $\mathbb{Z}_4^f$ symmetry. According to this notation, all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p6m,1/2}^{\mathrm{0D}}=\mathbb{Z}_2^4\times\mathbb{Z}_4
\end{align}
and the group elements can be labeled by (three brackets represent the block-states at $\mu_1$, $\mu_2$ and $\mu_3$):
\[
[(\pm,\pm),(\pm,\pm),\nu]
\]
Then we investigate the possible trivializations. Consider the 1D bubble equivalence on the 1D blocks labeled by $\tau_2$: on each $\tau_2$, the total on-site symmetry is $\mathbb{Z}_4^f$, the nontrivial $\mathbb{Z}_2^f$ extension of the on-site symmetry $\mathbb{Z}_2$. We decorate a 1D bubble [cf. Eq. (\ref{1D bubble})] onto each of them, where the yellow/red dots represent the 0D FSPT modes protected by the $\mathbb{Z}_4^f$ symmetry, labeled by $i$ and $-i\in\mathbb{Z}_4$ [cf. Eq. (\ref{Z4})], and they can be trivialized if they shrink to a point. Near each 0D block labeled by $\mu_3$, there are three 0D FSPT modes labeled by $i\in\mathbb{Z}_4$, and they change the label of the 0D block-state decorated at each 0D block $\mu_3$ by $-i\in\mathbb{Z}_4$. Therefore, the 0D block-state on each $\mu_3$ can be trivialized by this bubble construction. Near each 0D block $\mu_1$, this 1D bubble construction changes nothing because there is no $\mathbb{Z}_4^f$ on-site symmetry on $\mu_1$. A similar 1D bubble construction can be performed on $\tau_3$.
With all possible bubble constructions, we are ready to investigate the trivial states. Start from the original trivial state (nothing decorated on arbitrary 0D block):
\[
[(+,+),(+,+),1]
\]
if we take the aforementioned 1D bubble constructions on $\tau_2$ and $\tau_3$ $n_2$ and $n_3$ times, this trivial state will be deformed to a new 0D block-state labeled by:
\begin{align}
[(+,+),(+,+),(-i)^{3(n_2+n_3)}]
\label{p6m spin-1/2 trivial state}
\end{align}
According to the definition of bubble equivalence, all these states should be trivial and all trivial states form the group:
\begin{align}
\{\mathrm{TBS}\}_{p6m,1/2}^{\mathrm{0D}}=\mathbb{Z}_4
\end{align}
Therefore, all independent nontrivial 0D block-states are labeled by different group elements of the following quotient group:
\begin{align}
E_{p6m,1/2}^{\mathrm{0D}}=\{\mathrm{OFBS}\}_{p6m,1/2}^{\mathrm{0D}}/\{\mathrm{TBS}\}_{p6m,1/2}^{\mathrm{0D}}=\mathbb{Z}_2^4
\end{align}
Subsequently we consider the 1D block-state decoration. For arbitrary 1D blocks, the total on-site symmetry on them is $\mathbb{Z}_4^f$: nontrivial $\mathbb{Z}_2^f$ extension of $\mathbb{Z}_2$ on-site symmetry, hence there is no nontrivial 1D block-state due to the trivial classification of the corresponding 1D FSPT phases, and the classification attributed to 1D block-state decorations is trivial:
\begin{align}
E_{p6m,1/2}^{\mathrm{1D}}=\{\mathrm{OFBS}\}_{p6m,1/2}^{\mathrm{1D}}=0
\end{align}
Therefore, it is obvious that there is no stacking between 1D and 0D block-states, and the ultimate classification with accurate group structure is:
\begin{align}
\mathcal{G}_{p6m,1/2}=\mathbb{Z}_2^4
\end{align}
\section{Construction and classification of crystalline TI\label{insulator}}
So far we have discussed the construction and classification of crystalline TSC in 2D interacting fermionic systems.
In this section, we will discuss the crystalline TI with additional $U^f(1)$ charge conservation
by generalizing the real-space construction highlighted in Sec. \ref{general}. We demonstrate that 1D block-state decorations have no contribution, and that all nontrivial crystalline TIs in 2D interacting fermionic systems can be constructed by 0D block-state decorations.
For 1D blocks, there are two different cases: symmetry groups with or without a reflection symmetry operation. Since bosonic and fermionic systems can be mapped to each other by the Jordan-Wigner transformation, the classification data of 1D SPT phases for bosonic and fermionic systems are identical: both are obtained by calculating the different projective representations of the symmetry group. (However, the group structure of the classification data could be different in general, as the stacking operation has different physical meanings for bosonic and fermionic systems.)
For symmetry groups without a reflection symmetry operation, the on-site symmetry group of an arbitrary 1D block is $U^f(1)$ charge conservation only, and the corresponding classification for 2D systems with spinless/spin-1/2 fermions can be calculated by the following group cohomology:
\begin{align}
\mathcal{H}^2[U^f(1),U(1)]=0
\end{align}
Thus, there is no nontrivial 1D block-state for this case.
For symmetry groups with a reflection symmetry operation, the on-site symmetry group of some 1D blocks is $U^f(1)$ charge conservation together with a $\mathbb{Z}_2$ symmetry from the reflection symmetry acting internally. The corresponding classification for 2D systems with spinless/spin-1/2 fermions can be calculated by the following group cohomology:
\begin{align}
\mathcal{H}^2[U^f(1)\times\mathbb{Z}_2,U(1)]=0
\end{align}
Again, there is also no nontrivial 1D block-state for this case.
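Both vanishing results can be traced to the topology of the corresponding classifying spaces. Assuming Borel group cohomology, for which $\mathcal{H}^n[G,U(1)]\cong H^{n+1}(BG;\mathbb{Z})$, we have the following sketch:
\begin{align}
\mathcal{H}^2[U^f(1),U(1)]&\cong H^3(\mathbb{CP}^\infty;\mathbb{Z})=0\nonumber\\
\mathcal{H}^2[U^f(1)\times\mathbb{Z}_2,U(1)]&\cong H^3(\mathbb{CP}^\infty\times B\mathbb{Z}_2;\mathbb{Z})=0
\end{align}
where the second line uses the K\"unneth formula, the freeness of $H^*(\mathbb{CP}^\infty;\mathbb{Z})$ (so all Tor terms vanish), and the vanishing of the odd-degree integral cohomology of both $\mathbb{CP}^\infty$ and $B\mathbb{Z}_2$.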
Below we will again study five representative cases belonging to different crystallographic systems:
\begin{enumerate}
\item square lattice: $p4m$;
\item parallelogrammatic lattice: $p2$;
\item rhombic lattice: $cmm$;
\item rectangle lattice: $pgg$;
\item hexagonal lattice: $p6m$.
\end{enumerate}
All other cases are presented in the Supplementary Materials \cite{supplementary}. The classification results are summarized in Table \ref{insulator U(1)}.
\subsection{Square lattice: $p4m$}
Again, we begin with the cell decomposition of $p4m$ as illustrated in Fig. \ref{p4m}. For 0D blocks labeled by $\mu_1$ and $\mu_3$, different 0D block-states are characterized by different irreducible representations of the corresponding on-site symmetry group:
\begin{align}
\mathcal{H}^1[U^f(1)\times(\mathbb{Z}_4\rtimes\mathbb{Z}_2),U(1)]=\mathbb{Z}\times\mathbb{Z}_2^2
\end{align}
Here $\mathbb{Z}$ represents the $U^f(1)$ charge carried by the complex fermion, the first $\mathbb{Z}_2$ represents the rotation eigenvalue $-1$, and the second $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$. Similarly, for 0D blocks labeled by $\mu_2$, different 0D block-states are characterized by different irreducible representations of the corresponding on-site symmetry group:
\begin{align}
\mathcal{H}^1[U^f(1)\times(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)]=\mathbb{Z}\times\mathbb{Z}_2^2
\end{align}
Again, here $\mathbb{Z}$ represents the $U^f(1)$ charge carried by the complex fermion, the first $\mathbb{Z}_2$ represents the rotation eigenvalue $-1$, and the second $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$. Therefore, for systems with spinless fermions, the 0D block-states at $\mu_j$ ($j=1,2,3$) can be labeled by $(n_j,\pm,\pm)$, where $n_j\in\mathbb{Z}$ represents the $U^f(1)$ charge carried by the complex fermions decorated on $\mu_j$ and the two $\pm$'s represent the eigenvalues of two independent reflection operations (because an even-fold dihedral group can also be generated by two independent reflections). According to this notation, the obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p4m,0}^{U(1)}=\mathbb{Z}^3\times\mathbb{Z}_2^6
\end{align}
and the group elements can be labeled by (three brackets represent the block-states at $\mu_1$, $\mu_2$ and $\mu_3$):
\begin{align}
[(n_1,\pm,\pm),(n_2,\pm,\pm),(n_3,\pm,\pm)]
\end{align}
Nevertheless, this is not the final classification data, and we must further consider possible trivializations. For systems with spinless fermions, we first consider the 1D bubble equivalence on the 1D blocks labeled by $\tau_1$: we decorate a 1D ``particle-hole'' bubble [cf. Eq. (\ref{1D bubble}); here the yellow and red dots represent a particle and a hole with opposite $U^f(1)$ charges, respectively] on each $\tau_1$, which can be trivialized if we shrink it to a point. Near each 0D block labeled by $\mu_1$, there are four particles forming the following atomic insulator:
\begin{align}
|\phi\rangle_{p4m}^{\mu_1}=p_1^\dag p_2^\dag p_3^\dag p_4^\dag|0\rangle
\end{align}
with eigenvalues of independent reflection operations:
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\phi\rangle_{p4m}^{\mu_1}=p_1^\dag p_4^\dag p_3^\dag p_2^\dag|0\rangle=-|\phi\rangle_{p4m}^{\mu_1}\\
&\boldsymbol{M}_{\tau_3}|\phi\rangle_{p4m}^{\mu_1}=p_3^\dag p_4^\dag p_1^\dag p_2^\dag|0\rangle=|\phi\rangle_{p4m}^{\mu_1}
\end{aligned}
\end{align}
i.e., the eigenvalue $-1$ of the reflection $\boldsymbol{M}_{\tau_1}$ at each 0D block $\mu_1$ can be trivialized by the atomic insulator $|\phi\rangle_{p4m}^{\mu_1}$. Near $\mu_2$, there are two holes forming another atomic insulator:
\begin{align}
|\phi\rangle_{p4m}^{\mu_2}=h_1^\dag h_2^\dag|0\rangle
\end{align}
with eigenvalues of independent reflection operations:
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\phi\rangle_{p4m}^{\mu_2}=h_1^\dag h_2^\dag|0\rangle=|\phi\rangle_{p4m}^{\mu_2}\\
&\boldsymbol{M}_{\tau_2}|\phi\rangle_{p4m}^{\mu_2}=h_2^\dag h_1^\dag|0\rangle=-|\phi\rangle_{p4m}^{\mu_2}
\end{aligned}
\end{align}
i.e., the eigenvalue $-1$ of the reflection $\boldsymbol{M}_{\tau_2}$ at each 0D block $\mu_2$ can be trivialized by the atomic insulator $|\phi\rangle_{p4m}^{\mu_2}$. Therefore, the aforementioned 1D bubble construction shows that the rotation and reflection eigenvalues at $\mu_1$ and $\mu_2$ are not independent (they can be changed simultaneously). A similar 1D bubble construction can be performed on $\tau_2$, showing that the rotation and reflection eigenvalues of all 0D blocks $\mu_j$, $j=1,2,3$, are not independent.
Now we move to the $U^f(1)$ charge sector. As shown in Fig. \ref{p4m}, we note that within a specific unit cell, there is one 0D block labeled by $\mu_1$, one labeled by $\mu_3$, and two labeled by $\mu_2$. Consider again the aforementioned 1D bubble construction on $\tau_1$: it adds four complex fermions at each 0D block $\mu_1$ and removes two complex fermions at each 0D block $\mu_2$ (by adding two holes), hence the numbers of complex fermions at $\mu_1$ and $\mu_2$ are not independent. Similar arguments apply to the 1D blocks labeled by $\tau_2$ and render the complex fermion decorations on the 0D blocks $\mu_2$ and $\mu_3$ non-independent.
With the help of the above discussions, we consider the 1D bubble equivalence. Start from the trivial state:
\begin{align}
[(0,+,+),(0,+,+),(0,+,+)]
\label{p4m original trivial state}
\end{align}
Taking the aforementioned 1D bubble constructions on $\tau_j$ $n_j\in\mathbb{Z}$ times leads to a new 0D block-state labeled by:
\begin{align}
&\left[\left(4n_1+4n_3,(-1)^{n_1},(-1)^{n_3}\right),\right.\nonumber\\
&\left(-2n_1+2n_2,(-1)^{n_2},(-1)^{n_1}\right),\nonumber\\
&\left.\left(-4n_2-4n_3,(-1)^{n_2},(-1)^{n_3}\right)\right]
\label{p4m trivial state}
\end{align}
This state should be trivial according to the definition of bubble equivalence. Alternatively, all 0D block-states can be viewed as vectors of a 9-dimensional vector space $V$, where the complex fermion components are $\mathbb{Z}$-valued and all other components, attributed to rotation and reflection eigenvalues, are $\mathbb{Z}_2$-valued. Then all trivial 0D block-states of the form of Eq. (\ref{p4m trivial state}) can be viewed as a vector subspace $V'$ of $V$. It is easy to see that there are only three independent quantities in Eq. (\ref{p4m trivial state}): $n_1$, $n_2$ and $n_3$, so the dimensionality of the vector subspace $V'$ should be 3. For the $U^f(1)$ charge sector, we have the following relationship:
\begin{align}
-(4n_1+4n_3)-2(-2n_1+2n_2)=-4n_2-4n_3
\end{align}
i.e., there are only two independent quantities, which serve as a $2\mathbb{Z}\times4\mathbb{Z}$ trivialization. The remaining one degree of freedom of the vector subspace $V'$ should be attributed to the eigenvalues of the point group symmetry action, and serves as a $\mathbb{Z}_2$ trivialization. Therefore, all trivial states of the form shown in Eq. (\ref{p4m trivial state}) compose the following group:
\begin{align}
\{\mathrm{TBS}\}_{p4m,0}^{U(1)}=2\mathbb{Z}\times4\mathbb{Z}\times\mathbb{Z}_2
\end{align}
and different independent nontrivial 0D block-states can be labeled by different group elements of the following quotient group:
\begin{align}
\mathcal{G}_{p4m,0}^{U(1)}&=\{\mathrm{OFBS}\}_{p4m,0}^{U(1)}/\{\mathrm{TBS}\}_{p4m,0}^{U(1)}\nonumber\\
&=\mathbb{Z}\times\mathbb{Z}_4\times\mathbb{Z}_2^6
\end{align}
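The charge-sector part of this quotient can be cross-checked by elementary lattice arithmetic. The following short Python sketch (illustrative only and not part of the derivation; it assumes SymPy and its \texttt{invariant\_factors} helper are available) computes the invariant factors of the sublattice of $\mathbb{Z}^3$ spanned by the charge vectors appearing in Eq. (\ref{p4m trivial state}):
\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import invariant_factors

# Charge vectors added to (mu_1, mu_2, mu_3) by a single 1D bubble on
# tau_1 and tau_2; the tau_3 vector (4, 0, -4) is their sum, hence redundant.
L = Matrix([[4, -2, 0],
            [0, 2, -4]])
print(invariant_factors(L, domain=ZZ))  # invariant factors: (2, 4)
\end{verbatim}
The invariant factors $(2,4)$ reproduce the $2\mathbb{Z}\times4\mathbb{Z}$ trivialization: $\mathbb{Z}^3$ quotients to $\mathbb{Z}\times\mathbb{Z}_2\times\mathbb{Z}_4$, and together with $\mathbb{Z}_2^6/\mathbb{Z}_2=\mathbb{Z}_2^5$ from the eigenvalue sector this reproduces the quotient group above.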
For systems with spin-1/2 fermions, the rotation and reflection properties of $|\phi\rangle_{p4m}^{\mu_1}$ and $|\phi\rangle_{p4m}^{\mu_2}$ at $\mu_1$ and $\mu_2$ are each changed by an additional $-1$, which leads to no trivialization. Furthermore, like the cases without $U^f(1)$ charge conservation, the classification data of the corresponding 0D block-states can be characterized by different irreducible representations of the corresponding on-site symmetry group ($n=2,4$):
\begin{align}
\mathcal{H}^1[U^f(1)\rtimes_{\omega_2}(\mathbb{Z}_n\rtimes\mathbb{Z}_2),U(1)]=2\mathbb{Z}\times\mathbb{Z}_2^2
\end{align}
To calculate this classification data, we first calculate the following two cohomologies \cite{supplementary}:
\begin{align}
\left\{
\begin{aligned}
&n_0\in\mathcal{H}^0(\mathbb{Z}_n\rtimes\mathbb{Z}_2,\mathbb{Z})=\mathbb{Z}\\
&n_1\in\mathcal{H}^1\left[\mathbb{Z}_n\rtimes\mathbb{Z}_2,U(1)\right]=\mathbb{Z}_2^2
\end{aligned}
\right.
\end{align}
Here $\mathbb{Z}$ represents the $U^f(1)$ charge carried by complex fermions, and the two $\mathbb{Z}_2$'s represent the rotation and reflection eigenvalues. We demonstrate that an odd $U^f(1)$ charge at each 0D block is not allowed: a specific $n_0$ is obstructed if and only if $(-1)^{\omega_2\smile n_0}\in\mathcal{H}^2[\mathbb{Z}_n\rtimes\mathbb{Z}_2,U(1)]$ is a nontrivial 2-cocycle with $U(1)$-coefficient. From Refs. \onlinecite{general2} and \onlinecite{dihedral} we know that for cases without $U(1)$ charge conservation, the nontrivial 0-cocycle $n_0=1$, $n_0\in\mathcal{H}^0(\mathbb{Z}_n\rtimes\mathbb{Z}_2,\mathbb{Z}_2)$, leads to a nontrivial 2-cocycle $(-1)^{\omega_2\smile n_0}\in\mathcal{H}^2[\mathbb{Z}_n\rtimes\mathbb{Z}_2,U(1)]$. So for $U(1)$ charge conserved cases, odd $n_0\in\mathcal{H}^0(\mathbb{Z}_n\rtimes\mathbb{Z}_2,\mathbb{Z})$ leads to a nontrivial 2-cocycle $(-1)^{\omega_2\smile n_0}\in\mathcal{H}^2[\mathbb{Z}_n\rtimes\mathbb{Z}_2,U(1)]$. As a consequence, for systems with spin-1/2 fermions, we can only decorate an even number of complex fermions on each 0D block, and all obstruction-free block-states form a group:
\begin{align}
\{\mathrm{OFBS}\}_{p4m,1/2}^{U(1)}=(2\mathbb{Z})^3\times\mathbb{Z}_2^6
\end{align}
Then we consider possible trivializations via 1D bubble constructions. Consider again the aforementioned ``particle-hole'' bubble; for spin-1/2 fermions the above atomic insulators have alternative symmetry properties:
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\phi\rangle_{p4m}^{\mu_1}=-p_1^\dag p_4^\dag p_3^\dag p_2^\dag|0\rangle=|\phi\rangle_{p4m}^{\mu_1}\\
&\boldsymbol{M}_{\tau_3}|\phi\rangle_{p4m}^{\mu_1}=p_3^\dag p_4^\dag p_1^\dag p_2^\dag|0\rangle=|\phi\rangle_{p4m}^{\mu_1}
\end{aligned}
\end{align}
and
\begin{align}
\begin{aligned}
&\boldsymbol{M}_{\tau_1}|\phi\rangle_{p4m}^{\mu_2}=h_1^\dag h_2^\dag|0\rangle=|\phi\rangle_{p4m}^{\mu_2}\\
&\boldsymbol{M}_{\tau_2}|\phi\rangle_{p4m}^{\mu_2}=-h_2^\dag h_1^\dag|0\rangle=|\phi\rangle_{p4m}^{\mu_2}
\end{aligned}
\end{align}
i.e., the atomic insulators $|\phi\rangle_{p4m}^{\mu_1}$ and $|\phi\rangle_{p4m}^{\mu_2}$ do not change the eigenvalues of any symmetry. The discussion of the $U^f(1)$ charge sector is identical, so again we start from the original trivial state (\ref{p4m original trivial state}); taking the above 1D bubble constructions on $\tau_j$ $n_j\in\mathbb{Z}$ times leads to a new 0D block-state labeled by:
\begin{align}
&\left[\left(4n_1+4n_3,0,0\right),\right.\nonumber\\
&\left(-2n_1+2n_2,0,0\right),\nonumber\\
&\left.\left(-4n_2-4n_3,0,0\right)\right]
\label{p4m U(1) spin-1/2 trivial state}
\end{align}
Similar to the spinless case, all states of the form (\ref{p4m U(1) spin-1/2 trivial state}) are trivial and form the following group:
\begin{align}
\{\mathrm{TBS}\}_{p4m,1/2}^{U(1)}=2\mathbb{Z}\times4\mathbb{Z}
\end{align}
because the $U^f(1)$ charge sector for systems with spinless and spin-1/2 fermions is identical. Different independent nontrivial 0D block-states can be labeled by different group elements of the following quotient group:
\begin{align}
\mathcal{G}_{p4m,1/2}^{U(1)}&=\{\mathrm{OFBS}\}_{p4m,1/2}^{U(1)}/\{\mathrm{TBS}\}_{p4m,1/2}^{U(1)}\nonumber\\
&=2\mathbb{Z}\times\mathbb{Z}_2^7
\end{align}
Here $2\mathbb{Z}$ means that we can only decorate an even number of complex fermions on each 0D block.
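The same cross-check applies to the spin-1/2 case, where only even charges appear. A short SymPy sketch (same illustrative assumptions as above), measuring all charges in units of two:
\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import invariant_factors

# Bubble vectors of the spinless case divided by two, because only an even
# number of complex fermions is obstruction-free on each 0D block.
L_half = Matrix([[2, -1, 0],
                 [0, 1, -2]])
print(invariant_factors(L_half, domain=ZZ))  # invariant factors: (1, 2)
\end{verbatim}
In these units $\mathbb{Z}^3$ quotients to $\mathbb{Z}\times\mathbb{Z}_2$; restoring the physical units gives the free part $2\mathbb{Z}$ and one extra $\mathbb{Z}_2$, consistent with $2\mathbb{Z}\times\mathbb{Z}_2^7$ above.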
\subsection{Parallelogrammatic lattice: $p2$}
Consider again the cell decomposition of $p2$ as illustrated in Fig. \ref{p2}. For arbitrary 0D blocks, different 0D block-states are characterized by different irreducible representations of the symmetry group as:
\begin{align}
\mathcal{H}^1[U^f(1)\times\mathbb{Z}_2,U(1)]=\mathbb{Z}\times\mathbb{Z}_2
\end{align}
Here $\mathbb{Z}$ represents the complex fermion and $\mathbb{Z}_2$ represents the rotation eigenvalue. So the 0D block-states at $\mu_j$ ($j=1,2,3,4$) can be labeled by $(n_j,\pm)$, where $n_j\in\mathbb{Z}$ represents the number of complex fermions decorated on $\mu_j$ and $\pm$ represents the eigenvalue of the 2-fold rotation operation. According to this notation, all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p2,0}^{U(1)}=\mathbb{Z}^4\times\mathbb{Z}_2^4
\end{align}
We should further consider possible trivializations: for systems with spinless fermions, consider the 1D bubble equivalence on 1D blocks labeled by $\tau_1$: we decorate a 1D ``particle-hole'' bubble [cf. Eq. (\ref{1D bubble}), here yellow and red dots represent particle and hole, respectively] on each $\tau_1$, and they can be trivialized if we shrink them to a point. Near each 0D block labeled by $\mu_1$, there are two particles forming the following atomic insulator:
\begin{align}
|\xi\rangle_{p2}^{\mu_1}=p_1^\dag p_2^\dag|0\rangle
\end{align}
with following rotation property:
\begin{align}
\boldsymbol{R}_{\mu_1}|\xi\rangle_{p2}^{\mu_1}=p_2^\dag p_1^\dag|0\rangle=-|\xi\rangle_{p2}^{\mu_1}
\end{align}
i.e., rotation eigenvalue $-1$ at each 0D block $\mu_1$ can be trivialized by atomic insulator $|\xi\rangle_{p2}^{\mu_1}$. Near $\mu_2$, there are two holes forming another atomic insulator:
\begin{align}
|\xi\rangle_{p2}^{\mu_2}=h_1^\dag h_2^\dag|0\rangle
\end{align}
with following rotation property:
\begin{align}
\boldsymbol{R}_{\mu_2}|\xi\rangle_{p2}^{\mu_2}=h_2^\dag h_1^\dag|0\rangle=-|\xi\rangle_{p2}^{\mu_2}
\end{align}
i.e., the rotation eigenvalue $-1$ at each 0D block $\mu_2$ can be trivialized by the atomic insulator $|\xi\rangle_{p2}^{\mu_2}$. Therefore, the aforementioned 1D bubble construction renders the rotation eigenvalues at $\mu_1$ and $\mu_2$ non-independent (they can be changed simultaneously).
Now we move to the $U^f(1)$ charge sector. Consider again the aforementioned 1D bubble construction on $\tau_1$: it adds two complex fermions at each 0D block $\mu_1$ and removes two complex fermions at each 0D block $\mu_2$ (by adding two holes), hence the numbers of complex fermions at $\mu_1$ and $\mu_2$ are not independent. More specifically, suppose there are $a$ complex fermions on each $\mu_1$, $b$ complex fermions on each $\mu_2$, and the total number of complex fermions on $\mu_1$ and $\mu_2$ within a certain unit cell is $c=a+b$. Applying the above manipulation $n$ times ($n\in\mathbb{Z}$), the number of complex fermions on each $\mu_1/\mu_2$ is $a+2n/b-2n$, and the total number of complex fermions on $\mu_1$ and $\mu_2$ remains invariant. So for a specific $c$, there are only two independent cases: $c=a+b$ and $c=(a+1)+(b-1)$. It is similar for the other 1D blocks, and we summarize the effects of all possible 1D bubble constructions:
\begin{enumerate}
\item 1D bubble construction on $\tau_1$: Add two complex fermions on $\mu_1$, eliminate two complex fermions on $\mu_2$, and simultaneously change the rotation eigenvalues of $\mu_1$ and $\mu_2$;
\item 1D bubble construction on $\tau_2$: Add two complex fermions on $\mu_1$, eliminate two complex fermions on $\mu_3$, and simultaneously change the rotation eigenvalues of $\mu_1$ and $\mu_3$;
\item 1D bubble construction on $\tau_3$: Add two complex fermions on $\mu_2$, eliminate two complex fermions on $\mu_4$, and simultaneously change the rotation eigenvalues of $\mu_2$ and $\mu_4$;
\end{enumerate}
With the help of the above discussions, we consider the 1D bubble equivalence. Start from the original trivial state (nothing is decorated on any block):
\begin{align}
[(0,+),(0,+),(0,+),(0,+)]
\label{p2 original trivial state}
\end{align}
Taking the aforementioned 1D bubble constructions on $\tau_j$ $n_j\in\mathbb{Z}$ times ($j=1,2,3$) deforms this trivial state to a new 0D block-state labeled by:
\begin{align}
&\left[(2n_1+2n_2,(-1)^{n_1+n_2}),(-2n_1+2n_3,(-1)^{n_1+n_3}),\right.\nonumber\\
&\left.(-2n_2,(-1)^{n_2}),(-2n_3,(-1)^{n_3})\right]
\label{p2 U(1) spinless trivial state}
\end{align}
According to the definition of bubble equivalence, this state should be trivial. Alternatively, all 0D block-states can be viewed as vectors of an 8-dimensional vector space $V$, where the complex fermion components are $\mathbb{Z}$-valued, and all other components are $\mathbb{Z}_2$-valued. Then all trivial 0D block-states of the form of Eq. (\ref{p2 U(1) spinless trivial state}) can be viewed as a vector subspace $V'$ of $V$. It is easy to see that there are only three independent quantities in Eq. (\ref{p2 U(1) spinless trivial state}): $n_1$, $n_2$ and $n_3$. So the dimensionality of the vector subspace $V'$ should be 3. For the $U^f(1)$ charge sector, it is easy to notice that there are 3 independent quantities in the following 4 variables:
\[
2n_1+2n_2,-2n_1+2n_3,-2n_2,-2n_3
\]
Hence all 1D bubble constructions serve a $(2\mathbb{Z})^3$ trivialization in $U^f(1)$ charge sector, and all trivial states form the following group:
\begin{align}
\{\mathrm{TBS}\}_{p2,0}^{U(1)}=(2\mathbb{Z})^3
\end{align}
and different independent nontrivial 0D block-states can be labeled by different group elements of the following quotient group:
\begin{align}
\mathcal{G}_{p2,0}^{U(1)}&=\{\mathrm{OFBS}\}_{p2,0}^{U(1)}/\{\mathrm{TBS}\}_{p2,0}^{U(1)}\nonumber\\
&=\mathbb{Z}^4\times\mathbb{Z}_2^4/(2\mathbb{Z})^3=\mathbb{Z}\times\mathbb{Z}_2^7
\end{align}
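As in the $p4m$ case, the charge-sector trivialization can be verified with a short SymPy computation (an illustrative sketch, not part of the derivation):
\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import invariant_factors

# Rows: charge vectors added to (mu_1, mu_2, mu_3, mu_4) by one 1D bubble
# on tau_1, tau_2 and tau_3, respectively.
L = Matrix([[2, -2, 0, 0],
            [2, 0, -2, 0],
            [0, 2, 0, -2]])
print(invariant_factors(L, domain=ZZ))  # invariant factors: (2, 2, 2)
\end{verbatim}
The invariant factors $(2,2,2)$ confirm the $(2\mathbb{Z})^3$ trivialization: $\mathbb{Z}^4$ quotients to $\mathbb{Z}\times\mathbb{Z}_2^3$, which together with the four rotation eigenvalues yields $\mathbb{Z}\times\mathbb{Z}_2^7$.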
For systems with spin-1/2 fermions, the 0D obstruction-free block-states are identical to the spinless case:
\begin{align}
\{\mathrm{OFBS}\}_{p2,1/2}^{U(1)}=\mathbb{Z}^4\times\mathbb{Z}_2^4
\end{align}
Then consider again the aforementioned 1D bubble constructions: the rotation properties of $|\xi\rangle_{p2}^{\mu_1}$ and $|\xi\rangle_{p2}^{\mu_2}$ at $\mu_1$ and $\mu_2$ are changed by an additional $-1$, which leads to no trivialization. Furthermore, it is easy to verify that the complex fermion decorations for spinless and spin-1/2 fermions are identical. So again we start from the original trivial state (\ref{p2 original trivial state}); taking the above 1D bubble constructions on $\tau_j$ $n_j\in\mathbb{Z}$ times ($j=1,2,3$) leads to a new 0D block-state labeled by:
\begin{align}
&\left[(2n_1+2n_2,+),(-2n_1+2n_3,+),\right.\nonumber\\
&\left.(-2n_2,+),(-2n_3,+)\right]
\end{align}
Similar to the spinless case, all states with this form are trivial and form the following group:
\begin{align}
\{\mathrm{TBS}\}_{p2,1/2}^{U(1)}=(2\mathbb{Z})^3
\end{align}
and different independent nontrivial 0D block-states can be labeled by different group elements of the following quotient group
\begin{align}
\mathcal{G}_{p2,1/2}^{U(1)}&=\{\mathrm{OFBS}\}_{p2,1/2}^{U(1)}/\{\mathrm{TBS}\}_{p2,1/2}^{U(1)}\nonumber\\
&=\mathbb{Z}^4\times\mathbb{Z}_2^4/(2\mathbb{Z})^3=\mathbb{Z}\times\mathbb{Z}_2^7
\end{align}
We notice that the classifications of 2D crystalline topological phases protected by $p2$ symmetry for systems with spinless and spin-1/2 fermions are identical. Now we explain this fact: for both spinless and spin-1/2 fermions (for 0D blocks, $\boldsymbol{R}^2=1$ and $\boldsymbol{R}^2=-1$, respectively), the group structure of the symmetry group on 0D blocks is identical: the direct product of $U^f(1)$ charge conservation and the $\mathbb{Z}_2$ on-site symmetry (the 2-fold rotational symmetry acting internally), $U^f(1)\times\mathbb{Z}_2$. We explicitly formulate the $U^f(1)$ charge conservation and the $\mathbb{Z}_2$ on-site symmetry as:
\begin{align}
\mathbb{Z}_2=\left\{E,\boldsymbol{R}\right\},~~U^f(1)=\left\{e^{i\theta}\big|\theta\in[0,2\pi)\right\}
\end{align}
For systems with spinless fermions, $\boldsymbol{R}^2=1$. Nevertheless, we can twist the group elements of $\mathbb{Z}_2$ by a $U^f(1)$ phase factor as:
\begin{align}
\boldsymbol{R}'=\boldsymbol{R}e^{i\pi/2},~~e^{i\pi/2}\in U^f(1)
\label{twist}
\end{align}
then we reformulate the total symmetry group with the twisted operators:
\begin{align}
\mathbb{Z}_2=\left\{E,\boldsymbol{R}'\right\},~~U^f(1)=\left\{e^{i\theta}\big|\theta\in[0,2\pi)\right\}
\end{align}
But $\boldsymbol{R}'^2=-1$ for this case. Therefore, the symmetry groups for spinless and spin-1/2 fermions are identical, and can be deformed into each other by Eq. (\ref{twist}).
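Indeed, since the $U^f(1)$ phase factor $e^{i\pi/2}$ is central in the direct product $U^f(1)\times\mathbb{Z}_2$, one checks directly that
\begin{align}
\boldsymbol{R}'^2=\boldsymbol{R}e^{i\pi/2}\boldsymbol{R}e^{i\pi/2}=\boldsymbol{R}^2e^{i\pi}=-\boldsymbol{R}^2
\end{align}
so the twist (\ref{twist}) exchanges the two cases $\boldsymbol{R}^2=1$ and $\boldsymbol{R}^2=-1$.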
We stress that such a statement is true for all wallpaper groups with a single reflection axis.
\subsection{Rhombic lattice: $cmm$}
Consider again the cell decomposition of $cmm$ as illustrated in Fig. \ref{cmm}. For 0D blocks labeled by $\mu_1$, different 0D block-states are characterized by different irreducible representations of the symmetry group as:
\begin{align}
\mathcal{H}^1[U^f(1)\times\mathbb{Z}_2,U(1)]=\mathbb{Z}\times\mathbb{Z}_2
\end{align}
Here $\mathbb{Z}$ represents the complex fermion and $\mathbb{Z}_2$ represents the rotation eigenvalue $-1$. For 0D blocks labeled by $\mu_2$ and $\mu_3$, different 0D block-states are characterized by different irreducible representations of the symmetry group as:
\begin{align}
\mathcal{H}^1[U^f(1)\times(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)]=\mathbb{Z}\times\mathbb{Z}_2^2
\end{align}
Here $\mathbb{Z}$ represents the complex fermion, the first $\mathbb{Z}_2$ represents the rotation eigenvalue $-1$ and the second $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$. So the 0D block-states at $\mu_1$ can be labeled by $(n_1,\pm)$, where $n_1\in\mathbb{Z}$ represents the number of complex fermions at each $\mu_1$ and $\pm$ represents the eigenvalue of the 2-fold rotation operation $\boldsymbol{R}_{\mu_1}$; and the 0D block-states at $\mu_2$ and $\mu_3$ can be labeled by $(n_2/n_3,\pm,\pm)$, where $n_2/n_3\in\mathbb{Z}$ represents the number of complex fermions at each $\mu_2/\mu_3$, and the two $\pm$'s represent the eigenvalues of two independent reflection operations (because an even-fold dihedral group can also be generated by two independent reflections). According to this notation, all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{cmm,0}^{U(1)}=\mathbb{Z}^3\times\mathbb{Z}_2^5
\end{align}
We should further consider possible trivializations: for systems with spinless fermions, consider the 1D bubble equivalence on 1D blocks labeled by $\tau_1$: we decorate a 1D ``particle-hole'' bubble [cf. Eq. (\ref{1D bubble}), here yellow and red dots represent particle and hole, respectively] on each $\tau_1$, and they can be trivialized if we shrink them to a point. Near each 0D block labeled by $\mu_1$, there are two particles forming atomic insulator:
\begin{align}
|\xi\rangle_{cmm}^{\mu_1}=p_1^\dag p_2^\dag|0\rangle
\end{align}
with rotation property:
\begin{align}
\boldsymbol{R}_{\mu_1}|\xi\rangle_{cmm}^{\mu_1}=p_2^\dag p_1^\dag|0\rangle=-|\xi\rangle_{cmm}^{\mu_1}
\end{align}
i.e., the rotation eigenvalue $-1$ at each 0D block $\mu_1$ can be trivialized by the atomic insulator $|\xi\rangle_{cmm}^{\mu_1}$. Near $\mu_2$, there are four holes forming another atomic insulator:
\begin{align}
|\xi\rangle_{cmm}^{\mu_2}=h_1^\dag h_2^\dag h_3^\dag h_4^\dag|0\rangle
\end{align}
with rotation property:
\begin{align}
\boldsymbol{R}_{\mu_2}|\xi\rangle_{cmm}^{\mu_2}=h_3^\dag h_4^\dag h_1^\dag h_2^\dag|0\rangle=|\xi\rangle_{cmm}^{\mu_2}
\end{align}
i.e., the rotation eigenvalue $-1$ at each 0D block $\mu_2$ cannot be trivialized by the atomic insulator $|\xi\rangle_{cmm}^{\mu_2}$. Therefore, the aforementioned 1D bubble construction leads to the trivialization of the rotation eigenvalue $-1$ at $\mu_1$. Then we consider the 1D bubble equivalence on 1D blocks labeled by $\tau_2$: we decorate a 1D ``particle-hole'' bubble identical to the aforementioned one on each $\tau_2$. Near each 0D block labeled by $\mu_2$, there are two particles forming the following atomic insulator:
\begin{align}
|\eta\rangle_{cmm}^{\mu_2}=p_1'^\dag p_2'^\dag|0\rangle
\end{align}
with rotation and reflection properties as:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_2}|\eta\rangle_{cmm}^{\mu_2}=p_2'^\dag p_1'^\dag|0\rangle=-|\eta\rangle_{cmm}^{\mu_2}\\
&\boldsymbol{M}_{\tau_3}|\eta\rangle_{cmm}^{\mu_2}=p_2'^\dag p_1'^\dag|0\rangle=-|\eta\rangle_{cmm}^{\mu_2}
\end{aligned}
\end{align}
i.e., rotation and reflection eigenvalues $-1$ at each 0D block $\mu_2$ can be trivialized by atomic insulator $|\eta\rangle_{cmm}^{\mu_2}$. Near $\mu_3$, there are two holes forming another atomic insulator:
\begin{align}
|\eta\rangle_{cmm}^{\mu_3}=h_1'^\dag h_2'^\dag|0\rangle
\end{align}
with rotation and reflection properties as:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_3}|\eta\rangle_{cmm}^{\mu_3}=h_2'^\dag h_1'^\dag|0\rangle=-|\eta\rangle_{cmm}^{\mu_3}\\
&\boldsymbol{M}_{\tau_3}|\eta\rangle_{cmm}^{\mu_3}=h_2'^\dag h_1'^\dag|0\rangle=-|\eta\rangle_{cmm}^{\mu_3}
\end{aligned}
\end{align}
i.e., the rotation and reflection eigenvalues $-1$ at each 0D block $\mu_3$ can be trivialized by the atomic insulator $|\eta\rangle_{cmm}^{\mu_3}$. Therefore, the aforementioned 1D bubble construction renders the rotation and reflection eigenvalues of $\mu_2$ and $\mu_3$ non-independent (they can be changed simultaneously).
Subsequently we consider the $U^f(1)$ charge sector. First of all, as shown in Fig. \ref{cmm}, we should identify that within a specific unit cell, there are two 0D blocks labeled by $\mu_1$ and one 0D block labeled by $\mu_2/\mu_3$. Consider again the above 1D bubble construction on $\tau_1$: it adds two complex fermions on each 0D block $\mu_1$ and removes four complex fermions at each 0D block $\mu_2$ (by adding four holes), hence the numbers of complex fermions at $\mu_1$ and $\mu_2$ are not independent. More specifically, suppose there are $a$ complex fermions on each $\mu_1$, $b$ complex fermions on each $\mu_2$, and the total number of complex fermions on $\mu_1$ and $\mu_2$ within a certain unit cell is $c=2a+b$. Applying the above manipulation $n$ times ($n\in\mathbb{Z}$), the number of complex fermions on each $\mu_1/\mu_2$ is $a+2n/b-4n$, and the total number of complex fermions on $\mu_1$ and $\mu_2$ within a certain unit cell remains invariant. So for a specific $c$, there are only two independent cases: $c=2a+b$ and $c=2(a+1)+(b-2)$.
Then we consider the aforementioned 1D bubble equivalence on the 1D blocks $\tau_2$: it adds two complex fermions at each 0D block $\mu_2$ and removes two complex fermions at each 0D block $\mu_3$ (by adding two holes), hence the numbers of complex fermions at $\mu_2$ and $\mu_3$ are not independent. More specifically, suppose there are $a'$ complex fermions on each $\mu_2$, $b'$ complex fermions on each $\mu_3$, and the total number of complex fermions on $\mu_2$ and $\mu_3$ within a certain unit cell is $c'=a'+b'$. Applying the above manipulation $n'$ times ($n'\in\mathbb{Z}$), the number of complex fermions on each $\mu_2/\mu_3$ is $a'+2n'/b'-2n'$, and the total number of complex fermions on $\mu_2$ and $\mu_3$ within a certain unit cell remains invariant. So for a specific $c'$, there are only two independent cases: $c'=a'+b'$ and $c'=(a'+1)+(b'-1)$.
With the help of the above discussions, we consider the 0D block-state decorations. Start from the original trivial state (nothing is decorated on any block):
\begin{align}
[(0,+),(0,+,+),(0,+,+)]
\label{cmm original trivial state}
\end{align}
Taking the aforementioned 1D bubble constructions on $\tau_j$ $n_j\in\mathbb{Z}$ times ($j=1,2,3$) leads to a new 0D block-state labeled by:
\begin{align}
&\left[\left(2n_1,(-1)^{n_1}\right),\left(-2n_1+2n_2+2n_3,(-1)^{n_2+n_3},(-1)^{n_2}\right)\right.\nonumber\\
&\left.\left(-2n_2-2n_3,(-1)^{n_2+n_3},(-1)^{n_2}\right)\right]
\label{cmm U(1) spinless trivial state}
\end{align}
According to the definition of bubble equivalence, all states with this form should be trivial. Alternatively, all 0D block-states can be viewed as vectors of an 8-dimensional vector space $V$, where the complex fermion components are $\mathbb{Z}$-valued and all other components are $\mathbb{Z}_2$-valued. Then all trivial 0D block-states of the form of Eq. (\ref{cmm U(1) spinless trivial state}) can be viewed as a vector subspace $V'$ of $V$. It is easy to see that there are only three independent quantities in Eq. (\ref{cmm U(1) spinless trivial state}): $n_1$, $n_2$ and $n_3$. So the dimensionality of the vector subspace $V'$ should be 3. For the $U^f(1)$ charge sector, we have the following relationship:
\begin{align}
-2n_1-(-2n_1+2n_2+2n_3)=-2n_2-2n_3
\end{align}
i.e., there are only two independent quantities, which serve as a $(2\mathbb{Z})^2$ trivialization. The remaining one degree of freedom of the vector subspace $V'$ should be attributed to the eigenvalues of the point group symmetry action, and serves as a $\mathbb{Z}_2$ trivialization. Therefore, all trivial states (\ref{cmm U(1) spinless trivial state}) form the following group:
\begin{align}
\{\mathrm{TBS}\}_{cmm,0}^{U(1)}=(2\mathbb{Z})^2\times\mathbb{Z}_2
\end{align}
and different independent nontrivial 0D block-states can be labeled by different group elements of the following quotient group:
\begin{align}
\mathcal{G}_{cmm,0}^{U(1)}&=\{\mathrm{OFBS}\}_{cmm,0}^{U(1)}/\{\mathrm{TBS}\}_{cmm,0}^{U(1)}\nonumber\\
&=\mathbb{Z}^3\times\mathbb{Z}_2^5/(2\mathbb{Z})^2\times\mathbb{Z}_2=\mathbb{Z}\times\mathbb{Z}_2^6
\end{align}
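The charge sector can again be cross-checked with SymPy (same illustrative assumptions as in the $p4m$ case):
\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import invariant_factors

# In the cmm trivial states, n_2 and n_3 enter through the same charge
# vector, so the charge sublattice is spanned by two rows only.
L = Matrix([[2, -2, 0],
            [0, 2, -2]])
print(invariant_factors(L, domain=ZZ))  # invariant factors: (2, 2)
\end{verbatim}
The invariant factors $(2,2)$ reproduce the $(2\mathbb{Z})^2$ trivialization: $\mathbb{Z}^3$ quotients to $\mathbb{Z}\times\mathbb{Z}_2^2$, and together with $\mathbb{Z}_2^5/\mathbb{Z}_2=\mathbb{Z}_2^4$ this reproduces the quotient group above.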
For systems with spin-1/2 fermions, like the cases without $U^f(1)$ charge conservation, the classification data of the 0D block-states on the 0D blocks labeled by $\mu_2$ and $\mu_3$ can be characterized by different irreducible representations of the corresponding on-site symmetry group (for the meaning of $\omega_2$, refer to Sec. \ref{spinSec}):
\begin{align}
\mathcal{H}^1[U^f(1)\times_{\omega_2}(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)]=2\mathbb{Z}\times\mathbb{Z}_2^2
\end{align}
Here $2\mathbb{Z}$ represents the $U^f(1)$ charge carried by the complex fermion, and the two $\mathbb{Z}_2$'s represent the rotation and reflection eigenvalues (similar to the $p4m$ case, we can only decorate an even number of $U^f(1)$ charges on each 0D block). So all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{cmm,1/2}^{U(1)}=\mathbb{Z}\times(2\mathbb{Z})^2\times\mathbb{Z}_2^5
\end{align}
Then we discuss possible trivializations. Consider again the aforementioned 1D bubble constructions; now the rotation properties of $|\xi\rangle_{cmm}^{\mu_1}$, $|\xi\rangle_{cmm}^{\mu_2}$, $|\eta\rangle_{cmm}^{\mu_2}$ and $|\eta\rangle_{cmm}^{\mu_3}$ at $\mu_j$, $j=1,2,3$, are changed by an additional $-1$; the reflection properties of $|\eta\rangle_{cmm}^{\mu_2}$ and $|\eta\rangle_{cmm}^{\mu_3}$ at $\mu_2$ and $\mu_3$ are also changed by an additional $-1$. All of them lead to no trivialization. Furthermore, it is easy to see that all arguments about the $U^f(1)$ charge sector are identical. So again we start from the original trivial state (\ref{cmm original trivial state}); taking the above 1D bubble constructions on $\tau_j$ $n_j$ times ($j=1,2,3$) leads to a new 0D block-state labeled by:
\begin{align}
&\left[\left(2n_1,+\right),\left(-2n_1+2n_2+2n_3,+,+\right)\right.\nonumber\\
&\left.\left(-2n_2-2n_3,+,+\right)\right]
\label{cmm U(1) spin-1/2 trivial state}
\end{align}
Similar to the spinless case, all states with this form are trivial and form the following group:
\begin{align}
\{\mathrm{TBS}\}_{cmm,1/2}^{U(1)}=(2\mathbb{Z})^2
\end{align}
and all different independent nontrivial 0D block-states can be labeled by different group elements of the following quotient group:
\begin{align}
\mathcal{G}_{cmm,1/2}^{U(1)}&=\{\mathrm{OFBS}\}_{cmm,1/2}^{U(1)}/\{\mathrm{TBS}\}_{cmm,1/2}^{U(1)}\nonumber\\
&=\mathbb{Z}\times(2\mathbb{Z})^2\times\mathbb{Z}_2^5/(2\mathbb{Z})^2=2\mathbb{Z}\times\mathbb{Z}_2^6
\end{align}
We should notice that the group structure of the classification should be $2\mathbb{Z}\times\mathbb{Z}_2^6$ rather than $\mathbb{Z}\times\mathbb{Z}_2^5$: the two independent quantities are $n_1$ and $n_2+n_3$, hence the classification contributed by the complex fermion decorations on $\mu_1$ should be $\mathbb{Z}/2\mathbb{Z}=\mathbb{Z}_2$. Equivalently, the 0D block-state $(1,+)$ at $\mu_1$ is nontrivial.
\subsection{Rectangle lattice: $pgg$}
Consider again the cell decomposition of $pgg$ as illustrated in Fig. \ref{pgg}. For an arbitrary 0D block, different 0D block-states are characterized by different irreducible representations of the symmetry group as:
\begin{align}
\mathcal{H}^1[U^f(1)\times\mathbb{Z}_2,U(1)]=\mathbb{Z}\times\mathbb{Z}_2
\end{align}
Here $\mathbb{Z}$ represents the complex fermion and $\mathbb{Z}_2$ represents the eigenvalue of the 2-fold rotational symmetry operation. So the 0D block-state decorated on $\mu_j~(j=1,2)$ can be labeled by $(n_j,\pm)$, where $n_j\in\mathbb{Z}$ represents the number of complex fermions decorated on $\mu_j$ and $\pm$ represents the eigenvalue of the 2-fold rotational symmetry on $\mu_j$. According to this notation, all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{pgg,0}^{U(1)}=\mathbb{Z}^2\times\mathbb{Z}_2^2
\end{align}
We should further consider possible trivializations. For systems with spinless fermions, consider the 1D bubble equivalence on $\tau_2$: we decorate a 1D ``particle-hole'' bubble [cf. Eq. (\ref{1D bubble}), here yellow and red dots represent particle and hole, respectively] on each $\tau_2$, and they can be trivialized if we shrink them to a point. Near each 0D block labeled by $\mu_1$, there are two particles that form an atomic insulator:
\begin{align}
|\phi\rangle_{pgg}^{\mu_1}=p_1^\dag p_2^\dag|0\rangle
\end{align}
with rotation property as:
\begin{align}
\boldsymbol{R}_{\mu_1}|\phi\rangle_{pgg}^{\mu_1}=p_2^\dag p_1^\dag|0\rangle=-|\phi\rangle_{pgg}^{\mu_1}
\end{align}
i.e., rotation eigenvalue $-1$ can be trivialized by the atomic insulator $|\phi\rangle_{pgg}^{\mu_1}$ at each 0D block labeled by $\mu_1$. Near each 0D block labeled by $\mu_2$, there are two holes that form another atomic insulator:
\begin{align}
|\phi\rangle_{pgg}^{\mu_2}=h_1^\dag h_2^\dag|0\rangle
\end{align}
with rotation property as:
\begin{align}
\boldsymbol{R}_{\mu_2}|\phi\rangle_{pgg}^{\mu_2}=h_2^\dag h_1^\dag|0\rangle=-|\phi\rangle_{pgg}^{\mu_2}
\end{align}
i.e., rotation eigenvalue $-1$ can be trivialized by the atomic insulator $|\phi\rangle_{pgg}^{\mu_2}$ at each 0D block labeled by $\mu_2$. Thus the 1D bubble construction on $\tau_2$ can change the rotation eigenvalues of $\mu_1$ and $\mu_2$ simultaneously, which leads to the nonindependence of rotation eigenvalues of $\mu_1$ and $\mu_2$.
Subsequently we consider the $U^f(1)$ charge sector: consider the 1D bubble equivalence on the 1D blocks $\tau_2$ [cf. Eq. (\ref{1D bubble}), here yellow and red dots represent particle and hole, respectively, and they can be trivialized if we shrink them to a point]: it adds two complex fermions at each 0D block $\mu_1$ and removes two complex fermions at each 0D block $\mu_2$ (by adding two holes), hence the numbers of complex fermions at $\mu_1$ and $\mu_2$ are not independent. More specifically, suppose there are $a$ complex fermions at each $\mu_1$ and $b$ complex fermions at each $\mu_2$, and let $c=a+b$. Applying the above manipulation $n$ times ($n\in\mathbb{Z}$), the number of complex fermions on each $\mu_1/\mu_2$ is $a+2n/b-2n$, and their summation remains invariant. So for a specific $c$, there are only two independent cases: $c=a+b$ and $c=(a+1)+(b-1)$.
With the help of above discussions, we consider the 0D block-state decorations. Start from the following trivial state:
\begin{align}
[(0,+),(0,+)]
\end{align}
Taking the aforementioned 1D bubble construction on $\tau_2$ $n\in\mathbb{Z}$ times, we obtain the following group containing all trivial states:
\begin{align}
\{\mathrm{TBS}\}_{pgg,0}^{U(1)}&=\left\{\big[(2n,(-1)^n),(-2n,(-1)^n)\big]\Big|n\in\mathbb{Z}\right\}\nonumber\\
&=2\mathbb{Z}
\end{align}
Therefore, the ultimate classification of crystalline topological phases protected by $pgg$ symmetry for 2D systems with spinless fermions is:
\begin{align}
\mathcal{G}_{pgg,0}^{U(1)}&=\{\mathrm{OFBS}\}_{pgg,0}^{U(1)}/\{\mathrm{TBS}\}_{pgg,0}^{U(1)}\nonumber\\
&=\mathbb{Z}^2\times\mathbb{Z}_2^2/2\mathbb{Z}=\mathbb{Z}\times\mathbb{Z}_2^3
\end{align}
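For completeness, the corresponding SymPy cross-check (an illustrative sketch as before) involves a single charge vector:
\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import invariant_factors

# Single charge vector acting on (mu_1, mu_2).
print(invariant_factors(Matrix([[2, -2]]), domain=ZZ))  # (2,)
\end{verbatim}
so $\mathbb{Z}^2$ quotients to $\mathbb{Z}\times\mathbb{Z}_2$, which together with the two rotation eigenvalues reproduces $\mathbb{Z}\times\mathbb{Z}_2^3$.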
For systems with spin-1/2 fermions, the 0D obstruction-free block-states are identical to the spinless case:
\begin{align}
\{\mathrm{OFBS}\}_{pgg,1/2}^{U(1)}=\mathbb{Z}^2\times\mathbb{Z}_2^2
\end{align}
Then consider again the aforementioned 1D bubble constructions: the rotation properties of $|\phi\rangle_{pgg}^{\mu_1}$ and $|\phi\rangle_{pgg}^{\mu_2}$ are changed by an additional $-1$, which leads to no trivialization. It is easy to verify that the complex fermion decorations for spinless and spin-1/2 fermions are identical. Considering again the 1D bubble construction on $\tau_2$, we obtain the following group containing all trivial states:
\begin{align}
\{\mathrm{TBS}\}_{pgg,1/2}^{U(1)}=\left\{\big[(2n,+),(-2n,+)\big]\Big|n\in\mathbb{Z}\right\}=2\mathbb{Z}
\end{align}
Therefore, the ultimate classification of crystalline topological phases protected by $pgg$ symmetry for 2D systems with spin-1/2 fermions is:
\begin{align}
\mathcal{G}_{pgg,1/2}^{U(1)}&=\{\mathrm{OFBS}\}_{pgg,1/2}^{U(1)}/\{\mathrm{TBS}\}_{pgg,1/2}^{U(1)}\nonumber\\
&=\mathbb{Z}^2\times\mathbb{Z}_2^2/2\mathbb{Z}=\mathbb{Z}\times\mathbb{Z}_2^3
\end{align}
\subsection{Hexagonal lattice: $p6m$}
Consider again the cell decomposition of $p6m$ as illustrated in Fig. \ref{p6m}. For 0D blocks labeled by $\mu_1$, different 0D block-states are characterized by different irreducible representations of the symmetry group as:
\begin{align}
\mathcal{H}^1[U^f(1)\times(\mathbb{Z}_6\rtimes\mathbb{Z}_2),U(1)]=\mathbb{Z}\times\mathbb{Z}_2^2
\end{align}
Here $\mathbb{Z}$ represents the complex fermion, the first $\mathbb{Z}_2$ represents the rotation eigenvalue $-1$ and the second $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$. For 0D blocks labeled by $\mu_2$, different 0D block-states are characterized by different irreducible representations of the symmetry group as:
\begin{align}
\mathcal{H}^1[U^f(1)\times(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)]=\mathbb{Z}\times\mathbb{Z}_2^2
\end{align}
Here $\mathbb{Z}$ represents the complex fermion, the first $\mathbb{Z}_2$ represents the rotation eigenvalue $-1$ and the second $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$. For 0D blocks labeled by $\mu_3$, different 0D block-states are characterized by different irreducible representations of the symmetry group as:
\begin{align}
\mathcal{H}^1[U^f(1)\times(\mathbb{Z}_3\rtimes\mathbb{Z}_2),U(1)]=\mathbb{Z}\times\mathbb{Z}_2
\end{align}
Here $\mathbb{Z}$ represents the complex fermion and $\mathbb{Z}_2$ represents the reflection eigenvalue $-1$. So the 0D block-states on $\mu_1$ and $\mu_2$ can be labeled by $(n_1/n_2,\pm,\pm)$, where $n_1/n_2$ represents the number of complex fermions decorated on $\mu_1/\mu_2$ and the two $\pm$'s represent the eigenvalues of two independent reflection operations (because an even-fold dihedral group can also be generated by two independent reflections); the 0D block-states on $\mu_3$ can be labeled by $(n_3,\pm)$, where $n_3$ represents the number of complex fermions decorated on $\mu_3$ and $\pm$ represents the eigenvalue of the reflection operation. According to this notation, all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p6m,0}^{U(1)}=\mathbb{Z}^3\times\mathbb{Z}_2^5
\end{align}
We should further consider possible trivializations: for systems with spinless fermions, consider the 1D bubble equivalence on 1D blocks labeled by $\tau_1$: we decorate a 1D ``particle-hole'' bubble [cf. Eq. (\ref{1D bubble}), here yellow and red dots represent particle and hole, respectively] on each $\tau_1$, and they can be trivialized if we shrink them to a point. Near each 0D block labeled by $\mu_1$, there are six particles forming the following atomic insulator:
\begin{align}
|\xi\rangle_{p6m}^{\mu_1}=p_1^\dag p_2^\dag p_3^\dag p_4^\dag p_5^\dag p_6^\dag|0\rangle
\end{align}
with rotation and reflection properties as:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_1}|\xi\rangle_{p6m}^{\mu_1}=p_2^\dag p_3^\dag p_4^\dag p_5^\dag p_6^\dag p_1^\dag|0\rangle=-|\xi\rangle_{p6m}^{\mu_1}\\
&\boldsymbol{M}_{\tau_2}|\xi\rangle_{p6m}^{\mu_1}=p_6^\dag p_5^\dag p_4^\dag p_3^\dag p_2^\dag p_1^\dag|0\rangle=-|\xi\rangle_{p6m}^{\mu_1}
\end{aligned}
\end{align}
i.e., rotation and reflection eigenvalues $-1$ at each 0D block $\mu_1$ can be trivialized by atomic insulator $|\xi\rangle_{p6m}^{\mu_1}$. Near $\mu_2$, there are two holes forming another atomic insulator:
\begin{align}
|\xi\rangle_{p6m}^{\mu_2}=h_1^\dag h_2^\dag|0\rangle
\end{align}
with rotation and reflection properties as:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_2}|\xi\rangle_{p6m}^{\mu_2}=h_2^\dag h_1^\dag|0\rangle=-|\xi\rangle_{p6m}^{\mu_2}\\
&\boldsymbol{M}_{\tau_3}|\xi\rangle_{p6m}^{\mu_2}=h_2^\dag h_1^\dag|0\rangle=-|\xi\rangle_{p6m}^{\mu_2}
\end{aligned}
\end{align}
i.e., the rotation and reflection eigenvalues $-1$ at each 0D block $\mu_2$ can be trivialized by the atomic insulator $|\xi\rangle_{p6m}^{\mu_2}$. Therefore, the aforementioned 1D bubble construction renders the rotation and reflection eigenvalues of $\mu_1$ and $\mu_2$ non-independent (they can be changed simultaneously). Then we consider the 1D bubble equivalence on 1D blocks labeled by $\tau_3$: we decorate a 1D ``particle-hole'' bubble identical to the aforementioned one on each $\tau_3$. Near each 0D block labeled by $\mu_2$, there are two particles forming the following atomic insulator:
\begin{align}
|\eta\rangle_{p6m}^{\mu_2}=p_1'^\dag p_2'^\dag|0\rangle
\end{align}
with rotation and reflection properties as:
\begin{align}
\begin{aligned}
&\boldsymbol{R}_{\mu_2}|\eta\rangle_{p6m}^{\mu_2}=p_2'^\dag p_1'^\dag|0\rangle=-|\eta\rangle_{p6m}^{\mu_2}\\
&\boldsymbol{M}_{\tau_1}|\eta\rangle_{p6m}^{\mu_2}=p_2'^\dag p_1'^\dag|0\rangle=-|\eta\rangle_{p6m}^{\mu_2}
\end{aligned}
\end{align}
i.e., rotation and reflection eigenvalues $-1$ at each 0D block $\mu_2$ can be trivialized by atomic insulator $|\eta\rangle_{p6m}^{\mu_2}$. Near each 0D block $\mu_3$, there are three holes forming another atomic insulator:
\begin{align}
|\eta\rangle_{p6m}^{\mu_3}=h_1'^\dag h_2'^\dag h_3'^\dag|0\rangle
\end{align}
with reflection property as:
\begin{align}
\boldsymbol{M}_{\tau_2}|\eta\rangle_{p6m}^{\mu_3}=h_1'^\dag h_3'^\dag h_2'^\dag|0\rangle=-|\eta\rangle_{p6m}^{\mu_3}
\end{align}
i.e., the reflection eigenvalue $-1$ at each 0D block $\mu_3$ can be trivialized by the atomic insulator $|\eta\rangle_{p6m}^{\mu_3}$. Hence the reflection eigenvalues $-1$ at all 0D blocks are not independent, and the rotation eigenvalues $-1$ at $\mu_1$ and $\mu_2$ can be totally trivialized. Furthermore, we investigate an alternative 1D bubble equivalence on the 1D blocks $\tau_2$ (we label the above 1D bubble construction by ``type-\text{\uppercase\expandafter{\romannumeral1}}'' and this 1D bubble construction by ``type-\text{\uppercase\expandafter{\romannumeral2}}''): we decorate an alternative 1D bubble on each 1D block labeled by $\tau_2$ [cf. Eq. (\ref{1D bubble}), here both yellow and red dots represent the 0D FSPT modes characterized by eigenvalues $-1$ of reflection symmetry, and they can be trivialized if we shrink them to a point]. According to this 1D bubble construction, the reflection eigenvalue at each 0D block $\mu_3$ is changed by $-1$ while the reflection eigenvalue at each 0D block $\mu_2$ remains invariant. Another type-\text{\uppercase\expandafter{\romannumeral2}} 1D bubble construction can also be constructed on $\tau_3$.
Subsequently we consider the $U^f(1)$ charge sector. First of all, as shown in Fig. \ref{p6m}, we should identify that within a specific unit cell, there is one 0D block labeled by $\mu_1$, two 0D blocks labeled by $\mu_3$ and three 0D blocks labeled by $\mu_2$. Consider again the aforementioned 1D bubble construction on $\tau_1$: it adds six complex fermions at each 0D block $\mu_1$ and removes two complex fermions at each 0D block $\mu_2$ (by adding two holes), hence the numbers of complex fermions at $\mu_1$ and $\mu_2$ are not independent.
With the help of the above discussions, we consider the 0D block-state decorations. Start from the original trivial state (nothing is decorated on any block):
\begin{align}
[(0,+,+),(0,+,+),(0,+)]
\label{p6m original trivial state}
\end{align}
Taking the aforementioned type-\text{\uppercase\expandafter{\romannumeral1}} 1D bubble constructions on $\tau_j$ $n_j$ times ($j=1,2,3$), and the type-\text{\uppercase\expandafter{\romannumeral2}} 1D bubble constructions on $\tau_2/\tau_3$ $n_2'/n_3'$ times, leads to a new 0D block-state labeled by:
\begin{align}
&\left[\left(6n_1+6n_2,(-1)^{n_1},(-1)^{n_2}\right),\right.\nonumber\\
&\left(-2n_1+2n_3,(-1)^{n_1},(-1)^{n_2}\right),\nonumber\\
&\left.\left(-3n_2-3n_3,(-1)^{n_2+n_3+n_2'+n_3'}\right)\right]
\label{p6m U(1) spinless trivial state}
\end{align}
According to the definition of bubble equivalence, all states with this form should be trivial. Alternatively, all 0D block-states can be viewed as vectors of an 8-dimensional vector space $V$, where the complex fermion components are $\mathbb{Z}$-valued and all other components are $\mathbb{Z}_2$-valued. Then all trivial 0D block-states of the form of Eq. (\ref{p6m U(1) spinless trivial state}) can be viewed as a vector subspace $V'$ of $V$. We notice that $n_2'$ and $n_3'$ only appear through their sum, hence they are not independent. As a consequence, there are only four independent quantities in Eq. (\ref{p6m U(1) spinless trivial state}): $n_1$, $n_2$, $n_3$ and $n_2'+n_3'$. So the dimensionality of the vector subspace $V'$ should be 4. For the $U^f(1)$ charge sector, we have the following relationship:
\begin{align}
-(6n_1+6n_2)-3(-2n_1+2n_3)=2(-3n_2-3n_3)
\end{align}
i.e., there are only two independent quantities, which serve as a $2\mathbb{Z}\times3\mathbb{Z}$ trivialization. The remaining two degrees of freedom of the vector subspace $V'$ should be attributed to the eigenvalues of the point group symmetry actions, and serve as a $\mathbb{Z}_2^2$ trivialization. Therefore, all trivial states of the form shown in Eq. (\ref{p6m U(1) spinless trivial state}) compose the following group:
\begin{align}
\{\mathrm{TBS}\}_{p6m,0}^{U(1)}=2\mathbb{Z}\times3\mathbb{Z}\times\mathbb{Z}_2^2
\end{align}
and different independent nontrivial 0D block-states can be labeled by different group elements of the following quotient group:
\begin{align}
\mathcal{G}_{p6m,0}^{U(1)}&=\{\mathrm{OFBS}\}_{p6m,0}^{U(1)}/\{\mathrm{TBS}\}_{p6m,0}^{U(1)}\nonumber\\
&=\mathbb{Z}^3\times\mathbb{Z}_2^5/2\mathbb{Z}\times3\mathbb{Z}\times\mathbb{Z}_2^2=\mathbb{Z}\times\mathbb{Z}_3\times\mathbb{Z}_2^4
\end{align}
For systems with spin-1/2 fermions, the rotation properties of $|\xi\rangle_{p6m}^{\mu_1}$, $|\xi\rangle_{p6m}^{\mu_2}$ and $|\eta\rangle_{p6m}^{\mu_2}$ at $\mu_1$ and $\mu_2$ are changed by an additional $-1$; the reflection properties of $|\xi\rangle_{p6m}^{\mu_1}$, $|\xi\rangle_{p6m}^{\mu_2}$, $|\eta\rangle_{p6m}^{\mu_2}$ and $|\eta\rangle_{p6m}^{\mu_3}$ at $\mu_1$, $\mu_2$ and $\mu_3$ are also changed by an additional $-1$. All of them lead to no trivialization. Furthermore, like the cases without $U^f(1)$ charge conservation, the classification data of the corresponding 0D block-states on $\mu_1$ and $\mu_2$ can be characterized by different irreducible representations of the corresponding on-site symmetry group (for the meaning of $\omega_2$, refer to Sec. \ref{spinSec}):
\begin{align}
\begin{aligned}
&\mathcal{H}^1\left[U^f(1)\times_{\omega_2}(\mathbb{Z}_6\rtimes\mathbb{Z}_2),U(1)\right]=2\mathbb{Z}\times\mathbb{Z}_2^2\\
&\mathcal{H}^1\left[U^f(1)\times_{\omega_2}(\mathbb{Z}_2\rtimes\mathbb{Z}_2),U(1)\right]=2\mathbb{Z}\times\mathbb{Z}_2^2
\end{aligned}
\end{align}
Here each $2\mathbb{Z}$ represents the $U^f(1)$ charge carried by the complex fermion, and the different $\mathbb{Z}_2$'s represent the rotation and reflection eigenvalues at each 0D block labeled by $\mu_1$ and $\mu_2$ (similar to the $p4m$ case, we can only decorate an even number of $U^f(1)$ charges on each 0D block). So all obstruction-free 0D block-states form the following group:
\begin{align}
\{\mathrm{OFBS}\}_{p6m,1/2}^{U(1)}=\mathbb{Z}\times(2\mathbb{Z})^2\times\mathbb{Z}_2^5
\end{align}
Then consider again the aforementioned 1D bubble constructions: the reflection properties of the atomic insulators $|\xi\rangle_{p6m}^{\mu_1}$, $|\xi\rangle_{p6m}^{\mu_2}$, $|\eta\rangle_{p6m}^{\mu_2}$ and $|\eta\rangle_{p6m}^{\mu_3}$ are changed by an additional $-1$, and all of them lead to no trivialization. The other 1D bubble constructions are identical. So again we start from the original trivial state (\ref{p6m original trivial state}); taking the above type-\text{\uppercase\expandafter{\romannumeral1}} 1D bubble constructions on $\tau_j$ $n_j$ times ($j=1,2,3$), and the type-\text{\uppercase\expandafter{\romannumeral2}} 1D bubble constructions on $\tau_2/\tau_3$ $n_2'/n_3'$ times, leads to a new 0D block-state labeled by:
\begin{align}
&\left[\left(6n_1+6n_2,+,+\right),\right.\nonumber\\
&\left(-2n_1+2n_3,+,+\right),\nonumber\\
&\left.\left(-3n_2-3n_3,(-1)^{n_2'+n_3'}\right)\right]
\label{p6m U(1) spin-1/2 trivial state}
\end{align}
The $U^f(1)$ charge sector is identical to the spinless case, and there is one independent nontrivial reflection eigenvalue $(-1)^{n_2'+n_3'}$. Therefore, all trivial states of the form shown in Eq. (\ref{p6m U(1) spin-1/2 trivial state}) compose the following group:
\begin{align}
\{\mathrm{TBS}\}_{p6m,1/2}^{U(1)}=2\mathbb{Z}\times3\mathbb{Z}\times\mathbb{Z}_2
\end{align}
and different independent nontrivial 0D block-states can be labeled by different group elements of the following quotient group:
\begin{align}
\mathcal{G}_{p6m,1/2}^{U(1)}&=\{\mathrm{OFBS}\}_{p6m,1/2}^{U(1)}/\{\mathrm{TBS}\}_{p6m,1/2}^{U(1)}\nonumber\\
&=\mathbb{Z}\times(2\mathbb{Z})^2\times\mathbb{Z}_2^5/2\mathbb{Z}\times3\mathbb{Z}\times\mathbb{Z}_2\nonumber\\
&=2\mathbb{Z}\times\mathbb{Z}_3\times\mathbb{Z}_2^4
\end{align}
\section{Generalized crystalline equivalence principle\label{principle}}
In this section, we discuss how to generalize the crystalline equivalence principle that is rigorously proven for interacting bosonic systems \cite{correspondence}.
By comparing the classification results of the topological crystalline TSC summarized in Table \ref{spinless} and Table \ref{spin-1/2} and the classification results of the crystalline TI summarized in Table \ref{insulator U(1)} with the classification results of the 2D FSPT phases protected by the corresponding on-site symmetry \cite{resolution, QingruiTI},
we verify the fermionic crystalline equivalence principle for all TSC and TI (for both spinless and spin-1/2 cases) constructed in this paper.
In particular, we should map the space group symmetry to on-site symmetry according to the following rules:
\begin{enumerate}
\item Subgroup of translational symmetry along a particular direction should be mapped to the on-site symmetry group $\mathbb{Z}$. Equivalently, the total translational subgroup should be mapped to the on-site symmetry group $\mathbb{Z}^2$;
\item $n$-fold rotational symmetry subgroup should be mapped to the on-site symmetry group $\mathbb{Z}_n$;
\item Reflection symmetry subgroup should be mapped to the time-reversal symmetry group $\mathbb{Z}_2^T$, which is antiunitary;
\item Spinless (spin-1/2) fermionic systems should be mapped
into spin-1/2 (spinless) fermionic systems.
\end{enumerate}
The additional twist on spinless and spin-$1/2$ fermions can be naturally interpreted as the spin rotation of fermions: a $2\pi$ rotation of a fermion around a specific axis results in a $-1$ phase factor \cite{supplementary}.
We conjecture that this crystalline equivalence principle also holds for 3D crystalline FSPT phases.
\section{Conclusion and discussion\label{conclusion}}
In this paper, we derive the classification of crystalline TSC and TI in 2D interacting fermionic systems by using explicit real-space constructions. For a 2D system with a specific wallpaper group symmetry, we first decompose the system into an assembly of unit cells. Then, according to the so-called \textit{extensive trivialization} scheme, we can further decompose each unit cell into an assembly of lower-dimensional blocks. After cell decomposition, we can decorate lower-dimensional block-states on them, and investigate the \textit{obstruction} and \textit{trivialization} for all block-states by checking the no-open-edge condition and bubble equivalence. An obstruction- and trivialization-free decoration corresponds to a nontrivial crystalline SPT phase. We further investigate the group structures of the classification data by considering the possible stacking between 1D and 0D block-states. Finally, we compare the complete classification results with the classification of 2D FSPT phases protected by the corresponding on-site symmetry, and verify the crystalline equivalence principle for generic 2D interacting fermionic systems.
We believe that the real-space construction scheme for crystalline SPT phases is also applicable to 3D interacting fermionic systems, with procedures similar to those discussed in this work.
In future works, we will try to construct and fully classify the crystalline TSC/TI in 3D interacting fermionic systems.
We stress that the method in this paper can also be applied to cases with a mixture of internal and space group symmetries, i.e., when considering the lower-dimensional block-states, we should also include the internal symmetry together with the space group symmetry acting internally, which leads to different lower-dimensional root phases and bubbles. Then, based on these root phases, we can further discuss possible obstructions and trivializations by using the general paradigms highlighted in Sec. \ref{general}.
Moreover, we also predict an intriguing fermionic crystalline TSC (that cannot be realized in either free-fermion or interacting bosonic systems) with $p4m$ wallpaper group symmetry.
The iron-based superconductors could be a natural strongly correlated electron system to realize such a new phase, especially the monolayer iron selenide/pnictide \cite{monolayer}. Since the spin-orbit interactions in FeSe are relatively small [distinct from Fe(Se,Te) because of the absence of tellurium], we can effectively treat fermions in this system as spinless.
\begin{acknowledgements}
We thank Qingrui Wang, Zheng-Xin Liu and Meng Cheng for enlightening discussions. This work is supported by Direct Grant No. 4053409 from The Chinese University of Hong Kong and funding from Hong Kong's Research Grants Council (GRF No.14306918, ANR/RGC Joint Research Scheme No. A-CUHK402/18). SY is supported by NSFC (Grant No. 11804181) and the National Key R\&D Program of China (Grant No. 2018YFA0306504).
\end{acknowledgements}
Kronecker's {\it Jugendtraum} is a conjecture that the maximal unramified abelian extension
(the Hilbert class field) of any algebraic number
field is generated by the special values of modular functions attached to an
abelian variety. The conjecture is true for the rational field and
imaginary quadratic fields with the modular functions
being an exponent and the $j$-invariant, respectively.
In the case of an arbitrary number field, a description of the abelian extensions is given
by class field theory, but an explicit formula for the generators of these abelian extensions,
in the sense sought by Kronecker, is unknown even for the real quadratic fields.
The problem was first studied by [Hecke 1910] \cite{Hec1}. A description of abelian extensions of real
quadratic number fields in terms of coordinates of points of finite order on abelian varieties associated
with certain modular curves was obtained in [Shimura 1972] \cite{Shi1}.
Stark formulated a number of conjectures on abelian extension of arbitrary number fields,
which in the real quadratic case amount to specifying generators of these extensions using special values of
Artin $L$-functions, see [Stark 1976] \cite{Sta1}.
Based on an analogy with complex multiplication, Manin suggested to use the so-called
``pseudo-lattices'' ${\Bbb Z}+{\Bbb Z}\theta$ in ${\Bbb R}$ having non-trivial real multiplications
to produce abelian extensions of real quadratic fields, see [Manin 2004] \cite{Man1}.
Similar to the case of complex multiplication, the endomorphism ring ${\goth R}_{\goth f}={\Bbb Z}+{\goth f} O_{\goth k}$
of pseudo-lattice ${\Bbb Z}+{\Bbb Z}\theta$ is an order in the real quadratic field ${\goth k}={\Bbb Q}(\theta)$,
where $O_{\goth k}$ is the ring of integers of ${\goth k}$ and ${\goth f}$ is the conductor of ${\goth R}_{\goth f}$;
following Manin, such pseudo-lattices are said to have {\it real multiplication}.
The aim of our note is a formula for generators of the Hilbert class field of
real quadratic fields based on a modularity and a symmetry
of complex and real multiplication. To give an idea, let
\begin{equation}
\Gamma_1(N)=\left\{\left(\matrix{a & b\cr c & d}\right)\in SL_2({\Bbb Z}) ~|~
a\equiv d\equiv 1 ~\hbox{{\bf mod}} ~N, ~c\equiv 0 ~\hbox{{\bf mod}} ~N\right\}
\end{equation}
be a congruence subgroup of level $N\ge 1$ and ${\Bbb H}$
be the Lobachevsky half-plane; let $X_1(N):={\Bbb H}/\Gamma_1(N)$ be the corresponding
modular curve and $S_2(\Gamma_1(N))$ the space of all cusp forms on $\Gamma_1(N)$ of weight 2.
Let ${\cal E}_{CM}^{(-D,f)}$ be an elliptic curve with complex multiplication
by an order $R_f={\Bbb Z}+fO_k$ in the field $k={\Bbb Q}(\sqrt{-D})$, see
[Silverman 1994] \cite{S}, Chapter II. Denote by ${\cal K}^{ab}(k):=k(j({\cal E}_{CM}^{(-D,f)}))$
the Hilbert class field of $k$ modulo conductor $f\ge 1$ and let $N=fD$;
let $Jac~(X_1(fD))$ be the Jacobian of modular curve $X_1(fD)$.
There exists an abelian sub-variety $A_{\phi}\subset Jac~(X_1(fD))$,
such that its points of finite order generate ${\cal K}^{ab}(k)$,
see [Hecke 1928] \cite{Hec2}, [Shimura 1971] \cite{Shi2}, Theorem 1 and [Shimura 1972] \cite{Shi1},
Section 8. The ${\cal K}^{ab}(k)$ is a {\it CM-field}, i.e. a totally imaginary quadratic extension of the totally real field
${\cal K}_{\phi}$ generated by the Fourier coefficients of the Hecke eigenform
$\phi(z)\in S_2(\Gamma_1(fD))$, see [Shimura 1972] \cite{Shi1}, p. 137.
In particular, there exists a holomorphic map $X_1^0(fD)\to {\cal E}_{CM}^{(-D,f)}$,
where $X_1^0(fD)$ is a Riemann surface such that $Jac~(X_1^0(fD))\cong A_{\phi}$;
we refer to the above as a {\it modularity} of complex multiplication.
Recall that the (twisted homogeneous) coordinate ring of an elliptic curve ${\cal E}({\Bbb C})$ is isomorphic to a
{\it Sklyanin algebra}, see e.g. [Stafford \& van ~den ~Bergh 2001] \cite{StaVdb1},
Example 8.5; the norm-closure of a self-adjoint representation of the Sklyanin algebra
by the linear operators on a Hilbert space ${\cal H}$ is isomorphic to a
noncommutative torus ${\cal A}_{\theta}$, see [Rieffel 1990] \cite{Rie1} for the definition.
Whenever elliptic curve ${\cal E}({\Bbb C})\cong {\cal E}_{CM}^{(-D,f)}$ has complex multiplication,
the noncommutative torus ${\cal A}_{\theta}$ has real multiplication by an order
${\goth R}_{\goth f}={\Bbb Z}+{\goth f}O_{\goth k}$ in the field ${\goth k}={\Bbb Q}(\sqrt{D})$;
moreover, it is known that ${\goth f}=f^m$ for the minimal power $m$ satisfying an isomorphism:
\begin{equation}\label{eq3}
Cl~({\goth R}_{f^m})\cong Cl~(R_f),
\end{equation}
where $Cl~(R_f)$ and $Cl~({\goth R}_{\goth f})$ are the ideal class groups of orders $R_f$ and
${\goth R}_{\goth f}$, respectively. We shall refer to (\ref{eq3}) as a {\it symmetry} of complex and real multiplication.
The noncommutative torus with real multiplication
by ${\goth R}_{\goth f}$ will be denoted by ${\cal A}_{RM}^{(D, {\goth f})}$.
\begin{rmk}
\textnormal{
The isomorphism (\ref{eq3}) can be calculated using
the well-known formula for the class number of a non-maximal order ${\Bbb Z}+fO_K$ of a quadratic
field $K={\Bbb Q}(\sqrt{D})$:
\begin{equation}
h_{{\Bbb Z}+fO_K}={h_{O_K} f \over e_f}\prod_{p|f}\left(1-\left({D\over p}\right){1\over p}\right),
\end{equation}
where $h_{O_K}$ is the class number of the maximal order $O_K$, $e_f$ is the index
of the group of units of ${\Bbb Z}+fO_K$ in the group of units of $O_K$, $p$ is a prime number and
$\left({D\over p}\right)$ is the Legendre symbol, see e.g. [Borevich \& Shafarevich 1966]
\cite{BS}, p.153 and [Hasse 1950] \cite{H}, pp. 297 and 351.
}
\end{rmk}
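For illustration, the displayed class number formula can be evaluated mechanically. The following plain Python sketch is not part of the argument: the values of $h_{O_K}$ and $e_f$ must be supplied from tables (the sample values below are hypothetical), and $f$ is restricted to be odd because for $p=2$ the Legendre symbol has to be read as the Kronecker symbol.
\begin{verbatim}
from fractions import Fraction

def legendre(a, p):
    # Legendre symbol (a/p) for an odd prime p, via Euler's criterion.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def prime_divisors(n):
    p, out = 2, []
    while p * p <= n:
        if n % p == 0:
            out.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

def class_number(h_K, e_f, D, f):
    # h_{Z + f O_K} from the displayed formula.
    h = Fraction(h_K * f, e_f)
    for p in prime_divisors(f):
        h *= 1 - Fraction(legendre(D, p), p)
    return h

# Hypothetical sample values of h_K and e_f, for illustration only:
print(class_number(h_K=1, e_f=1, D=14, f=5))   # 4
\end{verbatim}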
The (twisted homogeneous) coordinate ring of the Riemann surface $X_1^0(fD)$ is
an {\it AF-algebra} ${\Bbb A}_{\phi^0}$ linked to a holomorphic differential $\phi^0(z)dz$
on $X_1^0(fD)$, see Section 2.2, Definition \ref{dfn1} and Remark \ref{rmk4} for the details;
the Grothendieck semigroup $K_0^+({\Bbb A}_{\phi^0})$ is a pseudo-lattice
${\Bbb Z}+{\Bbb Z}\theta_1+\dots+{\Bbb Z}\theta_{n-1}$ in the number field
${\cal K}_{\phi}$, where $n$ equals the genus of $X_1^0(fD)$.
Moreover, a holomorphic map $X_1^0(fD)\to {\cal E}_{CM}^{(-D,f)}$
induces the $C^*$-algebra homomorphism ${\Bbb A}_{\phi^0}\to {\cal A}_{RM}^{(D, {\goth f})}$
between the corresponding coordinate rings, so that the following diagram commutes:
\begin{picture}(300,110)(-100,-5)
\put(20,70){\vector(0,-1){35}}
\put(130,70){\vector(0,-1){35}}
\put(52,23){\vector(1,0){53}}
\put(52,83){\vector(1,0){53}}
\put(5,20){${\cal E}_{CM}^{(-D,f)}$}
\put(120,20){${\cal A}_{RM}^{(D, {\goth f})}$}
\put(0,80){$X_1^0(fD)$}
\put(127,80){${\Bbb A}_{\phi^0}$}
\put(50, 90){coordinate}
\put(70, 70){map}
\put(50, 30){coordinate}
\put(70, 10){map}
\end{picture}
\noindent
But $K_0^+({\cal A}_{RM}^{(D, {\goth f})})$ is a pseudo-lattice ${\Bbb Z}+{\Bbb Z}\theta$ in
the field ${\goth k}$, such that $End~({\Bbb Z}+{\Bbb Z}\theta)\cong {\goth R}_{\goth f}$;
in other words,
one can use the above diagram to control the arithmetic of the field ${\cal K}_{\phi}$
by that of the real quadratic field ${\goth k}$. Roughly speaking,
this observation solves Kronecker's Jugendtraum for real quadratic fields; namely,
the following is true.
\begin{thm}\label{thm1}
The Hilbert class field of a real quadratic field ${\goth k}={\Bbb Q}(\sqrt{D})$
modulo conductor $f^m$ is an extension of ${\goth k}$
by the Fourier coefficients of the Hecke eigenform $\phi(z)\in S_2(\Gamma_1(fD))$,
where $m$ is the smallest positive integer satisfying isomorphism (\ref{eq3}).
\end{thm}
\begin{rmk}
\textnormal{
Theorem \ref{thm1} can be used to compute concrete extensions. For instance,
it says that the Hilbert class field of the quadratic field ${\Bbb Q}(\sqrt{15})$
is isomorphic to ${\Bbb Q}\left(\sqrt{-1 +\sqrt{15}}\right)$, and that
for ${\Bbb Q}(\sqrt{14})$ the corresponding field modulo conductor ${\goth f}=8$ is isomorphic to
${\Bbb Q}\left(\sqrt[4]{-27+8\sqrt{14}}\right)$; see Section 4 for more examples.
}
\end{rmk}
The article is organized as follows. Section 2 covers basic facts on real
multiplication and AF-algebras of the Hecke eigenforms. Theorem \ref{thm1}
is proved in Section 3. Section 4 contains numerical examples illustrating
theorem \ref{thm1}.
\section{Preliminaries}
The reader can find basics of $C^*$-algebras in [Murphy 1990] \cite{M}
and their $K$-theory in [Blackadar 1986] \cite{B}.
The noncommutative tori are covered in [Rieffel 1990] \cite{Rie1}
and real multiplication in [Manin 2004] \cite{Man1}.
For main ideas of non-commutative algebraic geometry, see the survey
by [Stafford \& van ~den ~Bergh 2001] \cite{StaVdb1}. The AF-algebras
are reviewed in [Effros 1981] \cite{E}. For a general theory of modular
forms we refer the reader to [Diamond \& Shurman 2005] \cite{DS}.
\subsection{Real multiplication}
The noncommutative torus ${\cal A}_{\theta}$ is a universal {\it $C^*$-algebra}
generated by the unitary operators $u$ and $v$ acting on a Hilbert space ${\cal H}$
and satisfying the commutation relation $vu=e^{2\pi i\theta}uv$, where $\theta$ is a
real number. The $C^*$-algebra ${\cal A}_{\theta}$ is said to be stably isomorphic
(Morita equivalent) to ${\cal A}_{\theta'}$, whenever ${\cal A}_{\theta}\otimes {\cal K}\cong
{\cal A}_{\theta'}\otimes {\cal K}$, where ${\cal K}$ is the $C^*$-algebra of all compact operators
on ${\cal H}$; the ${\cal A}_{\theta}$ is stably isomorphic to ${\cal A}_{\theta'}$ if and only if
\begin{equation}\label{eq6}
\theta'={a\theta +b\over c\theta+d}\quad
\hbox{for some matrix} \quad \left(\matrix{a & b\cr c & d}\right)\in SL_2({\Bbb Z}).
\end{equation}
The $K$-theory of ${\cal A}_{\theta}$ is two-periodic and
$K_0({\cal A}_{\theta})\cong K_1({\cal A}_{\theta})\cong {\Bbb Z}^2$ so that
the Grothendieck semigroup $K_0^+({\cal A}_{\theta})$ corresponds to positive reals of
the pseudo-lattice ${\Bbb Z}+{\Bbb Z}\theta\subset {\Bbb R}$.
The ${\cal A}_{\theta}$ is said to have {\it real multiplication}, if $\theta$ is a quadratic
irrationality, i.e. an irrational root of a quadratic polynomial in ${\Bbb Z}[x]$.
Real multiplication means that the endomorphism ring of the pseudo-lattice
${\Bbb Z}+{\Bbb Z}\theta$ exceeds the ring ${\Bbb Z}$ of the multiplication-by-$m$
endomorphisms; similarly to complex multiplication, it means that the
endomorphism ring is isomorphic to an order ${\goth R}_{\goth f}={\Bbb Z}+{\goth f}O_{\goth k}$
of conductor ${\goth f}\ge 1$ in the real quadratic field ${\goth k}={\Bbb Q}(\theta)$,
hence the name. If $D>0$ is the discriminant of ${\goth k}$, then
by ${\cal A}_{RM}^{(D, {\goth f})}$ we denote torus ${\cal A}_{\theta}$ with real multiplication
by the order ${\goth R}_{\goth f}$.
The Sklyanin algebra $S_{\alpha,\beta,\gamma}({\Bbb C})$ is the ${\Bbb C}$-algebra on four generators
subject to the six relations:
\begin{equation}
\left\{
\begin{array}{ccc}
x_1x_2-x_2x_1 &=& \alpha(x_3x_4+x_4x_3),\\
x_1x_2+x_2x_1 &=& x_3x_4-x_4x_3,\\
x_1x_3-x_3x_1 &=& \beta(x_4x_2+x_2x_4),\\
x_1x_3+x_3x_1 &=& x_4x_2-x_2x_4,\\
x_1x_4-x_4x_1 &=& \gamma(x_2x_3+x_3x_2),\\
x_1x_4+x_4x_1 &=& x_2x_3-x_3x_2,
\end{array}
\right.
\end{equation}
where $\alpha+\beta+\gamma+\alpha\beta\gamma=0$;
such an algebra corresponds to a {\it twisted homogeneous coordinate ring} of
an elliptic curve in the complex projective space ${\Bbb C}P^3$
given by the intersection of two quadric surfaces of the form
${\cal E}_{\alpha,\beta,\gamma}({\Bbb C})=\{(u,v,w,z)\in {\Bbb C}P^3 ~|~u^2+v^2+w^2+z^2={1-\alpha\over 1+\beta}v^2+
{1+\alpha\over 1-\gamma}w^2+z^2=0\}$.
Being such a ring means that the algebra $S_{\alpha,\beta,\gamma}$ satisfies an isomorphism
\begin{equation}
\hbox{{\bf Mod}}~(S_{\alpha,\beta,\gamma}({\Bbb C}))/
\hbox{{\bf Tors}}\cong \hbox{{\bf Coh}}~({\cal E}_{\alpha,\beta,\gamma}({\Bbb C})),
\end{equation}
where {\bf Coh} is the category of quasi-coherent sheaves on ${\cal E}_{\alpha,\beta,\gamma}({\Bbb C})$,
{\bf Mod} the category of graded left modules over the graded ring $S_{\alpha,\beta,\gamma}({\Bbb C})$
and {\bf Tors} the full sub-category of {\bf Mod} consisting of the
torsion modules, see [Stafford \& van ~den ~Bergh 2001] \cite{StaVdb1}, Example 8.5.
If one sets $x_1=u, x_2=u^*, x_3=v, x_4=v^*$, then there exists a self-adjoint representation
of the Sklyanin $\ast$-algebra $S_{\alpha, 1, -1}({\Bbb C})$ by linear operators on a Hilbert
space ${\cal H}$, such that its norm-closure is isomorphic to ${\cal A}_{\theta}$;
namely,
${\cal A}_{\theta}^0\cong S_{\alpha, 1, -1}({\Bbb C})/I_{\mu}$, where ${\cal A}_{\theta}^0$ is a dense
sub-algebra of ${\cal A}_{\theta}$ and $I_{\mu}$ is an ideal generated by the ``scaled unit''
relations $x_1x_3=x_3x_4={1\over\mu}e$, where $\mu>0$ is a constant. Thus the algebra ${\cal A}_{\theta}$
is a coordinate ring of elliptic curve ${\cal E}({\Bbb C})$, such that isomorphic elliptic
curves correspond to the stably isomorphic (Morita equivalent) noncommutative tori;
this fact explains the modular transformation law in (\ref{eq6}). In particular,
if ${\cal E}({\Bbb C})$ has complex multiplication by an order $R_f={\Bbb Z}+fO_k$ in
a quadratic field $k={\Bbb Q}(\sqrt{-D})$, then ${\cal A}_{\theta}$ has real multiplication
by an order ${\goth R}_{\goth f}={\Bbb Z}+{\goth f}O_{\goth k}$ in the quadratic field
${\goth k}={\Bbb Q}(\sqrt{D})$, where ${\goth f}$ is the smallest integer satisfying
an isomorphism $Cl~({\goth R}_{\goth f})\cong Cl~(R_f)$, see \cite{Nik2};
the isomorphism is a necessary and
sufficient condition for ${\cal A}_{RM}^{(D, {\goth f})}$ to discern non-isomorphic
elliptic curves ${\cal E}_{CM}^{(-D,f)}$ having the same endomorphism ring $R_f$.
For the constraint ${\goth f}=f^m$, see remark \ref{rmk5}.
\subsection{AF-algebra of the Hecke eigenform}
An {\it AF-algebra} (Approximately Finite $C^*$-algebra) is defined to
be the norm closure of an ascending sequence of finite dimensional
$C^*$-algebras $M_n$, where $M_n$ is the $C^*$-algebra of the $n\times n$ matrices
with entries in ${\Bbb C}$. Here the index $n=(n_1,\dots,n_k)$ represents
the semi-simple matrix algebra $M_n=M_{n_1}\oplus\dots\oplus M_{n_k}$.
The ascending sequence mentioned above can be written as
$M_1\buildrel\rm\varphi_1\over\longrightarrow M_2
\buildrel\rm\varphi_2\over\longrightarrow\dots,
$
where $M_i$ are the finite dimensional $C^*$-algebras and
$\varphi_i$ the homomorphisms between such algebras.
The homomorphisms $\varphi_i$ can be arranged into a graph as follows.
Let $M_i=M_{i_1}\oplus\dots\oplus M_{i_k}$ and
$M_{i'}=M_{i_1'}\oplus\dots\oplus M_{i_k'}$ be
the semi-simple $C^*$-algebras and $\varphi_i: M_i\to M_{i'}$ the homomorphism.
One has two sets of vertices $V_{i_1},\dots, V_{i_k}$ and $V_{i_1'},\dots, V_{i_k'}$
joined by $b_{rs}$ edges whenever the summand $M_{i_s'}$ contains $b_{rs}$
copies of the summand $M_{i_r}$ under the embedding $\varphi_i$.
As $i$ varies, one obtains an infinite graph called the {\it Bratteli diagram} of the
AF-algebra. The matrix $B=(b_{rs})$ is known as a {\it partial multiplicity matrix};
an infinite sequence of $B_i$ defines a unique AF-algebra.
An AF-algebra is called {\it stationary} if $B_i=Const=B$, see [Effros 1981] \cite{E}, Chapter 6;
when two non-similar matrices $B$ and $B'$ have the same characteristic polynomial,
the corresponding stationary AF-algebras will be called {\it companion AF-algebras}.
Let $N\ge 1$ be a natural number and consider a (finite index) subgroup
of the modular group given by the formula:
\begin{equation}
\Gamma_1(N)=\left\{\left(\matrix{a & b\cr c & d}\right)\in SL_2({\Bbb Z}) ~|~
a\equiv d\equiv 1 ~\hbox{{\bf mod}} ~N, ~c\equiv 0 ~\hbox{{\bf mod}} ~N\right\}.
\end{equation}
Let ${\Bbb H}=\{z=x+iy\in {\Bbb C} ~|~ y>0\}$ be the upper half-plane and
let $\Gamma_1(N)$ act on ${\Bbb H}$ by the linear fractional
transformations; consider an orbifold ${\Bbb H}/\Gamma_1(N)$.
To compactify the orbifold
at the cusps, one adds a boundary to ${\Bbb H}$, so that
${\Bbb H}^*={\Bbb H}\cup {\Bbb Q}\cup\{\infty\}$ and the compact Riemann surface
$X_1(N)={\Bbb H}^*/\Gamma_1(N)$ is called a {\it modular curve}.
The meromorphic functions $\phi(z)$ on ${\Bbb H}$ that
vanish at the cusps and such that
\begin{equation}
\phi\left({az+b\over cz+d}\right)= (cz+d)^2 \phi(z),\qquad
\forall \left(\matrix{a & b\cr c & d}\right)\in\Gamma_1(N),
\end{equation}
are called {\it cusp forms} of weight two; the (complex linear) space of such forms
will be denoted by $S_2(\Gamma_1(N))$. The formula $\phi(z)\mapsto \omega=\phi(z)dz$
defines an isomorphism $S_2(\Gamma_1(N))\cong \Omega_{hol}(X_1(N))$, where
$\Omega_{hol}(X_1(N))$ is the space of all holomorphic differentials
on the Riemann surface $X_1(N)$. Note that
\linebreak
$\dim_{\Bbb C}(S_2(\Gamma_1(N)))=\dim_{\Bbb C}(\Omega_{hol}(X_1(N)))=g$,
where $g=g(N)$ is the genus of the surface $X_1(N)$.
A {\it Hecke operator}, $T_n$, acts on $S_2(\Gamma_1(N))$ by the formula
$T_n \phi=\sum_{m\in {\Bbb Z}}\gamma(m)q^m$, where
$\gamma(m)= \sum_{a|\hbox{{\bf GCD}}(m,n)}a\, c(mn/a^2)$ and
$\phi(z)=\sum_{m\in {\Bbb Z}}c(m)q^m$ is the Fourier
series of the cusp form $\phi$ at $q=e^{2\pi iz}$. Further, $T_n$ is a
self-adjoint linear operator on the vector space $S_2(\Gamma_1(N))$
endowed with the Petersson inner product; the algebra
${\Bbb T}_N :={\Bbb Z}[T_1,T_2,\dots]$ is a commutative algebra.
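The action of $T_n$ on the Fourier coefficients is readily transcribed into code. The following Python sketch is a naive illustration with hypothetical helper names; here {\tt c} is any function returning the coefficients $c(m)$ of a fixed cusp form.
\begin{verbatim}
from math import gcd

def hecke_gamma(c, m, n):
    # gamma(m) = sum over a | gcd(m, n) of a * c(m n / a^2);
    # assumes m, n >= 1
    g = gcd(m, n)
    return sum(a * c(m * n // a ** 2)
               for a in range(1, g + 1) if g % a == 0)
\end{verbatim}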
Any cusp form $\phi\in S_2(\Gamma_1(N))$ that is an eigenvector
for one (and hence all) of $T_n$, is referred to
as a {\it Hecke eigenform}.
The Fourier coefficients $c(m)$ of $\phi$ are algebraic integers, and we
denote by ${\cal K}_{\phi}={\Bbb Q}(c(m))$ an extension of the field ${\Bbb Q}$
by the Fourier coefficients of $\phi$. Then ${\cal K}_{\phi}$
is a real algebraic number field of degree $1\le \deg~({\cal K}_{\phi} | {\Bbb Q})\le g$,
where $g$ is the genus of the surface $X_1(N)$, see e.g. [Diamond \& Shurman 2005] \cite{DS}, Proposition 6.6.4.
Any embedding $\sigma: {\cal K}_{\phi}\to {\Bbb C}$ conjugates $\phi$ by acting
on its coefficients; we write the corresponding Hecke eigenform
$\phi^{\sigma}(z):=\sum_{m\in {\Bbb Z}}\sigma(c(m))q^m$ and call $\phi^{\sigma}$
a {\it conjugate} of the Hecke eigenform $\phi$.
Let $\omega=\phi(z)dz\in\Omega_{hol}(X)$ be a holomorphic differential on a Riemann surface $X$.
We shall denote by $\Re~(\omega)$ a closed form on $X$ (the real part of $\omega$)
and consider its periods $\lambda_i=\int_{\gamma_i}\Re~(\omega)$
against a basis $\gamma_i$ in the (relative) homology group
$H_1(X, Z(\Re~(\omega)); ~{\Bbb Z})$, where $Z(\Re~(\omega))$ is the set of zeros of the form
$\Re~(\omega)$.
Assume $\lambda_i> 0$ and consider the vector $\theta=(\theta_1,\dots,\theta_{n-1})$
with $\theta_i=\lambda_{i+1} / \lambda_1$. The {\it Jacobi-Perron continued fraction} of
$\theta$ is given by the formula:
\begin{equation}
\left(\matrix{1\cr \theta}\right)=
\lim_{i\to\infty} \left(\matrix{0 & 1\cr I & b_1}\right)\dots
\left(\matrix{0 & 1\cr I & b_i}\right)
\left(\matrix{0\cr {\Bbb I}}\right)=
\lim_{i\to\infty} B_i\left(\matrix{0\cr {\Bbb I}}\right),
\end{equation}
where $b_i=(b^{(i)}_1,\dots, b^{(i)}_{n-1})^T$ is a vector of non-negative integers,
$I$ is the unit matrix and ${\Bbb I}=(0,\dots, 0, 1)^T$, see e.g. [Bernstein 1971] \cite{BE}.
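For the simplest case $n=2$ the Jacobi-Perron expansion reduces to the ordinary continued fraction, and the formula above can be tested numerically. The following Python sketch (an illustration under this simplifying assumption; it presumes $\theta>0$ irrational) computes the digits $b_i$ and reconstructs $\theta$ from the matrix product.
\begin{verbatim}
import numpy as np

def jp_digits(theta, k):
    # for n = 2 the algorithm is the ordinary continued
    # fraction: b_i = floor(x), then x -> 1 / (x - b_i)
    digits, x = [], theta
    for _ in range(k):
        b = int(x)
        digits.append(b)
        x = 1.0 / (x - b)
    return digits

def reconstruct(digits):
    # (1, theta)^T is the limiting direction of
    # B_1 B_2 ... B_k (0, 1)^T with B_i = [[0, 1], [1, b_i]]
    P = np.eye(2)
    for b in digits:
        P = P @ np.array([[0.0, 1.0], [1.0, float(b)]])
    v = P @ np.array([0.0, 1.0])
    return v[1] / v[0]
\end{verbatim}
For instance, $\theta=\sqrt{2}$ yields the periodic digits $1,2,2,2,\dots$ and {\tt reconstruct} converges back to $\theta$.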
By ${\Bbb A}_{\phi}$ we shall understand the AF-algebra
given by the Bratteli diagram with the partial multiplicity matrices $B_i$.
If $\phi(z)\in S_2(\Gamma_1(N))$ is a Hecke eigenform, then the
corresponding AF-algebra ${\Bbb A}_{\phi}$ is {\it stationary} with the partial multiplicity
matrices $B_i=Const=B$; moreover, each conjugate eigenform $\phi^{\sigma}$
defines a {\it companion} AF-algebra ${\Bbb A}_{\phi^{\sigma}}$.
It is known that $K_0^+({\Bbb A}_{\phi})\cong {\Bbb Z}+{\Bbb Z}\theta_1+\dots+{\Bbb Z}\theta_{n-1}
\subset {\cal K}_{\phi}$, where ${\cal K}_{\phi}$ is an algebraic number field generated by the Fourier
coefficients of $\phi$, see \cite{Nik1}.
\section{Proof of theorem \ref{thm1}}
\begin{dfn}\label{dfn1}
Let $A_{\phi}\subset Jac~(X_1(fD))$ be an abelian variety associated to the
Hecke eigenform $\phi(z)\in S_2(\Gamma_1(fD))$, see e.g. [Diamond \& Shurman 2005] \cite{DS},
Definition 6.6.3. By $X_1^0(fD)$ we shall understand the Riemann surface of genus $g$,
such that
\begin{equation}
Jac~(X_1^0(fD))\cong A_{\phi}.
\end{equation}
By $\phi^0(z)dz\in\Omega_{hol}(X_1^0(fD))$ we denote the
image of the Hecke eigenform $\phi(z)dz\in\Omega_{hol}(X_1(fD))$
under the holomorphic map $X_1(fD)\to X_1^0(fD)$.
\end{dfn}
\begin{rmk}
\textnormal{
The surface $X_1^0(fD)$ is well defined. Indeed,
since the abelian variety $A_{\phi}$ is the product of $g$ copies of an
elliptic curve with complex multiplication, there exists a holomorphic map from
$A_{\phi}$ to the elliptic curve. For a Riemann
surface $X$ of genus $g$ covering the elliptic curve ${\cal E}_{CM}$ by a holomorphic
map (such a surface and a map always exist), one gets a period map $X\to A_{\phi}$
by closing the arrows of a commutative diagram $A_{\phi}\rightarrow {\cal E}_{CM}
\leftarrow X$. It is easy to see that the Jacobian of $X$ coincides
with $A_{\phi}$ and we set $X_1^0(fD) := X$.
}
\end{rmk}
\begin{lem}\label{lm1}
$g(X_1^0(fD))=\deg~({\cal K}^{ab}(k)~|~k)$.
\end{lem}
{\it Proof.}
By definition, the abelian variety $A_{\phi}$ is the quotient of ${\Bbb C}^n$ by the lattice
of periods of the Hecke eigenform $\phi(z)\in S_2(\Gamma_1(fD))$ and all its
conjugates $\phi^{\sigma}(z)$ on the Riemann surface $X_1(fD)$. These periods
are complex algebraic numbers generating the Hilbert class field ${\cal K}^{ab}(k)$
over imaginary quadratic field $k={\Bbb Q}(\sqrt{-D})$ modulo conductor $f$, see
[Hecke 1928] \cite{Hec2}, [Shimura 1971] \cite{Shi2} and [Shimura 1972] \cite{Shi1},
Section 8. The number of linearly independent periods is equal to the total number
of the conjugate eigenforms $\phi^{\sigma}(z)$, i.e. $|\sigma|=n=\dim_{\Bbb C} (A_{\phi})$.
Since real dimension $\dim_{\Bbb R} (A_{\phi})=2n$, we conclude that
$\deg~({\cal K}^{ab}(k)|{\Bbb Q})=2n$ and, therefore, $\deg ~({\cal K}^{ab}(k)|k)=n$.
But $\dim_{\Bbb C} (A_{\phi})=g(X_1^0(fD))$ and one gets
$g(X_1^0(fD))=\deg ~({\cal K}^{ab}(k)|k)$. Lemma \ref{lm1} follows.
$\square$
\begin{cor}\label{cor1}
$g(X_1^0(fD))=|Cl~(R_f)|$.
\end{cor}
{\it Proof.}
Because ${\cal K}^{ab}(k)$ is the Hilbert class field over $k$ modulo conductor $f$,
we must have
\begin{equation}\label{eq12}
Gal~({\cal K}^{ab}(k)|k)\cong Cl~(R_f),
\end{equation}
where $Gal$ is the Galois group of the extension ${\cal K}^{ab}(k)|k$ and $Cl~(R_f)$
is the class group of ring $R_f$, see e.g. [Silverman 1994] \cite{S}, p.112.
But $|Gal~({\cal K}^{ab}(k)|k)|=\deg ~({\cal K}^{ab}(k) | k)$ and by lemma \ref{lm1} we have
$\deg~({\cal K}^{ab}(k) | k)=g(X_1^0(fD))$.
In view of this and isomorphism (\ref{eq12}), one gets $|Cl~(R_f)|=|Gal~({\cal K}^{ab}(k)|k)|=g(X_1^0(fD))$.
Corollary \ref{cor1} follows.
$\square$
\begin{lem}\label{lm2}
$g(X_1^0(fD))=\deg~({\cal K}_{\phi} ~|~{\Bbb Q})$.
\end{lem}
{\it Proof.}
It is known that $\dim_{\Bbb C} (A_{\phi})=\deg~({\cal K}_{\phi} ~|~{\Bbb Q})$,
see e.g. [Diamond \& Shurman 2005] \cite{DS}, Proposition 6.6.4.
But abelian variety $A_{\phi}\cong Jac~(X_1^0(fD))$ and, therefore,
$\dim_{\Bbb C} (A_{\phi})=\dim_{\Bbb C} (Jac~(X_1^0(fD)))=g(X_1^0(fD))$,
hence the lemma.
$\square$
\begin{cor}\label{cor2}
$\deg~({\cal K}_{\phi} ~|~{\Bbb Q})=|Cl~({\goth R}_{\goth f})|$.
\end{cor}
{\it Proof.}
From lemma \ref{lm2} and corollary \ref{cor1} one gets $\deg~({\cal K}_{\phi} |{\Bbb Q})=|Cl~(R_f)|$.
In view of this and equality (\ref{eq3}), one gets the conclusion of corollary \ref{cor2}.
$\square$
\begin{lem}\label{lm3}
{\bf (Basic lemma)}
$Gal~({\cal K}_{\phi}~|~{\Bbb Q})\cong Cl~({\goth R}_{\goth f})$.
\end{lem}
{\it Proof.}
Let us outline the proof. In view of lemma \ref{lm2} and corollaries \ref{cor1}-\ref{cor2},
we denote by $h$ the common value $g(X_1^0(fD))=|Cl~(R_f)|=|Cl~({\goth R}_{\goth f})|=
\deg~({\cal K}_{\phi}|{\Bbb Q})$.
Since $\deg~({\cal K}_{\phi}|{\Bbb Q})=h$, there exist $h$ conjugate
Hecke eigenforms $\phi_i(z)\in S_2(\Gamma_1(fD))$, see e.g.
[Diamond \& Shurman 2005] \cite{DS}, Theorem 6.5.4; thus one gets $h$
holomorphic forms $\{\phi_1^0,\dots,\phi_h^0\}$ on the Riemann surface $X_1^0(fD)$.
Let $\{{\Bbb A}_{\phi_1^0},\dots, {\Bbb A}_{\phi_h^0}\}$
be the corresponding stationary AF-algebras; the ${\Bbb A}_{\phi_i^0}$ are {\it companion}
AF-algebras, see Section 1.2.
Recall that the characteristic polynomial for the partial multiplicity matrices $B_{\phi_i^0}$
of companion AF-algebras ${\Bbb A}_{\phi_i^0}$ is the same; it is a minimal polynomial
of degree $h$; let $\{\lambda_1,\dots,\lambda_h\}$ be the roots of this polynomial,
compare with [Effros 1981] \cite{E}, Corollary 6.3.
Since $\det~(B_{\phi_i^0})=1$, the numbers $\lambda_i$ are algebraic units of the field ${\cal K}_{\phi}$.
Moreover, $\lambda_i$ are algebraically conjugate and can be taken for generators
of the extension ${\cal K}_{\phi}|{\Bbb Q}$; since $\deg~({\cal K}_{\phi}|{\Bbb Q})=h=|Cl~({\goth R}_{\goth f})|$
there exists a natural action of the group $Cl~({\goth R}_{\goth f})$ on these generators. The action extends to
automorphisms of the entire field ${\cal K}_{\phi}$ preserving ${\Bbb Q}$; thus one gets
the Galois group of extension ${\cal K}_{\phi}|{\Bbb Q}$ and an isomorphism
$Gal~({\cal K}_{\phi}|{\Bbb Q})\cong Cl~({\goth R}_{\goth f})$. Let us pass to a step-by-step argument.
\bigskip
(i) Let $h:=g(X_1^0(fD))=|Cl~(R_f)|=|Cl~({\goth R}_{\goth f})|$
and let $\phi(z)\in S_2(\Gamma_1(fD))$ be the
Hecke eigenform. It is known that there exist $h$
conjugate Hecke eigenforms $\{\phi_1,\dots,\phi_h\}$, one of which is $\phi(z)$, see
[Diamond \& Shurman 2005] \cite{DS}, Theorem 6.5.4.
Let $\{\phi_1^0,\dots,\phi_h^0\}$ be the corresponding forms on the Riemann surface $X_1^0(fD)$.
\begin{rmk}
\textnormal{
The forms $\{\phi_1^0,\dots,\phi_h^0\}$ can be taken as a basis of the space $\Omega_{hol}(X_1^0(fD))$;
we leave it to the reader to verify that the abelian variety $A_{\phi}$ is isomorphic to the quotient of ${\Bbb C}^h$ by
the lattice of periods of holomorphic differentials $\phi_i^0(z) dz$ on $X_1^0(fD)$.
}
\end{rmk}
\bigskip
(ii) Let ${\Bbb A}_{\phi_i^0}$ be the AF-algebra corresponding to holomorphic differential $\phi_i^0(z)dz$
on $X_1^0(fD)$, see Section 2.2;
the set $\{{\Bbb A}_{\phi_1^0},\dots, {\Bbb A}_{\phi_h^0}\}$ consists of the companion AF-algebras.
It is known that each ${\Bbb A}_{\phi_i^0}$ is a stationary AF-algebra, i.e. its partial multiplicity matrix is a constant;
we shall denote such a matrix by $B_{\phi_i^0}$.
\bigskip
(iii) By definition, the matrices $B_{\phi_i^0}$ of the companion AF-algebras ${\Bbb A}_{\phi_i^0}$
have the same characteristic polynomial $p(x)\in {\Bbb Z}[x]$; the matrices $B_{\phi_i^0}$ themselves are not pairwise
similar and, therefore, the AF-algebras ${\Bbb A}_{\phi_i^0}$ are not pairwise isomorphic. The total
number $h$ of such matrices is equal to the class number of the endomorphism ring of
pseudo-lattice $K_0^+({\Bbb A}_{\phi_i^0})\cong {\Bbb Z}+{\Bbb Z}\theta_1^i+\dots+{\Bbb Z}\theta_{h-1}^i
\subset {\cal K}_{\phi}$, see [Effros 1981] \cite{E}, Corollary 6.3.
\begin{rmk}\label{rmk4}
\textnormal{
Notice that there are $h$ pairwise non-isomorphic Riemann surfaces $\{X_1,\dots, X_h\}$
of the form $X:=X_1^0(fD)$, each endowed with a holomorphic map $X_i\to {\cal E}_i$,
where $\{{\cal E}_1,\dots, {\cal E}_h\}$ are pairwise non-isomorphic elliptic curves
${\cal E}_{CM}^{(-D,f)}$ corresponding to elements of the group $Cl~(R_f)$.
Thus the companion AF-algebras $\{{\Bbb A}_{\phi_1^0},\dots, {\Bbb A}_{\phi_h^0}\}$
can be viewed as coordinate rings of $\{X_1,\dots, X_h\}$;
the latter means that ${\Bbb A}_{\phi_i^0}$ discern non-isomorphic Riemann surfaces and
$K_0^+({\Bbb A}_{\phi_i^0})\cong {\Bbb Z}+{\Bbb Z}\theta_1^i+\dots+{\Bbb Z}\theta_{h-1}^i$
represents the moduli space of $X_1^0(fD)$.
}
\end{rmk}
\bigskip
(iv) The polynomial $p(x)$ is minimal and splits in the totally real field ${\cal K}_{\phi}$.
Indeed, the matrices $B_{\phi_i^0}$ generate the Hecke algebra ${\Bbb T}_N$
on $S_2(\Gamma_1(N))$; thus each $B_{\phi_i^0}$ is self-adjoint and, therefore,
all eigenvalues are real of multiplicity one; since $B_{\phi_i^0}$ has integer entries, all roots
of characteristic polynomial $p(x)$ of $B_{\phi_i^0}$ belong to the field ${\cal K}_{\phi}$.
\bigskip
(v) Let $p(x)=(x-\lambda_1)\dots (x-\lambda_h)$. It is easy to see that $\lambda_i$ are algebraic units
of the field ${\cal K}_{\phi}$ because $\det~(B_{\phi_i^0})=1$; note that the numbers
$\{\lambda_1,\dots,\lambda_h\}$ are algebraically conjugate.
Since $\deg~({\cal K}_{\phi}|{\Bbb Q})=h$, the numbers $\lambda_i$ can be taken as
generators of the field ${\cal K}_{\phi}$, i.e. ${\cal K}_{\phi}={\Bbb Q}(\lambda_1,\dots,\lambda_h)$.
\bigskip
(vi) Finally, let us establish an explicit formula for the isomorphism
\begin{equation}\label{eq17}
Cl~({\goth R}_{\goth f})\to Gal~({\cal K}_{\phi}|{\Bbb Q})
\end{equation}
Since $Gal~({\cal K}_{\phi}|{\Bbb Q})$ is an automorphism group of the
field ${\cal K}_{\phi}$ preserving ${\Bbb Q}$, it will suffice to define the action $\ast$ of an
element $a\in Cl~({\goth R}_{\goth f})$ on the generators $\lambda_i$ of ${\cal K}_{\phi}$.
Let $\{a_1,\dots,a_h\}$ be the set of all elements of the group $Cl~({\goth R}_{\goth f})$.
For an element $a\in Cl~({\goth R}_{\goth f})$ define an index function $\alpha$
by the formula $a_ia=a_{\alpha(i)}$. Then the action $\ast$ of an
element $a\in Cl~({\goth R}_{\goth f})$ on the generators $\lambda_i$ of the field
${\cal K}_{\phi}$ is given by the formula:
\begin{equation}\label{eq18}
a\ast \lambda_i:= \lambda_{\alpha(i)}, \qquad\forall a\in Cl~({\goth R}_{\goth f}).
\end{equation}
It is easy to verify that formula (\ref{eq18}) gives an isomorphism
$Cl~({\goth R}_{\goth f})\to Gal~({\cal K}_{\phi}|{\Bbb Q})$, which is independent of the choice of
$\{a_i\}$ and $\{\lambda_i\}$. This argument completes the proof of lemma \ref{lm3}.
$\square$
\begin{rmk}\label{rmk5}
\textnormal{
Class field theory says that ${\goth f}=f^m$, i.e.
the extensions of fields $k$ and ${\goth k}$ must ramify over the same
set of prime ideals. Indeed, consider the commutative diagram below,
\begin{figure}[h]
\begin{picture}(300,110)(-100,-5)
\put(30,70){\vector(0,-1){35}}
\put(155,70){\vector(0,-1){35}}
\put(52,23){\vector(1,0){73}}
\put(52,83){\vector(1,0){73}}
\put(28,20){$I_{\goth f}$}
\put(135,20){$Gal~({\cal K}_{\phi}|{\Bbb Q})$}
\put(28,80){$I_f$}
\put(135,80){$Gal~({\cal K}^{ab}(k)|{\Bbb Q})$}
\put(68, 90){Artin}
\put(50, 70){homomorphism}
\put(68, 30){Artin}
\put(50, 10){homomorphism}
\end{picture}
\end{figure}
where $I_f$ and $I_{\goth f}$ are groups of all ideals of $k$ and
${\goth k}$, which are relatively prime to the principal ideals $(f)$ and $({\goth f})$,
respectively. Since $Gal~({\cal K}^{ab}(k)|{\Bbb Q})\cong Gal~({\cal K}_{\phi}|{\Bbb Q})$,
one gets an isomorphism $I_f\cong I_{\goth f}$, i.e. ${\goth f}=f^m$ for some
positive integer $m$.
}
\end{rmk}
\begin{cor}\label{cr3}
The Hilbert class field of real quadratic field ${\goth k}={\Bbb Q}(\sqrt{D})$
modulo conductor ${\goth f}\ge 1$ is isomorphic to the field ${\goth k}({\cal K}_{\phi})$
generated by the Fourier coefficients of the Hecke eigenform $\phi(z)\in S_2(\Gamma_1(fD))$.
\end{cor}
{\it Proof.}
As in the classical case of imaginary quadratic fields,
notice that $\deg~({\cal K}_{\phi}|{\Bbb Q})=\deg~({\goth k}({\cal K}_{\phi})|{\goth k})=|Cl~({\goth R}_{\goth f})|$;
therefore corollary \ref{cr3} is an implication of lemma \ref{lm3} and isomorphism
$Gal~({\cal K}_{\phi}|{\Bbb Q})\cong Gal~({\goth k}({\cal K}_{\phi})|{\goth k})\cong Cl~({\goth R}_{\goth f})$.
$\square$
\bigskip
Theorem \ref{thm1} follows from corollary \ref{cr3}.
$\square$
\section{Examples}
Along with the method of Stark's units [Cohen \& Roblot 2000] \cite{CoRo1}, Theorem \ref{thm1} can be used in computational number theory. For the sake of clarity,
we shall consider the simplest examples; more can be found in Table 1.
\begin{exm}
\textnormal{
Let $D=15$. The class number of the quadratic field $k={\Bbb Q}(\sqrt{-15})$
is known to be $2$; the class number of the quadratic field ${\goth k}={\Bbb Q}(\sqrt{15})$
is also equal to $2$. Thus,
\begin{equation}
Cl~({\goth R}_{{\goth f}=1})\cong Cl~(R_{f=1})\cong {\Bbb Z}/2 {\Bbb Z},
\end{equation}
and isomorphism (\ref{eq3}) is trivially satisfied for each power $m$,
i.e. one obtains an unramified extension.
By theorem \ref{thm1}, the Hilbert class field of ${\goth k}$
is generated by the Fourier coefficients of the Hecke eigenform
$\phi(z)\in S_2(\Gamma_1(15))$.
Using the computer program {\it SAGE} created by William ~A. ~Stein, one finds an
irreducible factor $p(x)=x^2-4x+5$ of the characteristic polynomial of the Hecke operator $T_{p=2}$
acting on the space $S_2(\Gamma_1(15))$. Therefore, the Fourier coefficient $c(2)$ coincides with a root
of equation $p(x)=0$; in other words, we arrive at an extension of ${\goth k}$ by the polynomial $p(x)$.
The generator $x$ of the field ${\cal K}_{\phi}={\Bbb Q}(c(2))$ is a root of the
bi-quadratic equation $[(x-2)^2+1]^2-15=0$; it is easy to see that
$x=2+\sqrt{-1+\sqrt{15}}$.
One concludes that the field
${\cal K}_{\phi}\cong {\Bbb Q}\left(\sqrt{-1+\sqrt{15}}\right)$
is the Hilbert class field of quadratic field ${\goth k}={\Bbb Q}(\sqrt{15})$.
}
\end{exm}
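The computation quoted in the example can be repeated along the following lines in a {\it SAGE} session; this is only a sketch in Python/Sage syntax, and the exact space of modular symbols used in the text is our assumption.
\begin{verbatim}
# run inside a Sage session
M = ModularSymbols(Gamma1(15), 2, sign=1)
print(M.hecke_matrix(2).charpoly().factor())
\end{verbatim}
One then looks for the irreducible factor $x^2-4x+5$ quoted above among the printed factors.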
\begin{exm}
\textnormal{
Let $D=14$. It is known that for the quadratic field $k={\Bbb Q}(\sqrt{-14})$ we have $|Cl~(R_{f=1})|=4$,
while for the quadratic field ${\goth k}={\Bbb Q}(\sqrt{14})$ it holds $|Cl~({\goth R}_{{\goth f}=1})|=1$.
However, for the ramified extensions one obtains
the following isomorphism:
\begin{equation}
Cl~({\goth R}_{{\goth f}=2^3})\cong Cl~(R_{f=2})\cong {\Bbb Z}/4 {\Bbb Z},
\end{equation}
where $m=3$ is the smallest integer satisfying formula (\ref{eq3}).
By theorem \ref{thm1}, the Hilbert class field of ${\goth k}$ modulo ${\goth f}=8$ is generated by the
Fourier coefficients of the Hecke eigenform $\phi(z)\in S_2(\Gamma_1(2\times 14))$.
Using {\it SAGE}, one finds that the characteristic polynomial of the Hecke operator $T_{p=3}$ on
$S_2(\Gamma_1(2\times 14))$ has an irreducible factor $p(x)=x^4+3x^2+9$.
Thus the Fourier coefficient $c(3)$ is a root of the polynomial $p(x)$ and one gets
an extension of ${\goth k}$ by the polynomial $p(x)$.
In other words, the generator $x$ of the field ${\cal K}_{\phi}={\Bbb Q}(c(3))$
is a root of the polynomial equation $(x^4+3x^2+9)^2-4\times 14=0$.
The bi-quadratic equation $x^4+3x^2+9-2\sqrt{14}=0$ has discriminant $-27+8\sqrt{14}$
and one finds a generator of ${\cal K}_{\phi}$ to be $\sqrt[4]{-27+8\sqrt{14}}$.
Thus the field ${\Bbb Q}\left(\sqrt[4]{-27+8\sqrt{14}}\right)$
is the Hilbert class field over ${\Bbb Q}(\sqrt{14})$ modulo conductor ${\goth f}=8$.
Clearly, the extension is ramified over the prime ideal ${\goth p}=(2)$.
}
\end{exm}
\begin{rmk}
\textnormal{
Table 1 below lists quadratic fields for some square-free discriminants $2\le D\le 101$.
The conductors $f$ and ${\goth f}$ satisfying equation (\ref{eq3}) were calculated using
tables for the class number of non-maximal orders in quadratic fields
posted at {\sf www.numbertheory.org}; the site is maintained by Keith ~Matthews.
We focused on small conductors; the interested reader
can compute the higher conductors using a pocket calculator.
In contrast, computation of the generator $x$ of the Hilbert class field
requires the program {\it SAGE} created
by William ~A. ~Stein. We write an explicit
formula for $x$ or its minimal polynomial $p(x)$ over ${\goth k}$.
}
\end{rmk}
\begin{table}[t]
\begin{tabular}{c|c|c|c|c}
\hline
&&&&\\
$D$ & $f$ & $Cl~(R_f)$ & ${\goth f}$ & Hilbert class field of ${\Bbb Q}(\sqrt{D})$\\
&&&& modulo conductor ${\goth f}$\\
&&&&\\
\hline
$2$ & $1$ & trivial & $1$ & ${\Bbb Q}(\sqrt{2})$ \\
\hline
$3$ & $1$ & trivial & $1$ & ${\Bbb Q}(\sqrt{3})$ \\
\hline
$7$ & $1$ & trivial & $1$ & ${\Bbb Q}(\sqrt{7})$ \\
\hline
$11$ & $1$ & trivial & $1$ & ${\Bbb Q}(\sqrt{11})$ \\
\hline
&&&&\\
$14$ & $2$ & ${\Bbb Z}/4 {\Bbb Z}$ & $8$ & ${\Bbb Q}\left(\sqrt[4]{-27+8\sqrt{14}}\right)$ \\
&&&&\\
\hline
&&&&\\
$15$ & $1$ & ${\Bbb Z}/2 {\Bbb Z}$ & $1$ & ${\Bbb Q}\left(\sqrt{-1 +\sqrt{15}}\right)$ \\
&&&&\\
\hline
$19$ & $1$ & trivial & $1$ & ${\Bbb Q}(\sqrt{19})$\\
\hline
$21$ & $2$ & ${\Bbb Z}/4 {\Bbb Z}$ & $8$ & ${\Bbb Q}\left(\sqrt[4]{-3 +2\sqrt{21}}\right)$ \\
\hline
&&&&\\
$35$ & $1$ & ${\Bbb Z}/2 {\Bbb Z}$ & $1$ & ${\Bbb Q}\left(\sqrt{17+\sqrt{35}}\right)$ \\
&&&&\\
\hline
$43$ & $1$ & trivial & $1$ & ${\Bbb Q}(\sqrt{43})$\\
\hline
&&&&\\
$51$ & $1$ & ${\Bbb Z}/2 {\Bbb Z}$ & $1$ & ${\Bbb Q}\left(\sqrt{17+\sqrt{51}}\right)$ \\
&&&&\\
\hline
$58$ & $1$ & ${\Bbb Z}/2 {\Bbb Z}$ & $1$ & ${\Bbb Q}\left(\sqrt{-1+\sqrt{58}}\right)$ \\
\hline
$67$ & $1$ & trivial & $1$ & ${\Bbb Q}(\sqrt{67})$ \\
\hline
$82$ & $1$ & ${\Bbb Z}/4 {\Bbb Z}$ & $1$ & $x^4-2x^3+4x^2-8x+16$ \\
\hline
&&&&\\
$91$ & $1$ & ${\Bbb Z}/2 {\Bbb Z}$ & $1$ & ${\Bbb Q}\left(\sqrt{-3 +\sqrt{91}}\right)$ \\
&&&&\\
\hline
\end{tabular}
\caption{Square-free discriminants $2\le D\le 101$.}
\end{table}
\bigskip\noindent
{\sf Acknowledgment.} I thank Yu.~I.~Manin for helpful correspondence.
\section*{Introduction}
Since the first discoveries of exoplanets in the 1990s \cite{1992Natur.355..145W, 1995Natur.378..355M},
expectations for the discovery of extraterrestrial life have increased.
The most likely outcomes are the discovery of traces of life in our Solar System, on bodies such as Mars, Europa or Enceladus, via in-situ explorations made by spacecraft,
or the discovery of biosignatures on exoplanets in their host stars' habitable zones via astronomical high-resolution spectroscopic observations.
On the other hand, there is a slight possibility that an advanced civilization will be found before these discoveries are made.
For example, the Square Kilometer Array (SKA), which is the international radio telescope project currently under development,
will be capable of detecting leakage emissions from Earth-level civilizations within 100 pc \cite{2015aska.confE.116S}.
Recently, the Five-hundred-meter Aperture Spherical radio Telescope (FAST) in China has been conducting Search for Extraterrestrial Intelligence (SETI) observations \cite{Zhang2020}.
Therefore, estimating the number of planets with extraterrestrial life in our Milky Way galaxy is important.
One way to make this estimate is to use the Drake equation,
which is the famous algebraic expression for quantifying the number of communicative civilizations in our Galaxy \cite{1961PhT....14...40D}.
The Drake equation is generally expressed as \cite{1963PSS...11..485S}:
\begin{equation}
N = R_{\ast} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
\end{equation}
\begin{quote}
\begin{description}
\item[$N$:] The number of civilizations in our Galaxy with which communication might be possible.
\item[$R_{\ast}$:] The mean rate of star formation averaged over the lifetime of the Galaxy.
\item[$f_{p}$:] The fraction of stars with planetary systems.
\item[$n_{e}$:] The mean number of planets in each planetary system with environments favorable for the origin of life.
\item[$f_{l}$:] The fraction of such favorable planets on which life does develop.
\item[$f_{i}$:] The fraction of such inhabited planets on which intelligent life with manipulative abilities arises during the lifetime of their local sun.
\item[$f_{c}$:] The fraction of planets populated by intelligent beings on which an advanced technical civilization arises during the host star's lifetime.
\item[$L$:] The lifetime of the technical civilization.
\end{description}
\end{quote}
There are reliable estimates for the first three factors ($R_{\ast} \cdot f_{p} \cdot n_{e} \sim 0.1$\cite{10.1093/mnras/staa512})
based on recent astronomical observations of exoplanets, protoplanetary disks, and star-forming regions.
However, because we have not yet discovered any extraterrestrial life, nor elucidated the origins of terrestrial life,
the other remaining factors are highly conjectural owing to the one-sample statistics of Earth.
We discuss $f_{i}$ among these conjectural factors in the Drake equation in this paper.
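Numerically, the Drake equation is a plain product of its seven factors. The following Python sketch illustrates the arithmetic; apart from the combined value of the first three factors quoted above and the $f_i$ estimated later in this paper, every number is a hypothetical placeholder rather than an estimate.
\begin{verbatim}
import math

factors = {
    "R*_fp_ne": 0.1,  # combined first three factors (see text)
    "f_l": 0.1,       # hypothetical placeholder
    "f_i": 0.15,      # estimate obtained in this work
    "f_c": 0.1,       # hypothetical placeholder
    "L": 1.0e4,       # lifetime in years, placeholder
}
N = math.prod(factors.values())
print(f"N = {N:.2f}")  # number of communicative civilizations
\end{verbatim}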
Previous estimates of $f_{i}$ range from pessimistic ($f_{i} \sim 0$) to optimistic ($f_{i} \sim 1$).
In general, while many physicists and astronomers prefer the optimistic value,
many biologists prefer a value several orders of magnitude smaller \cite{1980QJRAS..21..267T, Lineweaver2008}.
Table \ref{tab:fi} shows the various estimated values of $f_{i}$ and $f_{l} \cdot f_{i} \cdot f_{c}$ to date.
\begin{table*}[ht]
\centering
\caption{Previous estimates of $f_{i}$ and $f_{l} \cdot f_{i} \cdot f_{c}$}
\begin{tabular}{|c|c|c|}
\hline
Estimated value of $f_{i}$ & Estimated value of $f_{l} \cdot f_{i} \cdot f_{c}$ & Reference \\
\hline
$\sim 0.1$ & $\sim 0.01$ & \cite{1963PSS...11..485S} \\
$\sim 1$ & 0.1 - 0.2 & \cite{1963icse.book.....C} \\
$\sim 1$ & $\sim 0.5$ & \cite{1963ST....26..258C} \\
$\sim 1$ & $\sim 1$ & \cite{1965cae..book..323D} \\
$\sim 1$ & $ > 0.1$ & \cite{1975Icar...25..360O} \\
$\sim 1$ & $ 0.01$ & \cite{1975Icar...25..368F} \\
--- & $< 10^{-10}$ & \cite{1980QJRAS..21..267T} \\
--- & 0.01 & \cite{1981QJRAS..22..380W} \\
0.01 & $10^{-4}$ & \cite{1992iaot.book.....D} \\
0.01-0.1 & --- & \cite{2009IJAsB...8..121F} \\
0.2 & 0.02 & \cite{2010AcAau..67.1366M} \\
--- & $> 1.7 \cdot 10^{-11}$ & \cite{2016AsBio..16..359F} \\
0.5 & 0.05 & \cite{2019AcAau.155..118B} \\
$\sim 1$ & $< 10^{-40}$ & \cite{Totani2020} \\
$\sim 0.15$ & --- & this work \\
\hline
\end{tabular}
\label{tab:fi}
\end{table*}
This paper introduces a new approach to estimating the probability that life on Earth has not gone extinct since the birth of life, $f_{i, \oplus}$.
Since its birth, life on Earth has gone through many extinction events due to various random external factors, such as changes in the environment or impacts of meteorites.
Extinction events of high intensity (where a significant fraction of species disappear) occur much less frequently than events of low intensity.
The fossil record in the Phanerozoic Eon, which covers the period from 540 Myr ago to the present \cite{Sepkoski2002, 2005Natur.434..208R},
indicates that a histogram of extinction intensity can be well modeled by a log-normal distribution.
This log-normal distribution of extinction was converted into a cumulative probability that life on Earth survives up to the present, $f_{i, \oplus}$,
by ``continuing to win the lottery of extinction'' since its birth.
The obtained survival probability, $f_{i, \oplus}$, can be a template for estimating $f_{i}$ in the Drake equation,
or other factors for estimating the number of life-bearing exoplanets,
assuming that life on any other exoplanets essentially always becomes complex if it does not become extinct first.
\section*{Histogram of the Terrestrial Extinction History}
Based on Sepkoski's compendium \cite{Sepkoski2002},
a biodiversity database has been created from the Phanerozoic Eon fossil record \cite{2005Natur.434..208R},
and the study described in this paper is based on these data.
Figure \ref{fig:biod} (top) shows the number of known marine animal genera as a function of time for all data (black),
and data with single occurrence and poorly dated genera removed (blue).
The six major mass extinction events\cite{10.1130/2019.2542(14),Rampino2019}, the Ordovician-Silurian extinction at 443.8 Myr ago (O-S),
the Late Devonian extinction at 372.2 Myr ago (F-F),
the Capitanian extinction at 259.8 Myr ago (Cap), the Permian-Triassic extinction at 251.9 Myr ago (P-T),
the Triassic-Jurassic extinction at 201.4 Myr ago (T-J), and the Cretaceous-Paleogene extinction at 66 Myr ago (K-Pg),
are clearly seen in Figure \ref{fig:biod} (top).
Five of the six major mass extinctions were most likely related to flood-basalt volcanism, and one (K-Pg) to the massive impact of an asteroid \cite{10.1130/2019.2542(14)}.
Figure \ref{fig:biod} (bottom) shows the extinction intensity as a function of time.
Extinction intensity is defined as the fraction of well-resolved genera (those having both a distinct first and last appearance known to the stage level)
present in the bin that are absent in the following bin.
Two more big extinction events at around 500 Myr ago, the End Botomian extinction (B) at 517 Myr ago and the Dresbachian extinction (D) at 502 Myr ago,
are also visible in Figure \ref{fig:biod} (bottom).
Although the details of these two extinctions are unclear due to the paucity of fossil records at that time,
these data have also been analyzed without arbitrarily dismissing them in this work.
Further details about these data are described by reference \cite{2005Natur.434..208R}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{fig1.eps}
\caption{Biodiversity in the marine fossil record.
Top: The number of known marine animal genera as a function of time for all data (black) and with single occurrence and poorly dated genera removed (blue).
Bottom: Extinction intensity as a function of time.
The six major mass extinctions (O-S, F-F, Cap, P-T, T-J, and K-Pg) and two more big extinctions (B and D) are visible.
The data used in these figures are from reference \cite{2005Natur.434..208R}.}
\label{fig:biod}
\end{figure}
A histogram of extinction intensity, constructed using the extinction intensity history data shown in Figure \ref{fig:biod} (bottom), is shown in Figure \ref{fig:hist}.
Although the provided data \cite{2005Natur.434..208R} have a time bin size of 1 Myr, the time resolution of these data is closer to $\sim 3$ Myr because the peaks of the big extinctions are three bins wide.
Although extinctions are most likely sudden events, the extinction peaks are spread out because the fossil record is incomplete (the Signor-Lipps effect \cite{10.1130/SPE190-p291}).
Therefore, the frequency of the histogram (vertical axis in Figure \ref{fig:hist}) was divided by three to match the time resolution of 3 Myr.
This histogram was then fitted with a log-normal distribution function $\varphi_{ln}(x)$:
\begin{equation}
\varphi_{ln}(x)=\frac{1}{\sqrt{2\pi} \sigma x} \exp \left(-\frac{(\ln x - \mu )^2}{2\sigma^2}\right)
\end{equation}
where $x$ denotes the extinction intensity as a random variable, and $\mu$ and $\sigma$ are free parameters in this distribution function.
Since the histogram has a peak at $x \sim 0.05$, minor mass extinctions ($x < 0.2$), in addition to massive extinctions, affect the overall shape of this histogram.
The choice of the log-normal distribution function is supported by the statistical principle that a random multiplicative process converges to a log-normal distribution owing to the central limit theorem.
Since the terrestrial extinctions are caused by random events, such as
volcanic activities \cite{Hesselbo2002, KAMO200375, 10.1130/G38940.1}, asteroid impacts \cite{Schulte1214},
superflares of the Sun \cite{azurc3464, 2017ApJ...848...41L}, gamma-ray bursts \cite{2004IJAsB...3...55M}, and so on,
it is justified that the terrestrial extinction intensity can be expressed by a log-normal probability distribution as a result of these random multiplicative processes.
The use of a log-normal distribution has precedent in the context of the Drake equation \cite{2010AcAau..67.1366M, MACCONE201163, 2019AcAau.155..118B}.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.55]{fig2.eps}
\caption{Histogram of extinction intensity.
The lines show the best-fitting curves for a log-normal distribution function (red), beta prime distribution function (blue), and gamma distribution function (green).}
\label{fig:hist}
\end{figure}
The best-fitting parameters of the log-normal function fitted to the histogram are $\mu = -2.447$ and $\sigma = 0.825$ with a scaling factor,
and its reduced chi-squared ($\chi ^2$) is 0.988.
The best-fitting curve is shown in Figure \ref{fig:hist} and Figure \ref{fig:fit} (bottom) as a red line.
Figure \ref{fig:fit} (top) shows the confidence contour maps of the fitted parameters.
The uncertainties associated with this fitting were evaluated as follows.
First, all parameter sets within a 99\% confidence level in the confidence contour map (Figure \ref{fig:fit} top) were extracted,
and then the envelopes of the log-normal distribution functions with these extracted parameters are shown by dashed lines in Figure \ref{fig:fit} (bottom).
Therefore, the two envelope curves in Figure \ref{fig:fit} (bottom) denote the uncertainties of this fitting at the 99\% confidence level.
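A fit of this kind can be reproduced with standard tools. The following Python sketch is illustrative only: the arrays below are placeholders generated from the best-fitting parameters, and the real binned extinction data should be substituted for them.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def scaled_lognormal(x, mu, sigma, A):
    # log-normal density of the text times a scaling factor A
    return (A / (np.sqrt(2.0 * np.pi) * sigma * x)
            * np.exp(-(np.log(x) - mu) ** 2 / (2.0 * sigma ** 2)))

# placeholders: bin centres and frequencies (the latter already
# divided by 3 to match the 3-Myr resolution)
x_mid = np.linspace(0.01, 0.6, 30)
counts = scaled_lognormal(x_mid, -2.447, 0.825, 25.0)

popt, _ = curve_fit(scaled_lognormal, x_mid, counts,
                    p0=(-2.0, 1.0, 10.0))
mu, sigma, A = popt
\end{verbatim}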
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{fig3.eps}
\caption{Fitting the histogram of extinction intensity with a log-normal distribution function.
Top: Contour map of $\chi ^2$ of the fitting.
The cross mark at the center of the contour shows the best-fitting parameter set, and the three contour curves show 68\%, 90\%, and 99\% confidence levels, respectively.
Bottom: Histogram of the extinction intensity and its best-fitting log-normal distribution function (red curve).
The dashed red curves constrain the 99\% confidence level region.}
\label{fig:fit}
\end{figure}
The same fitting procedures were applied to the histogram data using two additional distribution functions:
the beta prime distribution function $\varphi_{\beta}(x)$ and the gamma distribution function $\varphi_{\gamma}(x)$:
\begin{equation}
\varphi_{\beta} (x) = \frac{1}{B(\alpha, \beta)}\frac{x^{\alpha-1}}{(1+x)^{\alpha+\beta}}
\end{equation}
\begin{equation}
\varphi_{\gamma}(x) = \frac{\lambda^k}{\Gamma(k)}x^{k-1}e^{-\lambda x}
\end{equation}
where $B(\alpha, \beta)$ denotes the beta function with two free parameters of $\alpha$ and $\beta$,
and $\Gamma(k)$ denotes the gamma function with two free parameters of $k$ and $\lambda$.
These functions were previously used to estimate the factors in the Drake equation \cite{2019AcAau.155..118B}.
Figure \ref{fig:hist} also shows the best-fitting curves of these distribution functions.
The reduced $\chi ^2$ of these fits for the beta prime distribution function and the gamma distribution function are 1.184 and 1.329, respectively.
These poorer fitting statistics imply that the log-normal distribution function is more appropriate for the considered histogram, which lends
credence to the assumption that the terrestrial extinction process is based on a random multiplicative process.
Therefore, only the fitting results from the log-normal distribution function are considered hereafter.
\section*{Estimation of the survival probability of terrestrial life}
The previous section showed that the histogram of extinction intensity in the Phanerozoic Eon could be well fitted by a log-normal distribution function (Figure \ref{fig:hist} and \ref{fig:fit}).
In this section, a model is proposed to estimate the probability that life on Earth has not gone extinct from its birth to the present epoch.
In this model, the histogram of extinction intensity is interpreted as a probability distribution describing what fraction of all genera becomes extinct within a unit timescale (3 Myr),
assuming that an extinction event of magnitude $x$, a random variable with this probability distribution, occurs every 3 Myr.
Although $x$ was originally defined as the fraction of extinct genera, i.e., $0 \le x \le 1$, here we interpret $x$ as the magnitude of each extinction event,
where $x=1$ means an extinction event of the minimal magnitude at which all life on Earth becomes extinct, and $x > 1$ means extinction events of even greater magnitude.
In this model, to illustrate the extinction history, this ``extinction lottery'' is drawn once every 3 Myr.
If the result of this lottery is $x = 0.05$, it means that 5\% of the genera on Earth became extinct during this period,
and then the next lottery is drawn in the next 3 Myr.
If the extinction intensity $x$ takes a value of 1 or greater, it means that all life on Earth is extinct, and the game is over.
The fact that we are here now means that life on Earth has endured repeated extinction lotteries every 3 Myr since the birth of life (i.e. $x < 1$ for all lotteries).
Under such a condition, the probability that life on Earth has survived from the origin of life to the present, $f_{i, \oplus}$, is calculated below.
The cumulative distribution function, $\Phi_{ln}(x)$, of the log-normal distribution function, $\varphi_{ln}(x)$, can be expressed as:
\begin{equation}
\Phi_{ln}(x) = \int_{0}^{x}\varphi_{ln}(x^{\prime})dx^{\prime} = \frac{1}{2}\mathrm{erfc}\left(-\frac{\ln x-\mu}{\sqrt2 \sigma} \right)
\end{equation}
where $\mathrm{erfc}(x)$ denotes the complementary error function.
The value $p = \Phi_{ln}(1)$ is the probability that the extinction intensity, $x$, as a random variable takes a value smaller than 1.
This can be taken to mean that not all life on Earth becomes extinct; in other words, some genera on Earth have survived through a unit timescale (3 Myr).
The history of evolution on Earth shows that life is quite resilient, because it eventually recovers even after big extinction events \cite{Chen2012}.
Therefore, life persists unless the result of the lottery is $x>1$.
Since ``winning an extinction lottery'' is defined as the result of $x<1$, the probability of winning this extinction lottery is $p$.
Thus, the survival probability $f_{i, \oplus}$ for duration $T$ can be expressed as a probability of winning $T / \Delta T$ times in the repeated extinction lotteries,
\begin{equation}
f_{i, \oplus}(T) = p^{T / \Delta T} \label{eq:fi}
\end{equation}
where $\Delta T$ is the time resolution of the probability distribution function ($\Delta T = 3$ Myr in this case).
Using the best-fitting parameter set of $\mu$ and $\sigma$, the value of $p$ is calculated as $p = \Phi_{ln}(1) = 0.9985^{+0.0012}_{-0.0058}$ (99\% confidence level),
which means that the probability that some genera on Earth survive for 3 Myr is $\sim$99.85\%,
or the probability that all life on Earth becomes extinct during a 3 Myr period is $\sim$0.15\%.
Therefore, the estimated survival probability of life on Earth during the Phanerozoic Eon ($T = 540$ Myr) is
$f_{i, \oplus}(540\ \textrm{Myr}) = 0.76^{+0.01}_{-0.06}$.
This means that life on Earth had a $\sim$24\% probability of becoming extinct during the Phanerozoic Eon.
This value is quite reliable because the log-normal distribution function was obtained by fitting to the histogram of the extinction history in the Phanerozoic Eon.
Recent geological evidence has suggested that life on Earth first occurred 3.7-4.1 Gyr ago \cite{1996Natur.384...55M, 1999Sci...283..674R, 2002Natur.418..627V, 2014NatGe...7...25O, 2015PNAS..11214518B, 2018AsBio..18..343P}.
Assuming that the value of $p$, obtained from the fossil record during the Phanerozoic Eon (540 Myr ago to present),
can be extended to the entire history of life on Earth (3.7-4.1 Gyr ago to present), the survival probability for the entire history of life can be calculated as
$f_{i, \oplus}(3.7\ \textrm{Gyr}) = 0.16^{+0.01}_{-0.03}$ or $f_{i, \oplus}(4.1\ \textrm{Gyr}) = 0.13^{+0.01}_{-0.03}$.
Therefore, as a conclusion, the probability that life on Earth survived without becoming completely extinct can be estimated as $\sim$15\% based on the fossil records of extinction from the Phanerozoic Eon.
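The numbers quoted in this section follow directly from the cumulative distribution $\Phi_{ln}$ and equation (\ref{eq:fi}). The following Python sketch (assuming SciPy is available) reproduces them from the best-fitting parameters.
\begin{verbatim}
import numpy as np
from scipy.special import erfc

mu, sigma = -2.447, 0.825            # best-fitting parameters

def Phi_ln(x):
    # cumulative log-normal distribution
    return 0.5 * erfc(-(np.log(x) - mu) / (np.sqrt(2.0) * sigma))

p = Phi_ln(1.0)                      # survival probability per 3 Myr

def f_i(T_myr, dT=3.0):              # survival probability over T
    return p ** (T_myr / dT)

print(p)                             # ~0.9985
print(f_i(540.0))                    # ~0.76 (Phanerozoic Eon)
print(f_i(3700.0), f_i(4100.0))      # ~0.16 and ~0.13
\end{verbatim}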
\section*{Evaluation of the model assumptions}
Although the extinction intensity, $x$, is defined as the fraction of extinct genera, i.e. $0 < x < 1$,
the histogram of $x$ was fitted by the log-normal distribution function $\varphi_{ln}(x)$ defined in $0 < x < \infty$.
Therefore, one might think it is better to use a truncated log-normal distribution function defined in $0 < x < 1$, $\varphi_{ln}^{\prime}(x)$:
\begin{equation}
\varphi_{ln}^{\prime}(x) = \frac{1}{\Phi_{ln}(1)-\Phi_{ln}(0)}\varphi_{ln}(x) \label{eq:tln}
\end{equation}
The fitting procedure was repeated with the truncated log-normal distribution function, and the same best-fitting parameters of $\mu = -2.447$ and $\sigma = 0.825$ were obtained.
Because the difference between $\varphi_{ln}^{\prime}(x)$ and $\varphi_{ln}(x)$ is only a scaling factor with a given parameter set of $\mu$ and $\sigma$,
and the scaling factor is determined by fitting the function to the data,
it is mathematically correct that the same best-fitting parameters are obtained with the log-normal distribution function $\varphi_{ln}(x)$
and the truncated log-normal distribution function $\varphi_{ln}^{\prime}(x)$.
This result lends further credibility to the use of the log-normal distribution function.
The obtained value of $f_{i, \oplus}$ represents the probability that life on Earth has survived various random extinction events since the birth of life to the present.
One big assumption in this model is that the obtained extinction rate determined over only the last 540 Myr (the Phanerozoic Eon)
can be extended to the entire history of life on Earth ($\sim$4 Gyr); however, there is no guarantee that this assumption is correct.
For example, Earth experienced the Late Heavy Bombardment (LHB) 3.8-3.9 Gyr ago, which likely destroyed almost all life present on Earth at that time \cite{Cohen1754, Nisbe2001, Line2002, Zahnle2007},
but this truly massive extinction event was not included in the model.
In addition, even within the Phanerozoic Eon, the extinction rate declines with time \cite{macleod2013great}, which can be seen in Figure \ref{fig:biod} (bottom).
Moreover, the dataset used in this model (Figure \ref{fig:biod}) was constructed using fossil records of marine animal genera, not all types of life \cite{2005Natur.434..208R}.
Therefore, it is not clear whether the modern extinction rate of marine animal genera, which is modeled in this paper, can be applied to all types of life across the entire history of Earth.
In this paper, however, we assumed that the modern extinction rate of marine animal genera could be applied to all life throughout history.
This is a big assumption, but it is the best estimate that can be made at this point with the available data.
The purpose of the Drake equation is to deal specifically with questions of if, how, when and how often evolution leads to complex life,
which cannot be answered completely without other examples of evolution.
Therefore, by using the terrestrial history as a template for the histories of life on exoplanets,
we have attempted to provide some useful perspective using the available information.
In this context, it was assumed that the obtained survival probability, $f_{i, \oplus}$, can be used to represent $f_{i}$ in the Drake equation, i.e., $f_i = f_{i, \oplus} \sim 0.15$,
because the only available data pertain to the history of Earth.
This assumption means that the evolutionary history of Earth is universal, i.e.,
once the origin of life is accomplished, the evolution of complex life always takes place in any stable, sufficiently extensive environment if it does not become extinct first \cite{life6030025}.
This assumption is called the ``{\it Planet of the Apes}'' hypothesis \cite{Lineweaver2008} or the astrobiological Copernican principle \cite{Westby2020}.
A unique point of the method used here was to model extinction, rather than the appearance of life, to address the Drake equation.
\section*{Application to other estimations}
According to Equation (\ref{eq:fi}), the survival probability of life on Earth, $f_{i, \oplus}$, has two parameters:
$p$ (the survival probability for a time-bin of $\Delta T$ = 3 Myr) and $T$ (the evolution duration from the birth of life to the emergence of intelligent life).
Assuming that values for Earth, $p = 0.9985$ and $T \sim$ 4 Gyr, are universal, $f_i = f_{i, \oplus} \sim 0.15$ was obtained.
This expression of $f_i$ has room for adjusting these two parameters for other life-bearing exoplanets to match their local environments and situations.
For example, some exoplanets have harsher environments than Earth, which would require any life to endure stronger stellar winds or interstellar radiation fields than those in our Solar system;
hence we can readjust the survival probability so that $p$ has a smaller value.
If the evolution speed of life on some exoplanets is slower than on Earth, we can similarly readjust the evolution time duration so that $T$ is larger.
It is still difficult to estimate $p$ and $T$ for other exoplanets,
but this approach can provide some useful constraints on $f_i$ based on scientific observations of exoplanets.
This approach can also be extended to the Seager equation, a parallel version of the Drake equation \cite{2018IJAsB..17..294S}.
The Seager equation estimates the number of planets with detectable signs of life by way of biosignature gases as:
\begin{equation}
N^{\prime} = N_{\ast} \cdot f_{Q} \cdot f_{HZ} \cdot f_{O} \cdot f_{L} \cdot f_{S}
\end{equation}
\begin{quote}
\begin{description}
\item[$N^{\prime}$:] The number of planets with detectable signs of life by way of biosignature gases.
\item[$N_{\ast}$:] The number of stars in the survey.
\item[$f_{Q}$:] The fraction of stars in the survey that are suitable for planet finding (e.g., quiet non-variable stars or non-binary stars).
\item[$f_{HZ}$:] The fraction of stars with rocky planets in the habitable zone.
\item[$f_{O}$:] The fraction of those planets that can be observed, according to limitations of planet orbital geometry or other limiting factors.
\item[$f_{L}$:] The fraction of planets that have life.
\item[$f_{S}$:] The fraction of planets with life that produce a detectable biosignature gas by way of a spectroscopic signature.
\end{description}
\end{quote}
The model introduced in this study can be applied to estimate $f_S$ in the Seager equation.
Here, we consider molecular oxygen (O$_2$) as the favored biosignature gas, which is a gas produced by life that can accumulate to detectable levels in an exoplanetary atmosphere.
The permanent rise to measurable concentrations of O$_2$ in the atmosphere of Earth via photosynthesis of prokaryotic and eukaryotic organisms in the ocean,
known as the Great Oxidation Event (GOE), occurred around 2.4 Gyr ago \cite{SESSIONS2009R567, 2014Natur.506..307L}.
Therefore, Earth took 1.3-1.7 Gyr from the birth of life to be detectable by astronomical spectroscopic observations of biosignature gas by outside observers.
The probability that life on Earth survives until the GOE, $f_{S, \oplus}$, can be calculated using the same method shown in Equation (\ref{eq:fi}), which yields
$f_{S, \oplus}(1.3\ \textrm{Gyr}) = 0.52^{+0.01}_{-0.06}$ and $f_{S, \oplus}(1.7\ \textrm{Gyr}) = 0.42^{+0.01}_{-0.06}$.
Assuming again that the time taken for photosynthesis to evolve is the same as that on Earth,
these values can be interpreted as $f_S$ in the Seager equation.
A value of $f_S = 0.5$ was originally speculated \cite{2018IJAsB..17..294S}, which is a reasonable estimate.
This model can also be applied to estimate the probability that existing life, including human beings on Earth, will become extinct before Earth becomes uninhabitable.
As the Sun brightens due to its natural evolutionary process, Earth will become uninhabitable in the far future due to rising temperatures.
According to one model, the complete loss of the oceans of Earth may occur in little over 2 Gyr from the present, thereby transforming Earth into a desert planet.
This suggests that most forms of life will be unable to survive for more than 1.3 Gyr from the present day on Earth \cite{doi:10.1002/2015JD023302}.
The survival probability over a 1.3-Gyr period in our model is $f_{i, \oplus}(1.3\ \textrm{Gyr}) \sim 0.5$,
indicating that existing life on Earth, including humans, has a $\sim$50\% probability of becoming extinct before Earth becomes uninhabitable.
This survival probability corresponds to a far longer expected lifetime than the total longevity of our species estimated under the Copernican principle, 0.2 million to 8 million years at the 95\% confidence level \cite{Gott1993}.
This difference may arise because an advanced civilization would be vulnerable to much smaller catastrophes,
ones that do not show up as mass extinctions in the geological record.
For example, asteroid impactors with 1-km diameters (making 20-km diameter craters) are expected to occur about every $10^5$ years \cite{Chapman1994},
and large volcanic eruptions that could cause ``volcanic winter'' are expected to occur about every $5 \times 10^4$ years \cite{RAMPINO2002562};
both types of events are capable of destroying or greatly affecting an advanced civilization.
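As a rough order-of-magnitude check using only the recurrence times quoted above, the expected numbers of such events over the Copernican longevity window are easily tabulated:
\begin{verbatim}
for lifespan_yr in (2e5, 8e6):      # Gott's 95% window
    print(lifespan_yr / 1e5,        # expected 1-km asteroid impacts
          lifespan_yr / 5e4)        # expected "volcanic winter" eruptions
# -> 2 and 4 events at the short end; 80 and 160 at the long end
\end{verbatim}
Even the short end of the window thus expects a few such events, consistent with civilization-scale hazards occurring far more often than mass-extinction-scale ones.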
Such a discussion, however, relates more to the factors $f_c$ and $L$ in the Drake equation than to the $f_i$ estimated in this paper; those factors, too, are critical parameters to be considered
when searching for intelligent civilizations.
\section*{Conclusions}
A new approach to estimating the survival probability of life on Earth since its birth, $f_{i, \oplus}$, was introduced.
The principal idea is that the extinction history of Earth, based on the marine fossil record, can be used to obtain the survival probability of life since it began on Earth.
The obtained value is $f_{i, \oplus} \sim 0.15$.
Under the astrobiological Copernican principle \cite{Westby2020}, this survival probability can be interpreted as $f_i$ in the Drake equation, i.e., $f_i = f_{i, \oplus} \sim 0.15$.
Because $f_{i, \oplus}$ is a two-parameter function of $p$ (the survival probability for a unit time) and $T$ (the evolution time from the birth of life to intelligent life),
this method can be extended to estimate the survival probability on other life-bearing exoplanets by adjusting these two parameters to the local environments of the considered exoplanets.
\section{Introduction} \label{intro}
In \cite{St}, Stewart proved an effective finiteness result for shifted perfect powers in binary recurrence sequences. That is, if $\{ u_k \}$ is a binary recurrence sequence
for which the equation
\begin{equation} \label{cur}
x^n+c = u_k
\end{equation}
has a solution in integers $x, n, c$ and $k$, with $n \geq 2$ and $|x| > 1$, then, under mild conditions, $\max \{ |x|, n \}$ is bounded above effectively in terms of $c$ and the recurrence. This statement is actually a consequence of the following more general theorem of Shorey and Stewart \cite{ShSt}.
\begin{thm} (Shorey and Stewart) \label{SS}
Let $a, b, c, d, e$ and $f$ be integers with
$$
(b^2-4ac) (4 acf+bde-ae^2-cd^2-fb^2) \neq 0.
$$
If $x, y$ and $n$ are integers with $|x|>1$ and $n > 2$, satisfying
\begin{equation} \label{dio}
a x^{2n}+b x^n y + c y^2 + d x^n + e y +f = 0,
\end{equation}
then the maximum of $|x|, |y|$ and $n$ is less than a number which is effectively computable in terms of $a, b, c, d, e$ and $f$. Further, if $e^2 \neq 4 cf$ and $x$ and $y$ are integers satisfying
$$
a x^{4}+b x^2 y + c y^2 + d x^2 + e y +f = 0,
$$
then the maximum of $|x|$ and $|y|$ is less than a number which is effectively computable in terms of $a, b, c, d, e$ and $f$.
\end{thm}
To translate such effective statements to explicit ones regarding equations of the shape (\ref{cur}) or (\ref{dio}) has proven, with current technology, to be a rather challenging problem (and has been accomplished in only a handful of cases -- notably in the determination of perfect powers in the Fibonacci sequence \cite{BMS}). In this paper, we will develop a method which allows us to explicitly find all shifted perfect powers in a number of classes of Lucas recurrence sequences which are apparently inaccessible to existing techniques in the literature. Our approach combines lower bounds for linear forms in logarithms (which underlie the proof of Theorem \ref{SS}) with new ideas utilizing connections between Hilbert modular forms and elliptic curves defined over totally real fields.
Whilst we will develop techniques that allow one to carry out such a program in some generality, to focus our exposition we will essentially concentrate on a single example of an equation of the shape (\ref{dio}), proving the following.
\begin{thm} \label{main}
The Diophantine equation
\begin{equation} \label{dog}
x^{2n} \pm 6 x^n + 1 = 8y^2
\end{equation}
has no solutions in positive integers $x, n$ and $y$, with $x, n>1$.
\end{thm}
This result is the final ingredient required in work of the first author \cite{Be} on integral points on congruent number curves.
For equation (\ref{dog}),
it is a fairly routine matter to obtain an absolute bound on $n$ (via Theorem \ref{SS} or otherwise), thereby reducing the problem to that of finding the integral points on a finite collection of hyperelliptic curves. What is much less routine is the approach we take to reduce this bound. Indeed, whilst the problem of determining Fibonacci perfect powers reduces immediately to that of solving ternary equations of the shape
\begin{equation} \label{fibby}
x^2 - 5 y^{2n} = \pm 4,
\end{equation}
for which Frey (or, if you will, Frey--Hellegouarch) curves are immediately available, the fundamental difficulty one encounters in attempting to solve equation (\ref{dog}) is that it is {\it a priori} quaternary rather than ternary. The principal novelty of the paper at hand is that we are able to replace (\ref{dog}) with an equivalent ternary equation over a real quadratic field for which we are able to construct Frey curves which we can, in turn, associate with certain Hilbert modular forms. As in the work of Bugeaud, Mignotte and Siksek \cite{BMS}, we obtain local information from these Frey curves to reduce our problem from one of linear forms in three logarithms, to (the computationally more efficient) two logarithms, and subsequently, to find exceptionally strong lower bounds upon $|x|$ for nontrivial solutions to (\ref{dog}), eventually contradicting more general lower bounds for linear forms in (many) complex logarithms.
\section{Recurrence sequences : descent to a ternary equation} \label{sec2}
To begin the proof of Theorem \ref{main}, let us observe that, if $n=2$, equation (\ref{dog}) with a $(-)$ sign is insoluble modulo $8$;
with a $(+)$ sign \eqref{dog} defines a genus $1$
curve that is birational over ${\mathbb Q}$ to the rank $0$ elliptic curve with Cremona reference
{\tt 32a1}, and one verifies that the only solutions
on our affine model
satisfy $(|x|,|y|)=(1,1)$.
We may thus suppose that $n>2$ is odd and hence consider the equation
\begin{equation}\label{eqn:main}
x^{2p}+6 x^p+1=8 y^2,
\end{equation}
where $p$ is an odd prime, and $x$ and $y$ are integers.
Note that if $u$ is an integer with
$$
u^2+6 u +1=8 y^2,
$$
and we set $\eta = \mbox{sgn} (u) \in \{ -1, 1 \}$, then $\eta u = |u|$ satisfies the recurrence
\begin{equation} \label{rec}
u_{n+1}=6u_n-u_{n-1}+ 12 \, \eta,
\end{equation}
with, say, $(u_0,u_1)=(4-3 \eta,20-3 \eta)$.
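For readers wishing to experiment, the first few terms of this recurrence, and the checks that they solve the Pell-type equation, can be generated with a short Python sketch (this plays no r\^ole in the proof; note that the terms $v_n = \eta u$ track $|u|$, so the equation reads $v^2 + 6 \eta v + 1 = 8 y^2$):
\begin{verbatim}
from math import isqrt

for eta in (1, -1):
    v0, v1 = 4 - 3 * eta, 20 - 3 * eta
    for _ in range(10):
        m = v1 * v1 + 6 * eta * v1 + 1          # = 8 y^2 ?
        assert m % 8 == 0 and isqrt(m // 8) ** 2 == m // 8
        v0, v1 = v1, 6 * v1 - v0 + 12 * eta
    print("eta =", eta, "checked; next term:", v1)
\end{verbatim}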
Let $K={\mathbb Q}(\sqrt{2})$ and write $\epsilon=1+\sqrt{2}$ for a fundamental unit of norm $-1$ in $K$. Our main observation that permits application of the so-called modular method is the following.
\begin{lem}\label{lem:descent}
If $(x,y,p)$ is a solution to \eqref{eqn:main} then there exist
integers $k, \ell$ and $s$, and an $\alpha \in {\mathbb Z}[\sqrt{2}]$ such that
\begin{equation}\label{eqn:ppp}
s \, \epsilon^{k} \sqrt{2}-\epsilon^{\ell} \alpha^p=1,
\end{equation}
where $k$ is odd, $s \in \{ -1, 1 \}$,
\begin{equation} \label{stuffy}
\Norm(\alpha)=(-1)^{\ell+1} x \; \; \text{ and } \; \; - \frac{p-1}{2} \leq \ell \leq \frac{p-1}{2}.
\end{equation}
\end{lem}
\begin{proof}
We can rewrite \eqref{eqn:main} as
\[
(x^p+3)^2-8=8 y^2,
\]
whereby $4 \mid x^p+3$ and
\[
y^2 -2 \left(\frac{x^p+3}{4} \right)^2=-1.
\]
Hence
\begin{equation}\label{eqn:rel1}
y+\left(\frac{x^p+3}{4} \right) \sqrt{2} =s \epsilon^{k},
\end{equation}
where $k$ is odd.
On the other hand, we can also transform equation \eqref{eqn:main} into
\[
\left( \frac{x^p+1}{2} \right)^2 -2 y^2= -x^p
\]
and hence have
\begin{equation}\label{eqn:rel2}
\left( \frac{x^p+1}{2} \right) + y\sqrt{2}= \epsilon^{\ell} \alpha^p,
\end{equation}
where $\alpha$ and $\ell$ satisfy \eqref{stuffy}.
From \eqref{eqn:rel1}
and \eqref{eqn:rel2}, we deduce \eqref{eqn:ppp}.
\end{proof}
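As a quick symbolic sanity check (again, no part of the proof), the identities \eqref{eqn:ppp}, \eqref{eqn:rel1} and \eqref{eqn:rel2} can be verified on the trivial solution $x=y=1$, where $s=k=\ell=\alpha=1$:
\begin{verbatim}
import sympy as sp

r2, eps = sp.sqrt(2), 1 + sp.sqrt(2)
p, x, y = 5, 1, 1                # the trivial solution, any odd prime p
s, k, ell, alpha = 1, 1, 1, 1

print(sp.simplify(s * eps**k * r2 - eps**ell * alpha**p))           # -> 1
print(sp.simplify(y + sp.Rational(x**p + 3, 4) * r2 - s * eps**k))  # -> 0
print(sp.simplify(sp.Rational(x**p + 1, 2) + y * r2
                  - eps**ell * alpha**p))                           # -> 0
\end{verbatim}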
\section{Linear forms in logarithms} \label{sec3}
The purpose of this section is to prove the following proposition,
via an appeal to the theory of linear forms in logarithms.
\begin{prop} \label{temp}
If the Diophantine equation
$$
x^{2n} \pm 6 x^n + 1 = 8y^2
$$
has a solution in positive integers $x, n$ and $y$, with $x, n>1$, then $n$ is divisible by an odd prime $p < 2 \cdot 10^{10}$ .
\end{prop}
Either equation (\ref{eqn:ppp}) or (\ref{eqn:rel1}) is a suitable starting point for deriving a linear form in logarithms leading to an absolute upper bound upon $p$; we will appeal to the latter.
Specifically, from (\ref{eqn:rel1}), we have
$$
\frac{\left| x^p+3 \right|}{4} = \frac{\epsilon^{k}+\epsilon^{-k}}{2\sqrt 2}.
$$
It follows that
$$
{|x|^p\over \sqrt 2 \, \epsilon^{|k|}}-1
$$
is ``small'', whereby the same is true of the linear form
\begin{equation} \label{lin-form}
\Lambda = p \log |x| - \log \sqrt 2 - |k| \log \epsilon.
\end{equation}
More precisely, it is easy to verify that
\begin{equation} \label{upper1}
\log |\Lambda | < - p \log |x| +2 .
\end{equation}
For any algebraic number $\alpha$ of degree $d$ over $\mathbb{Q}$, we define as usual the {\it absolute logarithmic height} of $\alpha$ by the formula
$$
h(\alpha)= \dfrac{1}{d} \left( \log \vert a_{0} \vert + \sum\limits_{i=1}^{d} \log \max \left( 1, \vert \alpha^{(i)}\vert \right) \right),
$$
where $a_{0}$ is the leading coefficient of the minimal polynomial of $\alpha$ over $\mathbb{Z}$ and the $\alpha^{(i)}$ are the conjugates of $\alpha$ in the field of complex numbers.
The following is the main result (Theorem 2.1) of Matveev \cite{Mat}.
\begin{thm} \label{Matveev} (Matveev)
Let $\mathbb{K}$ be an algebraic number field of degree $D$ over $\mathbb{Q}$ and put $\chi=1$ if $\mathbb{K}$ is real, $\chi=2$ otherwise. Suppose that $\alpha_1, \alpha_2, \ldots, \alpha_n \in \mathbb{K}^*$ with absolute logarithmic heights $h(\alpha_i)$ for $1 \leq i \leq n$, and suppose that
$$
A_i \geq \max \{ D \, h (\alpha_i), \left| \log \alpha_i \right| \}, \; 1 \leq i \leq n,
$$
for some fixed choice of the logarithm. Define
$$
\Lambda = b_1 \log \alpha_1 + \cdots + b_n \log \alpha_n,
$$
where the $b_i$ are integers and set
$$
B = \max \{ 1, \max \{ |b_i| A_i/A_n \; : \; 1 \leq i \leq n \} \}.
$$
Define, with $e := \exp(1)$, further,
$$
\Omega =A_1 \cdots A_n,
$$
$$
C(n) = C(n,\chi) = \frac{16}{n! \chi} e^n (2n+1+2 \chi) (n+2)(4n+4)^{n+1} \left( en/2 \right)^{\chi},
$$
$$
C_0 = \log \left( e^{4.4 n+7} n^{5.5} D^2 \log ( e D) \right) \; \mbox{ and } \; W_0 = \log \left(
1.5 e B D \log (eD) \right).
$$
Then, if $\log \alpha_1, \ldots, \log \alpha_n$ are linearly independent over $\mathbb{Z}$ and $b_n \neq 0$, we have
$$
\log \left| \Lambda \right| > - C(n) \, C_0 \, W_0 \, D^2 \, \Omega.
$$
\end{thm}
We apply this theorem to our situation, with
$$
D=2, \, \chi = 1, \, n=3, \, b_3=p, \, \alpha_3=|x|,
$$
under the assumption that $|x|>1$.
We conclude, after a little work, that
$$
\log |\Lambda| > - \left( 88626836156 \log p + 232663287513 \right) \log |x|.
$$
Combining this with (\ref{upper1}) (and using that $|x| \geq 7$, an almost immediate consequence of (\ref{eqn:main}) and the supposition that $|x|>1$; succinctly, $2$ is a quadratic residue modulo $x$, whence $x \equiv \pm 1 \pmod{8}$), we
obtain
the upper bound
\begin{equation} \label{manf}
p < 2{.}772 \cdot 10^{12} =: P_0.
\end{equation}
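For the reader's convenience, the two numerical coefficients above can be reproduced with the following sketch; the choices $A_1=\log 2$, $A_2 = \log(1+\sqrt{2})$, $A_3 = 2\log|x|$ and $B \approx p$ are our reading of the computation, and small variations in them perturb the constants only slightly:
\begin{verbatim}
import math

n, D, chi = 3, 2, 1
e = math.e
C = 16 / (math.factorial(n) * chi) * e**n * (2*n + 1 + 2*chi) \
    * (n + 2) * (4*n + 4)**(n + 1) * (e*n/2)**chi
C0 = math.log(e**(4.4*n + 7) * n**5.5 * D**2 * math.log(e*D))
A1, A2 = math.log(2), math.log(1 + math.sqrt(2))

c1 = C * C0 * D**2 * A1 * A2 * 2                 # multiplies log(p) log|x|
c2 = c1 * math.log(1.5 * e * D * math.log(e*D))  # from W0 with B ~ p
print(c1, c2)                                    # ~8.86e10 and ~2.33e11

# -p log|x| + 2 > -(c1 log p + c2) log|x| with |x| >= 7 forces
# p < c1 log p + c2 + 2/log 7; solve by fixed-point iteration:
p = 1.0e12
for _ in range(50):
    p = c1 * math.log(p) + c2 + 2 / math.log(7)
print(p)                                         # ~2.77e12, i.e. P_0
\end{verbatim}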
This bound is the starting point of our analysis. Arguing \`a la Baker, we may thus find an effective absolute upper bound upon $|x|$ in (the finite collection of hyperelliptic) equations (\ref{eqn:main}). Let us assume, for the remainder of this section, that
\begin{equation} \label{allo}
2 \cdot10^{10} < p < P_0.
\end{equation}
We will show that (\ref{eqn:main}) has no solutions for $p$ satisfying (\ref{allo}), via appeal to a complicated but slightly sharper lower bound for linear forms in three complex logarithms, due to the third author (Proposition 5.1 of \cite{Mig2}).
\begin{thm} \label{miggy} (Mignotte)
Consider three non-zero algebraic numbers $\alpha_1$, $\alpha_2$
and $\alpha_3$, which are either all real and ${}>1,$ or all complex of modulus
one and all ${}\not=1$. Further, assume that the three numbers $\alpha_1, \alpha_2$ and $\alpha_3$ are either all multiplicatively independent, or that two of the numbers are multiplicatively independent and the third is a root of unity.
We also consider three positive
rational integers $b_1$, $b_2$, $b_3$ with $\gcd(b_1,b_2,b_3)=1$, and the linear form
$$
\Lambda = b_2\log \alpha_2-b_1 \log \alpha_1-b_3\log \alpha_3 ,
$$
where the logarithms of the $\alpha_i$ are arbitrary determinations of the logarithm,
but which are all real or all purely imaginary.
Suppose further that
$$
b_2 |\log \alpha_2| =
b_1\,\vert \log \alpha_1 \vert+ b_3 \,\vert\log \alpha_3\vert \pm \vert\Lambda\vert
$$
and put
$$
d_1 = \gcd(b_1,b_2) \; \mbox{ and } \; d_3 = \gcd(b_3,b_2).
$$
Let $\rho\ge e := \exp(1)$ be a real number and set $\lambda = \log \rho$. Let $a_1, a_2$ and $a_3$ be real numbers such that
$$
a_i \ge \rho \vert \log \alpha_i \vert
- \log \vert \alpha_i\vert +2 D \,{\rm h}\kern .5pt(\alpha_i), \qquad
i \in \{1, 2, 3 \},
$$
where
$\,D=[\mathbb{Q}(\alpha_1,\alpha_2,\alpha_3) : \mathbb{Q}]\bigm/[\mathbb{R}(\alpha_1,\alpha_2,\alpha_3) : \mathbb{R}]$, and assume further that
$$
\Omega := a_1 a_2 a_3 \geq 2.5 \; \mbox{ and } \; a := \min \{ a_1, a_2, a_3 \} \geq 0.62.
$$
Let $m$ and $L$ be positive integers with $m \geq 3$, $L \geq D+4$ and set
$K = [ m \Omega L ].$
Let $\chi$ be fixed with $0 < \chi \leq 2$ and define
$$
c_1 = \max \{ (\chi m L)^{2/3}, \sqrt{2mL/a} \}, \;
c_2 = \max \{ 2^{1/3} \, (m L)^{2/3}, L \sqrt{m/a} \}, \;
c_3 = (6 m^2)^{1/3} L,
$$
$$
R_i = \left[ c_i a_2 a_3 \right], \;
S_i = \left[ c_i a_1 a_3 \right] \; \mbox{ and } T_i = \left[ c_i a_1 a_2 \right],
$$
for $i \in \{ 1, 2, 3 \}$,
and set
$$
R=R_1+R_2+R_3+1, \; S = S_1+S_2+S_3+1 \; \mbox{ and } \; T = T_1+T_2+T_3+1.
$$
Define
$$
c = \max \left\{ \frac{R}{L a_2 a_3}, \frac{S}{L a_1 a_3}, \frac{T}{L a_1 a_2} \right\}.
$$
Finally, assume that the quantity
$$
\begin{array}{c}
\left( \frac{KL}{2} + \frac{L}{4} - 1 - \frac{2K}{3L} \right) \lambda - (D+1) \log L - 3 g L^2 c \, \Omega \\ \\
- D (K-1) \log B - 2 \log K + 2 D \log 1.36
\end{array}
$$
is positive,
where
$$
g={1 \over 4}-{K^2L \over 12RST} \; \mbox{ and } \;
B = \frac{e^3 c^2 \Omega^2 L^2}{4K^2 d_1 d_3} \left( \frac{b_1}{a_2}+ \frac{b_2}{a_1} \right)
\left( \frac{b_3}{a_2}+ \frac{b_2}{a_3} \right).
$$
\noindent {\bf Then either}
\begin{equation} \label{rups}
\log \Lambda > - ( KL + \log ( 3 KL)) \lambda,
\end{equation}
or the following condition holds :
\smallskip
\noindent {\bf either} there exist non-zero rational integers $r_0$ and $s_0$ such that
\begin{equation} \label{rups2}
r_0b_2=s_0b_1
\end{equation}
with
\begin{equation} \label{rups3}
|r_0|
\le \frac{(R_1+1)(T_1+1)}{M-T_1}
\; \mbox{ and } \;
|s_0|
\le \frac{(S_1+1)(T_1+1)}{M-T_1},
\end{equation}
where
$$
M = \max\Bigl\{R_1+S_1+1,\,S_1+T_1+1,\,R_1+T_1+1,\,\chi \; \tau_1^{1/2} \Bigr\}, \; \;
\tau_1 = (R_1+1)(S_1+1)(T_1+1),
$$
{\bf or}
there exist rational integers $r_1$, $s_1$, $t_1$ and $t_2$, with
$r_1s_1\not=0$, such that
\begin{equation} \label{rups4}
(t_1b_1+r_1b_3)s_1=r_1b_2t_2, \qquad \gcd(r_1, t_1)=\gcd(s_1,t_2 )=1,
\end{equation}
which also satisfy
$$
|r_1s_1|
\le \gcd(r_1,s_1) \cdot
\frac{(R_1+1)(S_1+1)}{M-\max \{ R_1, S_1 \}},
$$
$$
|s_1t_1| \le \gcd(r_1,s_1) \cdot
\frac{(S_1+1)(T_1+1)}{M-\max \{ S_1, T_1 \}}
$$
and
$$
|r_1t_2|
\le \gcd(r_1,s_1) \cdot
\frac{(R_1+1)(T_1+1)}{M- \max \{ R_1, T_1 \}}.
$$
Moreover, when $t_1=0$ we can take $r_1=1$, and
when $t_2=0$ we can take $s_1=1$.
\end{thm}
\medskip
We apply this result with
$$
b_2=p, \; \alpha_2 = |x|, \;b_1=1, \; \alpha_1 = \sqrt{2}, \; b_3= |k| \; \mbox{ and } \; \alpha_3 =1 + \sqrt{2},
$$
so that we may take
$$
D=2, \; d_1=1, \; d_3 \in \{ 1, p \}, \; a_1 = \frac{\rho+3}{2} \log 2, \; a_2= (\rho+3) \log |x|
$$
and $a_3= (\rho+1) \log ( 1+\sqrt{2})$, whence $a=a_1$. Our goal is to choose $L$, $m,$ $\rho$ and $\chi$ such that (\ref{rups}) contradicts (\ref{upper1}), and (\ref{rups3}) contradicts (\ref{allo}), whereby we necessarily have (\ref{rups4}).
Setting
$$
L=545, \; m = 25, \, \rho = 5 \; \mbox{ and } \; \chi=2,
$$
we find, after a short Maple computation, that, for $3 \cdot 10^{10} < p \leq P_0$ and all $|x| \geq 7$, we are in situation (\ref{rups4}), whereby
there exist integers $r_1, s_1, t_1$ and $t_2$ for which
\begin{equation} \label{pood}
(t_1+r_1 |k|) s_1=r_1t_2 p,
\end{equation}
where
$$
\left| \frac{r_1s_1}{\gcd(r_1,s_1)} \right|
\le 80
\; \mbox{ and } \;
\left| \frac{s_1t_1}{ \gcd(r_1,s_1)} \right| \le 41.
$$
Similarly, for $2 \cdot 10^{10} < p < 3 \cdot 10^{10}$, we choose
$$
L=545, \; m = 21, \, \rho = 5 \; \mbox{ and } \; \chi=2,
$$
to deduce the existence of integers $r_1, s_1, t_1$ and $t_2$ with (\ref{pood}) and
$$
\left| \frac{r_1s_1}{\gcd(r_1,s_1)} \right|
\le 75 \; \mbox{ and } \; \left| \frac{s_1t_1}{ \gcd(r_1,s_1)} \right| \le 39.
$$
Since, in all cases, we assume that $p > 2 \cdot10^{10}$, we thus have
$$
\max \{ |r_1|, |s_1|, |t_1| \} < p.
$$
Hence, from the fact that $\gcd(r_1, t_1)=\gcd(s_1,t_2 )=1$, it follows that $r_1 = \pm s_1$, whereby
$t_1+r_1 |k| = \pm t_2 p$. Without loss of generality, we may thus write
$$
u + r |k| = t p,
$$
where $r = |r_1|$ and $t=|t_2|$ are positive integers, $u = \pm t_1$,
$$
|u| \leq
\left\{
\begin{array}{l}
41 \; \mbox{ if } \; 3 \cdot 10^{10} < p \leq P_0, \\
39 \; \mbox{ if } \; 2 \cdot 10^{10} < p < 3 \cdot 10^{10} \\
\end{array}
\right.
$$
and
$$
r \leq
\left\{
\begin{array}{l}
80 \; \mbox{ if } \; 3 \cdot 10^{10} < p \leq P_0, \\
75 \; \mbox{ if } \; 2 \cdot 10^{10} < p < 3 \cdot 10^{10} \\
\end{array}
\right..
$$
The linear form $\Lambda$ defined in (\ref{lin-form}) may thus be rewritten as a linear form in two logarithms :
$$
\Lambda =p \log \left( \frac{|x|}{(1+\sqrt{2})^{t/r}} \right) - \log \left( \frac{\sqrt{2}}{(1+\sqrt{2})^{u/r}} \right).
$$
We are in position to apply the following state-of-the-art lower bound for linear forms in the logarithms of two algebraic numbers, due to Laurent (Theorem 2 of \cite{Lau}).
\begin{thm} \label{laurentlemma} (Laurent)
Let $ \alpha_{1} $ and $ \alpha_{2}$ be multiplicatively independent algebraic numbers, $ h $, $ \rho $ and $ \mu $ be real numbers with $ \rho > 1 $ and $ 1/3 \leq \mu \leq 1$. Set
\begin{center}
$ \begin{array}{ccc}
\sigma=\dfrac{1+2\mu-\mu^{2}}{2}, & \lambda= \sigma \log \rho, & H= \dfrac{h}{\lambda}+ \dfrac{1}{\sigma},
\end{array} $\\
$ \begin{array}{cc}
\omega=2 \left(1+ \sqrt{1+ \dfrac{1}{4H^{2}} } \right), & \theta=\sqrt{1+ \dfrac{1}{4H^{2}} }+ \dfrac{1}{2H}.
\end{array} $
\end{center}
Consider the linear form $ \Lambda=b_{2}\log \alpha_{2}-b_{1}\log \alpha_{1}, $ where $ b_{1} $ and $ b_{2} $ are positive integers. Put
$$
D= \left[ \mathbb{Q}(\alpha_{1}, \alpha_{2} ): \mathbb{Q} \right]/\left[ \mathbb{R}(\alpha_{1}, \alpha_{2} ): \mathbb{R} \right]
$$
and assume that
$$
h \geq \max \left\lbrace D \left( \log \left( \dfrac{b_{1}}{a_{2}}+ \dfrac{b_{2}}{a_{1}} \right) + \log \lambda +1.75 \right)+0.06, \lambda , \dfrac{D \log 2}{2} \right\rbrace ,
$$
$$
a_{i} \geq \max \left\lbrace 1, \rho \vert \log \alpha_{i} \vert - \log \vert \alpha_{i} \vert + 2Dh(\alpha_{i}) \right\rbrace \ \ \ \ \ (i=1,2),
$$
and
$$
a_{1}a_{2} \geq \lambda^{2}.
$$
\noindent Then
\begin{equation} \label{laurentall}
\log \vert \Lambda \vert \geq -C \left( h+ \dfrac{\lambda}{\sigma} \right)^{2} a_{1}a_{2}- \sqrt{\omega \theta} \left(h + \dfrac{\lambda}{\sigma} \right)- \log \left( C' \left(h+ \dfrac{\lambda}{\sigma} \right)^{2} a_{1}a_{2} \right)
\end{equation}
\noindent with
$$
C=\dfrac{\mu}{\lambda^{3}\sigma} \left( \dfrac{\omega}{6}+ \dfrac{1}{2} \sqrt{\dfrac{\omega^{2}}{9}+ \dfrac{8\lambda \omega^{5/4} \theta^{1/4}}{3 \sqrt{a_{1}a_{2}}H^{1/2} } + \dfrac{4}{3} \left( \dfrac{1}{a_{1}}+ \dfrac{1}{a_{2}} \right) \dfrac{\lambda \omega }{H} } \right)^{2}
$$
and
$$
C'=\sqrt{ \dfrac{C \sigma \omega \theta}{\lambda^{3} \mu} }.
$$
\end{thm}
We apply this result with
$$
b_1=1, \; b_2=p, \; \alpha_1 = \frac{\sqrt{2}}{(1+\sqrt{2})^{u/r}} \; \mbox{ and } \; \alpha_2= \frac{|x|}{(1+\sqrt{2})^{t/r}},
$$
so that $D=2r$,
$$
h (\alpha_1) \leq \frac{\log 2}{2} + \frac{|u|}{2r} \log ( 1+\sqrt{2}) \; \mbox{ and } \;
h (\alpha_2) \leq \log |x| + \frac{t}{2r} \log ( 1+\sqrt{2}).
$$
Further, we may choose
$$
a_1 = \left( 2r + \frac{\rho-1}{2} \right) \log 2 + |u| \left( 2 + \frac{\rho+1}{r} \right) \log ( 1 + \sqrt{2})
$$
and
$$
a_2= \left( 8 \, r + 1 \right) \log |x|.
$$
That this latter choice is a valid one follows from the fact that $|x|^p > (1+\sqrt{2})^{|k|}$, whereby
$$
t < \left( \frac{u}{|k|}+r \right) \frac{\log |x|}{\log ( 1+ \sqrt{2})},
$$
and the assumption that $\rho < 10^6$, say.
Choosing $\rho=283$ and $\mu = 0.6$, we find, for $p > 3 \cdot 10^{10}$, $|x| \geq 7$ and all $-41 \leq u \leq 41$, $1 \leq r \leq 80$, that inequality (\ref{laurentall}) contradicts (\ref{upper1}), whilst the same is true (with identical parameter choices), for primes $p$ with $2 \cdot 10^{10} < p < 3 \cdot 10^{10}$,
$-38 \leq u \leq 38$, $1 \leq r \leq 75$. The final case is $(r,u)=(75, \pm 39)$, which reduces immediately to that with $(r_0,u_0)=(25,\pm 13)$ upon dividing by $\gcd (r,u)$.
Proposition~\ref{temp} thus follows, as desired.
\section{Frey curves and Hilbert modular forms} \label{sec4}
\subsection{The Frey Curve}
We next return to solutions to (\ref{eqn:main}), to which we shall now associate a Frey curve :
\begin{equation}\label{eqn:E}
E_{s,k} \; : \; Y^2=X(X+1)(X+ s\cdot \epsilon^{k} \sqrt{2})
\end{equation}
where the choice of sign $s=\pm 1$ and the value of
$k$ arises from Lemma~\ref{lem:descent}.
By an easy application of Tate's algorithm we find the following.
\begin{lem}
The curve $E_{s,k}$ has minimal discriminant
\[
\Delta_{\mathrm{min}}= 32 \epsilon^{2(k+\ell)} \alpha^{2p}
\]
and conductor
\[
\mathfrak{N}=(\sqrt{2})^9 \cdot \prod_{\mathfrak{q} \mid \alpha} \mathfrak{q}.
\]
\end{lem}
Our goal is to use the arithmetic of this Frey curve to show that any solution to (\ref{eqn:main}) necessarily closely resembles one of the ``trivial'' ones with $x=1$ and $y=k=\pm 1$.
\subsection{Irreducibility}
We shall make use of the following result
due to Freitas and Siksek \cite{FS2},
which is based on the work of David \cite{DavidI}
and Momose \cite{Momose}.
\begin{prop}\label{prop:irred}
Let $K$ be a totally real Galois number field of degree $d$,
with ring of integers ${\mathcal O}_K$ and Galois group $G=\Gal(K/{\mathbb Q})$.
Let $S=\{0,12\}^G$, which we think of as the set of sequences of values $0$, $12$
indexed by $\tau \in G$.
For $\mathbf{s}=(s_\tau) \in S$ and $\alpha \in K$, define the \textbf{twisted norm associated
to $\mathbf{s}$} by
\[
\mathcal{N}_\mathbf{s}(\alpha)= \prod_{\tau \in G} \tau(\alpha)^{s_\tau}.
\]
Let $\epsilon_1,\dots,\epsilon_{d-1}$
be a basis for the unit group of $K$,
and define
\begin{equation}\label{eqn:As}
A_\mathbf{s}:=\Norm \left( \gcd ( ( \mathcal{N}_\mathbf{s}(\epsilon_1)-1) {\mathcal O}_K,\ldots, (\mathcal{N}_\mathbf{s}(\epsilon_{d-1})-1 ) {\mathcal O}_K) \right).
\end{equation}
Let $B$ be the least common multiple of the $A_\mathbf{s}$ taken over all $\mathbf{s} \ne (0)_{\tau \in G}$,
$(12)_{\tau \in G}$.
Let $p \nmid B$ be a rational prime, unramified in $K$, such that $p \geq 17$ or $p = 11$.
Let $E/K$ be an elliptic curve, and $\mathfrak{q} \nmid p$ be a prime of good reduction for $E$.
Define
\[
P_\mathfrak{q}(X)=X^2-a_\mathfrak{q}(E) X + \Norm(\mathfrak{q})
\]
to be the characteristic polynomial
of Frobenius for $E$ at $\mathfrak{q}$. Let $r \ge 1$ be an integer such that
$\mathfrak{q}^r$ is principal.
If $E$ is semistable at all $\mathfrak{p} \mid p$
and $\overline{\rho}_{E,p}$ is reducible then
\begin{equation}\label{eqn:res}
p \mid \Res(\, P_\mathfrak{q}(X) \, , \, X^{12 r}-1\, )
\end{equation}
where $\Res$ denotes the resultant of the two polynomials.
\end{prop}
We now return to the case where $E=E_{s,k}$ is our Frey curve \eqref{eqn:E}.
\begin{lem}\label{lem:abirred}
For $E=E_{s,k}$ as above,
$\overline{\rho}_{E,p}$ is irreducible for $p \ge 5$.
\end{lem}
\begin{proof}
We suppose first that $p\ge 17$ or $p=11$, and
apply Proposition~\ref{prop:irred}. The constant $B$
in the proposition is simply
$$
B=\Norm(\epsilon^{12}-1)=-2^5 \cdot 5^2 \cdot 7^2,
$$
whereby if $p \ge 11$ then $p \nmid B$.
Suppose that $\overline{\rho}_{E,p}$ is reducible.
From \eqref{eqn:main}, if $q \equiv 3$, $5 \pmod{8}$
then $q \nmid x$ and so $E$ has good reduction
at (the inert prime) $q$.
We write $\mu_q$ for the multiplicative order
of $\epsilon$ modulo $q$.
Note that the trace $a_q(E_{s,k})$ depends only on the choice
of sign $s=\pm 1$ and on the value of $k$
modulo $\mu_q$.
We shall restrict our attention to the following set of primes
\[
Q=\{
3, 5, 11, 13, 19, 29, 43, 59, 83, 109, 131, 139, 251, 269, 307, 419, 461, 659
\}.
\]
The primes $q \in Q$ satisfy $q \equiv 3$, $5 \pmod{8}$ and also
\[
\mu_q \mid 9240=2^3 \cdot 3 \cdot 5 \cdot 7 \cdot 11.
\]
Recall also that $k$ is odd,
and that $\pm \epsilon^{k}\sqrt{2}-1=\epsilon^{\ell} \alpha^p
\not\equiv 0 \pmod{q}$.
Let
\begin{equation}\label{eqn:S}
S=\{ (t,m) \; : \; \text{$0 \le m < 9240$ odd,\; $t=\pm 1$,\;
$q \nmid (t \cdot \epsilon^m \sqrt{2}-1)$ for all $q \in Q$}
\}.
\end{equation}
It is clear that there is some $(t,m) \in S$ such that
$a_q(E_{s,k})=a_q(E_{t,m})$ for all $q \in Q$.
By Proposition~\ref{prop:irred}, we see that $p$ divides $R_{(t,m)}$ where
\[
R_{(t,m)} = \gcd\{\Res(x^{12}-1, x^2-a_q(E_{t,m})x+q^2) \; :\; q \in Q \}.
\]
Using a short {\tt Magma} \cite{magma} script, we computed $R_{(t,m)}$
for $(t,m) \in S$ and checked that $R_{(t,m)}$ is divisible only by
powers of $2$, $3$, $5$, $7$ and $13$.
Thus $\overline{\rho}_{E,p}$ is
irreducible for $p \ge 17$ or $p=11$.
We now briefly treat $p \in \{ 5, 7, 13 \}$. In each case, we will
in fact show that there is no elliptic curve $E/K$
with full $2$-torsion and a $p$-isogeny.
Here we found {\tt Magma}'s built-in {\tt Small
Modular Curve} package invaluable.
\begin{enumerate}
\item[(a)] An elliptic curve $E/K$ with full $2$-torsion and
a $5$-isogeny is isogenous over $K$ to an elliptic curve
with a $20$-isogeny, and so gives rise to a non-cuspidal
$K$-point on $X_0(20)$. A model for $X_0(20)$ is given by
the elliptic curve
\[
X_0(20) \; : \; y^2 = x^3 + x^2 + 4x + 4,
\]
which has Cremona reference 20A1.
This has rank $0$ over $K$, and in fact a full list
of $K$-points is $\{\infty, (4 , \pm 10), (0 , \pm 2), (-1 , 0 )\}$.
These points are all cuspidal, completing the proof for $p=5$.
\item[(b)] An elliptic curve $E/K$ with a $7$-isogeny
and a point of order $2$ gives rise to a non-cuspidal
$K$-point on $X_0(14)$. A model for $X_0(14)$ is
given by the elliptic curve
\[
X_0(14) \; : \; y^2 + x y + y = x^3 + 4 x - 6,
\]
which has Cremona reference 14A1.
This again has rank $0$ over $K$. The $K$-points
are
$\{\infty, (9 , 23),
(1 , -1 ), (2 , -5 ),
(9 , -33 ),
(2 , 2 )
\}$. The first four points are cusps. The last two
correspond to elliptic curves with $j$-invariants
$-3375$ and $16581375$. It turns out that elliptic curves
with these $j$-invariants have only one point of order $2$ over $K$.
This completes the proof for $p=7$.
\item[(c)] An elliptic curve $E/K$ with a $13$-isogeny
and a point of order $2$ gives rise to a non-cuspidal
$K$-point on $X_0(26)$, which has genus $2$. We shall in fact work
with $X_0(26)/\langle w_{13} \rangle$, where $w_{13}$ is the
Atkin-Lehner involution. This quotient is the
elliptic curve with Cremona reference 26B1:
\[
X_0(26)/\langle w_{13} \rangle \; : \; y^2 + x y + y = x^3 - x^2 - 3 x + 3.
\]
Again this has rank $0$ over $K$, and a full list of $K$-points
is given by $\{
\infty,
(-1 , -2),
(-1 , 2 ),
(1 , -2 ),
(1 , 0 ),
(3 , -6 ),
(3 , 2 )
\}$. The only $K$-points we obtain by pulling back to $X_0(26)$
are cusps. This completes the proof.
\end{enumerate}
\end{proof}
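Two computational claims in this proof can be reproduced without {\tt Magma}, representing elements of ${\mathbb Z}[\sqrt{2}]$ as integer pairs $(a,b) \leftrightarrow a+b\sqrt{2}$: the value of $B$, and the fact that each $q \in Q$ is $3$ or $5 \pmod 8$ with $\mu_q \mid 9240$. A sketch:
\begin{verbatim}
def mul(x, y, q=None):
    a, b = x[0]*y[0] + 2*x[1]*y[1], x[0]*y[1] + x[1]*y[0]
    return (a % q, b % q) if q else (a, b)

def power(x, n, q=None):
    r = (1, 0)
    while n:
        if n & 1: r = mul(r, x, q)
        x, n = mul(x, x, q), n >> 1
    return r

eps = (1, 1)                         # eps = 1 + sqrt(2)
a, b = power(eps, 12)                # eps^12 = 19601 + 13860*sqrt(2)
print((a - 1)**2 - 2 * b**2)         # -39200 = -2^5 * 5^2 * 7^2

Q = [3, 5, 11, 13, 19, 29, 43, 59, 83, 109, 131, 139,
     251, 269, 307, 419, 461, 659]
for q in Q:
    assert q % 8 in (3, 5)                   # q inert in Z[sqrt(2)]
    assert power(eps, 9240, q) == (1, 0)     # mu_q divides 9240
print("all checks pass")
\end{verbatim}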
\subsection{Level-lowering}
Suppose that $p \ge 5$.
By \cite{FLS}, elliptic curves
over real quadratic
fields are modular; in particular $E$ is modular.
We now
apply the standard level-lowering recipe found in \cite{FS}
and based on theorems of Fujiwara, Jarvis and Rajaei (it is
here that we require Lemma~\ref{lem:abirred}). From the recipe
we know that $\overline{\rho}_{E,p} \sim \overline{\rho}_{f,\mathfrak{p}}$
for some Hilbert newform over $K$ of level $\mathfrak{M}=(\sqrt{2})^9$
and prime ideal $\mathfrak{p} \mid p$. Using {\tt Magma},
we find that the space of Hilbert newforms of level $\mathfrak{M}$
is $8$-dimensional, and in fact decomposes into $8$ rational
eigenforms.
Through a small search, we found $8$ elliptic curves over $K$ of conductor
$\mathfrak{M}$. By computing their traces at small prime ideals,
we checked that they are in fact pairwise non-isogenous. These
elliptic curves are all modular by the same theorem cited above, and
hence must correspond to the $8$ Hilbert newforms of level
$\mathfrak{M}$. Thus $\overline{\rho}_{E,p} \sim \overline{\rho}_{F_i,p}$
where $F_1,\dots,F_8$ are the $8$ elliptic curves given as follows :
\begin{gather*}
F_1 \; : \;
Y^2 = X^3 + \sqrt{2} X^2 + (\sqrt{2} - 1) X,\\
F_2 \; : \;
Y^2 = X^3 + (-\sqrt{2} + 3) X^2 + (-\sqrt{2} + 2) X,\\
F_3 \; : \;
Y^2 = X^3 + (2 \sqrt{2} - 1) X^2 + (-\sqrt{2} + 2) X,\\
F_4 \; : \;
Y^2 = X^3 + (\sqrt{2} - 2) X^2 + (-\sqrt{2} + 1) X,\\
F_5 \; : \;
Y^2 = X^3 + (-\sqrt{2} + 1) X^2 - \sqrt{2} X,\\
F_6 \; : \;
Y^2 = X^3 + (\sqrt{2} - 1) X^2 - \sqrt{2} X,\\
F_7 \; : \;
Y^2 = X^3 + (\sqrt{2} + 3) X^2 + (\sqrt{2} + 2) X,\\
F_8 \; : \;
Y^2 = X^3 - \sqrt{2} X^2 + (-\sqrt{2} - 1) X.
\end{gather*}
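A pure-Python stand-in for the trace comparison (the actual check used {\tt Magma}) is sketched below: it computes the traces of $F_1,\dots,F_8$ at the primes above several split rational primes, where $\sqrt{2}$ reduces to $\pm\delta$ in ${\mathbb F}_q$; if fewer than eight distinct trace vectors were to appear, one would simply add more primes.
\begin{verbatim}
def chi(a, q):                   # Legendre symbol via Euler's criterion
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1)//2, q) == 1 else -1)

def trace(c2, c1, q):            # a_q of Y^2 = X^3 + c2 X^2 + c1 X
    return -sum(chi(x * (x*x + c2*x + c1) % q, q) for x in range(q))

# coefficients of X^2 and X for F_1..F_8 as (rational, sqrt(2)) parts
curves = [((0, 1), (-1, 1)), ((3, -1), (2, -1)), ((-1, 2), (2, -1)),
          ((-2, 1), (1, -1)), ((1, -1), (0, -1)), ((-1, 1), (0, -1)),
          ((3, 1), (2, 1)), ((0, -1), (-1, -1))]

def sqrt2_mod(q):
    return next(d for d in range(q) if d * d % q == 2)

vecs = []
for c2, c1 in curves:
    v = []
    for q in (7, 17, 23, 31, 41):   # split primes: 2 is a square mod q
        for delta in (sqrt2_mod(q), q - sqrt2_mod(q)):
            v.append(trace((c2[0] + c2[1]*delta) % q,
                           (c1[0] + c1[1]*delta) % q, q))
    vecs.append(tuple(v))
print(len(set(vecs)), "distinct trace vectors out of 8")
\end{verbatim}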
\begin{lem}\label{lem:KO}
Let $E=E_{s,k}$ and let $F$ be one of the eight
elliptic curves $F_1,\dots,F_8$ above.
Suppose that $\overline{\rho}_{E,p} \sim \overline{\rho}_{F,p}$ and
let $\mathfrak{q} \nmid 2$ be a prime ideal of $K$, and
$q$ be the rational prime such that $\mathfrak{q} \mid q$.
\begin{enumerate}
\item[(i)] If $\mathfrak{q} \nmid (s \epsilon^k \sqrt{2}-1)$ then
$a_\mathfrak{q}(E) \equiv a_\mathfrak{q}(F) \pmod{p}$.
\item[(ii)] If $\mathfrak{q} \mid (s \epsilon^k \sqrt{2}-1)$
and $q \not\equiv 7 \pmod{8}$ then
$\Norm(\mathfrak{q})+1 \equiv a_\mathfrak{q}(F) \pmod{p}$.
\item[(iii)] If $\mathfrak{q} \mid (s \epsilon^k \sqrt{2}-1)$
and $q \equiv 7 \pmod{8}$ then
$\Norm(\mathfrak{q})+1 \equiv -a_\mathfrak{q}(F) \pmod{p}$.
\end{enumerate}
\end{lem}
\begin{proof}
From the model \eqref{eqn:E},
it is easy to see that $E_{s,k}$ has good reduction at $\mathfrak{q}$ in case (i),
split multiplicative reduction in case (ii) and non-split
multiplicative reduction in case (iii).
If $q \ne p$,
the lemma follows by comparing traces of the images of Frobenius at $\mathfrak{q}$
for the representations $\overline{\rho}_{E,p}$ and
$\overline{\rho}_{F,p}$. For $q=p$ see \cite[Proposition 3]{KO}.
\end{proof}
\begin{lem}\label{lem:sign}
The $s=\pm 1$ sign in \eqref{eqn:ppp}, \eqref{eqn:rel1} and \eqref{eqn:E}
is in fact $+1$. Moreover, either
$k \equiv -1 \pmod{9240}$ and
$\overline{\rho}_{E,p} \sim \overline{\rho}_{F_2,p}$
or $k \equiv 1 \pmod{9240}$ and
$\overline{\rho}_{E,p} \sim \overline{\rho}_{F_7,p}$.
\end{lem}
\begin{proof}
We shall use ideas from the proof of Lemma~\ref{lem:abirred}.
For
a prime ideal $\mathfrak{q}$ we let $\mu_\mathfrak{q}$ be the order of $\epsilon$
modulo $\mathfrak{q}$.
In the proof of Lemma~\ref{lem:abirred}, we restricted ourselves
to inert primes. It is useful to now also use split primes.
Let
\[
\mathfrak{Q}=\{ \mathfrak{q} \; : \; \text{$\mu_\mathfrak{q} \mid 9240$, \;
$3 \le \Norm(\mathfrak{q}) < 1000$} \}.
\]
Fix $i \in \{ 1, 2, \dots, 8 \}$ and suppose $\overline{\rho}_{E,p} \sim
\overline{\rho}_{F_i,p}$ where $E=E_{s,k}$.
We let $S$ be given by \eqref{eqn:S}, whereby we know that there
is some $(t,m) \in S$ such that $s=t$ and $k \equiv m \pmod{9240}$.
Let $\mathfrak{q} \in \mathfrak{Q}$, and let $q$ be the
rational prime satisfying $\mathfrak{q} \mid q$.
By Lemma~\ref{lem:KO} we have the following.
\begin{enumerate}
\item[(a)] If $\mathfrak{q} \nmid (t \epsilon^m \sqrt{2}-1)$ then
$a_{\mathfrak{q}}(E_{t,m})=a_{\mathfrak{q}}(E_{s,k}) \equiv a_{\mathfrak{q}}(F_i) \pmod{p}$.
\item[(b)] If $\mathfrak{q} \mid (t \epsilon^m \sqrt{2}-1)$
and $q \not\equiv 7 \pmod{8}$ then
$\Norm(\mathfrak{q})+1 \equiv a_\mathfrak{q}(F_i) \pmod{p}$.
\item[(c)] If $\mathfrak{q} \mid (t \epsilon^m \sqrt{2}-1)$
and $q \equiv 7 \pmod{8}$ then
$\Norm(\mathfrak{q})+1 \equiv -a_\mathfrak{q}(F_i) \pmod{p}$.
\end{enumerate}
Define
\[
\beta_{t,m,i}(\mathfrak{q})=\begin{cases}
a_{\mathfrak{q}}(E_{t,m}) -a_{\mathfrak{q}}(F_i) & \text{if $\mathfrak{q} \nmid (t \epsilon^m \sqrt{2}-1)$}\\
\Norm(\mathfrak{q})+1 - a_\mathfrak{q}(F_i) &
\text{
if $\mathfrak{q} \mid (t \epsilon^m \sqrt{2}-1)$ and $q \not\equiv 7 \pmod{8}$
}
\\
\Norm(\mathfrak{q})+1 + a_\mathfrak{q}(F_i) &
\text{
if $\mathfrak{q} \mid (t \epsilon^m \sqrt{2}-1)$
and $q \equiv 7 \pmod{8}$
}.
\end{cases}
\]
We let
\[
\gamma_{t,m,i}= \gcd ( \beta_{t,m,i}(\mathfrak{q}) : \mathfrak{q} \in \mathfrak{Q}
).
\]
Note that if $s=t$, $k \equiv m \pmod{9240}$ and $\overline{\rho}_{E,p} \sim \overline{\rho}_{F_i,p}$,
then
$p \mid \gamma_{t,m,i}$. Using a {\tt Magma} script, we computed
$\gamma_{t,m,i}$ for all $(t,m) \in S$ and $i \in \{ 1,2, \dots, 8 \}$.
We found that all of these are divisible only by $2$ and $3$,
except for $\gamma_{1,9239,2}$ and $\gamma_{1,1,7}$
which are both zero. Hence $s=+1$, and either
$k \equiv -1 \pmod{9240}$ and $F_i=F_2$
or $k \equiv 1 \pmod{9240}$ and $F_i=F_7$.
\end{proof}
Note, in fact, that $F_2$ is isomorphic to $E_{1,-1}$ and
$F_7$ is isomorphic to $E_{1,1}$ (which explains why
$\gamma_{1,9239,2}=\gamma_{1,1,7}=0$).
We now simplify our Frey curve in \eqref{eqn:E} to take
account of the sign $s=+1$. We denote the new Frey curve
by
\[
E_k \; : \; Y^2=X(X+1)(X+\epsilon^k \sqrt{2}).
\]
Equations \eqref{eqn:main} and \eqref{eqn:ppp}
satisfy the following useful symmetry.
\begin{lem}\label{lem:choice}
Let $(x,y,k,\ell,\alpha)$ be a solution to equations \eqref{eqn:main} and
\eqref{eqn:ppp}, satisfying \eqref{stuffy}.
Then $(x,-y,-k,-\ell,(-1)^\ell \overline{\alpha})$ is also
a solution to equations \eqref{eqn:main} and \eqref{eqn:ppp}, also satisfying \eqref{stuffy}.
\end{lem}
\begin{proof}
The lemma follows on conjugating equations \eqref{eqn:ppp} and
\eqref{eqn:rel1},
observing
that $\overline{\epsilon}=-\epsilon^{-1}$, and
recalling that $k$ is odd.
\end{proof}
\section{Arithmetic information from Frey curves}
We continue with the same notation as in the previous section. Our basic goal is to obtain lower bounds for the exponent $|k|$ in the event that $k \neq 1$.
\subsection{Sieving : part I}
We begin by sharpening (part of) Lemma \ref{lem:sign}.
\begin{prop}\label{prop:1modM}
Suppose that $p \ge 19$ and let
\begin{equation}\label{eqn:M}
M=9240 \prod_{\overset{3 \le \ell < 2.4 \times 10^5}{\text{$\ell$ prime}}} \ell.
\end{equation}
Then $k \equiv \pm 1 \pmod{M}$. In particular,
$k=\pm 1$ or $\log_{10}{\lvert k \rvert} \ge 103944$.
\end{prop}
\begin{proof}
We start by defining $M_0=9240=2^3 \cdot 3 \cdot 5 \cdot 7 \cdot 11$.
We know from Lemma~\ref{lem:sign} that $k \equiv \pm 1 \pmod{M_0}$.
By Lemma~\ref{lem:choice}, we may assume that $k \equiv 1 \pmod{M_0}$,
and will use this to deduce that $k \equiv 1 \pmod{M}$.
It then plainly follows that if $k \equiv -1 \pmod{M_0}$
then $k \equiv -1 \pmod{M}$.
Suppose $k \equiv 1 \pmod{M_0}$. Let
\[
\ell_1=3, \quad \ell_2=5, \quad \ell_3=7, \dots
\]
be the sequence of primes starting with $3$. We define
$M_{n}=\ell_n \cdot M_{n-1}$. We will show inductively that
$k \equiv 1 \pmod{M_{n}}$ until $M_n=M$.
A direct computation, showing that $M$ somewhat exceeds $10^{103944}$, yields the last
statement in the proposition.
For the inductive step, suppose $k \equiv 1 \pmod{M_{n-1}}$.
Our strategy is to write down a small set
$\mathfrak{Q}$
of odd prime ideals
$\mathfrak{q}$ of $K$ satisfying
\[
\ell_n \mid \mu_\mathfrak{q}, \qquad \mu_\mathfrak{q} \mid M_{n};
\]
here as before,
$\mu_\mathfrak{q}$ is the multiplicative order of $\epsilon$ modulo $\mathfrak{q}$.
Let
\[
\mathcal{K}=\{1,1+M_{n-1},1+2M_{n-1},\dots,1+(\ell_{n}-1) M_{n-1} \}.
\]
We know that $k \equiv m \pmod{M_n}$ for some $m \in \mathcal{K}$.
By the previous section, $p$ divides $\beta_{1,m,7}(\mathfrak{q})$ for all
$\mathfrak{q} \in \mathfrak{Q}$. For $m \in \mathcal{K}$,
we compute $\gcd_{\mathfrak{q} \in \mathfrak{Q}} (\beta_{1,m,7}(\mathfrak{q}))$. With a
sufficiently large initial set $\mathfrak{Q}$, we
found that this gcd is divisible only by primes $\le 17$ unless
$m=1$ (in which case it is $0$). This shows that $k \equiv 1 \pmod{M_{n}}$.
The {\tt Magma} script executing this proof took roughly 218 hours to run on a 2200MHz AMD Opteron.
\end{proof}
\subsection{Sieving : part II}
In view of Proposition~\ref{prop:1modM}, we suppose $k \equiv \pm 1 \pmod{M}$
where $M$ is given by \eqref{eqn:M}. The objective of this subsection is to
show the following.
\begin{prop}\label{prop:k1modp}
For primes $19 \leq p < 2 \cdot10^{10}$ we have $k \equiv \ell \equiv \pm 1 \pmod{p}$,
where $k$ and $\ell$ are the exponents in \eqref{eqn:ppp}. In particular, via
\eqref{stuffy}, we have that $\ell=\pm 1$.
\end{prop}
We shall suppose that $k \equiv 1 \pmod{M}$ and deduce $k \equiv \ell \equiv 1
\pmod{p}$. The Proposition then follows from Lemma~\ref{lem:choice}.
Fix $p \ge 19$. Inspired by \cite[Lemma 7.4]{BMS},
we choose an auxiliary integer $q$ satisfying certain conditions, which we
enumerate as needed.
The first two conditions are the following.
\begin{enumerate}
\item[(i)] $q \equiv 1 \pmod{8}$ is prime.
\item[(ii)] $q =np+ 1$, where $n$ is an integer.
\end{enumerate}
Fix $\delta \in {\mathbb F}_q$ satisfying $\delta^2 \equiv 2 \pmod q$ (which we may do, via assumption (i)). Let
$$
\mathfrak{q}_1=q {\mathcal O}_K + (\sqrt{2}-\delta){\mathcal O}_K \; \mbox{ and } \;
\mathfrak{q}_2=q {\mathcal O}_K + (\sqrt{2}+\delta){\mathcal O}_K.
$$
Then $\mathfrak{q}_1$ and $\mathfrak{q}_2$ are prime ideals with residue field ${\mathbb F}_q$,
\[
\sqrt{2} \equiv \delta \pmod{\mathfrak{q}_1} \; \mbox{ and } \; \sqrt{2} \equiv -\delta \pmod{\mathfrak{q}_2}.
\]
Let
$$
\mathcal{F}_1 \; : \; Y^2=X^3+(\delta+3) X^2+(\delta+2)X
$$
and
$$
\mathcal{F}_2 \; : \; Y^2=X^3+(-\delta+3) X^2+(-\delta+2)X.
$$
These are the reductions of $F_7$ modulo $\mathfrak{q}_1$ and $\mathfrak{q}_2$,
respectively. We shall suppose that $q$ is chosen so that
the following condition on the traces of $\mathcal{F}_i$ is satisfied.
\begin{enumerate}
\item[(iii)] $a_q(\mathcal{F}_i) \not\equiv \pm 2 \pmod{p}$
for $i=1$, $2$.
\end{enumerate}
Note that $q \equiv 1 \pmod{p}$ by condition (ii). By Lemma~\ref{lem:KO}, we
see that
$$
\mathfrak{q}_i \nmid (\epsilon^k \sqrt{2}-1).
$$
Thus by
Lemma~\ref{lem:descent}, $q \nmid x$. Appealing to Lemma~\ref{lem:KO}
again, we have
\begin{equation}\label{eqn:traces}
a_{\mathfrak{q}_1}(E_k) \equiv a_q(\mathcal{F}_1) \pmod{p} \; \mbox{ and } \;
a_{\mathfrak{q}_2}(E_k) \equiv a_q(\mathcal{F}_2) \pmod{p}.
\end{equation}
By \eqref{eqn:rel1} and Lemma~\ref{lem:sign}, we have
\[
y+\left(\frac{x^p+3}{4} \right) \sqrt{2} =\epsilon^{k},
\]
whence, using the fact that $k$ is odd,
\begin{equation}\label{eqn:xepsk}
\left(\frac{x^p+3}{2} \right) \sqrt{2}= \epsilon^k+\epsilon^{-k}.
\end{equation}
Let
\[
\mu_n({\mathbb F}_q)=\{ \mu \in {\mathbb F}_q^* \; : \; \mu^n \equiv 1 \pmod{q} \},
\]
whereby, as $q \nmid x$ and $q=np+1$, we see that
\[
(x^p \mod{q}) \in \mu_n({\mathbb F}_q).
\]
Let
\[
\mathcal{W}_q=\left\{ e \in {\mathbb F}_q \; :
\; e+e^{-1}=\frac{(\mu+3)\delta}{2} \; \text{for some $\mu \in \mu_n({\mathbb F}_q)$} \right\}.
\]
By \eqref{eqn:xepsk}, we have that
\begin{equation}\label{eqn:epske}
\epsilon^k \equiv e \pmod{\mathfrak{q}_1} \; \mbox{ and } \; \epsilon^{k} \equiv -e^{-1} \pmod{\mathfrak{q}_2},
\end{equation}
for some $e \in \mathcal{W}_q$. Moreover, since $\mathfrak{q}_i \nmid (\epsilon^k \sqrt{2}-1)$, if we define
\[
\mathcal{X}_q=\{e \in \mathcal{W}_q \; : \; e \not\equiv \delta^{\pm 1} \pmod{q}\},
\]
then necessarily \eqref{eqn:epske} holds for some $e \in \mathcal{X}_q$.
In practice, we hope to be able to find a prime $q$ satisfying the foregoing
and forthcoming conditions, with $n$ small. Computing $\mathcal{W}_q$
amounts to solving $n$ quadratic equations in ${\mathbb F}_q$. The size of
$\mathcal{W}_q$ and thus $\mathcal{X}_q$ is at most $2n$. Thus
we know that the reduction
of $\epsilon^k$ modulo $\mathfrak{q}_1$ belongs to this relatively small set.
We will now refine $\mathcal{X}_q$, defining subsets
$\mathcal{Y}_q$ and $\mathcal{Z}_q$ that also contain
the reduction of $\epsilon^k$ modulo $\mathfrak{q}_1$.
To do this,
also suppose that $q$ is chosen so that the following condition is satisfied.
\begin{enumerate}
\item[(iv)] $n \mid M$.
\end{enumerate}
Since $k \equiv 1 \pmod{M}$, we see that $(q-1) \mid (k-1) p$, whereby
\[
(\epsilon^{k-1})^p \equiv 1 \pmod{\mathfrak{q}_i}.
\]
Note that $\epsilon \equiv 1+\delta \pmod{\mathfrak{q}_1}$.
Let
\[
\mathcal{Y}_q=\left\{ e \in \mathcal{X}_q \; : \; \left(\frac{e}{1+\delta}\right)^p \equiv 1 \pmod{q}\right\}.
\]
We see that \eqref{eqn:epske} holds for some $e \in \mathcal{Y}_q$.
Heuristically, the probability that a random element of $\mathcal{X}_q$
belongs to $\mathcal{Y}_q$ is $1/n$. Since $\# \mathcal{X}_q \le 2n$,
we expect that $\#\mathcal{Y}_q=O(1)$.
For the next refinement,
we will use information derived from the modular approach,
as given in \eqref{eqn:traces}.
Write
$$
\mathcal{E}_{1,e} \; : \;
Y^2=X(X+1)(X+e \cdot \delta)
\; \mbox{ and } \;
\mathcal{E}_{2,e} \; : \;
Y^2=X(X+1)(X+e^{-1}\cdot \delta).
$$
Given \eqref{eqn:epske},
these two elliptic curves over ${\mathbb F}_q$
are the reductions of the Frey curve $E_k$ modulo $\mathfrak{q}_1$, $\mathfrak{q}_2$.
Let $\mathcal{Z}_q$ be the set of $e \in \mathcal{Y}_q$ such that
\[
a_q(\mathcal{E}_{1,e}) \equiv a_q(\mathcal{F}_1) \pmod{p} \; \mbox{ and } \;
a_q(\mathcal{E}_{2,e}) \equiv a_q(\mathcal{F}_2) \pmod{p}.
\]
We know from \eqref{eqn:traces} that \eqref{eqn:epske} holds
for some $e \in \mathcal{Z}_q$. Note that $\mathcal{Z}_q$
cannot be empty, as the value $k=1$ leads to a solution to our
original equation~\eqref{eqn:main}. Thus certainly,
$1+\delta$ (which is the reduction of $\epsilon$ modulo $\mathfrak{q}_1$)
must appear in $\mathcal{Z}_q$. This, of course, is a useful check
on the correctness of our computations.
It is reasonable to expect on probabilistic grounds that
$\mathcal{Z}_q=\{1+\delta\}$. In fact, this is one of our two final
assumptions on $q$.
\begin{enumerate}
\item[(v)] $\mathcal{Z}_q=\{ 1+\delta\}$.
\item[(vi)] $(1+\delta)^n \not \equiv 1 \pmod{q}$.
\end{enumerate}
From (v), we see that
\[
(1+\delta)^k \equiv \epsilon^k \equiv 1+\delta \pmod{\mathfrak{q}_1}.
\]
Thus the multiplicative order of $1+\delta$ in ${\mathbb F}_q^*$ divides
$k-1$. Since $q-1=np$, we have from (vi) that
this multiplicative order must be divisible by $p$. Therefore,
$k \equiv 1 \pmod{p}$.
We now turn our attention to $\ell$.
Reducing \eqref{eqn:ppp} modulo $\mathfrak{q}_1$
(and recalling the value of the sign from Lemma~\ref{lem:sign})
we have
\[
(1+\delta)\delta - (1+\delta)^\ell \alpha^p \equiv 1 \pmod{\mathfrak{q}_1}.
\]
As $\delta^2 \equiv 2 \pmod{\mathfrak{q}_1}$, it follows that
\[
(1+\delta)^\ell \alpha^p \equiv 1+\delta \pmod{\mathfrak{q}_1}
\]
and so
\[
(1+\delta)^{n(\ell-1)} \equiv \alpha^{-np} \equiv 1 \pmod{\mathfrak{q}_1}.
\]
From (vi), we conclude that $\ell \equiv 1 \pmod{p}$.
The following lemma summarizes the above.
\begin{lem}
Suppose $k \equiv 1 \pmod{M}$.
Suppose there is a prime $q$ satisfying conditions (i)--(vi).
Then
$$
k \equiv \ell \equiv 1 \pmod{p}.
$$
\end{lem}
\begin{proof}[Proof of Proposition~\ref{prop:k1modp}]
As observed previously, it is sufficient to suppose
that $k \equiv 1 \pmod{M}$ and show that
$k \equiv \ell \equiv 1 \pmod{p}$, for primes $p$
in the range $19 \le p < 2\cdot 10^{10}$.
We used a {\tt Magma} script which for each prime $p$ in this range,
finds a prime $q$ satisfying conditions (i)--(vi) above.
The total processor time for the proof is roughly 1946 hours,
although the computation,
running on a 2200MHz AMD Opteron,
was spread over 10 processors,
making the actual computation time less than nine days.
\end{proof}
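To make the search concrete, the following pure-Python sketch tests conditions (i)--(vi) for a single small prime (it is, of course, far too slow to replicate the full {\tt Magma} run over all $19 \le p < 2 \cdot 10^{10}$); divisibility $n \mid M$ is tested against the prime factorization of $M$ in \eqref{eqn:M}:
\begin{verbatim}
from sympy import isprime, factorint

def divides_M(n):
    def exp_in_M(l):
        if l == 2: return 3
        if l in (3, 5, 7, 11): return 2
        return 1 if l < 240000 else 0
    return all(e <= exp_in_M(l) for l, e in factorint(n).items())

def chi(a, q):
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1)//2, q) == 1 else -1)

def a_q(c2, c1, q):              # trace of Y^2 = X^3 + c2 X^2 + c1 X
    return -sum(chi(x * (x*x + c2*x + c1), q) for x in range(q))

def find_q(p):
    for n in range(8, 4000, 8):                    # 8 | n gives (i)
        q = n * p + 1                              # condition (ii)
        if not (divides_M(n) and isprime(q)):      # (iv) and (i)
            continue
        delta = next(d for d in range(q) if d * d % q == 2)
        inv = lambda a: pow(a, q - 2, q)
        t1 = a_q((delta + 3) % q, (delta + 2) % q, q)   # F_1 reduced
        t2 = a_q((3 - delta) % q, (2 - delta) % q, q)   # F_2 reduced
        if any((t + u) % p == 0 for t in (t1, t2) for u in (2, -2)):
            continue                               # (iii) fails
        if pow(1 + delta, n, q) == 1:              # (vi) fails
            continue
        Z = set()                                  # compute Z_q
        for mu in range(1, q):
            if pow(mu, n, q) != 1:                 # mu in mu_n(F_q)?
                continue
            c = (mu + 3) * delta % q * inv(2) % q  # e + 1/e = c
            disc = (c * c - 4) % q
            if chi(disc, q) == -1:
                continue
            r = next(s for s in range(q) if s * s % q == disc)
            for e in ((c + r) * inv(2) % q, (c - r) * inv(2) % q):
                if e in (delta, inv(delta)):       # excluded in X_q
                    continue
                if pow(e * inv(1 + delta) % q, p, q) != 1:
                    continue                       # not in Y_q
                ei = inv(e)
                if (a_q((1 + e*delta) % q, e*delta % q, q) - t1) % p == 0 \
                   and (a_q((1 + ei*delta) % q, ei*delta % q, q) - t2) % p == 0:
                    Z.add(e)
        if Z == {(1 + delta) % q}:                 # condition (v)
            return q, n
    return None

print(find_q(19))     # a suitable (q, n) for p = 19, if one exists
\end{verbatim}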
\section{Reducing our upper bound on $p$}
From Proposition \ref{temp}, we may suppose that $p < 2 \cdot 10^{10}$. The goal of this section is to reduce this upper bound still further.
Let $p\geq 19$. We have, according to Lemma~\ref{lem:sign}, that $s=1$ in~\eqref{eqn:ppp}.
We will assume for the remainder of this section that $k > 1$.
Appealing to Proposition \ref{prop:k1modp}, there exists a positive integer $k_0$ such that $k=p k_0 \pm 1$, whereby we may now rewrite (\ref{eqn:ppp}) as one of
\begin{equation} \label{eqn:new}
\alpha^p - \sqrt{2} \left( \epsilon^{k_0} \right)^p = \pm 1 - \sqrt{2},
\end{equation}
so that
\begin{equation} \label{wombat}
0 < \Lambda_1 = \log ( \sqrt{2} ) - p \log \left( \frac{\alpha}{\epsilon^{k_0}} \right) <
\epsilon \cdot \alpha^{-p}.
\end{equation}
Applying Theorem \ref{laurentlemma}, with
$$
b_2=1, \; \alpha_2 = \sqrt{2}, \; b_1=p, \; \alpha_1 = \alpha/\epsilon^{k_0} \; \mbox{ and } \; D=2,
$$
we may take
$$
a_1 = (\rho-1) \frac{\log 2}{ 2p} + 2 \log ( \alpha) + 4 \, h (\alpha) \leq (\rho-1) \frac{\log 2}{ 2p} + 6 \log ( \alpha)
$$
and
$$
a_2 = \left( \frac{\rho+3}{2} \right) \log 2.
$$
We choose
$\rho=27$ and $\mu = 1/3$, whereby a short calculation ensures that inequality (\ref{laurentall}), together with the fact that $\alpha \geq \sqrt{7}$, contradicts (\ref{wombat}) for $p > 1637$. Computing the first $10^7$ terms in recursion (\ref{rec}), verifying that none of them coincide with solutions to (\ref{eqn:main}) and noting that the $10^7$-th term exceeds $\exp(8.8 \cdot 10^6)$, we thus have $|x| > 10^{2348}$ and hence $|\alpha| > 10^{1174}$.
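As a consistency check on the growth rate: the characteristic roots of \eqref{rec} are $3 \pm 2\sqrt{2} = \epsilon^{\pm 2}$, so $\log u_n \approx 2 n \log \epsilon$, and a one-line computation
\begin{verbatim}
import math
print(2 * 10**7 * math.log(1 + math.sqrt(2)))   # ~1.76e7
\end{verbatim}
shows that the $10^7$-th term indeed comfortably exceeds $\exp(8.8 \cdot 10^6)$.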
Now choosing $\rho=31$ and $\mu = 1/3$ in Theorem \ref{laurentlemma}, and using our new lower bound upon $\alpha$, we find that $p \leq 941$.
\subsection{Handling the values of $19 \leq p \leq 941$}
From equation (\ref{eqn:new}), it suffices to solve the family of Thue equations
\begin{equation} \label{gut}
X^p - \sqrt{2} \, Y^p = \pm 1 - \sqrt{2},
\end{equation}
in integers $X, Y \in \mathbb{Z} [ \sqrt{2}]$, for primes $p$ with $19 \leq p \leq 941$, to handle~\eqref{eqn:main} for these values of $p$. We will do this under the additional assumption that
$$
Y = \epsilon^{k_0} > 1.
$$
We argue as in the proof of Proposition 9.3 of \cite{BMS}. Define $\omega =
2^{1/(2p)}$ to be real and positive and let $\zeta$ be a primitive $p$-th root of
unity. Set $K = \mathbb{Q}(\omega)$ and $L=\mathbb{Q}(\omega,\zeta)$. It
is straightforward to check that an integral basis for $K$
is $1,\omega,\ldots,\omega^{2p-1}$, and to use this
to deduce that the discriminant of $K$ is
$D_K = 2^{4p-1} p^{2p}$. Moreover, the unit rank of $K$ is
$p$ and its Galois closure is $L$. We write
$\epsilon_{1,1}, \ldots, \epsilon_{1,p}$ for a system of fundamental units of
$K$ which, via Lemma 9.6 of \cite{BMS}, we may suppose to satisfy
\begin{equation} \label{ilk}
\prod_{i=1}^p h ( \epsilon_{1,i}) \leq 2^{1-p} (p!)^2 (2p)^{-p} R_K,
\end{equation}
where $R_K$ denotes the regulator of $K$. Further, the absolute values of the entries of the inverse of the regulator matrix corresponding to $\epsilon_{1,1}, \ldots, \epsilon_{1,p}$ are bounded above by $(p!)^2 2^{-p} \log^3 (6p)$. There thus exist integers $b_1, \ldots, b_p$ such that
$$
X - \omega Y = \pm \epsilon_{1,1}^{b_1} \cdots \epsilon_{1,p}^{b_p},
$$
where
$$
B= \max \{ |b_1|, \ldots, |b_p| \} \leq 2^{1-p} p (p!)^2 \log^3 (6p) \, h(X-\omega Y).
$$
We will assume that $k$ is positive; the argument for negative $k$ is similar and leads to an identical conclusion. From (\ref{gut}), considering imaginary parts, we have that
$$
\left| X - \omega Y \right| = ( \sqrt{2} \pm 1) \prod_{i=1}^{p-1} \left| X - \omega \zeta^i Y \right|^{-1}
\leq ( \sqrt{2}+1) \, \omega^{1-p} \, |Y|^{1-p} \, \prod_{i=1}^{(p-1)/2} \sin^{-2} (\pi i/p),
$$
whence
\begin{equation} \label{lush}
\left| X - \omega Y \right|< 2^p \, |Y|^{1-p}.
\end{equation}
It follows that
$$
\left| X - \omega \zeta^j Y \right| \leq \omega |Y| \left| 1 - \zeta^j \right| + 2^p \, |Y|^{1-p} < 2.1 \, |Y|,
$$
where, from the assumption that $k > 1$, we have
\begin{equation} \label{hoop}
\log |Y| = \frac{k \pm 1}{p} \, \log (1+\sqrt{2}) \geq \frac{M}{p} \log (1+ \sqrt{2}) > 10^{103942} \log (1+ \sqrt{2}).
\end{equation}
We therefore estimate that
$$
h(X-\omega Y) < 0.75 + \log |Y| < 1.01 \, \log |Y|,
$$
whereby, crudely,
\begin{equation} \label{why}
B < p^{2p} \, \log |Y|.
\end{equation}
Define $\epsilon_{2,i}$ and $\epsilon_{3,i}$ for $1 \leq i \leq p$ as the images under $\sigma$ and $\sigma^2$ of $\epsilon_{1,i}$, where $\sigma$ is the field automorphism that sends $\omega$ to $\omega \zeta$ and fixes $\zeta$. If, following Siegel, we set
$$
\lambda = \frac{-\zeta}{1+\zeta} \cdot \frac{X-\omega Y}{X - \zeta \omega Y} =
\frac{X-\zeta^2 \omega Y}{X - \zeta \omega Y} \cdot \frac{1}{1+\zeta} - 1,
$$
then we have
$$
\lambda = \left( \frac{\epsilon_{3,1}}{\epsilon_{2,1}} \right)^{b_1} \cdots
\left( \frac{\epsilon_{3,p}}{\epsilon_{2,p}} \right)^{b_p} \frac{1}{1+\zeta} - 1.
$$
Since $|\zeta/(1+\zeta)| < 1$, arguing crudely, from (\ref{lush}), we have that $|\lambda| < 2^{p} \, |Y|^{1-p}$ and so
\begin{equation} \label{ups}
\log |\lambda| < p \, \log 2 - (p-1) \, \log |Y|.
\end{equation}
It follows from (\ref{hoop}) that $|\lambda| < 1/3$, whereby there exists $b_0 \in \mathbb{Z}$ with $|b_0| \leq (p+1) B$ such that, if we define the corresponding linear form in logarithms
$$
\Lambda = \left| b_0 \log (-1) + b_1 \log \left( \frac{\epsilon_{3,1}}{\epsilon_{2,1}} \right) + \cdots +
b_p \log \left( \frac{\epsilon_{3,p}}{\epsilon_{2,p}} \right) - \log \left( \zeta + 1 \right) \right|,
$$
we have $\Lambda \leq 2 | \lambda |$. We will apply Theorem \ref{Matveev}, with
$$
n=p+2, \; D=2p(p-1) \; \mbox{ and } \; \chi = 2.
$$
Using that
$$
h(a/b) \leq h(a)+h(b) \; \mbox{ and } \; h(a+b) \leq \log 2 + h(a) + h(b),
$$
for algebraic $a$ and $b$,
and the fact that, for each $j$,
$$
h \left( \epsilon_{1,j} \right) = h \left( \epsilon_{2,j} \right) = h \left( \epsilon_{3,j} \right),
$$
we thus have
$$
h \left( \frac{\epsilon_{3,j}}{\epsilon_{2,j}} \right) \leq h \left( \epsilon_{3,j} \right) +
h \left( \epsilon_{2,j} \right) \leq 2 \, h \left( \epsilon_{1,j} \right)
\; \mbox{ and } \; h (\zeta + 1) \leq \log 2.
$$
We may thus, in the notation of Theorem \ref{Matveev}, take $A_i$ such that
\begin{equation} \label{lunk}
\Omega = A_1 A_2 \cdots A_n = \pi \, \log (2) \, \left( 2 p (p-1) \right)^{p+1} \, 2^p \, \prod_{1 \leq j \leq p} h ( \epsilon_{1,j})
\end{equation}
and, via (\ref{why}),
$$
B < (p+1) p^{2p} \, \log |Y|.
$$
We conclude therefore that
$$
\log |\lambda| \geq \log \Lambda - \log 2 > - \log 2 - C(p+2) C_0 W_0 (2 p (p-1))^2 A_1 A_2 \cdots A_n,
$$
where, after a short computation, using the assumption that $p \geq 19$, we have
$$
C(p+2) < (4p)^{p+4}, \; \; C_0 < 7 p \; \; \mbox{ and } \; \; W_0 < \log \log |Y| + 3 p \log p.
$$
Applying (\ref{hoop}), (\ref{lunk}) and the fact that $19 \leq p \leq 941$, we thus have
$$
\log |\lambda| > - 2^{4p+15} \, p^{2p+8} \, (p-1)^{p+3} \,
\left( \log \log |Y| + 3 p \log p \right) \, \prod_{1 \leq j \leq p} h ( \epsilon_{1,j}).
$$
Inequality (\ref{ilk}) thus implies that
\begin{equation} \label{ricky}
\log |\lambda| > - 2^{2p+16} \, p^{2p+11} \, \log \left( p^{3p} \, \log |Y| \right) \,
(p!)^2 \, R_K.
\end{equation}
We apply Lemma 9.1 of \cite{BMS} to bound the regulator $R_K$. If we suppose that we have $D_K \leq L$, then
$$
R_K < \min \{ f_K (L, 2 - t/1000) \; : \; t=0, 1, \ldots, 999 \},
$$
where
$$
f_K (L,s) = 2^{-1} \left( 2^{1-p} \pi^{-p} \sqrt{L} \right)^s \, (\Gamma (s/2))^2 \, (\Gamma (s))^{p-1} \, s^{2p+1} \, (s-1)^{1-2p}.
$$
Since $D_K = 2^{4p-1} p^{2p}$, a short Maple computation reveals that, for $19 \leq p \leq 941$, we always have
\begin{equation} \label{trouble}
\log R_K < 10458,
\end{equation}
where the largest value of $\min \{ f_K (L, 2 - t/1000) \; : \; t=0, 1, \ldots, 999 \}$ encountered corresponds to $p=941$ and $t=743$.
Combining (\ref{ups}), (\ref{ricky}) and (\ref{trouble}), we thus have
\begin{equation} \label{final}
\begin{array}{c}
\log |Y| < 2^{2p+17} \, p^{2p+10} \, \log \left( p^{3p} \log |Y| \right) \, (p!)^2 \exp(10458).
\end{array}
\end{equation}
Since $p \leq 941$, in all cases we may conclude that $\log |Y| < 10^{15528}$, contradicting (\ref{hoop}). We may thus conclude that $(1,\pm1)$ are the only integer solutions to~\eqref{eqn:main} for the primes $p$ under consideration.
\subsection{Handling the values of $3 \leq p \leq 17$}
To finish the proof of Theorem~\ref{main}, it remains to solve~\eqref{eqn:main} for primes $p$ with $3 \leq p \leq 17$, which we shall carry out in this subsection.
Our strategy is to reduce the problem of treating equation~\eqref{eqn:main} for a fixed odd prime $p$ to that of solving (a finite collection of) Thue equations over $\mathbb{Z}$.
As before, we write $\epsilon =1+\sqrt{2}$ for the fundamental unit of ${\mathbb Z}[\sqrt{2}]$ (the ring of integers of ${\mathbb Q}(\sqrt{2})$).
Consider a solution $(x,y,p)$ to~\eqref{eqn:main} and write
\begin{equation}\label{eqn:zDef}
z:=(x^p+3)/4,
\end{equation}
so that the integers $y$ and $z$ satisfy
$$
y^2-2z^2=-1,
$$
hence, as before (see \eqref{eqn:rel1}),
$$
y+z\sqrt{2}=s \epsilon^{k}
$$
for some odd integer $k$ and $s \in \{-1,1\}$. Replacing $y$ by $-y$ leads to replacing $k$ by $-k$ (see the proof of Lemma \ref{lem:choice}), so after possibly changing the sign of $y$ we have
\[k \equiv s \pmod{4}.\]
Writing
$$
a+b \sqrt{2}=\epsilon^{\frac{k-1}{2}},
$$
for $a, b \in \mathbb{Z}$,
we have
\begin{equation}\label{eqn:abPell}
a^2-2b^2=(-1)^{\frac{k-1}{2}}=s
\end{equation}
and
\begin{align*}
y+z\sqrt{2} & = s \epsilon (a+b \sqrt{2})^2 \\
& = s (a^2+4ab+2b^2) +s (a^2+2ab+2b^2)\sqrt{2}.
\end{align*}
Using this parametrization for $z$ together with~\eqref{eqn:zDef} and~\eqref{eqn:abPell}, we have
\begin{align*}
sx^p & = 4sz-3s \\
& = 4 (a^2+2ab+2b^2)-3(a^2-2b^2)\\
& = a^2+8ab+14b^2\\
& = (a+4b)^2-2b^2.
\end{align*}
Since $a$ and $p$ are odd, this implies that
\begin{equation}\label{eqn:abParam}
(a+4b)+b\sqrt{2}=\epsilon^t(u+v\sqrt{2})^p
\end{equation}
for certain $u,v,t \in {\mathbb Z}$ with $|t| \leq (p-1)/2$.
Let us define binary forms over ${\mathbb Z}$ via
$$
F_{p,t}(U,V)+G_{p,t}(U,V) \sqrt{2}=\epsilon^t(U+V\sqrt{2})^p
$$
and
\[H_{p,t}(U,V)=(F_{p,t}(U,V)-4G_{p,t}(U,V))^2-2G_{p,t}(U,V)^2.\]
Then~\eqref{eqn:abPell} and~\eqref{eqn:abParam} lead to
\[H_{p,t}(u,v)=s,\]
where $|u^2-2v^2| = |x|$.
We have now reduced the solution of~\eqref{eqn:main} for a fixed odd prime $p$ to the solution of the $2p$ Thue equations of degree $2p$
\begin{equation}\label{eqn:ThueSystemLarge}
H_{p,t}(U,V)=s, \quad U,V \in {\mathbb Z}, \quad t=0,\pm 1, \ldots, \pm \frac{p-1}{2}, \quad s \in \{-1,1\}.
\end{equation}
Taking into account information from the modular method, namely Lemma \ref{lem:sign}, we can restrict to $s=1$ when $p\geq 5$. If $p=3$, then the three
Thue equations in \eqref{eqn:ThueSystemLarge} corresponding to $s=-1$ can readily be seen to have no solutions; for $t=-1,0,1$ there are no solutions modulo $7,9,5$ respectively.
This means that for an odd prime $p$ we only have to solve the $p$ Thue equations
\begin{equation}\label{eqn:ThueSystem}
H_{p,t}(U,V)=1, \quad U,V \in {\mathbb Z}, \quad t=0,\pm 1, \ldots, \pm \frac{p-1}{2}.
\end{equation}
If we define $h_{p,t}(x)=H_{p,t}(x,1)$, let $\gamma$ be a root of $h_{p,t}(x)=0$ and set $K = \mathbb{Q}(\gamma)$, then the discriminant of $K$ is $2^{p(6p-2)-1} p^{2p}$ and $K$ has precisely $2$ real embeddings and hence $p$ fundamental units.
We note that $H_{p,0}$ is monic in $U$, so the equation $H_{p,0}(U,V)=1$ always has the solutions $U=\pm1, V=0$. If these are the only solutions to $H_{p,0}(U,V)=1$ and the other $p-1$ Thue equations in system~\eqref{eqn:ThueSystem} have no solutions, then it readily follows that $(x,y)=(1,\pm 1)$ are the only integer solutions to~\eqref{eqn:main} for our fixed odd prime $p$.
We restrict our attention to $3 \leq p \leq 17$.
First of all, the Thue equation solver in {\tt PARI/GP} \cite{PARI2} can (unconditionally) solve
$$
H_{p,0}(U,V)=1
$$
for all these primes $p$ rather quickly. The upshot of such a computation is that, apart from $(U,V)=(\pm 1, 0)$, there are no further solutions. Many of the other Thue equations in~\eqref{eqn:ThueSystem} can also be solved using {\tt PARI/GP}. However, for some of the larger values of $p$ and some values of $t$ the computations appear to take a huge amount of time (and possibly memory). Luckily many of the Thue equations involved have local obstructions to solutions (which is something not automatically checked by {\tt PARI/GP} when trying to solve the equations). In fact, the only pairs $(p,t)$ with $3 \leq p \leq 17$ and $1 \leq |t|\leq (p-1)/2$ for which we cannot find a local obstruction for $H_{p,t}(U,V)=1$ are given by
\[(p,t) \in \{(5,1), (13,1), (17,1)\}.\]
For these pairs, we can in fact use the Thue equation solver in {\tt PARI/GP} to show within reasonable time that $H_{p,t}(U,V)=1$ has no integer solutions.
For sake of completeness, for all $(p,t)$ under consideration for which we found a local obstruction, we provide in Table~\ref{table:obstructions} at least one modulus $m$ such that the congruence $H_{p,t}(U,V)\equiv 1 \pmod{m}$ has no solutions.
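These obstructions are easy to verify by brute force: reusing the SymPy sketch above, one may simply enumerate all residue pairs modulo $m$ (a check quadratic in $m$, which is cheap for the moduli of Table~\ref{table:obstructions}). The following illustrative snippet does this.
\begin{verbatim}
# Verify a local obstruction: H_{p,t}(U,V) = 1 (mod m) has no solutions.
# Reuses H(p, t), U, V and sympy from the previous sketch.
from itertools import product

def obstructed(p, t, m):
    Hpt = H(p, t)
    return all(int(Hpt.subs({U: u, V: v})) % m != 1
               for u, v in product(range(m), repeat=2))

# e.g. obstructed(3, -1, 7) should return True (first row of the table)
\end{verbatim}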
\begin{table}[h!]
\caption{$(p,t,m)$ for which $H_{p,t}(U,V)\equiv 1 \pmod{m}$ has no solutions.}
\begin{tabular}{c|c|c}
$p$ & $t$ & $m$ \\
\hline
$3$ & $-1$ & $7$ \\
$3$ & $1$ & $3^2$ \\
$5$ & $-2,-1,2$ & $5^2$\\
$7$ & $-3,-2,-1,2,3$ & $7^2$\\
$7$ & $1,3$ & $13$\\
$11$ & $-4,-1,1,3,5$ & $11^2$ \\
$11$ & $-5,-4,-3,-2,-1,2,3,4,5$ & $23$\\
$13$ & $-6,-4,-3,4,5$ & $79$ \\
$13$ & $-5,-2,-1,2,3,6$ & $313$\\
$17$ & $-8, \ldots, -1, 2, \ldots, 8$ &$103$\\
\end{tabular}
\label{table:obstructions}
\end{table}
\begin{rem}
It is not strictly necessary to invoke the modular method to deal with the odd primes $p\leq 17$. Most of the Thue equations in~\eqref{eqn:ThueSystemLarge} with $s=-1$ can easily be solved, just like in the case when $p=3$. In fact, for all odd primes $p \leq 17 $ and all integers $t$ with $|t| \leq (p-1)/2$ we can find a local obstruction for
\begin{equation}\label{eqn:ThueMinus}
H_{p,t}(U,V)=-1
\end{equation}
(although some of the moduli involved are much larger than those in Table~\ref{table:obstructions}), except when $(p,t) \in \{(13,-4),(13,5)\}$. For $(p,t)=(13,5)$ the Thue equation solver in {\tt PARI/GP} can again show that~\eqref{eqn:ThueMinus} has no solutions. For $(p,t)=(13,-4)$ we were not immediately able to solve~\eqref{eqn:ThueMinus}. We did not pursue this however, but note instead that it seems possible to use the method of Chabauty-Coleman to deal completely with~\eqref{eqn:main} for $p=13$ without using the modular method; see below.
\end{rem}
\subsubsection{Alternative approach to handling small $p$}
To solve~\eqref{eqn:main} for a fixed odd prime $p$, without first reducing to Thue equations, we may note that this equation defines a hyperelliptic curve $C_p$ of genus $p-1$. Hence finding all integer or rational points on $C_p$ would suffice.
For convenience, consider the model for $C_p$ in weighted projective space given by
\[C_p \; : \; 8y^2=x^{2p}+6x^p z^p+z^{2p}.\]
We want to show that $C_p({\mathbb Q})=\{(1:\pm 1: 1 ) \}$.
We have the two (non-hyperelliptic) involutions on $C_p$
\[\iota^{\pm}_p: (x:y:z) \mapsto (z:\pm y:x),\]
and their corresponding quotients
\[D^{\pm}_p:=C_p/\iota^{\pm}_p.\]
A priori, for $p>3$, it suffices to determine either $D^+_p({\mathbb Q})$ or $D^-_p({\mathbb Q})$ since $D^+_p$ and $D^-_p$ are (hyperelliptic) curves of genus $(p-1)/2>1$. For $p=3$ it would suffice of course to find that at least one of $D^+_p({\mathbb Q})$ or $D^-_p({\mathbb Q})$ is finite.
To calculate explicit models for $D^+_p$ and $D^-_p$, we find the binary forms $F^{\pm}_p$ over ${\mathbb Z}$ of degree $p$ such that
\[F^{\pm}_p(xz,(x \pm z)^2)=x^{2p}+6 x^p z^p+z^{2p}.\]
Introducing the variables
\[X^{\pm}:=xz,\quad Y^{\pm}:=(x \pm z) y, \quad Z^{\pm}:=(x \pm z)^2,\]
we see that models for $D^+_p$ and $D^-_p$ in weighted projective space are given by
\[D^{\pm}_p\; : \; 8{Y^{\pm}}^2=Z^{\pm} F_p^{\pm}(X^{\pm},Z^{\pm}).\]
Furthermore, the rational points $(1:\pm 1:1)$ on $C_p$ map under the quotient map $C_p \to D^+_p$ to $(1:\pm 2:4)$ on $D^+_p$ and under $C_p \to D^-_p$ to $(1:0:0)$ on $D^-_p$. We note that the point at infinity $(1:0:0)$ is also a rational point on $D^+_p$. It readily follows that if we can show that
\begin{equation}\label{eqn:Dplus}
D^+_p({\mathbb Q})=\{(1:\pm 2:4), (1:0:0)\}
\end{equation}
or
\begin{equation}\label{eqn:Dminus}
D^-_p({\mathbb Q})=\{(1:0:0)\},
\end{equation}
then $C_p({\mathbb Q})=\{(1:\pm 1:1)\}$ and consequently $(1,\pm1)$ are the only integer solutions to~\eqref{eqn:main} for the odd prime $p$ under consideration.
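The forms $F^{\pm}_p$ are readily computed by linear algebra: write $F^{\pm}_p$ with undetermined coefficients and match coefficients after substituting $xz$ and $(x\pm z)^2$. A minimal SymPy sketch of this computation (helper names ours):
\begin{verbatim}
# Compute F_p^{+,-} with F(x z, (x +- z)^2) = x^(2p) + 6 x^p z^p + z^(2p)
# by solving the linear system for the undetermined coefficients.
import sympy as sp

x, z = sp.symbols('x z')

def quotient_form(p, sign):
    X, Z = sp.symbols('X Z')
    cs = sp.symbols('c0:%d' % (p + 1))
    F = sum(c * X**i * Z**(p - i) for i, c in enumerate(cs))
    diff = sp.expand(F.subs({X: x*z, Z: (x + sign*z)**2})
                     - (x**(2*p) + 6*x**p*z**p + z**(2*p)))
    sol = sp.solve(sp.Poly(diff, x, z).coeffs(), cs)
    return sp.expand(F.subs(sol))

# quotient_form(3, 1) and quotient_form(3, -1) give models for D_3^{+,-}
\end{verbatim}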
Using {\tt Magma}'s implementation of $2$-descent on hyperelliptic Jacobians we obtain upper bounds for the ${\mathbb Q}$-ranks of $\Jac(D^{\pm}_p)$ for several primes $p$; see Table~\ref{table:ranks}.
%
\begin{table}[h!]
\caption{Upper bounds for ranks of the Jacobian of $D^+_p$ and $D^-_p$.}
\begin{tabular}{c|c|c|c|c|c|c}
$p$ & 3 & 5 & 7 & 11 & 13 & 17 \\
\hline
Upper bound for $\operatorname{rank}(\Jac(D^+_p)({\mathbb Q}))$ & 1 & 1 & 1 & 2 & 1 & 2 \\
\hline
Upper bound for $\operatorname{rank}(\Jac(D^-_p)({\mathbb Q}))$ & 0 & 0 & 1 & 1 & 1 & 2 \\
\end{tabular}
\label{table:ranks}
\end{table}
We see that the ranks of $\Jac(D^-_3)({\mathbb Q})$ and $\Jac(D^-_5)({\mathbb Q})$ are both $0$. It is also easy to check that they both have trivial torsion. As a quick corollary we now obtain that~\eqref{eqn:Dminus} holds for $p=3$ and $p=5$. In fact (the Jacobian of) $D^-_3$ is the elliptic curve with Cremona Reference $1728j1$; for $D^+_3$, the reference is $1728e1$.
We can also easily check, for all primes $p$ in Table~\ref{table:ranks}, that (two of) the three obvious rational points on $D^+_p$ give rise to a non-torsion element on the Jacobian. We conclude that the ranks of $\Jac(D^+_7)({\mathbb Q})$ and $\Jac(D^+_{13})({\mathbb Q})$ are both $1$ and that for both of them we have an explicit generator for a subgroup of finite index. This means that it should be possible to use the method of Chabauty-Coleman to check that~\eqref{eqn:Dplus} holds for $p=7$ and $p=13$. Determining $C_p({\mathbb Q})$ for $p=11$ or $p \geq 17$ seems much harder.
\section{Frey Curves for shifted powers in more general Lucas sequences}
In this section, we will indicate how our preceding arguments fit into a more general framework.
Let $K$ be a real quadratic number field, $\mathcal{O}_K$ its ring of integers and $\epsilon \in \mathcal{O}_K$ a fundamental unit in $K$, with conjugate $\overline{\epsilon}$. Define the Lucas sequences, of the first and second kinds, respectively,
$$
U_k=\frac{\epsilon^k-\left(\overline{\epsilon}\right)^k}{\epsilon-\overline{\epsilon}} \; \mbox{ and } \;
V_k = \epsilon^k+\left(\overline{\epsilon}\right)^k, \; \mbox{ for } \; k \in {\mathbb Z}.
$$
Let $a,c \in {\mathbb Q}$ with $a\not=0$, and consider the problem of determining the shifted powers $ax^n+c$ in one of these sequences, i.e. determining all integers $k, x$ and $n$ with $n \geq 2$ such that we have
\begin{equation}\label{eqn:ShiftedPowers}
U_k=ax^n+c
\end{equation}
or
\begin{equation}\label{eqn:ShiftedPowers2}
V_k=ax^n+c.
\end{equation}
If e.g. $\epsilon=(1+\sqrt{5})/2$, this amounts to determining shifted powers in the Fibonacci ($U_k$) or Lucas ($V_k$) sequences. If $\epsilon=1+\sqrt{2}$, $(a,c)=(\pm 1/4, \pm 3/4)$ and $k$ is odd, equation (\ref{eqn:ShiftedPowers}) corresponds to the main problem of this paper.
We will show that the arguments of this paper can potentially resolve such problems corresponding to either
\begin{itemize}
\item equation (\ref{eqn:ShiftedPowers}) with $k$ odd and $\Norm(\epsilon)=-1$, or
\item equation (\ref{eqn:ShiftedPowers2}) with either $k$ even or $\Norm(\epsilon)=1$.
\end{itemize}
We proceed from the observation that
\begin{equation} \label{crux}
\epsilon^k + \epsilon^{-k} \pm 2 =
\left\{
\begin{array}{ll}
\left( \epsilon^{k/2} \pm \epsilon^{-k/2} \right)^2 & \mbox{ if $k$ is even, } \\
\epsilon \left( \epsilon^{\frac{k-1}{2}} \pm \epsilon^{\frac{-k-1}{2}} \right)^2 & \mbox{ if $k$ is odd. } \\
\end{array}
\right.
\end{equation}
If $k$ is odd and $\Norm(\epsilon)=-1$, we have $\left(\overline{\epsilon}\right)^k=-\epsilon^{-k}$
whereby
$$
\left( \epsilon+ \epsilon^{-1} \right) U_k \pm 2 = \epsilon \left( \epsilon^{\frac{k-1}{2}} \pm \epsilon^{\frac{-k-1}{2}} \right)^2.
$$
It follows from equation (\ref{eqn:ShiftedPowers}) that
\begin{equation} \label{tom2}
(\epsilon+\epsilon^{-1})ax^n+((\epsilon+\epsilon^{-1} )c\pm 2)=\epsilon \gamma_{k,\pm}^2,
\end{equation}
where
$$
\gamma_{k,\pm}:=\epsilon^{\frac{k-1}{2}}\pm \epsilon^{\frac{-k-1}{2}} \in \mathcal{O}_K.
$$
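As a quick numerical sanity check of this three-term relation in the case $\epsilon=1+\sqrt{2}$ (so that $U_k$ is the Pell sequence), the following illustrative SymPy snippet verifies $(\epsilon+\epsilon^{-1})U_k\pm 2=\epsilon\,\gamma_{k,\pm}^2$ for small odd $k$:
\begin{verbatim}
# Sanity check of (eps + eps^{-1}) U_k +- 2 = eps * gamma_{k,+-}^2
# for eps = 1 + sqrt(2), Norm(eps) = -1, and small odd k (Pell numbers).
import sympy as sp

eps, epsbar = 1 + sp.sqrt(2), 1 - sp.sqrt(2)
for k in (1, 3, 5, 7):
    Uk = sp.simplify((eps**k - epsbar**k) / (eps - epsbar))  # 1, 5, 29, 169
    for sign in (1, -1):
        gamma = eps**((k - 1)//2) + sign * eps**((-k - 1)//2)
        lhs = (eps + 1/eps) * Uk + 2*sign
        assert sp.simplify(lhs - eps * gamma**2) == 0
\end{verbatim}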
Similarly, if either $k$ is even, or $\Norm(\epsilon)=1$, we have
$$
V_k = \epsilon^k + \epsilon^{-k},
$$
and hence, from (\ref{crux}) and assuming that we have (\ref{eqn:ShiftedPowers2}),
\begin{equation} \label{tom3}
ax^n+(c\pm 2)=
\left\{
\begin{array}{ll}
\tau_{k,\pm}^2 & \mbox{ if $k$ is even, } \\
\epsilon \gamma_{k,\pm}^2 & \mbox{ if $k$ is odd, } \\
\end{array}
\right.
\end{equation}
where
$$
\tau_{k,\pm}:=\epsilon^{k/2}\pm \epsilon^{-k/2} \in \mathcal{O}_K.
$$
To either three-term relation (\ref{tom2}) or (\ref{tom3}), we can actually associate several Frey curves -- the most obvious are those corresponding to the generalized Fermat equations of signature $(n,n,2)$ or signature $(n,3,2)$. Both these Frey curves are defined over the totally real number field $K$ and are therefore amenable to a Hilbert modular approach. In this setting, we can in fact generalize our choices of $a,c \in {\mathbb Q}$ to $a,c \in K$ and $x \in {\mathbb Z}$ to $x \in \mathcal{O}_K$.
\begin{rem}
If we also want to study~\eqref{eqn:ShiftedPowers} for $k$ even or $\Norm(\epsilon)=1$, or equation (\ref{eqn:ShiftedPowers2}) when $k$ is odd and $\Norm(\epsilon)=-1$, we can write down similar three term relations to those above, but now over $K(\sqrt{-1})$. The resulting Frey curves will therefore not {\it a priori} be defined over a totally real number field.
\end{rem}
\section{Other applications}
In \cite{BLMS}, equation (\ref{cur}) is solved in the case where $\{ u_k \} = \{ F_k \}$, the Fibonacci numbers, and $c = \pm 1$. In this situation, the problem actually reduces to that of determining (almost) perfect powers in the Fibonacci and Lucas sequences (a program that is carried out in \cite{BMS}; see also \cite{BLMS2}), through a series of identities akin to
$$
F_{4k}+1 = F_{2k-1} L_{2k+1}.
$$
Here, $\{ L_k \}$ denotes the Lucas numbers, the companion sequence to the Fibonacci numbers.
This reduction in case $c = \pm 1$ depends crucially upon the fact that
$$
F_{-1}=F_1=F_2=1 \; \mbox{ and } \; F_{-2}=-1,
$$
and does not apparently extend to permit solution of the equation $F_k=ax^n+c$ for even a single fixed pair $(a,c)$ with $|c|>1$ (the case $c=-2$ is given as an open problem in \cite{BLMS}).
As explained in the preceding section, the methods of this paper potentially permit solution of $F_k=ax^n+c$ for any given pair $(a,c)$ and {\it odd} values of the index $k$. For certain choices of $c$, however, the elementary arguments of \cite{BLMS} allow one to handle the remaining even index terms. By way of example, let us suppose that $c = F_{2j}$, for some integer $j$, so that
$$
|c| \in \left\{ 1, 3, 8, 21, 55, 144, \ldots \right\}
$$
whereby we are considering equations of the shape
\begin{equation} \label{sharknado2}
F_k \pm F_{2j} = a x^n,
\end{equation}
for given fixed $a$ and $j$ (here, $k, x$ and $n$ are variables).
Then, appealing to the identity
$$
F_i L_j = F_{i+j} + (-1)^j F_{i-j},
$$
we have that
$$
F_{4k} = F_{2k+j} L_{2k-j} + (-1)^{j+1} F_{2j}
$$
and
$$
F_{4k} = F_{2k-j} L_{2k+j} + (-1)^{j+1} F_{-2j} = F_{2k-j} L_{2k+j} + (-1)^{j} F_{2j}.
$$
Assuming that we have $F_{4k}=a x^n+c$, if also $j$ is even, say, it follows that
$$
a x^n = F_{2k-j} L_{2k+j},
$$
whilst if $j$ is odd,
$$
a x^n = F_{2k+j} L_{2k-j}.
$$
Similarly, the identities
$$
F_{4k+2} = F_{2k+j+1} L_{2k-j+1} + (-1)^{j} F_{2j}
$$
and
$$
F_{4k+2} = F_{2k-j+1} L_{2k+j+1} + (-1)^{j+1} F_{2j},
$$
with $F_{4k+2}=a x^n+c$, imply that
$$
a x^n = F_{2k+j+1} L_{2k-j+1} \; \mbox{ or } \; a x^n = F_{2k-j+1} L_{2k+j+1}.
$$
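The identities used above are classical and can be checked by machine in a few lines; the following illustrative Python snippet verifies $F_iL_j=F_{i+j}+(-1)^jF_{i-j}$ over a small range, with the usual extension $F_{-n}=(-1)^{n+1}F_n$ to negative indices:
\begin{verbatim}
# Check F_i L_j = F_{i+j} + (-1)^j F_{i-j} for small |i|, |j|.
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n):
    if n < 0:
        return (-1)**(-n + 1) * F(-n)   # F_{-n} = (-1)^{n+1} F_n
    return n if n < 2 else F(n - 1) + F(n - 2)

def L(n):
    return F(n - 1) + F(n + 1)          # Lucas numbers

for i in range(-10, 11):
    for j in range(-10, 11):
        assert F(i)*L(j) == F(i + j) + (-1)**(j % 2) * F(i - j)
\end{verbatim}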
In all cases, we are able to descend to a problem of (almost) perfect powers in the Fibonacci or Lucas sequences, one that has proven to be computationally tractable (whereby the same is potentially true for equation (\ref{sharknado2})).
\section{Introduction}
\label{sec:intro}
Recent advances in our knowledge of galaxy structure have shown that
the central spheroidal component of galaxies, i.e. the galaxy bulge,
comes in two flavours: classical and pseudo. Classical bulges are
formed by fast and violent processes such as major mergers or by
sinking and coalescence of giant gas clumps found in high redshift
discs \citep{Elmegreen2008,Kormendy2016}. Pseudobulges, on the other
hand, are thought to be formed by the slow rearrangement of gaseous
material from the disc to the central region of galaxies. The process
forming the pseudobulge can either be totally internal in nature,
driven by non-axisymmetric structures such as bars, ovals
etc. \citep{Kormendy2004}, or can involve external processes such as
minor mergers \citep{Eliche-Moral2011}. The different formation
mechanisms, for the two bulge types, leave their imprint on the
stellar population of these bulges. Previous studies have associated
formation of classical bulge with rapid and efficient star formation,
while the pseudobulges are formed slowly at lower redshift
\citep{Sanchez2016}. As a result, the stellar populations of
pseudobulges are found to be younger, on average, as compared to those
of classical bulges \citep{Gadotti2009}. \par
S0 galaxies are an intermediate transition class between elliptical
and spiral galaxies on the Hubble tuning fork diagram
\citep{Hubble1936}. They are thought to have formed from spiral
galaxies via different physical processes such as major/minor
mergers \citep{Quereteja2015} or due to fading of spiral arms and the
quenching of star formation in the disc through environmental processes such as tidal
stripping and galaxy harassment \citep{Moore1996}, starvation \citep{Bekki2002} etc. By studying the stellar
population of these galaxy bulges, we can try to understand how
morphological transformation of a spiral into an S0 galaxy, affects
the properties of the bulge. One can put constraints on
the formation channel of S0 galaxies, by knowing the type of bulge
that they host. For example, the presence of a pseudobulge in an S0
galaxy helps us to discard the major merger driven formation channel
for that particular galaxy. One expects that the formation of an
S0 galaxy due to the quenching of star formation in the disc of its
progenitor spiral must leave an imprint on the star formation
history of pseudobulges which are formed from these discs. By comparing
the properties of the stellar population of pseudobulges hosted by S0
galaxies and the ones hosted by spirals, one can try to understand
the impact of morphological transformation on the properties of
pseudobulges. \par
In this letter, we study the stellar population of
pseudobulges hosted in S0 and spiral galaxies. The organization of
this letter is as follows: Section \ref{sec:data} describes our data
and sample selection. Section \ref{sec:result} describes our results
and discusses the findings before summarising the results in Section
\ref{sec:sum}. Throughout this work, we have used the WMAP9
cosmological parameters: $H_0$ = 69.3 km s$^{-1}$Mpc$^{-1}$,
$\Omega_m = 0.287$ and $\Omega_{\Lambda} = 0.713$.
\section{Data and sample selection}
\label{sec:data}
In order to construct a statistically significant sample of S0
galaxies for our study, we started with data provided in
\cite{Nair2010}, which is a catalogue of detailed visual
classification for nearly 14,000 spectroscopically targeted galaxies
in the SDSS DR4. The \cite{Nair2010} catalogue is a flux limited
sample with an extinction corrected limit of $g < 16$ mag in the SDSS
$g$ band, spanning the redshift range $0.01 < z < 0.1$. This catalogue
provides information on the morphological T type and other
morphological features such as a bar, ring etc. In addition to
information on morphology, it also lists the stellar mass of each
galaxy as estimated by \cite{Kauffmann2003}, and group membership
information from the \cite{Yang2007} catalogue. In order to obtain
information on the structural components of these galaxies, we cross
matched \cite{Nair2010} catalogue with the data provided in
\cite{Simard2011} catalogue. \cite{Simard2011} provides us with
two-dimensional, point-spread-function-convolved, bulge+disc
decompositions in the $g$ and $r$ bands for a sample of 1,123,718
galaxies from the SDSS Data Release 7 \citep{Abazajian2009}. The cross
match resulted in 12,063 galaxies, which we refer to as the parent
sample hereafter.
\par \cite{Simard2011} have fitted each galaxy in their sample with
three different light profile models: a pure S\'ersic model, an
$n_b$ = 4 bulge + disc model, and a S\'ersic (free $n_b$) bulge +
disc model. One can choose the most appropriate model for a given
galaxy using the F-test probability criteria. For our study, we have
chosen only those galaxies from our parent sample where a bulge + disc
model is preferred over a single S\'ersic model. We chose the free
$n_b$ bulge + disc model for the disc galaxies in our sample, as previous studies have shown that the bulges of S0s and spirals span a wide range of S\'ersic index values \citep{Balcells2007, Laurikainen2010}. To find the appropriate model for the ellipticals in our sample, we have carried out a comparison between the two available bulge + disc models and found that the majority of ellipticals are better fitted with the $n_b$ = 4 bulge + disc model than with a free $n_b$ bulge + disc model. Therefore, we use the $n_b$ = 4 bulge + disc model to obtain the relevant structural parameters of the elliptical galaxies in our parent sample.\par
The allowed range of bulge S\'ersic index ($n_{b}$) in the
\cite{Simard2011} sample is $0.5<n_{b}<8$. It is known from the
literature that high values of the S\'ersic index are often associated
with fitting problems \citep{Meert2015}. In our sample, we find that
the mean error in bulge S\'ersic index for the galaxies having
$n_{b}>7.95$ is around twice the error in S\'ersic index estimate
below $n_{b} = 7.95$. \cite{Simard2011} also report that a
significant number of galaxies with such high values of $n_{b}$ contain
a nuclear source or a bar + point-like source. The presence of such sources
might affect the reliability of bulge parameter estimation at these
high values of $n_{b}$, hence we have excluded galaxies having
$n_{b}\geqslant 7.95$ from our parent sample. To further enhance the
quality of our sample, we impose an additional selection cut based on the
error in the estimation of $n_{b}$: we demand that no galaxy in
our parent sample has an error in $n_{b}$ greater than
the mean plus one sigma of the error distribution. We apply a final selection cut in which we remove all galaxies that host a bar. Since \cite{Simard2011} do not fit a bar profile in their decomposition, the estimated bulge light profile might be contaminated by the bar light profile, and quantities such as the bulge size might be overestimated. We have used the flags provided in \cite{Nair2010} to identify and remove all barred galaxies from our sample.
\par Application of these cuts on the parent sample resulted in a
final sample of 1742 elliptical and 4697 disc galaxies which are
modelled by a $n_b = 4$ bulge + disc and free $n_b$ bulge + disc
galaxy models respectively. The mean error on $n_b$ in our final
sample of disc galaxies is 0.17. Out of these 4697 disc galaxies,
2067 of them are S0s and 2630 are spiral galaxies. To study the
stellar population of these disc galaxies in our final sample, we
have obtained relevant measurement of the $D_n$(4000) index ($d4000_n$; as
defined in \citealt{Balogh1999}) from the table {\it galSpecIndx} using
the SDSS DR13 \citep{SDSS2016} CASJobs. The median error on the measured value of $D_n$(4000) is 0.013 for our sample.
\begin{figure}
\includegraphics[width=0.5\textwidth]{0.pdf}
\caption{Distribution of the D$_n$(4000) index in pseudobulge-hosting S0 (red solid line) and spiral (blue solid line) galaxies. The histograms are normalised by the area under the curve.}
\label{fig:s0sp}
\end{figure}
\begin{figure*}
\includegraphics[width=.33\textwidth]{1.pdf}\hspace{-1.5em}
\includegraphics[width=.33\textwidth]{2.pdf}\hspace{-1.5em}
\includegraphics[width=.33\textwidth]{sigma.pdf}
\caption{\textbf{Left} : $D_n(4000)$ distribution of pseudobulge of S0 galaxies. The error bars on the histogram are Poisson errors and the median error in $D_n(4000)$ measurement is 0.013. The dividing line $D_n(4000)$ = 1.5 separates the pseudobulges into young and old population. \textbf{Middle}: position of pseudobulge-hosting S0 galaxies on u-r colour-mass diagram. Red and blue colours denote old and young pseudobulges respectively. The median errors in mass and colour measurement are 0.15 dex and 0.015 respectively. \textbf{Right}: Two dimensional $D_n(4000)$-environmental density histogram for pseudobulge-hosting S0 galaxies.}
\label{fig:fig1}
\end{figure*}
\section{Results and discussion}
\label{sec:result}
\subsection{Identifying pseudobulges}
We have classified bulges in our sample by combining two independent
criteria for bulge type identification, one of them coming from
photometry and the other coming from
spectroscopy. \cite{Kormendy2016} has shown that the failure
probability of a single criteria can range from 10-20 \%, and the
failure probability goes down significantly if one uses two or more
independent criteria to identify bulges. \par
The photometric criterion for the identification of pseudobulges
follows \cite{Gadotti2009}, which involves classification of bulge
types based on their position on the Kormendy diagram
\citep{Kormendy1977}. This diagram is a plot of the average surface
brightness of the bulge within its effective radius $\langle\mu_b (<
r_e)\rangle$ against the logarithm of the bulge effective radius
$r_{e}$. Elliptical galaxies are known to obey a tight linear relation
on this plot. Classical bulges being structurally similar to
ellipticals obey a similar relation while pseudobulges being
structurally different, lie away from it. Any bulge that deviates more
than three times the r.m.s. scatter from the best fit relation for
ellipticals is classified as pseudobulge by this criterion
\citep{Gadotti2009}. This physically motivated classification scheme
has been used in recent works \citep{Vaghmare2013,
Mishra2017,Neumann2017}
\par The Kormendy relation was obtained by fitting the ellipticals in our final sample using $r$ band data. The best fit line is
$$
\langle\mu_b (< r_e)\rangle = (2.330 \pm 0.047)\,\log(r_e) + (18.160 \pm 0.024).
$$
The rms scatter in $\langle\mu_b (< r_e)\rangle$ around the best fit line is 0.429. All galaxies whose bulges deviate by more than three times this scatter from the relation are classified as pseudobulge hosts.
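Operationally this photometric criterion is a simple cut; a minimal sketch (illustrative only, with array names of our own choosing) is given below, assuming $\langle\mu_b (< r_e)\rangle$ in mag arcsec$^{-2}$ and $r_e$ in kpc, and flagging bulges on the low surface brightness side of the relation, as in \cite{Gadotti2009}.
\begin{verbatim}
# Illustrative sketch of the Kormendy-relation cut (array names ours):
# mu_e = <mu_b(<r_e)> in mag/arcsec^2, re_kpc = bulge effective radius.
import numpy as np

SLOPE, INTERCEPT, RMS = 2.330, 18.160, 0.429

def is_pseudobulge_photometric(mu_e, re_kpc):
    mu_fit = SLOPE * np.log10(re_kpc) + INTERCEPT
    # pseudobulges lie on the faint (larger mu) side of the ellipticals'
    # relation, deviating by more than three times the rms scatter
    return (mu_e - mu_fit) > 3.0 * RMS
\end{verbatim}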
\par \cite{Fisher2016} have suggested that if a bulge is found to have
a central velocity dispersion ($\sigma_{0}$) greater than 130 km
s$^{-1}$ , then it is most likely to be a classical bulge. We also
impose this criterion coming from spectroscopic measurements on our
sample, in which we demand that in order to be classified as a
pseudobulge, the central velocity dispersion ($\sigma_{0}$) of the
bulge should be less than 130 km s$^{-1}$. After the simultaneous
application of these two criteria, we find that 156 (7.5\%) out of
2067 S0 galaxies host a pseudobulge while 1118 (42.5\%) out of 2630
spirals are pseudobulge hosts. All the subsequent analysis presented
in this work has been carried on these pseudobulge-hosting spiral and
S0 galaxies. Previous works have quoted pseudobulge fraction in
spirals and S0s to be 32\% \citep{Gadotti2009} and 14\%
\citep{Vaghmare2013} respectively. Pseudobulges are more
commonly seen in low mass galaxies \citep{Fisher2016,
Mishra2017}. In his work, \cite{Gadotti2009} selected spiral
galaxies with stellar mass greater than $10^{10} M_{\odot}$, which
is much higher than the lower stellar mass limit ($10^{8}
M_{\odot}$) of our sample. The sample of \cite{Vaghmare2013} is, on
average, fainter than ours. The different pseudobulge fraction that
we obtain as compared to previous works is most likely due to the
different mass range of our sample.
\begin{figure*}
\centering
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{3.pdf}
\end{minipage}
\hspace{-1.98em}%
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{4.pdf}
\end{minipage}
\caption{\textbf{Left} : $D_n(4000)$ distribution of pseudobulges of early-type S0 and S0/a galaxies. The error bars on both histograms are the Poisson errors. \textbf{Right}: position of pseudobulge-hosting early-type S0 (open circles) and S0/a (filled circles) galaxies on the u-r colour-mass diagram. Red and blue colours denote old and young pseudobulge hosts respectively. The median errors on $D_n(4000)$, stellar mass and u-r colour are the same as in Figure \ref{fig:fig1}.}
\label{fig:both}
\end{figure*}
\subsection{Age bimodality in pseudobulges of S0 galaxies}
In order to study the stellar population of pseudobulges in our
sample, we have made use of available measurement of the 4000 {\AA}
spectral break index ($D_n(4000)$). The strength of the 4000 {\AA}
spectral break arises due to accumulation of absorption lines of
(mainly) metals in the atmosphere of old, low mass stars and by a lack
of hot, blue stars in galaxies. The strength of this break is
quantified by the $D_n(4000)$ index. The $D_n(4000)$ index is a reliable indicator of the mean age of the galaxy stellar population. In the literature, galaxies with break strengths of $D_n(4000)$ $\sim 1.3$ and $D_n(4000)$ $\sim 1.8$ have been quoted to have light weighted mean stellar ages of $\sim$ 1-2 Gyr and $\sim 10$ Gyr respectively \citep{Kauffmann2003}. In Figure \ref{fig:s0sp}, we have plotted the
distribution of $D_n(4000)$ index for pseudobulge-hosting S0 and
spiral galaxies. One can clearly see the distribution of $D_n(4000)$
index is bimodal for the pseudobulge-hosting S0 galaxies while spirals
do not show any bimodality in their $D_n(4000)$ distribution.
The measurement of the $D_n(4000)$ index comes from the central 3
arcsec probed by the SDSS fiber aperture. Since the galaxies in our
sample come in different physical sizes and are distributed over a
redshift range of $0.01 < z < 0.1$, there exists a possibility that
the SDSS fiber aperture is not predominantly tracing the bulge
region and is contaminated by the light from the inner disc. This
can cause a bias in $D_n(4000)$ measurements towards younger
ages. In order to correct for this effect, we have chosen to retain
only those pseudobulge-hosting S0 galaxies which have their light
profiles such that the bulge is brighter than the disc everywhere
within the region traced by the SDSS fiber aperture. For each galaxy
the bulge and disc light profile is obtained from the decompositions
of \cite{Simard2011}. Out of the original 156 S0 pseudobulges, 112
satisfied this criterion. For the remainder of this paper we will
work with this reduced sample of 112 pseudobulge-hosting S0
galaxies.
The $D_n(4000)$ index distribution of reduced sample of
pseudobulge-hosting S0 galaxies is shown in left panel of Figure
\ref{fig:fig1}. One can clearly see a bimodality in $D_n(4000)$
distribution which translates primarily to the age bimodality in
pseudobulges of S0 galaxies. In order to systematically explore the
possible cause of this age bimodality, we have divided the sample of
pseudobulges in S0 galaxies into those with an old ($D_n(4000)\geq 1.5$)
and those with a young ($D_n(4000) < 1.5$) stellar population. A
value of $D_n(4000)=1.5$ corresponds to a stellar age of $\sim$ 2
Gyr \citep{Kauffmann2003}. The choice of the value $D_n(4000)=1.5$
to divide bulges into old and young types was done by examining the
left panel of Figure \ref{fig:fig1}. At this point, a clear dip in the
$D_n(4000)$ distribution is seen. This dividing value has also been
used to select old and passive galaxies in recent literature
\citep{Zahid2017}. \par
Pseudobulges are thought to be formed by transport of disc gas to the
central region of the galaxy \citep{Kormendy2016}. Since the amount of
gas in galaxies and star formation rate are correlated, one naively
expects that the age of the stellar population in the pseudobulges must be related to the star formation rate of the galaxy as a whole. To understand this possible connection, we have
plotted our old and young pseudobulge-hosting S0 galaxies on the
extinction corrected colour-mass diagram. We have obtained the total modelled
magnitude of S0 galaxies in $u$ and $r$ bands, and have corrected them
for extinction due to Galactic absorption by taking the extinction corrections in magnitudes (obtained following \citealt{Schlegel1998}) from the photoObj table given from SDSS DR13. We then have obtained K
corrected $u-r$ colour to $z = 0.0$ by making use of the
K-corrections calculator
code\footnote{http://kcor.sai.msu.ru/getthecode/} which is based on
work by \cite{Chilingarian2010}. \par
The extinction corrected colour-stellar mass diagram for pseudobulge-hosting
S0 galaxies is shown in the middle panel of Figure
\ref{fig:fig1}. The two solid lines, taken from
\cite{Schawinski2014}, mark the boundary of the green valley region.
The region above the green valley in this diagram is the passive red
sequence and the region below the green valley is the star forming
blue cloud. The S0 galaxies hosting old and young pseudobulges are
shown by red and blue colours respectively. The median errors in the mass and colour estimates are about 0.15 dex and 0.015 mag respectively. We find that the majority of
old pseudobulges are hosted by passive S0 galaxies while young bulges
are hosted by galaxies which are still forming stars in their
disc. We also notice that some of the young pseudobulge-hosting S0
galaxies are in the passive sequence, but one must be cautious here,
as presence of dust can make galaxies redder and shift them towards
the passive sequence of the $u-r$ colour-mass diagram. The picture
which emerges, then, is of a connected history of star formation
activity in the galaxy disc and in the pseudobulges of S0
galaxies. We surmise that the origin of the old population of
pseudobulges in some S0 galaxies is due to shutting down of star
formation in their disc. \par
We have tried to explore the possible reason for the shutdown of star
formation leading to the age bimodality found in pseudobulge-hosting
S0 galaxies. Since star formation in galaxies depends on
morphology and environment, it is worthwhile to check their
possible correlation with the age distribution of pseudobulges in S0
galaxies. In the right panel of Figure \ref{fig:fig1}, we have plotted a 2D
histogram of average environmental density vs $D_n(4000)$
distribution for pseudobulge-hosting S0 galaxies in our sample. We
have obtained the average environmental density from \cite{Nair2010},
which defines it as the logarithm of the inverse of the distance to the fifth
nearest neighbour as defined in \cite{Baldry2006}.
We see a weak trend of the environment with $D_n(4000)$ index
where the old pseudobulge-hosting S0s appear to be at slightly higher
environmental density as compared to the young ones. We have
performed a two-sample Kolmogorov-Smirnov test to compare the
environmental density distribution of young and old pseudobulges. We
find that these samples of old and young pseudobulges could not have
been drawn from the same parent population, with at least 99.8\%
confidence. The mean environmental density of old and young
pseudobulge-hosting S0s is -0.1 Mpc$^{-2}$ and -0.5 Mpc$^{-2}$
respectively, although there is sufficient overlap in density
parameter space which weakens the trend. This indicates that
environment plays at most a weak role in driving the age bimodality
of pseudobulges in S0 galaxies.
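For reference, the comparison above is a standard two-sample Kolmogorov-Smirnov test; a minimal sketch (illustrative only, array names ours) is:
\begin{verbatim}
# Two-sample KS test on the environmental density distributions
# (illustrative sketch; density_old / density_young are our names).
from scipy.stats import ks_2samp

def common_parent_rejected(density_old, density_young, alpha=0.002):
    stat, p_value = ks_2samp(density_old, density_young)
    # p_value < 0.002 rejects a common parent at > 99.8% confidence
    return p_value < alpha
\end{verbatim}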
\par To explore the importance of morphology of S0 galaxies in our
sample, we have divided them into two morphological subclasses. In the
literature, one generally puts the morphological classes S0-, S0, S0+
and S0/a galaxies under an umbrella term S0 galaxies
\citep{Laurikainen2011}. Out of these, S0/a galaxies are closer to spirals
and can have some faint spiral arms visible, while the others in the S0
class are mostly featureless and closer to elliptical
galaxies. We have clubbed together the S0-, S0 and S0+ morphological
classes and call them early-type S0 galaxies. In the left panel of
Figure \ref{fig:both}, we have plotted the $D_n(4000)$ distribution
for early-type S0 and S0/a galaxies. We notice that the majority of old
pseudobulges are found in early-type S0s while the younger ones are
more common in S0/a galaxies, but when combined they form the peaks of
bimodal distribution seen in the left panel of Figure \ref{fig:fig1}. We have also
plotted the young and old pseudobulge-hosting early-type S0 and S0/a
galaxies on $u-r$ color-mass plane in the right panel of Figure
\ref{fig:both}. We find the same trend between global star formation
history and pseudobulge age as found previously in middle panel of Figure
\ref{fig:fig1}, where old pseudobulges are associated with
passive galaxies while the young bulges are predominantly found in
star forming galaxies. This plot then clearly shows that the age
bimodality in pseudobulges of S0 galaxies is strongly driven by the
morphology. We speculate that the morphology, which is shaped by the
dynamical history of the galaxies, quenches the star formation in the
disc, which then in turn stops the inward flow of gaseous material
from the galaxy disc to the bulges, thus contributing to the ageing
of pseudobulges as seen in these S0 galaxies.
\section{Summary}
\label{sec:sum}
We have presented a comparative study of the stellar
populations of pseudobulges hosted by S0 and spiral galaxies. We have
presented evidence of pseudobulge age bimodality in S0 galaxies which
is not seen in pseudobulges of spirals. Dividing the bulges into those
containing old and young populations, we see that old pseudobulges are
hosted by passive S0 galaxies, while the star forming S0 galaxies tend
to host young pseudobulges. We have tried to investigate the origin of
this age bimodality in pseudobulges of S0 galaxies by studying the
possible effect of the environment and the morphology. We do not see
any strong environmental effect which might drive this
bimodality. Dividing pseudobulge-hosting S0s into finer bins of
morphology, we find that early-type S0s preferentially host an older
pseudobulge while in the late-type S0s, i.e. the S0/a morphological
class, most of the pseudobulges are young. We surmise that the origin
of the old population of pseudobulges in some S0 galaxies is due to
quenching of star formation in their disc. We believe that the
dynamical history of these galaxies may have shaped their morphology
and may have quenched their disc, stopping the inward transport of
disc gas and thus making the bulges older by preventing the formation
of new stars. \par
In the future, we plan to investigate the stellar population of
pseudobulges of S0 galaxies in detail using data from recent IFU
surveys such as SDSS MANGA. By studying the star formation history of
individual components of galaxies such as bulge, bar and the disc,
one can get more insight on the connection between the pseudobulge
stellar population and its relation to the star formation history of
the disc.
\section*{Acknowledgements}
We thank the anonymous referee for insightful comments that have improved both the content and presentation of this paper. We acknowledge support from a South African National Research Foundation grant (PID-93727) and from a bilateral grant under the Indo-South
Africa Science and Technology Cooperation (PID-102296) funded by the
Departments of Science and Technology (DST) of the Indian and South
African Governments.
\section{Introduction}
In nature and the real world, most data are nonlinear, nonstationary
and noisy, and general data-driven methods to analyze such data,
without \textit{a priori} basis assumptions, are in demand. About ten
years ago, such a method was proposed to analyze nonlinear and
nonstationary time series: the Hilbert-Huang transform (hereafter HHT)
\cite{huang1998emd,huang1999nvn}. The first step of this method is
the Empirical Mode Decomposition (EMD), which is used to decompose a
time series into a sum of different time series (modes), each one
having a characteristic frequency \cite{Wu2004a,flandrin2004emda}.
The modes are called Intrinsic Mode Functions (IMFs) and satisfy
the following two conditions: (\romannumeral1) the difference
between the number of local extrema and the number of zero-crossings
must be zero or one; (\romannumeral2) the running mean value of the
envelope defined by the local maxima and the envelope defined by the
local minima is zero. Each IMF has a characteristic scale which is
the mean distance between two successive maxima (or minima). The
procedure to decompose a signal into IMFs is the following (a minimal code sketch of one sifting step is given after the list):
\begin{itemize}
\item[1] The local extrema of the signal $X(t)$ are identified;
\item[2] The local maxima are connected together forming an upper envelope
$e_{\max}(t)$, which is obtained by a cubic spline interpolation.
The same is done for local minima, providing a lower envelope $e_{\min}(t)$;
\item[3] The mean is defined as $m_1(t)=(e_{\max}(t)+e_{\min}(t))/2$;
\item[4] The mean is subtracted from the signal, providing the local detail
$h_1(t)=X(t)-m_1(t)$;
\item[5] The component $h_1(t)$ is then examined to check if it satisfies
the conditions to be an IMF. If yes, it is considered as the first IMF and denoted
$C_1(t)=h_1(t)$. It is subtracted from the original signal and the first residual,
$r_1(t)=X(t)-C_1(t)$ is taken as the new series in step 1. On the other hand, if $h_1(t)$ is
not an IMF, a procedure called ``sifting process'' is applied as many times as
needed to obtain an IMF. The sifting process is the following: $h_1(t)$ is considered
as the new data; the local extrema are estimated, lower and upper envelopes
are formed and their mean is denoted $m_{11}(t)$. This mean is subtracted
from $h_1(t)$, providing $h_{11}(t)=h_1(t)-m_{11}(t)$.
Then it is checked again if $h_{11}(t)$ is an IMF. If not, the sifting process
is repeated, until the component $h_{1k}(t)$ satisfies the IMF conditions.
Then the first IMF is $C_1(t)=h_{1k}(t)$ and the residual $r_1(t)=X(t)-C_1(t)$
is taken as the new series in step 1.
\end{itemize}
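For illustration, a minimal Python sketch of one sifting step is given below. It is not the reference implementation (the Matlab codes by P. Flandrin cited in the acknowledgements are used in this work), and it ignores end effects and stopping criteria.
\begin{verbatim}
# Minimal sketch of one EMD sifting step (illustrative only;
# boundary effects and stopping criteria are deliberately ignored).
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """Subtract the mean of the cubic-spline envelopes from x(t)."""
    imax = argrelextrema(x, np.greater)[0]       # local maxima
    imin = argrelextrema(x, np.less)[0]          # local minima
    e_max = CubicSpline(t[imax], x[imax])(t)     # upper envelope
    e_min = CubicSpline(t[imin], x[imin])(t)     # lower envelope
    m = 0.5 * (e_max + e_min)                    # mean envelope m_1(t)
    return x - m                                 # local detail h_1(t)
\end{verbatim}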
The above sifting process should be stopped by a criterion
which is not discussed here: more details about the EMD algorithm can be found
in refs. \cite{huang1998emd,huang1999nvn,flandrin2004emda,flandrin2004emdb,huang2003cle}.
After decomposition, the original signal $X(t)$ is written as
a sum of IMF modes $C_{i}(t)$ and a residual $r_n(t)$
\begin{equation}
X(t)=\sum_{i=1}^{N}{C_{i}(t)}+r_{n}(t)
\end{equation}
EMD is associated with Hilbert Spectral Analysis (HSA)
\cite{Cohen1995,Long1995,huang1998emd}, which is applied to each
mode as a time frequency analysis, in order to locally extract a
frequency and an amplitude. More precisely, each mode function
$C(t)$ is associated with its Hilbert transform $\tilde{C}$
\begin{equation}
\tilde{C}(t)=\frac{1}{\pi}
\int_{-\infty}^{+\infty}\frac{C(\tau)}{t-\tau}\, \upd \tau
\end{equation}
and the combination of $C(t)$ and $\tilde{C}(t)$ gives the analytical signal
$z=C+j\tilde{C}=\mathcal{A}(t)e^{j\theta(t)}$, where $\mathcal{A}(t)$
is an amplitude time series and $\theta(t)$ is the phase of the mode
oscillation \cite{Cohen1995}. Within such approach
and neglecting the residual, the original time series is rewritten as
\begin{equation}
X(t)=Re\sum_{i=1}^{N}{\mathcal{A}_{i}(t)}e^{j \theta_i(t) }
\end{equation}
where $\mathcal{A}_i$ and $\theta_i$ are the amplitude and phase time series of mode $i$ and $Re$ means
real part \cite{huang1998emd,huang1999nvn}. For each mode, the Hilbert spectrum is defined as the square amplitude
$H(\omega,t)=\mathcal{A}^2(\omega,t)$, where $\omega=d\theta/dt$ is the instantaneous frequency
extracted using the phase information
$\theta(t)=\tan^{-1}\big(\tilde{C}(t)/C(t)\big)$. $H(\omega,t)$ gives a local
representation of energy in the time-frequency domain. The Hilbert
marginal spectrum of the original time series is then written as
$ h(\omega)=\int H(\omega,t) \, \upd t $ and corresponds
to an energy density at frequency $\omega$
\cite{Long1995,huang1998emd,huang1999nvn}.
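In practice the analytic signal of each mode can be obtained with a standard FFT-based Hilbert transform; a minimal illustrative sketch (function name ours) extracting $\mathcal{A}(t)$ and the instantaneous frequency is:
\begin{verbatim}
# Hilbert spectral analysis of one IMF (illustrative sketch).
# scipy.signal.hilbert returns the analytic signal z = C + j*Ctilde.
import numpy as np
from scipy.signal import hilbert

def amplitude_and_frequency(c, fs):
    """Amplitude A(t) and instantaneous frequency (Hz) of a mode c."""
    zsig = hilbert(c)                               # analytic signal
    amplitude = np.abs(zsig)                        # A(t)
    phase = np.unwrap(np.angle(zsig))               # theta(t)
    freq = np.gradient(phase) * fs / (2*np.pi)      # d(theta)/dt / 2 pi
    return amplitude, freq
\end{verbatim}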
Since its introduction, this method has attracted a large interest
\cite{huang2005emdbook}. It was shown to be an efficient method to
separate a signal into a trend and small scale fluctuations on a
dyadic bank
\cite{Wu2004a,flandrin2004emda,flandrin2004emdb}; it has
also been applied to many fields including physiology
\cite{Su2008}, geophysics \cite{Janosi2005}, climate studies
\cite{Sole2007}, mechanical engineering \cite{Chen2004} and
acoustics \cite{loutridis2005ril}, to quote a few.
These studies showed the applicability of the so-called EMD-HSA approach on many different time series.
In this letter, we apply the EMD and HSA approaches to fully developed turbulence time series. We first show
that the EMD method applies very nicely to turbulent velocity time series, with an almost dyadic filter
bank in the inertial range. We then show how the HSA can be generalized to take into account intermittency.
We apply this to the turbulence time series, providing a first characterization of the intermittency of turbulence
in an amplitude-frequency representation.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{Fig1}
\caption{Comparison of the Hilbert marginal energy spectrum (solid
line) and Fourier spectrum (dashed line, vertically shifted). The
slope of the reference line is $-5/3$. Both the second order
Hilbert and Fourier spectra indicate the same inertial subrange,
$10<f \,(\textrm{or }\omega)<1000 \, \textrm{Hz}$. The insert shows
the compensated spectra. The HHT spectra estimated using two different algorithms are shown for comparison, indicating a
stability of the spectrum with respect to the algorithm used.
}\label{fig:spectrum}
\end{figure}
\section{Application of EMD to turbulence time series}
We consider here a database obtained from measurements of nearly
isotropic turbulence downstream an active-grid
characterized by the Reynolds number $Re_{\lambda} =720$. The
sampling frequency
is $f_{\mathrm{s}}=40 \un{kHz} $ \cite{kang2003dta}.
The sampling time is $30\un{s}$, and the total
number of data points per channel for each measurement is $1.2
\times 10^{6}$. We consider data in the streamwise direction at
position $x/M=48$, where $M$ is the mesh size and $x$ is the
distance in the streamwise direction. The mean velocity at this
location is $10.8 \un{ms^{-1}}$ and the turbulence intensity is
about $10\%$. For details about the experiment and the data see
ref.~\cite{kang2003dta}.
\begin{figure} \centering
\includegraphics[width=0.9\linewidth]{Fig2}
\caption{(a) Mean frequency versus mode number for the turbulent
velocity time series.
There is an exponential decrease with a slope very close to 1. This indicates that
EMD acts as a filter bank which is almost dyadic. (b) Fourier spectrum of each mode (from 1 to 12) showing that they
are narrow-banded. The slope of the reference line is $-5/3$ corresponding to
the inertial-range Kolmogorov spectrum.}\label{fig:scale}
\end{figure}
Figure \ref{fig:spectrum} shows the second order Hilbert and Fourier spectra of the longitudinal velocity.
A Kolmogorov $-5/3$
spectrum is observed in the range $10 <f \,(\textrm{or }\omega)<1000 \un{Hz}$
for both spectra, indicating an inertial subrange over 2 decades.
Two different HHT spectra estimated using two different algorithms are shown in this figure:
the very similar shape of the spectra indicates a
stability of the spectrum with respect to the algorithm used. The scaling which is obtained
shows that Hilbert spectral analysis can be used to recover
Kolmogorov scaling in the inertial subrange.
The original velocity time series is divided into 73 non-overlapping segments
of $2^{14}$ points each. After decomposition,
the original velocity series is decomposed into several IMFs from 11
to 13 modes with one residual. The time scale is increasing with the
mode; each mode has a different mean frequency, which is estimated
by considering the (energy weighted) mean frequency in the Fourier
power spectrum. The relation between mode number $k$ and mean
frequency~\cite{huang1998emd} is displayed in
fig.~\ref{fig:scale} (a). The straight line obtained in this
log-linear plot suggests the following relation
$\overline{f}(k) = f_0 \rho^{-k} $, where $\overline{f}$
is the mean frequency, $f_0 \simeq 22000$ is a constant and $\rho =
1.9 \pm 0.1$ is very close to 2, the slight discrepancy
from 2 may be an effect of intermittency. This result may also slightly depend on the number of
iterations of the sifting process: in the present algorithm, the latter is variable but some
proposed algorithms contain a fixed maximum number of iterations.
This indicates that EMD
acts as a dyadic filter bank in the frequency domain; an analogous
property was obtained previously using stochastic simulations of
Gaussian noise and fractional Gaussian noise (fGn)
\cite{flandrin2004emda,flandrin2004emdb,Wu2004a}, and it is
interesting to note here that the same result holds for fully
developed turbulence time series, possessing long-range correlations
and intermittency \cite{Frisch1995}.
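For reference, the energy-weighted mean frequency of each mode and the log-linear fit of $\overline{f}(k)=f_0\,\rho^{-k}$ can be sketched as follows (illustrative only; function names ours):
\begin{verbatim}
# Energy-weighted mean frequency of a mode, and fit of f(k) = f0 rho^{-k}.
import numpy as np

def mean_frequency(mode, fs):
    spec = np.abs(np.fft.rfft(mode))**2             # Fourier power spectrum
    freqs = np.fft.rfftfreq(mode.size, d=1.0/fs)
    return np.sum(freqs * spec) / np.sum(spec)      # energy-weighted mean

def fit_rho(mean_freqs):
    """Fit log fbar(k) = log f0 - k log rho; return (f0, rho)."""
    k = np.arange(1, len(mean_freqs) + 1)
    slope, intercept = np.polyfit(k, np.log(mean_freqs), 1)
    return np.exp(intercept), np.exp(-slope)
\end{verbatim}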
We then interpret each mode according to its characteristic time
scale. When compared with the original Fourier spectrum of the
turbulent time series~(see fig.~\ref{fig:scale} (b)),
these modes can be interpreted as follows:
the first mode, which has the smallest time scale, corresponds to the measurement noise;
modes 2 and 3 are associated with the dissipation range of turbulence.
Mode 4 corresponds to the Kolmogorov scale, which is the scale below which
dissipation becomes important; it is a transition scale between inertial range and
dissipation range.
Modes 5 to 10 all belong to the inertial range corresponding
to the scale-invariant Richardson-Kolmogorov energy cascade \cite{Frisch1995};
larger modes belong to the large forcing scales.
Figure~\ref{fig:scale} (b) represents the Fourier power
spectra of each mode. It shows that each mode in the inertial range
is narrow-banded. This confirms that the EMD approach can be used as
a filter bank for turbulence time series. In the next section, we
focus on the intermittency properties.
\section{Intermittency and multiscaling properties: Arbitrary order Hilbert spectral analysis}
Intermittency and multiscaling properties have been found in many
fields, including turbulence \cite{Frisch1995}, precipitations
\cite{Schertzer1987}, oceanography \cite{Seuront1999}, biology
\cite{Ashkenazy2002}, finance \cite{Schmitt1999}, etc.
Multiscaling intermittency is often characterized using structure functions of order $q>0$,
defined as the statistical moments of the
fluctuations $\Delta X_{\tau}=\vert X(t+\tau)-X(t)\vert$ (see ref.
\cite{Frisch1995} for reviews):
\begin{equation}
\langle (\Delta X_{\tau})^q \rangle \sim C_q \tau^{\zeta(q)}
\end{equation}
where $C_q$ is a constant and $\zeta(q)$ is a scale invariant moment function; it is also a cumulant generating function,
which is nonlinear and concave and fully
characterizes the scale invariant properties of intermittency.
We present here a new
method to extract an analogous intermittency function using the EMD-HSA methodology.
The Hilbert spectrum $H(\omega,t)$ represents the original signal at the local level.
This can be used to
define the joint probability density function (pdf) $p(\omega,\mathcal{A})$ of the frequency $[\omega_i]$
and amplitude $[\mathcal{A}_i]$, which are extracted from all modes $i=1\cdots N$ together. The Hilbert
marginal spectrum is then rewritten as
\begin{equation}
h(\omega)=\int_0^{\infty} p(\omega,\mathcal{A}) \mathcal{A}^2 \, \upd \mathcal{A} \label{eq:marginal2}
\end{equation}
This definition corresponds to a second-order statistical moment. We then
naturally generalize eq.~(\ref{eq:marginal2}) into arbitrary
moments:
\begin{equation}
\mathcal{L}_q(\omega)=\int_0^{\infty} p(\omega,\mathcal{A}) \mathcal{A}^q \, \upd \mathcal{A}
\label{eq:arbitrary}
\end{equation}
where $q \ge 0$ and $h(\omega)=\mathcal{L}_2(\omega)$
\cite{Huang2008TSI}. In the inertial range, we assume the
following scaling relation:
\begin{equation}
\mathcal{L}_q(\omega) \sim \omega^{-\xi(q)}
\label{eq:arbitrary2}
\end{equation}
where $\xi(q)$ is the corresponding scaling exponent function in the
amplitude-frequency space. Equation~(\ref{eq:arbitrary}) provides
a new way to estimate the scaling exponents,
where, according to dimensional analysis, $\xi(q)-1$
can be compared to $\zeta(q)$.
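Numerically, this estimator amounts to building the joint pdf $p(\omega,\mathcal{A})$ as a 2D histogram of the instantaneous frequencies and amplitudes of all modes and then integrating $\mathcal{A}^q$; a minimal sketch (illustrative, with binning choices of our own) is:
\begin{verbatim}
# Arbitrary-order Hilbert marginal spectrum L_q(omega) from the joint
# pdf p(omega, A) (illustrative sketch; binning choices are ours).
import numpy as np

def Lq(freq, amp, q, omega_bins, amp_bins):
    """freq, amp: instantaneous frequencies/amplitudes of all IMFs."""
    joint, w_edges, a_edges = np.histogram2d(
        freq, amp, bins=(omega_bins, amp_bins), density=True)
    a_mid = 0.5 * (a_edges[1:] + a_edges[:-1])
    da = np.diff(a_edges)
    # L_q(omega) = int p(omega, A) A^q dA, one value per omega bin
    return joint @ (a_mid**q * da)
\end{verbatim}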
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Fig4}
\caption{ Scaling exponents $\xi(q)$ for fractional Brownian motion simulations with
$H=0.2 $, $0.4 $, $0.6 $ and $0.8$, respectively. }\label{fig:fbm2}
\end{figure}
We first validate the new method by using fractional Brownian
motion time series (fBm). They are characterized by the
Hurst number $0\le H \le 1$, and it is well-known that $\zeta(q)=qH$, hence we expect
$\xi(q)=1+qH$. We simulate 500 segments of length $2^{12}$ data
points each, using a wavelet based algorithm \cite{Abry1996}, with
different $H$ values
from 0.2 to 0.8. The Hilbert transform is numerically estimated by using
an FFT based method \cite{marplejr1999cdt}. The scale invariance is
perfectly respected, as expected; this is not shown here (see
ref.~\cite{Huang2008TSI} for more details on the validation of the
method with fBm simulations). We then represent the corresponding
scaling exponents $\xi(q)$ for various values of $q$ from 0 to 6, for
four values of
$H$ ($H=0.2$, $0.4$, $0.6$ and $0.8$) in fig.~\ref{fig:fbm2}. The straight lines, matching
$1+q H$ perfectly, confirm the usefulness of the new method to estimate
$\xi(q)$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Fig5}
\caption{Representation of the joint pdf ${p}(\omega,\mathcal{A})$
(in log scale) of turbulent fluctuations in an amplitude-frequency
space. The scaling range $10<\omega<1000 \un{Hz}$ for frequencies
is shown as vertical dotted lines. The dashed line shows the
skeleton $\mathcal{A}_{\mathrm{s}}(\omega)$ of the joint pdf, which
is the amplitude for which the conditional pdf $p(\mathcal{A}\vert
\omega)$ is maximum.}\label{fig:jointpdf01}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{Fig6}
\caption{The skeleton of the joint pdf. (a)
$\mathcal{A}_{\mathrm{s}}(\omega)$
in log-log plot. A power law behaviour is observed in the
inertial subrange with scaling exponent 0.38, which is close to the
Kolmogorov value 1/3. (b) $p_{\max}(\omega)$
in log-log plot. A power law behaviour is observed
in the inertial subrange with scaling exponent 0.63. The vertical
lines show the corresponding inertial subrange
$10<\omega<1000\un{Hz}$. }\label{fig:jointpdf02}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Fig7}
\caption{Representation of the rescaled conditional pdf
$p_1(\mathcal{A},\omega)$ in the inertial range, for fixed values of
$\omega=10$, $10^{1.5}$, $10^{2}$, $10^{2.5}$ and $10^{3}
\un{Hz}$.}\label{fig:rescale}
\end{figure}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.7\linewidth]{Fig8}
\caption{Representation of $\mathcal{L}_{q}(\omega)$, Hilbert
spectral analysis of velocity intermittency, using different orders
of moments (0, 1, 3, 4, 5 and 6). Power laws are observed on the
range $10<\omega<1000 \un{Hz}$ for all spectra. The value of the
scaling exponent $\xi(q)$ is shown in each
figure.}\label{fig:arbitrary}
\end{figure*}
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Fig9}
\caption{Comparison of the scaling exponents $\xi(q)-1$ (diamond)
with the classical $\zeta(q)$ obtained from structure functions
analysis with the ESS method (dash-dotted line) and K41 $q/3$ (solid
line). The insert shows the departure from the K41
law.}\label{fig:scaling}
\end{figure}
We then consider turbulence intermittency properties using this approach.
The EMD-HSA methodological framework provides a way to represent
turbulent fluctuations in an amplitude-frequency space: the joint
pdf ${p}(\omega,\mathcal{A})$ is shown in fig.~\ref{fig:jointpdf01}.
The inertial subrange for frequencies is shown as vertical dotted
lines. This figure is the first 2D amplitude-frequency
representation of the pdf of turbulent fluctuations; it can be seen
graphically that the amplitudes decrease with increasing
frequencies, with a scaling trend. We show in the same graph
the skeleton $\mathcal{A}_{\mathrm{s}}(\omega)$ of the
joint pdf which corresponds to the amplitude for which the conditional pdf
$p(\mathcal{A}\vert \omega)$ is maximum:
\begin{equation}
\mathcal{A}_{\mathrm{s}}(\omega)=\mathcal{A}_0\, ;\,
p(\mathcal{A}_0,\omega)=\max_{\mathcal{A}}\{ p(\mathcal{A}\vert \omega)\}
\end{equation}
We then reproduce the skeleton in fig.~\ref{fig:jointpdf02} in two
different views: (a) $\mathcal{A}_{\mathrm{s}}(\omega)$ in log-log
plot; (b) skeleton pdf $p_{\max}(\omega)=p(
\mathcal{A}_{\mathrm{s}}(\omega),\omega)= \max_{\mathcal{A}}\{
p(\mathcal{A}\vert \omega)\}$
in log-log plot.
It is interesting to note that a power law behaviour is found for
both representations
\begin{equation}
\mathcal{A}_{\mathrm{s}}(\omega) \sim \omega^{-\beta_1},\,\,
p_{\max}(\omega) \sim \omega ^{-\beta_2}
\end{equation}
where $\beta_1 \simeq 0.38$, and $\beta_2 \simeq 0.63$. Dimensional
analysis provides the non-intermittent Kolmogorov value
$\beta_1=1/3$ and $\beta_2=2/3$. The difference with these
theoretical values may be an effect of intermittency. We note that
the value $\beta_1=0.38$ is comparable with the estimation of
$\zeta(1)=0.37$ given by Ref. \cite{Water1999}. We plot in
fig.~\ref{fig:rescale} the rescaled pdf $p_1( \mathcal{A},\omega) =
\omega^{\beta_2} p( \mathcal{A}/ \omega^{\beta_1},\omega)$, for
various fixed values of $\omega$. In case of monoscaling, these pdfs
should superpose perfectly; here the plot is scattered,
but nevertheless we note that the lack of superposition of these
rescaled pdfs is a signature of intermittency. Moments of
this pdf are less noisy, as will be visible below. For comparison,
we plot the normal distribution (dashed line), lognormal
distribution (solid line) and log-Poisson distribution
(dashed-dotted line) in the same figure. It seems that the
log-Poisson distribution provides a better fit to the pdf than the
lognormal distribution. We also characterize intermittency in the
frequency space by considering marginal moments
$\mathcal{L}_{q}(\omega)$. Figure~\ref{fig:arbitrary} shows
$\mathcal{L}_{q}(\omega)$, Hilbert spectral analysis of velocity
intermittency, using different orders of moments (0, 1, 3,
4, 5 and 6). The moment of order 0 is the marginal pdf of the
instantaneous frequency, see eq.~(\ref{eq:arbitrary}). It is
interesting to note that this pdf is extremely ``wild'', having a
behaviour close to $\mathcal{L}_0(\omega)\sim \omega^{-1}$,
corresponding to a ``sporadic'' process whose probability density is
not normalizable
($\int p(\omega)\,\upd \omega$ diverges). This result is only obtained when all modes are considered together;
such a pdf is not found for the frequency pdf of an individual mode.
This property seems to be rather general: we observed such a pdf for the moment of order zero using
several other time series,
for example surf-zone turbulence data, fBm \cite{Huang2008TSI} and river flow discharge data.
Hence it does not seem to be linked to turbulence itself,
but to be a main property of the HSA method, which still needs to be studied further.
We observe power laws in the range $10 <\omega<1000 \un{Hz}$ for moments of all orders.
The values of the scaling exponents $\xi(q)$ are shown in each panel.
This provides a way to estimate scaling exponents $\xi(q)$ for every order of moment $q\ge
0$ on a continuous range of scales in the frequency space.
Next, we compare in fig.~\ref{fig:scaling} the scaling exponents
$\xi(q)-1$ estimated by our new approach with the classical
structure-function scaling exponents $\zeta(q)$ estimated using the
extended self-similarity (ESS) method \cite{Benzi1993}. It can be
seen that $\xi(q)-1$ is nonlinear and close to $\zeta(q)$, but the
departures from the K41 law show that the curvatures are not the same:
$\xi(q)$ seems less concave than $\zeta(q)$.
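For completeness, we also sketch the ESS estimation used for this comparison (Python; the velocity series \texttt{u} and the lag grid, in samples, are illustrative assumptions): structure functions $S_q(\tau)=\langle \vert u(t+\tau)-u(t)\vert^q\rangle$ are computed, and $\zeta(q)$ is taken as the slope of $\log S_q(\tau)$ versus $\log S_3(\tau)$, using $\zeta(3)=1$.
\begin{verbatim}
import numpy as np

def ess_exponents(u, q_list=(1, 2, 4, 5, 6), lags=None):
    """Extended self-similarity: zeta(q) as slopes of log S_q vs log S_3."""
    if lags is None:
        lags = np.unique(np.logspace(0, 4, 30).astype(int))
    S = {q: [] for q in set(q_list) | {3}}
    for tau in lags:
        du = np.abs(u[tau:] - u[:-tau])      # absolute increments at lag tau
        for q in S:
            S[q].append(np.mean(du ** q))
    log_s3 = np.log(S[3])
    return {q: np.polyfit(log_s3, np.log(S[q]), 1)[0] for q in q_list}
\end{verbatim}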
Here, we comment on some open issues of the EMD
method. Its main drawback is the lack of a solid
theoretical ground, since it is essentially empirical \cite{huang2005emdbook}. It has been
found experimentally that the method, especially for the HSA, is statistically stable
with different stopping criteria \cite{huang2003cle}. Recently,
Flandrin \textit{et al.} have obtained new theoretical results on
the EMD method \cite{flandrin2004emdb,rilling2006ise,rilling2008oot}.
However, more theoretical work is still needed to fully understand
this method.
\section{Conclusion}
We have applied here empirical mode decomposition to analyze a
high Reynolds number turbulent experimental
time series. After decomposition, the original velocity time series
is separated into several intrinsic modes. We showed that this method acts as an
almost dyadic filter bank in the frequency domain, confirming
previous results that have been obtained on Gaussian noise or fractional
Gaussian noise. By comparing
the Fourier spectrum of each mode and its associated characteristic scale,
we can interpret each mode according to the range to which it belongs.
The first mode contains the smallest scale and the measurement noise;
two modes are associated to dissipation scales, and many modes are associated to
the inertial subrange corresponding to the turbulent energy cascade.
The last modes correspond to the large scales
associated to the coherent structures (energy-containing structures).
We have obtained a first 2D representation of the joint pdf
$p(\omega,\mathcal{A})$. We observed an interesting power-law
behaviour with scaling exponent $\beta_1\simeq 0.38$ for the
location of the joint pdf skeleton points. We also observed a power
law behaviour with scaling exponent $\beta_2 \simeq 0.63$ for the
skeleton pdf $p_{\max}(\omega)$. It is also found that the
log-Poisson distribution provides a better fit to the velocity pdf
than the lognormal distribution. The intermittency information
in multiscaling (multifractal) turbulent processes was then extracted
using the HSA framework.
The scaling exponents in amplitude-frequency space
($\xi(q)-1$) are close to the ones in real space $\zeta(q)$, despite
the quite different approaches used in both cases.
We have here
extended the EMD-HSA approach in a quite natural way in order to
consider intermittency. This provides a new time-frequency analysis
for multifractal time series, that is likely to be applicable to
other fields within the multifractal framework.
\acknowledgments This work is supported in part by the National
Natural Science Foundation of China (No.10672096 and No.10772110)
and the Innovation Foundation of Shanghai University. Y.~H. is
financed in part by a Ph.D. grant from the French Ministry of
Foreign Affairs. The EMD Matlab codes used in this paper are
written by P. Flandrin from laboratoire de Physique, CNRS \& ENS
Lyon (France): http://perso.ens-lyon.fr/patrick.flandrin/emd.html.
Experimental data have been measured in the Johns Hopkins
University's Corrsin wind tunnel and are available for download at
C. Meneveau's web page: http://www.me.jhu.edu/\~{}meneveau/datasets.html.
We thank the reviewers for helpful comments.
\bibliographystyle{eplbib}
\noindent{ \bf Key-words:} Kinetic equations, travelling waves, dispersion relation\\
\noindent{\bf AMS Class. No:} {35Q92, 45K05, 35C07}
\section{Introduction}
\subsection*{The model.}
In this paper, we are interested in propagation phenomena occurring in the following reaction-transport equation
\begin{equation} \label{eq:main}
\begin{cases}
\partial_t f(t,x,v) + v \cdot \nabla_x f(t,x,v) = M(v) \rho(t,x) - f(t,x,v) + r\rho(t,x) \left( M(v) - f(t,x,v) \right),\\ \hfill (t,x,v) \in \mathbb{R}_+ \times \mathbb{R}^n \times V\, ,&\smallskip\\
f(0,x,v) = f_0(x,v)\,, \hfill (x,v) \in \mathbb{R}^n \times V\, ,
\end{cases}
\end{equation}
where $r>0$. The mesoscopic density $f$ depends on time $t\in\mathbb{R}^+$, position $x\in\mathbb{R}^n$ and velocity $v\in V$ and describes a population of individuals. The macroscopic density is $\rho(t,x) = \int_V f(t,x,v)\, dv$. The subset $V\subset \mathbb{R}^n$ is the set of all possible velocities. From now on, we assume
\begin{itemize}
\item[{\bf(H0)}]\label{H0} The velocity set $V \subset \mathbb{R}^n$ is compact.
\end{itemize}
For any given direction $e \in \mathbb{S}^{n-1}$, we define
\begin{equation*}
\overline{v}(e) = \max\left\lbrace v\cdot e, v\in V\right\rbrace, \qquad \mu(p) = \vert p \vert \overline{v}\left( \frac{p}{\vert p \vert} \right), \quad \mathrm{Arg}\,\mu(p) = \left\{ v\in V\mid v\cdot p = \mu(p)\right\}.
\end{equation*}
We set
\begin{equation*}
v_{max}:=\underset{v\in V}{\mathrm{sup}}\,|v|, \qquad |V|:=\int_V dv.
\end{equation*}
Individuals move following a so-called velocity-jump process. That is, they alternate successively a run phase, with velocity $v\in V$, and a change of velocity at rate 1, which we call the tumbling. The new velocity is chosen according to the probability distribution $M$. Throughout the paper, we assume
\begin{itemize}
\item[{\bf(H1)}]\label{H1} $M\in L^1(V)$, and
\begin{equation}\label{eq:hypM}
\left\langle v \right\rangle_M:=\int_{V} vM(v)dv = 0.
\end{equation}
\end{itemize}
Note that it is challenging to replace the linear BGK operator $M\rho - f$ by a more general collision operator of the form $P(f)-\Sigma f$ where $P$ is a positive operator.
However, to remain consistent with \cite{bouin_propagation_2015}, we will stick to their framework and leave this question for future work.
\begin{rem}
In fact, our analysis can easily be extended to the case $\left\langle v \right\rangle_M\in \mathbb{R}^n \setminus \{0\}$. Setting $\mathbb{V}:=V-\left\langle v \right\rangle_M$, $\mathbb{M}(w):=M(w+\left\langle v \right\rangle_M)$ and $\mathbb{F}(t,x,w):=f(t,x+\left\langle v \right\rangle_Mt,w+\left\langle v \right\rangle_M)$, for all $(t,x,w)\in\mathbb{R}_+\times\mathbb{R}^n\times \mathbb{V}$, we recover our assumptions in the new framework.
\end{rem}
The reproduction of individuals is taken into account through a reaction term of monostable type. The constant $r>0$ is the growth rate in absence of any saturation. New individuals start with a velocity chosen at random with the same probability distribution $M$. The quadratic saturation term accounts for local competition between individuals, regardless of their speed.
We assume that initially $0 \leq f_0 \leq M$, so that this remains true for all times, see \cite{bouin_propagation_2015,cuesta_traveling_2012}.
\subsection*{Earlier works and related topics}
It is relatively natural to address the question of spreading for \eqref{eq:main} since there is a strong link between \eqref{eq:main} and the classical Fisher-KPP equation \cite{fisher_wave_1937,kolmogorov_etude_1937}. Indeed, a suitable parabolic rescaling
\begin{equation} \label{eq:main2}
\epsilon^2 \partial_t g_\varepsilon + \epsilon v \cdot \nabla_x g_\varepsilon = \left(M(v) \rho_{g_\varepsilon} - g_\varepsilon\right) + \epsilon^2 r\rho_{g_\varepsilon} \left( M(v) - g_\varepsilon \right)\,,
\end{equation}
leads to the Fisher-KPP equation (see \cite{cuesta_traveling_2012} for example) in the limit $\varepsilon \to 0$,
\begin{align}\label{eq:kolmogorov_etude_1937}
&\partial_t \rho^0- \left\langle v^2 \right\rangle_M\partial_{xx} \rho^0 = r \rho^0 \left( 1 - \rho^0 \right)\, ,\\
&g^0 := \lim_{\varepsilon \to 0} g_\varepsilon = M \rho^0 \nonumber,
\end{align}
assuming that the two following conditions on $M$ hold:
\begin{equation*}
\int_V vM(v)dv=0,\quad \left\langle v^2 \right\rangle_M:=\int_V v^2 M(v)dv>0.
\end{equation*}
We recall that for nonincreasing initial data decaying sufficiently fast at $x = +\infty$, the solution of \eqref{eq:kolmogorov_etude_1937} behaves asymptotically as a travelling front moving at the minimal speed $c^* = 2 \sqrt{r\left\langle v^2 \right\rangle_M}$ \cite{kolmogorov_etude_1937,aronson_multidimensional_1978}.
However, even though the philosophy of the results will be the same in spirit, we emphasize that nothing related to this parabolic limit will be used in the present paper. Our argumentation does not rely on any perturbative analysis. Hence, we obtain results without any smallness assumption on the parameters. This will yield significant differences, regarding both the results and the methods of proof.
A short review of earlier results is now in order. Hadeler has worked on propagation for reaction-telegraph equations \cite{Hadeler_1988,Hadeler1999}, that can be seen as two-speeds kinetic models.
Morever, a similar type of result was obtained by Cuesta, Hittmeir and Schmeiser \cite{cuesta_traveling_2012} in the diffusive regime (\em i.e. \em for sufficiently small $\varepsilon$ in \eqref{eq:main}). Using a micro-macro decomposition, they constructed possibly oscillatory travelling waves of speed $c\geq 2\sqrt{rD}$ for $\epsilon$ small enough (depending on $c$). In addition, when the set of admissible speeds $V$ is bounded, $c> 2\sqrt{rD}$, and $\epsilon$ is small enough, they prove that the travelling wave constructed in this way is indeed nonnegative.
Propagation for the full kinetic model \eqref{eq:main} has then been investigated by the first author with Calvez and Nadin in \cite{bouin_propagation_2015}. In one dimension of velocities, and when the velocities are bounded, they proved the existence and stability of travelling waves solutions to \eqref{eq:main}. The minimal speed of propagation of the waves is determined by the resolution of a spectral problem in the velocity variable. In particular, it is not related with the KPP speed, except that the speeds coincide in the diffusive regime. It is worth mentioning that the case of unbounded velocities is significantly different as the front spreads with arbitrarily large speed \cite{bouin_propagation_2015}. This case shall not be discussed further in this paper. This phenomenon was newly appearing for this type of equations and unexpected from the macroscopic limit. One aim of this paper is to extend the construction of travelling waves solutions to any velocity dimension, which was left open after \cite{bouin_propagation_2015}.
There is a strong link between this KPP-type propagation phenomenon and large deviations for the underlying velocity-jump process. Indeed, it is well known that fronts in Fisher-KPP equations are so-called \textit{pulled fronts}, that is, are triggered by very small populations at the edge that are able to reproduce almost exponentially. Thus, studying large deviations for this type of process at the kinetic level is an interesting problem in itself. In \cite{bouin_kinetic_2012,bouin_hamilton-jacobi_2015}, the authors have combined Hamilton-Jacobi equations and kinetic equations to study large deviations (and propagation) from a PDE point of view. These works show that the asymptotics of large deviations in the kinetic equation do not coincide with the asymptotics of large deviations obtained after a diffusive approximation.
As a side note, the Hamilton-Jacobi technique (described in the next subsection) has also been used extensively in recent years to study long-time dynamics in all sorts of structured models. It allows one to describe the evolution of dominant phenotypical traits in a given population (see \cite{barles_concentration_2009,lorz_dirac_2011,bouin_hamiltonjacobi_2015} and the references therein), to study various adaptive dynamics issues \cite{diekmann_dynamics_2005}, and to describe propagation in reaction-diffusion models of kinetic type \cite{bouin_invasion_2012} as well as in age renewal equations \cite{calvez_limiting_2016}. This approach has also recently been used to study large deviations of velocity-jump processes \cite{bouin_kinetic_2012,bouin_large_2016,caillerie_large_2017} or slow-fast systems \cite{bressloff_path_2014,bressloff_hamiltonian_2014,faggionato_averaging_2010,kifer_large_2009,perthame_asymmetric_2009}.
\subsection*{The Hamilton-Jacobi limit}
After the seminal paper by Evans and Souganidis \cite{freidlin_functional_1985,evans_pde_1989}, an important technique to derive the propagating behavior in reaction-diffusion equations is to revisit the WKB expansion to study hyperbolic limits. We will directly present the technique on our problem for conciseness but one can find the original framework for the Fisher-KPP equation in \cite{evans_pde_1989} and complements in \cite{barles_solutions_1994,barles_wavefront_1990,souganidis_front_1997,crandall_users_1992}.
We perform the hyperbolic scaling $\left( t,x,v \right) \to \left( \frac{t}{\varepsilon} , \frac{x}{\varepsilon} ,v \right)$ in \eqref{eq:main}. Importantly, the velocity variable is not rescaled (it cannot be rescaled since it lies in a bounded set). The \textit{kinetic Hopf-Cole transformation} (already used in \cite{bouin_kinetic_2012,caillerie_large_2017}) is written
\begin{equation}\label{eq:HopfColetransform}
\forall (t,x,v) \in \mathbb{R}^+ \times \mathbb{R}^n \times V, \qquad f^{\varepsilon}(t,x,v) = M(v) e^{-\frac{\varphi^{\varepsilon}(t,x,v)}{\varepsilon}}.
\end{equation}
Thanks to the maximum principle \cite{cuesta_traveling_2012}, $\varphi^{\varepsilon}$ is well defined and remains nonnegative for all times.
Plugging \eqref{eq:HopfColetransform} in \eqref{eq:main}, one obtains the following equation for $\varphi^{\varepsilon}$:
\begin{equation}\label{eq:mainHJeps}
\partial_t \varphi^{\varepsilon} + v \cdot \nabla_x \varphi^{\varepsilon} + r = (1+r)\int_{V} M(v') \left( 1-e^{\frac{\varphi^\varepsilon(v) - \varphi^\varepsilon(v')}{\varepsilon}} \right) dv' + r \rho^{\varepsilon}.
\end{equation}
Our aim is to pass to the limit in \eqref{eq:mainHJeps}. To make the convergence result appear naturally, we shall start by providing formal arguments. Assuming Lipschitz bounds on $\varphi^\varepsilon$, and since $\rho^\varepsilon$ is uniformly bounded, the boundedness of $\int_{V} M(v') \left( 1- \exp\left((\varphi^\varepsilon(v) - \varphi^\varepsilon(v'))/\varepsilon\right)\right) dv'$ implies that we expect the limit $\varphi^0$ to be independent of $v$. To identify the limit $\varphi^0$, we shall thus perform the following expansion
\begin{equation}\label{eq:WKBansatz}
\varphi^{\varepsilon}(t,x,v)=\varphi^0(t,x)+\varepsilon\eta(t,x,v).
\end{equation}
Plugging the ansatz \eqref{eq:WKBansatz} into \eqref{eq:mainHJeps} yields
\begin{equation*}
\partial_t \varphi^0 + v\cdot\nabla_x \varphi^0+r=(1+r)\int_V M(v')\left(1-e^{\eta(v)-\eta(v')}\right)dv' + re^{-\frac{\varphi^0}{\varepsilon}}\int_V M(v')e^{-\eta(v')}dv'.
\end{equation*}
As a consequence, for any $(t,x) \in \left\{ \varphi^0>0\right\}$, we have
\begin{equation}\label{eq:spectralpb1}
\partial_t \varphi^0 + v\cdot\nabla_x \varphi^0=1-e^{\eta(v)}(1+r)\int_V M(v')e^{-\eta(v')}dv'.
\end{equation}
One should read this equation as an eigenvalue problem in the velocity variable. Indeed, setting
\begin{equation*}
p(t,x) = \nabla_x \varphi^0(t,x), \qquad \eta(t,x,v) := - \ln \left(\frac{Q_{p(t,x)}}{M(v)}\right), \qquad H(p(t,x)):=-\partial_t \varphi^0(t,x),
\end{equation*}
we see that $(H,Q)$ are the principal eigenelements of the following spectral problem
\begin{equation*}
(1+r)M(v) \int_V Q_p(v') \, dv' - \left( 1 - v \cdot p \right) Q_p(v) = H(p) Q_p(v).
\end{equation*}
The dependency with respect to $r$ can be identified by setting $p':=\frac{p}{1+r}$, $\mathcal{H}(\cdot):=\frac{H((1+r)\cdot)-r}{1+r}$ and $\widetilde{Q}_{p'}=Q_p$. Indeed, we have then that $\partial_t \varphi^0 + (r+1)\mathcal{H}(\frac{p}{r+1})+r = 0$ and the Hamiltonian $\mathcal{H}$ is given by
\begin{equation}\label{eq:spectralproblem}
\left( 1+ \mathcal{H}\left(p'\right) - v \cdot p' \right) \widetilde{Q}_{p'}(v) = M(v) \int_V \widetilde{Q}_{p'}(v') \, dv'.
\end{equation}
After these heuristics, we are now ready to define properly the Hamiltonian $\mathcal{H}$ involved.
\begin{definition}\label{def:hamiltonian}
We define, for $e \in \mathbb{S}^{n-1}$,
\begin{equation*}
l(e) = \int_V \frac{M(v)}{\overline{v}(e)-v \cdot e}dv.
\end{equation*}
The so-called singular set is defined by
\begin{equation}\label{eq:Sing}
\mathrm{Sing}\left(M\right):=\left\{p\in\mathbb{R}^n, \int_V \frac{M(v)}{\mu(p)-v\cdot p}dv\leq 1\right\} =\left\{p\in\mathbb{R}^n, \, l\left( \frac{p}{\vert p \vert } \right) \leq \vert p \vert \right\}.
\end{equation}
Then, the Hamiltonian $\mathcal{H}$ involved in this paper is given as follows:
\begin{itemize}
\item If $p\notin \mathrm{Sing}\left(M\right)$, then $\mathcal{H}$ is uniquely defined by the following implicit relation :
\begin{equation}\label{eq:defHimplicit}
\int_V \frac{M(v)}{1+ \mathcal{H}(p) -v\cdot p}dv=1,
\end{equation}
\item else, $\mathcal{H}(p)= \mu\left(p \right)- 1 $.
\end{itemize}
\end{definition}
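Although this definition is implicit, $\mathcal{H}(p)$ is straightforward to evaluate numerically: as detailed in \Cref{thespectralproblem} below, the map $\xi\mapsto\int_V M(v)/(1+\xi-v\cdot p)\,dv$ is decreasing on $(\mu(p)-1,+\infty)$, so a root bracket is easy to find. The following sketch (Python with SciPy; the uniform one-dimensional equilibrium on $V=[-1,1]$ is an illustrative choice, not imposed by the text) applies Brent's method and falls back to $\mu(p)-1$ on the singular set.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M = lambda v: 0.5              # illustrative equilibrium on V = [-1, 1]
mu = lambda p: abs(p)          # here v_bar(+1) = v_bar(-1) = 1

def I(H, p):
    """I(H, p) = int_V M(v)/(1 + H - v*p) dv, defined for H > mu(p) - 1."""
    return quad(lambda v: M(v) / (1.0 + H - v * p), -1.0, 1.0)[0]

def hamiltonian(p, tol=1e-9):
    """Solve I(H, p) = 1 for H; return mu(p) - 1 if p lies in Sing(M)."""
    H_min = mu(p) - 1.0
    if I(H_min + tol, p) <= 1.0:   # singular set: no L^1 eigenvector
        return H_min
    H_hi = H_min + 1.0
    while I(H_hi, p) > 1.0:        # I decreases in H: double until I < 1
        H_hi = 2.0 * H_hi + 1.0
    return brentq(lambda H: I(H, p) - 1.0, H_min + tol, H_hi)

print(hamiltonian(0.0))            # H(0) = 0 since M is a probability density
\end{verbatim}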
The relevancy of such a definition, \textit{i.e.} the resolution of \eqref{eq:spectralproblem}, will be discussed in \Cref{sec:HJ} below. With this definition in hand, the convergence result for the sequence of functions $\varphi^\varepsilon$ is as follows.
\begin{theorem}\label{thm:HJlimit}
Suppose that (H0) and (H1) hold, and that the initial data satisfies
\begin{equation*}
\forall (x,v) \in \mathbb{R}^n \times V, \qquad \varphi^{\varepsilon}(0,x,v) = \varphi_0(x,v).
\end{equation*}
Then, $\left( \varphi^{\varepsilon} \right)_\varepsilon$ converges uniformly on all compacts of $\mathbb{R}_+^*\times \mathbb{R}^n\times V$ towards $\varphi^0$, where $\varphi^0$ does not depend on $v$. Moreover $\varphi^0$ is the unique viscosity solution of the following Hamilton-Jacobi equation:
\begin{equation}\label{eq:varHJ}
\begin{cases}
\min\left \lbrace \partial_t \varphi^0 + (1+r) \mathcal{H} \left(\frac{\nabla_x \varphi^0}{1+r} \right) + r , \varphi^0 \right\rbrace = 0, & \qquad (t,x) \in \mathbb{R}_+^* \times \mathbb{R}^n, \medskip \\
\varphi^0(0,x)= \underset{v\in V}{\mathrm{min}}\,\varphi_0(x,v),& \qquad x \in \mathbb{R}^n.
\end{cases}
\end{equation}
\end{theorem}
Let us now emphasize the differences between the result presented here and the closely related works \cite{bouin_kinetic_2012,bouin_hamilton-jacobi_2015,caillerie_large_2017}. First, the results from \cite{bouin_kinetic_2012} and \cite{bouin_hamilton-jacobi_2015} only hold for $n=1$ and for $M\geq \delta>0$. In \cite{bouin_hamilton-jacobi_2015}, the first author successfully proved a convergence result in the case $r>0$. It is worth mentioning that a much wider class of collision operators was considered in \cite{bouin_hamilton-jacobi_2015}, but under the condition of existence of an $L^1$ eigenvector. We believe that the ideas of the present work could be used there, but with technicalities inherent to the spectral problem that would require a special study.
As explained before, the multidimensional case ($n>1$) is more delicate since the relation \eqref{eq:defHimplicit} may not have a solution. We refer to our \Cref{ex:counterexample} for a situation where this happens. In \cite{caillerie_large_2017}, the second author generalized the convergence result of \cite{bouin_kinetic_2012} in the multidimensional case, with no reaction term. However, the proof we design in this paper is simpler and more adaptable. For this we manage to use the half-relaxed limits of Barles and Perthame \cite{barles_exit_1988} in the spirit of \cite{bouin_hamiltonjacobi_2015}.
We point out that an asymptotic preserving scheme has been developed by Hivert in \cite{hivert_asymptotic_2017} to numerically solve \eqref{eq:mainHJeps} using the Hamilton-Jacobi framework developed in \cite{bouin_hamilton-jacobi_2015}.
We present the proof of \Cref{thm:HJlimit} in \Cref{sec:HJ} below.
\subsection*{Travelling wave solutions and spreading of planar like initial data}
We then investigate the existence of travelling wave solutions of \eqref{eq:main}. As in the mono-dimensional case treated in \cite{bouin_propagation_2015}, we will prove that there exists a minimal speed $c^*$ for which travelling wave solutions exist. We will use the following definition throughout the paper.
\begin{defi}\label{def:deftw}
A function $f$ is a travelling wave solution of speed $c \in \mathbb{R}_+$ and direction $e\in\mathbb{S}^{n-1}$ of equation \eqref{eq:main} if it can be written $f(t,x,v) = \tilde f \left( x \cdot e - ct , v \right)$, where the profile $\tilde f \in \mathcal{C}^2 \left( \mathbb{R}, L^1(V) \right)$ solves
\begin{equation}
\left( v \cdot e - c \right) \partial_\xi \tilde f = M(v) \tilde\rho - \tilde f + r \tilde\rho \left( M(v) - \tilde f\right)
\end{equation}
and satisfies
\begin{equation} \label{eq:deftw}\forall (z,v) \in\mathbb{R}\times V\,, \quad 0\leq \tilde f(z,v)\leq M(v)\,, \quad \lim_{z\to -\infty} \tilde f(z,v)=M(v)\,, \quad \lim_{z\to +\infty} \tilde f(z,v)=0\;.
\end{equation}
\end{defi}
It is well known for this kind of Fisher-KPP type problem that propagation fronts are so-called pulled fronts, that is, the speed of propagation is obtained by seeking exponentially decaying solutions of the linearized problem in a moving frame. As a consequence, for any $\lambda > 0$, one can define $c(\lambda,e)$ using the spectral problem solved in \Cref{def:hamiltonian}. Indeed, we set
\begin{equation}\label{eq:linspeed}
c(\lambda,e) = \frac{1}{\lambda}\left[ (1+r)\mathcal{H} \left(\frac{\lambda e}{1+r} \right) + r \right].
\end{equation}
Then the minimal speed in the direction $e \in \mathbb{S}^{n-1}$ is given by
\[ c^*(e)= \inf_{\lambda>0} c(\lambda,e)\, . \]
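To give a concrete feel for this dispersion relation, the minimal speed can be evaluated numerically; the self-contained sketch below (Python with SciPy) uses the same illustrative setting as in the previous sketch, namely the uniform equilibrium on $V=[-1,1]$, the direction $e=+1$ and the arbitrary choice $r=1$, and minimizes $c(\lambda,e)$ with a bounded scalar minimizer.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

def hamiltonian(p, tol=1e-9):
    """H(p) for M = 1/2 on V = [-1, 1]; Sing(M) is empty in this case."""
    I = lambda H: quad(lambda v: 0.5 / (1.0 + H - v * p), -1.0, 1.0)[0]
    lo, hi = abs(p) - 1.0 + tol, abs(p) + 1.0   # I(hi) < 1 already holds
    while I(hi) > 1.0:
        hi = 2.0 * hi + 1.0
    return brentq(lambda H: I(H) - 1.0, lo, hi)

def c_of_lambda(lam, r=1.0):
    """Dispersion relation c(lambda, e) in the direction e = +1."""
    return ((1.0 + r) * hamiltonian(lam / (1.0 + r)) + r) / lam

res = minimize_scalar(c_of_lambda, bounds=(1e-3, 50.0), method="bounded")
print("c* ~", res.fun, "at lambda* ~", res.x)   # one checks c* < v_bar = 1
\end{verbatim}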
We obtain the following existence result.
\begin{thm} \label{thm:existence-tw}
Let $e\in \mathbb{S}^{n-1}$. For all $c \in [ c^*(e) , \overline{v}(e) )$, there exists a travelling wave solution of \eqref{eq:main} with speed $c$ and direction $e$. Moreover, there exists no positive travelling wave solution of speed $c\in [0,c^*(e))$.
\end{thm}
Following very closely the proof used in the mono-dimensional case, we shall prove this Theorem using sub- and super-solutions and a comparison principle satisfied by \eqref{eq:main}. We shall construct these sub- and super-solutions using travelling wave solutions of the linearized problem. The main difference concerning the travelling wave result is the way we prove the minimality of the speed $c^*(e)$. Indeed, it might happen that $c(\lambda,e)$ is singular at its minimum $\lambda^*$, so that one cannot reproduce the argument used in the mono-dimensional case in \cite{bouin_propagation_2015}, which was based on the Rouch\'e Theorem. Using the Hamilton-Jacobi framework above, in a similar fashion as in \cite{berestycki_spreading_2012} for the Fisher-KPP equation in a heterogeneous medium, we prove the following result.
\begin{prop} \label{prop:spreadingplanar}
Let $f_0$ be a non-zero initial data, compactly supported in some direction $e_0$, such that there exists $\gamma < 1$ such that
\begin{equation*}
\gamma M(v) \textbf{1}_{[-x_m,x_m]\cdot e_0 + e_0^\bot}(x) \leq f_0 (x,v)\leq M(v) \textbf{1}_{[-x_M,x_M]\cdot e_0 + e_0^\bot}(x),
\end{equation*}
for all $(x,v) \in \mathbb{R}^n \times V$. Let $f$ be the solution of the Cauchy problem \eqref{eq:main} associated to this initial data. Then we have
\begin{equation}\label{eq:frontplanar}
\lim_{t \to +\infty} \sup_{x\cdot e_0 > ct} \rho(t,x) = 0,\text{ if } c > c^*(e_0),
\end{equation}
\begin{equation}\label{eq:backplanar}
\lim_{t\to +\infty} f(t,e_0^\bot + c t e_0,v) = M(v) \,,\text{ if } c < c^*(e_0),
\end{equation}
uniformly in $v \in V$.
\end{prop}
\subsection*{Spreading of compactly supported initial data}
Finally, we also deduce from the Hamilton-Jacobi framework a spreading result for initial conditions that are compactly supported. To this aim, let us first define the speed $w^*(e_0)$ associated to any direction $e_0 \in \mathbb{S}^{n-1}$ via the following Freidlin-G\"artner formula (see \cite{freidlin_propagation_1979} for its first derivation).
\begin{equation*}
w^*(e_0) = \min_{\substack{e \in \mathbb{S}^{n-1}\\e_0 \cdot e > 0}}\left( \frac{c^*(e)}{e_0 \cdot e}\right).
\end{equation*}
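Numerically, once a routine for $c^*(\cdot)$ is available (for instance built from the previous sketch, extended to $n=2$), evaluating this formula reduces to a one-dimensional minimization over the half-circle of directions facing $e_0$; a schematic implementation (Python, with \texttt{c\_star} a user-supplied function and $e_0$ a unit vector) reads:
\begin{verbatim}
import numpy as np

def w_star(c_star, e0, n_dirs=2000):
    """Freidlin-Gartner speed: min of c*(e)/(e . e0) over {e : e . e0 > 0},
    written here for n = 2 with e0 a unit vector."""
    phi0 = np.arctan2(e0[1], e0[0])
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_dirs + 2)[1:-1]
    speeds = []
    for th in thetas:                 # e spans the open half-circle e.e0 > 0
        e = np.array([np.cos(phi0 + th), np.sin(phi0 + th)])
        speeds.append(c_star(e) / np.dot(e, e0))
    return min(speeds)
\end{verbatim}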
We obtain the following result.
\begin{prop} \label{prop:spreadingbounded}
Let $f_0$ be a non-zero compactly supported initial data such that $0\leq f_0 (x,v)\leq M(v)$ for all $(x,v) \in \mathbb{R}^n \times V$. Let $f$ be the solution of the Cauchy
problem \eqref{eq:main} associated to this initial data. Then for any $e_0 \in \mathbb{S}^{n-1}$ and all $x\in \mathbb{R}^n$, we have
\begin{equation}\label{eq:frontcompact}
\lim_{t \to \infty} f(t,x + c t e_0,v) = 0, \qquad \text{ if } c > w^*(e_0),
\end{equation}
pointwise
and
\begin{equation}\label{eq:backcompact}
\lim_{t \to \infty} f(t,c t e_0,v) = M(v), \qquad \text{ if } 0 \leq c < w^*(e_0),
\end{equation}
for all $v \in V$.
\end{prop}
This result is interesting since, contrary to the case of the usual Fisher-KPP equation in heterogeneous domains, where the Freidlin-G\"artner formula holds, see \cite{rossi_freidlin-gartner_2015}, here there is no heterogeneity in space. The heterogeneity potentially comes from the velocity set, and would thus not be present in the macroscopic limit (the Fisher-KPP equation, see above). Of course, if $V$ is rotationally symmetric, the speed $w^*$ is independent of the direction, and the propagation is radial.
One could wonder how the shape of $V$ (\textit{e.g.} its topological properties or its convexity) influences the shape of the front and the speed of propagation. We will investigate this question in a future work.
The rest of this paper is organized as follows. In \Cref{sec:HJ}, we prove the Hamilton-Jacobi limit. We discuss the construction of travelling waves and the spreading results in \Cref{sec:TW}.
\subsection*{Acknowledgements}
This work has started after a discussion with Olivier Benichou and Raphaël Voituriez. The authors thank warmly Vincent Calvez for early discussions about this problem, and for a careful reading of the manuscript. This work has benefited of an invaluable insight from Guy Barles, which, while originally meant for another manuscript, found a direct application in the present one. The authors are more than deeply grateful to Guy Barles for this. EB acknowledges the green hospitality of the University of Cambridge during the second semester of the academic year 2015-2016. EB and NC acknowledge the support of the ERC Grant MATKIT
(ERC-2011-StG). NC thanks the University of Cambridge for its hospitality during his three week stay over Spring 2016. In addition, NC has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and
innovation program (grant agreement No 639638). EB dedicates this work to Paul-Antoine Girard, as an encouragement to never give up and always slide towards his highest ambitions.
\section{The Hamilton-Jacobi limit}\label{sec:HJ}
In this Section, we present the proof of the convergence result \Cref{thm:HJlimit}. We then prove a convergence result for $\rho^\varepsilon$ in the region $\lbrace \varphi^0 = 0 \rbrace$. This result will help us to show that the speed of propagation is still the minimal speed of existence of travelling waves, despite the singularity of $\mathcal{H}$.
\subsection{The spectral problem}\label{thespectralproblem}
In this Section, we discuss the resolution of the spectral problem given by \eqref{eq:spectralproblem}. We also provide examples for which the singular set of $M$ is not empty.
\subsubsection{The resolution}
For any $p'\in\mathbb{R}^n$, we look for an eigenvalue $\mathcal{H}(p')$ associated to a positive eigenvector $\widetilde{Q}_{p'}$ such that
\begin{equation*}
\left( 1 + \mathcal{H}(p') - v \cdot p' \right) \widetilde{Q}_{p'}(v) = M(v)\int_V \widetilde{Q}_{p'}(v') \, dv', \qquad v \in V.
\end{equation*}
Note that it may happen that $\widetilde{Q}_{p'}$ has a singular part. Since the problem is linear, one can always assume that $\widetilde{Q}_{p'}$ is a probability measure. We are thus led to find an eigenvalue $\mathcal{H}(p')$ for which there exists a probability measure $\widetilde{Q}_{p'}$\footnote{To avoid overloading notation, we identify $\widetilde{Q}_{p'}$ with its density when relevant.} such that
\begin{equation*}
\left( 1 + \mathcal{H}(p') - v \cdot p' \right) \widetilde{Q}_{p'}(v) = M(v), \qquad v \in V.
\end{equation*}
To make the singular set $\mathrm{Sing}\left(M\right)$ appear naturally, let us first investigate the case when $\widetilde{Q}_{p'} \in L^1(V)$. If a solution exists, then the profile $\widetilde{Q}_{p'}$ necessarily satisfies the following equation:
\begin{equation}\label{eq:profile}
\widetilde{Q}_{p'}(v) = \frac{M(v)}{ 1 + \mathcal{H}(p') - v \cdot p'}, \qquad v \in V.
\end{equation}
This is only possible if such an expression defines a probability measure. As a consequence, one shall look for conditions under which there exists $\mathcal{H}(p')$ such that
\begin{equation*}
I \left(\mathcal{H}(p'),p' \right) := \int_V \frac{M(v')}{1 + \mathcal{H}(p') - v' \cdot p'} dv' = 1,
\end{equation*}
with $1 + \mathcal{H}(p') - v' \cdot p' > 0$ for all $v' \in V$, that is $\mathcal{H}(p') > \mu(p') - 1$.
For any $p' \notin \text{Sing}(M)$, since the function $\xi \mapsto I(\xi,p')$ is decreasing over $(\mu(p') - 1,+\infty)$, $\mathcal{H}(p')$ exists and is unique in this interval since $I(\mu(p') - 1,p') > 1$, by the definition of $p'$ not being in the singular set.
However, for any $p' \in \text{Sing}(M)$, it is not possible to solve $I \left(\mathcal{H}(p'),p' \right) = 1$ since $I(\mu(p') - 1,p') \leq 1$. After Theorem 1.2 in \cite{coville_singular_2013-1}, there exists a solution to \eqref{eq:spectralproblem}, given by the couple $(\mathcal{H}(p'),\widetilde{Q}_{p'})$ where $\mathcal{H}(p')=\mu(p')-1$ and $\widetilde{Q}_{p'}$ is a positive measure given by:
\begin{equation*}
\widetilde{Q}_{p'} :=\frac{M(v)}{\mu(p')-v\cdot p'} dv + \left( 1 - \int_V \frac{M(v)}{\mu(p')-v\cdot p'} dv \right) \delta_w,
\end{equation*}
where $\delta_w$ is the dirac mass located at $w\in \mathrm{Arg}\,\mu(p')$.
From \cite{caillerie_large_2017}, we know that the set $\mathrm{Sing}(M)^{c}$ is convex and contains $0$. To identify the different cases where such a singularity set may occur, we detail three examples hereafter.
\subsubsection{Examples}
\begin{example}\label{ex:nosing}
In the one-dimensional case ($n=1$), we have $\mathrm{Sing}(M)=\emptyset$ when $\underset{v\in V}{\mathrm{inf}}\,M(v)>0$ since, for $p'>0$ say (the case $p'<0$ being symmetric), the change of variables $u=v/\overline{v}$ gives
\begin{equation}\label{eq:ex}
\int_{-\overline{v}}^{\overline{v}}\frac{M(v)}{\mu(p')-vp'}\,dv= \int_{-\overline{v}}^{\overline{v}}\frac{M(v)}{|p'|\overline{v}-vp'}\,dv\geq \frac{\underset{v\in V}{\mathrm{inf}}\,M(v)}{|p'|}\int_{-1}^{1}\frac{du}{1-u}=+\infty.
\end{equation}
By monotone convergence we have
\begin{equation*}
\underset{H\to \mu(p')-1}{\mathrm{lim}}\,\int_{-\overline{v}}^{\overline{v}}\frac{M(v)}{1+H -vp'}dv = +\infty,
\end{equation*}
hence, for all $p'\in\mathbb{R}$, there exists a unique $\mathcal{H}(p')$ that solves the spectral problem in $L^1(V)$.
\end{example}
This latter framework is the one used in \cite{bouin_propagation_2015}. In fact, it suffices to require that $M$ does not vanish in a neighborhood of $v=\overline{v}$ in order to get $\mathrm{Sing}(M)=\emptyset$: the integral in \eqref{eq:ex} still diverges in that case. If $M(\overline{v})=0$, this argument may break down. Consider for example:
\begin{example}\label{ex:annulebord}
Let $n=1$, $V=[-1,1]$ and $M(v)=\frac{3}{2}(1-|v|)^2$. Then,
\begin{align*}
l(1)=\int_{-1}^{1}\frac{M(v)}{1-v} \,dv &=\frac{3}{2}\int_{-1}^{1}\frac{(1-|v|)^2}{1-v} \, dv = 3 \int_{0}^{1}\frac{(1-v)^2}{(1-v)(1+v)} \, dv \\
&= 3 \int_{0}^{1}\frac{1-v}{1+v} \, dv = 3 \int_{0}^{1}\frac{2-(1+v)}{1+v} \, dv\\
&= 3 \left( \int_{0}^{1}\frac{2}{1+v} \, dv - 1 \right) = 3(2\ln(2)-1).
\end{align*}
Hence, $|p'| \geq 3(2\ln(2)-1)$ if and only if
\begin{equation*}
\int_{-1}^{1}\frac{M(v)}{\mu(p')-vp'}dv=\frac{3}{2|p'|}\int_{-1}^{1}\frac{(1-|v|)^2}{1-v}dv\leq 1,
\end{equation*}
therefore, $\mathrm{Sing}(M)=\left(-3(2\ln(2)-1),3(2\ln(2)-1)\right)^c$. Let us also notice that
\begin{align*}
\int_V \frac{M(v)}{(\overline{v}(1)-v)^2}dv &=\frac{3}{2}\int_{-1}^{1}\frac{(1-|v|)^2}{(1-v)^2}dv=3\int_{0}^{1}\frac{1+v^2}{(1+v)^2}dv\\
&=3\int_{0}^{1}\frac{(1+v)^2 - 2(1+v)+2}{(1+v)^2}dv\\
&=3\int_{0}^{1} \left( 1 - \frac{2}{1+v}+ \frac{2}{(1+v)^2} \right)dv\\
&=3(1 -2 \ln(2) + 1) = 6(1 - \ln(2)) <+\infty.
\end{align*}
We will make a use of this result later.
\end{example}
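As a sanity check, outside of the argument itself, both closed forms computed in this example are easily confirmed by numerical quadrature (Python with SciPy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

M = lambda v: 1.5 * (1.0 - abs(v)) ** 2       # density on V = [-1, 1]

l1 = quad(lambda v: M(v) / (1.0 - v), -1, 1, points=[0])[0]
print(l1, 3 * (2 * np.log(2) - 1))            # both ~ 1.15888

I2 = quad(lambda v: M(v) / (1.0 - v) ** 2, -1, 1, points=[0])[0]
print(I2, 6 * (1 - np.log(2)))                # both ~ 1.84112
\end{verbatim}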
In the multi-dimensional case, the singular set may be non-empty even when $\underset{v\in V}{\mathrm{inf}}\,M(v)>0$. This can occur in the simplest cases.
\begin{example}\label{ex:counterexample}
Let $n\geq1$, let $V=B(0,1)$ be the $n$-dimensional unit ball. Let $e=e_1$ and $M=\omega_{n}^{-1}\,\mathds{1}_{\overline{B\left(0,1\right)}}$,
where $\omega_{n}$ is the Lebesgue measure of $V$. For $n=1$, since $M>0$ we have $\mathrm{Sing}(M)=\emptyset$ (recall \cref{ex:nosing}). Suppose now that $n>1$. Then,
\begin{eqnarray*}
l(e_1) &=&\int_{B\left(0,1\right)}\frac{M(v)}{\overline{v}(e_1)-v\cdot e_1}dv \\
& = & \frac{1}{\omega_{n}}\int_{B\left(0,1\right)}\frac{1}{1-v_1}dv\\
&=&\frac{1}{\omega_{n}}\int_{-1}^{1}\frac{1}{1-v_{1}}\left(\int \mathds{1}_{\left\{ v_1^2+v_2^2+\ldots+v_n^2\leq1\right\}}(v_2,\ldots,v_n)dv_2\ldots dv_n\right)dv_1.\\
\end{eqnarray*}
Now, for fixed $v_1$, the quantity $\int \mathds{1}_{\left\{ v_1^2+v_2^2+\ldots+v_n^2\leq1\right\}}(v_2,\ldots,v_n)dv_2\ldots dv_n$ is the Lebesgue measure of the $(n-1)$-dimensional ball of radius $\sqrt{1-v_1^2}$, hence
\begin{equation*}
\int \mathds{1}_{\left\{ v_1^2+v_2^2+\ldots+v_n^2\leq1\right\}}(v_2,\ldots,v_n)dv_2\ldots dv_n=\omega_{n-1}\times \left(\sqrt{1-v_1^2}\right)^{n-1}.
\end{equation*}
Finally,
\begin{align*}
l(e_1)& = \frac{\omega_{n-1}}{\omega_{n}}\int_{-1}^{1} \frac{\left(1-v_{1}^{2}\right)^{\frac{n-1}{2}}}{1-v_1} dv_{1} \\
&=\frac{2\omega_{n-1}}{\omega_{n}}\int_{0}^{1} \left(1-v_{1}^{2}\right)^{\frac{n-3}{2}} dv_{1}\\
&=\frac{2\omega_{n-1}}{\omega_{n}}\int_{0}^{\frac{\pi}{2}} \left(\cos(\theta)\right)^{n-2} d\theta\\
&= \frac{n}{n-1},
\end{align*}
where we have used, for example, the relationship between the volume of the unit ball and the Wallis integrals. By rotational invariance, $\mathrm{Sing}(M)=B\left(0,\frac{n}{n-1}\right)^{c}$.
\end{example}
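The identity $l(e_1)=n/(n-1)$ can likewise be cross-checked numerically from the one-dimensional reduction obtained above (Python with SciPy; the dimensions tested are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def omega(n):
    """Lebesgue measure of the unit ball in R^n."""
    return np.pi ** (n / 2.0) / gamma(n / 2.0 + 1.0)

def l_e1(n):
    """l(e_1) = (2 omega_{n-1} / omega_n) * int_0^1 (1 - v^2)^((n-3)/2) dv."""
    integral = quad(lambda v: (1.0 - v * v) ** ((n - 3.0) / 2.0), 0.0, 1.0)[0]
    return 2.0 * omega(n - 1) / omega(n) * integral

for n in (2, 3, 5, 10):
    print(n, l_e1(n), n / (n - 1))   # the two columns agree
\end{verbatim}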
\subsection{Proof of \Cref{thm:HJlimit}}
In this Section, we now prove \Cref{thm:HJlimit}. We will use the half-relaxed limits method of Barles and Perthame \cite{barles_exit_1988}. In addition to that, and similarly to the papers \cite{bouin_kinetic_2012,caillerie_large_2017}, we need to use the perturbed test-function method. We emphasize that the corrected test function is defined thanks to the spectral problem \eqref{eq:spectralproblem}, keeping only the regular part of the eigenfunction (recall that it may have singularities).
Since the sequence $\varphi^\varepsilon$ is uniformly bounded by the maximum principle (see Proposition 5 in \cite{bouin_hamilton-jacobi_2015}), we can define its upper and lower semi-limits by the following formulas
\begin{equation*}
\varphi^*(t,x,v) = \limsup_{\substack{\varepsilon \to 0\\(s,y,w) \to (t,x,v)}} \varphi^\varepsilon(s,y,w), \qquad \varphi_*(t,x,v) = \liminf_{\substack{\varepsilon \to 0\\(s,y,w) \to (t,x,v)}} \varphi^\varepsilon(s,y,w).
\end{equation*}
Recall that $\varphi^*$ is upper semi-continuous, $\varphi_*$ is lower semi-continuous and that from their definition, one has $\varphi_* \leq \varphi^*$. We have the following:
\begin{prop}
Let $\varphi^\varepsilon$ be a solution to \eqref{eq:mainHJeps}.
\begin{enumerate}\label{prop:semilimits}
\item[(i)] The upper semi-limit $\varphi^*$ is constant with respect to the velocity variable on $\mathbb{R}_+^* \times \mathbb{R}^n$.
\item[(ii)] The function $(t,x) \mapsto \varphi^*(t,x)$ is a viscosity sub-solution to \eqref{eq:varHJ} on $\mathbb{R}_+^* \times \mathbb{R}^n$.
\item[(iii)] The function $(t,x) \mapsto \min_{w \in V} \varphi_*(t,x,w)$ is a viscosity super-solution to \eqref{eq:varHJ} on $\mathbb{R}_+^* \times \mathbb{R}^n$.
\end{enumerate}
\end{prop}
We recall that for all $(t,x)$, the minimum $\min_{w \in V} \varphi_*(t,x,w)$ is attained since $V$ is bounded and $\varphi_*$ is lower semi-continuous.
We point out here that if $r=0$, that is, the case of \cite{bouin_kinetic_2012,caillerie_large_2017}, it is not necessary to prove that $\varphi^*$ is constant in the velocity variable. One can replace this by proving that $\max_{w \in V} \varphi^*$ is a sub-solution to \eqref{eq:varHJ}. The fact that $\varphi^*$ is constant in the velocity variable is needed to control the limit of $\rho^\varepsilon$.
\begin{proof}[{\bf Proof of \Cref{prop:semilimits}}]
We start with the proof of (i). Take $(t^0,x^0,v^0) \in \mathbb{R}^{*}_+ \times \mathbb{R}^n \times V$. Let $\psi$ be a test function such that $\varphi^* - \psi$ has a strict local maximum at $(t^0,x^0,v^0)$. Then there exists a sequence $\left( t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}\right)$ of maximum points of $\varphi^{\varepsilon}-\psi$ satisfying $(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})\to(t^0,x^0,v^0)$. From this we deduce that $\lim_{\varepsilon \to 0} \varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}) = \varphi^*(t^0,x^0,v^0)$. Recalling \eqref{eq:mainHJeps}, we have at $\left( t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}\right)$:
\begin{equation*}
\partial_t \psi + v^{\varepsilon} \cdot \nabla_x \psi +r = (1+r)\left(1-\int_V M(v') e^{\frac{\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v')}{\varepsilon}} dv'\right) +r\rho^{\varepsilon}.
\end{equation*}
From this, we deduce that
\begin{equation*}
\int_{V'} M(v') e^{\frac{\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}) -\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v')}{\varepsilon}} dv'
\end{equation*}
is uniformly bounded for any $V' \subset V$. By the Jensen inequality,
\begin{equation*}
\exp\left( \frac{1}{\varepsilon\left\vert V'\right\vert_M} \int_{V'} \left(\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}) -\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v') \right) M(v') dv' \right) \leq \frac{1}{\left\vert V' \right\vert_M}\int_{V'} M(v') e^{\frac{\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}) -\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v')}{\varepsilon}} dv',
\end{equation*}
where $\left\vert V' \right\vert_M:=\int_{V'}M(v)dv$. Thus,
\begin{equation*}
\limsup_{\varepsilon \to 0} \left( \int_{V'} \left(\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}) -\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v') \right) M(v') dv' \right) \leq 0.
\end{equation*}
We write
\begin{align*}
\int_{V'} \left(\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v') \right) M(v') dv' &= \int_{V'} \left[\left( \varphi^{\varepsilon}(v^{\varepsilon}) - \psi(v^{\varepsilon})\right) - \left( \varphi^{\varepsilon}(v') - \psi(v') \right) + \left( \psi(v^{\varepsilon}) - \psi(v')\right)\right] M(v') dv'\\
&= \int_{V'} \left[\left( \varphi^{\varepsilon}(v^{\varepsilon}) - \psi(v^{\varepsilon})\right) - \left( \varphi^{\varepsilon}(v') - \psi(v') \right) \right] M(v') dv'\\ &\quad + \int_{V'} \left( \psi(v^{\varepsilon}) - \psi(v')\right) M(v') dv'.
\end{align*}
We can thus use the Fatou Lemma, together with $- \limsup_{\varepsilon \to 0} \varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v') \geq - \varphi^*(t^0,x^0,v')$ to get
\begin{align*}
\left(\int_{V'} M(v')dv'\right) \varphi^*(v^0) - \int_{V'}\varphi^*(v')M(v') dv' & = \int_{V'} \left(\varphi^*(v^0) -\varphi^*(v') \right) M(v') dv' \\
&\leq
\int_{V'} \liminf_{\varepsilon \to 0} \left(\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v') \right) M(v') dv' \\
&\leq \liminf_{\varepsilon \to 0} \left( \int_{V'} \left(\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v') \right) M(v') dv' \right)\\
&\leq \limsup_{\varepsilon \to 0} \left( \int_{V'} \left(\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v') \right) M(v') dv' \right)\\
&\leq 0.
\end{align*}
Since the latter holds for any subset $V' \subset V$, we deduce that
\begin{equation*}
\varphi^*(t^0,x^0,v^0) \leq \inf_V \varphi^*(t^0,x^0,\cdot)
\end{equation*}
and thus $\varphi^*$ is constant in velocity.
We now continue with the proof of (ii). We have to prove that on $\lbrace\varphi^* >0\rbrace\cap(\mathbb{R}_+^* \times \mathbb{R}^n)$, the function $\varphi^*$ is a viscosity subsolution of \eqref{eq:varHJ}. To this aim, let $\psi \in C^2(\mathbb{R}_+^* \times \mathbb{R}^n)$ be a test function such that $\varphi^*-\psi$ has a local maximum in $(t^0,x^0)\in (\mathbb{R}_+^* \times \mathbb{R}^n) \cap \left\{\varphi^*>0\right\}$. We denote by $p^0(t^0,x^0) = \frac{\nabla_x \psi(t^0,x^0)}{1+r}$.
\medskip
{\bf \# First case : $p^0(t^0,x^0) \notin\mathrm{Sing}\,M$.}
\medskip
\noindent We define a corrector $\eta$ according to the following formula:
\begin{equation*}
\eta(v) = \ln \left( 1 + \mathcal{H}\left(p^0(t^0,x^0)\right) - v \cdot p^0(t^0,x^0) \right)
\end{equation*}
Let us define the perturbed test function $\psi^{\varepsilon}:=\psi +\varepsilon\eta$. We recall the fact that in this case $\int_V M(v') \exp(-\eta(v')) dv' = 1$. The function $\psi^\varepsilon$ converges uniformly to $\psi$ since $\eta$ is bounded on $V$. As a consequence, there exists a sequence $\left( t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}\right)$ of maximum points of $\varphi^{\varepsilon}-\psi^{\varepsilon}$ satisfying $(t^{\varepsilon},x^{\varepsilon})\to(t^0,x^0)$ and such that $\lim_{\varepsilon \to 0} \varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon) = \varphi^*(t^0,x^0)$. Recalling \eqref{eq:mainHJeps}, we have at $\left( t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}\right)$:
\begin{equation*}
\partial_t \psi^{\varepsilon} + v^{\varepsilon} \cdot \nabla_x \psi^{\varepsilon}+r = (1+r)\left(1-\int_V M(v') e^{\frac{\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v')}{\varepsilon}} dv'\right) +r\rho^{\varepsilon}.
\end{equation*}
Since $\left( t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}\right)$ is a maximum point, we may rearrange the r.h.s. of the latter so that the previous equation may be rewritten as follows
\begin{eqnarray*}
\partial_t \psi^{\varepsilon} + v^{\varepsilon} \cdot \nabla_x \psi^{\varepsilon} +r & \leq & (1+r)\left(1-\int_V M(v') \exp \left(\eta(v^\varepsilon)-\eta(v')\right) dv'\right)+r\rho^{\varepsilon},\\
& = & (1+r)\left(1- \left( \int_V M(v') \exp\left(-\eta(v')\right) dv'\right) \exp \left(\eta(v^\varepsilon)\right) \right)+r\rho^{\varepsilon},\\
& = & (1+r)\left(1- \exp \left(\eta(v^\varepsilon)\right) \right)+r\rho^{\varepsilon},\\
& = & - (1+r) \mathcal{H}\left(p^0(t^0,x^0)\right) + v^\varepsilon \cdot \nabla_x \psi(t^0,x^0) + r\rho^\varepsilon.
\end{eqnarray*}
Since $(t^0,x^0)\in\left\{ \varphi^*>0\right\}$ and $\lim_{\varepsilon \to 0} \varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon) = \varphi^*(t^0,x^0)$, we have that, eventually, $\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon) > \varphi^*(t^0,x^0)/2 >0$ for $\varepsilon$ sufficiently small. Since
\begin{equation*}
r\rho^\varepsilon \left( e^\frac{\varphi^{\varepsilon}}{\varepsilon} - 1 \right) = \left(1-\int_V M(v') e^{\frac{\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v')}{\varepsilon}} dv'\right)- \left( \partial_t \psi^{\varepsilon} + v^{\varepsilon} \cdot \nabla_x \psi^{\varepsilon} \right),
\end{equation*}
and the latter r.h.s. is uniformly bounded from above in $\varepsilon$, we deduce that $\lim_{\varepsilon \to 0}\rho^\varepsilon(t^\varepsilon,x^\varepsilon) = 0$. Taking the limit $\varepsilon\to 0$, we get
\begin{eqnarray*}
\partial_t \psi \left( t^{0},x^{0}\right) + (1+r)\mathcal{H}\left(\frac{\nabla_x \psi (t^0,x^0)}{1+r}\right)+r \leq 0.
\end{eqnarray*}
\medskip
{\bf \# Second case : $p^0(t^0,x^0) \in \mathrm{Sing}\,M$.}
\medskip
\noindent Let $v^*\in \mathrm{Arg} \, \mu(p^0(t^0,x^0))$. The function $(t,x)\mapsto \varphi^\varepsilon (t,x,v^*)-\psi(t,x)$ has a local maximum at a point $(t^\varepsilon,x^\varepsilon)$ satisfying $(t^\varepsilon,x^\varepsilon)\to(t^0,x^0)$ as $\varepsilon\to0$. We then have:
\begin{eqnarray*}
\partial_t \psi (t^\varepsilon,x^\varepsilon)+ v^* \cdot \nabla_x \psi(t^\varepsilon,x^\varepsilon) + r & = & \partial_t \varphi^\varepsilon (t^\varepsilon,x^\varepsilon,v^*)+v^* \cdot \nabla_x \varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^*) + r \\
& = & (1+r)\int_{V} M(v') \left( 1-e^{\frac{\varphi^\varepsilon(v^*) - \varphi^\varepsilon(v')}{\varepsilon}} \right) dv' + r \rho^{\varepsilon} \\
& \leq & (1+r) + r \rho^{\varepsilon}.
\end{eqnarray*}
Since $(t^0,x^0)\in\left\{ \varphi^*>0\right\}$, we have $\rho^{\varepsilon}(t^{\varepsilon},x^{\varepsilon})\to 0$. As a consequence, taking the limit $\varepsilon\to0$, we get
\begin{equation*}
\partial_t \psi(t^0,x^0) +\mu(\nabla_x \psi(t^0,x^0)) \leq 1.
\end{equation*}
We finally turn to the proof of (iii). That is, the fact that on $\mathbb{R}_+^* \times \mathbb{R}^n$, the function $\min_{w \in V} \varphi_*(\cdot,w)$ is a viscosity supersolution of \eqref{eq:varHJ}.
Let $\psi \in C^1(\mathbb{R}_+^* \times \mathbb{R}^n)$ be a test function such that $\min_{w \in V} \varphi_* - \psi$ has a local minimum in $(t^0,x^0)\in \mathbb{R}_+^* \times \mathbb{R}^n$. We denote by $p^0(t^0,x^0) = \frac{\nabla_x \psi(t^0,x^0)}{1+r}$. We define the truncated corrector $\eta_\delta$,
\begin{eqnarray*}
\eta(v) & = & \ln \left( 1 + \mathcal{H}\left(p^0(t^0,x^0)\right) - v \cdot p^0(t^0,x^0) \right),\\
\eta_\delta(v) & = & \max\left( \eta(v) , -1/\delta \right).
\end{eqnarray*}
Let us define the perturbed test function $\psi^{\varepsilon}:=\psi +\varepsilon\eta_\delta$.
For any $\delta>0$, the function $\psi^\varepsilon$ converges uniformly to $\psi$ as $\varepsilon\to 0$ since $\eta_\delta$ is bounded on $V$. Since $\varphi_*(t^0,x^0,\cdot)$ attains its minimum at, say, $v^0$, we have, for all $v \in V$ and locally in the $(t,x)$ variables,
\begin{equation*}
\varphi_*(t^0,x^0,v^0) - \psi(t^0,x^0) =\min_{w \in V} \varphi_*(t^0,x^0,w) - \psi(t^0,x^0) \leq \min_{w \in V} \varphi_*(t,x,w) - \psi(t,x) \leq \varphi_*(t,x,v) - \psi(t,x),
\end{equation*}
and thus $(t^0,x^0,v^0)$ is a local minimum of $\varphi_* - \psi$, strict in the $(t,x)$ variables. By the definition of the lower semi-limit, there exists a sequence $\left( t_\delta^{\varepsilon},x_\delta^{\varepsilon},v_\delta^{\varepsilon}\right)$ of minimum points of $\varphi^{\varepsilon}-\psi^{\varepsilon}$ satisfying $(t_\delta^{\varepsilon},x_\delta^{\varepsilon})\to(t^0,x^0)$.
We obtain, after \eqref{eq:mainHJeps}, at the point $\left( t_\delta^{\varepsilon},x_\delta^{\varepsilon},v_\delta^{\varepsilon}\right)$ ,
\begin{eqnarray*}
\partial_t \psi^{\varepsilon} + v_\delta^{\varepsilon} \cdot \nabla_x \psi^{\varepsilon} +r & \geq & (1+r)\left(1-\int_V M(v') \exp \left(\eta_\delta(v_\delta^\varepsilon)-\eta_\delta(v')\right) dv'\right),\\
& = & (1+r)\left(1- \left( \int_V M(v') \exp\left(-\eta_\delta(v')\right) dv'\right) \exp \left(\eta_\delta(v_\delta^\varepsilon)\right) \right).
\end{eqnarray*}
Since the sequence $v_\delta^\varepsilon$ lies in a compact set, taking the limit $\varepsilon\to 0$ (up to extraction), we obtain $v_\delta^0$ such that
\begin{eqnarray*}
\partial_t \psi + v_\delta^{0} \cdot \nabla_x \psi +r & \geq & (1+r)\left(1- \left( \int_V M(v') e^{-\eta_\delta(v')} dv'\right) e^{\eta_\delta(v_\delta^0)} \right).
\end{eqnarray*}
By construction, $\eta_\delta \geq \eta$. As a consequence, $\int_V M(v') e^{-\eta_\delta(v')} dv' \leq \int_V M(v') e^{-\eta(v')} dv' \leq 1$. Thus,
\begin{eqnarray*}
\partial_t \psi + v_\delta^{0} \cdot \nabla_x \psi +r & \geq & (1+r)\left(1 - e^{\eta_\delta(v_\delta^0)} \right).
\end{eqnarray*}
We now pass to the limit $\delta \to 0$. By compactness of $V$, one can extract a converging subsequence from $(v_\delta^0)_\delta$, we denote by $v^*$ the limit.
\medskip
{\bf \# First case : $p^0(t^0,x^0) \notin\mathrm{Sing}\,M$.}
\medskip
\noindent In this case, since $\eta$ is bounded, $\eta_\delta = \eta$ for $\delta$ sufficiently small. Thus, passing to the limit $\delta \to 0$, one gets
\begin{eqnarray*}
\partial_t \psi + v^* \cdot \nabla_x \psi +r & \geq &(1+r)\left(1- \exp \left(\eta(v^*)\right) \right),\\
& = & - (1+r) \mathcal{H}\left(p^0(t^0,x^0)\right) + v^* \cdot \nabla_x \psi (t^0,x^0),
\end{eqnarray*}
from which we deduce
\begin{eqnarray*}
\partial_t \psi \left( t^{0},x^{0}\right) + (1+r)\mathcal{H}\left(\frac{\nabla_x \psi (t^0,x^0)}{1+r}\right)+r \geq 0.
\end{eqnarray*}
\medskip
{\bf \# Second case : $p^0(t^0,x^0) \in \mathrm{Sing}\,M$.}
\medskip
\noindent In this case, the corrector $\eta_\delta$ is
\begin{equation*}
\eta_\delta(v) = \max\left( \mathrm{ln}\left(\mu\left(\nabla_x \psi (t^0,x^0)\right)-v \cdot \nabla_x \psi (t^0,x^0)\right), -1/\delta \right).
\end{equation*}
If $v^* \notin \mathrm{Arg} \, \mu(p^0(t^0,x^0))$, since $\eta$ is bounded on all compacts of $V\setminus \mathrm{Arg} \, \mu(p^0(t^0,x^0))$, $\eta_\delta(v^0_{\delta})=\eta(v^0_{\delta})$ for $\delta$ sufficiently small and we recover the first case.
If $v^*\in \mathrm{Arg} \, \mu(p^0(t^0,x^0))$, then take $\delta' > 0$, one has when $\delta < \delta'$ is sufficiently small,
\begin{equation*}
- \frac{1}{\delta'} = \eta_{\delta'}(v_\delta^0) \geq \eta_{\delta}(v_\delta^0),
\end{equation*}
and thus $\lim_{\delta \to 0} \eta_{\delta}(v_\delta^0) = - \infty$. From that we conclude
\begin{eqnarray*}
\partial_t \psi + \mu\left(\nabla_x \psi (t^0,x^0)\right) \geq 1.
\end{eqnarray*}
\end{proof}
We now conclude with the proof of the convergence result. For this, we need to take initial conditions into account: obviously, one cannot get any uniqueness result for the Hamilton-Jacobi equation \eqref{eq:varHJ} without imposing an initial condition. We now check the initial condition of \eqref{eq:varHJ} in the viscosity sense.
\begin{prop}\label{prop:inicond}
If one assumes that $\varphi_0^\varepsilon = \varphi_0$, the sequence $\varphi^\varepsilon$ converges uniformly on compact subsets of $\mathbb{R}_+^* \times \mathbb{R}^n$ to $\varphi^0$, the unique viscosity solution of
\begin{equation*}
\begin{cases}
\min\left \lbrace \partial_t \varphi^0 + (1+r) \mathcal{H} \left(\frac{\nabla_x \varphi^0}{1+r} \right) + r , \varphi^0 \right\rbrace = 0, & \qquad (t,x) \in \mathbb{R}_+^* \times \mathbb{R}^n, \medskip \\
\varphi^{0}(0,x) = \underset{v \in V}{\min} \, \varphi_0(x,v) ,& \qquad x \in \mathbb{R}^n.
\end{cases}
\end{equation*}
\end{prop}
\begin{proof}[{\bf Proof of \Cref{prop:inicond}}]
We extend the definition of $\varphi^*$ to $\left\{ t = 0 \right\} \times \mathbb{R}^n$ by the formula
\begin{equation*}
\displaystyle \varphi^*(0,x) = \limsup_{\substack{t \searrow 0^+\\x' \to x}} \varphi^*(t,x').
\end{equation*}
One has to prove the following
\begin{equation}\label{eq:inicondsub}
\min\left( \min\left \lbrace \partial_t \varphi^* + (1+r) \mathcal{H} \left(\frac{\nabla_x \varphi^*}{1+r} \right) + r , \varphi^* \right\rbrace, \varphi^* - \min_{v \in V} \varphi_0(\cdot,v) \right) \leq 0,
\end{equation}
on $\left\{ t = 0 \right\} \times \mathbb{R}^n$ in the viscosity sense.
Let $\psi \in \mathcal{C}^1 \left( \mathbb{R}^+ \times \mathbb{R} \right)$ be a test function such that $\varphi^*-\psi$ has a strict local maximum at $(t^0 = 0,x^0)$. We now prove that either
\begin{equation*}
\varphi^*(0,x^0) \leq \min_{v \in V} \varphi_0(x^0,v),
\end{equation*}
or
\begin{equation*}
\partial_t \psi + (1+r) \mathcal{H} \left(\frac{\nabla_x \psi}{1+r} \right) + r \leq 0 \qquad \textrm{ when } \qquad \varphi^*(0,x^0) > 0.
\end{equation*}
Suppose then that
\begin{equation*}
\label{u0x0}
\varphi^*(0,x^0) > \min_{v \in V} \varphi_0(x^0,v).
\end{equation*}
We shall now prove that
\begin{equation*}
\partial_t \psi + (1+r) \mathcal{H} \left(\frac{\nabla_x \psi}{1+r} \right) + r \leq 0,
\end{equation*}
since in that case $\varphi^*(0,x^0) > 0$. We now go through the same steps as for the proof of \Cref{prop:semilimits}, but with slight changes due to the present situation. We keep the same notations.
\medskip
{\bf \# First case : $p^0(t^0,x^0) \notin\mathrm{Sing}\,M$.}
\medskip
The function $\psi^\varepsilon$ converges uniformly to $\psi$ since $\eta$ is bounded on $V$. Adding this fact to the definition of $\varphi^*(0,x^0)$, we get the existence of a sequence $\left( t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}\right)$ of maximum points of $\varphi^{\varepsilon}-\psi^{\varepsilon}$ satisfying $t^{\varepsilon} > 0$, $(t^{\varepsilon},x^{\varepsilon})\to(0,x^0)$ and such that $\lim_{\varepsilon \to 0} \varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon) = \varphi^*(0,x^0)$. The rest of the proof is similar.
\medskip
{\bf \# Second case : $p^0(t^0,x^0) \in \mathrm{Sing}\,M$.}
\medskip
\noindent Let $v^*\in \mathrm{Arg} \, \mu(p^0(t^0,x^0))$. As for the previous case, due to the definition of $\varphi^*$, the function $(t,x)\mapsto \varphi^\varepsilon (t,x,v^*)-\psi(t,x)$ has a local maximum at a point $(t^\varepsilon,x^\varepsilon)$ satisfying $t^\varepsilon > 0$ and $(t^\varepsilon,x^\varepsilon)\to(t^0,x^0)$ as $\varepsilon\to0$. The conclusion is the same.
\bigskip
\bigskip
We shall now prove that the initial condition for $\min_{w}\varphi_*$ is given by
\begin{equation}\label{eq:inicondsuper}
\max\left( \min\left \lbrace \partial_t \left( \min_{w}\varphi_*\right) + (1+r) \mathcal{H} \left(\frac{\nabla_x \left( \min_{w}\varphi_*\right)}{1+r} \right) + r , \min_{w}\varphi_* \right\rbrace, \min_{w}\varphi_* - \min_{v \in V} \varphi_0 \right) \geq 0,
\end{equation}
on $\left\{ t = 0 \right\} \times \mathbb{R}^n$ in the viscosity sense.
Let us prove \eqref{eq:inicondsuper}. Let $\psi \in \mathcal{C}^1 \left( \mathbb{R}^+ \times \mathbb{R} \right)$ be a test function such that $\min_{w \in V}\varphi_* - \psi$ has a strict local minimum at $(t^0 = 0,x^0)$. We now prove that either
\begin{equation*}
\min_{w \in V}\varphi_*(0,x^0,w) \geq \min_{v \in V} \varphi_0(x^0,v),
\end{equation*}
or
\begin{equation*}
\partial_t \psi + (1+r) \mathcal{H} \left(\frac{\nabla_x \psi}{1+r} \right) + r \geq 0 \qquad \textrm{ and } \qquad \min_{w \in V}\varphi_*(0,x^0,w) \geq 0.
\end{equation*}
Suppose that $\min_{w \in V}\varphi_*(0,x^0,w) < \min_{v \in V} \varphi_0(x^0,v)$. The argument now starts similarly as in the proof above. Let us define the perturbed test function $\psi^{\varepsilon}:=\psi +\varepsilon\eta_\delta$. For any $\delta>0$, the function $\psi^\varepsilon$ converges uniformly to $\psi$ since $\eta_\delta$ is bounded on $V$. Since $\varphi_*(0,x^0,\cdot)$ attains its minimum at, say, $v^0$, we have, for all $v \in V$ and locally in the $(t,x)$ variables,
\begin{equation*}
\varphi_*(0,x^0,v^0) - \psi(0,x^0) = \min_{w \in V} \varphi_*(0,x^0,w) - \psi(0,x^0) \leq \min_{w \in V} \varphi_*(t,x,w) - \psi(t,x) \leq \varphi_*(t,x,v) - \psi(t,x),
\end{equation*}
and thus $(0,x^0,v^0)$ is a local minimum of $\varphi_* - \psi$, strict in the $(t,x)$ variables. By the definition of the lower semi-limit, there exists a sequence $\left( t_\delta^{\varepsilon},x_\delta^{\varepsilon},v_\delta^{\varepsilon}\right)$ of minimum points of $\varphi^{\varepsilon}-\psi^{\varepsilon}$ satisfying $(t_\delta^{\varepsilon},x_\delta^{\varepsilon})\to(0,x^0)$. We first claim that there exists a subsequence $(t_{\varepsilon_{k}},x_{\varepsilon_{k}},v_{\varepsilon_{k}})_k$ of the above sequence, with $\varepsilon_k\to 0$ as $k\to \infty$, such that $t_{\varepsilon_{k}} > 0$, for all $k$.
Suppose that this is not true. Then, take a sequence $(x_\delta^{\varepsilon_{k'}},v_\delta^{\varepsilon_{k'}})_{k'}$ such that $(\varepsilon_{k'},x_\delta^{\varepsilon_{k'}})\to (0,x^0)$ and
that $\varphi^{\varepsilon_{k'}}-\psi^{\varepsilon_{k'}}$ has a local minimum at $\left(0,x_\delta^{\varepsilon_{k'}},v_\delta^{\varepsilon_{k'}}\right)$. It follows that, for all $(t,x,v)$ in some neighborhood of $(0,x_\delta^{\varepsilon_{k'}},v_\delta^{\varepsilon_{k'}})$, we have
\begin{align*}
\min_{v \in V} \varphi_0(x_\delta^{\varepsilon_{k'}},v) - \psi^{\varepsilon_{k'}}\left( 0,x_\delta^{\varepsilon_{k'}},v_\delta^{\varepsilon_{k'}}\right) & \leq \varphi_0(x_\delta^{\varepsilon_{k'}},v_\delta^{\varepsilon_{k'}}) - \psi^{\varepsilon_{k'}}\left( 0,x_\delta^{\varepsilon_{k'}},v_\delta^{\varepsilon_{k'}}\right) \\ &\leq \varphi^{\varepsilon_{k'}} \left( 0,x_\delta^{\varepsilon_{k'}},v_\delta^{\varepsilon_{k'}}\right) - \psi^{\varepsilon_{k'}}\left( 0,x_\delta^{\varepsilon_{k'}},v_\delta^{\varepsilon_{k'}}\right)\\ &\leq \varphi^{\varepsilon_{k'}} \left( t , x , v \right) - \psi^{\varepsilon_{k'}}\left( t , x ,v \right).
\end{align*}
Taking the $\liminf$ as $k'\to \infty$ and $(t,x,v)\to (0,x^0,v^0)$ on both sides of the inequality, one obtains \begin{equation*}
\min_{v \in V} \varphi_0(x^0,v) - \psi \left( 0 , x^0 \right) \leq \min_{w \in V}\varphi_*(0,x^0) - \psi \left( 0 , x^0 \right).
\end{equation*}
However, this is in contradiction with $\min_{w \in V}\varphi_*(0,x^0,w) < \min_{v \in V} \varphi_0(x^0,v)$. Having in hand a subsequence of times $t_{\varepsilon_{k}}>0$, one can reproduce the argument of the proof above along the subsequence $(t_{\varepsilon_{k}},x_{\varepsilon_{k}},v_{\varepsilon_{k}})$.
By the strong uniqueness principle satisfied by \eqref{eq:varHJ} (that is, a comparison principle for discontinuous sub- and super-solutions), we deduce that for all $(t,x,v) \in \mathbb{R}_+^* \times \mathbb{R}^n \times V$,
\begin{equation*}
\min_{w \in V} \varphi_*(t,x,w) \leq \varphi_*(t,x,v) \leq \varphi^*(t,x,v) = \varphi^*(t,x) \leq \min_{w \in V} \varphi_*(t,x,w).
\end{equation*}
We deduce that necessarily all these inequalities are equalities, and thus that $\varphi^\varepsilon$ converges locally uniformly towards $\varphi^0$, independent of $v$, on any compact subset of $\mathbb{R}_+^* \times \mathbb{R}^n$.
\end{proof}
\subsection{Convergence of the macroscopic density $\rho^\varepsilon$}
We prove a convergence result for $\rho^\varepsilon$ in the region $\lbrace \varphi^0 = 0 \rbrace$. Namely
\begin{proposition}\label{prop:zones}
Let $\varphi^\varepsilon$ be the solution of \eqref{eq:mainHJeps}. Then, uniformly on compact subsets of $\text{Int} \left\lbrace \varphi^0 = 0\right\rbrace $,
\begin{equation*}
\lim_{\varepsilon \to 0} \rho^\varepsilon = 1, \qquad \lim_{\varepsilon \to 0} f^\varepsilon \left( \cdot , v \right) = M(v).
\end{equation*}
\end{proposition}
\begin{proof}[\bf Proof of Proposition \ref{prop:zones}]
We develop arguments similar to those in \cite{evans_pde_1989}. Let $K$ be a compact subset of $\lbrace \varphi^0=0\rbrace$. Note that it suffices to prove the result when $K$ is a cylinder. Let $(t^0,x^0) \in \text{Int}\left( K \right)$ and define the test function
\begin{equation*}
\forall (t,x) \in K, \qquad \psi^0(t,x) = \vert x - x^0 \vert^2 + \left( t - t^0 \right)^2.
\end{equation*}
Since $\varphi^0 = 0$ on $K$, the function $\varphi^0 - \psi^0$ admits a strict maximum at $(t^0,x^0)$. The locally uniform convergence of $\varphi^\varepsilon$ towards $\varphi^0$ gives a sequence $(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})$ of maximum points of $\varphi^\varepsilon - \psi^0$ with $(t^{\varepsilon},x^{\varepsilon}) \to (t^0,x^0)$ and a bounded sequence $v^{\varepsilon}$, such that at the point $(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})$ one has:
\begin{equation}\label{nimp}
\partial_t \psi^0 + v^{\varepsilon} \cdot \nabla_x \psi^0 + r \leq r \rho^{\varepsilon}.
\end{equation}
As a consequence, one has, since $r>0$,
\begin{equation}\label{convepsrho}
\rho^\varepsilon(t^{\varepsilon},x^{\varepsilon}) \geq 1 + o(1), \qquad \textrm{as } {\varepsilon \to 0},
\end{equation}
and then $\lim_{\varepsilon \to 0} \rho^\varepsilon(t^{\varepsilon},x^{\varepsilon}) = 1$ if one recalls $\rho^\varepsilon \leq 1$ (which, again, is a consequence of the maximum principle).
However, we need an extra argument to get $\lim_{\varepsilon \to 0} \rho^\varepsilon(t^0,x^0) = 1$. Since $(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})$ maximizes $\varphi^\varepsilon - \psi^0$, we deduce that for all $v\in V$, we have
\begin{equation*}
\varphi^\varepsilon\left(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}\right) - \psi^0(t^\varepsilon,x^\varepsilon) \geq \varphi^\varepsilon\left(t^0,x^0,v\right) - \psi^0(t^0,x^0).
\end{equation*}
Since $\psi^0(t^\varepsilon,x^\varepsilon) \geq 0$, $ \psi^0(t^0,x^0) = 0$, we find
\begin{equation}\label{eq:subf}
f^\varepsilon(t^0,x^0,v) = M(v) e^{- \frac{\varphi^\varepsilon(t^0,x^0,v)}{\varepsilon}} \geq M(v) e^{- \frac{\varphi^\varepsilon(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})}{\varepsilon}}.
\end{equation}
We shall now prove that $\lim_{\varepsilon \to 0} \varepsilon^{-1} \varphi^\varepsilon(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}) = 0$. Let us rewrite \eqref{eq:mainHJeps} at the point $(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})$ in the form
\begin{equation*}
r \rho^\varepsilon(t^\varepsilon,x^\varepsilon)\left( e^{\frac{\varphi^\varepsilon(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})}{\varepsilon}} - 1\right) = \left(1-\int_V M(v') e^{\frac{\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}) -\varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v')}{\varepsilon}} dv'\right) - \left( \partial_t \psi^0 + v \cdot \nabla_x \psi^0 \right)(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})
\end{equation*}
Using the maximum principle to bound the right-hand side, we finally deduce that
\begin{equation*}
0 \leq r \rho^\varepsilon(t^\varepsilon,x^\varepsilon)\left( e^{\frac{\varphi^\varepsilon(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon})}{\varepsilon}} - 1\right) \leq o_{\varepsilon \to 0}(1)
\end{equation*}
and thus $\lim_{\varepsilon \to 0} \left( \varepsilon^{-1} \varphi^\varepsilon(t^{\varepsilon},x^{\varepsilon},v^{\varepsilon}) \right) = 0$. Recalling \eqref{eq:subf}, this implies $\lim_{\varepsilon \to 0} f^\varepsilon(t,x,v) = M(v)$ locally uniformly on $K \times V$.
\end{proof}
\subsection{Speed of expansion}
To be self-contained, we recall here how to study the propagation of the front after deriving the limit variational equation, in the case $r>0$. From Evans and Souganidis \cite{evans_pde_1989}, we are able to identify the solution of the variational Hamilton-Jacobi equation \eqref{eq:varHJ} using the Lagrangian duality. We emphasize that, in this context, one may assume that our initial condition is well-prepared, \em i.e. \em $\varphi(0,x,v)=\varphi_0(x)$. We recall the equation:
\begin{equation*}
\begin{cases}
\min\left \lbrace \partial_t \varphi + (1+r)\mathcal{H} \left(\frac{\nabla_x \varphi}{1+r} \right) + r, \varphi \right\rbrace = 0, \qquad \forall (t,x) \in \mathbb{R}_+^* \times \mathbb{R}^n, \medskip\\
\varphi(0,x) = \varphi_0(x).
\end{cases}
\end{equation*}
We recall from \cite{bouin_kinetic_2012,caillerie_large_2017} that the Hamiltonian $\mathcal{H}$ is convex. For any $e \in \mathbb{S}^{n-1}$, we define the minimal speed in that direction by the formula
\begin{equation}\label{eq:linspeed}
c^*(e) = \inf_{\lambda > 0} c(\lambda,e), \quad c(\lambda,e) = \frac{1}{\lambda} \left( (1+r)\mathcal{H} \left(\frac{\lambda e}{1+r} \right) + r\right).
\end{equation}
We first discuss the speed of propagation of a front-like initial data.
\begin{proposition}\label{prop:nullsetfront}
Assume that
\begin{equation*}
\varphi_0(x) := \left\lbrace\begin{array}{lcl}
0& x \in e_0^\bot\\
+ \infty & \text{ else}\\
\end{array}\right..
\end{equation*}
Then the nullset of $\varphi$ propagates at speed $c^*(e_0)$:
\begin{equation*}
\forall t \geq 0, \qquad \left\{ \varphi(t, \cdot ) = 0 \right\} = e_0^\bot + c^*(e_0) t \, e_0.
\end{equation*}
\end{proposition}
\begin{proof}[\bf Proof of Proposition \ref{prop:nullsetfront}]
We first notice that since the initial data is invariant under any translation in $e_0^\bot$, and the equation \eqref{eq:varHJ} is invariant under translations, the solution $\varphi$ depends only on $x\cdot e_0$. That is, $\varphi(t,x) = \varphi(t,(x \cdot e_0)e_0) = \overline{\varphi}(t,x \cdot e_0)$.
The function $\overline{\varphi}$ satisfies
\begin{equation*}
\begin{cases}
\min\left \lbrace \partial_t \overline{\varphi} + (1+r)\mathcal{H} \left(\frac{\partial_\xi \overline{\varphi}}{1+r} e_0 \right) + r, \overline{\varphi} \right\rbrace = 0, \qquad \forall (t,\xi) \in \mathbb{R}_+^* \times \mathbb{R}, \medskip\\
\overline{\varphi}(0,\xi) = \overline{\varphi}_0(\xi).
\end{cases}
\end{equation*}
where
\begin{equation*}
\overline{\varphi}_0(\xi) := \left\lbrace\begin{array}{lcl}
0& \xi = 0,\\
+ \infty & \text{ else}\\
\end{array}\right..
\end{equation*}
The Lagrangian associated to the latter Hamilton-Jacobi equation is by definition
\begin{align*}
\mathcal{L}(p) &= \sup_{q \in \mathbb{R}} \left( p q - (1+r)\mathcal{H}\left(\frac{q}{1+r} e_0\right) - r \right),\\
&= \sup_{q \in \mathbb{R}} \left( p q - (1+r)\mathcal{H}\left(\frac{\vert q \vert}{1+r} e_0\right) - r \right), \\
&= \sup_{q \in \mathbb{R}} \left( p q - \vert q \vert c(\vert q \vert,e_0) \right).
\end{align*}
To solve the variational Hamilton-Jacobi equation, let us define
\begin{equation*}
J(\xi,t) = \inf_{\gamma} \left \lbrace \int_{0}^{t} \mathcal{L}( \dot \gamma(s) ) \, ds \, \big\vert \, \gamma(0) = 0, \; \gamma(t) = \xi \right \rbrace
\end{equation*}
the minimal action associated with the Lagrangian. Thanks to the so-called Freidlin condition (see \cite{evans_pde_1989,freidlin_functional_1985}), we deduce that the solution of \eqref{eq:varHJ} is
\begin{equation*}
\overline{\varphi}(t,\xi) = \max \left( J(\xi,t) , 0 \right) .
\end{equation*}
The Hopf-Lax formula gives $J(\xi,t) = t \mathcal{L}\left( t^{-1}\xi\right)$
thanks to the assumption on the initial condition. Hence,
\begin{align*}
\xi \in \left\{ \overline{\varphi}(t, \cdot ) = 0 \right\} \Longleftrightarrow \mathcal{L}\left(t^{-1}\xi\right) \leq 0 \quad \Longleftrightarrow &\quad \sup_{q \in \mathbb{R}} \left( q \xi - \vert q \vert c(\vert q \vert, e_0) t \right) \leq 0, \\
\quad \Longleftrightarrow &\quad \forall q \in \mathbb{R}, \quad q \xi - \vert q \vert c(\vert q \vert,e_0) t \leq 0,\\
\quad \Longleftrightarrow &\quad \vert \xi \vert \leq c^*(e_0) t.
\end{align*}
We deduce the result for $\varphi$ by changing the variables back.
\end{proof}
For a compactly supported initial data, the issue of the speed of propagation is more involved in general, since different directions may have different speeds of propagation. Namely, the following Freidlin-G\"artner formula holds:
\begin{proposition}\label{prop:nullsetfreidlin_functional_1985}
Assume that
\begin{equation*}
\varphi_0(x) := \left\lbrace\begin{array}{lcl}
0& x=0\\
+ \infty & \text{ else}\\
\end{array}\right..
\end{equation*}
Define
\begin{equation*}
w^*(e_0) = \min_{\substack{e \in \mathbb{S}^{n-1}\\e_0 \cdot e > 0}}\left( \frac{c^*(e)}{e_0 \cdot e}\right).
\end{equation*}
Then the nullset of $\varphi$ propagates at speed $w^*(e_0)$ in the direction $e_0$:
\begin{equation*}
\forall t \geq 0, \qquad \left\{ x \in \mathbb{R}, \; \varphi(t,x \, e_0) = 0 \right\} = \left\{ x \in \mathbb{R}, \; \vert x \vert \leq w^*(e_0) t \right\}.
\end{equation*}
\end{proposition}
\begin{proof}[\bf Proof of Proposition \ref{prop:nullsetfreidlin_functional_1985}]
The Lagrangian is by definition
\begin{align*}
\mathcal{L}(p) &= \sup_{q \in \mathbb{R}^n} \left( p \cdot q - (1+r)\mathcal{H}\left(\frac{q}{1+r}\right) - r \right), \\
&= \sup_{e \in \mathbb{S}^{n-1}} \sup_{\lambda \in \mathbb{R}^+} \left( \lambda p \cdot e - \left[ (1+r)\mathcal{H}\left(\frac{\lambda e}{1+r}\right) + r \right] \right),\\
&= \sup_{e \in \mathbb{S}^{n-1}} \sup_{\lambda \in \mathbb{R}^+} \left( \lambda \left[ p \cdot e - c(\lambda,e) \right]\right).
\end{align*}
To solve the variational Hamilton-Jacobi equation, let us define
\begin{equation*}
J(x,t) = \inf_{\gamma} \left \lbrace \int_{0}^{t} \mathcal{L}( \dot \gamma(s) ) \, ds + \varphi_0(\gamma(0)) \, \big\vert \, \gamma(t) = x \right \rbrace
\end{equation*}
the minimal action associated with the Lagrangian. Thanks to the so-called Freidlin condition (see \cite{evans_pde_1989,freidlin_functional_1985}), we deduce that the solution of \eqref{eq:varHJ} is
\begin{equation*}
\varphi(x,t) = \max \left( J(x,t) , 0 \right) .
\end{equation*}
The Lax formula gives
\begin{equation*}
J(x,t) = \min_{y \in \mathbb{R}^n} \left\{ t \mathcal{L}\left( \frac{x-y}{t} \right) + \varphi_0(y) \right\} = t \mathcal{L}\left( \frac{x}{t} \right)
\end{equation*}
thanks to the assumption on the initial condition. Hence,
\begin{align*}
\varphi(t, x e_0 ) = 0 \Longleftrightarrow \mathcal{L}\left(\frac{x}{t} e_0 \right) \leq 0 \quad \Longleftrightarrow &\quad \sup_{e \in \mathbb{S}^{n-1}} \sup_{\lambda \in \mathbb{R}^+} \left( \lambda \left[ x (e_0 \cdot e) - c(\lambda,e) t \right]\right) \leq 0, \\
\quad \Longleftrightarrow &\quad \forall \lambda \in \mathbb{R}^+, \forall e \in \mathbb{S}^{n-1}, \quad \lambda \left[ x (e_0 \cdot e) - c(\lambda,e) t \right] \leq 0,\\
\quad \Longleftrightarrow &\quad \forall e \in \mathbb{S}^{n-1}, \quad x (e_0 \cdot e) \leq c^*(e) t,\\
\quad \Longleftrightarrow &\quad \vert x\vert \leq \min_{\substack{e \in \mathbb{S}^{n-1}\\e_0 \cdot e > 0}}\left( \frac{c^*(e)}{e_0 \cdot e}\right) t = w^*(e_0) t.
\end{align*}
\end{proof}
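Although elementary, the minimization over directions in the
Freidlin-G\"artner formula is convenient to evaluate numerically. The
following Python sketch (ours, for illustration; the routine
\texttt{cstar} is a placeholder for a user-supplied evaluation of
$e \mapsto c^*(e)$) discretizes the admissible half-circle of directions
in dimension $n=2$.
\begin{verbatim}
import numpy as np

# Sketch (ours): evaluate w*(e0) = min over e with e0.e > 0 of
# c*(e) / (e0 . e), in dimension n = 2. `cstar` is a placeholder.
def w_star(e0, cstar, n_angles=2000):
    thetas = np.linspace(-np.pi/2 + 1e-3, np.pi/2 - 1e-3, n_angles)
    best = np.inf
    for th in thetas:
        c, s = np.cos(th), np.sin(th)
        e = np.array([c*e0[0] - s*e0[1], s*e0[0] + c*e0[1]])  # e0.e = cos(th)
        best = min(best, cstar(e) / np.cos(th))
    return best

# For an isotropic speed c*(e) = c0, the formula collapses to w* = c0:
print(w_star(np.array([1.0, 0.0]), lambda e: 2.0))  # ~ 2.0
\end{verbatim}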
\section{Existence of travelling waves and spreading result}\label{sec:TW}
In this section, we explain how to construct travelling wave solutions to \eqref{eq:main}. We follow closely the construction in \cite{bouin_propagation_2015}. As is classical in this type of Fisher-KPP problem, the speeds of propagation are given by studying the linearized problem at infinity. As we will see later on, the main difference that has motivated this paper is the possible singularity of $c(\lambda,e)$ at $\lambda^*(e)$.
\subsection{Proof of Theorem \ref{thm:existence-tw}: Travelling wave solutions}
Given a direction $e \in \mathbb{S}^{n-1}$, looking for exponential solutions to the linearized problem of the form $e^{-\lambda\left(x\cdot e - c(\lambda,e) t\right)} F_{\lambda,e}(v)$ for any positive $\lambda$ is exactly looking for solutions to
\begin{equation*}
\left[ 1 + \lambda ( c(\lambda,e) - v \cdot e) \right] F_{\lambda,e}(v) = (1+r) M(v) \int_V F_{\lambda,e}(v') dv', \qquad v \in V.
\end{equation*}
In view of earlier computations, it boils down to setting $c(\lambda,e)$ as in \eqref{eq:linspeed} and $F_{\lambda,e} := \tilde Q_{\frac{\lambda e}{1+r}}$ as in \eqref{eq:profile}.
Recall that $\frac{\lambda e}{1+r} \in \text{Sing}(M)$ if and only if $l(e)\leq \frac{\lambda}{1+r}$, that is $\lambda \geq \tilde\lambda(e) := (1+r) l(e)$. Thus, for $\lambda \leq \tilde\lambda(e)$, the function $c(\lambda,e)$ is convex and regular, and the profile is explicitly given by
\begin{equation*}
F_{\lambda,e}\left(v\right)=\frac{(1+r) M\left(v\right)}{1+ \lambda ( c(\lambda,e) - v\cdot e)} > 0.
\end{equation*}
For $\lambda \geq \tilde \lambda(e)$, that is to say $\frac{\lambda e}{1+r} \in \text{Sing}(M)$ one has $c(\lambda,e) = \overline{v}(e) - \frac{1}{\lambda}$ which is concave and increasing. As such, the infimum of $\lambda \mapsto c(\lambda,e)$ is attained for a $\lambda \leq \tilde\lambda(e)$, which we denote $\lambda^*(e)$. As a consequence, the minimal speed $c^*(e)$ is always associated to an integrable eigenvector, since if $\lambda^*(e) = \tilde \lambda(e)$, one has
\begin{equation*}
F_{\tilde\lambda(e),e}\left(v\right) = \frac{(1+r) M\left(v\right)}{\tilde\lambda(e) \left( \bar v(e) - v\cdot e \right)},
\end{equation*}
with $\int_V F_{\tilde\lambda(e),e}\left(v\right) dv = 1$ thanks to the definition of $\tilde\lambda(e)$.
Given a direction $e \in \mathbb{S}^{n-1}$, we shall now discuss the type of functions $\lambda \mapsto c(\lambda,e)$ that may arise from this problem. Qualitatively, four situations may happen. The first possibility is the one already appearing in \cite{bouin_propagation_2015} in the mono-dimensional case, that is $\tilde{\lambda}(e)=+\infty$ and thus $\mathrm{Sing}(M)=\emptyset$. We plot an example of this case in Figure \ref{fig:Shape}, case 1. If $\tilde{\lambda}(e) < +\infty$, three supplementary situations can occur. Either the infimum of $\lambda \mapsto c(\lambda,e)$ is attained for $\lambda<\tilde{\lambda}(e)$, as shown in Figure \ref{fig:Shape}, case 2, or it is attained for $\lambda=\tilde{\lambda}(e)$. In the latter case, the infimum can either be attained at a point where the left derivative of $c(\lambda,e)$ is zero (Figure \ref{fig:Shape}, case 3), or where it is negative (Figure \ref{fig:Shape}, case 4).
\begin{figure}[htbp]
\begin{center}
\subfigure[Case $1$]
{
\includegraphics[width=0.40\linewidth]{case1.eps}
}
\subfigure[Case $2$]
{
\includegraphics[width=0.40\linewidth]{case2.eps}
}
\subfigure[Case $3$]
{
\includegraphics[width=0.40\linewidth]{case3.eps}
}
\subfigure[Case $4$]
{
\includegraphics[width=0.40\linewidth]{case4.eps}
}
\end{center}
\caption{Various cases of speed functions $\lambda \mapsto c(\lambda,e)$. Red solid line: $\lambda \mapsto c(\lambda,e)$. Black dotted line: $\lambda \mapsto \bar v(e) - \frac{1}{\lambda}$. (a) $n=1$, $V=[-1,1]$, $e=1$, $M\equiv \frac{1}{2}$ and $r=1$. In this case, $\mathrm{Sing}(M)=\emptyset$ so that the function $\lambda\mapsto c(\lambda,1)$ is regular. This is the case discussed in \cite{bouin_propagation_2015}. (b) $n=2$, $V=D(0,1)$, $M\equiv \frac{1}{\pi}$ and $r=1$. In this case, $\mathrm{Sing}(M)\neq \emptyset$ but the minimum of $c(\lambda,e_1)$ is attained for $\lambda<\tilde{\lambda}(e_1)=4$. (c) $n=1$, $V=[-1,1]$, $M(v)=\frac{3}{2}(1-|v|)^2$ and $r=-1+l(1)^{-2}\int_{-1}^1 \frac{M(v)}{(1-v)^2}dv\approx 0.37$. In this case, the minimum of $c(\lambda,1)$ is attained for $\lambda=\tilde{\lambda}(1)$, with a zero left derivative. Numerically, $\tilde{\lambda}(1)\approx 1.58$. (d) $n=1$, $V=[-1,1]$, $M(v)=\frac{3}{2}(1-|v|)^2$ and $r=1$. In this case, the minimum of $c(\lambda,1)$ is attained for $\lambda=\tilde{\lambda}(1)$, with a negative left derivative. Numerically, $\tilde{\lambda}(1)\approx 2.31$.}\label{fig:Shape}
\end{figure}
\begin{rem}
One can get a criterion to check which case holds. The dispersion relation defining $c(\lambda,e)$ on $(0,\tilde\lambda(e))$ is
\begin{equation*}
\mathcal{I}(\lambda,c(\lambda,e),e) = 1,
\end{equation*}
where
\begin{equation}\label{dispersionrelation}
\mathcal{I}(\lambda,c,e):=\int_V \frac{(1+r) M\left(v\right)}{1+ \lambda ( c - v\cdot e)} dv.
\end{equation}
Differentiating with respect to $\lambda$, we find
\begin{equation*}
\int_V \frac{\lambda c'(\lambda,e) M\left(v\right)}{\left[1+ \lambda ( c(\lambda,e) - v\cdot e)\right]^2} dv + \int_V \frac{( c(\lambda,e) - v\cdot e)M\left(v\right)}{\left[1+ \lambda ( c(\lambda,e) - v\cdot e)\right]^2} dv = 0
\end{equation*}
Recalling $\int_V \frac{M\left(v\right)}{1+ \lambda ( c(\lambda,e) - v\cdot e)} dv = (1+r)^{-1}$ and defining
\begin{equation*}
\mathcal{J}(\lambda,e) = \int_V \frac{M\left(v\right)}{\left[1+ \lambda ( c(\lambda,e) - v\cdot e)\right]^2} dv,
\end{equation*}
we get
\begin{equation*}
c'(\lambda,e) = \left( 1 - \frac{(1+r)^{-1}}{\mathcal{J}(\lambda,e)} \right) \frac{1}{\lambda^2}.
\end{equation*}
As such, computing the value of $\lim_{\lambda \to \tilde\lambda(e)^-} \mathcal{J}(\lambda,e)$ allows one to determine which case occurs. Indeed, the function $\lambda \mapsto c(\lambda,e)$ attains its minimum at $\tilde\lambda(e)$ if and only if $c'\left(\tilde\lambda(e)^-\right) \leq 0$, which is equivalent to $\mathcal{J}(\tilde\lambda(e),e) \leq (1+r)^{-1}$, which is in turn equivalent to
\begin{equation}\label{eq:square}
\int_V \frac{M\left(v\right)}{\left( \bar v(e) - v\cdot e \right)^2} dv \leq (1+r) l(e)^2 ,
\end{equation}
which can be checked case by case. Note that, by the Cauchy-Schwarz inequality, one always has $l(e)^2 \leq \int_V \frac{M\left(v\right)}{\left( \bar v(e) - v\cdot e \right)^2} dv$.
\end{rem}
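For concreteness, the dispersion relation \eqref{dispersionrelation} and
the minimal speed \eqref{eq:linspeed} can also be evaluated numerically.
The following Python sketch (ours, assuming SciPy is available) treats
case (a) of Figure \ref{fig:Shape}: for fixed $\lambda$, the map
$c \mapsto \mathcal{I}(\lambda,c,e)$ is decreasing, so that $c(\lambda,e)$
can be obtained by bisection, and $c^*(e)$ by a one-dimensional
minimization.
\begin{verbatim}
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

# Sketch (ours): case (a) of Figure fig:Shape, i.e. n = 1, V = [-1,1],
# M = 1/2, e = 1 and r = 1, so that (1+r) M = 1.
r = 1.0

def I(lam, c):
    # I(lambda, c, 1) = int_{-1}^{1} (1+r) M(v) / (1 + lambda (c - v)) dv
    val, _ = quad(lambda v: (1.0 + r)*0.5 / (1.0 + lam*(c - v)), -1.0, 1.0)
    return val

def c_of_lambda(lam):
    # c -> I(lambda, c) decreases; it blows up as c -> 1 - 1/lambda, and
    # I(lam, 1 + (1+r)/lam) <= (1+r)/(2+r) < 1, so the root is bracketed.
    lo = 1.0 - 1.0/lam + 1e-6
    hi = 1.0 + (1.0 + r)/lam
    return brentq(lambda c: I(lam, c) - 1.0, lo, hi)

res = minimize_scalar(c_of_lambda, bounds=(0.05, 10.0), method="bounded")
print(res.x, res.fun)   # roughly lambda* ~ 3.0 and c* ~ 0.77 here
\end{verbatim}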
\begin{example}
Let us look back at \Cref{ex:annulebord}. As was stated there, $l(1)=3(2\ln(2)-1)$ and $\int_V \frac{M(v)}{(1 - v)^2}dv=6(1-\ln(2))<+\infty$. Thus, for $r >-1+ l(1)^{-2}\int_V \frac{M(v)}{(1 - v)^2}dv>0$, condition \eqref{eq:square} is satisfied with strict inequality, so the minimum of $\lambda \mapsto c(\lambda,1)$ is attained at $\tilde{\lambda}(1)$ with a negative left derivative. For $r=-1 + l(1)^{-2}\int_V \frac{M(v)}{(1 - v)^2}dv$, the minimum is still attained at $\tilde{\lambda}(1)$ but with a zero left derivative (\em i.e. \em $\lambda^*(1)=\tilde{\lambda}(1)$). We illustrate these results in \cref{fig:Shape}, cases 3 and 4.
\end{example}
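The constants above are easily checked numerically; the following short
sketch (ours) reproduces the values quoted in this example and in the
caption of Figure \ref{fig:Shape}.
\begin{verbatim}
import numpy as np

# Sketch (ours): M(v) = (3/2)(1-|v|)^2 on V = [-1,1], direction e = 1.
ln2 = np.log(2.0)
l1 = 3.0*(2.0*ln2 - 1.0)         # l(1) = 3(2 ln 2 - 1)
I2 = 6.0*(1.0 - ln2)             # int_V M(v)/(1-v)^2 dv = 6(1 - ln 2)
r_crit = -1.0 + I2/l1**2
print(r_crit)                    # ~ 0.37: case 3 of Figure fig:Shape
print((1.0 + r_crit)*l1)         # lambda~(1) = (1+r) l(1) ~ 1.58
print((1.0 + 1.0)*l1)            # for r = 1: lambda~(1) ~ 2.31
\end{verbatim}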
Since $c(\lambda,e)$ tends to infinity when $\lambda$ tends to $0$, for any $c \geq c^*(e)$ one can find $\lambda \in (0, \tilde\lambda(e)]$ such that $c(\lambda,e) = c$.
Fix $c\in \left( c^*(e), \overline{v}(e) \right)$ and denote by $\lambda_c$ the smallest solution in $(0,\tilde\lambda(e))$ of $c(\lambda_c,e) = c$. Notice that by construction $F_{\lambda_c,e}$ is integrable and bounded (bounded since $c > c^*(e)$), so the proof of \cite{bouin_propagation_2015}, Section 3.2, which constructs sub- and super-solutions for \eqref{eq:main}, is unchanged. From this pair of sub- and super-solutions, we deduce the existence of travelling wave solutions exactly as in \cite{bouin_propagation_2015}, by a monotonicity method when $c > c^*(e)$, and by passing to the limit $c \to c^*(e)$ to get the case $c=c^*(e)$.
The main difference between the mono-dimensional case of \cite{bouin_propagation_2015} and the higher dimensional case arises here. It is rather non-standard and interesting that the function giving the speed of propagation may be singular at its minimum value.
To prove that $c^*$ is still the minimal speed of propagation, the arguments used in \cite[Lemma 3.10]{bouin_propagation_2015} are not applicable. These arguments can be summarized as follows: in the one-dimensional case, when $M\geq\delta>0$, the function $\lambda \mapsto \mathcal{I}(\lambda,c,e)$ (recall \eqref{dispersionrelation}) is analytic. Thus, one cannot find $\lambda>0$ such that $\mathcal{I}(\lambda,c,e)=1$ when $c<c^*$. However, an argument based on the Rouch\'e theorem shows that this equation can be solved in $\mathbb{C}\setminus \mathbb{R}$. Assuming that there exists a travelling wave solution $f$ with $c<c^*$, one can then use such a $\lambda\in \mathbb{C}$ to construct a subsolution below $f$ which does not converge to 0 as $x\to \infty$, a contradiction. In our framework, the function $\lambda\mapsto\mathcal{I}(\lambda,c,e)$ might not be analytic around $\lambda^*(e)$, which prevents us from using this technique. We thus choose to use the Hamilton-Jacobi framework combined with the comparison principle.
We now prove the following lemma.
\begin{lem}\label{lem:minspeed}
Let $f$ be a travelling wave solution to \eqref{eq:main} in the direction $e \in \mathbb{S}^{n-1}$, with speed $c$. Then $c\geq c^*(e)$.
\end{lem}
\begin{proof}[{\bf Proof of \Cref{lem:minspeed}}]
Let $f$ be such a travelling wave solution with profile $\tilde f$, \em i.e. \em $f(t,x,v)=\tilde{f}(x\cdot e - c t,v)$. By \Cref{prop:zones}, the rescaled function $f^\varepsilon(t,x,v) = \tilde{f}\left(\frac{1}{\varepsilon}\left( x\cdot e - c t \right),v\right)$ satisfies $\lim_{\varepsilon \to 0} f^\varepsilon = M$ on $\lbrace x\cdot e - c t < 0 \rbrace$ and $\lim_{\varepsilon \to 0} f^\varepsilon = 0$ on $\lbrace x\cdot e - c t > 0 \rbrace$.
Take $0<\gamma < 1$ and define $g(x,v) = \gamma M(v) \textbf{1}_{[-1,1] e + e^\bot}(x)$ and $g^{\varepsilon}(x,v)=g(x/\varepsilon,v)$. We have
\begin{equation*}
\psi^\varepsilon(x) = - \varepsilon \ln(g(x/\varepsilon,v)/M(v)) = \left\lbrace\begin{array}{lcl}
- \varepsilon \ln(\gamma) & x \in [-\varepsilon,\varepsilon] e + e^\bot\\
+ \infty & \text{ else}\\
\end{array}\right..
\end{equation*}
Since $\lim_{z\to -\infty} \tilde f(z,v)=M(v)$ uniformly in $v \in V$, one can shift the profile far enough so that $M \geq f \geq g \geq 0$. Thus, the comparison principle (see \cite{bouin_propagation_2015}, Proposition 2.2 for a proof) yields that $f^\varepsilon \geq g^\varepsilon$. Passing to the limit $\varepsilon \to 0$, and recalling \Cref{thm:HJlimit}, \Cref{prop:zones} and \Cref{prop:nullsetfront}, we deduce that the nullset $e^\bot + c^*(e) t \, e$ must be contained in the region $\lbrace x\cdot e - ct \leq 0 \rbrace$, that is,
\begin{equation*}
c^*(e) t - c t \leq 0,
\end{equation*}
from which the result follows.
\end{proof}
From the Hamilton-Jacobi formalism, we may also deduce the following.
\begin{proof}[{\bf Proof of Proposition \ref{prop:spreadingplanar}}]
We start by proving \eqref{eq:frontplanar}. For this, we use the super-solution naturally provided by the linearized problem. We have
\begin{equation*}
f(t,x,v) \leq \min\lbrace M(v) , e^{-\lambda^*(e_0)\left(x\cdot e_0 - c^*(e_0) t\right)} F_{\lambda^*(e_0),e_0}(v) \rbrace
\end{equation*}
As a consequence,
\begin{equation*}
\rho(t,x) \leq \min\lbrace 1 , e^{-\lambda^*(e_0)\left(x\cdot e_0 - c^*(e_0) t\right)} \rbrace,
\end{equation*}
and thus one has $\lim_{t \to +\infty} \sup_{x\cdot e_0 > ct} \rho(t,x) = 0$.
For \eqref{eq:backplanar}, we use the Hamilton-Jacobi results in the following way. We first notice that since the initial data is invariant under any translation in $e_0^\bot$, and the equation is invariant under translations, the solution $f(t,x,v)$ depends only on $x\cdot e_0$. That is, $f(t,x,v) = f(t,(x \cdot e_0)e_0,v) = \tilde{f}(t,x \cdot e_0,v)$. For any $c < c^*(e_0)$, recalling \Cref{thm:HJlimit}, \Cref{prop:zones} and \Cref{prop:nullsetfront}, we have
\begin{equation*}
\lim_{t \to \infty} f(t,e_0^\bot + c t e_0,v) = \lim_{t \to \infty} \tilde f(t,ct,v) = \lim_{\varepsilon \to 0} \tilde f^\varepsilon(1,c,v) = M(v),
\end{equation*}
since $c < c^*(e_0)$.
\end{proof}
\subsection{Proof of \Cref{prop:spreadingbounded} : spreading of a compactly supported initial data}
We finally prove \Cref{prop:spreadingbounded}. The spreading result \eqref{eq:frontcompact} is obtained as for the Fisher-KPP equation in heterogeneous media \cite{berestycki_spreading_2012}. It can be derived by using the super-solution
\begin{equation*}
\overline{f}(t,x,v) = \inf_{e \in \mathbb{S}^{n-1}} e^{-\lambda^*(e)\left(x\cdot e - c^*(e) t\right)} Q_{\lambda^*(e) e}(v).
\end{equation*}
By the comparison principle, and since the initial data is compactly supported, the function $\overline{f}$ lies above $f$ (multiplying $\overline{f}$ by a large constant if necessary). We deduce that for any given $e_0 \in \mathbb{S}^{n-1}$, and any fixed $x \in \mathbb{R}^n$,
\begin{equation*}
f(t,x+c e_0 t,v) \leq \inf_{e \in \mathbb{S}^{n-1}} e^{-\lambda^*(e)\left((x+c e_0 t)\cdot e - c^*(e) t\right)} Q_{\lambda^*(e) e}(v) = \inf_{e \in \mathbb{S}^{n-1}} e^{-\lambda^*(e)\left(x\cdot e + \left(c \, e_0\cdot e - c^*(e)\right) t\right)} Q_{\lambda^*(e) e}(v).
\end{equation*}
Moreover, the domain of $Q_{\lambda^*(e)e}$ contains $V\setminus \left\{v_{max}e\right\}$ and $Q_{\lambda^*(e)e}$ is bounded on compact subsets of $V\setminus \left\{v_{max}e\right\}$. Hence, for fixed $v\in V$, we can choose $e\in \mathbb{S}^{n-1}$ such that $v \in V\setminus \left\{v_{max}e\right\}$. Then, as soon as $c > w^*(e_0)$, there exists a direction $e$ with $c (e \cdot e_0) > c^*(e)$, and thus $\lim_{t \to \infty} f(t,x+c e_0 t,v) = 0$.
Moreover, we shall prove \eqref{eq:backcompact} as follows. For any $c < w^*(e_0)$, recalling \Cref{thm:HJlimit}, \Cref{prop:zones} and \Cref{prop:nullsetfreidlin_functional_1985}, we have
\begin{equation*}
\lim_{t \to \infty} f(t,c t e_0,v) = \lim_{\varepsilon \to 0} f^\varepsilon(1,c e_0,v) = M(v),
\end{equation*}
since $c < w^*(e_0)$.\begin{flushright}$\square$\end{flushright}
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
Subspace methods have been widely used in signal/image processing,
pattern recognition, computer vision, etc. \cite{turk1991eigenfaces,
bouwmans2009subspace, kriegel2009clustering, lu2011survey, gu2011joint,
wang2015joint}. They can have different names and emphasis in various
contexts such as manifold learning \cite{wang2005adaptive,
lin2008riemannian}. Generally speaking, one uses a subspace to denote
the feature space of a certain object class ({\em e.g.}, the subspace
of the dog object class) or the dominant feature space by dropping less
important features ({\em e.g.}, the subspace obtained via principal
component analysis or PCA). The subspace representation offers a
powerful tool for signal analysis, modeling and processing. Subspace
learning is to find subspace models for concise data representation and
accurate decision making based on training samples.
Most existing subspace methods are conducted in a single stage. We may
ask whether there is an advantage in performing subspace learning in
multiple stages. Research on generalizing from one-stage subspace
learning to multi-stage subspace learning is rare. Two PCA stages are
cascaded in the PCAnet \cite{chan2015pcanet}, which provides an
empirical solution to multi-stage subspace learning. Little research on
this topic may be attributed to the fact that a straightforward cascade
of linear multi-stage subspace methods, which can be expressed as the
product of a sequence of matrices, is equivalent to a linear one-stage
subspace method. The advantage of linear multi-stage subspace methods
may not be obvious from this viewpoint.
Yet, multi-stage subspace learning may be worthwhile under the following
two conditions. First, the input subspace is not fixed but growing from
one stage to the other. For example, we can take the union of a pixel
and its eight nearest neighbors to form an input space in the first
stage. Afterward, we enlarge the neighborhood of the center pixel from
$3 \times 3$ to $5 \times 5$ in the second stage. Clearly, the first
input space is a proper subset of the second input space. By
generalizing it to multiple stages, it gives rise to a ``successive
subspace growing" process. This process exists naturally in the
convolutional neural network (CNN) architecture, where the response in a
deeper layer has a larger receptive field. In our words, it corresponds
to an input of a larger neighborhood. Instead of analyzing these
embedded spaces independently, it is advantageous to find a
representation of a larger neighborhood using those of its constituent
neighborhoods of smaller sizes, in terms of computation and storage efficiency.
Second, special attention should be paid to the cascade interface of two
consecutive stages as elaborated below.
When two consecutive CNN layers are in cascade, a nonlinear activation
unit is used to rectify the outputs of convolutional operations of the
first layer before they are fed to the second layer. The importance of
nonlinear activation to the CNN performance has been empirically verified, yet
little research has been conducted on understanding its actual role. Along
this line, it was pointed out in \cite{kuo2016understanding} that there
exists a sign confusion problem when two CNN layers are in cascade. To
address this problem, Kuo {\em et al.} proposed the Saak (subspace
approximation via augmented kernels) transform \cite{kuo2018data} and
the Saab (subspace approximation via adjusted bias) transform
\cite{kuo2019interpretable} as an alternative to nonlinear activation.
Both Saak and Saab transforms are variants of PCA. They are carefully
designed to avoid sign confusion.
One advantage of adopting Saak/Saab transforms rather than nonlinear
activation is that the CNN system is easier to explain
\cite{kuo2019interpretable}. Specifically, Kuo {\em et al.}
\cite{kuo2019interpretable} proposed the use of multi-stage Saab
transforms to determine parameters of convolutional layers and the use
of multi-stage linear least-squares (LLS) regression to determine
parameters of fully-connected (FC) layers. Since all parameters of CNNs
are determined in a feedforward manner without any backpropagation (BP)
in this design, it is named the ``feedforward design". Yet, the
feedforward design is drastically different from the BP-based design.
Retrospectively, the work in \cite{kuo2019interpretable} offered the
first ``successive subspace learning (SSL)" design example although the
SSL term was not explicitly introduced therein. Although being inspired
by the deep learning (DL) framework, SSL is fundamentally different in
its model formulation, training process and training complexity. We will
conduct an in-depth comparison between DL and SSL in Sec.
\ref{sec:discussion}.
SSL can be applied but not limited to parameters design of a CNN. In
this work, we will examine the feedforward design as well as SSL from a
higher ground. Our current study is a sequel of cumulative research
efforts as presented in \cite{kuo2016understanding, kuo2018data,
kuo2019interpretable, kuo2017cnn}. Here, we introduce SSL formally and
discuss its similarities and differences with DL. To illustrate the
flexibility and generalizability of SSL, we present an SSL-based machine
learning system for object classification. It is called the PixelHop
method. The block diagram of the PixelHop system deviates from the
standard CNN architecture completely since it is not a network any
longer. The word ``hop'' is borrowed from graph theory. For a target
node in a graph, its immediate neighboring nodes connected by an edge
are called its one-hop neighbors. Its neighboring nodes connected to
itself through $n$ consecutive edges via the shortest path are the
$n$-hop neighbors. The PixelHop method begins with a very localized
region; namely, a single pixel denoted by ${\bf p}$. It is called the
$0$-hop input. We concatenate the attributes of a pixel, and attributes
of its one-hop neighbors to form a one-hop neighborhood denoted by
$\mathfrak{N}_1 ({\bf p})$. We can keep enlarging the input by
including larger neighborhood regions. This idea applies to structured
data ({\em e.g.,} images) as well as unstructured data ({\em e.g.}, 3D
point cloud sets). An SSL-based 3D point cloud classification scheme,
called the PointHop method, was proposed in \cite{zhang2019pointhop}.
If we implement the above idea in a straightforward manner, the
dimension of neighborhood $\mathfrak{N}_i ({\bf p})$, where $i=1, 2,
\cdots, I$ is the stage index, will grow very fast as $i$ becomes
larger. To control the rapid dimension growth of $\mathfrak{N}_i ({\bf
p})$, we use the Saab transform to reduce its dimension. Since no label
is used in the Saab transform, it is an unsupervised dimension reduction
technique. To reduce the dimension of the Saab responses at each stage
furthermore, we exploit the label of training samples to perform
supervised dimension reduction, which is implemented by a label-assisted
regression (LAG) unit. As a whole, the PixelHop method provides an
extremely rich feature set by integrating attributes from near-to-far
neighborhoods of selected spatial locations. Finally, we adopt an
ensemble method to combine features and train a classifier, such as
the support vector machine (SVM) \cite{cortes1995support} and the random
forest (RF) \cite{breiman2001random}, to provide the ultimate
classification result. Extensive experiments are conducted on three
datasets (namely, MNIST, Fashion MNIST and CIFAR-10 datasets) to
evaluate the performance of the PixelHop method. It is shown by
experimental results that the PixelHop method outperforms classical CNNs
of similar model complexity in classification accuracy while demanding
much lower training complexity.
Our current work has three major contributions. First, we introduce the
SSL notion explicitly and make a thorough comparison between SSL and DL.
Second, the LAG unit using soft pseudo labels as presented in Sec.
\ref{subsec:clf} is novel. Third, we use the PixelHop method as an
illustrative example for SSL, and conduct extensive experiments to
demonstrate its performance.
The rest of this paper is organized as follows. The PixelHop method is
presented in Sec. \ref{sec:pixelhop}. Experimental results of the
PixelHop method are given in Sec. \ref{sec:experiments}. Comparison
between DL and SSL is discussed in Sec. \ref{sec:discussion}. Finally,
concluding remarks are drawn and future research topics are pointed out
in Sec. \ref{sec:conclusion}.
\section{PixelHop Method}\label{sec:pixelhop}
We present the PixelHop method to illustrate the SSL methodology for
image-based object classification in this section. First, we give an
overview of the whole system in Sec. \ref{subsec:overview}. Then, we
study the properties of Saab filters that reside in each PixelHop unit
in Sec. \ref{subsec:Saab}. Finally, we examine the label-assisted
regression (LAG) unit of the PixelHop system in Sec. \ref{subsec:clf}.
\begin{figure*}[!htbp]
\begin{center}
\includegraphics[width=0.95\textwidth]{figure/overview_hop.png}
\end{center}
\caption{The block diagram of the PixelHop method.}\label{fig:overview}
\end{figure*}
\subsection{System Overview}\label{subsec:overview}
The block diagram of the PixelHop method is given in Fig.
\ref{fig:overview}. Its input can be graylevel or color images. They
are fed into a sequence of $I$ PixelHop units in cascade to obtain the
attributes of the $i$th PixelHop unit, $i=1, 2, \cdots, I$, as shown in
module \#1. The attributes in spatial locations of each PixelHop unit
are aggregated in multiple forms and, then, fed into the LAG unit for
further dimension reduction to generate $M$ attributes per unit as shown
in module \#2. Finally, these attributes are concatenated to form the
ultimate feature vector of dimension $M \times I$ for image
classification as shown in module \#3. The function of each module is
stated below.
\begin{itemize}
\setlength{\itemsep}{-2pt}
\item Module \#1: A sequence of PixelHop units in cascade \\
The purpose of this module is to compute attributes of near-to-far
neighborhoods of selected pixels through $I$ PixelHop units. The block
diagram of a PixelHop unit is shown in Fig. \ref{fig:pixelhop}. The
$i$th PixelHop unit, $i=1,\cdots, I$, concatenates attributes of the
$(i-1)$th neighborhood, whose dimension is denoted by $K_{(i-1)}$, of a
target pixel and its $N_i$ neighboring pixels to form a neighborhood
union. Through this process, the dimension of the enlarged neighborhood
is equal to $K_{(i-1)} \times (N_i+1)$. Without loss of generality, we
set $N_i=8$ for all $i$ in our implementation. If we do not take any
further action, the attribute dimension will be $K_0 9^i$ at the $i$th
unit. It is critical to apply a dimension reduction technique so as to
control the rapidly growing dimension. This is achieved by a subspace
approximation technique; namely, the Saab transform, as illustrated in
Fig. \ref{fig:pixelhop_block}.
Each PixelHop unit yields a neighborhood representation corresponding to
its stage index and input neighborhood size. At the $i$th PixelHop unit,
we see that the spectral dimension is reduced from $9 K_{i-1}$ to $K_i$
after the Saab transform while the spatial dimension remains the same.
Since the neighborhoods of two adjacent pixels are overlapping with each
other at each PixelHop unit, there exists spatial redundancy in the
attribute representation. For this reason, we insert the standard
$(2\times2)$-to-$(1\times1)$ maximum pooling unit between two
consecutive PixelHop units as shown in Fig. \ref{fig:overview}. After
the pooling, the spatial resolution is reduced from $S_{(i-1)} \times
S_{(i-1)}$ to $S_{i} \times S_{i}$.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{figure/PixelHop_Unit.PNG}
\end{center}
\caption{The block diagram of a PixelHop unit in the PixelHop
system.}\label{fig:pixelhop}
\end{figure}
\begin{figure*}[!htbp]
\begin{subfigure}{0.3\textwidth}
\centering
{\includegraphics[width=0.4\textwidth]{figure/Neighborhood_Subspace.PNG}}
\caption{}
\end{subfigure}
\begin{subfigure}{0.6\textwidth}
\centering
{\includegraphics[width=1.0\textwidth]{figure/Subspace_Approx.PNG}}
\caption{}
\end{subfigure}
\caption{The block diagram of a PixelHop unit: (a) neighborhood
construction by taking the union of a center pixel and its eight nearest
neighborhood pixels and (b) the use of the Saab transform to reduce the
dimension from $9 K_{(i-1)}$ to $K_i$.}\label{fig:pixelhop_block}
\end{figure*}
\item Module \#2: Aggregation and supervised dimension reduction via the
label-assisted regression (LAG) unit \\
The output from the $i$th PixelHop unit has a dimension of $S_{(i-1)}
\times S_{(i-1)} \times K_i$ as illustrated in Fig.
\ref{fig:aggregation}. The maximum pooling scheme is used to reduce the
spatial dimension before the output is fed into the next PixelHop unit
as described in module \#1. To extract a diversified set of features at
the $i$th stage, we consider multiple aggregation schemes such as taking
the maximum, the minimum, and the mean values of responses in small
nonoverlapping regions. The spatial size of features after aggregation
is denoted as $P_{i} \times P_{i}$, where $P_i$ is a hyper-parameter to
choose. We will explain the relationship between $P_i$ and $S_i$ in Sec.
\ref{sec:experiments}. Afterward, we reduce the feature dimension based
on supervised learning. For a given neighborhood size, we expect
attributes of different object classes to follow different distributions.
For example, a cat image has fur texture while a car image does not. This
property can be exploited to allow us to find a more concise
representation, which will be elaborated in Sec. \ref{subsec:clf}. In
Fig. \ref{fig:overview}, we use $M$ to denote the dimension of
the feature vector at each PixelHop unit.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{figure/aggregation.png}
\end{center}
\caption{Illustration of the aggregation unit in module \#2.}\label{fig:aggregation}
\end{figure}
\item Module \#3: Feature concatenation across all PixelHop units and
Classification \\
We concatenate $M$ features from $I$ PixelHop units to get a total of $M
\times I$ features in module \#3. Afterward, we train a multi-class
classifier using these features by following the standard pattern
recognition procedure. In the experiment, we use the SVM classifier
with its kernel being the radial basis function (RBF).
\end{itemize}
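To make the data flow of modules \#1 and \#2 concrete, we give a minimal
Python sketch (ours, not an official implementation; all function names
are ours) of one PixelHop unit, of the intermediate maximum pooling and
of the mean aggregation. The helper \texttt{saab\_filters} is the one
sketched in Sec. \ref{subsec:Saab}.
\begin{verbatim}
import numpy as np

# Sketch (ours) of one PixelHop unit: gather 3x3 neighborhood unions of
# an (H, W, K) attribute map and project them on the Saab filters, where
# dc has shape (9K,) and ac has shape (9K, K').
def pixelhop_unit(feat, dc, ac):
    H, W, K = feat.shape
    pad = np.pad(feat, ((1, 1), (1, 1), (0, 0)), mode="reflect")
    patches = np.stack([pad[i:i+H, j:j+W] for i in range(3)
                        for j in range(3)], axis=2).reshape(H, W, 9*K)
    dc_resp = patches @ dc                                  # DC response
    ac_resp = (patches - patches.mean(axis=2, keepdims=True)) @ ac
    return np.concatenate([dc_resp[..., None], ac_resp], axis=2)

def max_pool_2x2(feat):
    # (2x2)-to-(1x1) maximum pooling between two PixelHop units;
    # assumes even spatial dimensions
    H, W, K = feat.shape
    return feat.reshape(H//2, 2, W//2, 2, K).max(axis=(1, 3))

def aggregate_mean(feat, p):
    # mean aggregation over nonoverlapping p x p regions (module #2);
    # the min/max aggregations used by PixelHop+ are analogous
    H, W, K = feat.shape
    return feat.reshape(H//p, p, W//p, p, K).mean(axis=(1, 3))
\end{verbatim}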
\subsection{Properties of Saab Filters}\label{subsec:Saab}
The Saab transform decomposes a signal space into two subspaces - the DC
(direct current) subspace and the AC (alternating current) subspace. It
uses the DC filter, which is a normalized constant-element vector, to
represent the DC subspace. It applies PCA to the AC subspace to derive
the AC filters. We will examine two issues below: 1) the relationship
between the number of AC filters and the subspace approximation
capability, and 2) the fast convergence behavior of AC filters.
{\bf Number of AC filters.} We show the relationship between the energy
preservation ratio and the number of Saab AC filters in Fig.
\ref{fig:log_energy}. We see that leading AC filters can capture a large
amount of energy while the capability drops rapidly as the index becomes
larger. We plot five energy thresholds: 95\% (yellow), 96\% (red), 97\%
(blue), 98\% (green) and 99\% (purple) in Fig. \ref{fig:log_energy}.
This suggests that we may use a higher energy ratio in the beginning
PixelHop units and a lower energy ratio in the latter PixelHop units if
we would like to balance the classification performance and the
complexity.
\begin{figure*}[!htbp]
\begin{center}
\begin{subfigure}[(a)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth,height=0.18\textheight]{figure/energy_1_cifary.png}}
\caption{PixelHop Unit1}
\end{subfigure}
\begin{subfigure}[(b)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth,height=0.18\textheight]{figure/energy_2_cifary.png}}
\caption{PixelHop Unit2}
\end{subfigure}
\begin{subfigure}[(c)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth,height=0.18\textheight]{figure/energy_3_cifary.png}}
\caption{PixelHop Unit3}
\end{subfigure}
\begin{subfigure}[(c)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth,height=0.18\textheight]{figure/energy_4_cifary.png}}
\caption{PixelHop Unit4}
\end{subfigure}
\end{center}
\caption{The log energy plot as a function of the number of AC filters
tested on the luminance (Y) channel of color images from CIFAR-10 dataset, where the yellow, red, blue, green
and purple dots indicate the cumulative energy ratio of 95\%, 96\%, 97\%, 98\%
and 99\%, respectively.}\label{fig:log_energy}
\end{figure*}
{\bf Fast Convergence.} Subspace approximation using the Saab transform
is an unsupervised learning process. The unsupervised learning pipeline
in module \#1 of the PixelHop system actually does not demand a large
number of data samples. We conduct experiments on the CIFAR-10 dataset
as an example to support this claim. This system contains four PixelHop
units, where the number of AC filters is chosen by setting the energy
preservation ratio to $95\%$. The Saab filters are derived from the
covariance matrix of DC-removed spatial-spectral cuboids. If the
covariance matrix converges quickly, the Saab filters should converge
fast as well. To check the convergence of the covariance matrix, we
compute the Frobenius norm of the difference of two covariance matrices
using $K_t$ and $K_{(t+1)}$ cuboid samples. We plot the
dimension-normalized Frobenius norm difference, denoted by
$\Delta_{(t+1)}$, as a function of $K_{(t+1)}$ in Fig.
\ref{fig:cov_plot}. The curve is obtained as the average of five runs.
We see that the Frobenius norm difference converges to zero very
rapidly, indicating a fast-converging covariance matrix. Furthermore, we
compute the cosine similarity between the converged Saab filters (using
all 50K training images of the dataset) as well as Saab filters obtained
using a certain number of images. The results are shown in Fig.
\ref{fig:cosine_plot}. We see that AC filters converge to the final one
with about 1K training images in the first two PixelHop units and about
2.5K training images in the last two PixelHop units.
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[(a)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth]{figure/cov_1_cifary.png}}
\caption{PixelHop Unit1}
\end{subfigure}
\begin{subfigure}[(b)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth]{figure/cov_2_cifary.png}}
\caption{PixelHop Unit2}
\end{subfigure}
\begin{subfigure}[(c)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth]{figure/cov_3_cifary.png}}
\caption{PixelHop Unit3}
\end{subfigure}
\begin{subfigure}[(d)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth]{figure/cov_4_cifary.png}}
\caption{PixelHop Unit4}
\end{subfigure}
\caption{The Frobenius norm of the difference of two covariance
matrices, $\Delta_{(t+1)}$, using $K_t$ and $K_{(t+1)}$ sample
patches is plotted as a function of $K_{(t+1)}$.} \label{fig:cov_plot}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[(a)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth,height=0.18\textheight]{figure/cosine_1_cifary.png}}
\caption{PixelHop Unit1}
\end{subfigure}
\begin{subfigure}[(b)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth,height=0.18\textheight]{figure/cosine_2_cifary.png}}
\caption{PixelHop Unit2}
\end{subfigure}
\begin{subfigure}[(c)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth,height=0.18\textheight]{figure/cosine_3_cifary.png}}
\caption{PixelHop Unit3}
\end{subfigure}
\begin{subfigure}[(c)]{0.35\textwidth}
\centering
{\includegraphics[width=1\textwidth,height=0.18\textheight]{figure/cosine_4_cifary.png}}
\caption{PixelHop Unit4}
\end{subfigure}
\caption{The cosine similarity between the AC filters obtained using all
training images and those obtained using a certain number of images is
plotted as a function of the number of images of the latter. Five
representative AC filters at each PixelHop unit are selected for
illustration purposes.}\label{fig:cosine_plot}
\end{figure*}
\subsection{Label-Assisted Regression (LAG)}\label{subsec:clf}
Our design of the label-assisted regression (LAG) unit is motivated by
two observations. First, each PixelHop unit offers a representation of a
neighborhood of a certain size centered at a target pixel. The size
varies from small to large ones through successive neighborhood
expansion and subspace approximation. The representations are called
attributes. We need to integrate the local-to-global attributes across
multiple PixelHop units at multiple selected pixels to solve the image
classification problem. One straightforward integration is to
concatenate all attributes to form a long feature vector. Yet, the
dimension of concatenated attributes is too high to be effective. We
need another way to lower the dimension of the concatenated attributes.
Second, CNNs use data labels effectively through BP. It is desired to
find a way to use data labels in SSL. The attributes of the same object
class are expected to reside in a smaller subspace in high-dimensional
attribute space. We attempt to search for the subspace formed by samples
of the same class for dimension reduction. This procedure demands a
supervised dimension reduction technique.
Although
presented in a different form, a similar idea was investigated in
\cite{kuo2019interpretable}. To give an interpretation to the first FC
layer of the LeNet-5, Kuo {\em et al.} \cite{kuo2019interpretable}
proposed to partition samples of each digit class into 12 clusters to
generate 12 pseudo-classes to account for intra-class variabilities. By
mimicking the dimension of the first FC layer of the LeNet-5, we have
120 clusters ({\em i.e.}, 12 clusters per digit for 10 digits) in total.
Since each training sample belongs to one of 120 clusters, we assign it
to a one-hot vector in a space of dimension 120. Then, we can set up a
least-squared regression (LSR) system containing 120 affine equations
that map samples in the input feature space to the output space that is
formed by 120 one-hot vectors. The one-hot vector is used to indicate a
cluster with a hard label. In this work, we adopt soft-labeled output
vectors in setting up the LSR problem. The learning task is to use the
training samples to determine the elements in the regression matrix.
Then, we apply the learned regression matrix to testing samples for
dimension reduction.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.55\textwidth]{figure/LAG.png}
\end{center}
\caption{Illustration of the relationship between a feature point and
centroids of different classes in the LAG unit.}\label{fig:lag}
\end{figure}
By following the notation in Fig. \ref{fig:aggregation}, after spatial
aggregation, the $i$th PixelHop unit yields a vector of dimension $1
\times P_i P_i K_i$, where $P_i \times P_i$ denotes the number of
selected pixels and $K_i$ denotes the attribute number. As illustrated
in Fig. \ref{fig:lag}, we study the distribution of these concatenated
attribute vectors based on their labels through the following steps:
\begin{enumerate}
\setlength{\itemsep}{-2pt}
\item We cluster samples of the same class to create object-oriented
subspaces and find the centroid of each subspace.
\item Instead of adopting a hard association between samples and
centroids, we adopt a soft association. As a result, the target output
vector is changed from the one-hot vector to a probability vector.
\item We set up and solve a linear LSR problem using the probability vectors.
\end{enumerate}
The regression matrix obtained in the last step is the label-assisted regressor.
For Step \#1, we adopt the k-means clustering algorithm. It applies to
samples of the same class only. We partition samples of each class into
$L$ clusters. Suppose that there are $J$ object classes, denoted by
$O_j$, $j=1, \cdots, J$ and the dimension of the concatenated attribute
vectors is $n$. For Step \#2, we denote the vector of concatenated
attributes of class $O_j$ by ${\bf x}_j=(x_{j,1}, x_{j,2}, \cdots,
x_{j,n})^T \in R^n$. Also, we denote centroids of $L$ clusters by ${\bf
c}_{j,1}$, ${\bf c}_{j,2}$, $\cdots$, ${\bf c}_{j,L}$. Then, we define
the probability vector of sample ${\bf x}_j$ belonging to centroid ${\bf
c}_{j',l}$ as
\begin{equation}\label{eq1}
\mbox{Prob}({\bf x}_j,{\bf c}_{j',l})= 0 \quad \mbox{if } j \ne j',
\end{equation}
and
\begin{equation}\label{eq2}
\mbox{Prob}({\bf x}_j,{\bf c}_{j,l})= \frac{\exp(-\alpha d({\bf x}_j,{\bf c}_{j,l}))}
{\sum_{l=1}^{L} \exp(-\alpha d( {\bf x}_j, {\bf c}_{j,l}) )},
\end{equation}
where $d({\bf x},{\bf y})$ is the Euclidean distance between vectors
${\bf x}$ and ${\bf y}$ and $\alpha$ is a parameter that determines the
relationship between the Euclidean distance and the likelihood for a
sample belonging to a cluster. The larger $\alpha$ is, the faster the
probability decays with the distance. The shorter the Euclidean distance,
the larger the likelihood. Finally, we can define the probability of sample
${\bf x}$ belonging to a subspace spanned by centroids of class $j$ as
\begin{equation}\label{eq3}
{\bf p}_{j}({\bf x}_{j'})={\bf 0}, \quad \mbox{if } j \ne j',
\end{equation}
where ${\bf 0}$ is the zero vector of dimension $L$, and
\begin{equation}\label{eq4}
{\bf p}_{j}({\bf x}_j)=(\mbox{Prob}({\bf x}_j,{\bf c}_{j,1}), \cdots,
\mbox{Prob}({\bf x}_j,{\bf c}_{j,l}), \cdots,
\mbox{Prob}({\bf x}_j,{\bf c}_{j,L}))^T.
\end{equation}
Finally, we can set up a set of linear LSR equations to relate
the input attribute vector and the output probability vector as
\begin{equation}\label{eq:l3sr}
\left[
\begin{array}{ccccc}
a_{11} & a_{12} & \cdots & a_{1n} & w_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & w_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{Mn} & w_M
\end{array}
\right]
\left[\begin{array}{c}
x_{1} \\
x_{2} \\
\vdots \\
x_{n} \\
1
\end{array}
\right]
=
\left[\begin{array}{c}
{\bf p}_{1}({\bf x}) \\
\vdots \\
{\bf p}_{j}({\bf x}) \\
\vdots \\
{\bf p}_{J}({\bf x}) \\
\end{array}
\right],
\end{equation}
where $M=J \times L$ is the total number of centroids, parameters $w_1$,
$w_2$, $\cdots$, $w_M$ are the bias terms and ${\bf p}_{j}({\bf x})$ is
defined in Eq. (\ref{eq4}). It is the probability vector of dimension
$L$, which indicates the likelihood for input ${\bf x}$ to belong to the
subspace spanned by the centroids of class $j$. Since ${\bf x}$ can
belong to one class only, we have zero probability vectors with respect
to $J-1$ classes.
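To summarize the three steps above, the following Python sketch (ours,
assuming NumPy and scikit-learn are available) trains the LAG unit,
building the soft targets of Eqs. \eqref{eq1}--\eqref{eq4} and solving
the linear least-squares problem \eqref{eq:l3sr}.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# Sketch (ours): X has shape (N, n), y holds labels in {0,...,J-1},
# L clusters per class, alpha as in Eq. (eq2). Returns the M x (n+1)
# regression matrix of Eq. (eq:l3sr), with M = J * L.
def lag_fit(X, y, J, L=5, alpha=10.0):
    N, n = X.shape
    Y = np.zeros((N, J*L))                       # soft target vectors
    for j in range(J):
        idx = np.where(y == j)[0]
        ctr = KMeans(n_clusters=L).fit(X[idx]).cluster_centers_
        d = np.linalg.norm(X[idx, None, :] - ctr[None], axis=2)
        p = np.exp(-alpha*d)                     # Eq. (eq2), unnormalized
        Y[idx, j*L:(j+1)*L] = p / p.sum(axis=1, keepdims=True)
    Xa = np.hstack([X, np.ones((N, 1))])         # append 1 for the biases
    A, *_ = np.linalg.lstsq(Xa, Y, rcond=None)   # linear least squares
    return A.T                                   # rows: [a_m1..a_mn, w_m]

# At inference, the M-dimensional LAG feature of a sample x is
# A @ np.append(x, 1.0).
\end{verbatim}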
\section{Experimental Results}\label{sec:experiments}
We organize experimental results in this section as follows. First, we
discuss the experimental setup in Sec. \ref{subsec:setup}. Second, we
conduct the ablation study and study the effects of different parameters
on the Fashion MNIST dataset in Sec. \ref{exp:ablation}. Third, we
perform error analysis, compare the performance of color image
classification using different color spaces and show the scalability of
the PixelHop method in Sec. \ref{subsec:error}. Finally, we conduct
performance benchmarking between the PixelHop method and the LeNet-5
network \cite{Lecun98gradient-basedlearning}, which is a classical CNN
architecture of model complexity similar to the PixelHop method in terms
of classification accuracy and training complexity in Sec.
\ref{exp:comparison}.
\subsection{Experiment Setup}\label{subsec:setup}
We test the classification performance of the PixelHop method on three
popular datasets: MNIST \cite{Lecun98gradient-basedlearning}, Fashion
MNIST \cite{xiao2017fashion} and CIFAR-10 \cite{krizhevsky2009learning}.
The MNIST dataset contains gray-scale images of handwritten digits (from
$0$ to $9$). It has 60,000 training images and 10,000 testing images.
The original image size is $28 \times 28$ and zero-padding is used to
enlarge the image size to $32\times32$. The Fashion MNIST dataset
contains gray-scale fashion images. Its image size and numbers of
training and testing images are the same as those of the MNIST dataset.
The CIFAR-10 dataset has 10 object classes of color images and the image
size is $32\times32$. It has 50,000 training images and 10,000 testing
images.
The following parameters are used in the default setting, called
PixelHop, in our experiments.
\begin{enumerate}
\setlength{\itemsep}{-2pt}
\item Four PixelHop units are cascaded in module \#1. To decide the
number of Saab AC filters in the unsupervised dimension reduction
procedure, we set the total energy ratio preserved by AC filters to 97\%
for MNIST and Fashion MNIST and 98\% for CIFAR-10.
\item To aggregate attributes spatially in module \#2, we average responses of
nonoverlapping patches of sizes $4 \times 4$, $4 \times 4$, $2 \times 2$
and $2 \times 2$ in the first, second, third and fourth PixelHop units,
respectively, to reduce the spatial dimension of attribute vectors.
Mathematically, we have
\begin{equation}
P_1=0.25 S_0, \quad P_2=0.25 S_1, \quad P_3=0.5 S_2, \quad P_4=0.5 S_3.
\end{equation}
As a result, the first to the fourth PixelHop units have outputs of
dimension $8 \times 8 \times K_1$, $4 \times 4 \times K_2$, $4 \times 4
\times K_3$, and $2 \times 2 \times K_4$, respectively. Then, we feed
all of them to the supervised dimension reduction unit.
\item We set $\alpha=10$ and the number of clusters for each object
class to $L=5$ in the LAG unit of module \#2. Since there are $J=10$
object classes in all three datasets of concern, the dimension is
reduced to $J \times L=50$.
\item We use the multi-class SVM classifier with the Radial Basis
Function (RBF) as the kernel in module \#3. Before training the SVM
classifier, we normalize each feature dimension to be a zero mean random
variable with the unit variance.
\end{enumerate}
Although the hyper-parameters given above are chosen empirically, the
final performance of the PixelHop system remains relatively stable as
long as their values are in a reasonable range.
\subsection{Ablation Study}\label{exp:ablation}
\begin{table*}[h!]
\centering
\caption{Ablation study for Fashion MNIST, where the fourth and the
eighth rows are the settings adopted by PixelHop and PixelHop$^+$,
respectively.}\label{table:Ablation}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{Feature Used} & \multicolumn{2}{|c|}{DR}
&\multicolumn{4}{|c|}{Aggregation}
& \multicolumn{2}{|c|}{Classifier}
& \multirow{2}{*}{Test ACC (\%)} \\ \cline{1-10}
ALL & Last Unit & LAG &PCA&Mean&Min&Max&Skip&SVM&RF&\\ \hline
&\checkmark&\checkmark&&\checkmark&&&&\checkmark&& 89.88\\ \hline
&\checkmark&&\checkmark&\checkmark&&&&\checkmark&&89.11 \\ \hline
\checkmark&&\checkmark&&\checkmark&&&&&\checkmark&89.31 \\ \hline
\checkmark&&\checkmark&&\checkmark&&&&\checkmark&&{\bf 91.30} \\ \hline
\checkmark&&\checkmark&&&\checkmark&&&\checkmark&&91.16 \\ \hline
\checkmark&&\checkmark&&&&\checkmark&&\checkmark&&90.83 \\ \hline
\checkmark&&\checkmark&&&&&\checkmark&\checkmark&&91.14 \\ \hline
\checkmark&&\checkmark&&\checkmark&\checkmark&\checkmark&&\checkmark&&{\bf 91.68} \\ \hline
\end{tabular}
\end{table*}
We use the Fashion MNIST dataset as an example for the ablation study.
We show the averaged test classification accuracy (ACC) results for the
Fashion MNIST dataset under different settings in Table
\ref{table:Ablation}. We can reach a classification accuracy of 91.30\%
with the default setting (see the fourth row). It is obtained by
concatenating image representations from all four PixelHop units, using
mean-pooling to reduce the spatial dimension of attribute vectors,
label-assisted regression (LAG) and the SVM classifier. Furthermore, we
can boost the classification accuracy by adopting three pooling schemes
({\em i.e.} max-, mean- and min-pooling) together (see the eighth row).
This is called PixelHop$^+$.
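A minimal sketch of the two aggregation schemes is given below; it assumes
NumPy, and the array name and function names are illustrative placeholders
rather than part of a reference implementation.
\begin{verbatim}
# Minimal sketch of the spatial aggregation in module #2 (assumes NumPy).
# `resp' holds PixelHop responses of shape (N, H, W, K); pooling acts on
# nonoverlapping s x s patches, with s = 4 or 2 depending on the unit.
import numpy as np

def pool(resp, s, mode='mean'):
    N, H, W, K = resp.shape
    patches = resp.reshape(N, H // s, s, W // s, s, K)
    reduce = {'mean': patches.mean, 'max': patches.max, 'min': patches.min}
    return reduce[mode](axis=(2, 4))

def aggregate_pixelhop(resp, s):       # PixelHop: mean-pooling only
    return pool(resp, s, 'mean')

def aggregate_pixelhop_plus(resp, s):  # PixelHop+: stack all three schemes
    return np.concatenate([pool(resp, s, m) for m in ('mean', 'max', 'min')],
                          axis=-1)
\end{verbatim}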
We compare the classification performance using the output of an
individual PixelHop unit, PixelHop and PixelHop$^+$ in Table
\ref{table:r_aggregation}. We see from the table clear advantages of
aggregating features across all PixelHop units.
\begin{table*}[h!]
\normalsize
\centering
\caption{Comparison of the classification accuracy (\%) using features
from an individual PixelHop unit, PixelHop and PixelHop$^+$ for MNIST,
Fashion MNIST and CIFAR-10.} \label{table:r_aggregation}
\begin{tabular}{ccccccc} \hline
Dataset& HOP-1 & HOP-2 & HOP-3 & HOP-4 & PixelHop & PixelHop$^+$ \\ \hline
MNIST &97.00 & 98.35 & 98.45 &98.71 & 98.90 &{\bf 99.09} \\ \hline
Fashion MNIST & 87.38& 89.35 & 89.96 &89.88 & 91.30 &{\bf 91.68} \\ \hline
CIFAR-10 &52.27 & 67.86 & 69.08 &67.91 & 71.37 &{\bf 72.66} \\ \hline
\end{tabular}
\end{table*}
\noindent
{\bf Number of Saab AC filters.} We study the relationship between the
classification performance and the energy preservation ratio of the Saab
AC filters in Fig. \ref{fig:r_energy}, where the $x$-axis indicates the
cumulative energy ratio preserved by AC filters. Although preserving
more AC energy can improve the classification performance, the rate of
improvement is slow, and it comes at the price of adding more AC
filters. The corresponding AC filter numbers at each PixelHop
unit at each energy threshold value are listed in Fig.
\ref{fig:r_energy} to illustrate the performance-complexity tradeoff.
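The selection rule itself is simple; the following minimal sketch (our
actual implementation may differ in details) picks the smallest number of
AC filters whose cumulative eigenvalue energy reaches the threshold.
\begin{verbatim}
# Minimal sketch of the AC filter selection (assumes NumPy).  `X' collects
# flattened neighborhood attribute vectors, one per row.
import numpy as np

def num_ac_filters(X, energy_ratio=0.97):
    X_ac = X - X.mean(axis=1, keepdims=True)   # remove the DC component
    eigvals = np.linalg.eigvalsh(np.cov(X_ac, rowvar=False))[::-1]
    cum = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cum, energy_ratio) + 1)
\end{verbatim}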
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.45\linewidth]{figure/energy_th_fmnist.png}
\end{center}
\caption{The classification accuracy as a function of the total energy
preserved by AC filters tested on Fashion MNIST, where the corresponding
AC filter numbers at each PixelHop unit are also listed to illustrate the
performance-complexity tradeoff.} \label{fig:r_energy}
\end{figure}
\begin{table*}[h!]
\footnotesize
\centering
\caption{The confusion matrix for the MNIST dataset, where the first row shows
the predicted object labels and the first column shows the true object labels.} \label{table:cm_mnist}
\begin{tabular}{ccccccccccc} \hline
& {\bf 0} & {\bf 1} & {\bf 2} & {\bf 3} & {\bf 4} & {\bf 5} & {\bf 6} & {\bf 7} & {\bf 8} & {\bf 9} \\ \hline
{\bf 0} &0.996& 0.000& 0.000& 0.000& 0.001& 0.000& 0.001& 0.001& 0.001& 0.000 \\ \hline
{\bf 1} &0.000& 0.997& 0.001& 0.000& 0.000& 0.001& 0.001& 0.000& 0.000& 0.000 \\ \hline
{\bf 2} &0.000& 0.002& 0.992& 0.000& 0.000& 0.000& 0.000& 0.003& 0.002& 0.001 \\ \hline
{\bf 3} &0.000& 0.000& 0.002& 0.995& 0.000& 0.000& 0.000& 0.001& 0.002& 0.000 \\ \hline
{\bf 4} &0.000& 0.000& 0.003& 0.000& 0.991& 0.000& 0.001& 0.000& 0.001& 0.004 \\ \hline
{\bf 5} &0.001& 0.000& 0.000& 0.000& 0.000& 0.998& 0.001& 0.000& 0.000& 0.000 \\ \hline
{\bf 6} &0.003& 0.002& 0.000& 0.000& 0.001& 0.004& 0.987& 0.000& 0.002& 0.000 \\ \hline
{\bf 7} &0.000& 0.002& 0.008& 0.001& 0.000& 0.000& 0.000& 0.986& 0.001& 0.002 \\ \hline
{\bf 8} &0.003& 0.000& 0.004& 0.001& 0.001& 0.000& 0.000& 0.002& 0.986& 0.003 \\ \hline
{\bf 9} &0.001& 0.002& 0.003& 0.002& 0.006& 0.001& 0.000& 0.003& 0.002& 0.980 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[h!]
\scriptsize
\centering
\caption{The confusion matrix for the Fashion MNIST dataset, where the
first row shows the predicted object labels and the first column shows
the true object labels.} \label{table:cm_fmnist}
\begin{tabular}{ccccccccccc} \hline
& T-shirt/top & Trouser& Pullover & Dress & Coat & Sandal & Shirt & Sneaker & Bag & Ankle boot \\ \hline
T-shirt/top & 0.883& 0.000& 0.015& 0.016& 0.005& 0.000& 0.072& 0.000& 0.009& 0.000 \\ \hline
Trouser & 0.001& 0.980& 0.000& 0.013& 0.002& 0.000& 0.002& 0.000& 0.002& 0.000 \\ \hline
Pullover & 0.015& 0.001& 0.877& 0.009& 0.053& 0.000& 0.044& 0.000& 0.001& 0.000 \\ \hline
Dress &0.017& 0.006& 0.010& 0.919& 0.023& 0.000& 0.022& 0.000& 0.003& 0.000 \\ \hline
Coat &0.000& 0.001& 0.056& 0.027& 0.866& 0.000& 0.050& 0.000& 0.000& 0.000 \\ \hline
Sandal &0.000& 0.000& 0.000& 0.000& 0.000& 0.979& 0.000& 0.016& 0.000& 0.005 \\ \hline
Shirt &0.110& 0.000& 0.048& 0.021& 0.072& 0.000& 0.742& 0.000& 0.007& 0.000 \\ \hline
Sneaker &0.000& 0.000& 0.000& 0.000& 0.000& 0.010& 0.000& 0.971& 0.000& 0.019 \\ \hline
Bag &0.003& 0.001& 0.003& 0.002& 0.002& 0.001& 0.001& 0.004& 0.983& 0.000 \\ \hline
Ankle boot &0.000& 0.000& 0.000& 0.000& 0.000& 0.005& 0.001& 0.026& 0.000& 0.968 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[h!]
\scriptsize
\centering
\caption{The confusion matrix for the CIFAR-10 dataset, where the first
row shows the predicted object labels and the first column shows the
true object labels.} \label{table:cm_cifar}
\begin{tabular}{ccccccccccc} \hline
& airplane & automobile& bird & cat & deer & dog & frog & horse & ship & truck \\ \hline
airplane & 0.783& 0.023& 0.034 &0.017& 0.014& 0.008& 0.012& 0.009& 0.067& 0.033 \\ \hline
automobile &0.029& 0.827& 0.010& 0.011& 0.001& 0.005& 0.007& 0.002& 0.023& 0.085 \\ \hline
bird &0.062& 0.006& 0.618& 0.064& 0.082& 0.071& 0.061& 0.016& 0.009& 0.011 \\ \hline
cat &0.023& 0.016& 0.071& 0.549& 0.061& 0.174& 0.056& 0.030& 0.008& 0.012 \\ \hline
deer &0.032& 0.003& 0.070& 0.062& 0.695& 0.031& 0.043& 0.051& 0.011& 0.002 \\ \hline
dog &0.011& 0.006& 0.059& 0.196& 0.049& 0.601& 0.026& 0.037& 0.009& 0.006 \\ \hline
frog &0.007& 0.005& 0.049& 0.059& 0.025& 0.027& 0.822& 0.002& 0.003& 0.001 \\ \hline
horse &0.023& 0.008& 0.033& 0.048& 0.052& 0.070& 0.007& 0.755& 0.001& 0.003 \\ \hline
ship &0.063& 0.042& 0.011& 0.017& 0.002& 0.006& 0.003& 0.006& 0.821& 0.029 \\ \hline
truck &0.036& 0.080& 0.010& 0.016& 0.008& 0.009& 0.005& 0.013& 0.028& 0.795 \\ \hline
\end{tabular}
\end{table*}
\subsection{Error Analysis, Color Spaces and Scalability}\label{subsec:error}
\noindent
{\bf Error Analysis.} We provide confusion matrices for MNIST, fashion
MNIST and CIFAR-10 in Table \ref{table:cm_mnist}, Table
\ref{table:cm_fmnist} and Table \ref{table:cm_cifar}, respectively.
Furthermore, we show some error cases in Fig. \ref{fig:error} and have
the following observations.
\begin{itemize}
\setlength{\itemsep}{-2pt}
\item For the MNIST dataset, the misclassified samples are truly
challenging. To handle these hard samples, we may need to turn to a
rule-based method. For example, humans often write ``4" in two strokes
and ``9" in one stroke. If we can identify the troke number from a
static image, we can use the information to make a better prediction.
\item For the Fashion MNIST dataset, we see that the ``shirt" class is
the most challenging one. As shown in Fig. \ref{fig:error}, the shirt
class is a general class that overlaps with the ``T-shirt/top", the ``pullover"
and the ``coat" classes. This is the main source of erroneous
classifications.
\item For the CIFAR-10 dataset, the ``dog" class can be confused with
the ``cat" class. As compared with other object classes, the ``dog" and
the ``cat" classes share more visual similarities, so more distinctive
features may be needed to differentiate them; the low image resolution
further contributes to the error. The ``ship" and the ``airplane"
classes form another confusing pair. The background is quite similar in
these two object classes, i.e., both contain the blue sky and the blue
ocean. It is a challenging task to recognize small objects in images of
poor resolution.
\end{itemize}
\begin{figure*}[!htbp]
\begin{center}
\includegraphics[width=0.98\textwidth]{figure/error.png}
\end{center}
\caption{Representative misclassified images in the three benchmarking
datasets, where the first three rows show erroneous predictions in MNIST
and the title above each sample indicates the ground truth and the
prediction. The fourth to the seventh rows show erroneous predictions of
the ``shirt" class and a confusing pair ``pullover vs. coat" in Fashion
MNIST, and the last four rows show two confusing pairs, ``dog vs. cat"
and ``ship vs. airplane", in CIFAR-10.}\label{fig:error}
\end{figure*}
\noindent
{\bf Different Color Spaces.} We report experimental results on CIFAR-10
with different color representations in Table \ref{table:r_color}. We
consider three color spaces - RGB, YCbCr, and Lab. The three color
channels are combined with different strategies: 1) three channels are
taken as the input jointly; 2) each channel is taken as the input
individually and all three channels are concatenated in module \#3; 3)
luminance and chrominance components are processed individually and
concatenated in module \#3. We see an advantage in processing one
luminance channel (L or Y) and two chrominance channels (CbCr or ab)
separately and then concatenating the extracted features at the classification
stage. This observation is consistent with our prior experience in color
image processing.
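A minimal sketch of strategy 3) in the Lab case is given below; it assumes
scikit-image for the color conversion, which is only one possible choice.
\begin{verbatim}
# Minimal sketch of strategy 3) for the Lab color space (assumes
# scikit-image).  One PixelHop pipeline processes the luminance channel
# and another the two chrominance channels; their features are then
# concatenated in module #3.
from skimage.color import rgb2lab

def split_lab(img_rgb):                 # img_rgb: H x W x 3 floats in [0, 1]
    lab = rgb2lab(img_rgb)
    return lab[..., :1], lab[..., 1:]   # L channel, (a, b) channels
\end{verbatim}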
\begin{table}[h!]
\normalsize
\centering
\caption{Comparison of classification accuracy (\%)
using different color representations on CIFAR-10.}
\label{table:r_color}
\begin{tabular}{ccccccc} \hline
& RGB &R,G,B & YCbCr & Y,CbCr & Lab & L,ab \\ \hline
Testing & 68.90& 69.96 &68.74& 71.05 &67.05 & {\bf 71.37}\\ \hline
Training &84.11 &85.06& 84.05 & 86.03 & 87.46 & {\bf87.65} \\ \hline
\end{tabular}
\end{table}
\noindent
{\bf Scalability under weak supervision.} Since PixelHop is a nonparametric learning
method, it is scalable to the number of training samples. In contrast,
LeNet-5 is a parametric learning method, and its model complexity is
fixed regardless of the training data number. We compare the
classification accuracies of LeNet-5 and PixelHop in Fig.
\ref{fig:r_Scalability}, where only 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of
the original training dataset are randomly selected as the training data
for MNIST, Fashion MNIST and CIFAR-10. After training, we apply the
learned systems to 10,000 testing data as usual. As shown in Fig.
\ref{fig:r_Scalability}, when the number of labeled training data is
reduced, the classification performance of LeNet-5 drops faster than
PixelHop. For the extreme case with 1/128 of the original training data
size (i.e., 460 training samples), PixelHop outperforms LeNet-5 by 1.3\%
and 13.4\% in testing accuracy for MNIST and Fashion MNIST,
respectively. Clearly, PixelHop is more scalable than LeNet-5 with
respect to smaller training data sizes. This could be explained by the
fast convergence property of the Saab AC filters as discussed in
Sec. \ref{subsec:Saab}.
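The subsampling step of this experiment is straightforward; a minimal
sketch is given below (assuming NumPy and reusing the arrays loaded in the
data-preparation sketch of Sec. \ref{subsec:setup}).
\begin{verbatim}
# Minimal sketch of the weak-supervision experiment (assumes NumPy;
# x_train/y_train are the full training arrays loaded earlier).
import numpy as np

rng = np.random.default_rng(0)
for frac in (1/4, 1/8, 1/16, 1/32, 1/64, 1/128):
    m = int(frac * len(x_train))
    idx = rng.choice(len(x_train), size=m, replace=False)
    x_sub, y_sub = x_train[idx], y_train[idx]
    # ... train PixelHop (or LeNet-5) on (x_sub, y_sub) and evaluate on
    # the full 10,000-image test set ...
\end{verbatim}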
\begin{figure}[!htbp]
\centering
\begin{subfigure}[MNIST]{0.45\textwidth}
\centering
{\includegraphics[width=1\textwidth]{figure/sm_mnist.png}}
\end{subfigure}
\begin{subfigure}[Fashion MNIST]{0.45\textwidth}
\centering
{\includegraphics[width=1\textwidth]{figure/sm_fmnist.png}}
\end{subfigure}
\begin{subfigure}[CIFAR-10]{0.45\textwidth}
\centering
{\includegraphics[width=1\textwidth]{figure/sm_cifar.png}}
\end{subfigure}
\caption{Comparison of testing accuracy (\%) of LeNet-5 and
PixelHop with different training sample numbers for MNIST,
Fashion MNIST and CIFAR-10.}\label{fig:r_Scalability}
\end{figure}
\subsection{Performance Benchmarking between PixelHop and LeNet-5}
\label{exp:comparison}
We compare the training complexity and the classification performance of
the LeNet-5 and the PixelHop method in this subsection. These two
machine learning models share similar model complexity. We use the Lab
color space to represent images in CIFAR-10 and build two PixelHop
pipelines for the luminance and the chrominance spaces in modules \#1
and \#2 separately. Then, they are integrated in module \#3 for final
decision. For the LeNet-5, we train all networks using TensorFlow
\cite{tensorflow2015-whitepaper} with 50 epochs and a batch size of 32.
The classic LeNet-5 architecture \cite{Lecun98gradient-basedlearning}
was designed for the MNIST dataset. We use it for the Fashion MNIST
dataset as well. We modify the network architecture slightly to handle
color images in the CIFAR-10 dataset. The parameters of the original and
the modified LeNet-5 are shown in Table \ref{table:mLeNet-5}. The
modified LeNet-5 was originally proposed in \cite{kuo2019interpretable}.
\begin{table}[htb]
\begin{center}
\footnotesize
\caption{Comparison of the original and the modified LeNet-5
architectures.}\label{table:mLeNet-5}
\begin{tabular}{ccc} \hline
Architecture & Original LeNet-5 & Modified LeNet-5 \\ \hline
1st Conv Layer Kernel Size & $5 \times 5 \times 1$ & $5 \times 5 \times 3$ \\ \hline
1st Conv Layer Kernel No. & $6$ & $32$ \\ \hline
2nd Conv Layer Kernel Size & $5 \times 5 \times 6$ & $5 \times 5 \times 32$ \\ \hline
2nd Conv Layer Kernel No. & $16$ & $64$ \\ \hline
1st FC Layer Filter No. & $120$ & $200$ \\ \hline
2nd FC Layer Filter No. & $84$ & $100$ \\ \hline
Output Node No. & $10$ & $10$ \\ \hline
\end{tabular}
\end{center}
\end{table}
We compare the classification performance of the PixelHop method with
LeNet-5 and the feedforward-designed CNN (FF-CNN)
\cite{kuo2019interpretable} for all three datasets in Table
\ref{table:accuracy_1}. The FF-CNN method shares the same network
architecture with LeNet-5. Yet, it determines the model parameters in
a one-pass feedforward manner. As shown in Table
\ref{table:accuracy_1}, FF-CNN performs the worst while PixelHop$^+$
performs the best in all datasets. The latter outperforms LeNet-5 by
0.05\%, 0.6\% and 3.94\% for MNIST, Fashion MNIST and CIFAR-10,
respectively. PixelHop also outperforms LeNet-5 for Fashion MNIST and
CIFAR-10.
Furthermore, we compare the training time of LeNet-5 and PixelHop for
all three datasets in Table \ref{table:time}. PixelHop takes less
training time than LeNet-5 for all three datasets on a CPU (Intel(R)
Xeon(R) E5-2620 v3 at 2.40GHz). Although these
comparisons are still preliminary, we do see that PixelHop can be
competitive in terms of classification accuracy and training complexity.
\begin{table}[h!]
\centering
\caption{Comparison of testing accuracy (\%) of LeNet-5,
feedforward-designed CNN (FF-CNN), PixelHop and PixelHop$^+$ for MNIST,
Fashion MNIST and CIFAR-10. } \label{table:accuracy_1}
\begin{tabular}{cccc} \hline
Method & MNIST & Fashion MNIST & CIFAR-10 \\ \hline
LeNet-5 & 99.04 & 91.08 & 68.72 \\ \hline
FF-CNN & 97.52 & 86.90 & 62.13 \\ \hline
PixelHop & 98.90 & 91.30 & 71.37 \\ \hline
PixelHop$^+$ &{\bf 99.09} & {\bf 91.68} & {\bf 72.66} \\ \hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{Comparison of training time of the LeNet-5 and the PixelHop method on
the MNIST, the Fashion MNIST and the CIFAR-10 datasets.} \label{table:time}
\begin{tabular}{cccc} \hline
Method& MNIST & Fashion MNIST & CIFAR-10 \\ \hline
LeNet-5 & $\sim$25 min & $\sim$25 min & $\sim$45 min \\ \hline
PixelHop & $\sim$15 min & $\sim$15 min & $\sim$30 min \\ \hline
\end{tabular} \\
\end{table}
\section{Discussion}\label{sec:discussion}
In this section, we first summarize the key ingredients of SSL in Sec.
\ref{subsec:SSL}. Then, we compare SSL and DL extensively in Sec.
\ref{subsec:comparison} to provide further
insights into the potential of SSL.
\subsection{Successive Subspace Learning (SSL)}\label{subsec:SSL}
The PixelHop method presented in Sec. \ref{sec:pixelhop} offers a
concrete example of SSL. Another design
based on the SSL principle, called the PointHop method, was proposed in
\cite{zhang2019pointhop}. It is worthwhile to obtain a high-level
abstraction for these methods. Generally speaking, an SSL system
contains four ingredients:
\begin{enumerate}
\setlength{\itemsep}{-2pt}
\item successive near-to-far neighborhood expansion in multiple stages;
\item unsupervised dimension reduction via subspace approximation at each stage;
\item supervised dimension reduction via label-assisted regression (LAG); and
\item feature concatenation and decision making.
\end{enumerate}
For the first ingredient, we compute the attributes of local-to-global
neighborhoods of selected pixels in multiple stages successively. The main
advantage of this design is that the attributes of near and far
neighboring pixels can be propagated to the target pixel through local
communication. The attributes can be gray-scale pixel values (for
gray-scale images), the RGB pixel values (for color images), position
coordinates (for point cloud data sets), etc. The attributes of near
neighbors can be propagated in one hop while those of far neighbors can
be propagated through multiple hops as well as through spatial pooling.
For this reason, it is called the PixelHop (or PointHop) method. As the
hop number becomes larger, we examine a larger neighborhood. Yet, the
attribute dimension of a neighborhood will grow rapidly since it is
proportional to the number of pixels in the neighborhood.
To control the growth of dimensions without sacrificing much
representation accuracy, we need to find an approximate subspace in
the second ingredient. Specifically, we exploit statistical correlations
between attribute vectors associated with neighborhoods. PCA is a
one-stage subspace approximation technique. When we consider successive
subspace approximation in the SSL context, we adopt the Saab or the Saak
transform to eliminate the sign confusion problem. This topic was
discussed in \cite{kuo2016understanding}, \cite{kuo2018data},
\cite{kuo2019interpretable}. PCA and Saab/Saak transforms are
unsupervised dimension reduction techniques.
In earlier (or later) stages, the neighborhood size is smaller (or
larger), the attribute vector dimension is smaller (or larger), and the
number of independent neighborhoods is larger (or smaller). All of them
can contribute to classification accuracy. For an object class, the
attributes of its near and far neighbors follow a certain distribution.
We use the label information at all stages to achieve further dimension
reduction. The label-assisted regression (LAG) unit was developed in
Sec. \ref{subsec:clf} for this purpose. This corresponds to the third
ingredient.
For the last ingredient, dimension-reduced attributes from all stages
are concatenated to form the ultimate feature vector and a multi-class
classifier is trained for the classification task.
\begin{table*}[htb]
\centering
\caption{Similarities of SSL and DL.} \label{table:similarities}
\begin{tabular}{ccc} \hline
& SSL & DL \\ \hline
Attributes collection & Successively growing neighborhoods & Gradually enlarged receptive fields \\ \hline
Attributes processing & Trade spatial for spectral dimensions & Trade spatial for spectral dimensions \\ \hline
Spatial dim. reduction & Spatial pooling & Spatial pooling \\ \hline
\end{tabular}
\end{table*}
\subsection{Comparison of DL and SSL}\label{subsec:comparison}
SSL and DL have some high-level concepts in common, yet they are
fundamentally different in their models, training processes and training
complexities. We list similarities of SSL and DL in Table
\ref{table:similarities}.
Similarities reside in the high-level principle. Both collect
attributes in the pixel domain by employing successively growing
neighborhoods. Both trade spatial-domain patterns for spectral
components using convolutional filters. As the neighborhood becomes
larger, the dimension of spectral components becomes larger. Due to
significant neighborhood overlapping, there exists strong redundancy
between neighborhoods of adjacent pixels. Spatial pooling is adopted to
reduce such redundancy.
\begin{table*}[htb]
\centering
\caption{Differences of SSL and DL.} \label{table:differences}
\begin{tabular}{ccc}\hline
& SSL & DL \\ \hline
Model expandability & Non-parametric model & Parametric model \\ \hline
Incremental learning & Easy & Difficult \\ \hline
Model architecture & Flexible & Networks \\ \hline
Model interpretability & Easy & Difficult \\ \hline
Model parameter search & Feedforward design & Backpropagation \\ \hline
Training/testing complexity & Low & High \\ \hline
Spectral dim. reduction & Subspace approximation & Number of filters \\ \hline
Task-independent features & Yes & No \\ \hline
Multi-tasking & Easy & Difficult \\ \hline
Incorporation of priors and constraints & Easy & Difficult \\ \hline
Weak supervision & Easy & Difficult \\ \hline
Adversarial Attacks & Difficult & Easy \\ \hline
\end{tabular}
\end{table*}
Next, we show differences between SSL and DL in Table \ref{table:differences}
and elaborate them below.
\begin{itemize}
\setlength{\itemsep}{-2pt}
\item Model expandability \\
DL is a parametric learning method. One selects a fixed network
architecture to begin with. The superior performance of DL is
attributed to a very large model size, where the number of model
parameters is typically larger than the number of training samples,
leading to an over-parameterized network. This could be a waste of
resources. Conversely, traditional parametric learning methods may not
have enough model parameters to deal with larger and more diverse
datasets. SSL adopts a non-parametric model. It is flexible
to add and/or delete filters at various units depending on the size of
the input dataset. SSL can handle small and large datasets using an
expandable model. This is especially attractive for edge computing. Its
model complexity can be adjusted flexibly based on hardware constraints
with graceful performance tradeoff.
\item Incremental learning \\
It is challenging to adapt a trained DL model to new data classes and
samples. Since SSL employs a non-parametric model, we can check whether
existing Saab filters can express the new data well. If not, we can add
more Saab filters in the unsupervised dimension reduction part. Furthermore,
we can expand the regression matrix to accommodate new classes.
\item Model architecture \\
DL demands a network architecture that has an end node at which a cost
function has to be defined. This is essential to allow BP to train the
network parameters. In contrast, the architecture of an SSL design is
more flexible. We can extract rich features from processing units in
multiple stages. Furthermore, we can conduct ensemble learning on these
features.
\item Model interpretability \\
The DL model is a black-box tool. Many of its properties are not well
understood. The SSL model is a white-box, which is mathematically
transparent.
\item Model parameter search \\
DL determines model parameters using an end-to-end optimization
approach, which is implemented by BP. SSL adopts unsupervised and
supervised dimension reduction techniques to zoom into an effective
subspace for feature extraction. The whole pipeline is conducted in a
one-pass feedforward fashion.
\item Training and testing complexity \\
DL demands a lot of computing resources in model training due to BP. As
the number of layers goes extremely deep (say, 100 or 150 layers),
the inference can be very expensive as well. The training complexity of SSL
is significantly lower since it is a one-pass feedforward design. Its
testing complexity is determined by the stage number. If the number of
stages is small, inference can be done effectively.
\item Spectral dimension reduction \\
Although DL and SSL both use convolutional operations, they have
different meanings. Convolutions in DL are used to transform one
representation to another aiming at end-to-end optimization of the
selected cost function for the network. Convolutions in SSL are used to
find projections onto principal components of the subspace.
\item Task-independent features \\
DL uses both input images and output labels to determine system
parameters. The derived features are task dependent. SSL contains two
feature types: task-independent features and task-dependent features.
The features obtained by unsupervised dimension reduction are
task-independent while those obtained by supervised dimension reduction
are task-dependent.
\item Multi-tasking \\
DL can integrate the cost functions of multiple tasks and define a new
joint cost function. This joint cost function may not be optimal with
respect to each individual task. SSL can obtain a set of
task-independent features and feed them into different LAG units and
different classifiers to realize multi-tasking.
\item Incorporation of priors and constraints \\
DL may add new terms to the original cost function, which corresponds to
priors and constraints. The impact of the modified cost function on the
learning system is implicit and indirect. SSL can use priors and
constraints to prune attributes of small and large neighborhoods
directly before they are fed into the classifier.
\item Weak supervision \\
A large number of labeled data are needed to train DL models. Data
augmentation is often used to create more training samples. It was shown
in Sec. \ref{sec:experiments} that SSL outperforms DL in the weak
supervision case. This could be attributed to the fact that the unsupervised
dimension reduction process in successive PixelHop units does not demand
labels. Labels are only needed in the LAG units and the training of a
classifier. Besides, we may adopt a smaller SSL model in the beginning.
Then, we can grow the model size by adding more confident test samples
to the training dataset.
\item Adversarial attacks \\
It is well known that one can exploit the DL network model to find a
path from the output decision space to the input data space. Then, a
decision outcome can be changed by adding small perturbations to the
input. The perturbation can be so small that humans may not be able to
perceive it. As a result, two almost identical images will result in different
predictions. This is one major weakness of DL networks. In SSL, we
expect that weak perturbations can be easily filtered out by PCA, making it
challenging for attackers to conduct similar attacks.
\end{itemize}
\section{Conclusion and Future Work}\label{sec:conclusion}
A Successive Subspace Learning (SSL) methodology was introduced and the
PixelHop method was proposed in this work. In contrast with traditional
subspace methods, SSL examines the near- and far-neighborhoods of a set
of selected pixels. It uses the training data to learn three sets of
parameters: 1) Saab filters for unsupervised dimension reduction in the
PixelHop unit, 2) regression matrices for supervised dimension reduction
in the LAG unit, and 3) parameters required by the classifier. Extensive
experiments were conducted on MNIST, Fashion MNIST and CIFAR-10 to
demonstrate the superior performance of the PixelHop method in terms of
classification accuracy and training complexity.
SSL is still in its infancy. There exist rich opportunities for further
development and extension. A couple of them are mentioned below. First,
generative adversarial networks (GAN) have been developed as generative
models, and they find applications in style transfer, domain adaptation,
data augmentation, etc. It seems feasible to develop an SSL-based
generative model. That is, we need to ensure that attribute vectors
associated with neighborhoods of various sizes of the source-domain and
target-domain images share the same distribution. Second, we would like
to investigate SSL-based contour/edge detection and image segmentation
techniques. Historically, contour/edge detection and image segmentation
played an important role in low-level computer vision. Their importance
has declined recently due to the flourishing of DL. Yet, any feedforward computer
vision pipeline should benefit from these basic operations. Based on
this foundation, we can tackle object detection and object
recognition problems using the SSL framework.
\section*{Acknowledgement}\label{sec:acknowledgement}
This research is supported in part by DARPA and Air Force Research
Laboratory (AFRL) under agreement number FA8750-16-2-0173 and in part by
the U.S. Army Research Laboratory's External Collaboration Initiative
(ECI) of the Director's Research Initiative (DRIA) program. The U.S.
Government is authorized to reproduce and distribute reprints for
Governmental purposes notwithstanding any copyright notation hereon.
The views and conclusions contained in this document are those of the
authors and should not be interpreted as necessarily representing the
official policies or endorsements, either expressed or implied, of
DARPA, the Air Force Research Laboratory (AFRL), the U.S. Army Research
Laboratory (ARL) or the U.S. Government.
\bibliographystyle{unsrt}
\section{Introduction} \label{sec:intr}
Triangle-free strongly regular graphs (TFSR graphs), sometimes also called
SRNT graphs (for strongly regular no triangles), are fascinating objects in algebraic
combinatorics. Except for the trivial bipartite series, there are only seven
such graphs known (see e.g. \cite{God}). At the same time, the existing
feasibility conditions still leave out many possibilities. For example, there
are still 66 prospective values of parameters with $\lambda_1\leq 10$, where
$\lambda_1$ is the second largest eigenvalue of $G$ \cite[Tables 1,2]{Big};
the most prominent of them probably being the hypothetical Moore graph of
degree 57. This situation is in sharp contrast with general strongly regular
graphs (or, for that matter. with finite simple groups) where non-trivial
infinite series are abundant, see e.g. \cite[Chapter 10]{GoR}.
Somewhat superficially, the methods employed for studying (triangle-free)
strongly regular graphs can be categorized in ``combinatorial'' and
``arithmetic/algebraic'' methods. The latter are based upon spectral
properties of $G$ or modular counting. The former are to a large extent based
on calculating various quantities (that we will highly prefer to normalize in
such a way that they become densities in $[0,1]$), and these calculations
look remarkably similar to those used in asymptotic extremal combinatorics,
particularly in the proofs based on flag algebras. The unspoken purpose of
this paper is to highlight and distill these connections between the two
areas. To that end, we introduce and study a natural extremal problem
corresponding to strong regularity.
Before going into some technical details, it might be helpful to digress on
the apparent contradiction of studying highly symmetric and inherently finite
objects with methods that are quite analytical and continuous in their
nature. The key to resolving this is the simple observation that has been
used in extremal combinatorics many times: any finite graph (or, for that
matter, more complicated combinatorial object) can be alternately viewed as
an analytical object called its {\em stepfunction graphon} \cite[\S
7.1]{Lov4} or, in other words, {\em infinite blow-up}. It is obtained by
replacing every vertex with a measurable set of appropriate measure. To this
object we can already apply all methods based on density calculations, and
the conversion of the results back to the finite world is straightforward.
\bigskip
Let us now fix some notation. All graphs $G$ in this paper are simple and, unless otherwise noted, triangle-free. By $n=n(G)$ we always denote the number of vertices, and let
$$
\rho=\rho(G) \stackrel{\rm def}{=} \frac{2|E(G)|}{n(G)^2}
$$
be the edge density of $G$. Note that the normalizing factor here is
$\frac{n^2}2$, not ${n\choose 2}$: the previous paragraph provides a good
clue as why this is much more natural choice. A {\em $\rho$-regular graph} is
a regular graph $G$ with $\rho(G)=\rho$. We let
$$
a(G)\stackrel{\rm def}{=} \min_{(u,v)\not\in E(G)} \frac{|N_G(u)\cap N_G(v)|}{n(G)},
$$
where $N_G(v)$ is the vertex neighbourhood of $v$. For a rational number $\rho\in [0,1/2]$, we let
\begin{equation} \label{eq:a_rho}
a(\rho) \stackrel{\rm def}{=} \max\set{a(G)}{G\ \text{is a triangle-free}\ \rho\text{-regular graph}}
\end{equation}
Our goal is to give upper bounds on $a(\rho)$.
\begin{remark} \label{rem:maxvssup}
We stress that we do have here maximum, not just supremum, this will be
proven below (see Corollary \ref{cor:finiteness}). In particular, $a(\rho)$
is also rational. Another finiteness result (Corollary \ref{cor:finiteness2})
says that for every $\epsilon>0$ there exist only finitely many rationals
$\rho$ with $a(\rho)\geq\epsilon$. While this result is of somewhat
existential nature (the bound is double exponential in $1/\epsilon$), it
demonstrates, somewhat surprisingly, that our relaxed version of strong
regularity still implies at least some rigidity properties that might be
expected from much more symmetric structures in algebraic combinatorics.
\end{remark}
\begin{remark} \label{rem:no_graphons} The definition of $a(G)$ readily extends to graphons,
and it is natural to ask whether this would allow us to extend the definition
of $a(\rho)$ to irrational $\rho$ or at least come up with interesting
constructions beyond finite graphs: such constructions are definitely not
unheard of in the extremal combinatorics. Somewhat surprisingly (again), the
answer to both questions is negative. Namely, we have the dichotomy: every
triangle-free graphon $W$ (we do not even need regularity here) is either a
finite stepfunction of a finite vertex-weighted graph or satisfies $a(W)=0$
(Theorem \ref{thm:no_graphon}).
\end{remark}
\begin{remark} Every TFSR graph $G$ with parameters $(n,k,c)$, where $k$ is the
degree and $c$ is the size of common neighbourhoods of non-adjacent vertices
leads to the lower bound $a(k/n)\geq c/n$. Thus, optimistically, one could view upper bounding the function $a(\rho)$ as an approach to finding more feasibility conditions for TFSR graphs based on entirely combinatorial methods. This hope is somewhat supported by the fact that our bound is tight for the values corresponding to four (out of seven) known TSFR graphs, as well as an infinite sequence of values not ruled out by other conditions.
\end{remark}
\begin{remark} \label{rem:large_density}
As we will see below, in the definition \eqref{eq:a_rho} we can replace
ordinary $\rho$-regular triangle-free graphs with weighted twin-free
$\rho$-regular triangle-free graphs that can be additionally assumed to be
maximal. A complete description of such graphs with $\rho>1/3$ was obtained
in \cite{BrT}. Along with very simple Lemma \ref{lem:linear} below, this
allows us to completely compute the value of $a(\rho)$ for $\rho>1/3$ and, in
particular, determine those values of $\rho$ for which $a(\rho)> 0$. Using
relatively simple methods from Section \ref{sec:combinatorial}, we can prove
the bounds $a(\rho)\leq \frac{\rho}3\ (1/3\leq\rho\leq 3/8)$, $a(\rho)\leq
3\rho-1\ (3/8\leq \rho\leq 2/5)$ and $a(\rho)=0\ (2/5<\rho<1/2)$. But since
they are significantly inferior (that is, for $\rho<2/5$) to those that
follow from \cite{BrT}, we will save space and {\bf in the rest of the paper
focus on the range $\rho\leq 1/3$}.
\end{remark}
Our main result is shown on Figure \ref{fig:main}.
\begin{figure}[tb]
\begin{center}
\epsfig{file=main.eps,width=10cm}\\ \vspace{1cm} \epsfig{file=krein.eps,width=5cm} \hspace{3cm} \epsfig{file=dense.eps,width=5cm}
\vspace{1cm}
\caption{The main result \label{fig:main}}
\end{center}
\end{figure}
The analytical expressions for our upper bound $a_0(\rho)$ will be given in
Theorem \ref{thm:main}; for now let us briefly comment on a few features of
Figure \ref{fig:main}.
\begin{remark}
The bound is tight for the values $\rho=\frac{11}{50}, \frac 3{10},
\frac 5{16}$ corresponding to Higman-Sims, Petersen and Clebsch,
respectively. It is piecewise linear for $\rho\geq 9/32$ and involves three
algebraic functions of degree $\leq 4$ when $\rho\leq 9/32$.
\end{remark}
\clearpage
\begin{remark} \label{rem:krein}
Let us explain the reasons for using the term ``Krein bound''. It may not be seen well
on Figure \ref{fig:main} but this curve has a singular point at
\begin{equation}\label{eq:rho0}
\rho_0\stackrel{\rm def}{=} \frac 3{98}(10-\sqrt{2})\approx 0.263.
\end{equation}
For $\rho\geq \rho_0$, $a_0(\rho)$ is a solution to a polynomial equation
$g_K(\rho,a) =0$ that is most likely an artifact of the proof method (and it
gets superseded at $\rho\approx 0.271$ by other methods anyway). The bound
for $\rho\leq \rho_0$ is more interesting.
Recall
(see e.g. \cite[Chapter 10.7]{GoR}) that the Krein parameters $K_1,K_2$
provide powerful constraints $K_1\geq 0,\ K_2\geq 0$ on the existence
of strongly regular graphs, and in the special
case of triangle-free graphs we are interested in this paper they can
be significantly simplified \cite{Big}.
Now, $K_1,K_2$ are rational functions of $k,c$ {\em and} non-trivial
eigenvalues $\lambda_1,\lambda_2$ of the adjacency matrix. As
such, when written as functions of $k,c$, they become (conjugate) algebraic
quadratic functions and thus do not seem to possess any obvious
combinatorial meaning. Their {\em product}, however, is the {\em
rational} function in $k,c$:
\begin{equation} \label{eq:K1K2}
K_1K_2=(k-1)(k-c)(k^2-k(3c+1)-c^3+4c^2-c)\geq 0
\end{equation}
Re-writing the non-trivial term here in the variables $\rho=k/n,\
a=c/n$ (and recalling that $n=1+ \frac{k(k-1+c)}c$), we will get a
constraint $f_K(\rho,a)\geq 0$ that holds for all TFSR graphs. What we
prove with purely combinatorial methods is that for $\rho\leq\rho_0$ (and
let us remark that all hypothetical TFSR graphs are confined to that
region) this inequality holds in a much less rigid setting.
\smallskip
As a side remark on heuristics, this bound was discovered by flag-algebraic
computer experiments with particular values of $\rho$ corresponding to
potential TFSR graphs from \cite[Tables 1,2]{Big}. The result turned out
to be tight precisely for those values for which
$c=\lambda_1(\lambda_1-1)$, which is equivalent to $K_2=0$. The connection
to Krein parameters and, as a consequence, the hypothesis $f_K(\rho,a)\geq 0$
suggested itself immediately.
\end{remark}
\section{Preliminaries} \label{sec:prel}
We utilize all notation introduced in the previous section. In particular,
all graphs $G=(V(G),E(G))$ are simple and, unless otherwise noted,
triangle-free, and $n=n(G)$ is the number of vertices.
Let us now remind some rudimentary notions from the language of flag algebras
(see \cite[\S 2.1]{flag}) restricted to graphs. A {\em type} $\sigma$ is
simply a totally labelled graph, that is a graph on the vertex set
$[k]\stackrel{\rm def}{=}\{1,2,\ldots,k\}$ for some $k$ called the {\em size} of $\sigma$.
Figure \ref{fig:types} shows all types used in this paper, including the
trivial type 0 of size 0.
\begin{figure}[ht]
\input{types.eepic}
\caption{\label{fig:types} Types}
\end{figure}
A {\em flag} is a graph partially labelled by labels from $[k]$ for some
$k\geq 0$. Every flag $F$ belongs to the unique type obtained by removing all
unlabelled vertices. Figure \ref{fig:flags} lists all flags we need in this
paper.
\begin{figure}[ht]
\input{flags.eepic}
\caption{\label{fig:flags} Flags}
\end{figure}
Mnemonic rules used in this notation are reasonably consistent: the
subscript, when present, normally denotes the overall number of vertices in
the flag. The first part of the superscript denotes the type of the flag. The
remaining part, when present, helps to identify the flag in case of
ambiguity. For example, there is only one flag $P_3^N$ based on the path of
length 2 and the type $N$. There are, however, two flags based on its
complement $\bar P_3$, and $\bar P_3^{N.c}$ [$\bar P_3^{N.b}$] is the flag in
which the first labelled vertex is the central [border, respectively] vertex
in $\bar P_3$.
\clearpage
Also, for $S\subseteq [3]$ we denote by $F^{\mathcal I}_S$ the flag with 3
labelled independent vertices and one unlabelled vertex connected to the
vertices from $S$. Thus, $S_4^{\mathcal I} = F^{\mathcal I}_{\{1,2,3\}}$ and
$T_4^{\mathcal I} = F^{\mathcal I}_{\{3\}}$.
\medskip
Let $F$ be a flag of type $\sigma$ with $k$ labelled vertices and $\ell-k$
unlabelled ones, and $v_1,\ldots,v_k$ be ({\em not} necessarily distinct)
vertices in the target graph $G$ that span the type $\sigma$, that is
$(v_i,v_j)\in E(G)$ if and only if $(i,j)\in E(\sigma)$. Then we let
$F(v_1,\ldots,v_k)$ be the probability that after picking
$\boldsymbol{w_{k+1}},\ldots, \boldsymbol{w_\ell}\in V(G)$ independently at random, the
$\sigma$-flag induced in $G$ by $v_1,\ldots,v_k,\boldsymbol{w_{k+1}},\ldots,
\boldsymbol{w_\ell}$ is isomorphic (in the label-preserving way) to $F$. We stress
that $\boldsymbol{w_{k+1}},\ldots,\boldsymbol{w_\ell}$ are chosen completely independently at
random; in particular some or all of them may be among $\{v_1,\ldots,v_k\}$.
When this happens, we treat colliding vertices as non-adjacent twins.
We will also need some basic operations on flags (multiplication, evaluation
and lifting operators, to be exact) but since they will not be needed until
Section \ref{sec:analytical}, we defer it until then.
In this notation $\rho=\frac{2|E(G)|}{n^2}$ is the edge density,
$e(v)=\frac{|N_G(v)|}{n}$ is the relative degree of $v$ and $P_3^N(u,v) =
\frac{|N_G(u)\cap N_G(v)|}{n^2}$ is the relative size of the common
neighbourhood of $u$ and $v$. A graph $G$ is {\em $\rho$-regular} if
$e(v)\equiv \rho$. Etc.
\smallskip
{\bf Warning.} When evaluating [the density of] say $C_4$, we must take into
account not only induced copies, but also contributions made by paths $P_3$
(one collapsing diagonal) and even by edges (both diagonals collapsing).
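To make this sampling convention concrete, the following minimal Monte
Carlo sketch (assuming NumPy; {\tt adj} is a 0/1 adjacency matrix with zero
diagonal) estimates the $C_4$ density under exactly these semantics.
\begin{verbatim}
# Monte Carlo sketch of the flag density semantics: vertices are drawn
# independently with replacement, and colliding vertices behave as
# non-adjacent twins (with a zero diagonal this happens automatically,
# since equal labels give equal rows and no edge between them).
import numpy as np

def c4_density(adj, trials=200000, seed=0):
    rng = np.random.default_rng(seed)
    n, hits = adj.shape[0], 0
    for _ in range(trials):
        v = rng.integers(0, n, size=4)
        a = adj[np.ix_(v, v)]
        # A labelled 4-tuple spans C_4 iff every label has degree 2;
        # this automatically includes the P_3 and edge degenerations.
        if (a.sum(axis=1) == 2).all():
            hits += 1
    return hits / trials
\end{verbatim}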
\smallskip
We let
$$
a(G) \stackrel{\rm def}{=} \min_{(u,v)\not\in E(G)} P_3^N(u,v)
$$
and, for a rational $\rho\in [0,1/2]$, we also let
$$
a(\rho)\stackrel{\rm def}{=}\max \set{a(G)}{G\ \text{is a triangle-free $\rho$-regular graph}}
$$
(we will prove below that the maximum value here is actually attained).
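For a quick sanity check of these definitions (a minimal sketch assuming
NetworkX), the Petersen graph is $3/10$-regular and every pair of
non-adjacent vertices has exactly one common neighbour, witnessing
$a(3/10)\geq 1/10$.
\begin{verbatim}
# Sanity check of the definitions on the Petersen graph (assumes NetworkX).
import networkx as nx

G = nx.petersen_graph()
n = G.number_of_nodes()
rho = 2 * G.number_of_edges() / n**2        # = 3/10
a = min(len(set(G[u]) & set(G[v])) / n      # = 1/10
        for u in G for v in G
        if u != v and not G.has_edge(u, v))
print(rho, a)
\end{verbatim}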
\section{The statement of the main result}
Many of our statements and proofs, particularly for small values of $\rho$, involve rather cumbersome computations.
A Maple worksheet with supporting evidence can be found at {\tt http://people.cs.uchicago.edu/\~{}razborov/files/tfsr.mw}.
Let\footnote{This is the non-trivial factor in \eqref{eq:K1K2} re-written in terms of $\rho,a$}
$$
f_K(\rho,a) \stackrel{\rm def}{=} a^3+(3\rho-4)a^2 + (5\rho-1)a -4\rho^3+\rho^2.
$$
Then
$$
f_K(\rho,\rho^2) =\rho^3(\rho^3+3\rho^2-4\rho+1)>0
$$
(since $\rho\leq 1/3$) while
$$
f_K\of{\rho,\frac{\rho^2}{1-\rho}} = -\frac{\rho^5(1-2\rho)}{(1-\rho)^3}<0.
$$
Let $\mathsf{Krein}(\rho)$ be the largest (actually, the only) root of the
cubic polynomial equation $f_K(\rho,z)=0$ in the interval\footnote{The left
end of this interval is determined entirely by convenience, but the right end
represents a trivial upper bound on $a(\rho)$ resulting from double counting
copies of $C_4$. See the calculation after \eqref{eq:trivial_bound} for more
details.} $z\in \left[ \rho^2, \frac{\rho^2}{1-\rho}\right]$.
Next, let
\begin{eqnarray*}
&& g_K(\rho,a) \stackrel{\rm def}{=} a^4+a^3((4\sqrt 2-8)\rho +7-4\sqrt 2) + a^2\rho((6-4\sqrt 2)\rho +8\sqrt 2 -13)\\
&& \hspace{\longeqskiplength} +a\rho(\rho^2 +(15-10\sqrt 2)\rho+2\sqrt 2-3)+\rho^3((8\sqrt 2-12)\rho+3-2\sqrt 2)
\end{eqnarray*}
(the meaning of this expression might become clearer in Section \ref{sec:krein}). We again have $g_K(\rho,\rho^2)>0$,
\begin{equation} \label{eq:g_K}
g_K\of{\rho,\frac{\rho^2}{1-\rho}} =-\frac{\rho^7(1-2\rho)}{(1-\rho)^4}<0,
\end{equation}
and we define $\widehat{\mathsf{Krein}}(\rho)$ as the largest (unique) root of the equation $g_K(\rho,z)=0$ in the interval $z\in \left[\rho^2, \frac{\rho^2}{1-\rho}\right]$.
We note that $\mathsf{Krein}(\rho_0)= \widehat{\mathsf{Krein}}(\rho_0) =
\frac{\rho_0}{3}$ (recall that $\rho_0$ is given by \eqref{eq:rho0}), and
that they have the same first derivative at $\rho=\rho_0$ as well. It should
also be noted that $\widehat{\mathsf{Krein}}(\rho)\geq \mathsf{Krein}(\rho)$
and that they are very close to each other. For example, let
$$
\rho_1\approx 0.271
$$
be the appropriate root of the equation $g_K(\rho,\frac{1-3\rho}2)=0$; this
is the point at which Krein bounds yield to more combinatorial methods, see
Figure \ref{fig:main}. Then in the relevant interval $\rho\in[\rho_0,\rho_1]$
we have $\widehat{\mathsf{Krein}}(\rho) \leq \mathsf{Krein}(\rho)+3\cdot
10^{-6}$.
We finally let
$$
\mathsf{Improved}(\rho) \stackrel{\rm def}{=} \frac{15-22\rho-2\sqrt{242\rho-27-508\rho^2}}{74},
$$
and let
$$
\rho_2 \stackrel{\rm def}{=} \frac{66+2\sqrt{13}}{269} \approx 0.272
$$
be the root of the equation $\mathsf{Improved}(\rho)=\frac {1-3\rho}{2}$.
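Numerically, the pieces of the bound are easy to evaluate. The following
minimal sketch (assuming NumPy) computes $\mathsf{Krein}(\rho)$ by
root-finding and confirms, e.g., that at $\rho=11/50$ it returns $3/50$,
the value $c/n=6/100$ of the Higman-Sims graph.
\begin{verbatim}
# Minimal numeric sketch of the Krein bound (assumes NumPy): the unique
# root of f_K(rho, .) in the interval [rho^2, rho^2/(1-rho)].
import numpy as np

def krein(rho):
    coeffs = [1.0, 3*rho - 4, 5*rho - 1, -4*rho**3 + rho**2]
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    lo, hi = rho**2, rho**2 / (1 - rho)
    return max(r for r in real if lo - 1e-12 <= r <= hi + 1e-12)

print(krein(11/50))   # ~0.06, matching c/n = 6/100 for Higman-Sims
\end{verbatim}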
We can now explain Figure \ref{fig:main} as follows:
\begin{theorem} \label{thm:main}
For $\rho\leq 1/3$ we have $a(\rho)\leq a_0(\rho)$, where
$$
a_0(\rho)\stackrel{\rm def}{=}
\begin{cases}
\mathsf{Krein}(\rho), & \rho\in [0,\rho_0]\\
\widehat{\mathsf{Krein}}(\rho), & \rho\in [\rho_0,\rho_1]\\
\frac{1-3\rho}2, & \rho \in [\rho_1, \rho_2]\\
\mathsf{Improved}(\rho), & \rho\in [\rho_2,9/32]\\
\rho/3, & \rho\in [9/32,3/10]\\
2\rho-\frac 12, & \rho\in [3/10, 5/16]\\
\frac 25\rho, & \rho\in [5/16,1/3].
\end{cases}
$$
\end{theorem}
\section{Finiteness results} \label{sec:boundedeness}
Before embarking on the proof of Theorem \ref{thm:main}, let us fulfill the promise made in Remarks \ref{rem:maxvssup} and \ref{rem:no_graphons}.
Throughout the paper we will be mostly working with (vertex)-weighted graphs,
i.e. with graphs $G$ equipped with a probability measure $\mu$ on $V(G)$,
ordinary graphs corresponding to the uniform measure. The flag-algebraic
notation $F(v_1,\ldots,v_k)$ introduced in Section \ref{sec:prel} readily
extends to this case simply by changing the sampling distribution from
uniform to $\mu$.
The {\em twin relation $\approx$} on $G$ is given by $u\approx v$ iff $N_G(u)=N_G(v)$, and a graph $G$ is {\em twin-free} if its twin relation is trivial. Factoring a graph by its twin relation gives us a {\bf twin-free weighted} graph $G^{\text{red}}$ that preserves all the properties of the original graph $G$ relevant to this paper (such as the values $\rho(G)$ and $a(G)$, $\rho$-regularity and triangle-freeness).
Our main technical argument in this section is the following
\begin{theorem} \label{thm:boundedeness}
Let $(G,\mu)$ be a vertex-weighted triangle-free twin-free graph and $a\stackrel{\rm def}{=} a(G,\mu)$. Then
$$
n(G) \leq (2a^{-1})^{1+a^{-1}} +2a^{-1}.
$$
\end{theorem}
\begin{proof}
Let $n\stackrel{\rm def}{=} n(G)$ and $V(G) \stackrel{\rm def}{=} \{v_1,\ldots,v_n\}$, where $\mu(v_1)\geq\ldots
\geq \mu(v_n)$. Choose the maximal $k$ with the property
$\mu(\{v_k,\ldots,v_n\})\geq a/2$. Then, by averaging, we have
$\frac{1-a/2}{k-1}\geq\frac{a/2}{n-k+1}$ which is equivalent to
$$
n\leq 2a^{-1}(n-k+1).
$$
Hence, denoting
$$
W_0\stackrel{\rm def}{=} \{v_{k+1},\ldots,v_n\}
$$
(note for the record that $\mu(W_0)< a/2$), it suffices to prove that
\begin{equation} \label{eq:w0bound}
|W_0| \leq (2a^{-1})^{a^{-1}}.
\end{equation}
For $W\subseteq V(G)$ let us define
$$
K(W) \stackrel{\rm def}{=} \bigcap_{w\in W} N_G(w);
$$
note that $K(W)\cap W=\emptyset$.
The bound \eqref{eq:w0bound} will almost immediately follow from the following
two claims.
\begin{claim} \label{clm:easy}
For any $W\subseteq V(G)$ and $v^\ast\not\in W\cup K(W)$ we have
$$
\mu\of{\of{\bigcup_{v\in K(W)}N_G(v)}\cup N_G(v^\ast)} \geq
\mu\of{\bigcup_{v\in K(W)}N_G(v)} +a.
$$
\end{claim}
\begin{proofof}{Claim \ref{clm:easy}}
Since $v^\ast\not\in K(W)$, there exists $w\in W$ such that
$(v^\ast,w)\not\in E(G)$; moreover, $w\neq v^\ast$ since $v^\ast\not\in W$.
Now, all vertices in $N_G(v^\ast)\cap N_G(w)$ contribute to the difference
$N_G(v^\ast)\setminus \bigcup_{v\in K(W)}N_G(v)$ (since $w\in W$ and $G$ is
triangle-free).
\end{proofof}
\begin{claim} \label{clm:difficult}
For every $W\subseteq V(G)$ with $\mu(W)\leq a/2$ and $|W|\geq 2$ there
exists $v^\ast\not\in W\cup K(W)$ such that\footnote{note that this bound is
about absolute {\bf sizes}, not about measures}
$$
|W\cap N_G(v^\ast)| \geq \frac a2|W|.
$$
\end{claim}
\begin{proofof}{Claim \ref{clm:difficult}}
Let
$$
L(W) \stackrel{\rm def}{=} \set{v\not\in W}{N_G(v)\cap W\not\in \{\emptyset,W\}}.
$$
Note that $L(W)$ is disjoint from both $W$ and $K(W)$ and that there are no
edges between $K(W)$ and $L(W)$. The desired vertex $v^\ast$ will belong to
$L(W)$, and we consider two (similar) cases.
{\bf Case 1.} $K(W)=\emptyset$.
\noindent In this case we have
\begin{equation} \label{eq:Lw}
L(W) = \of{\bigcup_{w\in W} N_G(w)}\setminus W.
\end{equation}
W.l.o.g. we can assume that $n\geq 3$ which implies (since $G$ is twin-free)
that $G$ is not a star. That is, for every $w\in V(G)$ there exists $v\neq w$
non-adjacent to it and hence we have the bound $e(w)\geq P_3^N(v,w)\geq a$ on
the minimum degree. Along with \eqref{eq:Lw} and the assumption $\mu(w)\leq
a/2$, we get $\mu(N_G(w)\cap L(W))\geq a/2$ for any $w\in W$. Now the
existence of the required $v^\ast\in L(W)$ follows by standard double
counting of edges between $W$ and $L(W)$ (note that, unlike $L(W)$, the set
$W$ is {\bf not} weighted in this argument according to $\mu$).
\smallskip
{\bf Case 2.} $K(W)\neq\emptyset$.
\noindent Then $W$ is independent and the condition $v\not\in W$ in the
definition of $L(W)$ can be dropped. Fix arbitrarily $w\neq w'\in W$ (this is
how we use the assumption $|W|\geq 2$). Then $w,w'$ are not twins and
$N_G(w)\triangle N_G(w')\subseteq L(W)$, hence $L(W)\neq\emptyset$. Fix
arbitrarily $v\in L(W)$ and $w\in W$ with $(v,w)\not\in E(G)$. Then
\begin{equation} \label{eq:p3vsl}
N_G(v) \cap N_G(w) \subseteq L(W)
\end{equation}
(since there are no edges between $L(W)$ and $K(W)$) hence $\mu(L(W))\geq a$.
We claim that actually $\mu(N_G(w)\cap L(W))\geq a$ for {\bf every} $w\in W$.
Indeed, if $N_G(w)\supseteq L(W)$ this follows from the bound we have just
proved, and if there exists $v\in L(W)$ with $(v,w)\not\in E(G)$, this
follows from \eqref{eq:p3vsl}. The analysis of Case 2 is now completed by the
same averaging argument as in Case 1 (with the final bound improved by a
factor of two).
\end{proofof}
The rest of the proof of Theorem \ref{thm:boundedeness} is easy. We start
with the set $W_0$ and then, using Claims \ref{clm:difficult} and
\ref{clm:easy}, recursively construct sets $W_0\supset W_1\supset
W_2\supset\ldots$ such that\footnote{We could have shaved off an extra factor
$2^{r-1}$ by observing that Case 1 in Claim \ref{clm:difficult} may occur at
most once.} $|W_r|\geq (a/2)^r|W_0|$ and
\begin{equation}\label{eq:termination}
\mu\of{\bigcup_{v\in K(W_r)}N_G(v)}\geq ar.
\end{equation}
This process may terminate for only one reason: when the assumption
$|W_r|\geq 2$ from Claim \ref{clm:difficult} no longer holds. On the
other hand, due to \eqref{eq:termination}, it must terminate within $a^{-1}$
steps. The bound \eqref{eq:w0bound} follows, and this also completes the
proof of Theorem \ref{thm:boundedeness}.
\end{proof}
\begin{remark}
The bound in Theorem \ref{thm:boundedeness} is essentially tight. Indeed, let
us consider the graph $G_h$ on $n=2h+2^h$ vertices
$$
\set{u_{i\epsilon}}{i\in [h],\ \epsilon\in \{0,1\}} \stackrel .\cup \set{v_a}{a\in \{0,1\}^h},
$$
and let $E(G_h)$ consist of the matching $\set{(u_{i0}, u_{i1})}{i\in
[h]}\cup \set{(v_a,v_{1-a})}{a\in\{0,1\}^h}$ as well as the cross-edges
$\set{(u_{i\epsilon}, v_a)}{a(i)=\epsilon}$. Then $G$ is a triangle-free
twin-free graph and for every $(w,w')\not\in E(G)$, $N_G(w)\cap N_G(w')$
either contains an $u$-vertex or contains at least $2^{h-2}$ $v$-vertices.
Hence if we set up the weights as $\mu(u_{i\epsilon}) =\frac 1{4h}$ and
$\mu(v_a)=2^{-h-1}$, we will have $a(G,\mu)\geq \frac 1{4h}$ while $n(G)$ is
exponential in $a(G,\mu)^{-1}$.
\end{remark}
Before deriving consequences mentioned in the introduction, we need a simple
exercise in linear algebra (and optimization).
\begin{lemma} \label{lem:linear}
Let $G$ be a finite graph. Then there exists at most one value $\rho=\rho_G$ for which there exist vertex weights $\mu$ such that $(G,\mu)$ is $\rho$-regular. Whenever $\rho_G$ exists, it is a rational number. Moreover, in that case there are {\bf rational} weights $\eta$ such that $(G,\eta)$ is $\rho_G$-regular and
$$
a(G,\eta) = \max \set{a(G,\mu)}{(G,\mu)\ \text{is}\ \rho_G-\text{regular}}.
$$
\end{lemma}
\begin{proof}
Fix an arbitrary system of weights $\mu$ for which $(G,\mu)$ is
$\rho$-regular for some $\rho$. Let $A$ be the adjacency matrix of $G$,
$\boldsymbol\mu$ be the (column) vector comprised of vertex weights and
$\mathbf j$ be the identically one vector. Then the regularity condition
reads as $A\boldsymbol{\mu}=\rho\cdot \mathbf j$. Since $\mathbf j$ is in the
space spanned by the columns of $A$, there exists a {\bf rational} vector
$\boldsymbol{\eta}$ such that $A\boldsymbol{\eta}=\mathbf j$. Now, on the one
hand $\boldsymbol{\eta}^T A\boldsymbol{\mu}=\rho\cdot (\boldsymbol{\eta}^T
\mathbf{j})$ and, on the other hand, $\boldsymbol{\eta}^T
A\boldsymbol{\mu}=\mathbf{j}^T \boldsymbol{\mu}=1$ (the latter equality holds
since $\mu$ is a probability measure). Hence $\rho = (\boldsymbol{\eta}^T
\mathbf{j})^{-1}$ is a rational number not depending on $\mu$.
For the second part, we note that the linear program
$$
\begin{cases}
a\to\max & ~\\
\eta(v)\geq 0 & (v\in V(G))\\
\sum_{v}\eta(v)=1 &\\
e(v)=\rho & (v\in V(G))\\
P_3^N(v,w) \geq a & ((v,w)\not\in E(G))
\end{cases}
$$
with rational coefficients in the variables $\eta(v)$ is feasible since $\mu$
is its solution. Hence it also has an optimal solution with rational
coefficients.
\end{proof}
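This linear program is easy to set up explicitly; a minimal sketch is given
below (assuming NumPy and SciPy; {\tt A} is the 0/1 adjacency matrix of a
fixed triangle-free graph and {\tt rho} its unique regular density). For
the Petersen graph, where the adjacency matrix is invertible and hence the
weights are forced to be uniform, it returns $1/10$.
\begin{verbatim}
# Minimal sketch of the LP from the proof (assumes NumPy and SciPy).
# Variables: eta(v) for every vertex plus the slack a; maximize a.
import numpy as np
from scipy.optimize import linprog

def max_a(A, rho):
    n = A.shape[0]
    # Equalities: sum_v eta(v) = 1 and (A eta)(v) = rho for every v.
    A_eq = np.vstack([np.append(np.ones(n), 0.0),
                      np.hstack([A, np.zeros((n, 1))])])
    b_eq = np.append(1.0, rho * np.ones(n))
    # For every non-edge (v,w): a - P_3^N(v,w) <= 0, where
    # P_3^N(v,w) = sum_u eta(u) A[v,u] A[w,u].
    rows = [np.append(-(A[v] * A[w]), 1.0)
            for v in range(n) for w in range(v + 1, n) if A[v, w] == 0]
    res = linprog(c=np.append(np.zeros(n), -1.0),
                  A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n + [(None, None)])
    return -res.fun if res.success else None
\end{verbatim}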
Let us now derive consequences.
\begin{corollary} \label{cor:finiteness}
For every rational $\rho$ there exists a finite triangle-free $\rho$-regular graph $G$ such that $a(G)$ attains the maximum value $a(\rho)$ among all such graphs.
\end{corollary}
\begin{proof}
We can assume w.l.o.g. that $a(\rho)>0$. Let $\{G_n\}$ be an increasing
sequence of graphs such that $\lim_{n\to\infty} a(G_n)=a(\rho)$. Then Theorem
\ref{thm:boundedeness} implies that $\{G_n^{\text{red}}\}$ may assume only
finitely many values. Hence (by going to a subsequence) we can also assume
that all $G_n$ correspond to different vertex weights $\mu_n$ of the same
(twin-free) graph $G$. But now Lemma \ref{lem:linear} implies the existence
of rational weights $\eta(v)$, say $\eta(v)=\frac{N_v}{N}$ for integers
$N_v,N$ such that $a(G,\eta) = a(\rho)$. We convert $(G,\eta)$ to an ordinary
graph by replacing every vertex $v$ with a cloud of $N_v$ twin clones.
\end{proof}
\begin{corollary} \label{cor:finiteness2}
For every $\epsilon>0$ there are only finitely many $\rho$ with
$a(\rho)\geq\epsilon$. In other words, 0 is the only accumulation point of
${\rm im}(a)$.
\end{corollary}
\begin{proof}
Immediately follows from Theorem \ref{thm:boundedeness} and Lemma
\ref{lem:linear} since according to the latter, the edge density $\rho$ is
completely determined by the skeleton $G$ of a $\rho$-regular weighted graph
$(G,\mu)$.
\end{proof}
Now we prove that there are no ``inherently infinite'' triangle-free graphons
$W$ with $a(W)>0$. Since this result is somewhat tangential to the rest of
the paper, we will be rather sketchy and in particular we refer the reader to
\cite{Lov4} for all missing definitions.
A graphon $W\function{[0,1]\times [0,1]}{[0,1]}$ is {\em triangle-free} if
$$
\int\int\int W(x,y)W(y,z)W(x,z)dxdydz=0.
$$
Given a graphon $W$, let $P_3^N\function{[0,1]\times [0,1]}{[0,1]}$ be
defined by $P_3^N(x,y) = \int W(x,z)W(y,z)dz$; Fubini's theorem implies that
$P_3^N$ is defined a.e. and is measurable. We define $a(W)$ as the maximum
value $a$ such that
\begin{equation} \label{eq:a_w}
\lambda\of{\set{(x,y)\in [0,1]^2}{W(x,y)<1\Longrightarrow P_3^N(x,y)\geq a}}=1.
\end{equation}
To every finite vertex-weighted graph $(G,\mu)$ we can associate the
naturally defined {\em step-function graphon} $W_{G,\mu}$ (see \cite[\S
7.1]{Lov4} or Section \ref{sec:intr} above), and two graphons are {\em
isomorphic} if they have the same sampling statistics \cite[\S 7.3]{Lov4}.
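For the reader's convenience, let us also recall this standard construction
explicitly: partition $[0,1]$ into intervals $\{I_v\}_{v\in V(G)}$ with
$\lambda(I_v)=\mu(v)$ and let
$$
W_{G,\mu}(x,y) \stackrel{\rm def}{=} \begin{cases} 1 & \text{if}\ x\in I_u,\ y\in I_v\ \text{for some}\ (u,v)\in E(G)\\ 0 & \text{otherwise.}\end{cases}
$$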
\begin{theorem} \label{thm:no_graphon}
Let $W$ be a triangle-free graphon. Then we have the following dichotomy:
either $a(W)=0$ or $W$ is isomorphic to $W_{G,\mu}$ for some finite
vertex-weighted triangle-free graph $(G,\mu)$.
\end{theorem}
\begin{proof} (sketch)
Assume that $a(W)>0$, that is \eqref{eq:a_w} holds for some $a>0$. Let
$\boldsymbol{G_n}$ be the random sample from the graphon $W$; this is a probability
measure on the set $\mathcal G_n$ of triangle-free graphs on $n$ vertices up
to isomorphism. A standard application of Chernoff's bound along with
\eqref{eq:a_w} gives us that
\begin{equation}\label{eq:bound_a}
\prob{a(\boldsymbol{G_n})\leq a/2} \leq \exp(-\Omega(n)).
\end{equation}
Now, if we equip $\prod_{n\in\mathbb N}\mathcal G_n$ with the product measure
$\prod_n \boldsymbol{G_n}$, then the fundamental fact from the theory of graph limits
is that the sequence of graphs $\boldsymbol{G_n}$ sampled according to this measure
converges to $W$ with probability 1, and the same holds for their twin-free
reductions $\boldsymbol{G_n^{\text{red}}}$. Since the series $\sum_n\exp(-\Omega(n))$
converges, Theorem \ref{thm:boundedeness} along with \eqref{eq:bound_a}
implies that the number of vertices in $\boldsymbol{G_n^{\text{red}}}$ is bounded,
also with probability 1. Then a simple compactness argument shows that it
contains a sub-sequence converging to $W_{G,\mu}$ for some finite weighted
graph $(G,\mu)$.
\end{proof}
\section{The proof of Theorem \ref{thm:main}}
We fix a triangle-free $\rho$-regular graph $G$, and for the reasons
explained in Remark \ref{rem:large_density}, we assume that $\rho\leq\frac
13$. We have to prove that $a(G)\leq a_0(\rho)$, that is there exists a pair
of non-adjacent vertices $u,v$ with $P_3^N(u,v)\leq a_0(\rho)$. We work in
the set-up of Section \ref{sec:boundedeness}, that is we replace $G$ with its
weighted twin-free reduction $(G,\mu)$; the weights $\mu$ will be dropped
from notation whenever it may not create confusion. We also let $a\stackrel{\rm def}{=}
a(G,\mu)>0$ throughout.
\subsection{$\rho\geq \rho_1$: exploiting combinatorial structure}
\label{sec:combinatorial}
The only way in which we will be using twin-freeness is the following claim
(that was already implicitly used in the proof of Theorem
\ref{thm:boundedeness}).
\begin{claim} \label{clm:upper_bound}
For any two non-adjacent vertices $u\neq v$, $P_3^N(u,v)\leq \rho-a$.
\end{claim}
\begin{proof}
First we have $P_3^N(u,v)+\bar P_3^{N,c}(u,v) = e(v)=\rho$. Thus it remains
to prove that $\bar P_3^{N,c}(u,v)\geq a$. But since $u$ and $v$ are not
twins and $e(u)=e(v)$, there exists a vertex $w\in N_G(u)\setminus N_G(v)$.
Then $a\leq P_3^N(v,w)\leq \bar P_3^{N,c}(u,v)$, the last inequality
holds since $G$ is triangle-free.
\end{proof}
We now fix, for the rest of the proof, two non-adjacent vertices $v_1,v_2$
with $P_3^N(v_1,v_2)=a$. Let $P\stackrel{\rm def}{=} N_G(v_1)\cap N_G(v_2)$ (thus $\mu(P) =
P_3^N(v_1,v_2)=a$) and we also let $I\stackrel{\rm def}{=} V(G)\setminus (N_G(v_1)\cup N_G(v_2))$
(note that $v_1,v_2\in I$). We can easily compute $\mu(I)=I_3^N(v_1,v_2)$ by
inclusion-exclusion as follows:
\begin{equation}\label{eq:i3N}
I_3^N(v_1,v_2) =1- e(v_1)-e(v_2)+P_3^N(v_1,v_2) =1-2\rho+a.
\end{equation}
\begin{claim} \label{clm:v3}
For any $w\in P$ there exists $v_3\in I$ such that $(w,v_3)\not\in E$.
\end{claim}
\begin{proof}
The assumptions $\rho\leq \frac 13$ and $a>0$ imply, along with
\eqref{eq:i3N}, that $I_3^N(v_1,v_2)=1-2\rho+a>\rho$: indeed, this inequality
is equivalent to $a>3\rho-1$, and $3\rho-1\leq 0<a$. As $e(w)=\rho$, the
neighbourhood of $w$ cannot cover all of $I$, and Claim
\ref{clm:v3} follows.
\end{proof}
Before proceeding further, let us remark that $a_0(\rho)\geq\frac{\rho}3$ for
$\rho\in [\rho_1,1/3]$ (verifications of computationally unpleasant
statements like this one can be found in the Maple worksheet at\\ {\tt
http://people.cs.uchicago.edu/\~{}razborov/files/tfsr.mw}). Hence we can and
will assume w.l.o.g. that
\begin{equation}\label{eq:first_bound}
a>\frac{\rho}3.
\end{equation}
\begin{claim} \label{clm:w}
For any $v_3\in I$ we have $S_4^I(v_1,v_2,v_3)>0$, that is there exists a
vertex $w\in P$ adjacent to $v_3$.
\end{claim}
\begin{proof}
Since $P$ is non-empty, we can assume w.l.o.g. that $\exists w\in P\
((v_3,w)\not \in E)$ (otherwise we are done). Now we have the computation
(again, since $G$ is triangle-free)
\begin{equation} \label{eq:S_bound}
\longeq{&&\rho = e(v_3) \geq P_3^N(v_3,v_1)+P_3^N(v_3,v_2)+ P_3^N(v_3,w)-S_4^I(v_1,v_2,v_3)
\\ && \hspace{\longeqskiplength}\geq
3a-S_4^I(v_1,v_2,v_3).}
\end{equation}
The claim now follows from \eqref{eq:first_bound}.
\end{proof}
Let now $c\stackrel{\rm def}{=} |P|$ be the {\bf size} of $P$ (weights are ignored). Claims
\ref{clm:v3} and \ref{clm:w} together imply that $c\geq 2$. The rest of the
analysis depends on whether $c=2$, $c=3$ or $c\geq 4$.
\subsubsection{$c=2$}
Let $P=\{w,w'\}$, where $\mu(w)\geq \mu(w')$, and note that $\mu(w')\leq
\frac a2$. By Claim \ref{clm:v3}, there exists $v_3\in I$ such that
$(w,v_3)\not\in E$. We have $S_4^I(v_1,v_2,v_3)\leq \mu(w')\leq \frac a2$.
Along with \eqref{eq:S_bound}, this gives us the bound
\begin{equation} \label{eq:twofifth}
a\leq \frac 25\rho.
\end{equation}
By Claim \ref{clm:w}, for any $v_3\in I$ we have either $(w,v_3)\in E(G)$ or
$(w',v_3)\in E(G)$. In other words, the neighbourhoods of $v_1,v_2,w,w'$
cover the whole graph or, equivalently, $I_3^N(v_1,v_2)+I_3^N(w,w')=1$. Now,
$I_3^N(v_1,v_2)=1-2\rho+a$ by \eqref{eq:i3N}, and for $(w,w')$ this
calculation still works in the ``right'' direction: $I_3^N(w,w')=1-2\rho
+P_3^N(w,w')\geq 1-2\rho+a$. Thus we get $a\leq 2\rho-\frac 12$. Along with
\eqref{eq:twofifth}, we get that $a\leq\min\of{\frac 25\rho, 2\rho-\frac
12}\leq a_0(\rho)$ (see the Maple worksheet) and this completes the analysis
of the case $c=2$.
\subsubsection{$c=3$} \label{sec:c3}
Let $P=\{w_1,w_2,w_3\}$. We abbreviate $F^{\mathcal I}_{\{i\}}(w_1,w_2,w_3)$
to $F_i$, $F^{\mathcal I}_{\{i,j\}}(w_1,w_2,w_3)$ to $F_{ij}$ and
$F^{\mathcal I}_{\{1,2,3\}}(w_1,w_2,w_3)$ (= $S_4^{\mathcal I}(w_1,w_2,w_3)$)
to $f_3$. In our claims below we will always assume that
$(i,j,k)$ is an arbitrary permutation of $\{1,2,3\}$.
We begin by noticing that Claim \ref{clm:upper_bound} applied to the pair
$(w_i,w_k)$ gives us $F_{ik}+f_3\leq \rho-a$, which can be rewritten (since
$F_i+F_{ij}+F_{ik}+f_3=e(w_i)=\rho$) as
\begin{equation} \label{eq:ivsij}
F_i+F_{ij} \geq a.
\end{equation}
On the other hand, the bound $P_3^N(w_i,w_j)\geq a$ re-writes as
\begin{equation}\label{eq:opposite}
F_{ij} + f_3\geq a.
\end{equation}
We also note that \eqref{eq:ivsij} (along with its analogue obtained by
changing $F_i$ to $F_j$) implies
\begin{equation} \label{eq:ivsj}
F_{ij} = 0 \Longrightarrow (F_i\geq a \land F_j\geq a).
\end{equation}
\begin{claim} \label{clm:ijvsk}
$F_{ij}>0\Longrightarrow F_k\geq a.$
\end{claim}
\begin{proof}
Let $v$ be any vertex contributing to $F_{ij}$, that is $(w_i,v), (w_j,v)\in
E(G)$ while $(w_k,v)\not\in E(G)$. Then $a\leq P_3^N(w_k,v)\leq F_k$.
\end{proof}
Now, \eqref{eq:ivsj} along with Claim \ref{clm:ijvsk} imply that there exist
at least two indices $i\in [3]$ with $F_i\geq a$. Assume w.l.o.g. that
$F_1,F_2\geq a$. Our goal (that, somewhat surprisingly, is the most
complicated part of the analysis) is to show that in fact $F_3\geq a$ as
well.
\begin{claim} \label{clm:non_zero}
$F_i>0$.
\end{claim}
\begin{proof}
When $i=1,2$, we already have the stronger fact $F_i\geq a$, so we are only
left to show that $F_3>0$. Assume the contrary. Then $F_{12}=0$ by Claim
\ref{clm:ijvsk}, hence $f_3\geq a$ by \eqref{eq:opposite}. Also, $F_{13}\geq
a$ and $F_{23}\geq a$ by \eqref{eq:ivsij} (with $i=3$). Summing all this up,
$\rho=e(w_3)=F_{13}+F_{23}+f_3\geq 3a$, contrary to the assumption
\eqref{eq:first_bound}.
\end{proof}
The next claim, as well as Claim \ref{clm:C4} below, could have been also
written very concisely at the expense of introducing a few more flags; we did
not do this since those flags are not used anywhere else in the paper.
\begin{claim} \label{clm:p4}
There is an edge between {\rm [}the sets of vertices corresponding to{\rm ]}
$F_i$ and $F_j$.
\end{claim}
\begin{proof}
Since $\{i,j\}\cap \{1,2\} \neq \emptyset$, we can assume w.l.o.g. that
$i=1$. We have
$$
\rho=e(w_1) =F_1+F_{1j}+F_{1k}+f_3
$$
and $F_1\geq a,\ F_{1j}+f_3\geq a$ (by \eqref{eq:opposite}). Hence $F_{1k}<a$ due to
\eqref{eq:first_bound}. Let now $v$ be an arbitrary vertex contributing to
$F_j$ that exists by Claim \ref{clm:non_zero}. We have $P_3^N(v,w_1)\geq a$,
and all contributions to it come from either $F_{1k}$ or $F_1$. Since
$F_{1k}<a$, $v$ must have at least one neighbor in $F_1$.
\end{proof}
\begin{claim} \label{clm:almost_last}
$F_i+F_{ij}+F_{ik}\geq 2a$.
\end{claim}
\begin{proof}
Let $v,v'$ be as in Claim \ref{clm:p4} with $i:=k$, i.e. $(v,v')\in E(G)$,
$v$ contributes to $F_k$ and $v'$ contributes to $F_j$. Then $2a\leq
P_3^N(w_i,v) + P_3^N(w_i,v')\leq F_i + F_{ij} + F_{ik}$ simply because
$(v,v')$ is an edge, and this implies that the sets corresponding to
$P_3^N(w_i,v),\ P_3^N(w_i,v')$ are disjoint.
\end{proof}
\begin{claim} \label{clm:fij}
$F_{ij}>0$.
\end{claim}
\begin{proof}
Assuming the contrary, we get $f_3\geq a$ from \eqref{eq:opposite} and
$F_i+F_{ik}\geq 2a$ from Claim \ref{clm:almost_last}. This (again)
contradicts $e(w_i)=\rho<3a$.
\end{proof}
Now we finally have
\begin{claim} \label{clm:Fi}
$F_i\geq a$.
\end{claim}
\begin{proof}
Immediate from Claims \ref{clm:ijvsk} and \ref{clm:fij}.
\end{proof}
\begin{claim} \label{clm:sophisticated}
$\mu(w_i)+F_{jk}\geq 4a-\rho$.
\end{claim}
\begin{proof}
Let (by Claim \ref{clm:Fi}) $v$ be any vertex contributing to $F_i$. Then we
have the computation (cf. \eqref{eq:S_bound}):
\begin{equation}\label{eq:exact_formula}
\longeq{&&\rho= e(v) = T_4^{\mathcal I}(v_1,v_2,v) + P_3^N(v_1,v) + P_3^N(v_2,v) -
S_4^{\mathcal I}(v_1,v_2,v)\\ && \hspace{\longeqskiplength} \geq T_4^{\mathcal I}(v_1,v_2,v)+2a-\mu(w_i).}
\end{equation}
On the other hand,
\begin{equation}\label{eq:local}
2a\leq P_3^N(v,w_j) +P_3^N(v,w_k) \leq T_4^{\mathcal I}(v_1,v_2,v) + F_{jk}
\end{equation}
(note that $v$ may not be connected to vertices in $F_{ij},F_{ik},f_3$ as it
would have created a triangle with $w_i$). The claim follows from comparing
these two inequalities.
\end{proof}
Let us now extend the notation $f_3=F_{\{1,2,3\}}$ to
$$
f_\nu \stackrel{\rm def}{=} \sum_{S \in {[3] \choose \nu}} F_S.
$$
Then Claim \ref{clm:w} implies $f_0=0$ and hence
\begin{equation} \label{eq:volume}
f_1+f_2+f_3=\mu(I) =1-2\rho+a
\end{equation}
and also
\begin{equation} \label{eq:density}
f_1+2f_2+3f_3 =\sum_i e(w_i) = 3\rho.
\end{equation}
Next, Claim \ref{clm:Fi} implies
\begin{equation} \label{eq:f1}
f_1\geq 3a
\end{equation}
and Claim \ref{clm:sophisticated}, after summing it over $i\in [3]$ gives us
\begin{equation} \label{eq:f2}
f_2\geq 11a-3\rho.
\end{equation}
Resolving \eqref{eq:volume} and \eqref{eq:density} in $f_3$, we get
\begin{equation} \label{eq:resolving}
2f_1+f_2 =3-9\rho+3a.
\end{equation}
Comparing this with \eqref{eq:f1} and \eqref{eq:f2} gives us the bound
\begin{equation} \label{eq:3_14}
a\leq \frac 3{14}(1-2\rho)
\end{equation}
which is $\leq a_0(\rho)$ as long as $\rho\in [9/32, 1/3]$.
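Spelled out, \eqref{eq:resolving}, \eqref{eq:f1} and \eqref{eq:f2} combine as
$$
3-9\rho+3a = 2f_1+f_2 \geq 6a+(11a-3\rho) = 17a-3\rho,
$$
that is, $14a\leq 3-6\rho$, which is exactly \eqref{eq:3_14}.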
To complete the analysis of case $c=3$ we still have to prove that
$a(\rho)\leq \mathsf{Improved}(\rho)$ for $\rho_1\leq\rho\leq \frac 9{32}$.
As it uses some material from the proof of the Krein bound, we defer this to
Section \ref{sec:improved}.
\subsubsection{$c\geq 4$}
Fix arbitrarily distinct $w_1,w_2,w_3,w_4\in P$ and let us employ the same
notation $F_i, F_{ij}, F_{ijk}$ as in the previous section;
$\{i,j,k,\ell\}=\{1,2,3,4\}$. As before, let
$$
f_\nu=\sum_{S\in {[4]\choose\nu}} F_{S}.
$$
Note that since we allow $c>4$, this time $f_0$ need not be zero. We further let
$$
\widehat F_S \stackrel{\rm def}{=} \sum_{T\subseteq [4] \atop T\cap S\neq\emptyset} F_T
$$
be the measure of $\bigcup_{i\in S} N_G(w_i)$, and we also use abbreviations
$\widehat F_i, \widehat F_{ij}, \widehat F_{ijk}, \widehat F_{1234}$ in this
case.
\smallskip
To start with, $\widehat F_i=\rho$ and Claim \ref{clm:upper_bound} implies $\widehat F_{ij}\geq \rho+a$.
\begin{claim} \label{clm:ijk}
$\widehat F_{ijk} \geq \rho +2a.$
\end{claim}
\begin{proof}
For $S\subseteq \{i,j,k\}$, let $F_S^\ast\stackrel{\rm def}{=} F_S+F_{S\cup \{\ell\}}$ be the result of ignoring $w_\ell$ and we (naturally) let
$$
f_\nu^\ast\stackrel{\rm def}{=} \sum_{S\in {\{i,j,k\} \choose \nu}} F_S^\ast.
$$
Then (cf. \eqref{eq:density})
$$
f_1^\ast+2f_2^\ast+3f_3^\ast =3\rho,
$$
and also
$$
f_2^\ast+3f_3^\ast = P_3^N(w_i,w_j) + P_3^N(w_i,w_k) +P_3^N(w_j,w_k) \leq 3(\rho-a)
$$
by Claim \ref{clm:upper_bound}. Besides, $\widehat F_{ijk} =f_1^\ast+f_2^\ast +f_3^\ast$.
If $f_2^\ast=0$, we are done: $\widehat F_{ijk}=3\rho-2f_3^\ast\geq
3\rho-2(\rho-a)=\rho+2a$. Hence we can assume that $f_2^\ast>0$, say,
$F_{ij}^\ast>0$. Pick an arbitrary vertex $v$ corresponding to $F_{ij}^\ast$;
then, as before, $\widehat F_{ijk} = \widehat F_{ij} + F_k^\ast\geq
\rho+a+P_3^N(v,w_k)\geq \rho+2a$.
\end{proof}
\begin{lemma} \label{lem:1234}
$\widehat F_{1234} \geq \rho+3a.$
\end{lemma}
\begin{proof}
First, $\widehat F_{1234}=\widehat F_{jk\ell}+F_i\geq \rho+2a+F_i$ by Claim \ref{clm:ijk}.
Hence we can assume that $F_i<a$ (for all $i\in [4]$, as usual). Also, we can assume that $f_3=0$ since otherwise we are done by the same reasoning as in the proof of Claim \ref{clm:ijk}.
Now, let $\Gamma$ be the graph on $[4]$ with the set of edges
$$
E(\Gamma) =\set{(i,j)}{F_{ij}>0}.
$$
Analogously to \eqref{eq:ivsij}, we have
\begin{equation} \label{eq:triple}
F_i+F_{ij}+F_{i\ell} \geq a
\end{equation}
(recall that $F_{ij\ell}=0$) and, analogously to Claim \ref{clm:ijvsk},
\begin{equation} \label{eq:ijvskl}
F_{ij}>0 \Longrightarrow F_k+F_{k\ell} \geq a.
\end{equation}
Next, \eqref{eq:triple}, along with $F_i<a$, implies that the minimum degree
of $\Gamma$ is $\geq 2$, that is $\Gamma$ is the complement of a matching.
Hence there are only three possibilities: $\Gamma=K_4$, $\Gamma=C_4$ or
$\Gamma= K_4-e$, and the last one is ruled out by \eqref{eq:ijvskl} along
with $F_k<a$.
If $\Gamma=K_4$ then summing up \eqref{eq:ijvskl} over all choices of
$k,\ell$, we get $3f_1+2f_2\geq 12a$. Adding this with $f_1+2f_2+4f_4=4\rho$,
we get $\widehat F_{1234} = f_1+f_2+f_4\geq \rho+3a$. Thus it remains to deal
with the case $\Gamma=C_4$, say $E(\Gamma)=\{(1,2), (2,3), (3,4), (4,1)\}$.
\smallskip
First we observe (recall that $f_3=0$) that
$$
f_4=P_3^N(w_1,w_3) (= P_3^N(w_2,w_4)) \geq a.
$$
Next, \eqref{eq:triple} amounts to
\begin{equation} \label{eq:ipm1}
F_i +F_{i,i+1} \geq a
\end{equation}
(all summations in indices are mod 4) and hence
$2F_i+F_{i,i+1}+F_{i,i-1}+f_4\geq 3a$. Comparing with
$$
F_i +F_{i,i+1} +F_{i,i-1} +f_4 = e(w_i) = \rho,
$$
we see that $F_i\geq 3a-\rho$ which is strictly positive by the assumption
\eqref{eq:first_bound}. Likewise, $F_{i,i+1}=\rho-f_4-(F_i+F_{i,i-1})\leq
\rho-2a<a$.
\begin{claim} \label{clm:C4}
There is an edge between $F_i$ and $F_{i+1}$.
\end{claim}
\begin{proofof}{Claim \ref{clm:C4}} This is similar to the proof of Claim
\ref{clm:p4}. Pick a vertex $v$ contributing to $F_i$ ($F_i>0$ as we just
observed). Then $P_3^N(w_{i+1},v)\leq F_{i+1}+F_{i+1,i+2}$ and since we
already know that $F_{i+1,i+2}<a$, there exists a vertex corresponding to
$F_{i+1}$ and adjacent to $v$.
\end{proofof}
\begin{claim} \label{clm:last}
$F_i+F_{i+1}+F_{i,i+1}\geq 2a$.
\end{claim}
\begin{proofof}{Claim \ref{clm:last}} This is similar to the proof of Claim
\ref{clm:almost_last}. Pick vertices $v,v'$ witnessing Claim \ref{clm:C4}
with $i:=i+2$, so that in particular $(v,w_{i+2}), (v',w_{i-1}), (v,v')$ are
all in $E(G)$ while $(v,w_{i+1}), (v',w_i)$ are not. Then
$$
2a\leq P_3^N(v,w_{i+1}) + P_3^N(v',w_i) \leq F_i + F_{i+1} +F_{i,i+1}
$$
since $P_3^N(v,w_{i+1})\leq F_{i+1}+F_{i,i+1}$, $P_3^N(v',w_i)\leq F_i +
F_{i,i+1}$ and the corresponding sets are disjoint since $(v,v')$ is an edge.
\end{proofof}
Now we can complete the proof of Lemma \ref{lem:1234}:
$$
\widehat F_{1234} = (F_1 + F_{12} + F_{14} + F_{1234}) + (F_2+F_{23}) +
(F_3+F_4+F_{34}) \geq \rho+3a
$$
by \eqref{eq:ipm1} and Claim \ref{clm:last}: the first bracket equals
$e(w_1)=\rho$ (recall that $F_{13}=f_3=0$ when $\Gamma=C_4$), the second one is
$\geq a$ by \eqref{eq:ipm1}, and the third one is $\geq 2a$ by Claim
\ref{clm:last}.
\end{proof}
This also completes the proof of Theorem \ref{thm:main} for $\rho\geq \rho_1$
(that is, modulo the bound $\mathsf{Improved}(\rho)$ deferred to Section
\ref{sec:improved}). Indeed, since $\widehat F_{1234}\leq 1-2\rho+a$, Lemma
\ref{lem:1234} implies $a\leq \frac{1-3\rho}{2}$ which is $\leq a_0(\rho)$ as
long as $\rho\in [\rho_1, 1/3]$.
\subsection{Analytical lower bounds} \label{sec:analytical}
In this section we prove the bounds $a(\rho)\leq \mathsf{Krein}(\rho)\
(\rho\leq \rho_0)$, $a(\rho)\leq \widehat{\mathsf{Krein}}(\rho)\ (\rho\in
[\rho_0,\rho_1])$ and $a(\rho)\leq \mathsf{Improved}(\rho)\ (\rho\in [\rho_2,
9/32])$. We keep all the notation and conventions from the previous section.
\medskip
Let us continue a bit our crash course on flag algebras we began in Section
\ref{sec:prel}. The product $F_1(v_1,v_2,\ldots,v_k)F_2(v_1,v_2,\ldots,v_k)$,
where $F_1$ and $F_2$ are flags of the same type and $v_1,\ldots,v_k\in V(G)$
induce this type in $G$, can be always expressed as a {\em fixed} (that is,
not depending on $G,v_1,\ldots,v_k$) linear combination of expressions of the
form $F(v_1,\ldots,v_k)$. The general formula is simple (see \cite[eq.
(5)]{flag}) but it will be relatively clear how to do it in all concrete
cases we will be dealing with. We stress again that it is only possible
because we sample vertices with repetitions, otherwise the whole theory
completely breaks down. Also, things can be easily set up in such a way that,
after extending it by linearity to expressions $f(v_1,\ldots,v_k)$, where $f$
is a formal $\mathbb R$-linear combination of flags, this becomes the product
in a naturally defined commutative associative algebra.
We also need the {\em averaging} or {\em unlabelling} operator\footnote{For
the reader familiar with graph limits, let us remark that their operator is
different but connected to ours via a simple M\"obius transformation,
followed by summation over several types.} $f\mapsto \eval f{\sigma,\eta}$.
Let $\sigma$ be a type of size $k$, and $\eta\injection{[k']}{[k]}$ be an
injective mapping, usually written as $[\eta_1,\ldots,\eta_{k'}]$ or even
$\eta_1$ when $k=1$ (here $\eta_1,\ldots,\eta_{k'}$ are pairwise different
elements of $[k]$). Then we have the naturally defined type $\sigma|_\eta$ of
size $k'$ given by $(i,j)\in E\of{\sigma|_\eta}$ if and only if
$(\eta_i,\eta_j)\in E(\sigma)$. Now, given a linear combination $f$ of
$\sigma$-flags and $w_1,\ldots,w_{k'}\in V(G)$ spanning the type
$\sigma|_\eta$, we consider the expectation $\expect{f(\bar v_1,\ldots\bar
v_k)}$, where $\bar v_j$ is $w_i$ if $j=\eta_i$ and picked according to the
measure $\mu$, independently of each other, when $j\not\in{\rm im}(\eta)$. Again,
there is a very simple general formula computing this expectation as a real
linear combination of $\sigma|_\eta$-flags, denoted by $\eval f{\sigma,
[\eta_1,\ldots,\eta_{k'}]}$ that, again, does not depend on
$G,w_1,\ldots,w_{k'}$ \cite[\S 2.2]{flag}.
\begin{remark} \label{rem:normalization}
It is important (and turns out very handy in concrete computations) to note
that we set $f(\bar v_1,\ldots, \bar v_k)\stackrel{\rm def}{=} 0$ if $\bar v_1,\ldots,\bar v_k$
do not induce $\sigma$. In particular, we let
\begin{equation}\label{eq:typeflag}
\langle \sigma, \eta \rangle \stackrel{\rm def}{=} \eval{1}{\sigma,\eta};
\end{equation}
this is simply the pair $(\sigma,\eta)$ viewed as a $\sigma|_\eta$-flag with
an appropriate coefficient \cite[Theorem 2.5(b)]{flag}. In other words,
$\eval{f}{\sigma,\eta}$ is {\bf not} the conditional
expectation by the event ``$(\bar v_1,\ldots,\bar v_k)$ induce $\sigma$'' but
the expectation of $f$ multiplied by the characteristic function of this
event.
\end{remark}
Finally, we also need the lifting operator $\pi^{\sigma,\eta}$, where
$\sigma,\eta$ are as above. Namely, for a $\sigma|_\eta$-flag $F$, let
$$
\pi^{\sigma,\eta}(F)(v_1,\ldots,v_k) \stackrel{\rm def}{=} F(v_{\eta_1},\ldots,v_{\eta_{k'}})
$$
be the result of forgetting certain variables among $v_1,\ldots,v_k$ and
possibly re-enumerating the remaining ones according to $\eta$. It may look
trivial but we will see below that it turns out to be very handy in certain
calculations. Also note that, unlike $\eval{\cdot}{\sigma,\eta}$,
$\pi^{\sigma,\eta}$ does respect the multiplicative structure.
When $\eta$ is empty, $\eval f{\sigma,\eta}$ and $\pi^{\sigma, \eta}$ are abbreviated to $\eval f{\sigma}$ and $\pi^\sigma$, respectively.
The main tool in flag algebras is the light version of the Cauchy--Schwarz
inequality, formalized as
\begin{equation} \label{eq:es}
\eval{f^2}{\sigma,\eta} \geq 0,
\end{equation}
and the power of the method relies on the fact that positive linear combinations of these inequalities can be arranged as a semi-definite programming problem. But the resulting proofs are often very non-instructive, so in this paper we have decided to use more human-oriented language of optimization. Let us stress that, if desired, the argument can be also re-cast as a purely symbolic sum-of-squares computation based on statements of the form \eqref{eq:es}.
\bigskip
After this preliminary work, let us return to the problem at hand. As in the
previous section, we fix arbitrarily two non-adjacent vertices $v_1,v_2$ with
$P_3^N(v_1,v_2)=a$ and let $P\stackrel{\rm def}{=} N_G(v_1)\cap N_G(v_2)$, $I\stackrel{\rm def}{=} V(G)\setminus
(N_G(v_1) \cup N_G(v_2))$. Recall that $\mu(P) =a$ and $\mu(I) = 1-2\rho+a$.
\subsubsection{Krein bounds} \label{sec:krein}
We are going to estimate the quantity $\eval{S_4^{\mathcal I}(T_4^{\mathcal
I} + S_4^{\mathcal I})}{\mathcal I, [1,2]}(v_1,v_2)$ from both sides and
compare results.
The upper bound does not depend on whether $\rho\leq \rho_0$ or not and it
consists of several typical flag-algebraic computations.
{\bf Convention.} When the parameters $(v_1,v_2,\ldots,v_k)$ in flags are
omitted, this means that the inequality in question holds for their arbitrary
choice. We specify them explicitly when the fact depends on the specific
property $P_3^N(v_1,v_2)=a$ of $v_1$ and $v_2$.
As we have already implicitly computed in the previous section,
$$
\eval{(S_4^{\mathcal I})^2}{\mathcal I, [1,2]} =\frac 13K_{32}^N =\frac 12\eval{K_{32}^{\mathcal P}}{\mathcal P,[1,2]}.
$$
Similarly,
$$
\eval{S_4^{\mathcal I}T_4^{\mathcal I}}{{\mathcal I,[1,2]}} = \frac 12\eval{U_5^{\mathcal P}}{\mathcal P,[1,2]}.
$$
Altogether we have
\begin{equation} \label{eq:sst}
\eval{S_4^{\mathcal I}(S_4^{\mathcal I}+T_4^{\mathcal I})}{\mathcal I,[1,2]} = \frac 12\eval{K_{32}^{\mathcal P}+U_5^{\mathcal P}}{\mathcal P, [1,2]}.
\end{equation}
On the other hand, we note that $P_3^{E,b} =\pi^{E,2}(e) =\rho$ and since $\frac 12P_3^{1,b}=\eval{P_3^{E,b}}{E,1}$, we also have $P_3^{1,b}=2\rho^2$. Hence
\begin{equation} \label{eq:kuvw}
2\rho^2 =\pi^{\mathcal P,3}(P_3^{1,b}) = K_{32}^{\mathcal P} + U_5^{\mathcal P} +V_5^{\mathcal P,1} +V_5^{\mathcal P,2}.
\end{equation}
Let us compute the right-hand side here. We have
$$
V_5^{\mathcal P,1}=2\eval{V_5^{\mathcal D,1}}{\mathcal D, [1,2,3]},
$$
\begin{equation} \label{eq:v1v2specific}
\langle \mathcal D, [1,2,3] \rangle(v_1,v_2) = \pi^{\mathcal P, [1,2]}(\bar P_3^{N,b})(v_1,v_2) =\rho-a
\end{equation}
(see the definition \eqref{eq:typeflag}) and
$$
V_5^{\mathcal D,1} = \pi^{\mathcal D,[3,4]}(P_3^N) \geq a.
$$
Putting these together,
$$
V_5^{\mathcal P,1}(v_1,v_2,w) \geq 2a(\rho-a)\ (w\in P)
$$
and, by symmetry, the same holds for $V_5^{\mathcal P,2}$. Comparing with \eqref{eq:kuvw}, we find that
\begin{equation} \label{eq:kplusu}
(K_{32}^{\mathcal P} +U_5^\mathcal P)(v_1,v_2,w)\leq 2\rho^2-4a(\rho-a) = 2((\rho-a)^2+a^2).
\end{equation}
Averaging this over all $w\in P$ and taking into account \eqref{eq:sst}, we arrive at our first main estimate
\begin{equation} \label{eq:s4t4}
\eval{S_4^{\mathcal I}(S_4^{\mathcal I} + T_4^{\mathcal I})}{\mathcal I,[1,2]}(v_1,v_2) \leq a(\rho^2-2a(\rho-a)).
\end{equation}
\medskip
For the lower bound we first claim that
\begin{equation}\label{eq:t4I}
T_4^{\mathcal I} \leq S_4^{\mathcal I} +\rho-2a.
\end{equation}
This was already established in \eqref{eq:exact_formula}, but let us recap the argument using the full notation:
$$
\rho=\pi^{\mathcal I,3}(e) = T_4^{\mathcal I} + \pi^{\mathcal I,[1,3]}(P_3^N) + \pi^{\mathcal I,[2,3]}(P_3^N)-S_4^{\mathcal I} \geq T_4^{\mathcal I}+2a-S_4^{\mathcal I}.
$$
Next, we need a lower bound on $T_4^N(v_1,v_2)=\eval{T_4^{\mathcal
I}}{\mathcal I,[1,2]}(v_1,v_2)$, that is on the density of those edges that
have both ends in $I$. For that we first classify all edges of $G$ according
to the number of vertices they have in $I$:
\begin{equation} \label{eq:rho_expansion}
\pi^N(\rho) = T_4^N + \of{S_4^N + \sum_{i=1}^2 V_4^{N,i}} +P_4^N.
\end{equation}
Now,
$$
S_4^N(v_1,v_2) = 2\eval{\pi^{\mathcal P,3}(e)}{\mathcal P,[1,2]}(v_1,v_2) = 2a\rho.
$$
Further we note that
\begin{equation}\label{eq:rhoqv}
\rho(\rho-a) = \eval{\pi^{\mathcal Q_i,3}(e)}{\mathcal Q_i,[1,2]}(v_1,v_2)= \frac 12 \of{V_4^{N,i}+P_4^N}(v_1, v_2)\ (i=1,2).
\end{equation}
Summing this over $i=1,2$ and plugging our findings into \eqref{eq:rho_expansion}, we get
\begin{equation} \label{eq:t4n_prel}
\rho = T_4^N(v_1,v_2)+2a\rho +4\rho(\rho-a) -P_4^N(v_1,v_2).
\end{equation}
So, the only thing that still remains is to estimate $P_4^N(v_1,v_2)$ but
this time {\bf from below}. For that it is sufficient to compute its
contribution to the right-hand side of \eqref{eq:rhoqv} (letting, say,
$i:=1$):
$$
a(\rho-a) \leq \eval{\pi^{\mathcal Q_1,[2,3]}(P_3^N)}{\mathcal Q_1, [1,2]}=\frac 12 P_4^N(v_1,v_2).
$$
Substituting this into \eqref{eq:t4n_prel}, we arrive at our estimate on the number of edges entirely within $I$:
\begin{equation} \label{eq:t4n}
\longeq{&&\eval{T_4^{\mathcal I}}{\mathcal I}(v_1,v_2) =T_4^N(v_1,v_2)\\
&& \hspace{\longeqskiplength} \geq \rho-2a\rho-4\rho(\rho-a) +2a(\rho-a) = \rho - 2(\rho^2+(\rho-a)^2).}
\end{equation}
We are now prepared to bound $\eval{S_4^{\mathcal I}(S_4^{\mathcal I} +T_4^{\mathcal I})}{\mathcal I, [1,2]}(v_1,v_2)$ from below. As a piece of intuition, let us re-normalize $S_4^{\mathcal I}$ and $T_4^{\mathcal I}$ by the known values $\langle \mathcal I, [1,2]\rangle =1+a-2\rho$ (cf. Remark \ref{rem:normalization}) so that they become random variables in the triangle
$$
\mathbb T = \set{(S_4^{\mathcal I}, T_4^{\mathcal I})}{T_4^{\mathcal I}\geq 0,\ T_4^{\mathcal I}\leq S_4^{\mathcal I}+\rho-2a,\ S_4^{\mathcal I}\leq a}.
$$
Then we know the expectation of $S_4^{\mathcal I}$, have the lower bound
\eqref{eq:t4n} on the expectation of $T_4^{\mathcal I}$, and we need to
bound the expectation of $S_4^{\mathcal I}(S_4^\mathcal I + T_4^{\mathcal
I})$, also from below. For that purpose we are going to employ duality, i.e.
we are looking for coefficients $\alpha,\beta,\gamma$ depending on $a,\rho$
only such that
$$
L(x,y) \stackrel{\rm def}{=} x(x+y)-(\alpha x+\beta y +\gamma)
$$
is non-negative on $\mathbb T$, and applying $\eval{\cdot}{\mathcal I,
[1,2]}$ to this relation produces ``the best possible result''. As we
mentioned above, an alternative would be to write down an explicit
``sum-of-squares'' expression: the resulting proof would be shorter but it
would be less intuitive.
Let us first observe the obvious upper bound
\begin{equation} \label{eq:trivial_bound}
a\leq \frac{\rho^2}{1-\rho},
\end{equation}
which follows from the computation $3\rho^2=3\eval{P_3^1}1 = P_3 =3
\eval{P_3^N}N\geq 3a(1-\rho)$. Next, the right-hand side of \eqref{eq:t4n} is
a concave quadratic function in $a$, with two roots $a_1(\rho)\stackrel{\rm def}{=}
\rho-\frac{\sqrt{2\rho-4\rho^2}}{2}$, $a_2(\rho)\stackrel{\rm def}{=}
\rho+\frac{\sqrt{2\rho-4\rho^2}}{2}$. Further, $a_1(\rho)\leq a_0(\rho)\leq
\frac{\rho^2}{1-\rho}\leq a_2(\rho)$. Hence we can assume w.l.o.g. that the
right-hand side in \eqref{eq:t4n} is non-negative. Therefore, by decreasing
$T_4^{\mathcal I}$ if necessary, we can assume that the bound \eqref{eq:t4n}
on its expectation is actually tight.
Next, we note that since the quadratic form $x(x+y)$ is indefinite, the
function $L(x,y)$ attains its minimum somewhere on the border of the compact
region $\mathbb T$. Since $L$ is linear on the line $x=a$ we can further
assume that the minimum is attained at one of the lines $y=0$ or
$y=x+\rho-2a$. Note further that along both these lines $L$ is convex.
\bigskip
We begin more specific calculations with the bound $g_K(\rho,a)\geq 0$ that
is less interesting but also less computationally heavy. As a motivation for
the forthcoming computations, we are looking for two points $(x_0,0)$,
$(x_1,x_1+\rho-2a)$ on the lines $T_4^{\mathcal I}=0$, $T_4^{\mathcal I} =
S_4^{\mathcal I}+\rho-2a$ that are collinear\footnote{cf. \eqref{eq:t4n}, the
normalizing factor $1-2\rho+a$ is suggested by Remark
\ref{rem:normalization}. The particular choice of $c_x,c_y$ is needed only
for the ``best possible result'' part.} with the point $(c_x,c_y)$, where
$$
c_x\stackrel{\rm def}{=} \frac{a\rho}{1-2\rho+a},\ \ \ c_y\stackrel{\rm def}{=} \frac{\rho-2(\rho^2+(\rho-a)^2)}{1-2\rho+a}
$$
and such that the function $L(x,0)$ has a double root at $x_0$ while $L(x,x+\rho-2a)$ has a double root at $x_1$. Solving all this in $\alpha,\beta,\gamma,x_0,x_1$ gives us (see the Maple worksheet)
\begin{eqnarray}
\label{eq:x0} x_0 &=& c_x+(\sqrt 2 -1)c_y\\
\nonumber x_1 &=& \of{1-\frac{\sqrt 2}2}((\sqrt 2+1)x_0-(\rho-2a))\\
\nonumber \alpha &=& 2x_0\\
\nonumber \beta &=& (3-2\sqrt 2)(2(\sqrt 2+1)x_0-(\rho-2a))\\
\nonumber \gamma &=& -x_0^2.
\end{eqnarray}
The remarks above imply that indeed $L(x,y)|_{\mathbb T}\geq 0$ hence we have
\begin{equation} \label{eq:st4_lower_2}
\eval{S_4^{\mathcal I}(S_4^{\mathcal I} + T_4^{\mathcal I})}{\mathcal I, [1,2]}
\geq \alpha a\rho +\beta \of{\rho-2(\rho^2+(\rho-a)^2)} +\gamma(1-2\rho+a).
\end{equation}
Comparing this with \eqref{eq:s4t4}, we get (up to the positive
multiplicative factor $\frac{1-2\rho+a}{2}$) that $g_K(\rho,a)\geq 0$.
Given the way the function $\widehat{\mathsf{Krein}}$ was
defined, $g_K(\rho,a)<0$ whenever $a\in\of{\widehat{\mathsf{Krein}}(\rho),
\frac{\rho^2}{1-\rho}}$. The required bound $a\leq
\widehat{\mathsf{Krein}}(\rho)$ now follows from \eqref{eq:trivial_bound}.
\medskip
The improvement $f_K(\rho,a)\geq 0$ takes place when the right-hand side in
\eqref{eq:x0} is $>a$ since then we can hope to utilize the condition
$S_4^\mathcal I\leq a$. As above, we first explicitly write down a solution
of the system obtained by replacing the equation $L'(x,0)|_{x=x_0}=0$ with
$x_0=a$ and only then justify the result.
Performing the first step in this program gives us somewhat cumbersome
rational functions that we attempt to simplify by introducing the
abbreviations
\begin{eqnarray*}
u_0(\rho,a) &\stackrel{\rm def}{=}& \frac 17(\rho+2a-2a\rho-4\rho^2)\\
u_1(\rho,a) &\stackrel{\rm def}{=}& \frac 17(3\rho-a-7a^2+15a\rho-12\rho^2)\\
u(\rho,a) &\stackrel{\rm def}{=}& 4u_0(\rho,a) + u_1(\rho,a).
\end{eqnarray*}
Then we get
\begin{eqnarray*}
x_0 &=& a\\
x_1 &=& \frac{a(2a-\rho^2-3\rho a)}{u(\rho,a)}\\
\alpha &=& 2a+ \frac{7(\rho-a)(u_1(\rho,a)^2-2u_0(\rho,a)^2)}{u(\rho,a)^2}\\
\beta &=& \frac{a(34u_0(\rho,a)^2+3u_1(\rho,a)^2-4u_0(\rho,a)u_1(\rho,a)-
2a\rho(1-3\rho+a)^2)}{u(\rho,a)^2}\\
\gamma &=& a^2-\alpha a.
\end{eqnarray*}
In order to analyze this solution, we first note that due to the bound just
established we can assume w.l.o.g. that
$$
a \in [\mathsf{Krein}(\rho), \widehat{\mathsf{Krein}}(\rho)].
$$
The function $u_0(\rho,a)$ is linear and increasing in $a$ and $u_0\of{\rho,
\mathsf{Krein}(\rho)}>0\ (\rho\neq 0)$ hence $u_0(\rho,a)\geq 0$. The
function $u_1(\rho,a)$ is quadratic concave in $a$ and $u_1\of{\rho,
\mathsf{Krein}(\rho)}, u_1\of{\rho, \widehat{\mathsf{Krein}}(\rho)}\geq 0$.
These two facts imply that $u(\rho,a)>0$ ($\rho>0$) hence our functions are
at least well-defined.
Next, $u_0,u_1\geq 0$ imply that $L'(x,0)|_{x=a} = 2a-\alpha$ has the sign
opposite to $u_1(\rho,a)-\sqrt 2u_0(\rho,a)$. This expression (that up to a
constant positive factor is equal to $c_x+(\sqrt 2-1)c_y-a$) is also concave
in $a$. Moreover, it is non-negative for $\rho \in [0,\rho_0],\ a \in
[\mathsf{Krein}(\rho), \widehat{\mathsf{Krein}}(\rho)]$ (at $\rho=\rho_0$ the
two bounds meet together: $\mathsf{Krein}(\rho_0) =
\widehat{\mathsf{Krein}}(\rho_0)=\rho_0/3$ and also $u_1(\rho,\rho/3)-\sqrt
2u_0(\rho,\rho/3)=0$). This completes the proof of $L'(x,0)|_{x=a}\leq 0$
hence (given that $L(a,0)=0$) we have $L(x,0)\geq 0$ for $x\leq a$. As we
argued above, this gives us $L|_{\mathbb T}\geq 0$ which implies
\eqref{eq:st4_lower_2}, with new values of $\alpha,\beta,\gamma$. Comparing
it with \eqref{eq:t4n}, we get $f_K(\rho,a)\geq 0$, up to the positive
multiplicative factor $\frac{2a(\rho-a)}{u(\rho,a)}$. This concludes the
proof of $f_K(\rho,a)\geq 0$ whenever $\rho\leq\rho_0$ and hence of the bound
$a\leq\mathsf{Krein}(\rho)$ in that interval.
\medskip
As a final remark, let us note that since the final bound $f_K(\rho,a)\geq 0$
has a very clear meaning in algebraic combinatorics, it looks likely that the
disappointingly complicated expressions we have encountered in proving it
might also have a meaningful interpretation. But we have not pursued this
systematically.
\subsubsection{The improved bound for $c=3$} \label{sec:improved}
Let us now finish the proof of the bound $a\leq \mathsf{Improved}(\rho),\
\rho \in [\rho_2,9/32]$ left over from Section \ref{sec:c3}. We utilize all
the notation introduced there, assume that $c=3$, and we need to prove that
$f_I(\rho,a)\geq 0$. We also introduce the additional notation
$$
a_i \stackrel{\rm def}{=} \mu(w_i)\ (i=1,2,3)
$$
for the weights of the vertices comprising the set $P$; thus, $\sum_{i=1}^3
a_i=a$.
We want to obtain an upper bound on $T_4^N(v_1,v_2)$ and then compare it with
\eqref{eq:t4n}. Let us split $I = J\stackrel .\cup K$, where $J$ corresponds
to $f_1$ and $K$ corresponds to $f_2+f_3$. Recalling that
$$
T_4^N = \eval{T_4^\mathcal I}{\mathcal I, [1,2]},
$$
let us split the right-hand side according to this partition as (with slight
abuse of notation)
$$
\eval{T_4^\mathcal I}{\mathcal I, [1,2]} = \eval{T_4^\mathcal I}{\mathcal J,
[1,2]} + \eval{T_4^\mathcal I}{\mathcal K,
[1,2]}.
$$
When $v\in J$ corresponds to $F_i$, we have $S_4^{\mathcal I}(v_1,v_2,v) =
a_i$ and hence, by \eqref{eq:t4I}, $T_4^{\mathcal I}(v_1,v_2,v)\leq
\rho-2a+a_i$. Thus
$$
\eval{T_4^\mathcal I}{\mathcal J, [1,2]} \leq \sum_i F_i(\rho-2a+a_i).
$$
In order to bound $\eval{T_4^\mathcal I}{\mathcal K, [1,2]}$, we first note
that $K$ is independent (every two vertices in $K$ have a common neighbor in
$P$). Furthermore, the only edges between $K$ and $J$ are between parts
corresponding to $F_i$ and $F_{jk}$. Hence $\eval{T_4^\mathcal I}{\mathcal K,
[1,2]}\leq \sum_i F_iF_{jk}$ and we arrive at the bound
\begin{equation} \label{eq:symmetric}
T_4^N(v_1,v_2) \leq \sum_i F_i(\rho-2a+F_{jk} +a_i).
\end{equation}
Next, let us denote by $\epsilon_i$ the (non-negative!) deficits in Claim
\ref{clm:sophisticated}:
$$
\epsilon_i \stackrel{\rm def}{=} a_i+F_{jk} - 4a+\rho;\ \epsilon_i\geq 0.
$$
Then \eqref{eq:symmetric} re-writes as follows:
$$
T_4^N(v_1,v_2) \leq 2af_1+\sum_{i=1}^3 F_i\epsilon_i.
$$
Let us now assume w.l.o.g. that $F_1\geq F_2\geq F_3$. Then, since all
$\epsilon_i$ are non-negative,
$$
\sum_{i=1}^3 F_i\epsilon_i \leq F_1\cdot \sum_{i=1}^3\epsilon_i = F_1
(f_2-11a+3\rho) = F_1(3-6\rho-8a-2f_1),
$$
where the last equality follows from \eqref{eq:resolving}. Summarizing,
\begin{eqnarray*}
T_4^N(v_1,v_2) &\leq& 2af_1+F_1(3-6\rho-8a-2f_1) = F_1(3-6\rho-8a) -
2f_1(F_1-a)\\ &\leq& F_1(3-6\rho-8a) - 2(F_1+2a)(F_1-a),
\end{eqnarray*}
where the last inequality holds since $F_1\geq a$ and $f_1=F_1+F_2+F_3\geq
F_1+2a$ by Claim \ref{clm:Fi}. The right-hand side here is a concave
quadratic function in $F_1$; maximizing, we find
$$
T_4^N(v_1,v_2)\leq \frac{33}2a^2+15a\rho-\frac{15}2a+\frac 92\rho^2 -\frac
92\rho +\frac 98.
$$
Comparing with \eqref{eq:t4n}, we get a constraint $Q(\rho,a)\geq 0$ that is
quadratic concave in $a$, and $\mathsf{Improved}(\rho)$ is its smallest root.
Moreover, $Q\of{\rho, \frac 3{14}(1-2\rho)} =
-\frac{(11\rho-2)(9-32\rho)}{49}\leq 0$ since $\rho_2>\frac 2{11}$. Hence the
preliminary bound \eqref{eq:3_14} can be improved to $a\leq
\mathsf{Improved}(\rho)$.
\section{Conclusion}
In this paper we have taken a prominent open problem in algebraic graph
theory and considered its natural semi-algebraic relaxation in the vein of
extremal combinatorics. The resulting extremal problem displays a remarkably
rich structure, and we proved upper bounds for it employing methods greatly
varying depending on the range of edge density $\rho$. Many of these methods
are based on counting techniques typical for extremal combinatorics, and one
bound has a clean interpretation in terms of algebraic Krein bounds for the
triangle-free case.
\medskip
The main generic question left open by this work is perhaps how far can this
connection between the two areas go. Can algebraic combinatorics be a source
of other interesting extremal problems? In the other direction, perhaps flag
algebras and other advanced techniques from extremal combinatorics can turn
out to be useful for ruling out the existence of highly symmetric
combinatorial objects with given parameters? These questions are admittedly
open-ended, so we stop here and conclude with several
concrete open problems regarding TFSR graphs and their relaxations introduced
in this paper.
Can the Krein bound $a(\rho)\leq \mathsf{Krein}(\rho)$ be improved for small
values of $\rho$? Of particular interest are the values $\rho=\frac{16}{77}$,
$\rho=\frac 5{28}$ or $\rho =\frac 7{50}$, ideally showing that
$a\of{\frac{16}{77}}= \frac 4{77}$, $a\of{\frac{5}{28}}= \frac 1{28}$ or
$a\of{\frac{7}{50}}= \frac 1{50}$. In other words, can we show that like the
four denser TFSR graphs, the $M_{22}$ graph, the Gewirtz graph and the
Hoffman--Singleton graph are also extremal configurations for their respective edge densities?
Another obvious case of interest is $\rho=\frac{57}{3250}$ corresponding to
the only hypothetical unknown Moore graph. More generally, can we rule out
the existence of a TFSR graph for at least one additional pair $(\rho,a)$ by
showing that actually $a(\rho)\leq a$?
For some ``non-critical'' (that is, not corresponding to TFSR graphs) $\rho$
it is sometimes also possible to come up with constructions providing
non-trivial {\bf lower} bounds on $a(\rho)$. A good example\footnote{Let us
recall that we confine ourselves to the region $\rho\leq 1/3$. A complete
description of all non-zero values $a(\rho)$ for $\rho>1/3$ follows from
\cite{BrT}.} is provided by the Kneser graphs $\text{KG}_{3k-1,k}$ having
$\rho=\frac{{2k-1 \choose k}}{{3k-1\choose k}}$ and $a=\frac{1}{{3k-1\choose
k}}$ but there does not seem to be any reason to believe that they are
optimal. Are there any other values of $\rho$ for which we can compute
$a(\rho)$ exactly? Of particular interest here is the value $\rho=1/3$
critical for the Erd\H{o}s--Simonovits problem (see again \cite[Problem 1]{BrT}
and the literature cited therein). Can we compute $a(1/3)$ or at least
determine whether $a(1/3)=0$ or not?
Speaking of which, is there {\bf any} rational $\rho\in (0,1/3]$ for which
$a(\rho)=0$? Equivalently, does there exist $\rho\in [0,1/3]$ for which there
are no triangle-free $\rho$-regular graphs (or, which is the same, weighted
twin-free graphs) of diameter 2? Note for comparison that there are many such
values for $\rho>1/3$; in fact, all examples leading to non-zero $a(\rho)$
fall into one of a few infinite series.
We conclude by remarking in connection with this question that regular
weighted triangle-free twin-free graphs of diameter 2 seem to be extremely
rare: a simple computer search has shown that the Petersen graph is the {\bf only} such
graph on $\leq 11$ vertices with $\rho\leq 1/3$.
\bibliographystyle{alpha}
\section{Introduction}
By April 2021, the coronavirus (COVID-19) pandemic had affected 219 nations around the world, with 136 million total cases and 2.94 million deaths. The pandemic was accompanied by a rapid increase in social media usage: during 2020, 490 million new users joined, a year-on-year growth of more than 13\% \cite{Kemp2021users}. This growth mainly resulted from the pandemic's impact on day-to-day activities and from the increased need to share and gather pandemic-related information.
As a drawback of this exponential growth, the dark side of social media was further exposed during the COVID-19 infodemic \cite{mourad2020critical}. The spread of false and harmful information caused panic and confusion, making the pandemic situation worse. Moreover, the presence of false information reduced the usability of the huge volume of fast-propagating data generated on social media platforms. To handle these issues and utilise social media data effectively, accurate identification of false information is crucial. Given the high rate of data generation on social media, manual approaches to filtering false information require significant human effort. Therefore, an automated technique to tackle this problem will be invaluable to the community.
Targeting the infodemic that emerged with COVID-19, the NLP4IF-2021 shared task was designed to predict several properties of a tweet, including harmfulness, falseness, verifiability, interest to the general public and required attention. The participants were required to predict the binary aspect of the given properties for test sets in three languages: Arabic, Bulgarian and English, provided by the organisers. Our team used recently released transformer models with a text classification architecture to make the predictions and achieved 4$^{th}$ place in all three languages while maintaining the simplicity and universality of the method. In this paper, we present our approach, with details of the architecture and an experimental study. We also make our code freely available to everyone interested in working in this area using the same methodology\footnote{The GitHub repository is publicly available on \url{https://github.com/tharindudr/infominer}}.
\section{Related Work}
Identifying false information in social media has been a major research topic in recent years. False information detection methods can be categorised into two main areas: content-based methods and social context-based methods \cite{10.1145/3393880}.
Content-based methods rely on features of the content of the tweet. For example, \citet{10.1145/1963405.1963500} find that highly credible tweets have more URLs, and their textual content is usually longer than that of lower-credibility tweets. Many studies utilize lexical and syntactic features to detect false information. For instance, \citet{qazvinian-etal-2011-rumor} find that part of speech (POS) is a distinguishable feature for false information detection. \citet{6729605} find that some types of sentiment are apparent features for machine learning classifiers, including positive sentiment words (e.g., love, nice, sweet), negating words (e.g., no, not, never), cognitive action words (e.g., cause, know), and inferring action words (e.g., maybe, perhaps); they then propose a periodic time-series model to identify key linguistic differences between true and fake tweets. With word embeddings and deep learning gaining popularity in natural language processing, most recent false information detection methods feed embeddings of the content into a deep learning network to perform the classification \cite{10.5555/3061053.3061153}.
Traditional content-based methods analyse the credibility of a single microblog or claim in isolation, ignoring the high correlation between different tweets and events. In contrast, social context-based methods take different tweets of a user profile or an event into account to identify false information. Many studies detect false information by analyzing users' credibility \cite{li-etal-2019-rumor} or stances \cite{10.1145}. Since this shared task is mainly focused on the content of the tweet to detect false information, our method can be identified as a content-based false information identification approach.
\section{Data}
\label{sec:data}
The task is about predicting several binary properties of a tweet on COVID-19: whether it is harmful, whether it contains a verifiable claim, whether it may be of interest to the general public, whether it appears to contain false information, etc.
\cite{NLP4IF-2021-COVID19-task}. The data has been released for three languages; English, Arabic and Bulgarian \footnote{The dataset can be downloaded from \url{https://gitlab.com/NLP4IF/nlp4if-2021}}. Following are the binary properties that the participants should predict for a tweet.
\begin{enumerate}[I]
\item \textbf{Verifiable Factual Claim}: Does the tweet contain a verifiable factual claim?
\item \textbf{False Information}: To what extent does the tweet appear to contain false information?
\item \textbf{Interest to General Public}: Will the tweet have an effect on or be of interest to the general public?
\item \textbf{Harmfulness}: To what extent is the tweet harmful to the society?
\item \textbf{Need of Verification}: Do you think that a professional fact-checker should verify the claim in the tweet?
\item \textbf{Harmful to Society}: Is the tweet harmful for the society?
\item \textbf{Require attention}: Do you think that this tweet should get the attention of government entities?
\end{enumerate}
\section{Architecture}
The main motivation for our architecture is the recent success of transformer models in various natural language processing tasks such as sequence classification \cite{ranasinghe-hettiarachchi-2020-brums, ranasinghe2019brums, pitenis-etal-2020-offensive}, token classification \cite{mudes, ranasinghe2021semeval}, language detection \cite{jauhiainen2021}, word context prediction \cite{hettiarachchi-ranasinghe-2020-brums, hettiarachchi2021semeval}, question answering \cite{yang-etal-2019-end-end-open}, etc. Apart from providing strong results compared to RNN-based architectures \cite{hettiarachchi-ranasinghe-2019-emoji, ranasinghe2019brums}, transformer models like BERT \cite{devlin-etal-2019-bert} provide pretrained multilingual language models that support more than 100 languages, which helps to solve the multilingual issues of these tasks \cite{ranasinghe2020wlv, ranasinghe2021tallip, ranasinghe-zampieri-2020-multilingual}.
For sequence classification tasks, transformer models take a sequence as input and output a representation of the sequence. There can be one or two segments in a sequence, separated by a special token [SEP] \cite{devlin-etal-2019-bert}. In our approach, we considered a tweet as a single-segment sequence, so no [SEP] token is used. Another special token, [CLS], is used as the first token of the sequence and contains a special classification embedding. For text classification tasks, transformer models take the final hidden state $\textbf{h}$ of the [CLS] token as the representation of the whole sequence \cite{10.1007/978-3-030-32381-3_16}. A simple softmax classifier is added on top of the transformer model to predict the probability of a class $c$ as shown in Equation \ref{equ:softmax}, where $W$ is the task-specific parameter matrix. During classification, all the parameters of the transformer, as well as $W$, are fine-tuned jointly by maximising the log-probability of the correct label. The architecture of the transformer-based sequence classifier is shown in Figure \ref{fig:architecture}.
\begin{equation}
\label{equ:softmax}
p(c|\textbf{h}) = softmax(W\textbf{h})
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{images/InfoMiner.png}
\caption{Text Classification Architecture}
\label{fig:architecture}
\end{figure}
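As a minimal illustration (this sketch is ours and is not the exact competition code; the model name corresponds to one of the pretrained models described below), such a per-label binary classifier can be instantiated with the HuggingFace \texttt{transformers} library as follows:
\begin{verbatim}
# Sketch of one per-label binary classifier.
import torch
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2)  # linear + softmax head on [CLS]

batch = tokenizer(["example tweet text"], padding=True,
                  truncation=True, return_tensors="pt")
logits = model(**batch).logits          # shape: (batch_size, 2)
probs = torch.softmax(logits, dim=-1)   # p(c|h) as in the equation above
\end{verbatim}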
\renewcommand{\arraystretch}{1.2}
\begin{table*}[!ht]
\begin{center}
\small
\scalebox{0.95}{
\begin{tabular}{l l c c c c c c c | c}
\toprule
&{\bf \makecell{Model} } & \makecell{\bf I} & \makecell{\bf II} & \makecell{\bf III} & \makecell{\bf IV} & \makecell{\bf V} & \makecell{\bf VI} & \makecell{\bf VII} & \makecell{\bf Mean} \\
\midrule
\multirow{2}{*}{\bf English}
& roberta-base & 0.822 & 0.393 & 0.821 & 0.681 & 0.461 & 0.235 & 0.251 & 0.523 \\
& bert-base-cased & 0.866 & 0.461 & 0.893 & 0.740 & 0.562 & 0.285 & 0.303 & 0.587 \\
\midrule
\multirow{3}{*}{\bf Arabic}
& bert-multilingual-cased & 0.866 & 0.172 & 0.724 & 0.400 & 0.557 & 0.411 & 0.625 & 0.536 \\
& arabert-v2 & 0.917 & 0.196 & 0.782 & 0.469 & 0.601 & 0.433 & 0.686 & 0.583 \\
& arabert-v2-tokenized & 0.960 & 0.136 & 0.873 & 0.571 & 0.598 & 0.424 & 0.678 & 0.606 \\
\midrule
\multirow{1}{*}{\bf Bulgarian} &
bert-multilingual-cased & 0.845 & 0.098 & 0.516 & 0.199 & 0.467 & 0.303 & 0.196 & 0.375 \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Macro F1 between the algorithm predictions and human annotations for the development set in all the languages. Results are sorted by mean F1 score for each language.}
\label{tab:results}
\end{table*}
\renewcommand{\arraystretch}{1.2}
\begin{table*}[!ht]
\begin{center}
\small
\scalebox{0.95}{
\begin{tabular}{l l c c c c c c c | c}
\toprule
&{\bf \makecell{Model} } & \makecell{\bf I} & \makecell{\bf II} & \makecell{\bf III} & \makecell{\bf IV} & \makecell{\bf V} & \makecell{\bf VI} & \makecell{\bf VII} & \makecell{\bf Mean} \\
\midrule
\multirow{3}{*}{\bf English}
& Best System & 0.835 & 0.913 & 0.978 & 0.873 & 0.882 & 0.908 & 0.889 & 0.897 \\
& InfoMiner & 0.819 & 0.886 & 0.946 & 0.841 & 0.803 & 0.884 & 0.867 & 0.864 \\
& Random Baseline & 0.552 & 0.480 & 0.457 & 0.473 & 0.423 & 0.563 & 0.526 & 0.496\\
\midrule
\multirow{3}{*}{\bf Arabic}
& Best System & 0.843 & 0.762 & 0.890 & 0.799 & 0.596 & 0.912 & 0.663 & 0.781 \\
& InfoMiner & 0.852 & 0.704 & 0.774 & 0.743 & 0.593 & 0.698 & 0.588 & 0.707 \\
& Random Baseline & 0.510 & 0.444 & 0.487 & 0.442 & 0.476 & 0.584 & 0.533 & 0.496 \\
\midrule
\multirow{3}{*}{\bf Bulgarian}
& Best System & 0.887 & 0.955 & 0.980 & 0.834 & 0.819 & 0.678 & 0.706 & 0.837 \\
& InfoMiner & 0.786 & 0.749 & 0.419 & 0.599 & 0.556 & 0.303 & 0.631 & 0.578 \\
& Random Baseline & 0.594 & 0.502 & 0.470 & 0.480 & 0.399 & 0.498 & 0.528 & 0.496 \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Macro F1 between the InfoMiner submission and human annotations for the test set in all the languages. Best System shows the results of the best model submitted for each language, as reported by the task organisers \cite{NLP4IF-2021-COVID19-task}.}
\label{tab:results_test}
\end{table*}
\section{Experimental Setup}
We considered the whole task as seven different classification problems, training a separate transformer model for each label mentioned in Section \ref{sec:data}. This gave us the flexibility to fine-tune each classification model to its specific label rather than to the whole task. Given the very unbalanced nature of the dataset, the transformer models tend to overfit and predict only the majority class. Therefore, for each label we took the number of instances of the minority class in the training set and undersampled the majority class to the same number of instances.
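The undersampling step can be sketched as follows (the code is ours and purely illustrative; the column name is hypothetical):
\begin{verbatim}
# Undersample the majority class of one label so that both classes
# end up with as many instances as the minority class.
import pandas as pd

def undersample(df: pd.DataFrame, label_col: str,
                seed: int = 777) -> pd.DataFrame:
    counts = df[label_col].value_counts()
    n_min = counts.min()
    parts = [df[df[label_col] == value].sample(n=n_min,
                                               random_state=seed)
             for value in counts.index]
    # shuffle the concatenated, now balanced, training data
    return pd.concat(parts).sample(frac=1.0, random_state=seed)
\end{verbatim}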
We then divided this undersampled dataset into a training set and a validation set using a 0.8:0.2 split. We mainly fine-tuned the learning rate and the number of epochs of the classification model manually to obtain the best results for the development set provided by the organisers in each language. We obtained $1e^{-5}$ as the best value for the learning rate and 3 as the best value for the number of epochs for all the languages and all the labels. The other configurations of the transformer model were kept constant across the languages in order to ensure consistency between them. We used a batch size of eight, the Adam optimiser \cite{kingma2014adam} and a linear learning rate warm-up over 10\% of the training data. The models were trained using only the training data. We performed early stopping if the evaluation loss did not improve over ten evaluation rounds. A summary of the hyperparameters and the values used to obtain the reported results is given in Table \ref{tab:params}. The optimised hyperparameters are marked with $\ddag$ and their optimal values are reported; the rest of the hyperparameter values were kept constant. We used an Nvidia Tesla K80 GPU to train the models. All the experiments were run with five different random seeds and, as the final result, we took the majority class predicted by these different random seeds, as mentioned in \citet{hettiarachchi-ranasinghe-2020-infominer}. We used the following pretrained transformer models for the experiments.
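The majority vote over the five seeds can be sketched as follows (illustrative code; with an odd number of seeds a tie is impossible):
\begin{verbatim}
# Combine 0/1 predictions of several runs by majority voting.
import numpy as np

def majority_vote(per_seed_predictions):
    # per_seed_predictions: list of 1-D 0/1 arrays, one per seed
    votes = np.stack(per_seed_predictions)  # (n_seeds, n_examples)
    return (votes.mean(axis=0) >= 0.5).astype(int)
\end{verbatim}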
\paragraph{bert-base-cased} - Introduced in \citet{devlin-etal-2019-bert}, this model has been trained on an English Wikipedia dump using the Masked Language Modelling (MLM) objective. English BERT comes in two variants: the base model and the large model. Since we built seven different models, one for each label, we decided to use the base model to save resources and time.
\paragraph{roberta-base} - Introduced in \citet{liu2019roberta}, RoBERTa builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates. RoBERTa has outperformed BERT in many NLP tasks, which motivated us to use RoBERTa in this research too. Again, we only considered the base model.
\paragraph{bert-multilingual-cased} - Introduced in \citet{devlin-etal-2019-bert}, the model has been trained on a Wikipedia dump of 104 languages using the MLM objective. This model has shown good performance in a variety of languages and tasks. Therefore, we used this model for Arabic and Bulgarian.
\paragraph{AraBERT}
Recently, language-specific BERT-based models have proven to be very efficient at language understanding. AraBERT \cite{antoun-etal-2020-arabert} is such a model, built for Arabic with BERT using scraped Arabic news websites and two publicly available Arabic corpora: the 1.5-billion-word Arabic Corpus \cite{elkhair201615} and OSIAN, the Open Source International Arabic News Corpus \cite{zeroual-etal-2019-osian}. Since AraBERT has outperformed multilingual BERT in many Arabic NLP tasks \cite{antoun-etal-2020-arabert}, we used this model for Arabic in this task. There are two versions of AraBERT, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the Farasa Segmenter \cite{abdelali-etal-2016-farasa}.
\section{Results}
When it comes to selecting the best model for each language, the model with the highest mean F1 score among the evaluated models was chosen. Since our approach uses a separate model for each label, our main goal was to achieve good F1 scores using lightweight models. The limited resources available for training separate models for all seven labels made this very challenging, but we still managed to evaluate several candidates.
As depicted in Table \ref{tab:results}, for English, the bert-base-cased model performed better than the roberta-base model. For Arabic, arabert-v2-tokenized performed better than the other two models we considered. For Bulgarian, given the limited time, we could only train the bert-multilingual-cased model; therefore, we submitted its predictions.
As shown in Table \ref{tab:results_test}, our submission is very competitive with the best system submitted in each language and well above the random baseline. Our team was ranked 4$^{th}$ in all the languages.
\section{Conclusion}
We have presented the system by the InfoMiner team for the NLP4IF-2021 shared task on Fighting the COVID-19 Infodemic. We have shown that multiple transformer models, one trained per label, can be successfully applied to this task. Furthermore, we have shown that undersampling can be used to prevent the transformer models from overfitting to the majority class in an unbalanced dataset like this one. Overall, our approach is simple but effective, as it achieved 4$^{th}$ place on the leaderboard for all three languages.
One limitation of our approach is that it requires maintaining seven transformer models for the seven binary properties of this task, which can be costly in a practical scenario; it also restricted us from experimenting with different transformer types, given the limited time and resources. Therefore, in future work, we are interested in remodelling the task as a multilabel classification problem, where a single transformer model can be used to predict all seven labels.
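A possible sketch of this reformulation (ours and untested) replaces the seven separate heads with a single seven-way sigmoid head trained with binary cross-entropy:
\begin{verbatim}
# One model with seven sigmoid outputs, one per binary property.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=7,
    problem_type="multi_label_classification")

logits = torch.randn(8, 7)                     # stand-in model output
targets = torch.randint(0, 2, (8, 7)).float()  # seven 0/1 labels
loss = torch.nn.BCEWithLogitsLoss()(logits, targets)
\end{verbatim}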
\section*{Acknowledgments}
We would like to thank the shared task organizers for making this interesting dataset available. We further thank the anonymous reviewers for their insightful feedback.
\subsection{Impact of Different Edge Types}
In this section, we conduct an ablation study to analyze the impact of each edge type on the overall performance of our model. We keep the same neural architecture and vary the code representation by removing edge types. We then train the resulting variants of \methodnamews~ on the same datasets for the any-code completion and code classification tasks.
The results of our ablation study are shown in Table~\ref{tab:ablation-study}. We can see that each of the dependency edge types improves the performance on \textit{any-code completion}, implying that dependency information is useful for our Code Hierarchy. Among the three edge types, DF-E contributes the least to the performance in the any-code completion task, while CD-E has the greatest impact. For code classification, NS-E has the strongest impact, while DF-E still performs the worst. This implies two points: (1) the impact of each edge type varies depending on the downstream task; and (2) data-flow information does not always perform well, and its impact should be investigated further. This result is consistent with the work of \citet{zhou2019devign}, where the data-flow edges also perform poorly in the graph representation.
\subsection{Visualizing Semantically Equivalent Programs with the Pretrained Model}
Despite the fact that different pre-trained models use different pretraining objectives, they all aim for the same outcome: after pretraining, the model should produce similar vector representations for semantically equivalent programs, and these should lie close together in the vector space. We compare the quality of the vectors produced by our model with others to see how well \methodnamews~ performs in general.
Code classification datasets such as POJ-104~\cite{mou2016convolutional} are a good match for obtaining sets of semantically equivalent programs. We randomly sample 10 classes from POJ-104, feed them through our pretrained \methodnamews-C++~ model, and finally apply a pooling over all the node embeddings of each graph to obtain a representative vector for that graph. For the other pretrained language models, we choose CodeBERT~\cite{codebert} and CodeT5~\cite{codet5}, since they are two of the most well-known and state-of-the-art methods. In addition, they are pretrained on multiple languages, so they are a good fit for our case study. Because CodeBERT is an encoder-only model, the extracted vectors are the final hidden states of the [CLS] tokens. CodeT5, on the other hand, uses an encoder-decoder model, and the features used are the hidden states of the [EOS] tokens after the last layer of the decoder. We then project all of the vectors into two-dimensional space for visualization using t-SNE~\cite{van2008visualizing}. Figure~\ref{fig:poj104_tsne} shows the visualization of the different code models on the dataset. It can be seen that the vectors produced by \methodnamews~ group similar code snippets into the same cluster with clearer boundaries than CodeT5 and CodeBERT. This implies that our pretraining objective is superior to the others in that it produces embeddings of semantically equivalent programs that are closer in the vector space.\\
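As a minimal sketch of this procedure (assuming the per-program node embeddings have already been extracted), the pooling and projection steps can be written as follows; mean pooling is our assumption here, since only a pooling over all node embeddings is specified:
\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.manifold import TSNE

def project_programs(node_embeddings_per_program):
    # One (n_i, d) array of node embeddings per program; mean pooling
    # is an assumption made for this sketch.
    vectors = np.stack([emb.mean(axis=0)
                        for emb in node_embeddings_per_program])
    # Project the pooled program vectors to 2-D for plotting.
    return TSNE(n_components=2, random_state=0).fit_transform(vectors)
\end{lstlisting}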
\begin{figure}[h]
\centering
\subfloat[\text{HIRGAST}]{\includegraphics[width=.30\textwidth]{images/poj_hirgast.pdf}}
\subfloat[CodeT5]{\includegraphics[width=.30\textwidth]{images/poj_codet5.pdf}}
\subfloat[CodeBERT]{\includegraphics[width=.30\textwidth]{images/poj_codebert.pdf}}
\caption{Visualization with 10 classes on POJ-104 by t-SNE. Legends in the corner denote the classes.}
\label{fig:poj104_tsne}
\end{figure}
\subsection{Model Explainability} \label{subsec:graph-analysis}
\subsection{Model Explainability} \label{subsec:graph-analysis}
An interesting aspect of our model design is its ability to explain predictions. To do this, we use contrastive gradient-based saliency maps~\cite{saliency_map}. This method differentiates the model output with respect to the input, which can be obtained by back-propagation; in our case, the inputs are the nodes of the graph $\mathcal{G}$. The method assumes that the norm of a node's gradient indicates its importance. However, because negative gradients are difficult to interpret, negative values are truncated to zero so that only positive values are retained. After computing the scores for all nodes, we use min-max normalization to scale the scores between 0 and 1. Note that this can be done at both the \textit{Subtree-level} and the \textit{AST-level}. We choose code classification for this analysis, randomly select a few examples from our C++1400 test set, and show one representative sample here.
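A minimal PyTorch sketch of this scoring procedure is given below, assuming a \texttt{model} that maps node features to class logits:
\begin{lstlisting}[language=Python]
import torch

def node_saliency(model, node_features, target_class):
    # Differentiate the target logit w.r.t. the input node features.
    x = node_features.clone().requires_grad_(True)
    logits = model(x)                  # assumed shape: (num_classes,)
    logits[target_class].backward()
    grads = torch.relu(x.grad)         # truncate negative gradients to 0
    scores = grads.norm(dim=-1)        # one importance score per node
    # min-max normalization to [0, 1]
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
\end{lstlisting}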
Figure \ref{fig:case-study-1-graph} shows a visualization of the nodes at the \textit{AST-level}. Each node is associated with a score representing its importance for a given prediction. Table~\ref{tab:top-10-nodes} shows the top-10 most important nodes, aligned with specific statements in the original source code. We can also visualize which tokens inside a statement are important\footnote{Due to the page constraint, we only show the visualization at the \textit{AST-level}; readers are referred to the Supplementary Material for the visualization at the \textit{Subtree-level}}.
The results in Figure \ref{fig:case-study-1-graph} and Table \ref{tab:top-10-nodes} indicate that \methodnamews~ captures the nodes having direct effects on the output; specifically, nodes 21, 13 and 22 relate to the data flow, while nodes 15, 20 and 16 relate to the control flow. The appearance of node 1 may be due to its token being duplicated in many files, which probably confuses \methodnamews~. Nodes 19 and 5 are the root nodes of the subtrees representing the while loop and the entire program, respectively; thus they contain the information of their children in addition to their own. Focusing on these nodes reveals that \methodnamews~ can understand a piece of code in a broad sense, which matches the purpose of Code Hierarchy remarkably well.
\begin{minipage}[h]{\textwidth}
\begin{minipage}[b]{0.43\textwidth}
\centering
\includegraphics[width=\linewidth]{images/case-study-1.pdf}
\captionof{figure}{The score of each node is shown on top of the corresponding node.}
\label{fig:case-study-1-graph}
\end{minipage}
\hfill
\begin{minipage}[b]{0.49\textwidth}
\centering
\begin{tabular}{ll}
\toprule
Node ID & Token \\
\midrule
1 & using namespace std\\
15 & n == 0\\
19 & while\\
20 & n > 0\\
5 & int main()\\
21 & zero = zero + n / 5\\
13 & cin $\gg$ n\\
22 & n = n / 5\\
16 & break\\
7 & main\\
\bottomrule
\end{tabular}
\captionof{table}{Top 10 nodes along with their tokens}
\label{tab:top-10-nodes}
\end{minipage}
\end{minipage}
\subsection{Representing Code as Hierarchy} \label{subsec:retrieve-subtrees}
\begin{figure}[!htp]
\centering
\includegraphics[scale=.2]{images/representation.png}
\caption{An example of Code Hierarchy. The gray nodes in the AST-level layer are subtree nodes, while non-subtree nodes remain white. Each subtree node (gray node) in the AST-level layer abstracts and represents a subtree in the Subtree-level layer. The table in the bottom left corner maps each node index to its corresponding node type.}
\label{fig:code_graph_representation}
\end{figure}
We aim to identify and extract a set of subtrees $S$ from an AST $T$. We then \textit{abstract} each node that represents a subtree $s \in S$ by replacing it with a new node in $T$, reducing $T$ to another tree $T'$ of smaller size, where $Size(T') < Size(T)$. We call the new nodes added to $T$ the \textit{subtree nodes} to distinguish them from the AST nodes. The set of subtrees $S$ is kept separately.
Such subtrees form the first layer of the Code Hierarchy.
A subtree is chosen if its root type is an \textit{expression} or a \textit{simple statement}, where a statement that does not contain other statements is considered \textit{simple}. The reason is that very large statements, such as for loops, while loops, or if statements, can contain complex code structures. In fact, such large statements may occupy a large portion of the content of some small programs, reducing the effectiveness of our MSP task~\footnote{The set of node types can be looked up in our Supplementary Material}.
These steps can be done by a depth-first preorder traversal of the AST. If the type of a node $n$ is one of the selected subtree root types, we replace the whole subtree rooted at $n$ and do not traverse further into lower depths. This procedure is executed recursively until all subtrees are obtained. Once done, we get a new tree $T'$ and a set of subtrees $S$. Note that some nodes (the subtree nodes) in $T'$ point to elements of $S$; this feature is the key to representing the Hierarchy. Figure~\ref{fig:code_graph_representation} depicts such a \textit{Code Hierarchy}, where node 5 is a new node in the tree $T'$ representing a specific subtree with the three nodes $\{b, >, 0\}$.
\par
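A minimal sketch of this traversal is shown below; the \texttt{Node} class and the small set of root types are illustrative stand-ins (the full per-language type lists are given in the Supplementary Material):
\begin{lstlisting}[language=Python]
class Node:
    def __init__(self, type_, children=None):
        self.type, self.children = type_, children or []

SUBTREE_TYPES = {"assignment_expression", "call_expression",
                 "return_statement", "break_statement"}

def abstract_tree(node, subtrees):
    # Depth-first preorder traversal: when a selected root type is met,
    # store the whole subtree in S and stop descending.
    if node.type in SUBTREE_TYPES:
        subtrees.append(node)
        return Node("subtree_%d" % (len(subtrees) - 1))  # node of T'
    return Node(node.type,
                [abstract_tree(c, subtrees) for c in node.children])
\end{lstlisting}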
\subsection{Converting ASTs to Graphs with Semantic Enrichment}\label{subsec:hie-graph-rep}
Given the tree $T'$ from the previous step, we enrich $T'$ with semantic information to turn it into a graph $\mathcal{G}$. This information is added at the AST-level by connecting the nodes with different edge types. In our process, there are four edge types: AST Edge, Control-Dependence Edge, Data-Flow Edge and Next-Subtree Edge. Each of them is described below, and a minimal construction sketch follows the list.
\begin{enumerate}[leftmargin=*]
\item \textbf{AST Edge (AST-E):} Since $T\prime$ consists of a mixture of original AST nodes from $T$ and new subtree nodes, the AST edges serve as a skeleton for the syntactical representation of code. In Figure ~\ref{fig:code_graph_representation}, AST edges are denoted as green arrows.
\item \textbf{Control-Dependence Edge (CD-E):} A program instruction is control dependent on a preceding instruction if the evaluation of the latter determines whether the former executes. For example, in Figure~\ref{fig:code_graph_representation}, the subtree $sum = b$ is \textit{control dependent} on the subtree $b > 0$, so $b > 0$ connects to $sum = b$ through a control-dependence edge (red dashed arrows in Figure~\ref{fig:code_graph_representation}). In addition, to keep the order of execution within a function, we connect the root of one statement to the root of the next statement. In Figure~\ref{fig:code_graph_representation}, for instance, there is a control-dependence edge connecting node 3 to node 4, where node 3 represents the declaration statement and node 4 is the root of the if statement.
\item\textbf{Data-Flow Edge (DF-E):} Data flow indicates how the values of variables change over time as a program executes. We use Use-Define chain analysis~\cite{weiser1984program}, a well-known data-flow analysis technique, to extract this information. For example, the variable $sum$ is first defined in line 2 and then used in lines 4 and 6, so the subtree $sum = 0$ connects to the two subtrees $sum = b$ and $return~sum$ through data-flow edges (yellow arrows in Figure~\ref{fig:code_graph_representation}).
\item\textbf{Next-Subtree Edge (NS-E):} This edge type represents the textual order of the subtrees, not their execution order. For example, the subtree $sum = b$ is written right after $b > 0$, but $sum = b$ is not necessarily executed after $b > 0$. Accordingly, $b > 0$ connects to $sum = b$ through a next-subtree edge (blue arrows in Figure~\ref{fig:code_graph_representation}).
\end{enumerate}
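The sketch below illustrates how the four typed edge lists could be assembled; the inputs are assumed to come from the parser and from standard control-dependence and use-define analyses:
\begin{lstlisting}[language=Python]
def build_edges(tree_edges, control_deps, def_use_pairs, subtree_order):
    # tree_edges: parent->child pairs of T'; subtree_order lists the
    # subtree roots in textual order.
    return {
        "AST-E": list(tree_edges),
        "CD-E":  list(control_deps),
        "DF-E":  list(def_use_pairs),
        "NS-E":  list(zip(subtree_order, subtree_order[1:])),
    }
\end{lstlisting}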
\subsection{Neural Network Architecture} \label{sec:model-architecture}
In this section, we describe how \methodnamews~ processes the Code Hierarchy obtained in the steps above. Our architecture consists of a Tree-based Convolutional Neural Network~\cite{mou2016convolutional} (TBCNN) and a Heterogeneous Graph Transformer~\cite{hgt} (HGT). Each node in a subtree $s \in S$ carries two attributes: \textit{token} and \textit{type}. The initial representation of a node is computed by concatenating the embeddings of its \textit{token} and its \textit{type}; a token is encoded as the sum of the embeddings of its subtokens, and both token and type embeddings are retrieved from two randomly initialized, learnable embedding matrices. The TBCNN then receives each vectorized subtree $s$ in the \textit{Subtree-level layer} and encodes it into a fixed-size embedding, which also serves as the initial representation of the corresponding node in the graph at the \textit{AST-level layer}. The HGT is then used to perform message passing on the nodes to accumulate information.
\subsubsection{Tree-based Convolutional Neural Network (TBCNN)}
TBCNN~\cite{mou2016convolutional} is designed to process tree-structure through the tree-based convolution operator. In a TBCNN, there is at least one tree-based convolutional layer. Each layer is a feature detector and has a fixed-depth convolutional window called the kernel, sliding over the entire tree to extract features. Formally, this procedure can be summarized as:
$y = f \left( \sum_{i=1}^{n} W_{\text{conv}, i} \cdot x_i + b_{\text{conv}} \right)$, where $f$ is an activation function, $W_{\text{conv}, i}$ are the weight matrices, $x_i$ are the vectors of the nodes inside the sliding window, and $b_{\text{conv}}$ is the bias. Because only a fixed number of weight matrices is available, TBCNN treats all trees as continuous binary trees: the effective matrix $W_{\text{conv}, i}$ applied to each node is a position-dependent combination of the fixed matrices, and thus differs from node to node.
In summary, at each convolutional step, the feature of node $i$ is accumulated from its direct children within a sliding window. At the end of this step, the fixed-size embedding of a subtree is computed by a max pooling operator over all nodes in the subtree.
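For illustration, a deliberately simplified PyTorch variant of this operator (one parent and one child matrix instead of the position-interpolated matrices of the original TBCNN) could look as follows:
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn

class TreeConv(nn.Module):
    # Simplified tree-based convolution followed by a max pool that
    # yields a fixed-size vector for the whole subtree.
    def __init__(self, dim):
        super().__init__()
        self.w_parent = nn.Linear(dim, dim, bias=False)
        self.w_child = nn.Linear(dim, dim, bias=False)
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, x, children):
        # x: (n, d) node vectors; children[i]: indices of node i's children.
        out = self.w_parent(x) + self.bias
        rows = [out[i] + (self.w_child(x[ch]).sum(0) if ch else 0)
                for i, ch in enumerate(children)]
        y = torch.tanh(torch.stack(rows))
        return y.max(dim=0).values   # max pool -> subtree embedding
\end{lstlisting}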
\subsubsection{Heterogeneous Graph Transformer (HGT)}
In this section, we describe the Heterogeneous Graph Transformer used to process our graph.
A heterogeneous graph is defined as a directed graph $\mathcal{G} = (\mathcal{V}_1 \cup \mathcal{V}_2, \mathcal{E}, \mathcal{A}, \mathcal{R})$, where $\mathcal{V}_1$ is the set of nodes representing subtrees, $\mathcal{V}_2$ is the set of non-subtree AST nodes, and $\mathcal{E}$ is the edge set. Each node and edge is associated with a type $\tau(n) \in \mathcal{A}$ and $\phi(e) \in \mathcal{R}$, respectively. As previously mentioned, the embedding of each node $v_1 \in \mathcal{V}_1$ is computed in the TBCNN step. For each node $v_2 \in \mathcal{V}_2$, we first compute an initial vector by concatenating the embeddings of its \textit{token} and its \textit{type}, feed it to a 1-layer nonlinear network, and annotate the node with the obtained vector. An HGT layer can be decomposed into three components: heterogeneous mutual attention, heterogeneous message passing and target-specific aggregation. The overall process can be written as:
\begin{equation}
\small
H^l[t] \leftarrow \underset{\forall s \in N(t), \forall e \in E(s, t)}{Aggregate}\left( Attention(s,e,t) \cdot Message(s,e,t) \right)
\end{equation}
where $N(t)$ is the set of source nodes of node $t$ and $E(s,t)$ denotes all the edges from node $s$ to node $t$. $H^{l}$ is the output of the $l$-th HGT layer, which the subsequent layer receives as input. Given a node $t$, $Attention(\cdot)$ computes the score for each source node $s \in N(t)$, $Message(\cdot)$ extracts the message from the source node $s$, and $Aggregate(\cdot)$ is an operation by which the node $t$ incorporates the messages of all the source nodes $N(t)$ (details can be found in our Supplementary Material).
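As a usage sketch (assuming PyTorch Geometric's implementation of this layer), the AST-level message passing could be set up as follows; the listed type combinations are illustrative:
\begin{lstlisting}[language=Python]
from torch_geometric.nn import HGTConv

# Node and edge types of the heterogeneous AST-level graph; further
# type combinations may be needed in practice.
metadata = (
    ["subtree", "ast"],
    [("ast", "AST-E", "ast"), ("ast", "AST-E", "subtree"),
     ("subtree", "CD-E", "subtree"), ("subtree", "DF-E", "subtree"),
     ("subtree", "NS-E", "subtree")],
)
conv = HGTConv(in_channels=512, out_channels=512,
               metadata=metadata, heads=8)
# h_dict = conv(x_dict, edge_index_dict), where x_dict maps each node
# type to its (num_nodes, 512) feature matrix and edge_index_dict maps
# each edge type to its (2, num_edges) index tensor.
\end{lstlisting}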
\subsection{Pre-training as Missing Subtree Prediction (MSP)}
Because training our model from scratch for each task is costly, we use a pretraining strategy to train a base model before fine-tuning it on downstream tasks. We propose a novel pretraining objective called \textit{Missing Subtree Prediction} (MSP), which predicts a missing subtree, represented as a sequence of tokens, from its surrounding context\footnote{For example, if the masked node in Figure \ref{fig:code_graph_representation} is node 6, the model tries to predict the missing subtree as the sequence \texttt{sum = b}.}.
We use the information of the AST-level layer and the other subtrees in the Subtree-level layer to predict a target missing subtree. By doing this, the model learns the relationships between different code components, both syntactically and semantically, i.e., how the other code components are organized structurally to reconstruct the subtree. We feed all node embeddings from the HGT's output into a vanilla Transformer decoder to predict the subtree tokens. Formally, given a set of training samples $D=\left \{ \left \langle \textbf{\text{n}}^{(s)}, \textbf{\text{y}}^{(s)} \right \rangle \right \}_{s=1}^S$, where $\textbf{\text{n}}^{(s)}$ is the set of nodes after randomly masking one of the nodes in the graph $\mathcal{G}^{(s)}$ and $\textbf{\text{y}}^{(s)}$ is the token sequence of the masked node with length $J^{(s)}$, the pretraining objective is to maximize the log-likelihood of the training data:
\begin{equation}
\small
\underset{\boldsymbol{\theta}}{\text{max}} \mathcal{L}\left ( \boldsymbol{\theta} \right )= \underset{\boldsymbol{\theta}}{\text{min}} \sum_{s=1}^S -\log P\left(\textbf{\text{y}}^{(s)} | \textbf{\text{n}}^{(s)} ;\boldsymbol{\theta} \right) = \underset{\boldsymbol{\theta}}{\text{min}} \sum_{s=1}^S \sum_{j=1}^{J^{(s)}}- \log P\left(\textbf{\text{y}}_{j}^{(s)} | \textbf{\text{n}}^{(s)}, \textbf{\text{y}}_{<j}^{(s)} ;\boldsymbol{\theta} \right)
\end{equation}
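A minimal sketch of the corresponding training step, assuming a decoder that returns per-step vocabulary logits given the node states as memory, is:
\begin{lstlisting}[language=Python]
import torch.nn.functional as F

def msp_loss(decoder, node_states, target_tokens):
    # Teacher forcing: shift the token sequence of the masked subtree
    # and decode it conditioned on the HGT node states (the memory).
    tgt_in, tgt_out = target_tokens[:-1], target_tokens[1:]
    logits = decoder(node_states, tgt_in)    # assumed: (J-1, vocab)
    return F.cross_entropy(logits, tgt_out)  # mean -log P(y_j | n, y_<j)
\end{lstlisting}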
\subsection{Model Pretraining with MSP}
It should be noted that our model is language-agnostic. We choose Java and C++ as the two programming languages in our case and pretrain two models on two datasets. These models are then used for evaluation or fine-tuning in the following sections.
For Java, we choose the Java-small~\cite{alon2018code2seq} dataset, which has been used for pretraining in previous work~\citenumber{alon2020slm}. In addition, it is also used for the any-code completion task~\cite{alon2020slm}, which is the aim of our evaluation. This dataset has been split into 1210272/19165/9156 samples for training/testing/validation. We only use the training and validation parts of this dataset for our pretraining.
For C++, we choose the C++1000 dataset from Project CodeNet~\cite{puri2021codenet}. It comprises a large number of C++ programs in 1000 classes for the code classification task and has been split into 316799/98516/78702 samples for training/testing/validation. Since we use it for pretraining, we do not consider the class information. This dataset is chosen for the same reasons as Java-small: it contains small programs, and it is relatively clean and large enough.
We parse all of the training samples in the two datasets (Java-small and C++1000) into ASTs using tree-sitter~\footnote{https://github.com/tree-sitter/tree-sitter}. For each sample, we identify all of the statement and expression subtrees, and randomly mask subtrees to generate training instances for our MSP task. This step results in two foundation models: \methodnamews-Java~ and \methodnamews-C++~.
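A sketch of this preprocessing is shown below; API details depend on the version of the tree-sitter Python bindings, and the grammar bundle path and the \texttt{pick\_subtree\_nodes} helper are assumptions:
\begin{lstlisting}[language=Python]
import random
from tree_sitter import Language, Parser

JAVA = Language("build/languages.so", "java")  # assumed prebuilt grammar
parser = Parser()
parser.set_language(JAVA)

def make_msp_instance(source, pick_subtree_nodes):
    # pick_subtree_nodes: returns the statement/expression subtree roots.
    tree = parser.parse(source)                # source: bytes
    node = random.choice(pick_subtree_nodes(tree.root_node))
    masked = source[:node.start_byte] + b"<mask>" + source[node.end_byte:]
    target = source[node.start_byte:node.end_byte]
    return masked, target
\end{lstlisting}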
\subsection{Any-code completion}
\textbf{Datasets.} We choose the test set of Java-small for evaluation, which consists of 98516 samples. For each sample, we randomly select one subtree for masking, resulting in 98516 test instances.\\
\textbf{Baselines.} We choose SLM~\cite{alon2020slm} as the baseline for evaluation; it is the state-of-the-art technique for the any-code completion task. We follow the steps described in the official artifact~\footnote{https://github.com/tech-srl/slm-code-generation} to create and process our test instances so that they fit the format of SLM, for a fair comparison. We then use the test API provided by SLM to evaluate our test instances.
We also choose sequence-to-sequence models as additional baselines: a vanilla Transformer~\cite{NIPS2017_attention} and a BiLSTM. Given a code snippet, we replace the target with a special token
\texttt{\textless mask\textgreater} and train the network to predict the target as a sequence of subtokens. The Transformer uses six layers, $d_{\text{model}} = 768$, $d_{\text{ff}} = 3072$ and twelve self-attention heads per layer. For the BiLSTM, the encoder is a bidirectional 4-layer LSTM and the decoder is a unidirectional 4-layer LSTM with hidden size $d_{\text{hidden}} = 1000$. \\
\textbf{Metrics.} We use top-1 exact match accuracy (Acc$@$1) and BLEU~\cite{papineni-etal-2002-bleu} as the metrics. Under Acc$@$1, a prediction generated by the model counts as correct if it is identical to the target sequence (ignoring case and whitespace).\\
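A possible implementation of this exact-match check is:
\begin{lstlisting}[language=Python]
def exact_match(prediction, target):
    # Acc@1 helper: compare after removing whitespace and lowering case.
    normalize = lambda s: "".join(s.split()).lower()
    return normalize(prediction) == normalize(target)
\end{lstlisting}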
\textbf{Results.} Table \ref{tab:any-code-completion} shows that \methodnamews-Java~ achieves the best results among the baselines in terms of Acc@1 and BLEU. Note that we do not perform any training here; we use the foundation model \methodnamews-Java~ to evaluate the test instances of Java-small. This demonstrates the effectiveness of using the Code Hierarchy as the context to predict missing subtrees.
{\footnotesize
\begin{table}[!htb]
\begin{minipage}{.5\linewidth}
\caption{Results on any-code completion}
\begin{tabular}{ l c c }
\toprule
Methods & Acc@1 & BLEU\\
\midrule
SLM & 5.31 & 23.96 \\
Transformer & 7.78 & 28.11\\
BiLSTM $\rightarrow$ LSTM & 6.37 & 26.77\\
\midrule
\methodnamews-Java~ & \textbf{10.02} & \textbf{31.51}\\
\bottomrule
\end{tabular}
\label{tab:any-code-completion}
\end{minipage}
\begin{minipage}{.5\linewidth}
\caption{Results on code clone detection}
\quad
\begin{tabular}{ l c c c}
\toprule
Methods & Precision & Recall & F1\\
\midrule
CDLH & 47 & 73 & 57\\
PDG+GGNN&77.3&43.6&55.8\\
ASTNN & 98.5 & 88.3 & 93.1 \\
\midrule
\methodnamews-C++~ & 97.3 & \textbf{96.3} & \textbf{97.1}\\
\bottomrule
\end{tabular}
\label{tab:code-clone-detection}
\end{minipage}
\end{table}
}
\subsection{Code Classification}
\textbf{Datasets.} We choose POJ-104~\cite{mou2016convolutional}, since it is one of the most well-known benchmarks for code classification. However, good results are said to be comparatively easy to achieve on POJ-104, and it is small-scale (52000 samples for 104 classes). Since we aim at a large-scale evaluation for this task, we need a larger dataset. We therefore also chose the C++1400 and C++1000 datasets from Project CodeNet~\citenumber{puri2021codenet}, which are on a much larger scale for code classification. C++1400 is made up of many C++ programs organized into 1400 classes; it consists of 267413/83562/66868 samples for training/testing/validation. \\
\textbf{Baselines.} We follow \citet{puri2021codenet} in choosing these baselines: MLP, CNN, C-BERT~\citenumber{DBLP:c-bert}, GCN~\citenumber{kipf2017semi} and GIN~\citenumber{xu2018gin} for the C++1000 and C++1400 datasets. For POJ-104, we refer to \cite{zhang2019astnn} and choose ASTNN~\citenumber{zhang2019astnn}, TBCNN~\citenumber{mou2016convolutional}, and PDG+GGNN~\citenumber{Allamanis2018} as the baselines.\\
\textbf{Results.} Table \ref{tab:classification} shows the results of \methodnamews~ for the code classification task. There are two settings: one trained from scratch, and the other fine-tuned from the foundation model (\methodnamews-C++~ in this case). Fine-tuning achieves a significant improvement ($\approx$ 2\%) over training from scratch on all of the datasets, and ``\methodnamews~ (fine-tuning)'' is also the best among the baselines on every dataset.
\subsection{Code clone detection}
\textbf{Datasets.} We follow \citet{zhang2019astnn} in creating an OJ-clone dataset based on POJ-104 by sampling a subset of all pairs of clones and non-clones. In particular, there are 29989 training samples (1957 positive), 9996 validation samples (673 positive), and 9998 testing samples (656 positive).\\
\textbf{Baselines.} We adopt the baselines used in ASTNN~\citenumber{zhang2019astnn} for the OJ-clone dataset. The first baseline is ASTNN~\citenumber{zhang2019astnn} itself, the SOTA method for clone detection on OJ-clone. The others are CDLH~\citenumber{10.5555/cdlh} and GGNN on the Program Dependence Graph (PDG+GGNN).\\
\textbf{Metrics.} We use Precision, Recall and F1 as the metrics for this task.\\
\textbf{Results.} The results for code clone detection are shown in Table \ref{tab:code-clone-detection}. Overall, \methodnamews-C++~ has the best performance in terms of F1, outperforming the second-best baseline ASTNN by a large margin ($\approx$ 4\%). Although ASTNN has higher precision, our approach is better with regard to recall and F1.
{
\begin{table}[htb]
\begin{minipage}{.48\linewidth}
\centering
\caption{\small Results on classification. We use \methodnamews-C++~ for this task.}
\fontsize{7.5}{8.5}\selectfont
\begin{tabular}{lccc}
\toprule
Methods & C++1000 & C++1400 & POJ-104\\
\midrule
MLP & 68.47 & 64.63 &-\\
CNN & 94.14 & 93.89 &-\\
C-BERT & 93.80 & 91.89&-\\
GCN & 95.88 & 95.39&-\\
GIN & 96.49 & 96.08 &-\\
ASTNN & -& -& 98.0\\
TBCNN & -&- &94.0\\
PDG+GGNN & -& -& 79.6\\
\midrule
\methodnamews~ (scratch) & 96.09 & 94.73 & 97.59\\
\methodnamews~ (fine-tuning) & \textbf{98.45} & \textbf{98.05} & \textbf{98.04} \\ \bottomrule
\end{tabular}
\label{tab:classification}
\end{minipage}\qquad
\begin{minipage}{.44\linewidth}
\caption{\small Summary of the ablation studies. We use \methodnamews-Java~ for any-code completion, and for code classification (Code C), we use \methodnamews-C++~.}
\flushright
\fontsize{7.5}{8.5}\selectfont
\centering
\begin{tabular}{ l c c c}
\toprule
&\multicolumn{2}{c}{Any-code Completion} & \multicolumn{1}{c}{Code C}\\
\cmidrule(lr){2-3}\cmidrule(lr){4-4}
& Acc@1 & BLEU & Acc \\\midrule
AST-E & 9.07 & 29.29 & 94.14 \\
AST-E + CD-E & 9.71& 30.53 & 94.30\\
AST-E + DF-E & 9.17 & 29.40 & 93.42\\
AST-E + NS-E & 9.54 & 30.23 & 94.71\\
\midrule
\methodnamews~ & \textbf{10.02} & \textbf{31.51} & \textbf{94.73}\\
\bottomrule
\end{tabular}
\label{tab:ablation-study}
\end{minipage}
\end{table}
}
\titlespacing\section{0pt}{8pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsection{0pt}{8pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsubsection{0pt}{8pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\setlength{\arrayrulewidth}{0.3mm}
\title{Learning to Represent Programs \\ with Code Hierarchies}
\author{%
Minh Nguyen\\
Ho Chi Minh City University of Technology \\ \&
FSoft AI Lab \\
[email protected]
\And
Nghi D. Q. Bui\\
School of Information Systems\\
Singapore Management University\\
[email protected] \\
}
\setlength{\parskip}{0.1cm}
\newcommand{\marginpar{FIX}}{\marginpar{FIX}}
\newcommand{\marginpar{NEW}}{\marginpar{NEW}}
\makeatletter
\def\RemoveSpaces#1{\zap@space#1 \@empty}
\makeatother
\newcommand{\citenumber}[1]{
\cite{#1}%
}%
\setcitestyle{numbers}
\begin{document}
\vspace*{-30pt}
\maketitle
\begin{abstract}
\input{abstract}
\end{abstract}
\section{Introduction} \label{sec:introduction}
\input{introduction}
\vspace{-12pt}
\section{Related Work}
\label{sec:related}
\input{related}
\section{Technical Details}
\label{sec:overview}
\input{approach}
\vspace{-12pt}
\section{Applications}
\label{sec:applications}
\input{applications}
\section{Empirical Evaluation}
\label{sec:eval}
\input{eval}
\section{Model Analysis} \label{sec:add_analysis}
\input{additional_analysis}
\section{Discussion \& Future Work} \label{sec:dis}
\input{discussion}
\section{Conclusion} \label{sec:conclusion}
\input{conclusion}
\bibliographystyle{plainnat}
\newcounter{alphasect}
\def\alphainsection{0}
\let\oldsection\section
\renewcommand\section{%
\ifnum\alphainsection=1%
\addtocounter{alphasect}{1}
\fi%
\oldsection}%
\renewcommand\thesection{%
\ifnum\alphainsection=1%
\Alph{alphasect}%
\else
\arabic{section}%
\fi%
}%
\newenvironment{alphasection}{%
\ifnum\alphainsection=1%
\errhelp={Let other blocks end at the beginning of the next block.}
\errmessage{Nested Alpha section not allowed}
\fi%
\setcounter{alphasect}{0}
\def\alphainsection{1}
}{%
\setcounter{alphasect}{0}
\def\alphainsection{0}
}%
\begin{alphasection}
\section{Types of Subtrees}\label{sec:app-sub-trees}
All the datasets used in this work are written in Java and C/C++ (we treat C and C++ as the same). The subtree types therefore differ between the two programming languages. As mentioned above, whenever the traverser visits a node whose type belongs to the corresponding list below, the node is treated as a subtree.
\subsection{Java}
The types of the subtrees include: method\_declaration, local\_variable\_declaration, cast\_expression, conditional\_expression, assignment\_expression, method\_invocation, binary\_expression, unary\_expression, comma\_expression, update\_expression, return\_statement, break\_statement, continue\_statement, identifier, array\_access, field\_access, throw\_statement.
\subsection{C/C++}
All the types of the subtrees are: translation\_unit, preproc\_function\_def, type\_definition, goto\_statement, pointer\_expression, function\_definition, declaration, cast\_expression, conditional\_expression, assignment\_expression, call\_expression, binary\_expression, unary\_expression, comma\_expression, update\_expression, return\_statement, break\_statement, continue\_statement, identifier, subscript\_expression, field\_expression.
\section{Graph Classification Analysis}\label{sec:app-graph-analysis}
\subsection{Details of the Case Study in Subsection \ref{subsec:graph-analysis}}\label{subsec:app-delayed-info-analysis}
\textbf{Problem.}
The problem used in Subsection \ref{subsec:graph-analysis} has the id $p00052$. Its description is as follows.\\
\begin{tcolorbox}
Write a program that receives a natural number $n~( n \leq 20000)$ and outputs the number of consecutive 0(s) at the end of $n!$.
\end{tcolorbox}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\ttfamily\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\lstset{style=mystyle}
\textbf{Input.}
The input used in Subsection \ref{subsec:graph-analysis} is shown below:
\begin{lstlisting}
using namespace std;
int main()
{
int n, zero;
while(1){
cin >> n;
if(n == 0) break;
else {
zero = 0;
while(n > 0){
zero = zero + n/5;
n = n/5;
}
cout << zero << endl;
}
}
return 0;
}
\end{lstlisting}
\section{Detailed Settings}
\subsection{Pre-training task}
We apply the hyper-parameters listed in Table \ref{tab:hyper-param-pre-training-task}.
\begin{table}
\centering
\caption{Hyper-parameters for the pre-training task}
\begin{tabular}{ll}
\toprule
Hyper-parameters & Value \\
\midrule
Optimizer & Adam\\
Learning rate & 0.0001\\
Dim & 512\\
Hidden dim in FFN & 1024 \\
Dropout rate & 0.1\\
Num of TBCNN layers & 1\\
Num of graph steps & 8 (2-2-1-2-1)\\
Num of decoder layer & 6\\
Num of heads in graph & 8\\
Num of heads in decoder & 8\\
\bottomrule
\end{tabular}
\label{tab:hyper-param-pre-training-task}
\end{table}
\end{alphasection}
\section{Introduction}
By now it appears increasingly likely that Quantum Einstein Gravity
(QEG), the quantum field theory of gravity whose underlying degrees of
freedom are those of the spacetime metric, can be defined
nonperturbatively as a fundamental, ``asymptotically safe'' theory
(Lauscher 2002). By definition, its bare action is given by a
non--Gaussian renormalization group (RG) fixed point. In the framework
of the ``effective average action'' a suitable fixed point is known to
exist within certain approximations. They suggest that the fixed point
should also exist in the exact theory, implying its nonperturbative
renormalizability.
The general picture regarding the RG behavior of QEG as it has emerged
so far points towards a certain analogy between QEG and non--Abelian
Yang--Mills theories, Quantum Chromo--Dynamics (QCD) say. For example,
like the Yang--Mills coupling constant, the running Newton constant
$G=G(k)$ is an asymptotically free coupling, it vanishes in the
ultraviolet (UV), i.\,e.\ when the typical momentum scale $k$ becomes
large. In QCD the realm of asymptotic freedom is realized for momenta
$k$ larger than the mass scale $\Lambda_{\text{QCD}}$ which is induced
dynamically. In QEG the analogous role is played by the Planck mass
$m_{\text{Pl}}$. It delimits the asymptotic scaling region towards the
infrared (IR). For $k \gg m_{\text{Pl}}$ the RG flow is well described
by its linearization about the non--Gaussian fixed point. Both in QCD
and QEG simple local approximations (truncations) of the running
Wilsonian action (effective average action) are sufficient above
$\Lambda_{\text{QCD}}$ and $m_{\text{Pl}}$, respectively. However, as
the scale $k$ approaches $\Lambda_{\text{QCD}}$ or $m_{\text{Pl}}$
from above, many complicated, typically nonlocal terms are generated
in the effective action. In fact, in the IR, strong renormalization
effects are to be expected because gauge (diffeomorphism) invariance
leads to a massless excitation, the gluon (graviton), implying
potential IR divergences which the RG flow must cure in a dynamical
way. Because of the complexity of the corresponding flow equations it
is extremely difficult to explore the RG flow of QCD or QEG in the IR,
far below the UV scaling regime, by analytical methods. In QCD,
lattice results and phenomenology suggest that the nonperturbative IR
effects modify the classical Coulomb term by adding a confinement
potential to it which increases (linearly) with distance: $V (r) = -
a/r + \kappa \,r$.
The problem of the missing mass or ``dark matter'' is one of the most
puzzling mysteries of modern astrophysics. It is an intriguing idea
that the apparent mass discrepancy is not due to an unknown form of
matter but rather indicates that we are using the wrong theory of
gravity, Newton's law in the non--relativistic and General Relativity
in the relativistic case. If one tries to explain the observed
non--Keplerian rotation curves of galaxies or clusters in terms of a
modified Newton law, a nonclassical term needs to be added to the
$1/r$-potential whose relative importance grows with distance. In
``MOND'', for instance, a point mass $M$ produces the potential $\phi
(r) = - G M / r + \sqrt{a_{0} \, G M \,} \, \ln (r)$ and it is
tempting to compare the $\ln(r)$-term to the qualitatively similar
confinement potential in (quenched) QCD. It seems not unreasonable to
speculate that the ``confinement'' potential in gravity is a quantum
effect which results from the antiscreening character of quantum
gravity (Lauscher 2002) in very much the same way as this happens in
Yang--Mills theory. If so, the missing mass problem could get resolved
in a very elegant manner without the need of introducing dark matter
on an ad hoc basis. In (Reuter 2004a,b) this idea has been explored
within a semi--phenomenological analysis of the effective average
action of quantum gravity.
\section{RG running of the gravitational parameters}
The effective average action $\Gamma_{k} [g_{\mu \nu}]$ is a ``coarse
grained'' Wilsonian action functional which defines an effective field
theory of gravity at the variable mass scale $k$. Roughly speaking,
the solution to the effective Einstein equations $\delta \Gamma_{k} /
\delta g_{\mu \nu} =0$ yields the metric averaged over a spacetime
volume of linear extension $k^{-1}$. (From the technical point of view
$k$ is a IR cutoff introduced into the functional integral over the
microscopic metric in such a way that only quantum fluctuations of
wavelengths smaller than $k^{-1}$ are integrated out.) In a physical
situation with a typical scale $k$, the effective field equation
$\delta \Gamma_{k} / \delta g_{\mu \nu} =0$ ``knows'' about all
quantum effects relevant at this particular scale. For $k$ fixed, the
functional $\Gamma_{k}$ should be visualized as a point in the space
of all action functionals. When the RG effects are ``switched on'',
one obtains a curve in this space, the RG trajectory, which starts at
the bare action $S \equiv \Gamma_{k \to \infty}$ and ends at the
ordinary effective action $\Gamma \equiv \Gamma_{k \to 0}$. At the
exact level, $\Gamma_{k}$ contains all the infinitely many invariants
one can construct from $g_{\mu \nu}$, their $k$-dependent prefactors
having the interpretation of scale dependent gravitational coupling
constants. To become technically feasible most of the investigations
using the effective average action formalism employ the so--called
Einstein-Hilbert approximation which retains only Newton's constant
$G(k)$ and the cosmological constant $\Lambda(k)$ as running
parameters. If one introduces the dimensionless couplings $g(k) \equiv
k^2 G(k)$ and $\lambda(k) \equiv \Lambda(k)/k^2$ the RG equations
governing their scale dependence read $k \partial_k g = \beta_g(g,
\lambda)$, $k \partial_k \lambda = \beta_\lambda(g, \lambda)$ with
known beta--functions $\beta_g$ and $\beta_\lambda$. The RG flow on
the $g$-$\lambda$--plane displays two fixed points: a Gaussian fixed
point (GFP) at the origin, and the non-Gaussian fixed point (NGFP) at
$g_{*}>0$, $\lambda_{*}>0$ which is necessary for asymptotic safety.
The RG trajectories are classified as of Type Ia, IIa (separatrix),
and IIIa depending on whether, when $k$ is lowered, they run towards
negative, vanishing, and positive values of the cosmological constant,
respectively. In (Reuter 2004b) the very special trajectory which seems
to be realized in Nature has been identified, and its parameters were
determined. This trajectory is of Type IIIa; see fig.\ \ref{fig1}.
\begin{figure}[h]
\centering \includegraphics[width=12cm]{Reuter_fig1.ps}
\caption{Nature's Type IIIa trajectory and the separatrix.
The dashed line is a classical RG trajectory along which $G(k)$ and $\Lambda(k)$ are constant. (From (Reuter 2004b).)}
\label{fig1}
\end{figure}
For $k \rightarrow \infty$ it starts infinitesimally close to the
NGFP. Then, lowering $k$, the trajectory spirals about the NGFP and
approaches the ``separatrix'', the distinguished trajectory which ends
at the GFP. It runs almost parallel to the separatrix for a very long
``RG time''; only in the ``very last moment'' before reaching the GFP,
at the turning point T, it gets driven away towards larger values of
$\lambda$. In fig.\ \ref{fig1} the points P$_1$ and P$_2$ symbolize
the beginning and the end of the regime in which classical general
relativity is valid (``GR regime''). The classical regime starts soon after the
turning point T which is passed at the scale $k_{\rm T} \approx
10^{-30} m_{\mbox{\scriptsize Pl}}$.
In (Reuter 2004b) we argued that to the right of the point P$_2$ there
starts a regime of strong IR renormalization effects which might
become visible at astrophysical and cosmological length scales. In
fact, within the Einstein-Hilbert approximation, trajectories of Type
IIIa cannot be continued to the extreme IR ($k \rightarrow 0$). They
terminate at a non-zero value of $k$ as soon as the trajectory reaches
$\lambda = 1/2$. (Close to the question mark in fig.\ \ref{fig1}.)
Before it starts becoming invalid and has to be replaced by a more
precise treatment, the Einstein-Hilbert approximation suggests that
$G$ will increase, while $\Lambda$ decreases, as $\lambda \nearrow
1/2$.
The Type IIIa trajectory of QEG which Nature has selected is highly
special in the following sense. It is fine-tuned in such a way that it
gets {\it extremely} close to the GFP before ``turning left''. The
coordinates $g_{\rm T}$ and $\lambda_{\rm T}$ of the turning point are
both very small: $g_{\rm T} = \lambda_{\rm T} \approx 10^{-60}$. The
coupling $g$ decreases from $g(k) = 10^{-70}$ at a typical terrestrial
length scale of $k^{-1} = 1$ m to $g(k) = 10^{-92}$ at the solar
system scale of $k^{-1} = 1$ AU, and finally reaches $g(k) =
10^{-120}$ when $k$ equals the present Hubble constant $H_0$.
In fact, the Hubble parameter $k = H_0$ is approximately the scale
where the Einstein-Hilbert trajectory becomes unreliable. The
observations indicate that today the cosmological constant is of the
order $H_0^2$. Interpreting this value as the running $\Lambda(k)$ at
the scale $k = H_0$, the dimensionless $\lambda(k)$, at this scale, is
of the order unity: $\lambda(H_0) \equiv \Lambda(H_0)/H_0^2 =
\mathcal{O} (1)$. So it is roughly near the present Hubble scale
where the IR effects should have grown large.
In principle it should be possible to work out the predictions of the
theory for cosmological scales by an ab initio calculation within QEG.
Unfortunately, because of the enormous technical complexity of the RG
equations, this has not been possible in practice yet. In this
situation one can adopt a phenomenological strategy, however. One
makes an ansatz for the RG trajectory which has the general features
discussed above, derives its consequences, and confronts them with the
observations. In this manner the observational data can be used in
order to learn something about the RG trajectory in the
nonperturbative regime which is inaccessible to an analytic treatment
for the time being. Using this strategy, the cosmological consequences
of a very simple scenario for the $k \to 0$ behavior has been worked
out; the assumption proposed in (Bonanno 2002) is that the IR effects
lead to the formation of a second NGFP into which the RG trajectory
gets attracted for $k \to 0$. This hypothesis leads to a
phenomenologically viable late--time cosmology with a variety of
rather attractive features. It predicts an accelerated expansion of
the universe and explains, without any fine tuning, why the
corresponding matter and vacuum energy densities are approximately
equal.
\section{Galaxy rotation curves}
Given the encouraging results indicating that the IR effects are ``at
work'' in cosmology, by continuity, it seems plausible to suspect that
somewhere between solar system and cosmological scales they should
first become visible. In (Reuter 2004a,b) we therefore investigated
the idea that they are responsible for the observed non--Keplerian
galaxy rotation curves. The calculational scheme used there was a kind
of ``RG improvement'', the basic idea being that upon identifying the
scale $k$ with an appropriate geometric quantity comparatively simple
(local) truncations effectively mimic much more complicated (nonlocal)
terms in the effective action. Considering spherically symmetric,
static model galaxies only, the scale $k$ was taken to be the inverse
of the radial proper distance which boils down to $1 / r$ in leading
order. Since the regime of galactic scales turned out to lie outside
the domain of validity of the Einstein--Hilbert approximation the only
practical option was to make an ansatz for the RG trajectory $\big \{ G
(k), \Lambda (k), \cdots \big \}$ and to explore its observable
consequences. In particular a relationship between the $k$-dependence
of $G$ and the rotation curve $v (r)$ of the model galaxy has been
derived.
The idea was to start from the classical Einstein--Hilbert action and
to promote $G$ and $\Lambda$ to scalar fields: $S = \frac{1}{16 \pi} \,
\int \!\! \text{d}^{4} x~ \sqrt{-g\,} \big\{ R / G (x) - 2 \, \Lambda
(x) / G (x) \big \}$. Upon adding a matter contribution this action
implies the modified Einstein equation $G_{\mu \nu} = - \Lambda (x) \,
g_{\mu \nu} + 8 \pi \, G (x) \, \bigl( T_{\mu \nu} + \Delta T_{\mu
\nu} \bigr)$ with $\Delta T_{\mu \nu} \equiv \frac{1}{8 \pi} \,
\bigl( D_{\mu} D_{\nu} - g_{\mu \nu} \, D^{2} \bigr) \, G^{-1}$. In
(Reuter 2004a) we analyzed the weak field, slow--motion approximation
of this theory for a time--independent Newton constant $G = G
(\mathbf{x})$ and $\Lambda \equiv 0$. In this (modified) Newtonian
limit the equation of motion for massive test particles has the usual
form, $\ddot {\mathbf{x}} (t) = - \nabla \phi$, but the potential
$\phi$ obeys a modified Poisson equation:
\begin{align}
\nabla^{2} \phi = 4 \pi \,
\overline{G} \, \rho_{\text{eff}}
\quad \text{where }
\rho_{\text{eff}} \equiv
\rho + \bigl( 8 \pi \, \overline{G} \, \bigr)^{-1} \, \nabla^{2}
\mathcal{N}.
\end{align}
Here it is assumed that $T_{\mu \nu}$ describes
pressureless dust of density $\rho$ and that $G (\mathbf{x})$ does not
differ much from the constant $\overline{G}$. Setting $G (\mathbf{x})
\equiv \overline{G} \, \bigl[ 1 + \mathcal{N} (\mathbf{x}) \bigr]$ we
assumed that $\mathcal{N} (\mathbf{x}) \ll 1$. Apart from the rest
energy density $\rho$ of the ordinary (``baryonic'') matter, the
effective energy density $\rho_{\text{eff}}$ contains an additional
contribution $\bigl( 8 \pi \, \overline{G} \, \bigr)^{-1} \,
\nabla^{2} \mathcal{N} (\mathbf{x}) = \bigl( 8 \pi \, \overline{G}^{2}
\, \bigr)^{-1} \, \nabla^{2} G (\mathbf{x})$ due to the position
dependence of Newton's constant. Since it acts as a source for $\phi$
on exactly the same footing as $\rho$ it mimics the presence of ``dark
matter''.
Up to this point the discussion applies to an arbitrary prescribed
position dependence of Newton's constant, not necessarily related to a
RG trajectory. In the case of spherical symmetry the natural choice of
the geometric cutoff is $k = \xi / r$ with $\xi$ a constant of order
unity. Hence we obtain the position dependent Newton constant $G
(\mathbf{x}) \equiv G (r)$ as $G (r) \equiv G ( k = \xi / r)$. Writing
again $G \equiv \overline{G} \, \left[ 1 + \mathcal{N} \, \right]$, $G
(k)$ should be such that $\mathcal{N} \ll 1$.
Let us make a simple model of a spherically symmetric ``galaxy''. For
an arbitrary density profile $\rho = \rho (r)$ the solution of the
modified Poisson equation reads
\begin{align}
\phi (r) = \int \limits_{}^{r} \!\! \text{d} r^{\prime}~
\frac{\overline{G} \, \mathcal{M} (r^{\prime})}{{r^{\prime}}^{2}}
+ \tfrac{1}{2} \, \mathcal{N} (r)
\label{48}
\end{align}
where $\mathcal{M} (r) \equiv 4 \pi \, \int_{0}^{r} \!\!
\text{d} r^{\prime}~ {r^{\prime}}^{2} \, \rho (r^{\prime})$ is the
mass of the ordinary matter contained in a ball of radius $r$. On
circular orbits test particles in the potential \eqref{48} have the
velocity $v^{2} (r) = r \, \phi^{\prime} (r)$ so that we obtain the
rotation curve
\begin{align}
v^{2} (r) = \frac{\overline{G} \, \mathcal{M} (r)}{r}
+ \frac{1}{2} \, r \, \frac{\text{d}}{\text{d} r} \, \mathcal{N} (r).
\label{49}
\end{align}
We identify $\rho$ with the density of the ordinary luminous matter
and model the luminous core of the galaxy by a ball of radius $r_{0}$.
The mass of the ordinary matter contained in the core is $\mathcal{M}
(r_{0}) \equiv \mathcal{M}_{0}$, the ``bare'' total mass of the
galaxy. Since, by assumption, $\rho=0$ and hence $\mathcal{M} (r) =
\mathcal{M}_{0}$ for $r > r_{0}$, the potential outside the core, in
the halo, is $\phi (r) = - \overline{G} \, \mathcal{M}_{0} / r +
\mathcal{N} (r) / 2$.
As an example, let us adopt the power law $G (k) \propto k^{-q}$ with
$q>0$ which was motivated in (Reuter 2004a,b). We assume that this
$k$--dependence starts inside the core of the galaxy so that $G (r)
\propto r^{q}$ everywhere in the halo. For the modified Newtonian
limit to be realized, the position dependence of $G$ must be weak.
Therefore we shall tentatively assume that the exponent $q$ is very
small. Expanding to first order in $q$ we obtain $\mathcal{N} (r) = q
\, \ln (r)$. In the halo, this leads to a logarithmic modification of
Newton's potential: $\phi (r) = - \overline{G} \, \mathcal{M}_{0} / r
+ \frac{q}{2} \, \ln (r)$. The corresponding rotation curve is $v^{2}
(r) = \overline{G} \, \mathcal{M}_{0} / r + q/2$. At large distances
the velocity approaches a constant $v_{\infty} = \sqrt{q/2\,}$.
Obviously the rotation curve implied by the $k^{-q}$--trajectory does
indeed become flat at large distances -- very much like those we
observe in Nature. Typical measured values of $v_{\infty}$ range from
$100$ to $300\,$km/sec, implying $q \approx 10^{-6}$ which is indeed
very small. Including the core region, the complete rotation curve
reads $v^{2} (r) = \overline{G} \, \mathcal{M} (r) / r + q/2$. For a
realistic $\mathcal{M} (r)$ its $r$-dependence is in rough
qualitative agreement with the observations.
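For illustration, the flattening can be checked with a short numerical sketch that evaluates $v^{2} (r) = \overline{G} \, \mathcal{M} (r) / r + q/2$ for a schematic uniform--density core; the values of $\overline{G} \mathcal{M}_{0}$ and $r_{0}$ are arbitrary, and velocities are in units of the speed of light:
\begin{verbatim}
import numpy as np

q, GM0, r0 = 1.0e-6, 1.0e-7, 1.0   # q from v_inf; GM0, r0 schematic

def M_of_r(r):                     # toy core: uniform density, r < r0
    return np.where(r < r0, (r / r0)**3, 1.0)

r = np.logspace(-1, 3, 200)        # radii in units of r0
v = np.sqrt(GM0 * M_of_r(r) / r + q / 2.0)
# v flattens at v_inf = sqrt(q/2) ~ 7.1e-4 in units of c,
# i.e. about 210 km/s, inside the observed 100-300 km/s range.
\end{verbatim}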
Our $v^{2} (r)$ is identical to the one obtained from standard
Newtonian gravity by postulating dark matter with a density
$\rho_{\text{DM}} \propto 1 / r^{2}$. We see that if $G (k) \propto
k^{-q}$ with $q \approx 10^{-6}$ no dark matter is needed. The
resulting position dependence of $G$ leads to an effective density
$\rho_{\text{eff}} = \rho + q / \bigl( 8 \pi \, \overline{G} \, r^{2}
\bigr)$ where the $1/r^{2}$--term, well known to be the source of a
logarithmic potential, is present as an automatic consequence of the
RG improved gravitational dynamics.
Thus it seems that if the observed non--Keplerian rotation curves are
due to a renormalization effect, the scale dependence of Newton's
constant should be roughly similar to $G (k) \propto k^{-q}$. Knowing
this, it will be the main challenge for future work to see whether a
corresponding RG trajectory is actually predicted by the flow
equations of QEG. For the time being an ab initio calculation of this
kind, while well--defined conceptually, is still considerably beyond
the state of the art as far as the technology of practical RG
calculations is concerned. In performing such calculations it might
help to rewrite the nonlocal terms generated during the flow in terms
of local field monomials by introducing extra fields besides the
metric. This is a standard procedure in the Wilsonian approach which
often allows for a simple local description of the effective IR
dynamics. It is tempting to speculate that the resulting local
effective field theory might be related to the generalized gravity
theory in (Papers I) which includes a Kalb--Ramond field; it is fully
relativistic and explains the galaxy and cluster data with remarkable
precision.
\section{Introduction}
Phase-change materials (PCM) utilize the reversible change from an amorphous phase to a crystalline phase to encode binary data.\cite{wuttig-nature2007, OvshinskyPRL}
The readability of the stored data is guaranteed by the pronounced differences in the electrical and/or optical properties of both phases.
Cu$_2$GeTe$_3$ (CGT) is a new PCM which is expected to be used for a next generation of (non-volatile) data storage devices. \cite{Sutou2012,Saito2013}
The CGT crystalline film was found to be amorphized by laser irradiation with a lower power and shorter pulse width than currently employed GeSbTe alloys, which are essential properties to achieve rapid data recording and low power consumption in PCMs. \cite{Yamada1991, wuttig-nature2007, Saito2013}
In contrast to widely studied PCM systems like GeSbTe, amorphous CGT is denser than the crystal, and the phase transition takes place to a tetrahedrally bonded crystal, a very different geometry compared to the octahedrally bonded cubic structures adopted by GeSbTe systems. \cite{Saito2013,Saito2014}
A result of this peculiar structure is a negative optical contrast, i.e.\ the reflectivity of the crystalline phase is lower than that of the amorphous phase,\cite{Saito2013} contrary to GeSbTe with a positive optical contrast.\cite{Yamada1991,shportko2008}
The structure of the crystal phase of CGT consists of a three-dimensional arrangement of slightly distorted corner-sharing CuTe$_4$ and GeTe$_4$ tetrahedra, with the space group $Imm2$. \cite{DelgadoGCT} (A schematic view is illustrated in the supplemental information.)
Concerning the structure of the amorphous phase, on the other hand, the state of research is inconsistent:
different average coordination numbers have been reported by x-ray diffraction in combination with x-ray absorption fine structure (XAFS) measurements,\cite{JovariGCT} by XAFS investigations alone \cite{Kamimura2016} and by \textit{ab-initio} molecular dynamics simulations~(AIMD).\cite{Skelton2013, Chen2015} The experimental results so far indicate that all atoms are roughly fourfold coordinated, which would constitute a large similarity to the tetrahedral crystal structure. AIMD simulations find much larger coordination numbers of Ge and Cu, with values exceeding 6 for Cu and about 4.5 for Ge.
Agreement is reached only on two points, namely the unusually large average coordination numbers for Cu and Te atoms, and on the existence of a significant number of homopolar bonds for Cu-Cu and Te-Te pairs (so called ``wrong'' bonds, which do not exist in the crystalline phase).
Apart from the investigation of nearest neighbor arrangements, ring statistics calculations offer the possibility to characterize the topological connectivity of network structures.
For GeSbTe, ring structures have been investigated both experimentally \cite{KoharaAPL} and theoretically by DFT simulations.\cite{AkolaJones, akola2009, HegedusNM}
In general, the fast phase change ability of GeSbTe was attributed to a strong preference of (alternating) even-fold rings, facilitating the phase transition to the crystal with a similar ring structure.
For CGT, so far only theoretical investigations are available.\cite{Skelton2013,Chen2015} The reported ring structures are strikingly different from the known features of GeSbTe, especially concerning a large contribution of 3-fold rings. However, experimental support for this finding is still missing.
The structural description also needs to be explained in the larger context of the transition from the crystal to the amorphous phase.
Again, for GeSbTe, such models already exist and have been controversially discussed, e.g.\ the (modified) ``umbrella-flip'' model\cite{KolobovNatMat, hosokawa-GST} or the ring statistics analogy model\cite{KoharaAPL}. To build a suitable model for this process in CGT, detailed structural information on the short- and intermediate-range order of the amorphous phase is required. A powerful method to extract this kind of information is anomalous x-ray scattering (AXS).
The aim of this article is thus to propose such a phase-change model for CGT, based on a combination of anomalous x-ray scattering and extended XAFS experiments, analyzing the datasets with a reverse Monte Carlo (RMC) modeling procedure.
\section{Experimental}
The amorphous CGT sample was prepared by radio-frequency sputtering deposition from GeTe and CuTe alloy targets on SiO$_2$ (20~nm)/Si (0.7~mm) substrates. Details on the sample preparation are outlined in refs.~\citen{Saito2013} and~\citen{Saito2014}. We note that the as-deposited phase exhibits almost identical properties compared to the melt-quenched film that would be generated in a phase-change memory device.\cite{Saito2013}
The AXS experiments were performed at the beamline BM02 of the European Synchrotron Radiation Facility (ESRF).
AXS utilizes the anomalous variation of the atomic form factor $f$ of a specific element in the vicinity of an x-ray absorption edge.\cite{Waseda1984} The experimentally accessible information are the differential structure factors $\Delta_kS(Q)$:
\begin{small}
\begin{align*}
\Delta_k S(Q) = \frac{ \Delta_k \left[C\cdot I(Q,E_{1},E_{2})\right] - \Delta_k \left[ \langle f^2\rangle - \langle f\rangle^2 \right]}{ \Delta_k \left[ \langle f\rangle^2 \right]} ,
\label{eq_theorie19}
\end{align*}
\end{small}
which are calculated from the difference ($\Delta_k$) of two scattering experiments with intensities $I(Q,E)$ conducted at energies $E_1$ and $E_2$ close to the absorption edge. $C$ denotes the normalization factor. The $\Delta_kS(Q)$ functions contain structural information specifically related to the element $k$.
The relative increase of this information can be illustrated by the AXS weighting factors $w_{ij}$ for the partial contributions of all elements $i,j$:
\begin{equation*}
\Delta_k S(Q) = \sum_{i,j} \Delta_k w_{ij}(Q)\cdot S_{ij}(Q).
\label{eq_theorie20}
\end{equation*}
They are illustrated for CGT in Table~\ref{tab:wij}.
Note that the $w_{ij}$ have a small $Q$ dependence and are given here exemplarily for $Q=1.9$~\AA$^{-1}$, i.e.\ at the position of the first $S(Q)$ maximum.
Incident energies for the measurements were selected 20 and 200~eV below the Cu and Ge $K$ edge, as well as 30 and 300~eV below the Te $K$ edge, respectively.
The experiments were performed in transmission geometry using a container cell with 7~$\mu$m Kapton windows and appropriate thicknesses for each investigated energy region. The data were corrected for absorption effects and Compton scattering, and normalized using the Krogh-Moe-Norman method.\cite{KroghMoe,norman}
Further details on the theoretical and experimental background of AXS can be found elsewhere.\cite{Waseda1984, HosokawaPRB, stellhorn-zpc, HosokawaZPC, HosokawaEPJST}
The XAFS experiments were conducted at BL12C of the Photon Factory in the High Energy Accelerator Research Organization (KEK-PF), in fluorescence mode.
The incident x-ray intensity was measured using an ion chamber, and the fluorescent x-ray intensity from the sample was detected using a 19-channel pure Ge solid state detector. XAFS functions were determined near the $K$ edges of Cu and Ge.
Both AXS and XAFS data are displayed in Fig.~\ref{fig:Exp_data} with black symbols/lines.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{GCT_expData.png}
\caption{Experimental data. (a) AXS, (b) Ge XAFS, (c) Cu XAFS. Black squares and black lines are experimental data. RMC fits are displayed as colored lines, with data acquired near the absorption edges of
Te (purple),
Ge (blue) and
Cu (red),
the total $S(Q)$ is shown in grey.
}
\label{fig:Exp_data}
\end{center}
\end{figure}
In the reverse Monte Carlo procedure, the real sample is modeled by an ensemble of atoms as hard spheres in a simulation box. In each simulation step, individual atoms are moved randomly to minimize the difference between experimental structure factors and those computed from the simulated configuration using a Metropolis algorithm.
We employed the RMC\_POT package\cite{Gereben, Gereben2012} for our simulations.
An input configuration of 10,000 atoms with proper stoichiometry with an initial random distribution of the atoms in a box corresponding to the number density of 0.0385~\AA$^{-3}$ was used.
Minimal interatomic distances for the individual correlations Cu-Cu, Cu-Ge, Cu-Te, Ge-Ge, Ge-Te, and Te-Te were set to 2.45, 2.35, 2.35, 2.35, 2.35 and 2.45~\AA, respectively. These distances were chosen near the values for the respective sums of the covalent radii,\cite{pyykko-covalent-radii} and were adjusted to fit the first coordination shells adequately.
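For readers unfamiliar with the method, the following Python sketch shows the core of one RMC step: a random single-atom move, accepted or rejected with the Metropolis criterion under hard-sphere constraints. It is a schematic illustration only, not the RMC\_POT algorithm; in particular, the cost function is a dummy, whereas a real run computes $S(Q)$, the $\Delta_kS(Q)$'s and the XAFS signals from the configuration:
\begin{verbatim}
# Schematic RMC step (illustration only, not the RMC_POT code)
import numpy as np
rng = np.random.default_rng(0)

def chi2(pos):
    # dummy cost; a real run compares simulated S(Q), Delta_k S(Q)
    # and XAFS signals with the experimental data sets
    return float(np.sum(pos**2))

def rmc_step(pos, box, step=0.1, d_min=2.35):
    old = chi2(pos)
    k = rng.integers(len(pos))
    trial = pos.copy()
    trial[k] = (trial[k] + rng.normal(0.0, step, 3)) % box
    d = np.linalg.norm((trial - trial[k] + box/2) % box - box/2, axis=1)
    d[k] = np.inf
    if d.min() < d_min:                                # hard-sphere reject
        return pos, old
    new = chi2(trial)
    if new < old or rng.random() < np.exp(old - new):  # Metropolis
        return trial, new
    return pos, old
\end{verbatim}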
Two different conditions were explored with different sets of input data. First, in order to compare our findings with existing results, we included only the total structure factor $S(Q)$ (obtained at 300~eV below the Te $K$ absorption edge energy) and the two XAFS datasets in an RMC simulation, and excluded the formation of Cu-Ge bonds in the simulation by increasing the Cu-Ge minimal interatomic distance to 3.1~\AA\ (``limited approach''). The results were expected to be comparable to previously published data by J\'{o}v\'{a}ri \textit{et al.}~\cite{JovariGCT}, the only difference in the procedure being that no Te XAFS data were included in our model.
Secondly, the AXS data were included as well, i.e.\ the individual $\Delta_kS(Q)$'s for each element; this approach therefore illustrates the effect of the AXS data. Furthermore, the formation of Cu-Ge bonds was not excluded (``present model''). This kind of model reflects the findings of the theoretical studies, in which no specific restrictions on the possible types of bonds are imposed, i.e.\ Cu-Ge bonds are present.\cite{Skelton2013, Chen2015}
The resulting RMC fits for this model are illustrated in Fig.~\ref{fig:Exp_data} with colored lines.
To evaluate the significance of the Cu-Ge bonds, a different RMC run was performed with all of the experimental datasets, but excluding the Cu-Ge bond. For this, the minimal interatomic distance of the Cu-Ge pair was raised to 3.1~\AA.
Relative to the simulations including the Cu-Ge bond, this leads to an increase in the goodness-of-fit values $R_w$ for the EXAFS dataset of Ge (by 8.4\%) and for the $\Delta_{\rm Ge}S(Q)$ function (by 7.2\%), and to a smaller amount for the EXAFS dataset of Cu (by 2.1\%). This indicates the presence of Cu-Ge bonds in the material.
\setlength{\tabcolsep}{2pt}
\begin{table}
\caption{Weighting factors $w_{ij}$ in CGT at 1.9~\AA$^{-1}$ near the first $S(Q)$ maximum, in percent.}
\label{tab:wij}
\begin{center}
\begin{tabular}{l|cccccc}
\hline
& \small Ge-Ge &\small Ge-Cu &\small Ge-Te &\small Cu-Cu &\small Cu-Te &\small Te-Te \\
\hline
$S(Q)$ & 1.9 & 7.0 & 16.8 & 6.5 & 30.9 & 36.9 \\
$\Delta_{\rm Ge}S(Q)$ & 11.5 & 24.3 & 68.3 & -1.7 & -4.0 & 1.6 \\
$\Delta_{\rm Cu}S(Q)$ & 0.2 & 13.1 & 0.9 & 20.3 & 65.6 & -0.1 \\
$\Delta_{\rm Te}S(Q)$ & 0.0 & 0.0 & 14.0 & 0.0 & 25.8 & 60.1 \\
%
\hline
\end{tabular}
\end{center}
\end{table}
\section{Results}
From the RMC-generated models, the six independent correlations of element pairs in CGT are calculated.
Figures~\ref{fig:pfq_ppcf} and \ref{fig:pfq_pfq} give an overview of all partial structure factors $S_{ij}(Q)$ and pair correlation functions $g_{ij}(r)$ obtained from the RMC simulation for the present model. Average bond lengths extracted from the pair correlations are listed in Table~\ref{tab:Disttable}, in comparison with data from two other studies and the corresponding values for the CGT crystal. The bond lengths are averaged over all correlations of the respective element.
In general, bond lengths become slightly larger in the amorphous state compared to the crystal. The largest differences between the approaches are observed for the Cu-related bonds: contrary to ref.~\citen{JovariGCT}, we find that the distances become somewhat larger than in the crystal, but the elongation is smaller than in the AIMD results.\cite{Skelton2013}
Note that the precision of RMC with respect to interatomic distances in this approach is around $\pm0.05$~\AA.
The partial and total coordination numbers are tabulated in Table~\ref{tab:CNtable}. Cut-off distances for the calculation of the coordination numbers were set to the first minimum in the pair correlation functions around 3.0~\AA.
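The underlying relation is the standard integral over the first peak of the partial pair correlation function. As a minimal sketch (with a toy Gaussian first shell instead of the actual RMC $g_{ij}(r)$):
\begin{verbatim}
# Partial coordination number N_ij from g_ij(r) up to r_cut
import numpy as np
rho, c_j = 0.0385, 0.5            # total density (1/A^3), conc. of j
r = np.linspace(0.0, 6.0, 601)
g_ij = 4.0 * np.exp(-0.5*((r - 2.6)/0.15)**2)   # toy first shell
mask = r <= 3.0                   # cut-off at first g(r) minimum
N_ij = 4*np.pi * rho * c_j * np.trapz(g_ij[mask]*r[mask]**2, r[mask])
print(f"N_ij = {N_ij:.2f}")       # ~2.5 for this toy peak
\end{verbatim}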
\begin{figure}
\begin{center}
\includegraphics[width=0.75\linewidth]{export_pfq_tetraGe_2.png}
\caption{RMC results for the partial structure factors $S_{ij}(Q)$ in the present model. }
\label{fig:pfq_pfq}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.75\linewidth]{export_ppcf_tetraGe_3.png}
\caption{RMC results for the partial pair correlations functions $g_{ij}(r)$ in the present model. }
\label{fig:pfq_ppcf}
\end{center}
\end{figure}
\begin{table}
\caption{Average bond lengths in~\AA\ for each element, in comparison with other studies. }
\label{tab:Disttable}
\begin{center}
\begin{tabular}{c|cccc}
\hline
species & present & RMC\cite{JovariGCT} & AIMD\cite{Skelton2013} & Crystal\cite{DelgadoGCT} \\
\hline
Cu & 2.65 & 2.57 & 2.79 & 2.61 \\
Ge & 2.61 & 2.56 & 2.62 & 2.51 \\
Te & 2.63 & 2.65 & 2.65 & 2.58 \\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{4pt}
\begin{table}
\caption{Partial and total coordination numbers of CGT in comparison with other studies. Partial coordinations $ij$ are given for $j$ atoms around $i$ centers.
}
\label{tab:CNtable}
\begin{center}
\begin{tabular}{l|c|c|cc}
\hline
\multirow{ 2}{*}{elem.} & \multicolumn{2}{c|}{RMC results } & \multicolumn{2}{c}{references } \\
\cline{2-5}
& present & limited & RMC\cite{JovariGCT} & AIMD\cite{Skelton2013} \\
\hline
CuGe & 0.73 & - & - & 0.62 \\
CuCu & 2.38 & 2.35 & 2.20$\pm0.4$ & 2.34 \\
CuTe & 2.29 & 2.10 & 1.86$\pm0.3$ & 3.70 \\
GeGe & 0.72 & 1.41 & 1.52$\pm0.4$ & 0.12 \\
GeTe & 1.83 & 2.68 & 2.51$\pm0.5$ & 3.09 \\
TeTe & 2.19 & 2.08 & 1.72$\pm0.3$ & 0.60 \\
\hline
$N$(Cu) & 5.40 & 4.45 & 4.06$\pm0.6$ & 6.67 \\
$N$(Ge) & 4.02 & 4.09 & 4.03 \hspace{4ex} & 4.47 \\
$N$(Te) & 4.64 & 4.41 & 4.10$\pm0.5$ & 4.18 \\
$\langle N \rangle$ & 4.79 & 4.37 & 4.08 \hspace{4ex} & 5.06 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Discussion}
\subsection{Coordination numbers and interatomic distances}
It is notable that, within the precision of the experimental methods, the interatomic distances (see Table~\ref{tab:Disttable})
in the amorphous phase of CGT do not differ largely from those in the corresponding crystalline phase.
This indicates a remarkable similarity between the two phases, which is not found in other comparable phase-change materials:
in GeSbTe, for example, the shortening of the Ge-Te bond was the basis for the proposed ``umbrella-flip'' model.\cite{KolobovNatMat}
We found that the obtained coordination numbers for the limited RMC approach in Table~\ref{tab:CNtable} are in agreement with the data by J\'{o}v\'{a}ri \textit{et al.}~\cite{JovariGCT} within the reported experimental uncertainties of $\pm$0.3-0.6.
The effect of the AXS data in the present, full approach is mainly seen in the Ge environment, where a reduced number of Ge-Ge and Ge-Te bonds is found in favor of the Cu-Ge bonds. The existence of these bonds is difficult to judge from only XAFS and total scattering data, cf. ref.~\citen{JovariGCT}, but it is evident from the AXS data.
Despite the disagreements between this model and the reference models, some consistent observations can be made: the structure of CGT is characterized by high average coordination numbers, especially around Cu; all coordination numbers are actually larger than in the corresponding crystal; and a large number of homopolar bonds is found, especially Cu-Cu (in every model) and Te-Te (only in the experimentally obtained models) bonds.
\subsection{Bond angle distribution}
By including the AXS data, reliable information on structural features beyond the coordination numbers can be obtained. A detailed analysis of the present RMC model provides information on bond angle distributions (BAD) and ring statistics of the network.
We calculated the BAD around the individual elements, shown in Fig.~\ref{fig:BAD}.
In general, broad distributions around 109$^\circ$ are observed for all correlations, corresponding to a distorted tetrahedral coordination (109.5$^\circ$). This corresponds to the large number of 4-fold coordinated atoms, and shows a large similarity to the crystal structure, where only tetrahedral configurations are found, though with a much narrower distribution (104$^\circ$-114$^\circ$).
In addition, peaks around 60$^\circ$ are found and are mainly connected with Cu-related correlations.
The results are consistent with theoretical studies.\cite{Skelton2013,Chen2015}
The low number of 90$^\circ$ angles is a striking difference from GeSbTe-based PCMs,\cite{JovariJPCM, jovari2008, akola2009} and demonstrates that the amorphous phases of CGT and GeSbTe systems are dominated by very different structural motifs.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\linewidth]{total_BAD_weighted2.png}
\caption{Bond angle distribution in a-CGT, around
Cu (red),
Ge (blue) and
Te atoms (purple).
}
\label{fig:BAD}
\end{center}
\end{figure}
\subsection{Ring statistics}
These features can be understood by considering the ring statistics, which were calculated using the R.I.N.G.S. program.\cite{RINGS}
A ``ring'' is defined as a closed path of covalent bonds originating from and leading back to the same atom. For the ring statistics analysis, irreducible rings were searched for in the amorphous network, i.e.\ closed paths that cannot be decomposed into smaller rings. The results are shown in Fig.~\ref{fig:rings}.
A broad distribution of ring structures is found with a shallow maximum for 6-membered rings. This centering around the 6-rings shows a correspondence to the crystal structure, where only 6-membered rings are found (inset in Fig.~\ref{fig:rings}).
Furthermore, a large number of 3-fold rings is found, which corresponds to the peak around 60$^\circ$ in the BAD. A similar feature is also observed in an AIMD study.\cite{Chen2015}
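The bookkeeping behind such an analysis can be sketched in a few lines of Python. Note that the cycle basis returned by networkx is only a simplified proxy for the irreducible-ring search performed by R.I.N.G.S., so the snippet illustrates the procedure, not the exact algorithm:
\begin{verbatim}
# Sketch: ring-size histogram of a bonded network (proxy for R.I.N.G.S.)
from collections import Counter
import numpy as np
import networkx as nx

def ring_histogram(pos, box, r_cut=3.0):
    G = nx.Graph()
    G.add_nodes_from(range(len(pos)))
    for i in range(len(pos)):
        d = (pos - pos[i] + box/2) % box - box/2     # minimum image
        for j in np.nonzero(np.linalg.norm(d, axis=1) < r_cut)[0]:
            if j > i:
                G.add_edge(i, int(j))                # covalent bond
    return Counter(len(c) for c in nx.minimum_cycle_basis(G))
\end{verbatim}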
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth]{normalized_Rings.png}
\caption{Ring statistics in a-CGT. Inset: c-CGT.}
\label{fig:rings}
\end{center}
\end{figure}
\setlength{\tabcolsep}{6pt}
\begin{table*}
\caption{Composition of the rings. The values denote the number of atoms of the respective element in the $n$-fold ring. The values in parentheses indicate the relative increase over the ring composition expected from the concentrations.}
\label{tab:rings_composition}
\begin{center}
\begin{tabular}{c|cccccc}
\hline
element & $n$=3 & 4 & 5 & 6 & 7 & 8 \\
\hline
%
Cu & 1.56 & 1.97 & 2.41 & 2.66 & 2.96 & 3.37 \\
& (+56\%) & (+48\%) & (+44\%) & (+33\%) & (+27\%) & (+26\%) \\
Ge & 0.59 & 0.86 & 0.82 & 0.88 & 1.08 & 1.47 \\
& (+18\%) & (+28\%) & (-2\%) & (-12\%) & (-8\%) & (+10\%) \\
Te & 0.85 & 1.18 & 1.78 & 2.46 & 2.96 & 3.16 \\
& (-43\%) & (-41\%) & (-29\%) & (-18\%) & (-15\%) & (-21\%) \\
\hline
\end{tabular}
\end{center}
\end{table*}
The significance of the large contribution of 3-fold rings was evaluated by an additional RMC run, in which the formation of 3-rings was constrained by including a penalty term for 60$^\circ$ angles in the BAD.
Thereby, the number of 3-rings was reduced by 97\%; however, a significant increase was found in the goodness-of-fit values $R_w$ for the total $S(Q)$ by a factor of 1.40, and for the differential datasets $\Delta_kS(Q)$, especially for Ge (2.55), but also for Te (1.43) and to a smaller amount for Cu (1.14), indicating that 3-rings are an important component of the structure, and should be included in the modeling process.
The $R_w$ values of the XAFS datasets increase only by a small amount (by a factor of 1.03-1.10). This indicates that information on bond angles and on the ring structure is not directly available from the XAFS data.
GeSbTe-based PCMs show a markedly different ring distribution, where even-membered ring structures are supposed to be dominant.\cite{KoharaAPL, AkolaJones, akola2009} In GeSbTe, this structural feature is explained by a similarity to the crystal structure, where resonance bonding via $p$-orbitals (concomitant with 90$^\circ$ bond angles and 4-fold rings) plays an important role for the stability.\cite{shportko2008}
The ring structures of CGT require a different explanation. For more details on the network, we analyzed the composition of the rings, shown in Table~\ref{tab:rings_composition}. The table displays the average number of atoms of a specific element in an $n$-fold ring (for $3\leq n\leq 8$).
The table also indicates the relative difference from the expected ring composition; for example, a 6-fold ring in Cu$_2$Ge$_1$Te$_3$ can be expected to consist of 2 Cu, 1 Ge and 3 Te atoms.
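The percentages in the table follow directly from this definition; e.g.\ for the 3-fold rings:
\begin{verbatim}
# Relative deviation of observed ring composition from stoichiometry
conc = {"Cu": 2/6, "Ge": 1/6, "Te": 3/6}
obs_n3 = {"Cu": 1.56, "Ge": 0.59, "Te": 0.85}   # 3-rings, see table
for el, n_obs in obs_n3.items():
    n_exp = 3 * conc[el]          # expected atoms of `el` per 3-ring
    print(el, f"{100*(n_obs/n_exp - 1):+.0f}%")
# -> Cu +56%, Ge +18%, Te -43%
\end{verbatim}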
In general, Cu is found in ring structures to a much larger degree than what would be expected from its relative concentration (+39\% on average). This finding agrees well with the high coordination number of the Cu atoms.
For the most important ring sizes, the typical building blocks of the ring structures are:
\vspace{-1ex}
\begin{itemize}\setlength{\itemsep}{-0.2ex}
\item 3-fold rings: Cu$_2$Te,
\item 5-fold rings: Cu$_2$GeTe$_2$,
\item 6-fold rings: Cu$_3$GeTe$_2$ and Cu$_2$GeTe$_3$.
\end{itemize}
Except for Cu$_2$GeTe$_3$, which can be formed as an alternating ring structure (and is the only ring structure in the
crystalline phase), these typical building blocks cannot be realized without the formation of ``wrong'' Cu-Ge or Cu-Cu bonds.
\subsection{Model for the phase transition in CGT}
\begin{figure}
\begin{center}
\includegraphics[width=0.54\linewidth,valign=t]{GCT123_model_a_3.png}
\hspace{0.01\linewidth}
\includegraphics[width=0.42\linewidth,valign=t]{GCT123_model_b_3.png} \\
\vspace{4ex}
\includegraphics[width=0.54\linewidth,valign=t]{GCT123_model_c_3.png}
\hspace{0.01\linewidth}
\includegraphics[width=0.42\linewidth,valign=t]{GCT123_model_d_3.png}
\caption{Model for the phase transition in CGT. The crystal structure is shown in (a) and a schematic representation of the ring structure in (b).
During the phase transition, atoms in the crystal move towards the 6-ring centers, resulting in new ring structures and the formation of wrong bonds (red) in the amorphous phase, illustrated in (c) and (d). Atomic movements resulting in new bonds are marked with arrows in (a) and (b).
Colors denote
Te (gold),
Ge (blue) and
Cu (red) atoms.
Images of the structures were produced using VESTA.\cite{vesta}
In (b) and (d), filled circles denote Te and empty circles are Ge/Cu atoms. The numbers indicate the size of the ring.
}
\label{fig:model}
\end{center}
\end{figure}
From the structural information described so far, it is possible to draw a detailed model of the amorphous structure, aimed to explain the fast structural phase transition in CGT. This model is illustrated in Fig.~\ref{fig:model}.
Starting from the crystal structure, only small movements of the atoms are required to reach the amorphous state. Namely, there is movement of atoms (especially the Cu atoms) towards the centers of the 6-fold rings. These movements are illustrated in Fig.~\ref{fig:model}~(a) for one unit cell and in (b) schematically for the ring structure with arrow symbols.
They lead to the increased coordination numbers of Te and Cu compared to the crystal, and to the increased density of the amorphous phase.
Note that the high mobility of the Cu atoms is also suggested as a key factor for the phase change process
by a recent investigation of combined hard x-ray photoelectron spectroscopy and AIMD.\cite{Kobayashi2018}
Concomitantly, this movement leads to the fragmentation of the 6-fold rings and the formation of smaller ring structures, shown in Fig.~\ref{fig:model}~(c) and~(d), in which ``wrong'' bonds of especially Cu-Cu and Te-Te are realized.
The maximum at $n=6$ in the ring statistics
reveals that a significant number of the 6-rings of the crystal structure are conserved, but become largely distorted, as indicated by the wide bond angle distribution in Fig.~\ref{fig:BAD}.
This dominance of the 6-rings in the amorphous phase certainly contributes to the high speed of the phase transition.
In addition, the redistribution of chemical bonds leads to the formation of larger ring sizes with $n\geq7$.
Finally, we note that the focus so far has been on the amorphization process. This perspective was chosen because it is straightforward to understand the structure of the amorphous phase as derived from the crystal. For the technical application, fast crystallization, i.e.\ the reverse mechanism, is more important. This process can be understood as the reverse motion, i.e.\ in Fig.~\ref{fig:model} from (c,~d) to (a,~b), which in the same way requires only small atomic motions due to the similarities of the local structure.
\FloatBarrier
\section{Conclusion}
In conclusion, we present a model for the structure of the amorphous phase of Cu$_2$GeTe$_3$, based on the analysis of experimental data from AXS and XAFS, modeled by RMC.
The extensive experimental approach represents a distinct improvement compared to previous experimental results.
We confirmed the formation of smaller ring structures and a large number of homopolar bonds, in agreement with theoretical studies. The structural properties are used to
draw a qualitative model of the phase-change process, in which atoms (especially Cu) move towards the centers of the 6-fold rings of the crystal, thereby forming new bonds and resulting in a broader distribution of ring structures, while also preserving some structural motifs of the crystal, such as the interatomic distances and the high coordination numbers.
\section{Acknowledgements}
The authors acknowledge partial financial support by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research on Innovative Areas `3D Active-Site Science' (No.\ 26105006).
JRS also acknowledges financial support as Overseas researcher under a Postdoctoral Fellowship of JSPS (No.\ P16796).
BP thanks the Fond der Chemischen Industrie for financial support.
The AXS experiments were performed at BM02 of the ESRF (Experimental nos.\ HC-2213 and HC-2534).
The XAFS experiments were carried out at BL12C of the KEK-PF
(Proposal nos.\ 2010G559 and 2012G522).
We are indebted to L.~Pusztai (Wigner Research Centre for Physics, Hungary and Kumamoto University, Japan) for valuable discussions on the RMC data analysis,
and to Y.~Saito and S.~Shindo (Tohoku University, Japan) for their help with the sample preparation.
The local density of dark matter (DM) at the Sun's location in the Galaxy may not be spherically symmetric because the Sun, in its motion through the Galactic halo, is expected to create a trailing DM wake \citep{2019arXiv190110605H,2019MNRAS.tmp.1533B}. Thus, DM would be overdense behind the Sun inducing an asymmetry which, according to some researchers \citep{2019arXiv190110605H,2019MNRAS.tmp.1533B}, may allow for tighter constraints on the DM density due to its effects on the orbital motions of the planets of our solar system.
The perturbing gravitational acceleration experienced by a test particle orbiting our star which moves in the DM background can approximately be written in some coordinate system as \citep{1983A&A...117....9M, 2019arXiv190110605H,2019MNRAS.tmp.1533B}
\eqi
{\bds A}_\mathrm{DM} = -\rp{4\,\uppi\,G^2\,\varrho_\mathrm{DM}\,\mathrm{M}_\odot}{\sigma^2}\,\qua{0.21\,\ln\ton{\rp{r\,\mathrm{v}^2_\odot}{2\mu_\odot}} + 0.44\,\rp{{\bds{\hat{v}}_\odot}\bds\cdot{\bds{\hat{r}}} }{\left| {\bds{\hat{v}}_\odot}\bds\cdot{\bds{\hat{r}}} \right|}}\bds{\hat{v}}_\odot.\lb{wakeacc}
\eqf
In \rfr{wakeacc}, $G$ is the Newtonian gravitational constant, $\mu_\odot\doteq G\,\mathrm{M}_\odot$ is the Sun's gravitational parameter, $\mathrm{M}_\odot$ is its mass, ${\bds v}_\odot$ is the velocity of the Sun's motion through the Galactic DM halo, $\mathrm{v}_\odot = \left|{\bds v}_\odot\right|$ is its speed, $\varrho_\mathrm{DM}$ is the unperturbed local DM density, $\sigma = \mathrm{v}_\odot/\sqrt{2}$ is its one-dimensional velocity dispersion, and $\bds{\hat{r}}$ is the versor of the heliocentric position vector $\bds r$ of the planet.
The planetary observations are processed in the International Celestial Reference System (ICRS), whose fundamental plane is the celestial equator at the reference epoch J2000. Thus, ${\bds{\hat{v}}}_\odot$ must be transformed from the Galactic coordinate system (GalCS), which is a right-handed one whose $x$ axis points towards the Galactic Center, the $z$ axis is directed towards the North Galactic Pole (NGP), and the $y$ axis is aligned with the local direction of the large scale ordered rotation of the Galactic disk, to the equatorial system of ICRS. To the accuracy level required by the problem at hand, such a task can straightforwardly be accomplished with the inverse of the matrix $\mathcal{N}$ in \citet{2011A&A...536A.102L}. In the GalCS, ${\bds v}_\odot$ is
\eqi
{\bds v}_\odot^\mathrm{GalCS}= \grf{U_\odot,\,V_\odot+\Theta_\odot,\,W_\odot},
\eqf
where \citep{2018RNAAS...2c.156M} $\Theta_\odot = 233.3\,\mathrm{km\,s}^{-1}$ is the circular speed of the Local Standard of Rest (LSR), and \citep{2010MNRAS.403.1829S}
$U_\odot =11.1 \,\mathrm{km\,s}^{-1},\,V_\odot = 12.24\,\mathrm{km\,s}^{-1},\,W_\odot = 7.25\,\mathrm{km\,s}^{-1}$ are the components of the velocity of the Sun with respect to the LSR itself. Thus, the unit vector of the Sun's Galactic velocity, referred to the ICRS, turns out to be
\eqi
{\bds{\hat{v}}}_\odot = \grf{0.45574,\,-0.494244,\,0.740287}.
\eqf
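As an illustration, the rotation can be sketched in a few lines of Python. The matrix below is the classic Hipparcos-era equatorial-to-Galactic rotation matrix, not the inverse of the matrix $\mathcal{N}$ of \citet{2011A&A...536A.102L} actually used here, so the resulting components differ slightly from the values quoted above, depending also on the adopted solar motion parameters:
\begin{verbatim}
import numpy as np
R_icrs_to_gal = np.array([          # classic J2000 rotation (approx.)
    [-0.0548755604, -0.8734370902, -0.4838350155],
    [+0.4941094279, -0.4448296300, +0.7469822445],
    [-0.8676661490, -0.1980763734, +0.4559837762]])
U, V, W, Theta = 11.1, 12.24, 7.25, 233.3        # km/s
v_gal = np.array([U, V + Theta, W])
v_icrs = R_icrs_to_gal.T @ v_gal                 # inverse = transpose
print(v_icrs / np.linalg.norm(v_icrs))
\end{verbatim}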
In this Letter, I investigate in detail the orbital effects of \rfr{wakeacc} on the planets of our solar system without any a priori simplifying assumptions on their orbital configuration in order to make a consistent and unambiguous comparison with the observable quantities actually delivered by the astronomers. For other investigations about Saturn, performed with different methodologies, see \citet{2019arXiv190110605H,2019MNRAS.tmp.1533B}. In particular, I numerically calculate the long-term rates of change of all the Keplerian orbital elements of a test particle orbiting a primary under the influence of the perturbing acceleration of \rfr{wakeacc}. I apply my results to Saturn, and compare its DM-induced secular rates with the most recent bounds on anomalous orbital precessions available in the literature. Moreover, I numerically simulate the Earth-Saturn range signature due to \rfr{wakeacc}, and compare it with the currently available range residuals computed by the astronomers with the data collected by the \textit{Cassini} spacecraft from 2004 to 2017.
\section{The Keplerian orbital elements and the Earth-Saturn range}
Here, I investigate some of the consequences of \rfr{wakeacc} in terms of the orbital effects induced by it on the motion of a test particle around its primary in a restricted two-body system.
In principle, it would be possible to analytically work out the long-term rates of change of its Keplerian orbital elements by applying the Gauss perturbative equations to \rfr{wakeacc} and averaging their right-hand sides, evaluated onto an unperturbed Keplerian ellipse as reference trajectory, over one orbital period. In view of how cumbersome such an approach is, however, I take a numerical approach instead. In particular, I simultaneously integrate the equations of motion of, say, Saturn in Cartesian rectangular coordinates and the Gauss equations for each orbital element, with and without \rfr{wakeacc}, over a time span as long as 100 centuries in order to clearly identify the sought features of motion: both runs share the same initial conditions, retrieved from the Web interface HORIZONS maintained by the NASA Jet Propulsion Laboratory (JPL).
For consistency reasons with the planetary data reductions available in the literature, I use the equatorial coordinates of the ICRS. Then, for each orbital element, Fig.\,\ref{figura1} plots the time series resulting from the difference between the runs with and without \rfr{wakeacc}. Finally, I fit a linear model to its numerically produced signal, and estimate its slope: the results are given in Table\,\ref{tavola1}.
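A minimal, two-body Python sketch of this differential procedure is given below. It replaces the full $N$-body force model, the HORIZONS state vectors and the element extraction by placeholders (a toy circular starting state and a dedicated comment), but it shows how the wake term of \rfr{wakeacc} enters and how the two runs are differenced:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

G, M_sun = 6.674e-11, 1.989e30                    # SI units
mu = G * M_sun
rho_dm = 0.018 * 1.989e30 / (3.086e16)**3         # 0.018 Msun/pc^3
v_sun  = 245.9e3                                  # Sun's Galactic speed
sigma  = v_sun / np.sqrt(2.0)
vhat   = np.array([0.45574, -0.494244, 0.740287]) # ICRS, see text

def wake_acc(r_vec):
    r = np.linalg.norm(r_vec)
    pref = -4*np.pi * G**2 * rho_dm * M_sun / sigma**2
    proj = np.dot(vhat, r_vec) / r
    return pref*(0.21*np.log(r*v_sun**2/(2*mu)) + 0.44*np.sign(proj))*vhat

def deriv(t, y, wake_on):
    r, v = y[:3], y[3:]
    a = -mu * r / np.linalg.norm(r)**3
    return np.hstack([v, (a + wake_acc(r)) if wake_on else a])

y0 = np.array([1.43e12, 0, 0, 0, np.sqrt(mu/1.43e12), 0])  # toy Saturn
t_span = (0.0, 100*36525*86400.0)                 # 100 centuries (slow!)
sols = [solve_ivp(deriv, t_span, y0, args=(w,), rtol=1e-9)
        for w in (True, False)]
# secular rates: convert both runs to osculating elements and fit a
# straight line (np.polyfit) to their difference
\end{verbatim}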
\begin{figure}[htb]
\begin{center}
\centerline{
\vbox{
\begin{tabular}{cc}
\epsfysize= 5.2 cm\epsfbox{wakelatus.eps} & \epsfysize= 5.2 cm\epsfbox{wakeecce.eps}\\
\epsfysize= 5.2 cm\epsfbox{wakeincli.eps} & \epsfysize= 5.2 cm\epsfbox{wakenodo.eps}\\
\epsfysize= 5.2 cm\epsfbox{wakeperi.eps} & \epsfysize= 5.2 cm\epsfbox{wakeeta.eps}\\
\end{tabular}
}
}
\caption{
Numerically integrated shifts of the semilatus rectum $p$, eccentricity $e$, inclination $I$, longitude of the ascending node $\Omega$, longitude of perihelion $\varpi$, and mean anomaly at epoch $\eta$ of Saturn induced by the Solar DM wake acceleration of \rfr{wakeacc} over a time span of 100 centuries. The units are metres for $p$ and nanoarcseconds (nas) for all the other orbital elements. They were obtained for each orbital element as differences between two time series calculated by numerically integrating the barycentric Kronian equations of motion in Cartesian rectangular coordinates with and without \rfr{wakeacc} for $\varrho_\mathrm{DM} = 0.018\,\mathrm{M}_\odot\,\mathrm{pc}^{-3}$ \citep{2019MNRAS.tmp.1533B}. The initial conditions, referred to the celestial equator at the reference epoch J2000, were retrieved from the WEB interface HORIZONS by NASA JPL; they were the same for both the integrations. The Sun's Galactic velocity ${\bds v}_\odot$ was transformed to the International Celestial Reference System (ICRS). The slopes of the resulting secular trends are listed in Table\,\ref{tavola1}. }\label{figura1}
\end{center}
\end{figure}
\clearpage{}
\begin{table}
\caption{Estimated slopes of the secular trends induced by the Solar DM wake acceleration of \rfr{wakeacc}
for $\varrho_\mathrm{DM} = 0.018\,\mathrm{M}_\odot\,\mathrm{pc}^{-3}$ \citep{2019MNRAS.tmp.1533B} on the semilatus rectum $p$, eccentricity $e$, inclination $I$, longitude of the ascending node $\Omega$, longitude of perihelion $\varpi$, and mean anomaly at epoch $\eta$ of Saturn according to Fig.\,\ref{figura1}. The units are millimetres per century $\ton{\mathrm{mm\,cty}^{-1}}$ for $p$, and nanoarcseconds per century $\ton{\mathrm{nas\,cty}^{-1}}$ for all the other orbital elements.
}\lb{tavola1}
\begin{center}
\small{
\begin{tabular}{|l|l|l|l|l|l|}
\hline
$\dot p\,\left(\textrm{mm}\,\textrm{cty}^{-1}\right)$
& $\dot e\,\left(\textrm{nas}\,\textrm{cty}^{-1}\right)$
& $\dot I\,\left(\textrm{nas}\,\textrm{cty}^{-1}\right)$
& $\dot\Omega\,\left(\textrm{nas}\,\textrm{cty}^{-1}\right)$
& $\dot\varpi\,\left(\textrm{nas}\,\textrm{cty}^{-1}\right)$
& $\dot\eta\,\left(\textrm{nas}\,\textrm{cty}^{-1}\right)$ \\
\hline
$-0.1$ & $0.2$ & $-0.06$ & $0.2$ & $-2.1$ & $2.2$\\
\hline
\end{tabular}
}
\end{center}
\end{table}
It turns out that the impact of the Sun's DM wake on Saturn's motion is totally negligible. Indeed, its predicted orbital effects are as low as $\simeq 0.1\,\mathrm{millimeters\,per\,century}\,\ton{\mathrm{mm\,cty}^{-1}}$ and
$\simeq 0.05-2\,\mathrm{\mathrm{nanoarcseconds\,per\,century}}\,\ton{\mathrm{nas\,cty}^{-1}}$. On the other hand, the present-day formal accuracies in constraining any anomalous orbital rate of change of Saturn amount to $\simeq 17\,\mathrm{m\,cty}^{-1}$ and $\simeq 0.002-2\,\mathrm{milliarcseconds\,per\,century}\,\ton{\mathrm{mas\,cty}^{-1}}$, respectively, as tentatively calculated by \citet{2019AJ....157..220I} on the basis of the latest results by \citet{2018AstL...44..554P} with the recent EPM2017 ephemerides.
I also looked at the geocentric Kronian range by numerically producing a simulated time series $\Delta\rho(t)$ caused by \rfr{wakeacc} over the same time span (2004-2017) of the data record collected by the \textit{Cassini} spacecraft during its long-lasting tour in the system of the ringed planet. The time series was obtained from a simultaneous numerical integration of the barycentric equations of motion of all the major bodies of the solar system from 2004 April 1 to 2017 September 15. Two runs, sharing the same initial conditions and standard dynamical models accurate to the first post-Newtonian level, with the exception of \rfr{wakeacc} which was turned off in one of them, were performed. Then, two time series for the Earth-Saturn range were calculated, and their difference was taken as representative of $\Delta\rho(t)$ and plotted in Fig.\,\ref{figura2}.
\begin{figure}[htb]
\begin{center}
\centerline{
\vbox{
\begin{tabular}{c}
\epsfysize= 7.0 cm\epsfbox{wakerange.eps}\\
\epsfysize= 7.0 cm\epsfbox{wakerangeBIG.eps}\\
\end{tabular}
}
}
\caption{
Numerically simulated Earth-Saturn range signature $\Delta\rho(t)$ induced by the Solar DM wake acceleration of \rfr{wakeacc} over a time span 13 yr long covering the time spent by the \textit{Cassini} spacecraft in the Kronian system. It was obtained as a difference between two time series of
$\rho(t)=\sqrt{\ton{x_\mathrm{Sat}(t)-x_\oplus(t)}^2 + \ton{y_\mathrm{Sat}(t)-y _\oplus(t)}^2 + \ton{z_\mathrm{Sat}(t)- z_\oplus(t)}^2}$
calculated by numerically integrating the barycentric equations of motion in Cartesian rectangular coordinates of all the major bodies of the solar system from 2004 April 1 to 2017 September 15 with and without \rfr{wakeacc} for $\varrho_\mathrm{DM} = \varrho_\mathrm{DM}^0= 0.018\,\mathrm{M}_\odot\,\mathrm{pc}^{-3}$ \citep{2019MNRAS.tmp.1533B} (upper panel) and $\varrho_\mathrm{DM}= 2.5\times 10^6\,\varrho^0_\mathrm{DM}$ (lower panel). The initial conditions, corresponding to 2004 April 1 and referred to the celestial equator at the reference epoch J2000, were retrieved from the WEB interface HORIZONS by NASA JPL; they were the same for both the integrations which share also the entire standard $N$-body dynamical models to the first post-Newtonian level. The Sun's Galactic velocity ${\bds v}_\odot$ was transformed to the International Celestial Reference System (ICRS). The gray shaded horizontal band in the lower panel has a semi-amplitude of $30\,\mathrm{m}$, and represents the \virg{standard} post-fit range residuals of Saturn produced by processing the \textit{Cassini} data without explicitly modeling \rfr{wakeacc} \citep{2017NSTIM.108.....V}. }\label{figura2}
\end{center}
\end{figure}
\clearpage{}
From its upper panel, it can be noted that the expected DM-induced effect on the Earth-Saturn range is as little as $\simeq 0.1-0.2\,\mathrm{m}$; the range residuals currently available, computed by the astronomers without explicitly modeling \rfr{wakeacc}, are as large as $\simeq 30\,\mathrm{m}$ \citep{2017NSTIM.108.....V}. The lower panel of Fig.\,\ref{figura2} shows that, in order to have an anomalous signal sufficiently large to be, perhaps, detectable even with such non-dedicated residuals\footnote{The signature of any unmodeled effect $\mathcal{E}$, even if present in Nature, may be partially or totally removed in the data reduction procedure generating, among other things, the post-fit residuals since it may be partially absorbed in the estimation of other parameters like, e.g., the planetary masses and state vectors. This is especially true if its putative magnitude is not sufficiently greater than the measurements' accuracy. Thus, caution is in order when straightforward comparisons between a theoretically expected anomalous effect $\mathcal{E}$ and the residuals produced in non-dedicated analyses are made. Indeed, the absence of $\mathcal{E}$ in the residuals does not necessarily imply that $\mathcal{E}$ does not exist.}, the local DM density $\varrho_\mathrm{DM}$ would need to be about $2.5\times 10^6$ times larger than the currently accepted value $\varrho^0_\mathrm{DM} = 0.018\,\mathrm{M}_\odot\,\mathrm{pc}^{-3}$ \citep{2019MNRAS.tmp.1533B}; note also that, according to some estimates \citep{2018RNAAS...2c.156M}, $\varrho_\mathrm{DM}$ could even be smaller, possibly at the $\varrho_\mathrm{DM}\simeq 0.006\,\mathrm{M}_\odot\,\mathrm{pc}^{-3}$ level.
\section{Summary and conclusions}
The Solar DM wake acceleration of \rfr{wakeacc} induces long-term, secular rates of change on all the Keplerian orbital elements of the planets of our solar system. I numerically calculated them by integrating their equations of motion and using the Gauss perturbative equations after having rotated the velocity ${\bds v}_\odot$ of the Sun's Galactic travel from the GalCS to the ICRS, which is the coordinate system routinely used by the astronomers to process the planetary observations. For the presently accepted values of the parameters entering \rfr{wakeacc}, including the local DM density in the Sun's neighbourhood, the expected DM-induced orbital precessions of Saturn turn out to be as low as $\lesssim \mathrm{nas\,cty}^{-1}$, while the current formal uncertainties in the estimated Kronian orbital rates are at the $\simeq 0.002-2\,\mathrm{mas\,cty}^{-1}$ level.
I also simulated the Earth-Saturn range signature due to \rfr{wakeacc} over the same time span as covered by the data collected by the \textit{Cassini} spacecraft (2004-2017) whose residuals, computed by the astronomers without modeling any DM perturbations, are as large as $\simeq 30\,\mathrm{m}$. My numerically produced range time series, calculated with the values of the parameters of \rfr{wakeacc} found in the literature, is as low as $\simeq 0.1-0.2\,\mathrm{m}$. I demonstrated that the local DM density should be about a million times greater than its currently accepted value to create a range signal so large that it could not have escaped measurement even with the conventionally produced residuals today available.
In conclusion, the expected effects of the Solar DM wake on the planets are far too small to be detected, or even effectively constrained, with the current accuracy of planetary observations; their existence is fully compatible with the present data.
Deep neural networks (DNNs) are already used in a wide range of inference tasks, such as speech and image recognition, and are continuously advancing into physics, until now mostly for offline data analysis. There are only very few examples where DNNs are used in or targeted for the context of high-performance detector triggers. This might be due to the very special inference rate and latency constraints that apply in such environments, and the difficulty of developing complex algorithms for field-programmable gate arrays (FPGAs). FPGAs, however, often are the only type of processing hardware that can be used in such contexts, apart from even more demanding ASICs. One example of a specific network in the trigger context can be found in the Belle II trigger, in which a relatively small neural network is used for z-vertex triggering~\cite{Neuhaus:2017trg}. An example for a more general attempt at enabling neural network usage within triggers is the "High Level Synthesis for Machine Learning" (hls4ml) companion compiler, which uses high-level synthesis tools to generate the FPGA firmware design for a given network \cite{Duarte:2018ite}. A more detailed overview on trigger requirements and existing work regarding neural network inference on FPGAs is also given in \cite{Duarte:2018ite}.
In this paper we take the ATLAS detector~\cite{PERF-2007-01} and its upgrades of the first level trigger system~\cite{PHASE1,PHASE2} as a reference for our studies. In this FPGA-based trigger level, the incoming data rate of \SI{40}{MHz} needs to be reduced down to less than \SI{100}{kHz} within a maximal latency of \SI{2.2}{\micro\second}. Only this large reduction in rate allows for the further processing by a software trigger that reduces the event rate further to about \SI{1}{kHz}, the maximum rate that can be written to permanent storage. Most of the \SI{2.2}{\micro\second} latency is used up by data preparation and transfer, such that only a few tens to few hundreds of nanoseconds remain for actual neural network applications.
In contrast to the hls4ml framework, we chose to pursue a hardware-centric, bottom-up approach for implementing general neural networks on FPGAs, which grants maximum control over the FPGA design, and therefore allows very fine tuning for the specific use case. Thereby, we intend to further lift network size limits, while simultaneously providing scientists working on trigger algorithms with an easy-to-use tool for efficiently incorporating neural networks of sizes that were never used before into their systems.
We begin with a very brief overview over the relevant aspects of neural networks as well as FPGAs. Following that, we discuss the required types of operations, how they can be implemented in hardware, and introduce the concept of user-configurable fixed-point precision. The main section of this paper focusses on the design of the individual layers, which currently include \emph{fully-connected layers}, as well as two-dimensional (multi-channeled) \emph{convolutions} and \emph{maximum pooling}, and concludes this with the implementation of activation functions. We then quickly discuss what needs to be considered for putting multiple layers together in a functioning network, before the presentation of implementation results on individual layers and entire networks. In the end, we summarize the results of our developments and give an outlook on possible future improvements.
\section{Basics}
\subsection{Neural networks}
In the following, we are focussing on deep neural networks which consist of fully-connected, 2D convolutional and 2D maximum pooling layers, and any meaningful combination of these, i.e. an arbitrary combination of the 2D layers, which might be followed by flattening and then an arbitrary sequence of fully-connected layers, or alternatively a network consisting of fully-connected layers only. As neural network framework, Keras was used, with a TensorFlow backend \cite{Keras}.
In a fully-connected layer, every neuron receives every input, with the number of inputs $N_\mathrm{I}$ being predetermined by the network and the number of neurons $N_\mathrm{N}$ being a parameter of the layer. The inputs $\vec{i} \in \mathbb{R}^{N_\mathrm{I}}$ are multiplied by a weight matrix $W \in \mathbb{R}^{N_\mathrm{N} \times N_\mathrm{I}}$, typically have an offset $\vec{b} \in \mathbb{R}^{N_\mathrm{N}}$ applied and then go through a usually component-wise real activation function $\vec{A}: \mathbb{R}^{N_\mathrm{N}} \rightarrow \mathbb{R}^{N_\mathrm{N}}$ to produce the result $\vec{o} \in \mathbb{R}^{N_\mathrm{N}}$, i.e. the fully-connected layer implements equation \ref{eq:Dense}.
\begin{equation}\label{eq:Dense}
\vec{o} = \vec{A}\left( W\cdot\vec{i} + \vec{b} \right)
\end{equation}
A 2D convolutional layer receives an input image/feature map of shape $H_\mathrm{I} \times W_\mathrm{I} \times D_\mathrm{I}$. The number of kernels $N_\mathrm{K}$ and the kernel area $H_\mathrm{K} \times W_\mathrm{K}$ are parameters, while the kernel depth $D_\mathrm{K}$ equals $D_\mathrm{I}$ to incorporate all input channels into the kernel application. For a general layer, there are two additional architectural parameters called padding and stride. The stride determines the step lengths between kernel applications, and padding determines how kernel applications at the edges are treated. Currently, we support the 'default' stride of 1 in both directions and what is usually called padding 'valid', i.e. no incomplete kernel applications at the edges. Accordingly, output height and width are computed as $H_\mathrm{O} = H_\mathrm{I} - (H_\mathrm{K} - 1)$ and
$W_\mathrm{O} = W_\mathrm{I} - (W_\mathrm{K} - 1)$, respectively, with the output depth $D_\mathrm{O} = N_\mathrm{K}$. The application of a kernel is similar to the fully-connected neuron operation, where $\vec{i}$ contains the inputs that are covered at the given position and $W$ depends on the kernel that is used.
As the 2D convolutional layer, a 2D pooling layer also receives a potentially multi-channeled input image. Instead of a kernel area, a pooling area $H_\mathrm{P} \times W_\mathrm{P}$ is defined, which is applied channel by channel and selects the maximum value within the given input range. As for the convolutional layer, currently only the default stride is supported, which in this case is $(H_\mathrm{P}, W_\mathrm{P})$ (i.e. non-overlapping pooling areas without free spaces in between). Padding options 'valid', 'same' and 'unchanged' are supported (see section \ref{ss:layers_pooling}). In consequence, the output shape is at least $\floor[\big]{ \frac{H_\mathrm{I}}{H_\mathrm{P}} } \times \floor[\big]{ \frac{W_\mathrm{I}}{W_\mathrm{P}} } \times D_\mathrm{I}$, with possibly one more element in height and width depending on the padding option and shapes.
Generally, it is also possible to reshape the data between layers. Until now, we support flattening from 2D multi-channeled data to 1D, where inputs are rearranged in a C-like manner, i.e. an input at $(x,y,channel)$ will be put into position $x\cdot (W_\mathrm{I} \cdot D_\mathrm{I}) + y\cdot D_\mathrm{I} + channel$ in 1D.
\subsection{FPGAs}
\label{Basics_FPGAs}
Modern FPGAs comprise a large number (up to the order of millions) of programmable look-up tables (LUTs) and typically one or two flip-flop registers (FFs) per LUT, with the routing between these also being progammable. In the following, any specifics apply to the Xilinx UltraScale+ (US+) FPGA architecture \cite{Xilinx_USplus, Xilinx_USplus_SwChar, Xilinx_CLB, Xilinx_DSP}. However, most features are directly or similarly applicable to other recent device families from Xilinx and competitors such as Intel/Altera.
The LUTs typically implement binary combinational functions from few bits to 1- or 2-bit results, e.g. $f:\{0, 1\}^6 \rightarrow \{0, 1\}^2$ in the US+ architecture.\footnote{True dual-output LUTs are only supported for up to 5 inputs, otherwise constraints apply, see \cite{Xilinx_CLB}.} A flip-flop can store a single bit.
The combination of a large ensemble of programmable logic functions, registers for storage and synchronous processing and programmable routing makes it possible to implement even complex and large digital circuits within FPGAs.
In addition to these basic building blocks, there are also specialized embedded components such as block memories (BRAMs) and digital signal processors (DSPs). The BRAMs provide a high-density, high-capacity memory. One BRAM block consists of two \SI{18}{kib} parts, where each is addressed by 9 to 14 address bits, which results in port widths and depths ranging from 36 and 512 to 1 and 16384. The DSPs are 'simple' ALUs (arithmetic logic units), which offer a wide range of operations. In the US+ architecture, the operation mode which is most important for NN applications is a $w \cdot i + b$ type operation, where $w$, $i$ and $b$ are binary numbers of up to 18, 27, and 48 bits, respectively. The number $b$ can be an external input or come from an internal result register, allowing to perform a multiply-accumulate operation in either a pipelined (if $b$ is external) or localized (if $b$ is internally accumulated) design.
Our target device was the Xilinx US+ XCVU9P, which features approximately \SI{1.2}{M} LUTs and \SI{2.4}{M} FFs, as well as 6840 DSPs and 2160 2x18\,kib BRAM units, and supports maximum operation frequencies in the range from \SI{640}{MHz} to \SI{900}{MHz}, depending on device speed grade and hardware component. The FPGA firmware was designed in VHDL using Vivado 2018.2 as IDE with default settings for synthesis and implementation strategy, except for the synthesis mode \emph{out of context}, to make it possible to implement designs without any I/O connections. Apart from automatic inference of hardware components such as LUTs, FFs and DSPs from the VHDL code, they were also explicitly instantiated by \emph{design primitives} for the DSPs and adders implemented in the general logic, which gave maximum control over the implementation.
Key metrics for the evaluation of an FPGA implementation are the amount of resources required and the design timing, i.e. if the target frequency is met or if the design cannot run at the targeted frequency due to time needed for signal propagation. Designs which miss timing cannot be used without optimization, but even in these cases, it is useful to have knowledge about the severity of the timing violation.
\subsection{Arithmetics implementation}
\paragraph{Precision requirements}
Previous work~\cite{Duarte:2018ite} has demonstrated that for NN inference, reasonably low precision is sufficient, and neither floating-point arithmetics nor a large number of bits are necessary for optimal performance in many cases.\footnote{Which we also verified for our own test implementations.} Using fixed-point arithmetics provides the benefit of a significantly reduced implementation complexity, which saves resources and makes the adjustment of the arithmetic precision to specific needs regarding value range and granularity significantly easier.
In the following, we define the precision specification '$i.f$', where $i$ and $f$ are the integer and fractional bits in a fixed-point representation, with the entire value being in the so called \emph{two's~complement} representation. Accordingly, the resulting value range is $- 2^{i - 1}$ to $2^{i - 1} - G$ with the granularity $G = 2^{-f}$.
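As an illustration of the '$i.f$' format, the following Python sketch quantizes a real value to the nearest representable fixed-point number with saturation (the function name and rounding mode are our choice for the example, not part of the firmware):
\begin{verbatim}
def to_fixed(x, i, f):
    """Nearest 'i.f' two's-complement value and its bit pattern."""
    scale = 1 << f
    lo, hi = -(1 << (i - 1)) * scale, (1 << (i - 1)) * scale - 1
    q = max(lo, min(hi, round(x * scale)))   # saturate, then round
    bits = q & ((1 << (i + f)) - 1)          # two's-complement pattern
    return q / scale, format(bits, f"0{i+f}b")

print(to_fixed(0.7853, 2, 6))   # (0.78125, '00110010'), G = 2**-6
print(to_fixed(-1.3,   2, 6))   # (-1.296875, '10101101')
\end{verbatim}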
\paragraph{Implementation in hardware}
For DNNs as described above, only a few basic operations are needed: Multiplication and addition are combined into the multiply-accumulate (MAC) operation, and for the pooling layers, the 'select maximum' operation (MAX) on two values is required.
Since fixed-point arithmetics are sufficient, it is possible to use only a single DSP to implement the MAC operation that is required for computing weighted input sums.\footnote{This of course limits the number of bits for the inputs, but with up to 27 by 18 bit-wide multiplications in the UltraScale+ architecture, this limit is significantly above what is usually critical for neural network purposes.} Thereby, it is possible to provide one input and weight to a DSP per cycle, and then the DSP can add the product either to the internally accumulated value or to an externally provided partial result. For reaching the maximum DSP frequency, it is necessary to use internal pipeline registers of the DSPs. This requires having internal registers enabled at three levels: input, product and after the accumulation. Due to this, a 'first DSP' latency of three cycles arises. For any additional DSP in a pipeline, only one extra cycle is necessary, as even an externally provided extra term (e.g. the current partial result) can be inserted latency-free before the accumulation step.\footnote{An extra external input register is also possible and might be required for future frequency improvements, but it is possible to incorporate this with only one extra cycle for the entire DSP pipeline, instead of one cycle per DSP.}
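The resulting latency behaviour can be reproduced with a toy cycle-accurate model, shown below in Python. It registers operands, products and partial sums just like the DSP stages described above (the per-cycle weight sequencing over multiple neurons is omitted); a chain of $N$ DSPs then delivers its result $3 + (N-1)$ cycles after the first operands are valid:
\begin{verbatim}
# Toy cycle model of a DSP MAC chain (single weighted sum only)
def simulate_chain(pairs):                 # pairs: (input, weight) per DSP
    n = len(pairs)
    in_reg, mul_reg, acc_reg = [None]*n, [None]*n, [None]*n
    for _ in range(3 + n):                 # clock edges
        new_acc = []
        for k in range(n):
            prev = 0 if k == 0 else acc_reg[k - 1]
            ok = mul_reg[k] is not None and prev is not None
            new_acc.append(mul_reg[k] + prev if ok else None)
        new_mul = [a*w if a is not None else None
                   for a, (_, w) in zip(in_reg, pairs)]
        in_reg = [i for i, _ in pairs]     # operand registers
        mul_reg, acc_reg = new_mul, new_acc
    return acc_reg[-1]

assert simulate_chain([(1, 4), (2, 5), (3, 6)]) == 32
\end{verbatim}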
In some situations, it is necessary to compute sums of values that come out of multiple DSP pipelines. Adding $N$ values requires $N-1$ adders. By adding multiple values in a binary tree structure, where at each level, pairs of values are added and the result is sent to the next level, it is possible to keep the logic depth at the optimal value of $\ceil[\big]{ \log_2 N }$, which is desirable from a latency point of view. To guarantee optimal performance, these logic-based adders were instantiated on a design primitive level, forcing Vivado to use the dedicated carry logic and to place the corresponding primitives as close as possible.
For the pooling layers, it is necessary to find the maximum in a set of values. Similarly to the sum of multiple values, this can be done in a binary tree structure, to provide the lowest possible logic depth and resource utilization.\footnote{It might be possible to use an algorithm with formally lower logic depth, but this is not expected to scale to the hardware regarding utilization and timing.} Instead of being added, each pair of values is processed by the MAX operation, to detect the larger and send it to the next level, until only one value remains.
For the maximum of and sum of set structures, we added configuration parameters to include register stages at the input, inter-operation and output levels, allowing to trade maximum frequency against register utilization and latency in terms of cycles.
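The tree reduction itself is the same for the adder and the maximum case; a minimal Python sketch (register stages omitted) verifying the $\ceil[\big]{ \log_2 N }$ depth reads:
\begin{verbatim}
import math

def tree_reduce(values, op):         # op: addition or max
    level, depth = list(values), 0
    while len(level) > 1:
        nxt = [op(level[k], level[k+1])
               for k in range(0, len(level) - 1, 2)]
        if len(level) % 2:           # odd element passed through
            nxt.append(level[-1])
        level, depth = nxt, depth + 1
    return level[0], depth

s, d = tree_reduce([3, 1, 4, 1, 5, 9, 2], lambda a, b: a + b)
m, _ = tree_reduce([3, 1, 4, 1, 5, 9, 2], max)
assert (s, m, d) == (25, 9, math.ceil(math.log2(7)))
\end{verbatim}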
\section{Layer structure}
\subsection{General decisions}
Due to the very strict real-time context of high-performance detector triggers, it is very important to optimize the design for minimum latency.
At the same time, it is necessary to aim for an optimized inference performance, which strongly relates to an efficient resource usage. It is not expected that a solution could be found which optimizes all three metrics at once, especially if no further constraints exist for the network architectures. The selection of potentially suitable design approaches is tightly bound to the data rate $f_\mathrm{D} = T_\mathrm{D}^{-1}$ (with the data tick period $T_\mathrm{D}$), the achievable processing frequency $f_\mathrm{P}$ and the latency budget. Motivated by the ATLAS first level trigger, we chose to optimize the design for a data rate of \SI{40}{MHz} and for a latency as short as few tens of nanoseconds for entire networks, but put performance and efficiency before a perfectly minimized latency.
Generally, if data is coming in at a rate $f_D$ and the frequency of the processing units (PUs), e.g. the DSPs, is $f_\mathrm{P}$, there are $C = \floor[\big]{ \frac{f_\mathrm{P}}{f_\mathrm{D}} }$ cycles per PU and data set for real-time processing. Moreover, three general structures could be used that potentially reach perfect efficiency: Ideally, one would have all of the hardware resources available to process an entire data set within one data tick, and then proceed with the next set, which also results in the shortest latency. If that is not possible, one can either build a single pipeline of processing steps, where partial results of one pipeline stage propagate to the next stage during each data tick, or partition the hardware into $N_\mathrm{P}$ parts, and place the corresponding number of instances of the network processors, where each has $N_\mathrm{P}$ data ticks for processing, and inputs are multiplexed to a different instance at each tick. We have decided for a single-instance pipelined design, and excluded the others:
The 'single tick processing' design can be ruled out because with e.g. $T_\mathrm{D} = \SI{25}{ns}$ for the ATLAS detector, it will not be possible to have large designs which process all of the data within one data tick, simply due to the time required for computation and signal propagation in device-spanning, deep logic structures.
The decision for either a single pipeline design or a multi-instance design is less trivial, as both have advantages: In the single pipeline design, every pipeline stage could potentially be adapted to a specific task, which would reduce overhead structures. A multi-instance design would require more general processing units instead (as these would need to perform different tasks/compute different layers while time progresses), and could be implemented for example as systolic arrays. At the same time, data propagation through the single pipeline design would be bound to the pipeline structure, while the processing units in the multi-instance design could \emph{potentially} allow better adjustment of the latency depending on the exact design, although at the price of an increased complexity.
We took three further aspects into account for our decision: In order to avoid an increase in latency, we decided not to batch the input data, i.e. the batch size during inference is 1. Additionally, with the given maximum hardware processing frequency and targeted data frequency range, $C$ is in the order of $10^1$, and in a neural network, a layer can only begin computing when at least some of the required inputs are available.
All of this convinced us to choose the single pipeline design: It makes it easier to keep the efficiency high if no batching is used and $C$ is small, it allows having relatively small (although possibly not minimized) latencies and it avoids too much overhead structure for reconfigurability and data flow management.\footnote{Even with this design, it is possible to further reduce the latency at cost of computational efficiency by constraining the layers to use less than $C$ cycles. The loss in computational efficiency would typically be tolerable, as mostly small networks would benefit from this, while larger networks are rather throughput-bound than latency-bound anyway.} For the pipelining approach, it also appears natural to have one pipeline stage per network layer, which is conceptually much less complex than making efficient use of a systolic array while simultaneously keeping the latency low. Layers were therefore designed to take the data within $C$ cycles after the first part of the data arrives, and also produce all results within a period of $C$ cycles, to maintain network synchronicity, with an arbitrary but fixed delay in between.
Moreover, with $N_\mathrm{PU}$ processing units placeable in a device, there are $N_\mathrm{Op} = C \cdot N_\mathrm{PU}$ operations possible per data set and operation type, which is useful to understand the ideal-case inference performance of any device and thereby the maximum network size. Figure \ref{f:TOPsVsDevice} shows the number of DSP MACs possible depending on the device/DSP count and processing frequency, where the ATLAS data frequency of \SI{40}{MHz} was assumed. This shows that already with one of the current high-end FPGA families, a network size of up to 280 kMACs could be possible even for such high data frequencies, if the maximum device frequency can be achieved and the computational efficiency is close to 100\%.
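For a concrete example with the numbers used throughout this paper (the DSP clock of \SI{640}{MHz} is a conservative assumption for the lowest speed grade, chosen for illustration):
\begin{verbatim}
f_D, f_P, n_dsp = 40e6, 640e6, 6840   # data rate, DSP clock, XCVU9P DSPs
C = int(f_P // f_D)                   # cycles per data set -> 16
print(C, C * n_dsp)                   # 109440 MACs per data set (ideal)
\end{verbatim}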
\begin{figure}
\includegraphics[width = \textwidth]{TOPsVsDevice}
\caption{\label{f:TOPsVsDevice}
Maximum network size (in terms of MACs per data set) depending on processing frequency and number of DSPs, together with specifications of devices from the Xilinx UltraScale+ family (dashed grey lines and black squares). The number of operations is obtained by dividing the product of frequency and DSPs by a data frequency of $f_\mathrm{D} = \SI{40}{MHz}$. Network implementations which met the timing are marked with a plus, other examples with a cross.
The color of the square in the marker background indicates where the network would be located if all processing cycles could be used, i.e. it is an indicator for the ratio of used cycles (no contrast corresponds to a 100\% efficiency). (See section \ref{ss:Results_Networks} for details on these networks).
}
\end{figure}
\subsection{Fully-connected layers}
\begin{figure}
\centering
\begin{subfigure}[b]{.85\textwidth}
\includegraphics[width = \textwidth]{Dense_Dataflow_Alternative}
\caption{\label{fs:Dense_Dataflow}
Data flow schematic for the fully-connected layer.
The color (pattern) coding is as follows: gray (vertical lines) refers to no data set available/idling, while cyan (crosshatch) and pink (horizontal lines) refer to data of a first and second data set, respectively. The inputs of neighboring pipeline stages are updated in subsequent cycles and only once per data cycle, while the weight sequence for each individual DSP is repeated during each data cycle.
}
\end{subfigure}
\vspace{1\baselineskip}
\begin{subfigure}[b]{.85\textwidth}
\includegraphics[width = \textwidth]{Dense}
\caption{\label{fs:Dense_Structural}
General structural schematic of a fully-connected layer. For connections of inputs and weights to DSPs see text. As in the following, the dimensionality of signals is indicated by the number of strokes on a given signal line. (No stroke corresponds to scalar data, one stroke to a 1D array, etc.)
}
\end{subfigure}
\caption{Fully-connected layer design.}
\end{figure}
Generally, it is possible to use a single DSP to implement the weighted input sum of a neuron, as the DSPs support the MAC operation. However, the number of processing cycles is limited; therefore, arbitrary numbers of inputs $N_\mathrm{I}$ could only be handled if multiple inputs are weighted in parallel in multiple DSPs, and even then (or if $N_\mathrm{I} < C$), such an approach can be computationally very inefficient.
Instead, we exploited the fact that neurons in fully-connected layers require each of the inputs in order to be computed. Combining this with the inputs being made available within $C$ cycles, we implemented the weighted input sums using pipelines of DSPs.
A typical pipelined data flow is illustrated in figure \ref{fs:Dense_Dataflow}, where for illustrative purposes we assumed that only one further input becomes available per processing cycle. In this scheme, there are as many DSPs as inputs to the given pipeline. The input $i_n$ ($n \in \mathbb{N}_0$), which is available at cycle $n$, is stored in a register which is connected to the input of $\mathrm{DSP}_n$ at the subsequent cycle edge, which keeps this input until it is overwritten after $C$ cycles. With $n + 1$ being the first cycle when $\mathrm{DSP}_n$ has the new input available in its register, weight $w_{m,n}$ is presented to $\mathrm{DSP}_n$ during cycle $n + 1 + m$, where $m \in \mathbb{N}_0$ denotes the neuron index. After each cycle, the partially accumulated weighted input sum is passed as third input to the next DSP in the pipeline, which adds the next weighted input for the corresponding neuron, until the final result comes out of the last DSP.
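The following Python snippet is a behavioral sketch (ours, for illustration only) of this schedule; register delays at the pipeline output are omitted, so the quoted cycle refers to the last MAC of each neuron rather than the exact exit cycle:
\begin{verbatim}
def dense_pipeline_model(inputs, weights):
    """One new input per cycle, one DSP per input; DSP_n applies
    weight w[m][n] during cycle n + 1 + m. Returns, per neuron m,
    the weighted input sum and the cycle of its last MAC."""
    n_i = len(inputs)
    out = []
    for m, w_m in enumerate(weights):         # neuron index m
        acc = sum(w_m[n] * inputs[n] for n in range(n_i))
        out.append((acc, (n_i - 1) + 1 + m))  # DSP_{N_I-1} at cycle n+1+m
    return out

# Two neurons, three inputs: results leave in consecutive cycles.
print(dense_pipeline_model([1, 2, 3], [[1, 0, 1], [2, 1, 0]]))
\end{verbatim}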
Several such pipelines can be used when multiple inputs become available during each cycle. We define the number of pipelines as $P$. Each pipeline weights at least $\floor[\big]{ \frac{N_\mathrm{I}}{P} }$ inputs, the first $N_\mathrm{I} \bmod P$ pipelines weight one extra input, and the remaining pipelines are extended by a shift register for cycle-matching the partial results.
Apart from parallelizing the pipelines when multiple inputs become available per processing cycle, it is also possible to use multiple such 'neuron units' (NUs) if necessary, i.e. if $N_\mathrm{NU} = \ceil[\big]{ \frac{N_\mathrm{N}}{C} } > 1$. In that case, each NU computes at least $\floor[\big]{ \frac{N_\mathrm{N}}{N_\mathrm{NU}} }$ neuron results, and the first $N_\mathrm{N} \bmod N_\mathrm{NU}$ NUs compute one extra neuron. If $\ceil[\big]{ \frac{N_\mathrm{N}}{N_\mathrm{NU}} } < C$, all neuron results are available even before $C - 1$ extra cycles after the first result, which allows a reduction of the layer-to-layer latency.
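A small Python helper (illustrative only) makes these partitioning rules explicit:
\begin{verbatim}
from math import ceil

def fc_partition(n_inputs, n_neurons, c, p):
    """Inputs per pipeline and neurons per neuron unit, following
    the floor/mod rules described in the text."""
    inputs_per_pipeline = [n_inputs // p + (1 if k < n_inputs % p else 0)
                           for k in range(p)]
    n_nu = ceil(n_neurons / c)             # N_NU = ceil(N_N / C)
    neurons_per_nu = [n_neurons // n_nu + (1 if k < n_neurons % n_nu else 0)
                      for k in range(n_nu)]
    return inputs_per_pipeline, neurons_per_nu

# 50 inputs on 4 pipelines, 40 neurons with C = 16 (3 neuron units):
print(fc_partition(50, 40, 16, 4))  # ([13, 13, 12, 12], [14, 13, 13])
\end{verbatim}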
\paragraph{Code structure}
As for the following layers, the fully-connected layer consists of multiple design subunits, which are shown in figure \ref{fs:Dense_Structural}. The main parts of the fully-connected layer are the neuron units. These contain the DSP pipelines, which use the $w\cdot i + b$ type operation of the DSPs, with $w$ for weights, $i$ for the input values, and $b$ for the partial result of the previous DSP. In the current implementation, the $b$ signal can be routed either via dedicated connections between the DSPs or through the general fabric wiring. The latter provides more placement flexibility, but worse timing characteristics. For the fabric routing, timing could be further improved by also activating the DSP input register for the $b$ signal, which was not yet implemented.\footnote{Minor adaptations in the layer structure and controlling are necessary for that.} If multiple pipelines are used, a final adder is included to sum up all partial results to produce the actual weighted neuron input sum.
The input values for the DSPs are stored in a memory entity with external write control; therefore, the external write sequence must match the expected input sequence. Input $i$ is weighted by DSP $\floor[\big]{ \frac{i}{P} }$ in pipeline $i\bmod P$. Similarly, there are multiple weight memory blocks, realized with BRAM.\footnote{We chose BRAM-based memory units because we considered these a spare and available resource, which makes it possible to save LUTs and registers. If necessary, it would be possible to switch to memory based on the general logic resources.} Currently, there is one weight memory block per NU and pipeline stage, which spans all DSPs of that stage and NU. In the future, other architectures might be added, such as blocks per pipeline or per part of a pipeline, for frequency optimization for different layer architectures.
Finally, the results are multicast, i.e. the single result signal of any given neuron unit is connected to all output signals of the entire fully-connected layer which are produced by that neuron unit during any cycle. A controller entity controls the selection of the weight memory and asserts signals indicating when a result is valid.
\subsection{2D convolutional layers}
Convolutional layers typically feature only few parameters, but a significant fraction of the total MAC operations within a network, due to the repeated application of the same kernel. Obviously, it is possible to apply kernels at multiple positions in parallel, as there are no direct dependencies between different output positions.\footnote{There are partially shared inputs, however, and some clever ways to reduce the computational cost, e.g. by making use of the convolution theorem to implement the convolution as multiplication, but these often work well only for larger layers than possible here and sometimes require preparations that introduce significant additional latency, therefore we did not focus on these.} However, not exploiting the topology of convolutional layers in any way results in a significant and mostly avoidable logic resource consumption, because the same input values and weights need to be copied often and in many places in spite of their redundancy.
Therefore, we studied possibilities to save resources by spatially and temporally reusing the layer input values as well as the weights. In the most extreme case, one would simply compute the entire result in parallel, which would mean that the weights could be completely static and every input would also only need to be stored (although dynamically updated per data set) exactly once. However, in such an architecture, most of the processing cycles would remain unused, which would result in an extremely inefficient resource usage, even though the data reuse is maximized. In consequence, it is necessary to find a division of the computations that reuses the data efficiently, takes into account how many cycles are available for processing, and is feasible from an engineering point of view.
As an example, computing an entire output channel at once would also allow an extremely efficient weight reuse, as all processing units could get the same set of weights for the current channel, and the weights would only need to change cycle by cycle. The input reuse would even still be maximized, as every output channel requires the complete input data. However, such a scheme would work well only if the number of processing cycles closely matches the number of output channels, and could otherwise become extremely inefficient.
At the other extreme, one could make use of nearly all of the available processing cycles if the computations are divided on a per-output-element basis, as for the fully-connected layer. While feasible in principle, this would make it very complicated to find and implement an efficient way of sharing inputs and weights between processing elements.
As a compromise between the optimization of the data sharing and the cycle usage, we decided to use one 'row' of data as the smallest unit for distribution to different processing units, which we accordingly name 'row units' (RUs). We identify a row by its height and channel index, and it spans the entire width of the volume, see figure \ref{f:slicing}.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Slicing}
\caption{\label{f:slicing}
Illustration of how 'slices' and 'rows' are related to the original data volume. A slice contains all elements at a given height, a row spans the entire width at a given height and channel position.
}
\end{figure}
Generally, with $N_\mathrm{OE}$ output elements to be computed and $C$ cycles for processing, one requires at least $N_\mathrm{PU} = \ceil[\big] { \frac{N_\mathrm{OE}}{C} }$ processing units. With a row-wise division, there are $N_\mathrm{OE} = H_\mathrm{O} \cdot D_\mathrm{O}$ output elements to be computed. This means that $N_\mathrm{OE}$ is easily on the order of one hundred, while $C$ is expected to be 20 or significantly smaller in the case of the ATLAS detector trigger. If we use these values as assumptions for 'large' convolutional layers, the cycle efficiency (i.e. the ratio of non-idling row unit cycles) is guaranteed to be at least 80\% even in the worst case, and up to 100\% in the best, which further improves with lower $C$ or larger layers. This is still not as good as what could be obtained for a per-element division, but it turns out that even for a row-wise division, one already needs to resort to a relatively complex design for combining the high cycle efficiency with an extensive data reuse, which is explained in the following.
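The row unit count and the resulting cycle efficiency follow directly from these relations, as the following Python sketch (ours) shows:
\begin{verbatim}
from math import ceil

def ru_count_and_efficiency(h_out, d_out, c):
    """Row units for a row-wise division and the cycle efficiency,
    i.e. the ratio of non-idling row-unit cycles."""
    n_oe = h_out * d_out     # output rows (height x channels)
    n_ru = ceil(n_oe / c)
    return n_ru, n_oe / (n_ru * c)

# 'Large' layer examples in the sense of the text, with C = 20:
print(ru_count_and_efficiency(25, 4, 20))  # (5, 1.0)
print(ru_count_and_efficiency(27, 4, 20))  # (6, 0.9)
\end{verbatim}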
The first level of data reuse is facilitated by the row unit itself: Neighboring output elements share a range of their inputs, and all of their weights. Accordingly, it is only necessary to provide one set of weights for the subunits within a row unit that compute the individual output elements, and only one range of the input data which is then shared internally.
A second level of data reuse can be introduced between row units: If row units are guaranteed to always process output rows belonging to the same channels in parallel, then it is also possible to let them share their weights throughout the entire computation.
Furthermore, it is also possible to use input sharing between the $N_\mathrm{RU} = \ceil[\big] { \frac{H_\mathrm{O} \cdot D_\mathrm{O}}{C} }$ row units. For that, we want to introduce the concept of a 'slice', which contains all rows at a given height index, i.e. it is identified by the height index and spans the entire width and depth of the data volume, see figure \ref{f:slicing}. Row units that process rows from neighboring slices can share part of their inputs, as neighboring output slices require $H_\mathrm{K} - 1$ common input slices for their results.
Up to this point, there is 'for-free' data sharing within a row unit, and there are known conditions for inter-row-unit data sharing. Both belong to spatial rather than temporal data reuse. As a final step, it is necessary to find a way of assigning different output rows to row unit processing cycles such that the inter-row spatial conditions are met as well as possible and temporal data reuse is also facilitated.
We developed the following scheme: Initially, all row units begin processing data for the same output channel, but of directly neighboring slices, i.e. row unit $i$ begins computing the result of the first channel of output slice $i$. Cycle by cycle, every row unit progresses to the next channel of the same slice. All row units begin computing outputs for the next free range of slices when all rows within the current slice range have been covered.
This scheme has the advantage that it maximizes both the input and the weight sharing between row units, and additionally, it is only necessary to load a new range of inputs when the row units move on to the next range of output slices, i.e. only once every $D_\mathrm{O}$ cycles. This drastically reduces the number of different inputs that need to be loadable, and is therefore very beneficial for the resource utilization, as this affects both registers for data storage and LUTs for input selection.
An example for this scheme is shown in figure~\ref{fs:Conv_Allocs}. Generally, any row unit can process at most $C$ rows, which can be part of $N_\mathrm{RU,Sl} = \floor[\big]{ \frac{C}{D_\mathrm{O}} }$ complete slices and $N_\mathrm{RU,Rem} = C \bmod D_\mathrm{O}$ remaining rows.
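For the regular case, the allocation can be reproduced with a few lines of Python (an illustrative sketch of ours that ignores the 'long'/'short' distinction and the irregular case discussed below):
\begin{verbatim}
def ru_allocation(n_slices, d_out, n_ru):
    """At cycle t, row unit i processes channel (t mod D_O) of
    slice i + N_RU * (t // D_O), until all slices are covered.
    Returns {(slice, channel): (row unit, delay cycles)}."""
    alloc, t = {}, 0
    for base in range(0, n_slices, n_ru):  # next free slice range
        for ch in range(d_out):            # channel by channel
            for i in range(n_ru):
                if base + i < n_slices:
                    alloc[(base + i, ch)] = (i, t)
            t += 1
    return alloc

# 5 output slices, 3 channels, 2 row units: RUs 0/1 cover slices
# 0/1 during cycles 0..2, then slices 2/3, then slice 4.
\end{verbatim}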
\begin{figure}
\begin{subfigure}[b]{.35\textwidth}
\centering
\includegraphics[width=0.65\textwidth]{Conv_Allocs}
\caption{\label{fs:Conv_Allocs}
Example of the row unit allocation scheme (\mbox{$C \in \lbrace 14,...,19 \rbrace$} and given output shape). Positions marked with a 0 are computed first, all others require the indicated number of delay cycles. The different colors (patterns) indicate which row unit covers a given row.
The blue (crosshatched) row unit is 'short', the others 'long'.
}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.62\textwidth}
\includegraphics[width=\textwidth]{Conv_Regular}
\caption{\label{fs:Conv_Structure}
Convolutional layer schematic for the regular case (here with 5 RUs of kernel height 3, 3 of them 'long'). A buffer memory receives inputs and row-wise write enable signals. Internally, the row units receive their inputs from working memories, which provide selected ranges of the complete input. Finally, the RU results are multicast. Weight blocks for the row units and control infrastructure were left out for clarity reasons.
}
\end{subfigure}
\caption{2D convolutional layer design (regular structure).}
\end{figure}
At this point, it is necessary to distinguish between a regular and an irregular case: in the regular case, there are at most as many slices as can be completely processed by the given number of row units, i.e. $N_\mathrm{Sl} \leq N_\mathrm{RU} \cdot N_\mathrm{RU,Sl}$. This means that it is possible to process all results for any slice with only a single RU. It might be necessary to have some RUs process one slice more than others; we refer to those which process an extra slice as 'long' and to the others as 'short'.
In the 'irregular' case, it is necessary to also use at least some of the $N_\mathrm{RU,Rem}$ remainder cycles of some row units, and the allocation scheme becomes much more complicated (see appendix~\ref{ss:Conv_Irreg}).
\paragraph{Code structure}
The main parts of the 2D convolutional layer are the row units. These are provided with all of the weights and inputs they need. For a row with $W_\mathrm{O}$ output positions, there are just as many subunits for computing the respective output values. Every subunit is structured as in figure~\ref{f:Conv_Pipeline}: the inputs that are relevant for the given output position are weighted and accumulated per input channel from top to bottom channel, and partial results are pipelined individually for each kernel position. At the end, the total result for the given output position is produced by adding the partial results from all kernel positions. The top-to-bottom input scheme was chosen to match the output scheme, which also primarily proceeds from top to bottom channel, in order to typically allow faster data propagation between layers.
The rest of the layer was designed in an attempt to reduce the resource utilization (rather than to maximize frequency). A simplified structural example for the regular case is shown in figure~\ref{fs:Conv_Structure}. There is one input buffer memory, which has external write controlling and stores each of the (2D multi-channeled) inputs exactly once. The buffer memory is followed by working memories, which have control inputs for selecting when data is written to them and from which range of inputs data is taken. There is always a 'long' working memory and, if 'short' RUs are present, also a 'short' one. The 'long' working memory contains all input slices required for the 'long' RUs. The 'short' working memory contains all \emph{extra} input slices required for the 'short' RUs, i.e. those which are not required by any 'long' RU. We chose to differentiate the working memories into these two subunits because the 'short' memory needs to load one set of inputs fewer from the buffer memory, as the 'short' RUs finish their computation one cycle earlier, and therefore this structure can reduce the resource utilization. In figure \ref{fs:Conv_Structure}, each working memory output signal corresponds to an input slice, i.e. it spans the entire width and all channels, but has a fixed height index (which varies over time, while different ranges of inputs are loaded into the working memories).
To save further resources, we grouped RUs which always process the same output channel during any given cycle together, and gave all of them only a single weight memory. In the regular case, this means that there is only one set of weight memories for all 'long' and one set for all 'short' RUs. Details on the more complicated irregular case are described in appendix~\ref{ss:Conv_Irreg}.
As for the fully-connected layer, the row unit results are finally connected to all output positions where they produce results during any cycle (see allocation scheme), and the output row write enable signal is managed by a controller, which also controls the working memory input selection and write enabling and the weight memories.
\begin{figure}
\centering
\includegraphics{Dynamic_Conv_Pipeline.pdf}
\caption{
\label{f:Conv_Pipeline}
2D convolutional layer pipeline structure. Inputs from the topmost channel are weighted first, the weighting of inputs then propagates towards the last channel cycle by cycle. Weighting and accumulation happens per input area element individually, and the total result is finally created by adding the partial results for all input area elements. Such a pipeline structure is used $W_\mathrm{O}$ times within a row unit, i.e. once for each row output position. Here illustrated for an $n$ channel deep, $2 \times 2$ kernel.}
\end{figure}
\subsection{2D max-pooling layers}
\label{ss:layers_pooling}
For default-stride pooling layers, where there is no input sharing, it is neither necessary nor possible to resort to schemes as elaborate as for the convolutional layers. Since all inputs are required only once, there is no way of saving resources or input accesses, and there is no need to use complicated row allocation patterns. For simplicity, the concept of output rows and row units was still maintained, but the row allocation was simply done from top to bottom channel and top to bottom slice in an interleaved manner (see figure \ref{fs:Pool_Allocs}).
In contrast to the convolutional layers, it was already possible to implement different padding options for the pooling layer: these include the paddings 'valid' and 'same' (as in Keras), and a new mode 'unchanged'.\footnote{'Valid' padding dismisses 'incomplete' pooling inputs at the high-index edges, 'same' symmetrically extends the input space to only have complete pooling inputs, 'unchanged' extends only on the high-index edges.} Internally, padding is realized by the extension or truncation of the input signals. This will not increase the utilization significantly, as constant propagation will be detected and the corresponding logic parts removed/simplified by Vivado.
\begin{figure}
\begin{subfigure}[b]{.36\textwidth}
\centering
\includegraphics[width = 0.65\textwidth]{Pool_Allocs}
\caption{\label{fs:Pool_Allocs}
Example of the row unit allocation scheme (\mbox{$C \in \lbrace 14,...,19 \rbrace$} and given output shape). Positions marked with a 0 are computed first, all others require the indicated number of delay cycles. The different colors (patterns) indicate which row unit covers a given row.
}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.61\textwidth}
\includegraphics[width = \textwidth]{Pool}
\caption{\label{fs:Pool_Structure}
Pooling layer schematic. A buffer memory receives inputs and row-wise write enable signals. Internally, the pooling row units receive their inputs from working memories, which provide selected input rows. Finally, results are multicast. Control infrastructure was left out for clarity reasons.
}
\end{subfigure}
\caption{2D maximum pooling layer design.}
\end{figure}
\paragraph{Code structure}
As for the convolutional layer, the basic units of this layer are the row units. A row unit gets all inputs required to compute a single output row. Since pooling usually does not extend over multiple channels, this means that a row unit only requires a range of input rows of a single channel for each cycle.
A structural schematic of the design is shown in figure \ref{fs:Pool_Structure}. Similar to the convolutional layers, a buffer memory is used, from which the row unit working memories load a new input slice during each cycle. This process is driven by a controller entity, which also controls the output write enabling. Row unit results are again multicast to any position in which they are needed at some cycle.
\subsection{Activation functions}
Apart from the obvious linear/identity activation, we currently support the \emph{rectified linear unit} (relu) activation, which is quite commonly used in (convolutional) deep neural networks, and has the advantage that it can be implemented with very little cost on FPGAs.
By explicit instantiation at the design primitive level, we were able to guarantee a relu implementation that requires either $\floor[\big]{ \frac{B}{2} }$ LUTs or $B - 1$ FFs per relu unit working on $B$ bit values. For the LUT-based implementation, we exploited the fact that Xilinx UltraScale+ LUTs support dual-output use for up to five common inputs. The relu is then obtained by providing two input value bits and the input value sign bit to a LUT, which either replicates the two input bits on its outputs (positive sign) or sets the outputs to zero (negative sign). The FF-based implementation makes use of the FDRE primitive\footnote{This realizes a D-type flip-flop with a synchronous reset and a clock enable input, the latter was tied to a logical '1'.} for every non-sign input value bit. The flip-flops simply store the input value bits if the sign is positive and reset to '0' if the sign is negative, i.e. the sign bit acts as a reset control bit. This is an elegant demonstration of how advanced storage primitives can be used to implement not only storage, but also simple computations, which can save LUT resources for more complex operations.
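The FF-based variant can be summarized by a short bit-level model (a Python sketch of ours, assuming two's-complement inputs):
\begin{verbatim}
def relu_ff_model(x, bits):
    """The sign bit acts as a synchronous reset for the B-1
    flip-flops holding the remaining value bits."""
    sign = (x >> (bits - 1)) & 1
    return 0 if sign else x & ((1 << (bits - 1)) - 1)

# 14-bit example: positive values pass, negative values reset to 0
print(relu_ff_model(0b00000000001011, 14))  # -> 11
print(relu_ff_model(0b10000000001011, 14))  # negative -> 0
\end{verbatim}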
\begin{figure}
\centering
\includegraphics[width = .8\textwidth]{Activations_Interpolated}
\caption{\label{f:Activations_Interpolated}
Example for a linear interpolation of activations functions. Python-based case study, every value was rounded to a granularity of only $2^{-6}$ during all steps of the interpolation, only 16 sample points were used.
}
\end{figure}
Other activations can be implemented in the future. One way of doing this is via value-based look-up tables or via value-derivative-based look-up tables. For example, with 16 bit activation unit input values, one could use the 8 most significant bits to look up an interpolation base value and a first derivative, and then add the first derivative multiplied by the eight least significant input bits to the base value. With the Xilinx US+ architecture, both look-ups could be done at an approximate LUT cost of $2 \cdot 2^{8-6} \cdot 8 = 64$: the leading factor of two comes from the two look-ups, the factor $2^{8-6}$ from the cost of looking up one bit in a $2^8$-deep address space, and the eight from the assumed eight bits which are looked up for the base value and first derivative. The multiplication of two 8 bit values and the final addition would cost approximately 70 LUTs, giving a total of $\sim 140$ LUTs per activation unit to implement a relatively precise look-up with 256 sample points and linear interpolation in between.\footnote{The exact LUT cost depends on some details, but less than 200 is seen as a reasonable assumption for many cases. Characteristics like base value and derivative saturation could be further exploited for an even decreased LUT requirement.} Figure \ref{f:Activations_Interpolated} shows a case study, where we linearly interpolated the comparatively complicated tanh and sigmoid functions. To demonstrate the loose precision requirements, we used only sixteen sample points within the shown intervals, and rounded to a granularity of only $2^{-6}$ during all steps of the interpolation (i.e. the value and derivative samples themselves were rounded, the interpolation position was rounded, and multiplication and addition results were rounded). This case study nicely demonstrates that even with only very low precision and very few sample points, it is possible to obtain a surprisingly accurate approximation of non-linear activations. In these cases, it would be possible to implement the look-up of both base value and derivative with approximately ten LUTs and the multiplication and addition with coarsely 50 LUTs. Given that there would usually be only a single activation unit at the end of a many-DSP pipeline, one can expect an extra utilization of much less than 10 LUTs per DSP even for a precise interpolation.
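The case study can be re-created with a few lines of Python; the sketch below (ours) mirrors the quantization steps described above, with base-value and derivative tables of 16 entries and all intermediate values rounded to a granularity of $2^{-6}$:
\begin{verbatim}
import math

def q(x, step=2**-6):                  # round to coarse granularity
    return round(x / step) * step

def make_tables(f, lo, hi, n=16):
    """Quantized base values and first derivatives at n samples."""
    dx = (hi - lo) / n
    xs = [lo + i * dx for i in range(n)]
    return ([q(f(x)) for x in xs],
            [q((f(x + dx) - f(x)) / dx) for x in xs], lo, dx)

def interp(x, tables):
    base, deriv, lo, dx = tables
    i = min(max(int((x - lo) / dx), 0), len(base) - 1)  # table index
    return q(base[i] + q(deriv[i] * q(x - (lo + i * dx))))

tanh_tab = make_tables(math.tanh, -4.0, 4.0)
print(interp(0.5, tanh_tab), math.tanh(0.5))  # ~0.469 vs. 0.462
\end{verbatim}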
\section{Network creation toolkit}
\subsection{Layer synchronization considerations}
When connecting fully-connected layers to fully-connected layers, it is only reasonable to select a pipeline parallelization factor $P$ in the successor layer that equals the number of neuron units $N_\mathrm{NU}$ in the predecessor layer. That way, every input is used exactly when it is made available, and no extra buffering structures or delays are necessary.
When connecting layers with 'arbitrary' input and output schemes, there is no sense in starting the successor layer earlier or later than necessary to run without interruption. This means that it is necessary to compute the minimum start delay that allows uninterrupted computation of the successor layer. The minimum start delay can easily be determined if the output and input patterns of the involved layers are known, i.e. at which cycle which results/inputs are produced/needed. By element-wise subtraction of the 'needed' pattern from the 'available' pattern, one can take the largest positive value as the minimum delay necessary for continuous computation.
However, with a given start delay, it might happen for some layer sequences that an input is already overwritten before it is needed for the last time.\footnote{For example the 2D convolution might have multiple load operations from its buffer memory and can therefore create such a situation.} These cases can be solved by the introduction of individual extra delays for all affected input values. The extra delay for each input value can be computed by adding the start delay to the input 'needed' scheme, adding $C - 1$ to the 'available' scheme, subtracting the latter from the former and then introducing the respective extra delay for all values which yield a positive result. This does \emph{not} increase the layer-to-layer latency.
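Both computations are simple enough to be stated as a short Python sketch (illustrative only; 'available' holds the production cycle of each value, 'needed' its use cycle relative to the successor start, where for the overwrite check the last use is relevant):
\begin{verbatim}
def min_start_delay(available, needed):
    """Largest positive element of 'available' minus 'needed' is
    the minimum start delay for uninterrupted computation."""
    return max(0, max(a - n for a, n in zip(available, needed)))

def extra_input_delays(available, needed, start_delay, c):
    """Per-input extra delay for values overwritten (after C
    cycles) before their last use by the successor layer."""
    return [max(0, (n + start_delay) - (a + c - 1))
            for a, n in zip(available, needed)]
\end{verbatim}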
Further delays that need to be taken into account are possible extra delays introduced by the application of activation functions and delay offsets between the layer enable signal and when that layer actually expects the first input. The latter do not influence the network latency itself, but need to be considered for asserting the layer enable signals at the right moments, which is automatically done in our toolkit.
In the special case of flattening, it is currently necessary to 'regularize' the fully-connected layer input, because the fully-connected layer has no buffer memory, and it is therefore necessary to ensure that the inputs are updated in the exact order in which the computations propagate through the pipeline. This requires an extra auxiliary layer, resulting in one extra cycle of delay. The same effect could in the future be obtained by re-ordering the inputs/weights of the fully-connected layer, such that the inputs which are available first after flattening are also required first. This was not yet implemented because the regularization approach is more general, and it was understood only recently that a mere re-ordering would be sufficient if the fully-connected layer parallelization $P$ is set to the number of simultaneously produced result values from the last 2D layer.
\subsection{Network creation}
The Python-based toolkit for automated network creation and the VHDL library files can be obtained via email from the authors and are distributed as open source software.
The starting point for the network implementation on FPGA is a trained Keras network. Supported network architectures consist of the previously described layers and activation functions. An arbitrary sequence of 2D multi-channeled layers \emph{can} be followed by flattening and an arbitrary sequence of fully-connected layers, or a fully-connected-only network can also be used.
Before the network can be implemented for the FPGA, it is necessary to specify/customize various design parameters. These include the precision (i.e. integer and fractional bits) of the input value and weight representation, which can even be customized on a layer-wise basis. Apart from specifying layer input and output bit widths, it is even possible to configure how intermediate values are treated, which occur when partial results are passed between DSPs and from the last DSP within a pipeline to a potential logic-based adder. Other parameters include pipelining and routing behavior, for example whether the relu activation is implemented in LUTs or FFs, whether data flowing between DSPs is routed through the dedicated neighbor connections or the general fabric routing, or where registers are placed in arithmetic and logic pipelines.
Based on this information and the trained Keras network, the toolkit can be used to create the VHDL network top file, an auxiliary package file for configuration constants, a network simulation testbench file and the network data files for initializing the control and weight memories. The VHDL files can finally be included in a project together with the network library files and are ready for use. Then, it is only necessary to load the respective weights and controller data into the memories during execution, after which inference is possible.
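A hypothetical driver script could look as follows; every name below (the configuration keys and the \texttt{generate\_network} call) is an illustrative placeholder rather than the toolkit's actual API:
\begin{verbatim}
from tensorflow import keras

model = keras.models.load_model("mnist_cnn.h5")  # trained network

config = {
    "value_bits": (6, 8),   # integer.fractional bits for values
    "weight_bits": (2, 8),  # precision used in the results section
    "relu_impl": "LUT",     # LUT- or FF-based relu
    "dsp_cascade": True,    # dedicated DSP-to-DSP connections
}

# generate_network(model, config, out_dir="vhdl_out/")
# -> top file, configuration package, testbench and
#    memory initialization data files
\end{verbatim}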
\section{Results}
Given our target design parameters, i.e. \SI{40}{MHz} data frequency, at most \SI{600}{MHz} to \SI{900}{MHz} processing frequency and the Xilinx XCVU9P FPGA, we considered a possible network size of at most several $10^4$ MACs, to constrain ourselves to reasonable layer and network sizes for further studies.
Typical deep neural networks in use by ATLAS start at a few $10^3$ MACs for fully-connected networks aimed at tagging $W$ bosons and top quarks~\cite{ATL-PHYS-PUB-2017-004}, but can easily reach the order of several $10^7$ MACs, e.g. for quark/gluon identification using convolutional networks on calorimeter images~\cite{ATL-PHYS-PUB-2017-017}, as they have not been optimized for low resource usage.
For any design, timing closure was characterized by the 'worst negative slack' (WNS), which can be thought of as the difference between the signal propagation time estimated by Vivado and the target clock period. Timing characteristics apart from the WNS were met for all implemented designs. All designs were implemented with 'out of context' synthesis. The bit lengths of the layer input values and results were set to 14 (6.8) and, where occurring, weights were set to 10 (2.8) bits.\footnote{If any, we observed less than 1\% accuracy degradation in all of our MNIST digit recognition test networks with that precision choice. Fewer or more bits might be necessary for satisfying accuracy with other networks, with an approximately linear effect on the utilization when both bit widths are scaled simultaneously.}
\subsection{Isolated layers}
All implementations were done with a target clock period of \SI{1.563}{ns}, reflecting the $f_\mathrm{P} = \SI{640}{MHz}$ processing frequency that is ideally achievable even with speed grade 1 US+ DSPs, which corresponds to $C = 16$ for an assumed data rate of \SI{40}{MHz}, as in the ATLAS experiment. From this, we derived the minimum processing clock period $T_\mathrm{P,min} = T_\mathrm{P} - T_\mathrm{WNS}$ of a design. Designs which failed with a large WNS might show better results for larger $T_\mathrm{P}$, because for better-suited $T_\mathrm{P}$, Vivado can typically reach better $T_\mathrm{P,min}$.
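In other words, the maximum processing frequency of a design follows directly from the WNS, e.g. in Python:
\begin{verbatim}
def t_p_min_and_f_max(t_p_ns, wns_ns):
    """T_P,min = T_P - T_WNS and the corresponding maximum
    processing frequency in MHz (WNS < 0 if timing failed)."""
    t_min = t_p_ns - wns_ns
    return t_min, 1e3 / t_min

# Target 1.563 ns (640 MHz) with a WNS of -0.1 ns:
print(t_p_min_and_f_max(1.563, -0.1))  # (1.663 ns, ~601 MHz)
\end{verbatim}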
\begin{figure}
\includegraphics[width = \textwidth]{LayerSizevsFreq}
\caption{\label{f:Layer_SizevFreq}
Layer size versus maximum frequency, with maximum DSP and BRAM frequencies depending on speed grade. The clustering between \SI{640}{MHz} and \SI{750}{MHz} is probably an artifact of the target frequency of \SI{640}{MHz}. (Designs which reach timing closure are barely optimized further, and designs which initially just fail can often be optimized to meet the timing. Only designs that are too large fall completely below the target frequency.)
}
\end{figure}
\paragraph{Fully-connected layers}
We implemented fully-connected layers for $N_\mathrm{I}, N_\mathrm{N} \in \lbrace 8, 16, 24,\allowbreak 32,\allowbreak 50,\allowbreak 64,\allowbreak 75,\allowbreak 100,\allowbreak 128 \rbrace$, \mbox{$C \in \lbrace 10, 16 \rbrace$}. As expected, the DSP utilization represents what was designed for, i.e. $N_\mathrm{I} \cdot N_\mathrm{NU}$. The BRAM utilization is at most $0.5 \cdot \ceil[\big]{ \frac{N_\mathrm{I}}{P} } \cdot N_\mathrm{NU} \cdot \ceil[\big]{ \frac{P}{3} } + 0.5$, since for $P > 3$ and 10 bit weights, multiple BRAM (sub-)units are necessary for most stage weight memories. For larger layers, this corresponds to approximately one BRAM per 5 DSPs, which is tolerable given the resource ratios in the US+ device family. For all of these designs, at most 4 LUTs and 23 FFs were required per DSP, with fewer than 10 FFs per DSP in almost 75\% of the designs. Larger designs tended to have lower per-DSP LUT and FF utilization, due to positive scaling effects. Compared to $\sim 170$ LUTs and $\sim 340$ FFs per DSP in the XCVU9P FPGA, both values are negligible. Layer size in terms of MACs against maximum frequency is shown in figure \ref{f:Layer_SizevFreq}. Depending on the device choice, it is possible to have up to multiple thousand MACs in a layer before the implemented design rather than the device specification becomes limiting.
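To make the quoted BRAM bound concrete, the following Python sketch (ours) evaluates it for a large layer:
\begin{verbatim}
from math import ceil

def fc_bram_bound(n_i, n_n, c, p):
    """Upper bound on the fully-connected layer BRAM usage quoted
    above (10-bit weights, one weight memory block per NU and
    pipeline stage)."""
    n_nu = ceil(n_n / c)
    return 0.5 * ceil(n_i / p) * n_nu * ceil(p / 3) + 0.5

# 128 inputs, 128 neurons, C = 16, P = 8: at most 192.5 BRAMs for
# N_I * N_NU = 1024 DSPs, i.e. roughly one BRAM per 5 DSPs.
print(fc_bram_bound(128, 128, 16, 8))  # -> 192.5
\end{verbatim}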
\paragraph{2D convolutional layers}
\label{ssp:Results_Conv}
The exact structure and therefore the resource utilization of 2D convolutional layers depend strongly on the architectural parameters. To give a good overview, we implemented designs for some hand-picked parameters, to ensure all architectural features are covered at least once, and performed a grid scan for $H_\mathrm{I} \times W_\mathrm{I} \in \lbrace (10, 10), (15, 15), (16, 16), (20, 20), (25, 25) \rbrace$, $D_\mathrm{I} \in \lbrace 1, 2, 4, 6 \rbrace$, $H_\mathrm{K} \times W_\mathrm{K} \times N_\mathrm{K} \in \lbrace (2, 2), (3, 3), (4, 4) \rbrace \times \lbrace 2, 4, 6 \rbrace$, $C \in \lbrace 10, 16 \rbrace$, with a veto if more than 3000 DSPs (i.e. almost half of the XCVU9P DSPs) would be needed for the given single layer. By design, no BRAM is used. The number of DSPs needed is $N_\mathrm{RU} \cdot W_\mathrm{O} \cdot H_\mathrm{K} \cdot W_\mathrm{K} \cdot D_\mathrm{I}$. For the ratios of LUTs and FFs to DSPs, we achieved 6 to 61 (with 75\% of the designs below 25, and 95\% below 40) and 10 to 77 (with 75\% of the designs below 40 and 90\% below 50), respectively, which is still a good result given the available resource ratios. Again, larger layers tend to require fewer LUTs and FFs per DSP. A plot of layer size in terms of MACs against maximum frequency is shown in figure \ref{f:Layer_SizevFreq}. It turned out that for comparable size, convolutional layers typically reach lower maximum frequencies than fully-connected layers. This is a result of the optimization for resource utilization rather than timing at a time when the resource consumption was not yet known. Critical signal paths could already be identified, and we expect to be able to trade a higher frequency for more utilization in the future.
\paragraph{2D max-pooling layers}
Test implementations of layers included $H_\mathrm{P} \times W_\mathrm{P} \in \lbrace (2, 2), (3, 3), (4, 4) \rbrace$,
$H_\mathrm{I} \times W_\mathrm{I} \in \lbrace (10, 10), (16, 16), (20, 20), (30, 30), (40, 40) \rbrace$, $D_\mathrm{I} \in \lbrace 1, 2, 4 \rbrace$, $C \in \lbrace 10, 16 \rbrace$, padding 'same'. Per input value, between 5 and 16 LUTs were required, with 70\% of the values below 8, and 18 to 29 FFs, with 65\% of the values below 20. Hence, the pooling layer utilization is negligible for computationally reasonable input sizes, with only up to 52k LUTs and 130k FFs needed for as many as 6400 inputs.\footnote{That many inputs would already be difficult to handle with convolutional layers, depending on the exact layer parameters.} Layer size in terms of inputs against maximum frequency is shown in figure~\ref{f:Layer_SizevFreq}. Small layers met the target frequency of \SI{640}{MHz} without any problems; some larger designs fell below that line, but not significantly. Future alternative designs could yield increased frequencies, but this was not yet studied, as an improved implementation of the 2D convolutions appears more urgent.
\subsection{Example networks}
\label{ss:Results_Networks}
We tested four basic network architectures, where in the following we use I, C, P, F and D for 2D input, 2D convolutional, 2D pooling, flattening and fully-connected layers, respectively. Architecture Arc\textsubscript{A} has the layer sequence \mbox{I-C-P-F-D-D}, Arc\textsubscript{B} has \mbox{I-C-P-C-F-D-D}, Arc\textsubscript{C} has \mbox{I-C-P-C-F-D-D-D} and Arc\textsubscript{D} has \mbox{I-C-C-C-F-D-D-D}. For each architecture, we trained various networks with varying layer parameters for the MNIST digit recognition task. The last fully-connected layer always had 10 neurons to classify the ten digits. A range of values for $C$ was also scanned for each network, where we chose \SI{40}{MHz} as data frequency $f_\mathrm{D}$ and $C \cdot f_\mathrm{D}$ as (target) processing frequency. We used a precision choice of 6.8 and 2.8 for values and weights, for which at most a sub-percent classification accuracy loss was observed, compared to CPU inference using float32 as data type. Some of the implementation results are shown in table \ref{t:ResNets}.
\begin{table}
\caption{\label{t:ResNets}
Implementation results for example networks trained for the MNIST digit recognition task. Activation relu used for all but the last layer, which had a 'linear' activation and always had $N_\mathrm{N} = 10$ neurons. The single-channel input has a size of $14 \times 14$, if not stated otherwise. WNS was left out where no WNS occurred, i.e. timing was met for these designs. For convolutional layers, the kernel shape $(H_\mathrm{K} \times W_\mathrm{K} \times N_\mathrm{K})$ is specified, for pooling layers the pooling area $(H_\mathrm{P} \times W_\mathrm{P})$ and for fully-connected layers the number of neurons $N_\mathrm{N}$ is specified. The DSP efficiency refers to the relative amount of non-idling DSP cycles. For reference, the Xilinx UltraScale+ XCVU9P target FPGA features 6840 DSPs, 2160 BRAMs, approximately 1.2 million LUTs and twice as many FFs.
}
\centering
\begin{ADLactivate}
\begin{tabular}{l|c|ccc|cc}
Architecture (see text) & MACs & $T_\mathrm{P}$ & WNS & latency & $N_\mathrm{LUT}$ & $N_\mathrm{FF}$
\\
{\scriptsize(layer information)} & (DSP eff.) & (ns) & (ns) & (cycles) & $N_\mathrm{DSP}$ & $N_\mathrm{BRAM}$
\\
\hline
Arc\textsubscript{A1} ($C = 16$) (input $(7 \times 7)$) & 334 & 1.562 & - & 56 & 1793 & 3571 \\
{\scriptsize $(2\times 2 \times1)$-$(2\times 2)$-$10$} & (0.485) & & & & 43 & 10.5\\
\arrayrulecolor{gray}
\cdashline{1-7}
Arc\textsubscript{A2} ($C = 14$) & 1089 & 1.786 & - & 60 & 5060 & 9706 \\
{\scriptsize $(2 \times 2 \times 1)$-$(2\times 2)$-$7$} & (0.630) & & & & 108 & 17 \\
\cdashline{1-7}
Arc\textsubscript{A3} ($C = 14$) (input $(7 \times 7)$) & 1024 & 1.786 & - & 57 & 3051 & 5654 \\
{\scriptsize $(2 \times 2 \times 3)$-$(2 \times 2)$-$16$} & (0.620) & & & & 118 & 19 \\
\cdashline{1-7}
Arc\textsubscript{A4} ($C = 13$) & 3188 & 1.923 & - & 63 & 8689 & 16219 \\
{\scriptsize $(2\times2\times2)$-$(2\times2)$-$17$} & (0.774) & & & & 317 & 54.5 \\
\cdashline{1-7}
Arc\textsubscript{A5} ($C = 13$) & 7854 & 1.923 & - & 68 & 15567 & 28450 \\
{\scriptsize $(2\times2\times4)$-$(2\times2)$-$25$} & (0.967) & & & & 625 & 93.5 \\
\cdashline{1-7}
Arc\textsubscript{A6} ($C = 11$) & 12884 & 2.273 & - & 68 & 20962 & 34711 \\
{\scriptsize $(3\times3\times4)$-$(2\times2)$-$50$} & (0.894) & & & & 1310 & 166 \\
\cdashline{1-7}
Arc\textsubscript{B1} ($C = 12$) & 8858 & 2.083 & - & 76 & 18587 & 32886 \\
{\scriptsize $(2\times2\times4)$-$(2\times2)$-$(2\times2\times4)$-$25$} & (0.812) & & & & 909 & 99.5 \\
\cdashline{1-7}
Arc\textsubscript{B1} ($C = 16$) & 8858 & 2.083 & - & 87 & 17205 & 32760 \\
{\scriptsize $(2\times2\times4)$-$(2\times2)$-$(2\times2\times4)$-$25$} & (0.812) & & & & 713 & 71.5 \\
\cdashline{1-7}
Arc\textsubscript{B3} ($C = 11$) & 11362 & 2.273 & - & 79 & 28383 & 47140 \\
{\scriptsize $(2\times2\times6)$-$(2\times2)$-$(2\times2\times4)$-$25$}& (0.792) & & & & 1305 & 102.5 \\
\arrayrulecolor{black}
\hline
Arc\textsubscript{B2} ($C = 10$) & 15610 & 2.500 & -0.134 & 84 & 40998 & 69333 \\
{\scriptsize $(3\times3\times6)$-$(2\times2)$-$(3\times3\times6)$-25} & (0.855) & & & & 1825 & 68 \\
\arrayrulecolor{gray}
\cdashline{1-7}
Arc\textsubscript{B3} ($C = 16$) & 11362 & 1.562 & -0.014 & 93 & 26006 & 45065 \\
{\scriptsize $(2\times2\times6)$-$(2\times2)$-$(2\times2\times4)$-25} & (0.825) & & & & 861 & 71.5 \\
\cdashline{1-7}
Arc\textsubscript{C1} ($C = 8$) & 24076 & 3.125 & -0.045 & 93 & 37528 & 61388 \\
{\scriptsize $(3\times3\times6)$-$(2\times2)$-$(2\times2\times8)$-50-25} & (0.934) & & & & 3222 & 338.5 \\
\cdashline{1-7}
Arc\textsubscript{D5} ($C = 9$) & 26120 & 2.778 & -0.060 & 86 & 32592 & 51865 \\
{\scriptsize $(2\times2\times4)-(2\times2\times2)-(2\times2\times2)-50-25$} & (0.928) & & & & 3128 & 353 \\
\end{tabular}
\end{ADLactivate}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{FMM_MNIST_Nets_FreqClosure_vs_MULs}
\caption{\label{f:Networks_FreqClosure}
Network frequency closure depending on network size and target processing frequency. 'Frequency closure' is defined as the ratio between the targeted processing frequency and the maximum processing frequency according to Vivado. Note that especially for the larger networks, there is a lower limit for $C$ due to the amount of available resources, and large values for $C$ are excluded due to the difficulty to reach such high frequencies with those designs.
}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{.48\textwidth}
\includegraphics[width = \textwidth]{latencyVsNetsize}
\caption{\label{f:Networks_Latencies}
Network latency versus size (in terms of MACs) for networks which achieved timing closure (plus indicator) or failed only slightly (cross indicator).
}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.48\textwidth}
\includegraphics[width = \textwidth]{FMM_MNIST_Nets_Latency_vs_MULs}
\caption{\label{f:Networks_Latencies_C}
Network latency depending on $C$ if it is assumed that timing closure can be achieved, with networks and values for $C$ corresponding to the data from figure \ref{f:Networks_FreqClosure}.
}
\end{subfigure}
\caption{Timing characteristics for some of the MNIST example networks.}
\end{figure}
For the given precision choice, between 7 and 50 LUTs were needed per DSP (among all networks, including those which completely failed timing and are not shown in table \ref{t:ResNets}), with a tendency towards fewer LUTs per DSP for larger networks, and fewer than 30 LUTs per DSP in 83\% of the cases. Similarly, between 12 and 90 FFs were needed per DSP, with fewer than 50 FFs in 80\% of the cases, again with a tendency towards smaller values in larger networks. The absolute BRAM utilization depends largely on the number of inputs and neurons in the fully-connected layers, but never exceeds 20\% of the absolute DSP utilization, and is significantly lower in most cases.
The larger networks ($>15\mathrm{k}$ MACs) did not yet meet the timing requirements, which could already be expected based on the results for the individual layers. For all of the networks below that size, at least one value of $C$ was found for which the network met the timing. The findings regarding the relation between network size and timing closure are also shown in figure \ref{f:Networks_FreqClosure}. One interesting observation is that the relation between timing closure and $C$ is by no means 'continuous', i.e. there might be designs with a large timing violation for one choice of $C$, and little violation or even timing closure for neighboring values of $C$, or vice versa. This is no surprise, as $C$ has a large influence on the structure of the design, but it shows that it might be necessary to explore different settings to find a working design for medium-sized networks. However, it can still be seen that timing closure tends to be more difficult to reach for larger networks. For the future, it is expected that frequency improvements for the individual layers (especially for the convolutional layer, see section \ref{ssp:Results_Conv}) will make it possible to reach timing closure at least for networks which only narrowly failed. If layer-to-layer paths then become critical, it is an option to add extra register stages between layers to relieve the placement and routing efforts.
Network latencies (both in terms of real time and cycles) are put into relation to the network size in figure \ref{f:Networks_Latencies}. The two dominant factors contributing to the network latency are the number of layers and the processing frequency. For the four-layer networks with architecture Arc\textsubscript{A}, a latency of $\sim \SI{100}{ns}$ is obtained, after which \emph{all} results are available. The larger networks which just failed timing closure required up to $\sim \SI{300}{ns}$ to produce all results. Figure \ref{f:Networks_Latencies_C} shows what latencies would be obtained depending on $C$ if timing closure were always achieved for networks with up to $\sim \SI{50}{k}$ MACs, which especially shows that with increasing $C$, a significant latency reduction would become possible. It is expected that with future design improvements, the network latencies will approximately remain the same (or even improve) in terms of cycles, but improve in terms of real time due to frequency improvements, which would be represented by figure \ref{f:Networks_Latencies_C}.
Regarding the DSP efficiency, our design is not always able to adapt efficiently to very small networks, as expected. For larger networks, where the intended design works well, efficiencies of more than 90\% can be reached. Note that this is not yet completely visible in table \ref{t:ResNets}, because the networks shown there are still relatively small compared to the theoretical maximum network size, although an on average increased DSP efficiency can also be observed there.
We also switched the relu activation implementation between LUT- and FF-based for some designs, but did not see a consistent effect on timing; only the resource usage changed accordingly between LUTs and FFs. Vivado might not always be able to make use of the improved placement capabilities that come from the extra register layer. Additionally, we implemented one layer with significantly increased precision, and observed an almost linearly increased utilization and slightly worse timing characteristics, as expected.
\section{Summary and outlook}
We were able to develop a neural network implementation framework that is suitable for use within detector triggers, provides substantial inference performance even at incoming data rates of many MHz, and, by specifically designing the neural network layers for the special constraints of the trigger environment, requires only tens to a few hundreds of nanoseconds for inference for reasonable network sizes. By embedding our developments into an easy-to-use toolkit, the overhead of implementing efficient, high-performance neural networks within trigger systems with such constraints has been greatly reduced, as no expert knowledge about FPGA implementation and neural networks is needed anymore. Now, implementing a neural network on trigger FPGAs, for example in the ATLAS detector, can be as simple as running our script on an already trained Keras network.
In the future, we intend to further improve and extend our work, with a number of useful features already within our current scope. Among these are various changes to the current layer implementations, which target network latency and maximum frequency, but also functional extensions such as support for neuron bias, new activation functions and new layer types, such as transposed 2D convolutions for upsampling. We also have toolkit extensions planned, such as layer emulation for faster network benchmarking and automated precision recommendations. Further in the future, we see potential to significantly increase the overall performance, for example by implementing support for sparse weights, but also by further improving the efficiency of the FPGA resource utilization.
\section{Introduction}
Given a compact Riemannian manifold $(M,g)$, the Yamabe problem is to find
a metric conformal to $g$ such that it has constant scalar curvature. This was
solved by Aubin, Schoen, and Trudinger in \cite{Aubin0,Schoen,Trudinger}.
The (unnormalized) Yamabe flow was introduced to study the Yamabe problem, which is defined as
follows:
\begin{equation}\label{1}
\frac{\partial}{\partial t}g(t)=-R_{g(t)}g(t)\mbox{ for }t\geq 0,\hspace{2mm}g(0)=g.
\end{equation}
Here $R_{g(t)}$ is the scalar curvature of $g(t)$.
The existence and convergence of the Yamabe flow
have been studied in \cite{Brendle4,Brendle5,Chow,Schwetlick&Struwe,Ye}.
A Yamabe soliton is a self-similar solution to the Yamabe flow. More precisely,
$g(t)$ is called a Yamabe soliton if there exist a smooth function $\sigma(t)$ and
a $1$-parameter family of diffeomorphisms $\{\psi_t\}$ of $M$ such that
\begin{equation}\label{3}
g(t)=\sigma(t)\psi^*_t(g)
\end{equation}
is the solution of the Yamabe flow (\ref{1}), with $\sigma(0)=1$ and $\psi_0=id_M$.
The following is an alternative definition:
$(M,g)$ is called a Yamabe soliton if there exist a
vector field $X$ and a constant $\rho\in\mathbb{R}$ such that
\begin{equation*}
(R_g-\rho)g=\mathcal{L}_Xg.
\end{equation*}
Here $R_g$ is the scalar curvature of the metric $g$, and $\mathcal{L}_X$ is the Lie derivative in the direction of $X$.
Note that these two definitions are equivalent (see \cite{DiCerbo} for the proof).
Yamabe solitons have been studied by many authors. See \cite{Calvaruso,Cao,Chu,Daskalopoulos,DiCerbo,Hsu,Hsu1,Ma&Cheng,Ma&Miquel}
and the references therein. In particular, we mention the following theorem related to the main result in this paper,
which was obtained independently by di Cerbo and Disconzi in
\cite{DiCerbo} and by Hsu in \cite{Hsu}:
\begin{theorem}
Any compact Yamabe soliton must
have constant scalar curvature.
\end{theorem}
Suppose $(M,\theta)$ is a strictly pseudoconvex CR manifold of real dimension $2n+1$. The CR Yamabe problem is to
find a contact form conformal to $\theta$ such that it has constant Webster scalar curvature.
This was
solved by Jerison-Lee and Gamara-Yacoub in \cite{Gamara2,Gamara1,Jerison&Lee1,Jerison&Lee2,Jerison&Lee3}.
The (unnormalized) CR Yamabe flow is defined
as the evolution equation of the contact form $\theta(t)$:
\begin{equation}\label{2}
\frac{\partial}{\partial t}\theta(t)=-R_{\theta(t)}\,\theta(t)\mbox{ for }t\geq 0,\hspace{2mm}\theta(0)=\theta.
\end{equation}
Here $R_{\theta(t)}$ is the Webster scalar curvature of the contact form $\theta(t)$.
The CR Yamabe flow was introduced to tackle the CR Yamabe problem. See \cite{Chang&Chiu&Wu,Chang&Cheng,Ho2}
and the references therein. As in the Riemannian case,
a CR Yamabe soliton is a self-similar solution to the CR Yamabe flow: we call $\theta(t)$ a CR Yamabe soliton
if there exist a smooth function $\sigma(t)$ and
a $1$-parameter family of CR diffeomorphisms $\{\psi_t\}$ of $M$ such that
\begin{equation}\label{5}
\theta(t)=\sigma(t)\psi^*_t(\theta)
\end{equation}
is the solution of the CR Yamabe flow (\ref{2}), with $\sigma(0)=1$ and $\psi_0=id_M$.
The following is our main result, which is the CR version of
the result of di Cerbo and Disconzi
in \cite{DiCerbo} and
Hsu in \cite{Hsu} that we mentioned above.
\begin{theorem}\label{main}
If $(M,\theta(t))$ is a compact strictly pseudoconvex CR manifold satisfying (\ref{5}),
then the Webster scalar curvature of $(M,\theta(t))$ is constant.
\end{theorem}
\section{Proof}
In this section, we are going to prove Theorem \ref{main}.
We will consider the evolution of the quantity
\begin{equation}\label{2.0}
\frac{\int_MR_{\theta(t)}dV_{\theta(t)}}{(\int_MdV_{\theta(t)})^{\frac{n}{n+1}}}
\end{equation}
along the CR Yamabe flow (\ref{2}). Note that if
$\theta(t)=u(t)^{\frac{2}{n}}\theta$ is the solution of the CR Yamabe flow (\ref{2}), then
$u(t)$ satisfies the following evolution equation:
\begin{equation}\label{2.1}
\frac{\partial}{\partial t}u(t)=-\frac{n}{2}R_{\theta(t)}u(t)\mbox{ for }t\geq 0.
\end{equation}
Therefore, by (\ref{2.1}), the volume form $dV_{\theta(t)}$ of $\theta(t)$ satisfies
\begin{equation}\label{2.2}
\frac{\partial}{\partial t}(dV_{\theta(t)})=\frac{\partial}{\partial t}(u(t)^{\frac{2n+2}{n}}dV_{\theta})=
\frac{2n+2}{n}u(t)^{\frac{2n+2}{n}-1}\frac{\partial u(t)}{\partial t}dV_{\theta}=-(n+1)R_{\theta(t)}dV_{\theta(t)},
\end{equation}
which implies that
\begin{equation}\label{2.3}
\frac{d}{dt}\left(\int_MdV_{\theta(t)}\right)=-(n+1)\int_MR_{\theta(t)}dV_{\theta(t)}.
\end{equation}
Since $\theta(t)=u(t)^{\frac{2}{n}}\theta$, $u(t)$ satisfies the CR Yamabe equation:
$$-(2+\frac{2}{n})\Delta_{\theta}u(t)+R_{\theta}u(t)=R_{\theta(t)}u(t)^{1+\frac{2}{n}}$$
where $\Delta_{\theta}$ is the sub-Laplacian of the contact form $\theta$. Differentiating it with respect to $t$,
one can derive the following evolution equation of the
Webster scalar curvature $R_{\theta(t)}$ of $\theta(t)$:
(see \cite{Ho1} or \cite{Ho2} for the case of normalized CR Yamabe flow)
\begin{equation}\label{2.4}
\frac{\partial}{\partial t}R_{\theta(t)}=(n+1)\Delta_{\theta(t)} R_{\theta(t)}+R_{\theta(t)}^2.
\end{equation}
Here $\Delta_{\theta(t)}$ is the sub-Laplacian of the contact form $\theta(t)$.
Therefore, we have
\begin{equation}\label{2.5}
\begin{split}
&\frac{d}{dt}\left(\int_MR_{\theta(t)}dV_{\theta(t)}\right)\\
&=\int_M(\frac{\partial}{\partial t}R_{\theta(t)})dV_{\theta(t)}+\int_MR_{\theta(t)}\frac{\partial}{\partial t}(dV_{\theta(t)})\\
&=\int_M\Big((n+1)\Delta_{\theta(t)} R_{\theta(t)}+R_{\theta(t)}^2\Big)dV_{\theta(t)}-(n+1)\int_MR_{\theta(t)}^2dV_{\theta(t)}\\
&=-n\int_MR_{\theta(t)}^2dV_{\theta(t)}
\end{split}
\end{equation}
where we have used (\ref{2.2}) and (\ref{2.4}). Combining (\ref{2.3}) and (\ref{2.5}),
we obtain
\begin{equation}\label{2.6}
\begin{split}
\frac{d}{dt}\left(\frac{\int_MR_{\theta(t)}dV_{\theta(t)}}{(\int_MdV_{\theta(t)})^{\frac{n}{n+1}}}\right)
&=\frac{-n\left(\int_MR_{\theta(t)}^2dV_{\theta(t)}\right)\left(\int_MdV_{\theta(t)}\right)+n\left(\int_MR_{\theta(t)}dV_{\theta(t)}\right)^2}
{\left(\int_MdV_{\theta(t)}\right)^{\frac{n}{n+1}+1}}\leq 0
\end{split}
\end{equation}
where the last inequality follows from the Cauchy-Schwarz inequality $\left(\int_MR_{\theta(t)}dV_{\theta(t)}\right)^2\leq\left(\int_MR_{\theta(t)}^2dV_{\theta(t)}\right)\left(\int_MdV_{\theta(t)}\right)$. This shows that
the quantity in (\ref{2.0}) is decreasing along the unnormalized CR Yamabe flow (\ref{2}).
On the other hand, the quantity in (\ref{2.0}) is invariant under the CR Yamabe soliton (\ref{5}).
To see this, note that if
$\theta(t)=\sigma(t)\psi^*_t(\theta)$ for some smooth function $\sigma(t)$ and
a $1$-parameter family of CR diffeomorphisms $\{\psi_t\}$ of $M$, then
$R_{\sigma(t)\psi^*_t(\theta)}=\sigma(t)^{-1}R_{\psi^*_t(\theta)}$
and $dV_{\sigma(t)\psi^*_t(\theta)}=\sigma(t)^{n+1}dV_{\psi^*_t(\theta)}$, which implies that
\begin{equation*}
\begin{split}
\frac{\int_MR_{\theta(t)}dV_{\theta(t)}}{(\int_MdV_{\theta(t)})^{\frac{n}{n+1}}}=
\frac{\sigma(t)^n\int_MR_{\psi^*_t(\theta)}dV_{\psi^*_t(\theta)}}{(\sigma(t)^{n+1}\int_MdV_{\psi^*_t(\theta)})^{\frac{n}{n+1}}}
=
\frac{\int_MR_{\theta}dV_{\theta}}{(\int_MdV_{\theta})^{\frac{n}{n+1}}}.
\end{split}
\end{equation*}
Therefore, we have
$$\frac{d}{dt}\left(\frac{\int_MR_{\theta(t)}dV_{\theta(t)}}{(\int_MdV_{\theta(t)})^{\frac{n}{n+1}}}\right)=0$$
under the CR Yamabe soliton (\ref{5}). This implies that
the inequality in (\ref{2.6}) is equality.
In particular,
$R_{\theta(t)}$ must be constant
by the equality case of the Cauchy-Schwarz inequality in (\ref{2.6}).
This completes the proof of
Theorem \ref{main}.
\bibliographystyle{amsplain}
\section{Introduction}
Asteroid time-series studies were long a relatively unexplored field in planetary science because collecting a large number of asteroid light curves within a short period of time was a challenge. Thanks to significant advances in observational technology (i.e., robotic telescopes and wide-field cameras) and information science (i.e., high computing power and massive storage), this challenge has become tractable, and asteroid time-series studies have therefore been conducted in a more comprehensive way through wide-field surveys in the last decade \citep{Masiero2009, Polishook2009, Dermawan2011, Polishook2012, Chang2014a, Chang2015, Waszczak2015, Chang2016}.
The 2-hour spin barrier \citep{Harris1996, Pravec2002} has continuously been found for the asteroids collected from these wide-field surveys, mostly objects with sizes of a few hundred meters or larger. Moreover, the relation between the spin-rate limit and the bulk density of asteroids in this size range \citep[i.e., $P \sim 3.3 \sqrt{(1 + \Delta m)/\rho}$;][]{Harris1996} was seen for the first time in these data sets: S-type asteroids have a higher spin-rate limit than C-type asteroids \citep{Chang2015, Waszczak2015}. This suggests that the rubble-pile structure (i.e., a gravitationally bound aggregation) is generally applicable to these asteroids. However, six large super-fast rotators (large SFRs; i.e., $D > 300$ m) have been found to break the 2-hour spin barrier, challenging the rubble-pile structure \citep[see table 2 in][and references therein]{Chang2017}. Although internal cohesion \citep{Holsapple2007, Sanchez2012} is a possible way to keep these large SFRs intact under their super-fast rotations, the rarity of large SFRs compared with average asteroids suggests that cohesion might only be available to certain asteroids. Moreover, a taxonomic tendency seems to be present among the six known large SFRs \citep{Chang2017}. If the aforementioned rarity of large SFRs and the taxonomic tendency are real, large SFRs could be a special group distinct from average asteroids. Therefore, any preference shared by large SFRs, such as composition, size, or location in the main asteroid belt, is important for understanding their nature.
The asteroid spin-rate distribution reflects the overall evolution of the spin states of a group of asteroids. Two dominant mechanisms, mutual collisions and the Yarkovsky-O'Keefe-Radzievskii-Paddack effect \citep[YORP;][]{Rubincam2000}, are believed to effectively alter the spin states of main-belt asteroids (MBAs). While the former (i.e., collisional equilibrium) would lead to a Maxwellian spin-rate distribution \citep{Salo1987}, the latter tends to drive the distribution away from a Maxwellian form \citep{Pravec2008}. Indeed, asteroids with diameters larger than 40 km were shown to have a Maxwellian spin-rate distribution \citep{Pravec2000}, whereas smaller asteroids display a distribution that differs from a Maxwellian form. Interestingly, the spin-rate distributions of smaller asteroids obtained from targeted observations \citep[i.e., a flat distribution;][]{Pravec2008} and from wide-field surveys \citep[i.e., a deviated Maxwellian form;][]{Masiero2009, Chang2015, Waszczak2015, Chang2016} show some differences, and how this discrepancy arises still needs further study. Because the timescales of both aforementioned mechanisms depend on the size and location of an asteroid \citep[][and the references therein]{McNeill2016}, some footprints are therefore expected to be left in the spin-rate distributions. Fortunately, the recent wide-field surveys provide a good chance to study the spin-rate distributions of asteroids of different sizes and at different locations for further insight into the spin-state-altering mechanisms. \citet{Chang2015, Chang2016} found that the spin-rate distributions are similar for asteroids in a fixed diameter range at different locations. In addition, a drop in number at $f > 5$ rev/day was found in the spin-rate distributions of asteroids of $D < 3$ km in the inner and mid main belt, which is not seen for asteroids of $3 < D < 15$ km. The reason for this number drop is still unknown, and it is also interesting to know whether it exists in the outer main belt as well.
To address the aforementioned questions, a rotation-period survey aimed at kilometer-sized asteroids in the outer main belt is needed; we therefore used the Pan-STARRS1 (PS1) telescope to conduct such a survey in October 2016. From the survey, 876 reliable rotation periods were obtained, seven of which belong to large SFRs. The observation information and light-curve extraction are given in Section 2. The rotation-period analysis is described in Section 3. The results and discussion can be found in Section 4, and the summary and conclusions are presented in Section 5.
\section{Observations and Data Reduction}
The Panoramic Survey Telescope And Rapid Response System-1 (Pan-STARRS1, PS1) was designed to explore the visible $3\pi$ sky and is mainly dedicated to finding small solar system bodies, especially potentially hazardous objects. The telescope is a 1.8 m Ritchey-Chretien reflector located on Haleakala, Maui, equipped with the Gigapixel Camera \#1 to create a field of view of 7 deg$^2$. The available filters include $g_{P1}$ ($\sim 400-550$ nm), $r_{P1}$ ($\sim 550-700$ nm), $i_{P1}$ ($\sim 690-820$ nm), $z_{P1}$ ($\sim 820-920$ nm), and $y_{P1}$ ($> 920$ nm); a special filter, $w_{P1}$ (i.e., a combination of $g_{P1}$, $r_{P1}$, and $i_{P1}$), was designed for the discovery of moving objects \citep{Kaiser2010, Tonry2012, Chambers2016}.
In order to discover large SFRs and measure the spin-rate distribution of outer MBAs down to kilometer sizes, we used PS1 to conduct a special campaign to collect asteroid light curves in the $w_{P1}$ band during October 26-31, 2016, in which eight consecutive PS1 fields (i.e., $\sim56$~deg$^2$ in total) over the ecliptic plane around the opposition were continuously scanned with a cadence of $\sim10$ minutes. On the first night of the campaign, we used an observation sequence of the $w_{P1}$, $g_{P1}$, $w_{P1}$, $r_{P1}$, $w_{P1}$, $i_{P1}$, $w_{P1}$, $z_{P1}$ bands to obtain asteroid colors; the other nights were observed in the $w_{P1}$ band only. The exposure times for the $g_{P1}$, $r_{P1}$, $i_{P1}$, $z_{P1}$, and $w_{P1}$ bands were 120, 120, 120, 180, and 60 seconds, respectively, giving a similar limiting magnitude of 22.5 mag at the $5\sigma$ level in each band. However, only a few exposures were obtained on the last two nights of the campaign due to bad weather. The details of the observations can be found in Tables~\ref{obs_log} and \ref{obs_log_1}.
All the images obtained in the campaign were processed by the Image Processing Pipeline (IPP), which includes image de-trending, instrumental-signature removal, object detection, image warping, and photometric and astrometric calibration \citep[a detailed description can be found in][]{Chambers2016, Magnier2016a, Magnier2016b, Magnier2016c, Waters2016}. The IPP also performs image subtraction to find transient detections and then passes them to the Pan-STARRS Moving Object Processing System to discover new moving objects \citep{Denneau2013}. From this campaign, more than 1500 asteroids were discovered and reported to the Minor Planet Center.
The light curves of the asteroids, both known and newly discovered, were extracted by matching the detections against the ephemerides obtained from the {\it JPL/HORIZONS} system with a search radius of 2\arcsec, after removing the detections of stationary sources.
\section{Rotation-Period Analysis, Color Calculation, and Diameter Estimation}\label{period_analysis}
After correcting for light-travel time and reducing both the heliocentric, $r$, and geocentric, $\Delta$, distances to 1~AU for all light-curve measurements, we fitted a 2nd-order Fourier series to each light curve to derive the rotation period \citep{Harris1989}:
\begin{equation}\label{FTeq}
M_{i,j} = \sum_{k=1}^{2} B_k\sin\left[\frac{2\pi k}{P} (t_j-t_0)\right] + C_k\cos\left[\frac{2\pi k}{P} (t_j-t_0)\right] + Z_i,
\end{equation}
where $M_{i,j}$ are the reduced magnitudes in the $w_{P1}$ band measured at epoch $t_j$; $B_k$ and $C_k$ are the coefficients in the Fourier series; $P$ is the rotation period; and $t_0$ is an arbitrary epoch. We also introduced a constant value, $Z_i$, to correct for possible offsets in magnitude between measurements obtained on different nights. Least-squares minimization was applied to Eq.~(\ref{FTeq}) to obtain the other free parameters for each given $P$, and the explored spin rate, $f = 1/P$, ranged from 0.25 to 50~rev/day with a step size of 0.01~rev/day. In the fitting, we excluded the upper and lower 5\% of the detections in each light curve to avoid outliers, which might be contaminated by nearby bright stars or unknown sources.
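To make the fitting procedure concrete, the following is a minimal sketch of the frequency scan in Python (using only NumPy); the array names, the collapse of the per-night offsets $Z_i$ into a single constant, and the grid definition are our illustrative assumptions rather than the actual pipeline code.
\begin{verbatim}
import numpy as np

def fourier_chi2_scan(t, mag, err, freqs, order=2):
    """Fit a Fourier series (default 2nd order) at each trial
    frequency (t in days, freqs in rev/day) and return the
    chi-square of the best weighted linear fit."""
    chi2 = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        # Design matrix: constant zero point plus sin/cos terms
        cols = [np.ones_like(t)]
        for k in range(1, order + 1):
            cols += [np.sin(2*np.pi*k*f*t), np.cos(2*np.pi*k*f*t)]
        A = np.vstack(cols).T
        # Weighted least squares: scale rows by the photometric errors
        coef, *_ = np.linalg.lstsq(A/err[:, None], mag/err, rcond=None)
        chi2[i] = np.sum(((mag - A @ coef)/err)**2)
    return chi2

# Spin-rate grid of 0.25--50 rev/day with a step of 0.01 rev/day
freqs = np.arange(0.25, 50.0, 0.01)
\end{verbatim}
The best-fit period is then $P = 1/f$ at the frequency that minimizes $\chi^2$.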
A code ($U$), describing the reliability of each derived rotation period, was then assigned after manual review of each light curve, where `3', `2', `1', and `0' mean highly reliable, some ambiguity, possibly correct, and no detection, respectively \citep{Warner2009}. We estimated the uncertainty of each rotation period using the frequency range with $\chi^2$ smaller than $\chi_{best}^2+\Delta\chi^2$, where $\chi_{best}^2$ is the $\chi^2$ of the derived rotation period and $\Delta\chi^2$ is the 68\% (i.e., $1\sigma$) quantile of the inverse $\chi^2$ distribution, assuming $1 + 2N_k + N_i$~degrees of freedom, where $N_k$ is the order of the Fourier series and $N_i$ is the number of observation nights. The amplitude of a light curve was calculated after rejecting the upper and lower 5\% of its data points.
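The uncertainty estimate just described can likewise be sketched in a few lines (again illustrative; \texttt{chi2} and \texttt{freqs} are the arrays from the scan above, and \texttt{n\_k}, \texttt{n\_i} follow the degrees-of-freedom bookkeeping in the text):
\begin{verbatim}
from scipy.stats import chi2 as chi2_dist

def spin_rate_bounds(freqs, chi2, n_k=2, n_i=5, cl=0.68):
    """Frequency range with chi2 < chi2_best + delta_chi2, where
    delta_chi2 is the 68% quantile of the chi-square distribution
    with 1 + 2*N_k + N_i degrees of freedom."""
    dof = 1 + 2*n_k + n_i
    delta = chi2_dist.ppf(cl, dof)
    good = freqs[chi2 < chi2.min() + delta]
    return good.min(), good.max()
\end{verbatim}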
Using the detections in the different bands obtained on the first night, colors can be calculated for the observed asteroids. To remove the rotational effect in the color calculation, an offset for each band was fitted using Eq.~\ref{FTeq} with the solution obtained from the rotation-period fitting. Therefore, only asteroids with rotation periods of $U \ge 2$ have color calculations. However, we rejected a case if its detections in the $g_{P1}$, $r_{P1}$, $i_{P1}$, and $z_{P1}$ bands do not follow its folded light curve in the $w_{P1}$ band well. Moreover, we adopted the first-order transformation of \citet{Tonry2012} to convert the PS1 colors into SDSS colors, and then determined the spectral type using the SDSS colors, $a^*$\footnote{$a^* = 0.89\,(g-r) + 0.45\,(r-i)-0.57$, which was first used to distinguish blue ($a^* < 0$) and red ($a^* > 0$) asteroids in the SDSS $r-i$ vs. $g-r$ diagram \citep{Ivezic2001}.} vs. $i-z$ \citep{Ivezic2002}, and the boundary defined by \citet{Parker2008}\footnote{The SDSS colors of the C- and X-types (i.e., including the E-, M-, and P-types) overlap in the region of $a^* < 0$ \citep[i.e., the neutral-colored objects;][]{Demeo2013}. Distinguishing the C- and X-types relies on albedo or spectra. In this work, we follow the definition of \citet{Parker2008} to show the diverse colors of our samples.}.
Since the phase angles varied only slightly during our relatively short observation time-span, a fixed $G_{w}$ slope of 0.15 in the $H$--$G$ system was applied to estimate the absolute magnitudes of the asteroids \citep{Bowell1989}. We then estimated the diameter using
\begin{equation}\label{dia_eq}
D = {1329 \over \sqrt{p_{V}}} 10^{-H_{V}/5},
\end{equation}
where $H_V$ is the absolute magnitude in the $V$ band converted from the $H_{w}$ of our observations, $D$ is the diameter in~km, $p_V$ is the $V$-band geometric albedo, and 1329 is a conversion constant. We adopted albedo values for the S-, V-, and C-types of $p_V$ = 0.23, 0.35, and 0.06, respectively, from \citet{Demeo2013} when an asteroid has a spectral-type determination from our observations. Otherwise, three empirical albedo values, $p_V = 0.20$, 0.08, and 0.04, were assumed for asteroids in the inner ($2.1 < a < 2.5$ AU), mid ($2.5 < a < 2.8$ AU), and outer ($a > 2.8$ AU) main belt, respectively \citep{Tedesco2005}. However, if the $WISE$/$NEOWISE$ diameter estimation of an asteroid is available, we adopted that value instead \citep{Grav2011, Mainzer2011, Masiero2011}.
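As a worked example of Eq.~\ref{dia_eq} (a minimal sketch; the albedo value simply encodes the assumptions stated above):
\begin{verbatim}
def diameter_km(H_V, p_V):
    """Eq. (2): D = 1329 / sqrt(p_V) * 10**(-H_V/5), D in km."""
    return 1329.0 / p_V**0.5 * 10.0**(-H_V / 5.0)

# e.g., a C-type asteroid (p_V = 0.06) with H_V = 17.0 mag:
# diameter_km(17.0, 0.06) -> ~2.2 km
\end{verbatim}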
\section{Results and Discussion}
\subsection{The Derived Rotation Periods and Colors}\label{discuss_p}
From our survey, 3858 asteroid light curves with 10 or more detections in the $w_{P1}$ band were extracted, of which 876 have reliable measurements of their rotation periods (i.e., $U \ge 2$). Their magnitude distribution is shown in Fig.~\ref{mag_hist}, where we see that the recovery rate of rotation periods decreases toward the faint end. Most of our samples are MBAs, and the rest include some Hungarias, Cybeles, and Hildas. The diameter range of our samples can be seen in Fig.~\ref{a_d}, which shows their semi-major axes vs. diameters. Among the 876 asteroids with reliable rotation periods, 762 have qualified color measurements for spectral-type determination. Their spectral-type distributions, divided into the inner ($2.1 < a < 2.5$ AU), mid ($2.5 < a < 2.8$ AU), and outer ($a > 2.8$ AU) main belt, are shown in Fig.~\ref{sp_dist}. We see that the C-type becomes more dominant with increasing heliocentric distance. The detailed information on the 876 asteroids with reliable rotation periods is listed in Table~\ref{table_p}, and their folded light curves are shown in Figs.~\ref{lightcurve00}-\ref{lightcurve17}.
Among the 876 asteroids with reliable rotation periods, 34 also have a rotation period of $U \ge 2$ listed in the LCDB\footnote{The light-curve database \citep{Warner2009}; http://www.minorplanet.info/lightcurvedatabase.html.}. We therefore compare the rotation periods in the two data sets. The ratios of the rotation periods from our survey to those in the LCDB are shown in Fig.~\ref{comp_period}, where we see that most objects have consistent results, except for four objects showing differences greater than 5\%. Because our observation time-span was only a few days, it was difficult to recover long rotation periods; at best, we usually obtained a folded light curve with only partial coverage of a full rotation, as for asteroid (2574) Ladoga in Fig.~\ref{comp_period}. Therefore, such long rotation periods obtained from our survey should be seen as lower limits. The other three cases are briefly discussed below. For asteroid (114756) 2003 HC45, we derived a rotation period of 6.33 hours, twice the value given in \citet{Chang2015}. While our folded light curve of 2003 HC45 was assigned $U = 3$ for its significant double-peak feature, that of \citet{Chang2015} was assigned $U = 2$ and shows only a single-peak feature with an insignificant secondary dip. Therefore, we believe that \citet{Chang2015} identified half of the actual rotation period of 2003 HC45. For asteroid (7077) Shermanschultz, \citet{Waszczak2015} published two rotation periods, 4.41 and 4.86 hours, using 29 and 28 data points, respectively. In comparison, our result of 4.41 hours, from a folded light curve of $U = 3$ with many more data points densely covering the rotational phase, is consistent with the former. Therefore, we have high confidence in our rotation period for (7077) Shermanschultz and believe that 4.41 hours is the actual value. For asteroid (227189) 2005 QS67, we derived a rotation period of 4.55 hours, which is close to the 4.17 hours given by \citet{Chang2015}. Both folded light curves were assigned $U = 2$ and look equally good; therefore, its rotation period needs further confirmation. Since the difference is less than 10\%, we regard this case as a consistent result. In general, our rotation-period measurements are reliable for the following analysis.
\subsection{The 2-hour Spin-Rate Limit}
The 2-hour spin-rate limit shown by asteroids of $D > 150$ m has been seen as supporting evidence for the rubble-pile structure \citep{Pravec2002}. Although the six known large SFRs contradict the concept of the rubble-pile structure, the chance of discovering a large SFR is still very small \citep[see Table 2 in][and the references therein]{Chang2017}. This is also the case in our survey: only seven of the 876 reliable rotation periods were found to be shorter than 2 hours (see the detailed analysis below). Fig.~\ref{dia_per} shows the diameters vs. rotation periods of our samples, where we see an obvious cutoff around 2 hours. Although the chance of finding an object with a rotation period shorter than 2 hours is higher in our survey (i.e., $\sim 1$\%) than in \citet[][i.e., $\sim 0.1$\%]{Chang2015, Chang2016}, the rubble-pile structure remains a reasonable explanation for what we observe.
\subsection{The Large Super-Fast Rotators}
In our survey, eight objects were initially found to have reliable rotation periods of $< 2$ hours. Their period analysis is given in Fig.~\ref{ps1_sfr_lc}, in which all the rotation periods are clearly detected in the periodograms and all the folded light curves show a clean trend. \citet{Harris2014} pointed out that light curves of small amplitude (i.e., less than 0.2-0.3 mag) can be dominated by the 4th or 6th harmonics, which leads to the detection of one-half or one-third of the actual rotation period. To test this possibility, we used a 4th-order Fourier series to rerun the analysis for these eight objects. Figure~\ref{ps1_sfr_lc_4th} shows the periodograms and the folded light curves of the 4th-order Fourier-series fitting, where we see that all the fits have improved somewhat owing to the better fitting of detailed features. The best-fit periods of the 4th-order fitting are consistent with those of the 2nd-order fitting, except for 2001 FQ10 and 2016 UL98, whose best-fit periods in the 4th-order analysis are double the periods of the previous 2nd-order fitting. For 2001 FQ10, the 4th-order folded light curve reveals a very weak third peak that was missed in the previous 2nd-order fitting and gives 3.38 hours as the best-fit period; we therefore exclude this object as an SFR for now and await further confirmation of its rotation period. For 2016 UL98, the folded light curve of the 4th-order fitting shows a very small difference in the depths of the first and third dips. However, we suspect this difference is due to scattered data points. Even if the difference is real, the new period (i.e., 1.04 hours) is still shorter than 2 hours, which retains 2016 UL98 as an SFR. Therefore, we use 0.52 hours as the rotation period of 2016 UL98 in the following discussion. The detailed information on these seven objects (hereafter, the PS1-SFRs), along with the previously reported large SFRs, can be found in Table~\ref{sfr_tbl}.
The diameter range of the PS1-SFRs is from $\sim 0.3$ to $\sim 1.5$ km\footnote{The diameters of the PS1-SFRs are estimated based on the assumed albedos of their spectral types. For the neutral-colored objects (i.e., SDSS $a^* < 0$), the diameter would be reduced by a factor of two when assuming E-type \citep[i.e., an albedo of 0.45;][]{Demeo2013} instead of C-type. However, this still gives diameter estimates of $\gtrsim 0.3$ km for the four neutral-colored PS1-SFRs. The details of the spectral types of the PS1-SFRs can be found below.}. Using $P \sim 3.3 \sqrt{(1 + \Delta m)/\rho}$, the minimum bulk density needed to maintain the equilibrium between self-gravity and centrifugal force for a rubble-pile asteroid can be calculated \citep{Harris1996}. Fig.~\ref{spin_amp} shows the spin rates vs. light-curve amplitudes of our samples along with the spin-rate limits calculated for bulk densities of $\rho = 3, 4$, and 5 g cm$^{-3}$, where we see that the PS1-SFRs all need a relatively high bulk density to survive their super-fast rotations. In addition to the PS1-SFRs, another asteroid of $D \sim 0.7$ km, 2016 UK50, also requires a bulk density of $\rho > 4$ g cm$^{-3}$ to remain intact, although its rotation period is only 2.2 hours. Such high bulk densities are unusual among asteroids \citep[see Table 2 in][]{Demeo2013}. Therefore, the PS1-SFRs and 2016 UK50 are very unlikely to be explained simply by the rubble-pile structure. Is it possible that these PS1-SFRs are large monoliths? Although we have no evidence to rule out this possibility entirely, the question then becomes how they could avoid numerous collisions, or survive those numerous impacts without disruption\footnote{Given the intrinsic collision probability of main-belt asteroids used by \citet{Polishook2016}, the number of impacts per year is $N_{impacts} = P_i\, N(> r_{projectile})\, (r_{target} + r_{projectile})^2$, where $P_i = 2.85 \times 10^{-18}$ km$^{-2}$yr$^{-1}$ and $r_{projectile} = 16$ m \citep{Bottke1994}; the PS1-SFRs would then experience $10^3 - 10^4$ collisions during 1 Gyr.}.
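The minimum bulk density quoted above follows directly from the spin-barrier relation of \citet{Harris1996}; a minimal sketch:
\begin{verbatim}
def min_bulk_density(P_hr, dm):
    """Minimum bulk density (g/cm^3) of a rubble pile with rotation
    period P_hr (hours) and light-curve amplitude dm (mag), from
    P ~ 3.3 * sqrt((1 + dm) / rho)."""
    return (3.3 / P_hr)**2 * (1.0 + dm)

# e.g., P = 1.0 hr and dm = 0.3 mag require rho > ~14 g/cm^3,
# far above typical asteroid bulk densities
\end{verbatim}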
The color calculations for the PS1-SFRs are shown in Fig.~\ref{ps1_sfr_color}. Except for 2016 UL98, the color measurements of the six other PS1-SFRs all agree well with the folded light curves in the $w_{P1}$ band. Although the color measurements of 2016 UL98 are relatively scattered, they still fall within its light-curve variation in the $w_{P1}$ band. In addition, 2016 UN129 might also have large uncertainties in its color measurements because it was relatively faint and had only one detection in each of the $g_{P1}$, $r_{P1}$, and $i_{P1}$ bands. Fig.~\ref{ps1_sfr_sdss} shows the SDSS $a^*$ vs. $i-z$ plot for the seven PS1-SFRs on top of the objects with meaningful color calculations in our survey. Note that we adopt the photometric error as the color uncertainty. As seen, most of our samples populate the dense region of SDSS-sampled asteroids \citep[see Fig. 3 in][]{Parker2008}, and the seven PS1-SFRs show diversity in their colors. Among them, 2016 UN129 has an unusual location on the plot, which might be due to the relatively large uncertainties in its color measurements. Using the boundary defined in \citet{Parker2008}, the colors of the seven PS1-SFRs suggest that 2016 UG94 is S-type, 2009 DY105 and 2016 UY68 are V-type, and the other four are C-type\footnote{Considering that most E-type objects are found among the Hungarias and that the population of M-type objects is relatively small in the mid main belt \citep{Demeo2013}, we believe these four neutral-colored SFRs are very likely to be C-type.}. \citet{Chang2017} pointed out a possible taxonomic tendency among the six known large SFRs, namely that none of them is C-type. However, the diverse colors of the seven PS1-SFRs seem to rule out that tendency. Although the spectral types of the seven PS1-SFRs need further confirmation, our result suggests that large SFRs in the main belt can be composed of different materials.
Using the Drucker-Prager yield criterion, \citet{Holsapple2007} showed that SFRs can survive with the presence of internal cohesion. Following the equations and calculations in \citet{Holsapple2007, Rozitis2014, Polishook2016, Chang2017}, we estimated the cohesion needed for the PS1-SFRs assuming bulk densities of $\rho = 2.72$, 1.93, and 1.33 g cm$^{-3}$ for the S-, V-, and C-types, respectively \citep{Demeo2013}. The smallest cohesion among the PS1-SFRs is $\sim10~Pa$ for 2016 UY68 and the largest is $\sim700~Pa$ for 2016 UL98. This cohesion range is similar to that of the known large SFRs (see Table~\ref{sfr_tbl}) and to that of the lunar regolith \citep[i.e., 100 to 1000 $Pa$;][]{Mitchell1974}. This probably suggests a similar source of cohesion for these large SFRs.
Unlike the six known large SFRs, which are either near-Earth objects or inner MBAs, the PS1-SFRs populate the entire main belt. This suggests that large SFRs can form at any location in the main belt. However, it is very interesting to notice that six out of the seven PS1-SFRs are located in the mid main belt. If large SFRs were uniformly distributed in the main belt with similar sizes (i.e., about 1 km), a general survey for asteroid rotation periods, like ours, should have a better chance of discovering them in the inner main belt (i.e., better photometric accuracy for asteroids of the same size). As shown by our simulation of rotation-period recovery, the chances of recovering spin rates of $f \ge 3$ rev/day are very similar at a fixed magnitude and a fixed amplitude (see Fig.~\ref{debias_map}). Therefore, the smaller number of large SFRs detected in the inner main belt is not because we failed to derive their rotation periods. Did we simply obtain more reliable rotation periods in the mid main belt, and hence detect more large SFRs there? When limiting the diameter range to $0.3 - 2$ km, we have 237 and 193 reliable rotation periods in the inner and mid main belt, respectively; therefore, this is not the case for our survey either. A possible explanation is that fewer SFRs exist in the inner main belt. While the detection rate of large SFRs in our survey is only $\sim0.4\%$ (i.e., 1 out of 237 reliable rotation periods) in the inner main belt for objects of $0.3 < D < 2$ km, that in the mid main belt is $\sim3.1\%$ (i.e., 6 out of 193). This can also explain why the chance of discovering SFRs was lower (i.e., $\sim 1$ out of 1000) in previous similar surveys \citep[e.g.,][]{Chang2014a, Chang2015, Chang2016} than in this work (i.e., $\sim 1$ out of 100), because the previous surveys were only able to detect kilometer-sized asteroids in the inner main belt.
\subsection{The Spin-Rate Distributions}
We first derived the spin-rate distributions according to size and location in the main belt. The samples were divided into inner, mid, and outer MBAs with diameters of $3 < D < 15$ km and $D < 3$ km. Moreover, we followed the approach of \citet{Masiero2009} and \citet{Chang2015} to account for the possible observational biases in our survey. Fig.~\ref{debias_map} shows the recovery rates of rotation periods at different magnitudes in the simulation of our survey; in general, the recovery rate is higher for brighter, shorter-period, and larger-amplitude objects. The de-biased results are given in Fig.~\ref{spin_rate_comp}. Because we only have a small number of asteroids of $3 < D < 15$ km in the inner and mid main belt, we exclude them from the following discussion. Overall, our results are very similar to those of \citet{Chang2015}.
For asteroids of $3 < D < 15$ km in the outer main belt, the spin-rate distribution shows a smooth decline in number with increasing spin rate. This means that the asteroid system is not in collisional equilibrium; otherwise it would have a Maxwellian spin-rate distribution \citep{Salo1987}. Although the YORP effect can drive the distribution away from a Maxwellian form, it is not clear how a distribution like ours can be produced.
For asteroids of $D < 3$ km, a significant drop in number is observed at a spin rate of $f = 5$ rev/day at all locations. As pointed out by \citet{Chang2015}, the high spin-rate bins only contain very few small and elongated objects. This is also the case for our survey, in which most fast rotators of $D < 3$ km also have small amplitudes (i.e., $< 0.4$ mag; see the green line in Fig.~\ref{spin_rate_comp}). \citet{Chang2015} suspected that rotational disruption generates this deficiency of small and elongated fast rotators: because the spin-rate limit for small and elongated objects is lower and, moreover, their YORP timescales are shorter than those of large objects, such objects could have been spun up through the spin-rate limit and destroyed already. Therefore, a comprehensive simulation of the spin-rate evolution of the entire main asteroid belt is needed to understand what we see here.
\section{Summary}
Using PS1, we conducted a survey for asteroid rotation periods during October 26-31, 2016, from which more than 1500 new asteroids were reported to the Minor Planet Center, 3858 asteroid light curves with 10 or more detections were extracted, and 876 reliable rotation periods were obtained. The spin-rate distributions for asteroids of different sizes and locations in the main belt are similar to those of \citet{Chang2015, Chang2016}, which show that (a) the number of asteroids decreases with increasing spin rate for asteroids of $D > 3$ km; (b) a number drop appears at $f = 5$ rev/day for asteroids of $D < 3$ km; and (c) no obvious dependence on location is found.
Among the 876 reliable rotation periods, only seven objects were found to have rotation periods shorter than 2 hours. This suggests that SFRs are still rare. Considering the significant difference in number between the SFRs and the rest of our sample, the rubble-pile structure can still explain our observations.
Assuming a rubble-pile structure, the seven PS1-SFRs require relatively high bulk densities to remain intact under their super-fast rotations. Such high bulk densities are unusual for asteroids, and we therefore believe that other physical strengths, in addition to self-gravity, are needed to explain them. Using the Drucker-Prager yield criterion, the cohesion of the PS1-SFRs was estimated to be in the range of $\sim 10 - 700~Pa$, which is similar to that of the six known large SFRs and to that of the lunar regolith \citep{Mitchell1974}. This might suggest that SFRs share a similar source of internal cohesion. Unlike the six known large SFRs, which are located in the inner main belt or the near-Earth region, the PS1-SFRs populate the entire main asteroid belt. Moreover, the diverse colors of the seven PS1-SFRs rule out the possible taxonomic tendency previously found among the six known large SFRs. This suggests that the formation of SFRs is unlikely to depend on location or composition. However, it is interesting that six out of the seven PS1-SFRs are mid MBAs. Considering the survey conditions, we suspect that the mid main belt possibly harbors more SFRs than the inner main belt.
\acknowledgments This work is supported in part by the National Science Council of Taiwan under the grants MOST 107-2112-M-008-009-MY2, MOST 104-2112-M-008-014-MY3, MOST 104-2119-M-008-024, and MOST 105-2112-M-008-002-MY3, and also by Macau Science and Technology Fund No. 017/2014/A1 of MSAR. We thank the referee, Dr. Alan Harris, for his useful comments and suggestions to improve the content of this paper.
\section*{Warhead verification}
In a warhead verification protocol, a warhead owner (`host') attempts to prove
to an inspection team (`inspector') that an object submitted for inspection and
subsequent dismantlement and disposition is indeed a genuine nuclear warhead.
An
object successfully verified may then be dismantled by the host
under a secure
chain of custody~\cite{bunchCoC} and counted
towards the host's obligations under an arms reduction treaty. At the same
time,
the host seeks to prevent the inspector from learning any sensitive information
about the design of the warhead, whether to prevent proliferation of nuclear
weapons technology or disclosure of warhead architecture and vulnerabilities.
Thus, the verification measurement must be designed and performed in such a
way as to provide a strong test of authenticity while minimizing
intrusiveness and maximizing information security. Non-authentic warheads
(`hoaxes') fall into two broad categories: isotopic hoaxes, in which a valuable
weapon component (e.g.,~the weapons-grade Pu fissile fuel) is replaced by a
less-valuable surrogate of similar geometry (e.g.,~reactor grade Pu); and
geometric hoaxes, in which isotopes are present in their correct amounts but in
a non-weapons-usable configuration (e.g.,~rough slabs of Pu rather than
highly-engineered spherical shells).
Past approaches to warhead verification have generally focused on the
`attribute' approach, in which the protocol measures a set of key
characteristics thought to define a warhead, such as the total mass of
plutonium
and the isotopic ratio of Pu-239 to Pu-240 in the object~\cite{lanl2001fmttd}.
Such measurements are highly intrusive, and so are conducted behind an
`information barrier' (IB), an electronic or software layer that shields the
classified raw measurement data
and presents the inspector with only a binary pass/fail answer for each of the
attribute measurements~\cite{close2001infobarriers}. However, certifying that an
electronic or (especially) software IB does not contain any hidden backdoors or
functionalities---which a
nefarious inspector could exploit to obtain sensitive information or a
nefarious
host could use to fraudulently simulate a `genuine' result---is exceedingly
difficult, and may never be satisfactorily proven. Moreover, attributes must be
chosen specifically to describe real nuclear warheads, and thus may constitute
sensitive information themselves. Even then, the set of attributes may not be
complete, opening the door to hoax objects that pass all the attribute tests
but
nevertheless are not real warheads.
More recent work has therefore focused on the `template' approach to
verification, in which comparison to a known genuine object (the ``template'')
is used to certify subsequent objects presented for
inspection~\cite{yan2015review,fuller,marleau2015tabletop}. In such a
protocol, the measurements of both the template and subsequent objects are
encrypted using the same method, so that only the encrypted signals (or
``hashes'') must be compared to authenticate. The hash should be unique to a
particular combination of geometry and isotopic makeup (i.e.,~a particular
warhead design), while containing no sensitive information about the object. As
such, the hash is useless on its own, and only has any use in comparison
against
the hash of another object---a warhead that is already known to be genuine.
This
authenticated template warhead could be established for instance via an
unannounced visit by the inspector to a random launch facility in the host
country, and then by selecting a random warhead from an active-duty
intercontinental ballistic missile.\footnote{In any template warhead
verification protocol, the utility of every measurement hinges on the
authenticity of the template. A complete solution to the question of first
establishing such an authentic template will require classified knowledge of the
chain of custody of a country's nuclear stockpile, and therefore is an open question beyond the scope of this article.} A measurement of the
authenticated template would then be used as the standard against which to
compare the measurements from the same model of warhead covered by the arms
control treaty.
Recent papers have put forth template verification protocols that aim to make a
verification measurement of a warhead while protecting sensitive design
information. A team of researchers at Princeton proposed and later
experimentally demonstrated a verification protocol using superheated bubble
detectors and fast neutron radiography~\cite{ref:alex,philippe2016verification}.
In parallel, a team at the Massachusetts Institute of Technology (MIT) developed an alternative approach using isotopic
tomography via transmission nuclear resonance fluorescence
(tNRF)~\cite{kemp2016physical}; the present work is an experimental
demonstration of the MIT tNRF protocol. Further techniques using
coded-aperture-based passive neutron counters~\cite{marleau2017implementations}
and epithermal neutron resonance radiography \cite{hecla2017epithermal}, from
Sandia National Laboratories and MIT, respectively, have been proposed in the
past year.
The strengths and weaknesses of the aforementioned proposals can be compared by
examining the three requirements of an ideal warhead verification protocol:
\begin{enumerate}
\item completeness: the ideal protocol must clear all real warheads;
\item soundness: the ideal protocol must raise an alarm on all hoax
warheads;
\item information security: the ideal protocol must be
\textit{zero-knowledge}~\cite{goldwasser1989knowledge,blum1988zk}---for an
honest host, it must not reveal anything beyond a binary genuine/hoax
determination.
\end{enumerate}
The Princeton protocol is essentially zero-knowledge, returning a flat image
(up
to statistical variation) if the host has submitted a real warhead. In its
original form~\cite{ref:alex}, the measurement faces a challenge in the
soundness requirement: fast neutron radiography is insensitive to the isotopic
or (in some cases) elemental composition of the object, and cannot on its own
distinguish between weapon materials and well-chosen hoax materials. Additional
measurement modes using multiple incident neutron energies~\cite{yan2015two}
have been proposed to increase the protocol's discrimination between fissionable
and fissile isotopes. Similarly, work on the Sandia coded-aperture protocol has
focused on satisfying the completeness and information security aspects of the
problem, but has not demonstrated resistance to hoaxing by a neutron source of
similar geometry and activity.
Unlike the Princeton and Sandia protocols, the two MIT protocols are highly
sensitive to isotopics through their use of isotope-specific resonant
phenomena,
making them highly robust against a large class of hoaxes. While the MIT tNRF
protocol is not zero-knowledge (since the inspector has access to the hashed
measurements rather than solely a binary genuine/hoax determination), and thus
there are uncertainties about the extent of its information security, there are
methods to make it sufficiently secure~\cite{kemp2016physical}. This work demonstrates the core measurement of the MIT
tNRF protocol, and is an experimental implementation of an
isotopically-sensitive warhead
verification measurement.
\section*{Nuclear resonance fluorescence measurements}
Nuclear resonance fluorescence (NRF) describes the X$(\gamma, \gamma')$X
reaction in which a photon $\gamma$ is resonantly absorbed by the nucleus X and
then re-emitted as the excited nucleus subsequently transitions to its ground
state~\cite{metzger1959resonance,kneissl1996structure}. The cross
section for an NRF interaction with absorption via the resonant energy level
$E_r$ is given by the Breit-Wigner distribution
\begin{align}\label{eq:sigmaNRFBW}
\sigma^\text{NRF}_{r}(E) = \pi g_r \left( \frac{\hbar c}{E_r} \right)^2
\frac{\Gamma_r \Gamma_{r,0}}{(E-E_r)^2 + (\Gamma_r/2)^2}
\end{align}
where $\Gamma_r$ is the width of the level at $E_r$, $\Gamma_{r,0}$ is the
partial width for transitions between $E_r$ and the ground state, and $g_r$ is
a
statistical factor as described in SI Appendix~\S \ref{sec:si_nrf}. For high-$Z$
isotopes
of interest, these fundamental widths are typically ${\sim}10$~meV but
the effective width of the cross section is increased to
${\sim}1$~eV through Doppler broadening by thermal motion
of the target nuclei. Imperfect detector resolution further broadens the
measured NRF peaks to widths of ${\sim}1$~keV. Since the NRF lines of an isotope are
still typically $>$10~keV apart, the set of resonance energies $E_r$ provides a
resolvable, one-to-one map between measurement space and isotopic space.
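For illustration, Eq.~\ref{eq:sigmaNRFBW} can be evaluated numerically as follows (a sketch only; the level parameters below are placeholders rather than evaluated data for any particular isotope, and Doppler broadening is not included):
\begin{verbatim}
import numpy as np

HBARC = 197.327e-15  # hbar*c in MeV*m

def sigma_nrf(E, E_r, Gamma_r, Gamma_r0, g_r):
    """Breit-Wigner NRF cross section (m^2); energies and widths
    in MeV. Doppler broadening is omitted in this sketch."""
    return (np.pi * g_r * (HBARC / E_r)**2
            * Gamma_r * Gamma_r0 / ((E - E_r)**2 + (Gamma_r/2)**2))

# Placeholder example: a 2.176 MeV level with 10 meV widths, g_r = 1
E = np.linspace(2.175, 2.177, 1001)           # MeV
sigma = sigma_nrf(E, 2.176, 1e-8, 1e-8, 1.0)  # 10 meV = 1e-8 MeV
\end{verbatim}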
The MIT verification protocol exploits the isotope-specific nature
of NRF to make a template measurement of the mass and geometry of the isotopes
of interest to the inspector. As discussed in the following section and
illustrated in Fig.~\ref{fig:schematic}, the measurement uses a broad-spectrum
bremsstrahlung photon source to irradiate the measurement object; NRF
interactions in the object preferentially attenuate the photon flux at specific
energies determined by the unique nuclear energy level structure of each isotope
according to how much of the isotope is present in the warhead.
The remaining transmitted flux at these energies goes on to induce further NRF
interactions in an encryption foil, leading to NRF emission into high-purity
germanium (HPGe) photon detectors at an observed rate (SI Appendix Eq.~\ref{eq:d2ndEdOmega})
that has been reduced by the presence of the NRF isotope in the warhead. The
hashed measurements required for the template verification protocol are thus the
recorded spectra, since it is impossible to precisely determine the warhead
composition (i.e.,~the thickness $D$ in SI Appendix Eqs.~\ref{eq:phi_t} and
\ref{eq:d2ndEdOmega}) from the height of the NRF peaks in the observed spectrum
without knowledge of the \textit{detailed} composition of the foil (i.e.,~the
thickness $X$ in SI Appendix Eq.~\ref{eq:d2ndEdOmega}). The exact foil
design is therefore decided by the host and kept secret from the inspector. The
influence of the warhead composition on the height of the NRF peaks---and thus
any sensitive warhead design information---is then said to be \textit{physically
encrypted} by the foil. This technique uses the laws of physics to mask sensitive
information, rather than electronic or computer-based information barriers, making it
substantially more robust against tampering and hoaxing than previously proposed techniques \cite{close2001infobarriers}.
Although the detailed construction of the foil is kept
secret from the inspector in order to maintain the encryption, the mere presence
of certain characteristic NRF lines in the detected spectrum corresponds to the
presence of certain isotopes in the encryption foil, a fact the inspector may
use to validate the utility of a given foil without breaking the encryption.
The foil may also be placed under joint custody of the host and
inspector to ensure it has not been altered between the template and
candidate measurements. As an additional layer of information security, the
host may add optional `encryption plates' of warhead materials to the measured
object so that even if precise inference about the measured object is possible,
it is impossible to infer anything about the warhead alone.
To protect against geometric hoaxes, the MIT protocol includes measurements of the template and candidate
warheads in one or more random orientations, exploiting the difficulty for the
host of engineering a hoax warhead that could successfully mimic the template signal
along multiple projections. To increase the information security of the protocol, each orientation may be paired
with a unique cryptographic foil to dilute the information content of the multiple measurements.
Ref.~\cite{kemp2016physical} discusses the required complexity
of such geometric hoaxes, which increases rapidly with number of projections measured.
\section*{Experimental design}\label{sec:experiment}
Following the design depicted in Fig.~\ref{fig:schematic}, a bremsstrahlung
beam was used to illuminate a circular section of the object undergoing
interrogation. Since no real nuclear warheads were available in an academic
setting, several proxy warheads were constructed. The proxy warheads were
objects with a set of isotopes---U-238 and Al-27---that form the basis for
proof-of-concept NRF experiments and subsequent extrapolations to more
realistic
settings involving weapon isotopes such as U-235, Pu-239, and Pu-240. The first
proxy genuine target (``template~I'') was constructed from depleted uranium (DU) plates of total
thickness $3.18$~mm (wrapped in thin layers of Al foil, amounting to a total
thickness of ${\sim}0.25$~mm) encased between two
$19$~mm-thick layers of high-density polyethylene (HDPE) as proxy high
explosives. In the first hoax target (``hoax~Ia''), the DU was replaced by
$5.29$~mm of Pb sheets in order to match the nominal areal densities of high-$Z$
material to better than $1\%$. A second measurement of template I was made on
the following day of experiments to emulate the verification of a genuine
candidate warhead (``candidate~Ig''). The Pb hoax was similarly re-measured
(``hoax Ib''). To emulate measurements on different warhead designs, a second
genuine target (``template~II'') with double the thickness of DU was also
tested
against a hoax with double the thickness of Pb (``hoax~IIc'') and against a
partial hoax (``hoax~IId'') in which only half the DU was replaced. In total,
seven measurements were conducted on five different targets (see
Table~\ref{tab:configs_small} and Figs.~\ref{fig:I_vs_Ia}--\ref{fig:II_vs_IId}).
Experiments were performed at MIT's High Voltage Research Laboratory (HVRL),
which houses a continuous-wave Van de Graaff electron accelerator capable
of producing electron kinetic energies of $2.0$--$3.0$~MeV at beam currents of
up to $30$~{\textmu}A. For the physical cryptography measurements, a $2.52$~MeV
electron beam at the maximum stable current (between 25--30~{\textmu}A) was
directed towards a water-cooled bremsstrahlung radiator consisting of a
$126$~{\textmu}m-thick Au foil and approximately $1$~cm of Cu backing. The
resulting $2.52$~MeV endpoint bremsstrahlung photon beam was then collimated
with
a 20~cm-long conical collimator of entry diameter $9.86$~mm and exit diameter
$26.72$~mm, producing an opening half-angle of about~$5^\circ$. The beam
configuration and stability are discussed in SI Appendix~\S
\ref{sec:si_exp}.
\begin{figure*}[ht]
\centering
\begin{tikzpicture}
\fill[fill=gray] (0,-0.5) rectangle (2,0.5);
\node[below] at (1,-0.5) {collimator};
\fill[fill=white, draw=white] (0,-0.05) -- (0,0.05) -- (2,0.15) -- (2,-0.15);
\draw[densely dashed] (2.1,-0.5) rectangle (2.3,0.5);
\draw (2.2, -0.65) -- (2.1, -1.2) -- (1.4, -2.0);
\node[below,nodes = {draw,align=center},text width=4cm] at (1.4, -2.0) {optional
encryption plates};
\fill[fill=brown!80!black] (-0.5, -0.5) rectangle (-0.25, 0.5);
\fill[fill=yellow!80!black] (-0.5, -0.2) rectangle (-0.4, 0.2);
\node[above] at (-0.375,0.5) {radiator};
\draw (-0.375, -0.6) -- (0, -1.4);
\draw (-0.45, -0.22) -- (-0.7, -1.4);
\node[below] at (0,-1.4) {Cu};
\node[below] at (-0.7,-1.4) {Au};
\fill[fill=brown!80!black] (3,-1) rectangle (3.2,1);
\fill[fill=gray!60!white] (3.2,-0.8) rectangle (3.25,0.8);
\fill[fill=blue!80!black] (3.25,-0.8) rectangle (3.45,0.8);
\draw [decorate,decoration={brace,amplitude=3pt}](3.0,1.1) -- (3.7,1.1)
node[yshift=10pt,xshift=-8pt]{proxy warhead};
\fill[fill=gray!60!white] (3.45,-0.8) rectangle (3.5,0.8);
\fill[fill=brown!80!black] (3.5,-1) rectangle (3.7,1);
\node[below] at (2.7,-1.4) {plastic};
\node[below] at (3.99,-1.4) {DU, Al};
\draw (3.0,-1.1) -- (2.9,-1.4);
\draw (3.5,-1.05) -- (2.9,-1.4);
\draw (3.35,-1.) -- (3.7,-1.4);
\fill[fill=blue!80!black] (9.8,-0.8) rectangle (10,0.8);
\fill[fill=gray!60!white] (10,-1) rectangle (10.4,1);
\draw [decorate,decoration={brace,amplitude=3pt}]
(9.8,1.1) -- (10.4,1.1) node[xshift=20pt,yshift=10pt]
{DU/Al encryption foil};
\fill[fill=gray] (13.75,-0.30) rectangle (15.0,0.30);
\fill[fill=black] (14,-0.15) rectangle (14.75,0.15);
\node[above] at (14.375,0.30) {LaBr$_3$};
\fill[fill=gray, rotate around={55:(10.3,0)}] (5.5,-0.65) rectangle (7.4,2.65);
\fill[fill=gray, rotate around={-55:(10.3,0)}] (5.5,-2.65) rectangle (7.4,0.65);
\node at (6.7,2.3) {Pb};
\node at (6.7,-2.3) {Pb};
\fill[fill=black, rotate around={55:(10.3,0)}] (5.7,-0.35) rectangle (7.2,0.35);
\fill[fill=black, rotate around={-55:(10.3,0)}] (5.7,-0.35) rectangle
(7.2,0.35);
\node at (10,3.5) {2 $\times$ HPGe};
\node at (10,-3.5) {1 $\times$ HPGe};
\draw (8.4,3.5) -- (9.1,3.5);
\draw (8.4,-3.5) -- (9.1,-3.5);
\draw[dashed] (-1,0) -- (15.2, 0);
\draw[-{latex}, green!70!black, line width=1pt] (-0.2,0) -- (2.95, 0.15);
\draw[-{latex}, green!70!black, line width=1pt] (-0.2,0) -- (2.95, -0.15);
\draw[-{latex}, green!70!black, line width=1pt] (-0.2,0) -- (2.95, 0);
\draw (2.7, 0.35) -- (2.4, 0.8) -- (1.4, 1.05);
\node[above] at (1.4, 1) {$\phi_0(E)$};
\draw[-{latex}, green!70!black, line width=1pt] (3.75, 0.1) -- (9.75, 0.2);
\draw[-{latex}, green!70!black, line width=1pt] (3.75, -0.1) -- (9.75, -0.2);
\node[above] at (5, 0.2) {$\phi_t(E)$};
\draw[-{latex}, green!70!black, line width=1pt, rotate around={-55:(10.3,0)}]
(9, 0) -- (7.5, 0);
\draw[-{latex}, green!70!black, line width=1pt, rotate around={55:(10.3,0)}]
(9,
0) -- (7.5, 0);
\node at (9.6,2.2) {NRF $\gamma$};
\node at (9.6,-2.2) {NRF $\gamma$};
\draw[-{latex}, red!70!black, line width=1pt] (-1,0) -- (-0.45, 0);
\node[below] at (-1, 0) {e$^-$};
\node at (13.0, -3.8) {(top view, not to scale)};
\end{tikzpicture}
\caption{Schematic of the physical cryptographic NRF measurement. As an
information security measure, the large Pb shields prevent the HPGe detectors
from directly observing the proxy warhead. Annotated photographs of the
experiment geometry are shown in SI Appendix Figs.~\ref{fig:setup_photo} and
\ref{fig:genuine_photo}.}
\label{fig:schematic}
\end{figure*}
Optional encryption plates directly after the collimator may be included as an
additional layer of information security. The encryption plates are composed of
warhead materials in amounts unknown to the inspector, so that any inference
about the warhead composition will in fact be an inference on the warhead plus
encryption plates, thus protecting the warhead information. As with the
encryption foil, the encryption plates must remain constant between the template
and candidate measurements. In these experiments, no such encryption plates were
included in order to maximize the available flux and thus the statistical
precision and sensitivity of the measurements.
After passing through the proxy warhead or hoax, the transmitted flux then
impinged on the encryption foil, which was constructed from $3.18$~mm of DU
plates followed by $63.5$~mm of aluminum plates. The uranium and aluminum
components demonstrate the verification measurement for high- and low-$Z$
materials, respectively. Specifically, the measurements in this work are
designed to show the
detection of high-$Z$ material diversions and the verification of low-$Z$
material consistency.
The combined NRF signature of the measurement target plus encryption foil---at
this point physically encrypted---was measured using three mechanically cooled
Ortec $100\%$ relative efficiency GEM P-type coaxial HPGe photon
detectors. The detectors were placed ${\sim}45$~cm from the
foil at an angle of 55$^\circ$ to the beam axis, and surrounded by lead to
shield against NRF photons directly from the warhead, as well as active
backgrounds from the experimental setting which would otherwise limit the
performance of the detectors. The shielding moreover prevents the detectors
from observing any passive photon spectra generated by radioactive material in
the test objects. The lead shielding thickness ranged from $51$~mm below the
detectors to $254$--$305$~mm along the line of sight from the collimator and
warhead to the detectors. Only a $25.4$~mm lead filter was placed between the
detectors and encryption foil. This reduced by multiple orders of magnitude the
low energy photon flux, which can cause pileup and dead time in the detectors,
with only a moderate reduction in the NRF signal. Finally, Canberra Lynx Digital
Signal
Analyzers were used to record the photon spectra in acquisition periods
of five minutes (real time) in order to save the spectra for offline analysis
and to estimate the detector dead time.
A $38.1$~mm right square cylinder lanthanum bromide (LaBr$_3$)
crystal was placed downstream from the foil as an
independent diagnostic for the bremsstrahlung beam flux. It should be
emphasized
that such additional measurements are not part of the verification protocol.
They are, however, useful in an experimental setting for determining the
bremsstrahlung endpoint energy of $2.52$~MeV (despite the $2.6$~MV reading of
the accelerator terminal voltage---see SI Appendix \S \ref{sec:si_beam_char}) as small
shifts in electron energy can have a large effect on absolute photon flux (and
thus measurement time) near the
endpoint. The LaBr$_3$ scintillator was chosen for its extremely fast decay time
($16$~ns) and encased in a lead hut in order to avoid high pileup rates that
could complicate the endpoint measurement---the detector was directly downbeam
from the radiator, otherwise shielded only by the warhead and encryption foil.
The detector was controlled using the ROOT-based~\cite{Brun1997} ADAQAcquisition
software~\cite{hartwig2016adaq} and a CAEN DT-5790M digitizer.
\section*{Results and analysis}
For each measured object, photon spectra\footnote{Data and analysis code are available at https://github.com/jvavrek/PNAS2018} from multiple acquisition periods and
three separate detectors are combined into a single
live-charge-normalized\footnote{The term `live' is used to denote measurement
times calculated using live time, i.e.,~the real time minus the detector's dead
time. `Live charge' therefore corresponds to the product of beam current with
live time.} spectrum in order to improve the signal-to-noise ratio (see SI Appendix \S
\ref{sec:si_data}). Each spectrum is then fit with a series of Gaussian
functions for the eight observed NRF peaks in the signal region near
$2.1$--$2.3$~MeV, on top of an exponentially decaying continuum background.
U-238 contributes the 2.176, 2.209, and 2.245~MeV peaks, the branched decays
$45$~keV below each of these three, and a small peak with no branch at
2.146~MeV. Al-27 contributes the intense 2.212~MeV peak. The Pb isotopes have no
NRF lines below 2.3~MeV. Altogether, the spectral fitting function is written as
\begin{align}\label{eq:spectral_fit}
\hspace*{-3pt}
f(E) = \exp\left( c_1 + c_2 E \right) + \sum_{k=1}^8
\frac{a_k}{\sqrt{2\pi}\sigma_k} \exp\left[ -\frac{(E-E_k)^2}{2\sigma_k^2}
\right]
\end{align}
where $c_1$ and $c_2$ describe the shape of the continuum, and $a_k$, $E_k$, and
$\sigma_k$ are the area, mean, and standard deviation fit parameters of the
$k^\text{th}$ peak. With eight sets of three peak parameters and two parameters
for the continuum, this results in a total of 26 parameters per spectrum.
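A minimal sketch of this 26-parameter fit in Python is given below for concreteness (illustrative only; the actual analysis used ROOT's TH1::Fit() subroutines, and the seed values are our assumptions):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

PEAKS = [2.131, 2.146, 2.164, 2.176, 2.200, 2.209, 2.212, 2.245]  # MeV

def spectral_model(E, c1, c2, *p):
    """Exponential continuum plus 8 Gaussians;
    p packs (a_k, E_k, sigma_k) for each peak."""
    y = np.exp(c1 + c2*E)
    for k in range(8):
        a, mu, sig = p[3*k:3*k+3]
        y += a/(np.sqrt(2*np.pi)*sig) * np.exp(-(E - mu)**2/(2*sig**2))
    return y

def fit_spectrum(E, counts):
    p0 = [5.0, -2.0]                 # continuum seeds (assumed)
    for mu in PEAKS:
        p0 += [10.0, mu, 0.001]      # area, mean, width seeds (assumed)
    popt, pcov = curve_fit(spectral_model, E, counts, p0=p0,
                           sigma=np.sqrt(np.clip(counts, 1, None)))
    return popt, np.sqrt(np.diag(pcov))  # parameters and 1 SD errors
\end{verbatim}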
Once the 26-parameter fit (and set of associated fit parameter uncertainties)
for each spectrum is computed using Eq.~\ref{eq:spectral_fit}, the detected NRF
rate in each peak in counts per live {\textmu}A$\cdot$s, as predicted by
integration of SI Appendix Eq.~\ref{eq:d2ndEdOmega}, can be extracted as simply $A_k = a_k /
\Delta E$,
where $a_k$ is the value of the area fit parameter for the $k^\text{th}$ peak,
and the division by the spectrum bin width $\Delta E$ enforces proper dimensions
and normalization~\cite[p.~171]{bevington2003error}. Similarly, the uncertainty
in the NRF rate is $\delta A_k = \delta a_k / \Delta E$ (where $\delta x$ is
used to express the $1$~standard deviation
uncertainty in a value $x$ so as to distinguish it from other uses of the
symbol
$\sigma$) where $\delta a_k$ is the uncertainty in the $a_k$ fit parameter as
reported by ROOT's TH1::Fit() subroutines \cite{Brun1997}.
One possible test statistic $T$ for comparing the NRF
peaks of a single isotope is the sum of net rates $A_k$ (above the fit
background) of the six
U-238 peaks well-separated from the doublet: $2.131$, $2.146$, $2.164$,
$2.176$,
$2.200$, and $2.245$~MeV.
The 2.209~MeV component of the 2.209 and 2.212 MeV doublet tends to have a
larger uncertainty such that it does not contribute reliably to $T$, and thus
is
excluded. Moreover, since the amount of Al-27 (and the total high-$Z$ areal
density) does not change between the warhead and hoax objects, the $2.212$~MeV
peak rate is consistent throughout the measurements (up to day-to-day beam
variations---see SI Appendix~\S\ref{sec:si_beam_char}). To compare the NRF spectrum of a
candidate object to that of the genuine template, the discrepancy $\nu$ is
defined as the difference in $T$ divided by the uncertainty in the
difference:
\begin{align}\label{eq:discrep}
\nu \equiv \frac{T_\text{cand}-T_\text{temp}}{\sqrt{(\delta
T_\text{cand})^2
+ (\delta T_\text{temp})^2}}.
\end{align}
As the presence of an NRF isotope in the object reduces the corresponding
observed NRF rate (and thus $T$), $\nu > 0$ indicates a possible diversion of
the isotope in the candidate compared to the template, while $\nu < 0$ indicates
a possible addition. Under the null hypothesis that the
candidate object is a real nuclear warhead, $T_\text{cand} = T_\text{temp}$, so
that (due to statistics alone) $\nu$ is
normally distributed with mean 0 and standard deviation 1: $\nu \sim
\mathcal{N}(0,1)$. As such, $\nu$ measures
the discrepancy from the null hypothesis in number of
standard deviations (``sigmas'') where, e.g.,~the probability of observing a
$5\,\sigma$ discrepancy
(regardless of sign) by chance alone, i.e.,~$|\nu| > 5$, is $6\times 10^{-7}$.
Setting an alarm threshold $|\nu| > \nu^*$ by necessity trades-off
the probability that the measurement declares a genuine warhead to be a
hoax (type~I error) and the probability that it declares a hoax warhead to be
genuine (type~II error). If low type~I error is prioritized, a suitable alarm
threshold may be $\nu^* = 5$, while $\nu^* = 3$ may be more suitable if low
type~II error is desired.
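Continuing the sketch above, $T$, $\nu$, and the alarm decision follow directly from the fitted rates (the index set assumes the \texttt{PEAK\_MEANS} ordering of the previous listing):
\begin{verbatim}
import numpy as np

# Indices of the six well-separated U-238 peaks in the 8-peak fit,
# following PEAK_MEANS above (the 2.209/2.212 MeV doublet is excluded)
U238_IDX = [0, 1, 2, 3, 4, 7]

def test_statistic(A, dA):
    """Sum of the six U-238 net peak rates and its propagated 1 SD error."""
    T = sum(A[i] for i in U238_IDX)
    dT = np.sqrt(sum(dA[i]**2 for i in U238_IDX))
    return T, dT

def discrepancy(T_cand, dT_cand, T_temp, dT_temp):
    """Eq. discrep: discrepancy nu in standard deviations."""
    return (T_cand - T_temp) / np.hypot(dT_cand, dT_temp)

def alarm(nu, nu_star=3.0):
    """Trigger an alarm when |nu| exceeds the chosen threshold nu*."""
    return abs(nu) > nu_star
\end{verbatim}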
Figs.~\ref{fig:c3_unzoom}, \ref{fig:c3_arrows}, and \ref{fig:c3_fit} show the
culmination of the above analysis procedure for the fourth verification scenario
listed in Table~\ref{tab:configs_small} (template II vs hoax IIc).
Fig.~\ref{fig:c3_unzoom} contains the two combined spectra measured for the
template II (DU) and hoax IIc (Pb) proxy warheads; in this unzoomed energy
range, the genuine and hoax spectra at first appear to match quite closely,
with
no obvious distinguishing features. Focusing on the NRF signal region in
Fig.~\ref{fig:c3_arrows} (where only the template II spectrum is shown for
clarity), the NRF peaks from U-238 and Al-27 become visible;
Fig.~\ref{fig:c3_fit} subsequently shows the 26-parameter fits to the two
spectra and the computed discrepancy of $\nu = 10.7$. The discrepancies for all
verification scenarios are shown in Table~\ref{tab:configs_small} (see also
SI Appendix Table~\ref{tab:configs_full}). In all four hoax scenarios, a discrepancy in $T$
greater than an alarm threshold of $\nu^* = 3$ was attained in
${\sim}20$~{\textmu}A$\cdot$h (live, on three detectors) per
measured object, indicating diversions in the uranium component. In the genuine
candidate scenario, the $1.7\, \sigma$ discrepancy in uranium (primarily a
result of day-to-day beam variations) does not trigger the alarm at $\nu^* =
3$,
and is clearly delineated from the much larger observed discrepancies in the
hoax cases. Similarly, the Al-27 comparisons all exhibit $|\nu| < 2$,
indicating
consistency in the aluminum component across all measurement scenarios.
The continua underlying the peaks---generated from both pileup and secondary
electron bremsstrahlung in the foil---also provide some insight. For the
spectra
in Fig.~\ref{fig:c3_unzoom}, the integrals from $1$--$2$~MeV differ by
$5\%$. The differences rise to $6$--$10\%$ when comparing measurements
performed
on different days due to beam variations, but are only $2\%$ in the other
same-day measurements. Though these small differences are significant given the
high statistics at low energies, the close matching of continua between the
template and hoax scenarios suggests that the continuum background may not
encode any appreciable information about the isotopic content of
the weapon. This lack of distinguishing information in the
majority of the spectrum also may indicate that non-resonant photon transmission
measurements such as radiographs would likely fail to detect hoaxes of the same areal density.
\begin{figure}[t]
\centerline{\includegraphics[width=1.0\columnwidth]{c3_unzoom-eps-converted-to.pdf}}
\caption{Measured spectra for DU template II (black points) and Pb hoax IIc
(red
points). In this and subsequent Figures, error bars are $\pm$1 SD.}
\label{fig:c3_unzoom}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=1.0\columnwidth]{c3_arrows-eps-converted-to.pdf}}
\caption{Measured spectra for DU template II, zoomed to show the NRF signal
region. For clarity, the spectrum of hoax IIc is not shown. Arrows indicate the
branching relationships from the three main U-238 lines to the peaks 45~keV
lower, as well as the non-branching 2.146~MeV U-238 and 2.212~MeV Al-27 peaks.}
\label{fig:c3_arrows}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=1.0\columnwidth]{c3-eps-converted-to.pdf}}
\caption{26-parameter Gaussian peak plus exponential background fits to the
spectra of template II (black points and curve) and hoax IIc (red points and
curve). A comparison of spectra for all verification measurements is shown in
Table~\ref{tab:configs_small}.} \label{fig:c3_fit}
\end{figure}
\begin{table}
\centering
\caption{Proxy warhead verification measurements}\label{tab:configs_small}
\begin{tabular}{cccc}
\hline\vspace{-2pt}
\# & scenario & Al-27 discrep. & U-238 discrep.\\
- & - & $\nu$ (vs template) & $\nu$ (vs template)\\
\hline
0 & template I & - & -\\
1 & hoax Ia (100\% Pb) & $-$0.051~$\sigma$ & 7.9~$\sigma$\\
2 & genuine candidate Ig & 0.76~$\sigma$ & 1.7~$\sigma$\\
3 & hoax Ib (100\% Pb) & 1.7~$\sigma$ & 9.8~$\sigma$\\
4 & template II & - & -\\
5 & hoax IIc (100\% Pb) & 0.25~$\sigma$ & 10.7~$\sigma$\\
6 & hoax IId (50\% Pb) & 1.9~$\sigma$ & 4.6~$\sigma$\\
\hline
\end{tabular}
\end{table}
\section*{Discussion}
\subsection*{Extrapolation to future systems}
The proxy warheads used in this article are relatively thin---templates I and
II
have total on-axis areal densities of ${\sim}11$ and $17$~g/cm$^2$,
respectively---and do not accurately represent typical areal densities of real
warheads. More realistic warhead models in the open literature range from the
compact
(${\sim}50$~g/cm$^2$) Black Sea-type warhead used in~\cite{kemp2016physical} to
the thicker (${\sim}200$~g/cm$^2$) models of
Fetter \textit{et al.}~\cite{fetter1990detecting}. Moreover, verification
measurements conducted on real warheads will use the NRF lines associated with
the fuel isotopes U-235 or Pu-239, whose strongest lines are
2--5$\times$ less intense than the U-238 lines
considered in this work~\cite{ref:bertozzi_full}. In the present experimental
design,
verification of realistic weapon designs would therefore require several
orders of magnitude longer measurement times than the ${\sim}60$ detector live {\textmu}A$\cdot$h used here (see Table~\ref{tab:extrap_params}). In a dedicated warhead verification
facility, however, these unrealistically long measurement times could be ameliorated by
increasing the electron beam current and the number of detectors (see SI Appendix~\S\ref{sec:si_extrap}). A modern
commercially available electron accelerator may have a continuous wave beam
current of at least 5~mA at ${\sim}$3~MeV~\cite{dynamitron}, a factor of
$200\times$ improvement over the 25~{\textmu}A used in this experiment. While
this increase in beam current would also increase the event rate in the detectors (thus
reducing the effective live time), the event rate rises by only ${\sim}30\times$ rather than
$200\times$, owing to the increased attenuation of realistic warheads (see SI Appendix~\S\ref{sec:si_extrap}). This increase may be mitigated
by reducing the detector sizes and operating with more detectors, by optimizing the balance of the
detector event rate and the available measurement time, or by taking advantage
of future developments of high-rate HPGe detectors capable of operating at MHz rates \cite{ref:pnnlhpge}.
Additionally, the shielding and detector filters used in this experiment could be significantly
optimized to reduce the low-energy photon rate in the HPGe detectors, further alleviating
this effect.
Extending the array of detectors from three to 30 would provide another factor
of $10\times$ reduction in
measurement time, and would provide the additional benefit of
reducing the dose to the warhead---here estimated at ${\sim}30$~Gy per 1~hour
measurement at 25~{\textmu}A for template~I---required to achieve the same
confidence. Doses for other warhead configurations are presented in Ref.~\cite{kemp2016physical}. Such a dedicated
verification system would be capable of
attaining $5\,\sigma$ confidence in a single NRF line in a Pb
hoax scenario involving the Fetter
\textit{et al.} uranium-uranium model of Table~\ref{tab:extrap_params} in
${\sim}15$--$20$~minutes per projection per object, for a capital cost on the order of USD~5M.
This required runtime increases to ${\sim} 5$~hours for the worst-case
plutonium-uranium model in Table~\ref{tab:extrap_params}. For thinner warheads,
or for
warheads that have been partially disassembled, even lower measurement times
would be required, creating opportunities for measurements at multiple warhead
orientations, for measurements of isotopes with weaker NRF lines, or for ruling
out less discernible hoaxes. More information on the calculation of the required
runtimes for realistic warhead configurations may be found in SI Appendix Section~\ref{sec:si_extrap}.
\begin{table}[hbt]
\centering
\caption{Warhead geometries and approximate detector live charges required
for Pb replacement hoax detection at $5\,\sigma$.}\label{tab:extrap_params}
\hspace*{-0.4cm}
\scalebox{0.95}{
\begin{tabular}{l|l|l|c}
comparison (model ref.) & NRF line & foil & det.~live {\textmu}A$\cdot$h \\
\hline
WGU+W vs Pb+W~\cite{fetter1990detecting} & U-235 1.733 MeV & WGU &
$25\times 10^3$\\
WGU+DU vs Pb+DU~\cite{fetter1990detecting} & U-235 1.733 MeV & WGU & $40
\times 10^3$\\
WGPu+W vs Pb+W~\cite{fetter1990detecting} & Pu-239 2.431 MeV & WGPu &
$600 \times 10^3$\\
WGPu+DU vs Pb+DU~\cite{fetter1990detecting} & Pu-239 2.431 MeV & WGPu &
$800 \times 10^3$\\
WGU vs Pb~\cite{kemp2016physical} & U-235 1.733 MeV & WGU & $0.15
\times 10^3$\\
WGPu vs Pb~\cite{kemp2016physical} & Pu-239 2.431 MeV & WGPu & $3.5
\times 10^3$
\end{tabular}
}
\end{table}
\subsection*{Information security}
The equation for the predicted NRF rates (SI Appendix Eq.~\ref{eq:d2ndEdOmega} or its
integrated form), contains multiple quantities that are kept secret from the
inspector, and thus cannot be used alone to infer the warhead thickness~$D$
from
a physically encrypted spectrum. However, it may be possible to construct a
system of equations from SI Appendix Eq.~\ref{eq:d2ndEdOmega}---one equation per NRF
peak---and make a series of simplifying approximations, in which case there may
be at least as many equations as unknowns and inference may be possible. As
previously shown in Fig.~\ref{fig:schematic} and described in~\cite[SI~\S
7.1]{kemp2016physical}, a solution to this \textit{multi-line inference} problem
is to add optional encryption plates of relevant materials of unknown
thickness $\Delta D$ at the collimator output. As such, any inference on the
isotope of interest will estimate only an upper bound $D + \Delta D$. In fact,
if such encryption plates are used, the foil parameter $X$ no longer needs to be
kept secret from the inspector, eliminating the information security
complexities of ensuring that the foil has not been nefariously designed.
Lastly, the continuum background may contain
sensitive information, especially given the large number of photons it
comprises
over the entire range of the spectrum (see Fig.~\ref{fig:c3_unzoom}). The
`logarithmic slope' parameter $c_2$ in Eq.~\ref{eq:spectral_fit}, for instance,
depends moderately on the atomic number $Z$ of the foil
materials~\cite{bertozzi2007ez3d}. As discussed above, however, the continuum
appears to encode very little information about the $Z$ of the warhead materials
for a fixed areal density. A thorough analysis of the continuum information
content is therefore a vital next step in the analysis of the physical
cryptographic NRF protocol. For a more
complete discussion of information security issues and possible solutions, the
reader is referred to~\cite{kemp2016physical}.
\section*{Conclusions and future work}
We have reported on the successful demonstration of the MIT tNRF physical
cryptographic warhead verification protocol. The isotope-sensitive tNRF
measurement is capable of distinguishing proxy nuclear warheads from hoax
objects with high confidence in total measurement times of around one hour per
object. Extrapolations to more realistic warhead designs indicate that a
dedicated warhead verification facility could conduct $5\,\sigma$ verification
measurements in less than an hour while protecting sensitive warhead design
information.
The NRF verification technique may be expanded to other isotopes
that may be found in nuclear weapons (beyond U-238 and Al-27) such as U-235 or
Pu-239 in the fissile fuel and nitrogen and carbon isotopes in the high
explosives~\cite{caggiano2007nuclear}. Similarly, testing
the measurement's sensitivity to geometric hoaxes would be a useful development.
Finally, an additional layer of information security may be added through
analog-to-digital converters (ADCs) with non-uniform binning, which are
currently being developed. Such ADCs would act as very low-level, more
easily-verifiable information barriers. If installed in the acquisition systems of the HPGe detectors, such ADCs could be used to remove all spectral features
except one NRF line per isotope from the observed spectrum, thus eliminating
possible information security concerns such as the continuum and the multi-line
inference problem.
In a broader context, the implementation of any warhead verification protocol in
a real arms control agreement faces two challenges. First, an assessment of the
protocol's utility and security must be made by nuclear weapons laboratories. To
this end, future work on any warhead verification protocol should involve
collaboration with the US and possibly Russian national laboratories, and
possibly combining multiple proposed verification techniques as part of an
overarching protocol. Such a joint effort will enable research that otherwise
could not be conducted in academic settings, such as the aforementioned
measurements involving weapons isotopes and realistic, classified weapon
geometries. Finally, the implementation of a warhead verification protocol is
predicated on the existence of future arms control frameworks, and thus requires
a commitment to the goal of deep reductions in the world's nuclear arsenals.
\section{Nuclear resonance fluorescence}\label{sec:si_nrf}
Nuclear resonance fluorescence (NRF) describes the X$(\gamma, \gamma')$X
reaction in which a photon $\gamma$ is resonantly absorbed by the nucleus X and
then re-emitted as the excited nucleus subsequently relaxes to its ground
state~\cite{metzger1959resonance,kneissl1996structure}. Due to the
discrete energy level structure of the nucleus, the probability that an incident
photon of energy $E$ undergoes an NRF interaction is only significant if $E$ is
approximately equal to one of the resonance energies $E_r$ of the nucleus, given
by
\begin{align}
E_r = E_\ell + \frac{E^2}{Mc^2}
\end{align}
where $E_\ell$ is the energy of a nuclear level and the latter term corrects for
the recoil energy ($\sim$20~eV for U-238 and $E=2$~MeV) of the nucleus X with
mass $M$. The probability of absorption by state $r$ is then given by the NRF
cross section, which is most accurately described by a Doppler-broadened version of the
Lorentzian profile of Eq.~\ref{eq:sigmaNRFBW}:
\begin{align}\label{eq:sigmaNRF}
\begin{split}
\sigma_{r}^\text{NRF}(E) &= 2\pi^{1/2} g_r \left( \frac{\hbar c}{E_r}
\right)^2 \frac{b_{r,0}}{t^{1/2}} \int_{-\infty}^{+\infty} \exp\left[
-\frac{(x-y)^2}{4t}\right] \frac{dy}{1+y^2},
\end{split}
\end{align}
where
\begin{align}
x \equiv 2(E-E_r)/\Gamma_r\\
t \equiv (\Delta/\Gamma_r)^2,
\end{align}
$\Gamma_r$ is the intrinsic width of the excited state $r$, and
\begin{align}
\Delta = E\sqrt{\frac{2k_B T}{Mc^2}}
\end{align}
is the Doppler-broadened width of the state at temperature $T$. For NRF lines of
high-$Z$ isotopes, $\Gamma_r{\sim}1$--$100$~meV~\cite{nndc2015u238} while
$\Delta{\sim}1$~eV. For greater accuracy, the $\Delta$ and thus $t$ may be evaluated
using the effective temperature $T_\text{eff}$ instead of the physical temperature $T$ of the
target~\cite{metzger1959resonance}. The $g_r$ is a statistical factor that
accounts for the number of available spin states at the ground and resonant
states:
\begin{align}
g_r = \frac{2J_r+1}{2(2J_0+1)}
\end{align}
where $J_i$ for $i=\{0,r\}$ is the spin of the $i^\text{th}$ level. The branching ratio
$b_{r,0}$ from the resonant state $r$ to ground also enters the calculation as
$b_{r,0} \equiv \Gamma_{r,0}/\Gamma_r$, where $\Gamma_{r,0}$ is the partial
width for the decay $r \to 0$ and $\sum_i \Gamma_{r,i} = \Gamma_r$. The
$b_{r,i}$ therefore also give the probabilities of the resonant state $r$
decaying either directly to the ground state, emitting a photon of energy $E'=
E_r$ (neglecting recoil), or through the intermediate state $i$, emitting a
photon of energy $E' = E_r - E_i$.
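For reference, Eq.~\ref{eq:sigmaNRF} can be evaluated by direct numerical quadrature. The Python sketch below does so in barns, assuming the physical temperature is used for $\Delta$; the level parameters in the usage comment are purely illustrative assumptions, and actual widths and branching ratios should be taken from the nuclear data literature~\cite{nndc2015u238}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

HBARC_FM = 197.327   # hbar * c in MeV fm
KB = 8.617e-11       # Boltzmann constant in MeV / K

def sigma_nrf(E, E_r, Gamma_r, b_r0, g_r, M_c2, T=300.0):
    """Doppler-broadened NRF cross section of Eq. sigmaNRF, in barns.
    Energies and widths in MeV; M_c2 is the nuclear rest energy in MeV."""
    Delta = E * np.sqrt(2.0 * KB * T / M_c2)     # Doppler width (~1 eV)
    t = (Delta / Gamma_r)**2
    x = 2.0 * (E - E_r) / Gamma_r
    integral, _ = quad(
        lambda y: np.exp(-(x - y)**2 / (4.0 * t)) / (1.0 + y**2),
        -np.inf, np.inf)
    prefactor = (2.0 * np.sqrt(np.pi) * g_r * (HBARC_FM / E_r)**2
                 * b_r0 / np.sqrt(t))
    return prefactor * integral / 100.0          # 100 fm^2 = 1 barn

# Illustrative call for a U-238 line at 2.176 MeV (Gamma_r, b_r0 assumed):
# sigma_nrf(2.176, 2.176, 55e-9, 0.8, 1.5, 238 * 931.494)
\end{verbatim}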
Given Eq.~\ref{eq:sigmaNRF}, the NRF measurement described in the main paper can
be described by a slightly simplified 1D model (see
e.g.~Fig.~\ref{fig:schematic}) in which a parallel incident bremsstrahlung beam
$\phi_0(E)$ is incident on a single-isotope rectangular slab warhead of
thickness $D$. The transmitted flux $\phi_t(E)$ then interacts with a
rectangular slab foil of thickness $X$ composed of the warhead isotope. In this
case, the transmitted flux $\phi_t(E)$ through the warhead can be written as
\begin{align}\label{eq:phi_t}
\phi_t(E) = \phi_0(E) \exp\left[ -D\left( \mu_\text{nr}(E) +
\mu_\text{NRF}(E) \right) \right],
\end{align}
where the $\mu \equiv N \sigma$ terms denote linear attenuation coefficients if
$D$ is expressed as a length, or mass attenuation coefficients if it is
expressed as an areal density. This equation assumes that every NRF or
non-resonant (`nr') interaction (e.g.~Compton scattering, pair production, etc.)
results in the loss of forward-going flux at energy $E$. Because of the sharp
$E$-dependence of $\mu_\text{NRF}(E)$, the forward-going flux is preferentially
attenuated or `notched' at the resonance energies $E_r$ of the isotopes present
in the warhead. The above assumption regarding photon losses can break down via
a process known as `notch refill,' by which photons undergo small-angle Compton
scattering to the resonance energies, thus replenishing the available flux in
the notches and reducing the sensitivity of the measurement to the
warhead~\cite{pruet2006detecting}. Since the notches are narrow, notch refill is
only significant for relatively thick measurement targets (e.g.~a correction
factor of $5\%$ for areal densities ${\sim}90$~g/cm$^2$~\cite{ref:quiter}) with
many opportunities for downscatter.
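In code, the notched transmission of Eq.~\ref{eq:phi_t} is a one-line operation on a common energy grid (a sketch; $D$ must be expressed in units consistent with the attenuation coefficients):
\begin{verbatim}
import numpy as np

def transmitted_flux(phi_0, D, mu_nr, mu_nrf):
    """Eq. phi_t: flux transmitted through the warhead, preferentially
    'notched' at resonance energies where mu_nrf is large."""
    return phi_0 * np.exp(-D * (mu_nr + mu_nrf))
\end{verbatim}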
For a single transition from $r\to 0$ (dropping subscripts for brevity), the
double-differential NRF count rate induced by the transmitted flux $\phi_t(E)$
as observed by a single HPGe detector is
\begin{align}\label{eq:d2ndEdOmega}
\begin{split}
\frac{d^2n}{dE d\Omega} &= \phi_t(E)\, b\, \mu_\text{NRF}(E)
\frac{W(\theta)}{4\pi} \frac{1-\exp\left[ -X \mu_\text{eff}(E,E',\theta)
\right]}{\mu_\text{eff}(E,E',\theta)} \epsilon_\text{int}(E') P_f(E')
\end{split}
\end{align}
where $W(\theta)$ is the angular correlation function for successive gamma
rays~\cite{hamilton1940directional} and $\epsilon_\text{int}(E')$ is the
intrinsic peak efficiency of the HPGe detector. A high-$Z$ (typically Pb) filter
may be placed between the foil and detector in order to preferentially attenuate
low-energy photons and reduce detector dead time, in which case $P_f(E') < 1$ is
the probability that an NRF photon of energy $E'$ will be transmitted through
the filter. The effective attenuation coefficient
\begin{align}
\mu_\text{eff}(E,E',\theta) \equiv \mu_\text{NRF}(E) + \mu_\text{nr}(E) +
\frac{\mu_\text{nr}(E')}{\cos\theta}
\end{align}
accounts for attenuation in the foil of incoming photons (of energy $E$) via NRF
and non-resonant processes as well as the attenuation of outgoing NRF photons
(of energy $E'$) through the path at angle $\theta$ pointing to the detector.
Integration of Eq.~\ref{eq:d2ndEdOmega} over all energies $E$ and the solid
angle of the detector $\Omega$ then gives the predicted count rate for a single
NRF peak as observed by the detector. The peak will appear not as a perfectly
sharp emission line at $E'$, but as a Gaussian centered at $E'$ due to the
imperfect resolution of the detector.
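A sketch of this integration, under the simplifying assumption that the detector subtends a small solid angle $\Omega_d$ over which $\theta$ (and hence $W(\theta)$ and the outgoing attenuation path) is effectively constant:
\begin{verbatim}
import numpy as np

def predicted_peak_rate(E, phi_t, mu_nrf, mu_nr_E, mu_nr_Ep, b, W_theta,
                        X, theta, eps_int, P_f, Omega_d):
    """Integrate Eq. d2ndEdOmega over energy (arrays on the grid E) and
    over a small detector solid angle Omega_d to predict one peak rate."""
    mu_eff = mu_nrf + mu_nr_E + mu_nr_Ep / np.cos(theta)
    d2n = (phi_t * b * mu_nrf * (W_theta / (4.0 * np.pi))
           * (1.0 - np.exp(-X * mu_eff)) / mu_eff * eps_int * P_f)
    return np.trapz(d2n, E) * Omega_d
\end{verbatim}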
We note that the sharp $E$-dependence of $\mu_\text{NRF}(E)$ in the exponential
terms of Eqs.~\ref{eq:phi_t} and \ref{eq:d2ndEdOmega} can substantially affect
the predicted detected count rate~\cite[Fig.~3.25]{quiter2010thesis}: while the
Doppler-broadened Lorentzian profile of Eq.~\ref{eq:sigmaNRF} is the most
accurate, a Gaussian approximation~\cite{metzger1959resonance} to the cross
section is often sufficient. The rectangular cross section approximation---a
constant value of cross section over an energy range on the order of $1$~eV such
that the integral of Eq.~\ref{eq:sigmaNRF} over $E$ (the `integrated cross
section') is preserved---is only accurate to about $20\%$ and should be avoided
unless computational efficiency is required at the expense of accuracy.
\section{Experimental Methods}\label{sec:si_exp}
\subsection{Data acquisition}
Data acquisition (DAQ) was accomplished using a Canberra Lynx Digital Signal
Analyzer (DSA) connected to each HPGe detector~\cite{ortec_gem}. Instead of using the standard
Genie2K acquisition software, the three detectors and DAQs were controlled
simultaneously using the custom-written Python Readout with Lynx for Physical
Cryptography (PROLyPhyC) wrapper classes sitting atop the Lynx Software
Development Kit (SDK). Events were recorded in pulse-height analysis (PHA) mode,
resulting in a 32768-channel pulse height spectrum produced for each detector at
the end of each acquisition time. To guard against data corruption due to beam
instability, acquisition times were set to five minutes (real time); each
measurement therefore consisted of $\sim$10 such acquisition periods summed
together using the procedure described in Section \ref{sec:si_data}. The raw
pulse height spectra were converted to energy (deposition) spectra using linear
calibrations of channel number vs energy using Cs-137 ($0.662$~MeV) and Co-60
($1.172$, $1.333$~MeV) check sources taken before and throughout the week of
experiments.
The integrated beam charge over the course of an acquisition period was
determined by using a Keithley Model 614 Electrometer to measure the
beam-induced current from the radiator to ground. The analog output of the
electrometer was digitized by a Measurement Computing Model USB-201
analogue-to-digital converter at a sample rate of 1~kHz, and read to a plain
text file on a laptop for persistent storage. The average current over the
acquisition time was then computed for use in Eq.~\ref{eq:live_charge_norm} and
compared against the display of the electrometer throughout the run for
consistency.
\begin{figure}[thb]
\centering
\includegraphics[width=0.6\columnwidth]{figures/tNRF_HVRL.png}
\caption{Annotated photograph of the target geometry.}
\label{fig:setup_photo}
\end{figure}
\begin{figure}[thb]
\centering
\includegraphics[width=0.6\columnwidth,angle=270]{figures/genuine_closeup.jpg}
\caption{Close-up of template I near the collimator exit. The cylinder
affixed to the end of the collimator is a gas ionization chamber used for beam
tuning and monitoring.}
\label{fig:genuine_photo}
\end{figure}
\subsection{Beam characterization and stability}\label{sec:si_beam_char}
The stability and reproducibility of the electron beam settings (most notably,
the beam energy, current, and position relative to the gold radiator foil)
directly affects the validity of comparisons between template/candidate
scenarios, especially when the integrated beam charge is used to normalize
measurements. In particular, preliminary experiments that tested elements of
the physical cryptographic protocol prior to the work reported here indicated
that the absolute rate of NRF photon detection was lower than expected from
simulation and analytic calculations by a factor of $1.5$--$2$
\cite{vavrek2017progress}. A number of possible explanations for this
observation were explored, and uncertainties in the electron beam position,
emittance, and energy were identified as the most likely causes of the
discrepancy. While knowledge of the \textit{absolute} bremsstrahlung flux is
not required to perform the relative spectral comparisons between template and
candidate warheads, any temporal variance in the flux could make such
comparisons invalid. Due to this, several operational procedures and
diagnostics were implemented to complement and enhance the existing HVRL beam
diagnostics. The results of these diagnostics are presented in this section,
demonstrating that while variations in the beam may have affected previous
experiments, the beam conditions were well-understood and constrained for
the data presented in this work due to the improvements.
\subsubsection{Electron beam energy}
The electron beam kinetic energy was chosen as 2.6 MeV as a compromise among
several competing factors. An ideal beam for this application maximizes the
number of photons at the specific energies of the NRF lines of interest while
minimizing photons at other energies. Photons above the NRF energies may
undergo various physical processes that may cause them to scatter into the
detectors resulting in background counts in the region of the spectrum near the
NRF energies and additionally contribute to the notch refill effect discussed in
Section \ref{sec:si_nrf}. Below the NRF energies, photons contribute to pile-up
effects in detectors and add to the radiation dose to which inspected objects
are exposed. To balance these effects when studying NRF lines, it is most
effective to choose an endpoint energy a few hundred keV above the NRF energies.
For a photon source produced by the bremsstrahlung of electrons, the number of
photons rapidly decreases as a function of energy with no photons produced above
the energy of the incident electrons, as visible in the spectra shown in Fig.
\ref{fig:typspect}. Due to this sharp drop-off in the spectrum, however, the
total flux of photons at the NRF energies depends strongly on the precise
location of the endpoint. This is illustrated in Fig.~\ref{fig:bremrat}, which
shows the ratio of the forward bremsstrahlung fluxes of electron beams of
nominal (2.6 MeV) energy and of energy below nominal (2.521 MeV). This
$\sim$3\% change in the beam energy results in a $\gtrsim$10\% change at the NRF
line energies, which is further magnified by the even greater reduction at
higher energies (since these photons can downscatter within the mock warhead
and/or foil to add to the flux). While the absolute flux of the bremsstrahlung
beam is not required for the comparative measurements presented here, this
effect necessitates establishing that the beam energy was consistent between
measurements. The HVRL electron beam energy was set using a generating
voltmeter (GVM), which measured the potential across the accelerator terminals
\cite{gvm}. When used for this purpose, however, GVMs require regular,
independent calibration to the actual electron beam energy, a process that had
not been conducted for the HVRL beam for some time prior to the experiments
described in this work. Additionally, since the GVM reading was not recorded
throughout the run (so as to monitor its fluctuations), it is critical to
establish the stability of the beam energy between the different measurements.
\begin{figure}[thb]
\centering
\includegraphics[width=0.8\columnwidth]{figures/typspectfoilonly.pdf}
\caption{Calibrated spectrum of the bremsstrahlung beam recorded by the
LaBr$_3$ detector after transmission through the encryption foil and the 6-inch
Pb filter shielding the scintillator at a nominal beam energy of 2.6 MeV. In this and all subsequent Figures, error bars are $\pm$1 SD.}
\label{fig:typspect}
\end{figure}
\begin{figure}[thb]
\centering
\includegraphics[width=0.8\columnwidth]{figures/bremrat.pdf}
\caption{Ratio of the simulated 2.521 MeV endpoint bremsstrahlung spectra to the simulated 2.6 MeV endpoint spectra, showing the order
10\% deficit of photons in the former case relative to the latter in the region of interest around 2 MeV.}
\label{fig:bremrat}
\end{figure}
To measure the electron beam energy, the LaBr$_3$ scintillator spectra of the
bremsstrahlung photons for each data run were examined. The LaBr$_3$ detector~\cite{canberra2017labr}
is especially well-suited for examining spectral features in the vicinity of
2.0--2.7 MeV due to the presence of intrinsic spectral lines in this region that
are due to alpha decays from the decay chain of Ac-227, which is a contaminant
in LaBr$_3$ due to the chemical similarities of La and Ac \cite{QUARATI2013596}.
The electronic-equivalent energy depositions from several of these decays allow
a precise determination of the ADC--photon-energy calibration of the detector in
this region. Following the experimental run, a long sample of the intrinsic
spectrum of the detector was collected to provide a precise energy calibration.
To account for a possible shift in the gain of the detector over the course of
the experimental run, the ADC channel position corresponding to the 511 keV peak
(produced by the plentiful $e^+/e^-$ pair production interactions of
photons with energy greater than 1022 keV in the bremsstrahlung beam) was
determined for each of the data runs as a measure of the shift in gain and is
shown in Fig.~\ref{fig:labrgain}. The shift in this peak position relative to
the data taken immediately before the intrinsic calibration run was used to
correct each individual spectrum for gain drift before applying the energy calibration.
Additionally, pulse shape discrimination was utilized to exclude pile-up events
(in which two photons contributed to a single count in the spectrum). Since
such events contribute relatively significantly to the high-energy end of the
spectrum, rejecting them is necessary to sharply reconstruct features such as
the spectral endpoint.
\begin{figure}[thb]
\centering
\includegraphics[width=1.0\columnwidth]{figures/511stability.png}
\caption{Position of the 511 keV pair production peak in the LaBr$_3$ over
the course of the entire experimental run for the data presented in this paper.
With the exception of Run \#81, the gain of the LaBr$_3$ detector was stable to
within 1.5\%. Dashed lines indicate gaps between days of operation.}
\label{fig:labrgain}
\end{figure}
For each calibrated spectrum, the point at which the second derivative of the
spectrum was maximal (determined numerically) was found, indicating the position
at which the rapidly decreasing bremsstrahlung spectrum met the relatively
flatter background above the endpoint, thus indicating the energy of the
incident electron beam. Fig. \ref{fig:enddiff} shows the shape of the spectrum
endpoint for each data run, illustrating the fact that the rapid drop-off in the bremsstrahlung
spectrum occurred at a lower energy than the nominal 2.6 MeV endpoint.
Fig.~\ref{fig:endpoint} shows the results of the endpoint determination for each
of the data runs. This procedure contributes a 4 keV systematic uncertainty to
the overall determination of the endpoint, while the gain drift conservatively
contributes another $\sim$0.5\% uncertainty (reduced from the total drift by the
correction described above). Averaging the results of the
individual electron beam energy extractions results in a beam energy
determination of $(2.521\pm0.015)$ MeV, lower than the nominally determined beam
energy. As shown in Fig.~\ref{fig:endpoint}, however, the endpoint energy was
very stable to within the quoted uncertainty over the entire experimental run,
demonstrating that variations in the beam energy did not systematically affect
any comparisons between template and candidate proxy warheads. Understanding
this systematic offset, however, is critical for any future analyses that require
knowledge of the absolute photon flux.
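A minimal sketch of this extraction is given below; the smoothing window and the search interval bracketing the endpoint are illustrative choices rather than the exact values used in the analysis.
\begin{verbatim}
import numpy as np

def extract_endpoint(E, counts, window=11, search=(2.3, 2.7)):
    """Locate the bremsstrahlung endpoint as the energy at which the
    numerical second derivative of the smoothed spectrum is maximal."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(counts, kernel, mode="same")
    d2 = np.gradient(np.gradient(smoothed, E), E)
    lo, hi = np.searchsorted(E, search)   # restrict to the endpoint region
    return E[lo + np.argmax(d2[lo:hi])]
\end{verbatim}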
\begin{figure}[thb]
\centering
\includegraphics[width=1.0\columnwidth]{figures/beamenergy.png}
\caption{Extracted electron beam energy for each of the production data
runs. While below the nominal value of 2.6 MeV, the endpoint was consistent to
within uncertainties for the entire data-taking period.}
\label{fig:endpoint}
\end{figure}
\begin{figure}[thb]
\centering
\includegraphics[width=1.0\columnwidth]{figures/markedendpoint.pdf}
\caption{Overlain calibrated and pile-up rejected spectra from the LaBr$_3$ detector for all data runs, showing the difference between
the extracted endpoint at 2521 keV (black dashed line) and the nominal endpoint at 2600 keV (red dashed line).}
\label{fig:enddiff}
\end{figure}
\subsubsection{Electron beam position}
The spectrum of bremsstrahlung photons emitted from a radiator depends
significantly on the geometry and materials of the radiator, and thus may also
depend on the position at which the electrons are incident on the radiator. In
particular, the number of photons generated near the endpoint energy is
maximized by ensuring that the electrons first strike the highest atomic number
material (in this case the gold of the 126 {\textmu}m foil) prior to losing
energy through interactions with other materials. If the position of the
electron beam deviates from the center of the foil or if the electron beam has a
significant width beyond the 0.5 cm radius of the gold foil, the bremsstrahlung
photon spectrum is altered and any inconsistencies in these parameters over the
course of the experimental run could induce differences between the different
proxy warhead tests.
To study the possible magnitude of this effect, the impact of beam wander on the
bremsstrahlung spectrum was simulated using the Geant4 toolkit
\cite{agostinelli_geant4}. In this simulation, the geometry of the
bremsstrahlung radiator and collimator were modeled in detail based on
experimental survey of the objects, shown in Fig.~\ref{fig:radgeo}, and electron
beams of energy 2.6 MeV were simulated incident upon the radiator at different
positions. The simulated beams were infinitely narrow and incident normal to
the face of the radiator. The incident position of the beam was varied radially
from the center of the gold foil ($r=0$ mm) to beyond the foil radius so that
the electrons were directly incident on the copper frame ($r=14$ mm). For each
beam position, the number of bremsstrahlung photons incident on the mock warhead
target (i.e., beyond the collimator) was counted and compared to the number
generated when the beam was centered at the same electron current. The results
of this study are shown in Fig.~\ref{fig:beam_offset}, plotted as the ratio of
the number of photons generated above 1.9 MeV (i.e., in the NRF region of interest)
for a given beam position to the number generated with the beam on center. The
simulation shows that while the beam remains on the gold foil ($r<5$ mm) the
number of high energy photons remains within a few percent of the ideal value.
For $r>5$ mm, the high energy photon count drops precipitously. Thus, as long
as the beam remains on the foil throughout the experiments, the beam position
uncertainty contributes a systematic uncertainty of at most $\lesssim$1\% to the
comparisons of different mock warheads. Large deviations in beam position,
however, would have a significant effect on the data by greatly reducing the
number of photons available for the NRF interactions.
During the experiments described in this work, there existed no means of
concurrently monitoring the beam position and width. Prior to these experiments,
however, the HVRL electron beam was imaged using a beryllium oxide screen as
part of the experiments conducted by another user of the facility. Using this
imaging system, the electron beam focusing elements were tuned to minimize the
beam diameter and maximize its positional stability for the 2.6 MeV beam energy
setting required for the experiments used in this paper. The BeO screen was
placed 200 cm from the dipole magnet (located at the ``bend'' of the $e^-$ beam
shown in Fig.~\ref{fig:setup_photo}). At this distance, the electron beam diameter
could be held to smaller than the 5 cm$\times$5 cm screen and stable in position
to within a few millimeters (see the video included in the supplemental materials,
courtesy of C.S. Epstein). Given that the radiator foil was
located approximately 20 cm from the dipole bend, and that the beam is maximally
focused when exiting the dipole, it is likely that the beam was well confined to
the gold foil throughout the experimental run and thus the contribution to the
uncertainty on the NRF line measurements from beam wander is limited to
$\sim$1\%. Since this uncertainty is negligible compared to the statistical
uncertainty of the data, it is neglected in the analysis.
\begin{figure}[thb]
\centering
\includegraphics[width=0.6\columnwidth]{figures/newgeo.png}
\caption{Visualization of the solid model of the bremsstrahlung radiator
used for simulated beam studies. The exposed gold foil in the center had a
diameter of 1 cm.}
\label{fig:radgeo}
\end{figure}
\begin{figure}[thb]
\centering
\includegraphics[width=1.0\columnwidth]{figures/beamoffset.pdf}
\caption{Simulated effect of drift in the beam position on the flux of
photons with $E>1.9$ MeV incident on the mock warhead.}
\label{fig:beam_offset}
\end{figure}
\subsubsection{Beam conditioning}
Due to the age and the electrostatic mechanism of the HVRL electron accelerator,
the accelerator had to be regularly `conditioned' by running an incoherent
electron plasma (as opposed to a coherent beam) through the beamline to burn off
contaminants. Failure to regularly condition the beam would lead to deviations
in the beam current and energy, invalidating any data collected until the beam
was reconditioned. Care was taken to regularly condition the beam and monitor
its stability, and any data taken during periods in which a significant beam
parameter deviated from the nominal value was excluded from the main analyses.
Deviations were observed as uncontrolled shifts in the measured electron beam
current, shifts in the measured terminal voltage, or unexplained changes or
time-variance in the spectra from the HPGe and LaBr$_3$ detectors (which were
monitored online). During the experimental run in which the data presented here
was collected, typically $30$ minutes of conditioning were required for every
four hours of run time. This frequency would have been higher were it not for
the beam effectively being conditioned by another group's experiment during the
previous week (during which the beam imaging and tuning described in the
previous section was conducted). It should be noted that a
modern, dedicated accelerator facility for the purpose of an implemented weapons
verification program would not face such limitations.
\subsubsection{Summary of beam effects}
The combination of this flux reduction due to the lower-than-nominal beam energy,
the past lack of constraint on beam position wander, and the less stable beam
conditions in prior experiments is hypothesized to account for the factor of
$1.5$--$2$ ratio of predicted to observed absolute NRF rates in preliminary
experiments~\cite{vavrek2017progress}. The beam diagnostics used for the
experimental campaign presented here indicate that these issues were, in
general, rectified for the data presented. In particular, these analyses
indicate that the beam conditions were very stable. Thus, while there may be
remaining uncertainties on the absolute bremsstrahlung flux, this flux was
consistent and thus relative comparisons between different mock warheads are not
subject to significant uncertainties
from the beam conditions. Such cancellation of consistent, systematic
uncertainties is an inherent advantage of any template verification system.
\section{Analysis of data from multiple detectors}\label{sec:si_data}
\subsection{Data unification}
For each measured object, data from multiple acquisition periods and three
separate detectors is combined into a single spectrum in order to improve the
signal-to-noise ratio. First, the count rate $r^{d}_i$ (counts per live second
per {\textmu}A) in the $i^\text{th}$ bin of detector $d$'s spectrum is the
live-charge-normalized sum of bin contents (i.e.~raw counts) $c^{d}_{ij}$ in
each of the $j$ runs:
\begin{align}\label{eq:live_charge_norm}
r^{d}_{i} = \frac{\sum_j c^{d}_{ij}}{\sum_j t^{d}_{\ell,j} I_{b,j}}
\end{align}
where $I_{b,j}$ is the average beam current recorded in run $j$, and the live
time $t^{d}_{\ell,j}$ is computed from the real time $t^{d}_{r,j} = 300$~s and
the detector dead time fraction $f^{d}_{\text{dt},j}$ (as reported by each
detector's Lynx DSA) as
\begin{align}
t^{d}_{\ell,j} = t^{d}_{r,j} (1-f^{d}_{\text{dt},j}).
\end{align}
To build a meaningful sum across the three detectors $d$, each histogram of
rates $r^d$ must have an equal number of bins and locations of bin centers. This
is difficult to achieve in practice, however, since each detector has a
different calibration (depending on its gain and unique response function) for
converting from a bin number in the range $1$-$32768$ to energy deposition in
MeV. As a result, the calibrated bin widths and bin centers, in general, differ
among the detectors. This is solved in post-processing by a combination of
recalibration and histogram interpolation. As a common starting point, each
histogram $r^d$ is linearly recalibrated using its peaks at $0.511$~MeV (pair
production), $1.001$~MeV (U-238 passive signature), and $2.212$~MeV (Al-27 NRF
emission), all of which are prominent in the beam-on spectrum. A new histogram
$\bar{r}^{d}$ with $3000$ $1$~keV-wide bins between $0$ and $3$~MeV is then
generated by interpolation of $r^d$, and scaled by the ratio of new to old bin
widths in order to keep constant the differential counts per unit energy. The
bin errors $\delta\bar{r}^{d}_i$ are finally recomputed under Poisson statistics
as
\begin{align}
\delta \bar{r}^{d}_i = \sqrt{\frac{\bar{r}^{d}_i}{\sum_j t^{d}_{\ell,j}
I_{b,j}}}
\end{align}
which just amounts to reverting the live-charge-scaled spectra to count spectra,
computing the bin error as the square root of the (interpolated) counts, and
then re-dividing by the live charge. The detector-summed spectrum is then just
the bin-by-bin sum $\bar{r}_i = \sum_d \bar{r}^d_i$. Note: the peak resolutions
of $\sim$0.05\% ($\sigma$) at $\sim$2.2~MeV in the spectrum $\bar{r}$ (after
processing) were verified to be consistent with those in the individual spectra
$r^d$ (before processing).
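The unification chain for a single detector can be sketched as follows (a simplified illustration: the peak-based linear recalibration is omitted, and uniform original bin widths are assumed):
\begin{verbatim}
import numpy as np

def unify_detector(counts, live_times, currents, E_old,
                   E_max=3.0, n_bins=3000):
    """Live-charge-normalize (Eq. live_charge_norm), rebin onto 1 keV bins
    by interpolation, and recompute Poisson bin errors for one detector.
    counts: (n_runs, n_old_bins) raw counts; E_old: old bin centers (MeV)."""
    Q_live = np.sum(live_times * currents)            # summed live charge
    r_old = counts.sum(axis=0) / Q_live               # counts per live uA s
    E_new = (np.arange(n_bins) + 0.5) * E_max / n_bins
    w_ratio = (E_max / n_bins) / np.mean(np.diff(E_old))
    r_new = np.interp(E_new, E_old, r_old) * w_ratio  # conserve dN/dE
    dr_new = np.sqrt(r_new / Q_live)                  # Poisson bin errors
    return E_new, r_new, dr_new

# The detector-summed spectrum is the bin-by-bin sum of the r_new arrays.
\end{verbatim}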
\subsection{Fitting procedure}
The interpolated histograms $\bar{r}$ (see Figs.~\ref{fig:I_vs_Ia}--\ref{fig:II_vs_IId}) are fit with an exponential background plus a series of Gaussian peaks as given in Eq.~\ref{eq:spectral_fit} of the main article. Such high-dimensional fits are achieved reliably by an iterative process: each peak is
first fit individually with a five-parameter Gaussian plus linear background
curve, where the initial parameter estimates and bounds for ROOT's $\chi^2$
minimization are computed using rough linear approximations. In the case of the
closely-spaced $2.209$~MeV and $2.212$~MeV lines from U-238 and Al-27,
respectively, an eight-parameter doublet fit is used instead. A first-order
estimate of the continuum is then made by fitting the entire spectrum (including
the peaks) with a single exponential curve. The two parameters of this
exponential, along with the area, mean, and standard deviation estimates of each
of the eight peaks (six from the individual fits, two from the doublet fit), are
then input directly as starting estimates for the full
26-parameter fit. Parameter bounds are established in a similar fashion by
allowing some tolerance around the starting estimates.
\begin{figure}[thb]
\centering
\includegraphics[width=0.9\columnwidth]{figures/c0-eps-converted-to.pdf}
\caption{Summed spectra (points) and fits (curves) in the template I (black)
vs hoax Ia (red) verification measurement.}
\label{fig:I_vs_Ia}
\end{figure}
\begin{figure}[thb]
\centering
\includegraphics[width=0.9\columnwidth]{figures/c1-eps-converted-to.pdf}
\caption{Summed spectra (points) and fits (curves) in the template I (black)
vs genuine candidate Ig (red) verification measurement.}
\label{fig:I_vs_Ig}
\end{figure}
\begin{figure}[thb]
\centering
\includegraphics[width=0.9\columnwidth]{figures/c2-eps-converted-to.pdf}
\caption{Summed spectra (points) and fits (curves) in the template I (black)
vs hoax Ib (red) verification measurement.}
\label{fig:I_vs_Ib}
\end{figure}
\begin{figure}[thb]
\centering
\includegraphics[width=0.9\columnwidth]{c3-eps-converted-to.pdf}
\caption{Summed spectra (points) and fits (curves) in the template II
(black) vs hoax IIc (red) verification measurement.}
\label{fig:II_vs_IIc}
\end{figure}
\begin{figure}[thb]
\centering
\includegraphics[width=0.9\columnwidth]{figures/c4-eps-converted-to.pdf}
\caption{Summed spectra (points) and fits (curves) in the template II
(black) vs hoax IId (red) verification measurement.}
\label{fig:II_vs_IId}
\end{figure}
\begin{sidewaystable*}[hbt]
\footnotesize
\centering
\caption{Proxy warhead verification measurements}\label{tab:configs_full}
\makebox[\textwidth][c]{
\begin{tabular}{@{\vrule height 10.5pt depth4pt width0pt}cccccccc}
\hline\vspace{-5pt}
meas. & warhead & summed live charge & Al-27 fit peak rate & U-238 fit peak
sum & Al-27 discrepancy & U-238 discrepancy & figure\\
& & $Q_\ell$ [{\textmu}A$\cdot \text{h\, (live)}$] &
[({\textmu}A$\cdot$h)$^{-1}$] & $T$ [({\textmu}A$\cdot$h)$^{-1}$] & $\nu$ (vs
template) & $\nu$ (vs template)\\
\hline
0 & template I & 50.0 & 99.1 $\pm$ 4.5 & 87.9 $\pm$ 4.6 & - & - & -\\
1 & hoax Ia (100\% Pb) & 67.5 & 98.7 $\pm$ 6.5 & 140.8 $\pm$ 4.8 &
$-$0.051~$\sigma$ & 7.9~$\sigma$ & \ref{fig:I_vs_Ia}\\
2 & genuine candidate Ig & 61.0 & 105.3 $\pm$ 6.8 & 99.4 $\pm$ 4.8 &
0.76~$\sigma$ & 1.7~$\sigma$ & \ref{fig:I_vs_Ig}\\
3 & hoax Ib (100\% Pb) & 54.8 & 112.0 $\pm$ 6.0 & 153.4 $\pm$ 4.8 & 1.7~$\sigma$
& 9.8~$\sigma$ & \ref{fig:I_vs_Ib}\\
\hline
4 & template II & 53.7 & 69.9 $\pm$ 7.5 & 59.0 $\pm$ 3.4 & - & - & -\\
5 & hoax IIc (100\% Pb) & 42.9 & 72.7 $\pm$ 8.4 & 109.9 $\pm$ 3.4 &
0.25~$\sigma$ & 10.7~$\sigma$ & \ref{fig:II_vs_IIc}\\
6 & hoax IId (50\% Pb) & 73.7 & 85.8 $\pm$ 3.6 & 84.6 $\pm$ 4.5 & 1.9~$\sigma$ &
4.6~$\sigma$ & \ref{fig:II_vs_IId}\\
\hline
\end{tabular}
}
\\
\footnotesize{Live charge, Al-27 fit rate, and U-238 fit rate are summed
over three detectors. U-238 fit rate is additionally summed over six peaks. Uncertainties are $\pm$1~SD.}
\end{sidewaystable*}
\section{Extrapolation calculations for realistic warhead measurements}
\label{sec:si_extrap}
The extrapolations to more realistic warhead models and future dedicated
verification systems are computed using Eq.~\ref{eq:d2ndEdOmega}, which predicts
the detected count rate of a single NRF line for a given encryption foil and
warhead geometry. All calculations assume isotropic NRF emission for simplicity
(especially when spin states are unknown in e.g.~U-235), and use the same Pb
filter transmission probability function $P_f(E) \sim 0.25$, intrinsic peak
detector efficiency $\epsilon_\text{int} \simeq 0.16$, and single-detector
geometric efficiency $\Omega_d/4\pi \simeq 1.3\times 10^{-3}$ used in this
work's experiments. The incident bremsstrahlung spectrum $\phi_0(E)$ is computed
in a Geant4 simulation of the 126~{\textmu}m Au radiator with an electron beam
energy of 2.521~MeV as determined in Fig.~\ref{fig:endpoint}, and it is assumed
that the simulated flux accurately predicts the flux that would be observed in
the laboratory. For use in Eq.~\ref{eq:d2ndEdOmega}, $\phi_0(E)$
is approximated as a pencil beam impinging on the axis of a concentric-shell
warhead. Calculations for different warhead models assume
different foil compositions (though maintain the $X=3.18$~mm thickness) and
therefore consider different NRF lines depending on whether a uranium or
plutonium component would need to be verified. In both cases, the NRF line used
in Eq.~\ref{eq:d2ndEdOmega} is chosen as the ground-state transition with the
highest integrated cross section based on the values in Table~I
of~\cite{ref:bertozzi_full}, excluding those (in particular, the Pu-239
2143.56~keV transition) with poorly-understood nuclear level schemes. In all
scenarios, a Pb hoax is constructed by replacing a weapon-isotope component with
the same areal density of Pb. The runtime (quantified in `detector live microamp
hours', i.e.~the triple product of the number of detectors, the beam current,
and the live time) required to distinguish the NRF count rates $r_1$ and $r_2$
of the genuine and hoax warheads at a confidence $\nu'$ of $5\,\sigma$ is
computed as
\begin{align}
\nu' = \frac{r_2 - r_1}{\sqrt{r_1/C_\ell + r_2/C_\ell}},
\end{align}
where the triple product $C_\ell$ is assumed to be equal in the two
measurements. Table~\ref{tab:extrap_params} in the article lists the results of these
calculations for six different warhead geometries. The extrapolation
calculations given in the article, for instance, use 30 detectors and a 5~mA
current to arrive at a measurement time of roughly 20 minutes (per object) for
the second entry of Table~\ref{tab:extrap_params}. These
calculations use only the counts in a single NRF line, but summing the lines
from a single isotope will increase the statistics and reduce the quoted
required measurement times.
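Inverting this expression gives the required live charge directly, $C_\ell = \nu'^2\,(r_1+r_2)/(r_2-r_1)^2$; a short helper with a purely illustrative (non-tabulated) example:
\begin{verbatim}
def required_live_charge(r1, r2, nu=5.0):
    """Detector live charge (in detector live uA h) needed to separate
    genuine (r1) and hoax (r2) NRF rates at nu sigma, given r1 and r2
    in counts per detector live uA h."""
    return nu**2 * (r1 + r2) / (r2 - r1)**2

# Example with assumed rates: r1 = 0.5, r2 = 1.0 counts per detector live
# uA h gives 25 * 1.5 / 0.25 = 150 detector live uA h.
\end{verbatim}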
We note that for more realistic (i.e.~non-slab) geometries, the
precise alignment of the beam, warhead, foil, and detectors may become important
for the prediction of absolute NRF count rates in a verification measurement.
These systematic factors will cancel in the verification measurement, however, if
they are kept constant between template and candidate measurements. For completeness, we estimate
here the effect of a misalignment of the warhead transverse to the beam, which
effectively changes the warhead thickness $D$. If we take for simplicity a nominal
spherical shell of DU with inner and outer radii of 6.3 and 6.5~cm,
respectively, and measure until 2000 counts are obtained in the 2.176~MeV line
of U-238, we find (using Eq.~\ref{eq:d2ndEdOmega}) that a $1\,\sigma$
discrepancy in the observed 2.176~MeV rate requires a misalignment of
approximately $1.6$~cm. Such displacements may also affect the solid angle
integration of Eq.~\ref{eq:d2ndEdOmega}, changing the geometric efficiency by
factors on the order of $10\%$, depending on the foil-to-detector distance.
However, geometric control on scales $<1$~cm is feasible with appropriate survey
equipment. This effect is further mitigated by the fact that the bremsstrahlung
beam has a spatial width and thus samples an extended area of the warhead at a given time. For these
experiments, the opening half-angle of the beam cone was approximately $5^\circ$ and the photon illumination
in this cone was relatively uniform (to better than 10\%). The collimation of the beam could be adjusted
to further cancel misalignment effects by adjusting the size and uniformity of the beam spot on the
inspected object. Finally, manufacturing tolerances in the warheads or true
warhead-to-warhead variation in component sizes may also affect the results of
the template measurement, but estimates of such variations are not available in
the open literature.
An additional concern regarding extrapolation to measurements
at mA-scale beam currents is the capability of the HPGe detectors to handle
the event rate increase and the additional loss of live time to
pile-up events. While a 5~mA current will produce 200$\times$ as many bremsstrahlung
photons relative to the 25~{\textmu}A currents of the experiments described in this
article, the rates in the HPGe detectors will increase by a lesser factor. Realistic
inspection objects will be larger than the mock warheads used for this work and thus will prevent a greater fraction of the
beam from reaching the encryption foil. Since the event rate in the HPGe detectors is dominated
by photons scattered from the foil, this attenuation reduces the event rate in the detectors.
To estimate the size of this effect, the transmission of the bremsstrahlung beam through the warhead
test object to the encryption foil was simulated for two scenarios --- the ``template I'' mock warhead
and the WGPu+DU ``Black Sea'' warhead model consisting of spherical shells of WGPu, high explosives,
and a uranium tamper \cite{ref:fetter1990gamma}. Fig. \ref{fig:foiltrans} shows the transmitted
spectrum through each of these objects per 1~{\textmu}C of electron beam on target. The total transmitted
photon rate through the mock warhead is approximately 11 times higher than that of the more realistic
Black Sea model, and the rate at the high end of the spectrum ($\gtrsim$2 MeV) is approximately 6 times
higher for the mock warhead. Since photons are more likely to eventually cause events in the detectors
if they strike the foil at high energies, the latter factor of $\sim$6 is taken as a conservative estimate
of the rate reduction due to the thicker warhead. Thus, the increase in the event rate in the detectors
between the experiments described in this work and a realistic warhead under inspection with a 5~mA electron
beam current will be approximately $30\times$ ($200/6 \approx 30$). At this rate, the dead time fraction due to pile-up events would
be $\sim$60\% while the increase in the fixed event rate processing would result in a total dead time
fraction of $\sim$90\%. This corresponds to a live time reduction of approximately an order of magnitude
relative to the mock warhead experiments. This may be mitigated in a future realistic verification
scenario by increasing the number of detectors to directly increase the total live time; moreover, HPGe
detectors capable of operating at MHz-scale rates are likely to become available and would be more
than sufficient for this application \cite{ref:pnnlhpge}. Additionally, the 5~mA beam current assumption
here is merely a starting point to provide a reference for estimated measurement times and may be optimized
to achieve a balance between detector rates and measurement times. As also noted in the main text, minimal effort
was made to optimize the low-energy photon filters in front of the HPGe detectors for this experiment, and it
is likely that further rate reductions could be achieved by optimized shielding.
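The live-time arithmetic above can be reproduced with a simple paralyzable dead-time model (a sketch; the per-event processing time $\tau$ and the baseline live fraction are assumptions for illustration, not measured values):
\begin{verbatim}
import numpy as np

def live_fraction(rate, tau):
    """Live-time fraction of a paralyzable detector with effective
    per-event processing time tau."""
    return np.exp(-rate * tau)

# If a baseline rate r0 gives live_fraction(r0, tau) ~ 0.9, then a 30x
# higher rate gives 0.9**30 ~ 0.04, i.e. ~96% dead time -- the same order
# as the ~90% total dead-time fraction estimated above.
\end{verbatim}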
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/transmitted.pdf}
\caption{Transmitted spectrum of the bremsstrahlung beam at the encryption foil
for the template I mock warhead and the Black Sea model warhead \cite{ref:fetter1990gamma}.}
\label{fig:foiltrans}
\end{figure}
\section{Multi-line inference}
The equation for the predicted NRF rates (Eq.~\ref{eq:d2ndEdOmega} or its
integrated form), contains multiple quantities that are kept secret from the
inspector, and thus cannot be used alone to infer the warhead thickness~$D$ from
a physically-encrypted spectrum. However, it is possible to develop a system of
equations from Eq.~\ref{eq:d2ndEdOmega}---one equation per NRF peak---in which
case there may be at least as many equations as unknowns and inference may be
possible. This technique may be especially straightforward to realize by using
two NRF lines of the same isotope with similar resonance energies $E_r$, such as
the 2.176~MeV and 2.209~MeV U-238 lines (if the latter were not obscured by the
2.212~MeV Al-27 line) in any of the above NRF spectra. Taking a ratio of
Eq.~\ref{eq:d2ndEdOmega} for the two lines allows one to cancel systematic
factors such as the $\epsilon_\text{int}(E')$, $P_f(E')$, and number densities
$N$, and to approximately cancel slowly varying functions of energy $E$ such as
$\phi_0(E)$ or perhaps even $\phi_t(E)$. If the $\phi_t(E)$ are canceled, the
ratio of observed counts may then be used to estimate the foil thickness~$X$,
which could in turn be used to estimate~$D$; more complicated procedures are
required if only the $\phi_0(E)$ are canceled. This information security question may be solved by use of the encryption plates, which obscure the true value of $D$ and permit inference only on some upper bound $D + \Delta D$.
\section{Introduction}\label{sec:intro}
A connected, complete Riemannian manifold $(M^{n},g)$ is called a {\em
conformal gradient soliton} if there exists a {\em nonconstant} smooth
function $f$, called {\em potential} of the soliton, such that
\begin{equation*}
\nabla^{2} f \, = \, \varphi\, g \,,
\end{equation*}
for some function $\varphi:M^n\to\RR$. Tracing this equation with the metric $g$, we see
immediately that the function $\varphi$ must coincide with $\Delta f/n$.
Hence, an equivalent characterization of conformal gradient solitons is given by the equation
\begin{equation}\label{confsol}
\nabla^{2} f \, = \, \frac{\Delta f}{n}\, g \,.
\end{equation}
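A basic example to keep in mind is the Gaussian soliton on the flat $\RR^{n}$: the potential $f(x)=|x|^{2}/2$ satisfies
\begin{equation*}
\nabla^{2} f \, = \, g \, = \, \frac{\Delta f}{n}\, g\,,
\end{equation*}
since $\Delta f = n$. Analogously, on the unit round sphere $\SS^{n}\subset\RR^{n+1}$ the restriction $h$ of any linear coordinate function of $\RR^{n+1}$ is a potential, by the classical identity $\nabla^{2} h = -h\, g$, which gives $\Delta h = -nh$.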
In this note we are going to fully detail a remark of Petersen and
Wylie~\cite[Remark A.3]{pw} about the classification of these
solitons; moreover, we revisit a result of Tashiro~\cite{tashiro}, who
first studied their global structure.
Complete Riemannian manifolds admitting a vector field $\nabla f$
satisfying equation~\eqref{confsol} were studied by many authors in the late
60's. Solutions to equation~\eqref{confsol}
have also been considered in a work by Cheeger and
Colding~\cite{cheegcold}, where the authors give a characterization of
warped product manifolds. In particular, they observe that, in the
complement of the critical points of $f$, any conformal gradient
soliton is isometric to a warped product on some open interval. Taking
advantage of this, we will be able to drastically simplify the proof
of the classification result for conformal gradient solitons given by
Tashiro. Moreover, we further characterize conformal gradient solitons
with nonnegative Ricci tensor, in the spirit of some recent works
about the classification of Einstein--like structures, such as
gradient Ricci solitons and quasi--Einstein manifolds.
Our main result reads:
\begin{teo}\label{teo1} Let $(M^{n},g)$ be a complete conformal gradient soliton and let $f$ be a potential function for it. Then, any regular level set $\Sigma$ of $f$ admits a maximal open neighborhood $U\subset M^n$ on which $f$ only depends on the signed distance $r$ to the hypersurface $\Sigma$. In addition, the potential function $f$ can be chosen in such a way that the metric $g$ takes the form
\begin{equation*}
g \, = \, dr^2 \,+ \,(f'(r))^{2}\, g^{\Sigma}\quad {\hbox{on $U$}} ,
\end{equation*}
where $g^{\Sigma}$ is the metric induced by $g$ on $\Sigma$. As a consequence, $f$ has at most two critical points on $M^n$ and we have the following cases:
\begin{itemize}
\item[(1)] If $f$ has no critical points, then $(M^{n},g)$ is globally conformally equivalent to a direct product $I\times N^{n-1}$ of some interval $I=(t_{*},t^{*})\subseteq \RR$ with a $(n-1)$--dimensional complete Riemannian manifold $(N^{n-1},g^{N})$. More precisely, the metric takes the form
$$
g \, = \, u^{2}(t)\, \big(dt^{2}+g^{N}\big) \, ,
$$
where $u:(t_{*},t^{*})\rightarrow \RR$ is some positive smooth
function. In this case, if $(M^n,g)$ is also locally
conformally flat, it is well known that $(N^{n-1},g^N)$ must have constant curvature.
\smallskip
\item[(1')] If, in addition, the Ricci tensor of $(M^n,g)$ is nonnegative, then $(M^n,g)$ is {\em isometric} to a
direct product $\RR\times N^{n-1}$, where $(N^{n-1},g^N)$ has nonnegative
Ricci tensor. In this case, if $(M^n,g)$ is also locally
conformally flat, then either $(M^n,g)$ is flat or it is a direct product of $\RR$ with a quotient of the round sphere $\SS^{n-1}$.
\smallskip
\item[(2)] If $f$ has only one critical point $O\in M^n$, then
$(M^{n},g)$ is globally conformally equivalent to the interior of a Euclidean ball of radius $t^{*}\in(0,+\infty]$.
More precisely, on $M^{n}\setminus~\{O\}$, the metric takes the form
$$
g \, = \, v^{2}(t)\, \big(dt^{2}+t^{2}g^{\SS^{n-1}}\big) \,,
$$
where $v:(0,t^{*})\rightarrow \RR$ is some positive smooth function. In particular $(M^{n},g)$ is complete, noncompact and rotationally symmetric.
\smallskip
\item[(2')] If, in addition, the Ricci tensor of $(M^n,g)$ is nonnegative, then $(M^n,g)$ is globally conformally equivalent to $\RR^{n}$.
\smallskip
\item[(3)] If the function $f$ has two critical points $N,S \in M^n$, then $(M^{n},g)$ is globally conformally equivalent to $\SS^{n}$. More precisely, on $M^{n}\setminus \{N,S\}$, the metric takes the form
$$
g \, = \, w^{2}(t)\, \big(dt^{2}+\sin^{2}(t)\,g^{\SS^{n-1}}\big) \,,
$$
where $w:(0,\pi)\rightarrow \RR$ is some smooth positive function. In particular $(M^{n},g)$ is compact and rotationally symmetric.
\end{itemize}
\end{teo}
In Section~\ref{s_proof} we will prove Theorem~\ref{teo1}, whereas in
Section~\ref{s_yamabe} we will focus our attention on the
classification of gradient Yamabe solitons. These are conformal
gradient solitons satisfying the equation
\begin{equation*}
\nabla^{2}f \, = \, (R-\lambda)\,g\,,
\end{equation*}
for some constant $\lambda$. We will show that any complete,
noncompact, gradient Yamabe soliton with nonnegative Ricci tensor
either has constant scalar curvature, or it splits isometrically as a
direct product $\RR\times N^{n-1}$, or it is rotationally symmetric
and globally conformally equivalent to $\RR^{n}$
(see Theorem~\ref{teoY} and Theorem~\ref{teokY} for the generalization
to the case of gradient $k$--Yamabe solitons).
\medskip
\section{Proof of Theorem~\ref{teo1}}\label{s_proof}
Let $\Sigma$ be a regular level set of the function $f:M^n\to\RR$,
i.e. $|\nabla f|\neq 0$ on $\Sigma$, which exists by Sard's Theorem and the fact that $f$ is nonconstant in our definition. We have that $|\nabla f|$ has to be
constant on $\Sigma$. Indeed, for all $X\in T_{p}\Sigma$
$$
\nabla_{X} |\nabla f|^{2} \,=\, 2 \,\nabla^{2} f (\nabla f, X) =
\frac{2\,\Delta f}{n} \,g(\nabla f,X) \, = \, 0\,.
$$
From this we deduce that, in a neighborhood $U$ of $\Sigma$ not
containing any critical point of $f$, such potential function only depends
on the signed distance $r$ to the hypersurface $\Sigma$. In particular
$df=f' dr$. Moreover, if $\theta=(\theta^{1},\ldots,\theta^{n-1})$ are coordinates {\em adapted} to the hypersurface $\Sigma$, we get
$$
\nabla^{2}f \,=\, \nabla df \,=\, f'' dr\otimes dr + f' \nabla^2
r=\, f'' dr\otimes dr + \frac{f'}{2} \, \partial_r g_{ij} \,d\theta^i\otimes d\theta^j\,,
$$
as
$$
\Gamma_{rr}^r=\Gamma_{rr}^k=\Gamma_{ir}^r=0\,,\qquad
\Gamma_{ij}^r=- \frac{1}{2} \,\partial_r g_{ij}\,,\qquad
\Gamma_{ir}^k= \frac{1}{2} \, g^{ks}\partial_r g_{is}\,.
$$
On the other hand, using equation~\eqref{confsol}, we have
$$
\nabla^{2} f \, = \, \frac{\Delta f}{n}\, g \,= \, \frac{\Delta
f}{n}\,(\,dr \otimes dr + g_{ij}\, d\theta^{i} \otimes
d\theta^{j}\,)\,,
$$
thus, $\Delta f = n f''$ and $g_{ij} \Delta f = \frac{n}{2} f' \,\partial_{r}
g_{ij}$. These equations imply the family of ODE's
$$
f''(r) \,g_{ij}(r,\theta) \,=\, \frac{f'(r)}{2} \,\partial_{r}
g_{ij}(r,\theta)\,.
$$
Since $f'(0)\not=0$ (otherwise $\Sigma$ is not a regular level set of $f$) we can integrate these equations obtaining
$$
g_{ij}(r,\theta) \, = \, \big(f'(r)/f'(0)\big)^{2} g_{ij}(0,\theta)\,.
$$
Therefore, in $U$ the metric takes the form
$$
g \, = \, dr \otimes dr \,+ \,\big(f'(r)/f'(0)\big)^{2}\, g^{\Sigma}_{ij}(\theta)\,d\theta^{i} \otimes d\theta^{j}\,,
$$
where $g^{\Sigma}_{ij}(\theta)=g_{ij}(0,\theta)$ is the metric induced by $g$ on $\Sigma$. We notice that, since $f=f(r)$, then the width of the neighborhood $U$ is uniform with respect to the points of $\Sigma$, namely we can assume $U=\{r_{*}<r<r^{*}\}$, for some maximal $r_{*}\in[-\infty,0)$ and $r^{*}\in(0,\infty]$. Moreover, by the scalar invariance of equation~\eqref{confsol}, we can assume that $f'(0)=1$, possibly changing the function $f$. Hence, in $U$, the metric can be written as
\begin{equation}\label{metric}
g \, = \, dr \otimes dr \,+ \,(f')^{2}\, g^{\Sigma}\,,
\end{equation}
where $g^{\Sigma}$ denotes the induced metric on the level set $\Sigma$. Moreover, the Ricci tensor and the scalar curvature of the metric $g$ take the form (see~\cite[Proposition~9.106]{besse})
\begin{equation}\label{ricci}
\Ric_{g} = -(n-1) \frac{f'''}{f'}\, dr\otimes dr + \Ric^{\Sigma} - \big((n-2)(f'')^{2} + f'\,f'''\big)\,g^{\Sigma}\,,
\end{equation}
\begin{equation}\label{scalar}
R_{g} = -2(n-1)\frac{f'''}{f'} + \frac{R^\Sigma-(n-1)(n-2)(f'')^{2}}{(f')^{2}}\,.
\end{equation}
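As a quick consistency check of formulas~\eqref{ricci} and~\eqref{scalar}, consider the model case $f'(r)=r$, for which the metric~\eqref{metric} is the cone $dr^{2}+r^{2}g^{\Sigma}$. Then $f''=1$ and $f'''=0$, so
\begin{equation*}
\Ric_{g} \,=\, \Ric^{\Sigma}-(n-2)\,g^{\Sigma}\,,\qquad\quad R_{g}\,=\,\frac{R^{\Sigma}-(n-1)(n-2)}{r^{2}}\,,
\end{equation*}
which vanish precisely when $g^{\Sigma}$ is the metric of the unit round sphere $\SS^{n-1}$, recovering the flat metric of $\RR^{n}$ written in polar coordinates.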
\medskip
{\bf Case 1: $f$ has no critical points.} Since $(M^{n},g)$ is complete, the width of the maximal neighborhood $U$ is unbounded in both the negative and the positive direction of the signed distance $r$ (i.e., $r_{*}=-\infty$ and $r^{*}=+\infty$). To complete the proof, it is sufficient to set
$$
t(r) \,=\, \int_{0}^{r}\frac{1}{f'(z)}\,dz\,.
$$
For $r\in(-\infty,+\infty)$, we have $t\in(t_{*},t^{*})$, where $t_{*}=\lim_{r \rightarrow r_{*}} t(r)\in[-\infty,0)$ and $t^{*}=\lim_{r \rightarrow r^{*}} t(r)\in(0,+\infty]$. Moreover, $t'(r)\neq 0$ and $r$ can be viewed as a function of $t$ by inverting the expression above. From~\eqref{metric}, the metric takes the form
$$
g \,=\, u(t)^{2}\big(\,dt^{2}+g^{\Sigma}\,\big)\,,
$$
where $u(t)=f'(r(t))$.
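Indeed, since $dr=f'(r(t))\,dt=u(t)\,dt$, the warped product form~\eqref{metric} factorizes as
\begin{equation*}
g\,=\,dr\otimes dr+(f')^{2}\,g^{\Sigma}\,=\,u(t)^{2}\,dt\otimes dt+u(t)^{2}\,g^{\Sigma}\,=\,u(t)^{2}\big(dt^{2}+g^{\Sigma}\big)\,.
\end{equation*}
Moreover, since $(M^{n},g)$ is complete and $\Sigma$ is a closed hypersurface, the induced metric $g^{\Sigma}$ is complete as well, so Case~(1) of Theorem~\ref{teo1} follows with $N^{n-1}=\Sigma$.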
\medskip
{\bf Case 1': $f$ has no critical points and $\Ric\geq 0$.} From formula~\eqref{ricci} and the fact that $g$ has nonnegative Ricci tensor, one has
$$
0\,\leq\, \Ric_{g}(\partial_r, \partial_r) \,=\, -(n-1)\frac{f'''}{f'}\,.
$$
Hence, $f'$ is a concave function defined on the whole real line that can
never be zero; since $f'(0)=1$, it is everywhere positive. As a nonconstant
concave function on $\RR$ must tend to $-\infty$ in at least one direction,
$f'$ must be constant, that is $f'\equiv 1$, according to
our choice of $f'(0)$. This implies that $(M^{n},g)$ is isometric to the
direct product $\RR\times N^{n-1}$ of the real line with an
$(n-1)$--dimensional complete Riemannian manifold with nonnegative Ricci tensor.
\begin{rem}\label{remark} We notice that, in this case, the Ricci tensor has a zero eigenvalue at every point. Hence, there are no examples of such manifolds for which the Ricci tensor is positive definite at some point.
\end{rem}
\medskip
{\bf Case 2: $f$ has only one critical point $O\in M^n$.} In this case, since $(M^{n},g)$ is complete, we can assume that the width of the neighborhood $U$ is unbounded in the positive direction of the signed distance (i.e., $r_{*}>-\infty$ and $r^{*}=+\infty$) and $f'\rightarrow 0$, as $r\rightarrow r_{*}$. By formula~\eqref{ricci} and the smoothness of the metric $g$, we have that $f'''/f'$ is bounded, as $r\rightarrow r_{*}$. Hence, from~\eqref{scalar}, we deduce that
$$
R^{\Sigma}-(n-1)(n-2)(f'')^{2}\longrightarrow 0\,,\quad\quad\hbox{as}\,\, r\rightarrow r_{*}\,.
$$
In particular $R^{\Sigma}$ is nonnegative and constant along $\Sigma$. Moreover, it is easy to see that
\begin{equation}\label{hopital}
\lim_{r\to r_{*}} \,\frac{f'(r)}{r-r_{*}} \, = \lim_{r\to r_*}
f''(r)\, = \bigl(R^{\Sigma}/\bigl((n-1)(n-2)\bigr)\bigr)^{1/2}\,.
\end{equation}
To conclude the proof of Case~2 of Theorem~\ref{teo1}, it remains to
show that the induced metric $g^{\Sigma}$ on the level set $\Sigma$
is proportional to the round metric $g^{\SS^{n-1}}$ of the
$(n-1)$--dimensional sphere. This follows from the
elementary fact that, infinitesimally, the metric $g$ is approximately
Euclidean near $O$.
Indeed, the standard expansion of the metric $g$ around $O$, written
in normal coordinates $(x^1, \cdots, x^n)$, gives
\begin{eqnarray*}
g &=& (\delta_{ij}+ \eta_{ij}(x))\, dx^{i}\otimes dx^{j} \\
&=& g^{\mathbb{R}^{n}}+ \eta_{ij}\, dx^{i}\otimes dx^{j} \,,
\end{eqnarray*}
where $\eta_{ij}=\mathcal{O}(|x|^{2})$. Passing to Riemannian polar
coordinates, we write $x^{i} = s
\,\phi^{i}(\vartheta^{1},\ldots,\vartheta^{n-1})$, with
$s=r-r_{*}\in(0,+\infty)$ and $(\vartheta^{1},\ldots,\vartheta^{n-1})$
being local coordinates on $\mathbb{S}^{n-1}$. Notice that
$|\phi^{1}|^{2}+\dots+|\phi^{n}|^{2}=1$ and $|x|=s$. Thus, one has
\begin{eqnarray*}
g &=& ds\otimes ds + \big( s^{2}
{g}^{\mathbb{S}^{n-1}}_{\alpha\beta}+\,s^{2}\eta_{ij}\frac{\partial
\phi^{i}}{\partial \vartheta^{\alpha}}\frac{\partial
\phi^{j}}{\partial \vartheta^{\beta}}\big) \,d\vartheta^{\alpha}
\otimes d\vartheta^{\beta}\,,
\end{eqnarray*}
with $\eta_{ij}=\mathcal{O}(s^{2})$. Comparing with
expression~\eqref{metric}, we see that, for $s\in(0,+\infty)$, we have
$$
f'(s+r_{*})^{2} g^{\Sigma} \,= \, s^{2} {g}_{\mathbb{S}^{n-1}} +
s^{2}\eta_{ij}\frac{\partial \phi^{i}}{\partial
\vartheta^{\alpha}}\frac{\partial \phi^{j}}{\partial
\vartheta^{\beta}} \,d\vartheta^{\alpha} \otimes
d\vartheta^{\beta}\,.
$$
Now, combining the fact that $\eta_{ij}=\mathcal{O}(s^{2})$
with formula~\eqref{hopital}, if we take the limit as $s\to 0$ (which
means $r\to r_{*}$) we obtain $R^{\Sigma}>0$ and
$$
g^{\Sigma} \,=\, c^{2}\,{g}_{\mathbb{S}^{n-1}} \,,
$$
with $c^{2}=(n-1)(n-2)/R^{\Sigma}$. Therefore, on $M^n\setminus\{O\}$, we have
$$
g = ds^{2} + (c\,f'(s+r_{*}))^{2} \,{g}_{\mathbb{S}^{n-1}}\,.
$$
This proves that $g$ is rotationally
symmetric. To complete the proof, we set
\begin{equation}\label{eqqq7}
t(s) \, = \, \exp\Big(\,\frac{1}{c}\,\int_{-r_{*}}^{s}\frac{1}{f'(z+r_{*})}\,dz\,\Big)\,.
\end{equation}
For $s\in(0,+\infty)$, we have $t\in(0,t^{*})$, where $t^{*}=\lim_{s
\to +\infty} t(s)\in(0,+\infty]$. Notice that $t'(s)\neq 0$, hence,
the coordinate $s$ can be viewed as a function of $t$ by inverting the
expression above. Moreover,
$$
\frac{dt}{t} \,=\, \frac{ds}{c\,f'(s+r_{*})}
$$
and the metric $g$ can be expressed as in the statement, namely
$$
g\,=\, v(t)^{2}\big(\,dt^{2}+t^{2}\,g^{\SS^{n-1}}\,\big)\,,
$$
where $v(t)=c\,f'(s(t)+r_{*})/t$.
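For completeness, one can check directly that this substitution is consistent: from $dt/t=ds/(c\,f'(s+r_{*}))$ we get $dt=t\,ds/(c\,f'(s+r_{*}))$, hence
\begin{equation*}
v(t)^{2}\big(dt^{2}+t^{2}g^{\SS^{n-1}}\big)\,=\,\Big(\frac{c\,f'}{t}\Big)^{2}\frac{t^{2}\,ds^{2}}{(c\,f')^{2}}+(c\,f')^{2}\,g^{\SS^{n-1}}\,=\,ds^{2}+\big(c\,f'(s+r_{*})\big)^{2}\,g^{\SS^{n-1}}\,,
\end{equation*}
which is the rotationally symmetric expression of $g$ found above.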
\medskip
{\bf Case 2': $f$ has one critical point $O\in M^n$ and $\Ric\geq
0$.} As in Case 1' above, since $\Ric\geq0$, we have that $z\mapsto f'(z+r_{*})$
is a concave function. In particular, it is eventually bounded above by some linear function as $z\rightarrow +\infty$.
Then, by the very definition of $t^*$ in equation~\eqref{eqqq7}, it
follows that $t^*=+\infty$. This clearly implies that $(M^{n},g)$ is globally conformally
equivalent to $\RR^{n}$ and rotationally symmetric.
\medskip
{\bf Case 3: $f$ has two critical points $N,S\in M^n$.}
We assume that the width of the neighborhood $U$ is bounded in
both the negative and the positive directions of the signed distance
(i.e., $r_{*}>-\infty$ and $r^{*}<+\infty$). In particular $(M^{n},g)$
is compact (it ``closes'' at the points $N$ and $S$) and there cannot
be any other critical point. The same argument used in the proof of Case~2 implies at once the
rotational symmetry of $g$.
Namely, on $M^n\setminus\{N,S\}$, we have
$$
g = ds^{2} + (c\,f'(s+r_{*}))^{2} \,{g}_{\mathbb{S}^{n-1}}\,,
$$
where $c^{2}=(n-1)(n-2)/R^{\Sigma}$. To complete the proof of Case 3, we set
$$
t(s) \, = \, 2\arctan\exp\Big(\,\frac{1}{c}\,\int_{-r_{*}}^{s}\frac{1}{f'(z+r_{*})}\,dz\,\Big)\,.
$$
For $s\in(0,r^{*}-r_{*})$, we have $t\in(0,\pi)$ and $t'(s)\neq 0$,
hence, $s$ can be viewed as a function of $t$ by inverting the
expression above. Moreover,
$$
\frac{dt}{\sin(t)} \,=\, \frac{ds}{c\,f'(s+r_{*})}
$$
and the metric $g$ can be expressed as in the statement, namely
$$
g\,=\, w(t)^{2}\big(\,dt^{2}+\sin^{2}(t)\,g^{\SS^{n-1}}\,\big)
$$
where $w(t)=c\,f'(s(t)+r_{*})/\sin(t)$.
\medskip
This completes the proof of Theorem~\ref{teo1}.
\medskip
\section{Classification of Yamabe--Type Solitons with Nonnegative Ricci
Tensor}\label{s_yamabe}
Let $(M^{n},g)$, $n\geq 3$, be a complete Riemannian manifold verifying
\begin{equation}\label{sol}
\nabla^{2} f \, = \, \varphi\, g \,,
\end{equation}
for some smooth functions $f$ and $\varphi$ on $M^{n}$. When the potential function $f$ is nonconstant, then, according to our definition, $(M^{n},g)$ is a {\em conformal gradient soliton} and Theorem~\ref{teo1} applies.
We first notice that, by taking the divergence of this equation, we have
$$
\nabla _{i} \varphi \, = \, \Delta \nabla_{i} f \, =
\,\nabla_{j}\nabla_{i}\nabla^{j} f \,
= \, \nabla_{i} \Delta f + R_{ij}\nabla^{j}f\,,
$$
where we interchanged the covariant derivatives. Now, using
the fact that $\Delta f=n\,\varphi$, we obtain the following identity
\begin{equation}\label{eq2}
(n-1) \, \nabla_{i} \varphi \,=\, - R_{ij} \nabla^{j} f\,.
\end{equation}
We will discuss now some geometric applications of Theorem~\ref{teo1} to {\em gradient Yamabe solitons} and {\em gradient $k$--Yamabe solitons}.
\medskip
\subsection{Gradient Yamabe Solitons}
A Riemannian manifold $(M^{n},g)$ is called a {\em gradient Yamabe
soliton} if it satisfies equation~\eqref{sol} with $\varphi \, =
\, R-\lambda$ for some constant $\lambda\in\RR$, i.e., there exists
a smooth function $f$ (notice that here we are not excluding the case of a constant $f$) such that
\begin{equation}\label{Ysol}
\nabla^{2}f \, = \, (R-\lambda)\,g\,.
\end{equation}
If $\lambda=0$, $\lambda>0$ or $\lambda<0$, then the soliton is called {\em
steady}, {\em shrinking} or {\em expanding}, respectively. We recall that gradient Yamabe solitons are self--similar solutions to the Yamabe flow
$$
\frac{\partial}{\partial t} \,g \, = \, -R\,g\,.
$$
This flow was first introduced by Hamilton and we refer the reader to~\cite{dasksesum} and the references therein for further details on this subject.
Notice that any Riemannian manifold with constant scalar curvature
moves by the Yamabe flow only by dilations. Hence, it is trivially a
self--similar solution and a gradient Yamabe soliton with $R=\lambda$ and $f$ constant. Thus, according to our definitions, only gradient Yamabe solitons with nonconstant potential function $f$ can be viewed as conformal gradient solitons.
On the other hand, it is well known (see, for
instance~\cite[Proposition~B.16]{chowluni})
that any compact gradient Yamabe soliton has constant
scalar curvature $R=\lambda$. For the sake of completeness, we report here the proof.
\begin{teo}\label{teoYcpt} Any compact gradient Yamabe soliton has
constant scalar curvature $R=\lambda$. Moreover, the potential function $f$ is constant.
\end{teo}
\begin{proof}
Contracting equation~\eqref{Ysol} with the Ricci tensor and
integrating over $M^n$, we obtain
$$
\int_{M^n}(R-\lambda)R\,dV_{g} \,= \, \int_{M^n}R_{ij}\,\nabla^{ij}
f\,dV_{g} \,=\, - \int_{M^n}\nabla_{i}R_{ij}\,\nabla_{j}f\,dV_{g}\,=\,
-\frac{1}{2}\int_{M^n}\langle\nabla R,\nabla f\rangle\,dV_{g}\,,
$$
where in the last equality we have used Schur's lemma
$2\,\hbox{div}(\Ric) = dR$. Moreover, from equation~\eqref{Ysol}, one
has that $\Delta f=n(R-\lambda)$. Hence, it follows that
$\lambda=\tfrac{1}{\mathrm{Vol}(M^n)}\int_{M^n} R\,dV_g$ and, from the previous
computation, we get
$$
\int_{M^n}(R-\lambda)^{2}\,dV_{g} \, = \,
-\frac{1}{2}\int_{M^n}\langle\nabla R,\nabla f\rangle\,dV_{g} \, = \,
\frac{n}{2}\int_{M^n} (R-\lambda)^{2}\,dV_{g}\,.
$$
Since $n\geq 3$, this implies that $R$ coincides with the constant $\lambda$. As an immediate consequence, by the relation~\eqref{Ysol}, we have
that $\Delta f$ is zero. Since $M^n$ is compact, the function $f$ is constant as well.
\end{proof}
When the function $f$ is constant, obviously the scalar curvature of the gradient Yamabe soliton is constant as well. In such a case, Theorem~\ref{teo1} does not apply. In the sequel we will always assume that relation~\eqref{Ysol} is satisfied by some nonconstant function $f$, so we will deal only with noncompact gradient Yamabe solitons. In this case, as an immediate application of Theorem~\ref{teo1}, we can prove the following global result.
\begin{teo}\label{teoY} Let $(M^{n},g)$ be a complete, noncompact,
gradient Yamabe soliton with nonnegative Ricci tensor and
nonconstant potential function $f$. Then, we have the following two cases:
\begin{itemize}
\item[(1)] either $(M^{n},g)$ is a direct product
$\RR\times N^{n-1}$ where $(N^{n-1},g^N)$ is
an $(n-1)$--dimensional complete Riemannian manifold with nonnegative Ricci tensor. If in addition $(M^{n},g)$ is locally conformally flat, then either it is flat or
the manifold $(N^{n-1},g^N)$ is a quotient of the round sphere $\SS^{n-1}$;
\smallskip
\item[(2)] or $(M^{n},g)$ is rotationally symmetric and globally
conformally equivalent to $\RR^n$. More precisely, there exists a
point $O\in M^n$ such that on $M^{n}\setminus \{O\}$,
the metric has the form
$$
g \, = \, v^{2}(t)\, \big(dt^{2}+t^{2}g^{\SS^{n-1}}\big) \,,
$$
where $v:\RR^+\to \RR$ is some positive smooth function.
\end{itemize}
\end{teo}
From Remark~\ref{remark}, it is now easy to deduce the following
corollary.
\begin{cor}\label{corY} Let $(M^{n},g)$ be a complete, noncompact,
gradient Yamabe soliton with nonnegative Ricci tensor and
nonconstant potential function $f$. If the Ricci tensor is positive definite
at some point, then $(M^{n},g)$ is rotationally symmetric and globally
conformally equivalent to $\RR^n$, in particular, it is locally conformally flat.
\end{cor}
It was proved by P. Daskalopoulos and N. Sesum in~\cite{dasksesum},
that any complete, noncompact, locally conformally flat, gradient
Yamabe soliton with positive sectional curvature
has to be globally conformally equivalent to
$\RR^{n}$. Corollary~\ref{corY} shows that one can remove the assumption
of local conformal flatness and relax the hypothesis on the
sectional curvature. In~\cite{dasksesum} the authors also provide a
complete classification of all rotationally symmetric gradient Yamabe
solitons in the steady, shrinking and expanding cases.
\medskip
\subsection{Gradient $k$--Yamabe Solitons}
A Riemannian manifold $(M^{n},g)$ is called a {\em gradient
$k$--Yamabe soliton} if it satisfies equation~\eqref{sol} with
$\varphi \, = \, 2(n-1)(\sigma_{k}-\lambda)$ for some constant
$\lambda\in\RR$, where $\sigma_{k}$ denotes the
$\sigma_{k}$--curvature of $g$. We recall that, if we denote by
$\mu_{1}, \ldots, \mu_{n}$ the eigenvalues of the symmetric
endomorphism $g^{-1}A$, where $A$ is the Schouten tensor defined by
$$
A \,= \, \tfrac{1}{ n-2} \,\big( \, \Ric - \tfrac{1}{2(n-1)} \,R \,g \, \big) \, ,
$$
then the $\sigma_k$--curvature of $g$ is defined as the $k$--th
symmetric elementary function of $\mu_{1},\ldots,\mu_{n}$, namely
\begin{eqnarray*}
\sigma_{k}\,=\,\sigma_k(g^{-1} A) \, = \, \sum_{i_1\, <\,\ldots\,< \,
i_k}\mu_{i_1}\cdot \, \ldots \, \cdot \mu_{i_k} \,\, \quad \hbox{for
$1\leq k \leq n$}\,.
\end{eqnarray*}
Notice that $\sigma_{1}=\tfrac{1}{2(n-1)}R$, so gradient $1$--Yamabe solitons simply correspond to gradient Yamabe solitons. The structure equation takes the form
\begin{equation}\label{kYsol}
\nabla^{2}f \, = \, 2(n-1)(\sigma_{k}-\lambda)\,g\,,
\end{equation}
for some constant $\lambda\in\RR$. As usual, if $\lambda=0$,
$\lambda>0$ or $\lambda<0$, then $g$ is called {\em steady}, {\em shrinking} or {\em expanding}, respectively.
Again, we observe that only gradient $k$--Yamabe solitons with nonconstant potential function $f$ can be viewed as conformal gradient solitons.
We have seen that, for $k=1$, compact, gradient
Yamabe solitons have constant scalar curvature. By means of a generalized
Kazdan--Warner identity, for any $k\geq 2$, we can prove the following analogue of Theorem~\ref{teoYcpt}.
\begin{teo} Any compact, gradient $k$--Yamabe soliton with nonnegative Ricci tensor has constant $\sigma_{k}$--curvature $\sigma_{k}=\lambda$. Moreover, the potential function $f$ is constant.
\end{teo}
\begin{proof} Let us suppose, by contradiction, that $\sigma_{k}$ is
nonconstant. Then $f$ cannot be constant, since $\Delta f =2n(n-1)\,(\sigma_{k}-\lambda)$. Hence,
we can apply Theorem~\ref{teo1}, obtaining that $(M^{n},g)$ is
globally conformally equivalent to $\SS^{n}$, in particular, $g$ is
locally conformally flat. It was proved in~\cite{han1} that, on a
compact, locally conformally flat, Riemannian manifold, one has
$$
\int_{M^n}\langle X,\nabla\sigma_{k}\rangle\,dV_{g} \,=\,0 \,,
$$
for every conformal Killing vector field $X$ on $(M^{n},g)$. For
$k=1$, this obstruction corresponds to the well known Kazdan--Warner
identity, which holds on any compact Riemannian manifold (i.e.,
without assuming local conformal flatness,
see~\cite{bourgezin}). From the structure equation~\eqref{kYsol}, we
know that $\nabla f$ is a conformal Killing vector field, hence, it
follows that
$$
\int_{M^n}\langle \nabla f,\nabla\sigma_{k}\rangle\,dV_{g} \,=\,0 \,.
$$
Now, contracting the identity~\eqref{eq2} with $\nabla f$, and integrating over $M^n$, we obtain
$$
0\,=\, \int_{M^n}\langle \nabla f,\nabla\sigma_{k}\rangle\,dV_{g} \,=\,
-\frac{1}{2(n-1)^{2}} \int_{M^n} \Ric(\nabla f,\nabla f)\,dV_{g} \,.
$$
From the fact that $g$ has nonnegative Ricci tensor, we obtain that
$\Ric(\nabla f,\nabla f)=0$ everywhere. Then, by equation~\eqref{eq2}, we get $\langle \nabla f,\nabla
\sigma_{k}\rangle =0$. Since $g$ is rotationally symmetric, we have
that $\sigma_{k}$ is constant on the regular level sets of $f$. Hence,
the condition $\langle \nabla f,\nabla \sigma_{k}\rangle =0$ is sufficient
to conclude that $\sigma_{k}$ is constant. This implies that $\Delta f$ is constant. Since $M^n$ is compact, the only possibility is that $f$ is constant and $\sigma_{k}=\lambda$.
\end{proof}
\begin{rem} The same result holds if one considers a generalized $k$--Yamabe soliton structure~\eqref{sol} with $\varphi = \psi (\sigma_{k})$,
for every $k\geq 1$ and every strictly monotone function
$\psi:\RR\to\RR$. For instance, in~\cite{guanwang} the authors consider a
fully nonlinear conformal flow with velocity
$\varphi=\log(\sigma_{k})-\lambda$, with $\sigma_{k}>0$.
\end{rem}
Again, when the function $f$ is constant, obviously the $\sigma_{k}$--curvature of the gradient $k$--Yamabe soliton is also constant. Hence, in such a case Theorem~\ref{teo1} does not apply. In the complete, noncompact case, as an immediate application of
Theorem~\ref{teo1}, we can prove the following global result.
\begin{teo}\label{teokY} Let $(M^{n},g)$ be a complete, noncompact,
gradient $k$--Yamabe soliton with nonnegative Ricci tensor and
nonconstant potential function $f$. Then, we have the following two cases:
\begin{itemize}
\item[(1)] either $(M^{n},g)$ is a direct product
$\RR\times N^{n-1}$ where $(N^{n-1},g^N)$ is
an $(n-1)$--dimensional complete Riemannian manifold with nonnegative Ricci tensor. If in addition $(M^{n},g)$ is locally conformally flat, then either it is flat or
the manifold $(N^{n-1},g^N)$ is a quotient of the round sphere $\SS^{n-1}$;
\smallskip
\item[(2)] or $(M^{n},g)$ is rotationally symmetric and globally
conformally equivalent to $\RR^n$. More precisely, there exists a
point $O\in M^n$ such that on $M^{n}\setminus \{O\}$
the metric has the form
$$
g \, = \, v^{2}(t)\, \big(dt^{2}+t^{2}g^{\SS^{n-1}}\big) \,,
$$
where $v:\RR^+\to \RR$ is some positive smooth function.
\end{itemize}
\end{teo}
From Remark~\ref{remark}, it is now easy to deduce the following corollary.
\begin{cor} Let $(M^{n},g)$ be a complete, noncompact, gradient
$k$--Yamabe soliton with nonnegative Ricci tensor and nonconstant
potential function $f$. If the Ricci tensor is positive definite at some
point, then $(M^{n},g)$ is rotationally symmetric and globally
conformally equivalent to $\RR^n$.
\end{cor}
\
\begin{ackn}
The authors are partially supported by the Italian project FIRB--IDEAS ``Analysis and Beyond''.
\end{ackn}
\medskip
\noindent {\bf Note.} {\em During the editing of this work, H.-D.~Cao,
X.~Sun and Y.~Zhang posted on the {\em ArXiv Preprint Server} the
manuscript~\cite{caosunzhang}, where a classification result for
gradient Yamabe solitons similar to the one discussed in
Section~\ref{s_yamabe} (in particular, Theorem~\ref{teoY}) is
obtained.}
\
\
\bibliographystyle{amsplain}
\section{Introduction}
\label{Introduction}
Predictive modelling of turbulent combustion incorporating finite-rate kinetics is becoming
increasingly important for the development of fuel-flexible combustion devices
with low emissions and high-speed propulsion systems.
Large Eddy Simulation (LES) is a promising approach requiring closure models describing subfilter
transport as for non-reactive flows \cite{sagaut2006large}, but also for filtered reaction rates,
\cite{poinsot2005theoretical,janicka2005large,echekki2009multiscale,menon2010computational}.
Mixing and chemical reactions usually occur together on scales smaller than convection, requiring
different modelling approaches, \cite{echekki2009multiscale,menon2010computational}.
These include flamelet models, e.g.\ \cite{hawkes2000flame}, finite-rate chemistry models such as
thickened flame models, \cite{colin2000thickened},
localized time-scale models, \cite{fureby2009large,giacomazzi2000fractal,sabelnikov2013combustion},
approximate deconvolution models, \cite{mathew2002large},
presumed Probability Density Function (PDF) models, \cite{gerlinger2003investigation},
transported PDF models, \cite{bulat2013large,kim2014effects},
Conditional Moment Closure (CMC) models, \cite{navarro2005conditional},
and Linear Eddy Models (LEM), \cite{menon2011linear}.
Assessment of flamelet and finite-rate chemistry models,
e.g.~\cite{hernandez2011laboratory,ma2014posteriori,fureby2017comparative,fedina2017assessment},
have found satisfactory overall agreement for most models, but with finite-rate chemistry models
performing somewhat better than the flamelet models.
In finite-rate chemistry LES,
the combustion chemistry is incorporated by solving filtered transport equations for the species,
with the filtered reaction rates being computed either explicitly using
filtered Arrhenius reaction rate expressions or tabulated reaction rates, \cite{bulat2015reacting}.
An issue with finite-rate chemistry models is the dependence on the underlying reaction
mechanism, \cite{bulat2015reacting,zettervall2017large}.
From \cite{bulat2015reacting,zettervall2017large,fiorina2005premixed} and other similar studies, it
has been observed that skeletal reaction mechanisms can successfully be used in finite-rate chemistry
LES.
The influence of the filtering is handled through different types of mathematical or
phenomenological models, e.g.\ \cite{colin2000thickened,fureby2009large,giacomazzi2000fractal,sabelnikov2013combustion,mathew2002large,gerlinger2003investigation,bulat2013large,kim2014effects,navarro2005conditional,menon2011linear}, in conjunction with the reaction rates.
The filtering operation has been examined previously for non-reacting LES
(e.g.\ \cite{liu1994properties}), and for combustion LES \cite{LapointeCnF17}
targeting tabulated chemistry based on presumed PDFs.
Here, Direct Numerical Simulation (DNS) results from lean premixed methane-air flames
will be used to examine the influence of the underlying turbulent flow on the filtered reaction
rates, and hence the modelling requirements for finite-rate chemistry LES.
\section{LES of Turbulent Combustion}
LES equations of motion are derived from conservation of mass, momentum and energy by applying
a low-pass filter. Physical processes on scales larger than the filter width, $\Delta$, are
resolved, whereas physics occurring on scales smaller than $\Delta$ require subfilter models.
The full equation set can be found elsewhere (e.g.\ \cite{echekki2009multiscale}), but the focus
here is on the reaction terms appearing in the conservation of mass for species $i$,
\begin{align}
\frac{\partial}{\partial t}\left(\bar{\rho}\tilde{Y}_i\right)
&+\nabla\cdot\left(\bar{\rho}\tilde{Y}_i\tilde{\boldsymbol{v}}\right)
=\nabla\cdot\left(D_i\nabla\tilde{Y}_i-\boldsymbol{b}_i\right)
+\bar{\dot{\omega}}_i,
\label{eq:LES2}
\end{align}
in which, $\bar{\rho}$, $\tilde{\boldsymbol{v}}$, and $\tilde{Y}_i$
are the (Favre) filtered density, velocity, and species mass fractions, respectively,
$D_i$ is the Fickian diffusion coefficient for species $i$,
and the subfilter turbulent mixing is hidden in the diffusive flux term $\boldsymbol{b}_i$.
The filtered reaction term $\bar{\dot{\omega}}_i$ is the focus of the present work,
in particular the non-linear response to the LES filtering operation,
which can be written out to emphasize the dependencies on all dependent variables,
\begin{equation}
\bar{\dot{\omega}}_i=M_i\sum_{j=1}^M (P_{ij}^{\prime\prime}-P_{ij}^{\prime}) \bar{Q}_j,
\label{eq:reac}
\end{equation}
where $\bar{Q}_j$ are the filtered progress rates of reaction $j$,
\begin{equation}
\bar{Q}_j=\overline{\left[k_{f,j}\prod_{k=1}^N\left(\frac{\rho Y_k}{M_k}\right)^{P_{kj}^{\prime}}
-k_{b,j}\prod_{k=1}^N\left(\frac{\rho Y_k}{M_k}\right)^{P_{kj}^{\prime\prime}}\right]},
\label{eq:reacFilt}
\end{equation}
where $k_{f,j}$ and $k_{b,j}$ are the forward and backward rates of reaction $j$, respectively.
Taylor series expansions of (\ref{eq:reacFilt}) have been discussed (from a RANS point of view) in
\cite{poinsot2005theoretical},
and this is not considered a useful approach for increasing the understanding of these
terms or for the development of improved models due to the inherent non-linearities. Alternatively,
by multiplying and dividing each of the reaction rates in (\ref{eq:reac}) by the filtered
reaction rates we have,
\begin{equation}
\bar{\dot{\omega}}_i=M_i\sum_{j=1}^M(P_{ij}^{\prime\prime}-P_{ij}^{\prime})\Omega_jQ_j(\tilde{\boldsymbol{Y}},\bar{T}),
\end{equation}
in which
$\Omega_j=\bar{Q}_j/Q_j(\tilde{\boldsymbol{Y}},\bar{T})$
denote the correlations between the filtered reaction
rates and the reaction rates evaluated by the filtered quantities accessible in LES.
The correlations $\Omega_j$ then constitute a model approach for LES with finite-rate chemistry.
The premise of the present work is to evaluate factors affecting these
correlation terms for a range of lean premixed flames at different $\Ka$ to investigate the
influence of the turbulence on the combustion chemistry, and thus also obtain information about
the subfilter modelling requirements for finite-rate chemistry LES.
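To make this construction concrete, the following is a minimal numerical sketch of how the correlations $\Omega_j$ can be evaluated {\em a priori} from resolved data. It assumes, purely for illustration, a single one-step Arrhenius progress rate and a one-dimensional periodic top-hat filter; the field profiles, rate parameters and filter width are synthetic placeholders rather than the mechanism or DNS data analysed below.
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter1d

def tophat(q, width):
    # Top-hat box filter of the given width (in cells), periodic domain.
    return uniform_filter1d(q, size=width, mode="wrap")

# Synthetic one-dimensional fields standing in for DNS data:
x = np.linspace(0.0, 1.0, 2048)
T = 300.0 + 1500.0 / (1.0 + np.exp(-(x - 0.5) / 0.01))  # flame-like profile
rho = 300.0 / T                                         # toy equation of state
Y = 0.05 * (1.0 - (T - 300.0) / 1500.0)                 # toy fuel mass fraction

def rate(rho, Y, T, A=1.0e9, Ta=15000.0):
    # Toy one-step Arrhenius progress rate Q(rho, Y, T).
    return A * rho * Y * np.exp(-Ta / T)

w = 64                                   # filter width in cells
Q_bar = tophat(rate(rho, Y, T), w)       # filtered reaction rate
rho_bar = tophat(rho, w)
Y_tilde = tophat(rho * Y, w) / rho_bar   # Favre-filtered mass fraction
T_bar = tophat(T, w)
Q_of_filtered = rate(rho_bar, Y_tilde, T_bar)

Omega = Q_bar / np.maximum(Q_of_filtered, 1.0e-30)  # correlation Omega_j
\end{verbatim}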
\section{DNS of Turbulent Premixed Methane Flames}
The simulation database that will be used for the present study consists of a series of DNS with
detailed chemistry of statistically-stationary statistically-planar
turbulent premixed methane flames in maintained homogeneous isotropic turbulence,
\cite{AspdenCNF16,AspdenDodecane17,AspdenHiKa18}.
The simulations were run using the well-established low Mach number combustion solver developed at
the Center for Computational Sciences and Engineering at the Lawrence Berkeley National Laboratory.
The details of the numerical method can be found in \cite{DayBell2000} and \cite{Nonaka2012}.
The methodology treats the fluid as a mixture of perfect gases, using a mixture-averaged model
for diffusive transport, ignoring Dufour and Soret effects.
A long-wavelength forcing term, designed to establish and maintain turbulence with the desired
properties, was included \cite{Aspden08b}.
The chemical kinetics and transport were modelled using GRI-Mech 3.0 without the emissions chemistry
\cite{FrenklachWang1995}, resulting in 35 species with 217 elementary reactions.
The simulations were conducted at $\Lambda=l/l_F=1$,
as part of the study reported in \cite{AspdenDodecane17}, matching the Karlovitz numbers of the
$\Lambda=4$ calculations reported in \cite{AspdenCNF16}
($\Ka=(u^3l_F)/(s_F^3l)=1$ and 36), along with a higher
Karlovitz number case from \cite{AspdenHiKa18} ($\Ka=108$) looking at more turbulent conditions.
The conditions are shown on the regime diagram in figure~\ref{fig:regime},
and span the conventionally-defined thin reaction zone.
As with all DNS studies, the integral length scale has been sacrificed to resolve the flame
adequately, but is sufficient for studying small-scale turbulence-chemistry interaction, and is
representative of high intensity turbulence that would reach these scales through the energy
cascade from larger integral lengths at the same Karlovitz numbers.
\begin{figure}
\centering
\includegraphics[width=88mm]{figures/regime}
\caption{Regime diagram showing the conditions analysed.}
\label{fig:regime}
\end{figure}
\section{Analysis of the DNS Data}
The DNS data were filtered using a simple top-hat box filter with a width approximately equal to
one flame thermal thickness; at approximately 660 microns, this filter width is typical
for combustion LES (e.g.\ \cite{fureby2017comparative}),
and much larger than the scales over which the reactions take place.
A quantity $q$ filtered in this way will be denoted as $\bar{q}$,
with Favre filtered quantities denoted $\tilde{q}=\overline{\rho q}/\bar{\rho}$.
Comparisons are made between the reaction rates $Q_j$, the filtered reaction rates
$\bar{Q}_j$ and the reaction rates evaluated using filtered species and temperature
$Q_j(\tilde{\boldsymbol{Y}},\bar{T})$. Each of these three kinds of reaction rates was evaluated
for a laminar flame profile (i.e.~a steady unstrained one-dimensional flame) and for the DNS data,
from which a temporally-averaged mean and standard deviation was evaluated conditioning on
temperature.
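A minimal sketch of this conditioning step is given below, assuming the unfiltered (or filtered) fields are available as flat arrays; the bin edges and counts are illustrative only. For filtered profiles, the filtered temperature is passed as the conditioning variable.
\begin{verbatim}
import numpy as np

def conditional_mean_std(T, q, nbins=64, Tmin=300.0, Tmax=2200.0):
    # Mean and standard deviation of q conditioned on temperature bins.
    edges = np.linspace(Tmin, Tmax, nbins + 1)
    idx = np.digitize(T, edges) - 1
    mean = np.full(nbins, np.nan)
    std = np.full(nbins, np.nan)
    for b in range(nbins):
        sel = idx == b
        if np.any(sel):
            mean[b] = q[sel].mean()
            std[b] = q[sel].std()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, mean, std
\end{verbatim}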
\begin{figure}
\centering
\makebox[38mm][c]{\small $\Ka=1$}\makebox[38mm][c]{\small $\Ka=108$}\\
\makebox[19mm][c]{\small unfiltered}\makebox[19mm][c]{\small filtered}
\makebox[19mm][c]{\small unfiltered}\makebox[19mm][c]{\small filtered}\\
\includegraphics[width=38mm]{slicesKa01/plt01900_var_24}
\includegraphics[width=38mm]{slicesKa108/plt14000_var_24} \\
\includegraphics[width=38mm]{slicesKa01/plt01900_var_23}
\includegraphics[width=38mm]{slicesKa108/plt14000_var_23} \\
\includegraphics[width=38mm]{slicesKa01/plt01900_var_12}
\includegraphics[width=38mm]{slicesKa108/plt14000_var_12} \\
\includegraphics[width=38mm]{slicesKa01/plt01900_var_15}
\includegraphics[width=38mm]{slicesKa108/plt14000_var_15} \\
\includegraphics[width=38mm]{slicesKa01/plt01900_var_26}
\includegraphics[width=38mm]{slicesKa108/plt14000_var_26} \\
\includegraphics[width=60mm]{figures/colourBar}
\caption{Two-dimensional slices comparing the effect of the filter at $\Ka=1$ and 108;
all fields are normalised by corresponding laminar values, and periodicity has been
exploited to join $x$ and $y$ slices together to show more flame surface. Each square panel
is 20 flame thicknesses across.}
\label{fig:slices}
\end{figure}
The effect of turbulence and the filtering procedure on some representative species is presented
in figure~\ref{fig:slices} as two-dimensional slices; all species at all $\Ka$ are presented in
the supplementary material. As expected, the turbulence has little effect at $\Ka=1$ other than
producing large-scale
wrinkling. At $\Ka=108$, turbulence has a significant effect on the flame, especially on the
preheat region, which is substantially thickened (e.g.\ CH$_4$).
Turbulence also increases the occurrence of highly-curved regions, which leads to variation along
the flame surface and a slightly enhanced radical pool (e.g.\ H).
Further details of flame response to turbulence in these cases can be found in
\cite{AspdenCNF16,AspdenDodecane17,AspdenHiKa18}.
Naturally, the filter smooths out monotonic fields like fuel mass fraction, but has a more
significant effect particularly on the thickness and magnitude of narrow fields such as CH$_3$.
At high $\Ka$, the radicals H and OH appear less impacted by the filter, and present values
in excess of the laminar values (as seen by the magenta and white regions).
\begin{figure}
\centering
\includegraphics[width=48mm]{figures/ka108ych4}
\includegraphics[width=48mm]{figures/ka108ych3}\vspace{2mm}\\
\includegraphics[width=48mm]{figures/ka108yh}
\includegraphics[width=48mm]{figures/ka108yoh}
\caption{Conditional means of unfiltered and filtered species mass fractions for $\Ka=108$.
One standard deviation about the mean is shown by dashed lines of corresponding colour.
Note that the filtered profiles are conditioned on filtered temperature.}
\label{fig:species}
\end{figure}
The response of species distribution to turbulent mixing was classified in \cite{AspdenCNF16}, and
the response to the filtering operation has been found to be consistent with this classification;
all species profiles for all $\Ka$ are presented by classification in the supplementary material,
but the profiles for CH$_4$, CH$_3$, H and OH at $\Ka=108$ are presented in figure~\ref{fig:species}.
At low $\Ka$ (again see supplementary material), the conditional means align closely with the
laminar profiles, and the standard deviations are small. The main response to increasing the
Karlovitz number, is an increase in the standard deviations for intermediate species
(see dashed lines in figure~\ref{fig:species}). The filtered profiles are more
interesting, and depend on temperature, Karlovitz number, and species type. Reactants and
products (see supplementary data) present filtered profiles that, at low-to-moderate temperatures,
align with the unfiltered turbulent profile (it is likely that both actually align with the unity
Lewis number profile); at higher temperatures, however, there is a deviation from the other profiles
(specifically, an increase in fuel mass fraction and a decrease in water mass fraction), the
explanation for which is currently unclear.
The alignment at low temperatures is attributed to penetration
of turbulence into the preheat region, broadening the profiles in physical space.
The conditional means of the radicals O, H and OH increase, whereas almost all of the other
carbon-bearing radicals (represented here by CH$_3$) remain close to the laminar profile. A significant
difference is observed in response to the filter operation; a substantial decrease in the peak value
is observed in the short-lived high-temperature radicals, but is much less pronounced for O, H and
OH, which present increased profiles at lower temperatures.
\begin{figure}
\centering
\includegraphics[trim=140 175 150 125,clip,width=48mm]{figures/ka108cPath}
\includegraphics[trim=140 175 150 125,clip,width=48mm]{figures/ka108cPathFiltered}\\
\includegraphics[trim=120 200 120 70,clip,width=48mm]{figures/ka108oPath}
\includegraphics[trim=120 200 120 70,clip,width=48mm]{figures/ka108oPathFiltered}
\caption{Reaction path diagrams following carbon (top) and oxygen (bottom);
unfiltered DNS data on the left, and reaction rates based on filtered species/temperature
DNS data on the right.}
\label{fig:reacPaths}
\end{figure}
Before considering individual reactions, the effect of the filter on the overall reaction paths
are considered. Path diagrams for $\Ka=108$ are shown in figure~\ref{fig:reacPaths} for carbon
(top), and oxygen (bottom), with the unfiltered DNS data on the left, and
reaction rates based on filtered species/temperature DNS data on the right (note that the
reaction path diagram of filtered reactions is not presented as the operation used to
construct the reaction paths integrates out the effect of the filter).
The size of each arrow reflects the rate of atom transfer normalised by the peak rate, and only
rates greater than 2\% of the peak are shown.
(Note that missing links do not indicate that the reactions are not present, they have just
fallen below the cut-off threshold based on the peak reaction rate.)
It is clear that the filter changes the balance of reaction paths
significantly; following carbon, the main decomposition of CH$_4$ to CH$_3$ far exceeds
all of the other rates, and following oxygen the final step of OH to H$_2$O becomes dominant.
The same response was observed for all Karlovitz numbers (see supplementary material, which
also includes path diagrams following hydrogen);
the effect of the filter on the reaction rates
appears to be largely independent of Karlovitz number for these conditions.
Profiles of the key reaction rates are presented in figure~\ref{fig:reactions} based on the
main pathway from the reaction path diagrams
(CH$_4$$\rightarrow$CH$_3$$\rightarrow$CH$_2$O$\rightarrow$HCO$\rightarrow$CO$\rightarrow$CO$_2$);
again, all reactions for all Karlovitz numbers are included as
supplementary material for completeness. In each plot,
six reaction rate profiles are presented
for unfiltered rates $Q_L$ and $Q_T$, filtered rates $\bar{Q}_L$ and $\bar{Q}_T$,
and rates evaluated with filtered species/temperature $Q(\tilde{\boldsymbol{Y}}_L,\bar{T}_L)$
and $Q(\tilde{\boldsymbol{Y}}_T,\bar{T}_T)$, where suffices $L$ and $T$ denote laminar and
turbulent profiles, respectively; standard deviations are shown by dashed lines for turbulent cases.
\begin{figure}
\centering
\includegraphics[width=48mm]{figures/ka108r097}
\includegraphics[width=48mm]{figures/ka108r010}\vspace{2mm}\\
\includegraphics[width=48mm]{figures/ka108r100}
\includegraphics[width=48mm]{figures/ka108r165}\vspace{2mm}\\
\includegraphics[width=48mm]{figures/ka108r098}
\includegraphics[width=48mm]{figures/ka108r037}\vspace{2mm}\\
\includegraphics[width=48mm]{figures/ka108r083}
\includegraphics[width=48mm]{figures/ka108r003}
\caption{Conditional means of filtered and unfiltered reaction rates for $\Ka=108$ and laminar flames.
Note that the filtered profiles are conditioned on filtered temperature.}
\label{fig:reactions}
\end{figure}
The main reactions responsible for fuel consumption are hydrogen abstraction by O, H, and OH, the
latter is shown by R097 in figure~\ref{fig:reactions}, but is representative of the other two
reactions (see R011 and R052 in the supplementary material). The conditional mean of the turbulent
profile (black) aligns closely with the laminar profile (green), with similar alignment between the
filtered profiles (red and magenta). The reaction rates of the filtered species/temperature (blue
and cyan) are significantly different from the other two pairs of profiles; the effects of
non-linearities in reaction rates profoundly affect all three abstraction reactions, and explain
the dominance of the CH$_4$$\rightarrow$CH$_3$ step in the carbon (and hydrogen) path diagrams of the
filtered data.
The main reaction in the next step in the carbon path (CH$_3$$\rightarrow$CH$_2$O) is R010.
Once again, the reactions align closely in pairs, and while the reaction rates of filtered
species/temperature are high, they are not as significantly different as in the abstraction reactions.
The CH$_2$O$\rightarrow$HCO step is represented here by R100 (see also R015 and R057 in the
supplementary material), and shows an increase in reaction rates of filtered species/temperature with
a magnitude between that observed in R010 and the abstraction reactions.
The HCO$\rightarrow$CO step (R165 presented here; see also R164 and R166 in the supplementary
material) does not see a significant increase in magnitude of reaction rate, nor does the main
oxidation step CO$\rightarrow$CO$_2$ (R098).
The key reactions in the oxygen path diagram are R037, which takes O$_2$ to both O and OH,
R083, which takes OH to H$_2$O (and H$_2$ to H), and R003, which takes O to OH (and again H$_2$ to H);
the latter stages of carbon oxidation are discussed above (R100, R165 and R098).
In R037, the effects of non-linearities are less pronounced than in R083 (or the early stages in the
carbon path), which explains the shift observed in the oxygen path diagram towards R083.
Again the reaction rate of filtered species/temperature (blue) aligns more closely with
the corresponding laminar profile (cyan) than the turbulent filtered reaction profile (red).
For the reactions in general, it appears that in many cases, but by no means all, the profiles
loosely align in pairs; specifically, the conditional mean of the turbulent data (black) aligns
with the laminar profile (green), the filtered turbulent data (red) aligns with the filtered
laminar profile (magenta), and the reaction rate of filtered turbulent species/temperature (blue)
aligns with the reaction rates of the filtered laminar profile (cyan).
There are other reaction rates with interesting response to filtering (for example, see R033, R114
and R124 in the supplementary material), especially those involving HO$_2$, H$_2$O$_2$ or CH$_3$O,
but these are less significant reactions in the overall mechanism.
\begin{figure}
\centering
\includegraphics[width=48mm]{figures/delta1}
\includegraphics[width=48mm]{figures/delta2}\vspace{2mm}\\
\includegraphics[width=48mm]{figures/delta3}
\includegraphics[width=48mm]{figures/delta4}
\caption{Difference metrics $\delta_i$ for all turbulent cases.}
\label{fig:deltas}
\end{figure}
To quantify the alignment of the different reaction profiles, first define a normalised
difference between two functions $f(T)$ and $g(T)$ as
$$
\delta(f,g)=\sqrt{\int (f-g)^2\,\D T\Bigg/\frac{1}{2}\int \big(f^2+g^2\big)\,\D T},
$$
and then define five differences
\begin{equation*}
\delta_1=\delta\left(Q_T,Q_L\right),\qquad
\delta_2=\delta\left(\bar{Q}_T,\bar{Q}_L\right),
\end{equation*}
\begin{equation*}
\delta_3=\delta\left(Q(\tilde{Y}_T,\bar{T}_T),Q(\tilde{Y}_L,\bar{T}_L)\right),
\end{equation*}
\begin{equation*}
\delta_4=\delta\left(Q(\tilde{Y}_T,\bar{T}_T),\bar{Q}_T\right),\qquad
\delta_5=\delta\left(Q(\tilde{Y}_L,\bar{T}_L),\bar{Q}_L\right).
\end{equation*}
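With the conditional profiles sampled on a common temperature grid, this metric can be transcribed directly; the helper below is a sketch under that assumption.
\begin{verbatim}
import numpy as np

def delta(f, g, T):
    # Normalised L2 difference of two conditional profiles f(T) and g(T).
    num = np.trapz((f - g) ** 2, T)
    den = 0.5 * np.trapz(f ** 2 + g ** 2, T)
    return np.sqrt(num / den)
\end{verbatim}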
These five differences are depicted by bar graphs in figure~\ref{fig:deltas} for each Karlovitz
number. This comparison clearly demonstrates that the laminar-turbulent pairs of profiles from
figures~\ref{fig:species} and \ref{fig:reactions} (black and green by $\delta_1$, red and magenta by
$\delta_2$, and blue and cyan by $\delta_3$) are more closely aligned than the filtered reactions
(red, magenta) and the reaction of filtered species/temperature (blue, cyan) in both turbulent
and laminar differences ($\delta_4$, $\delta_5$).
Interestingly, the laminar difference $\delta_5$ is generally larger than all of the turbulent
differences $\delta_4$, which actually decrease with increasing $\Ka$.
\begin{figure}
\centering
\includegraphics[width=48mm]{figures/omegaCH4-CH3}
\includegraphics[width=48mm]{figures/omegaCH3-CH2O}\vspace{2mm}\\
\includegraphics[width=48mm]{figures/omegaCH2O-HCO}
\includegraphics[width=48mm]{figures/omegaHCO-CO}\vspace{2mm}\\
\includegraphics[width=48mm]{figures/omegaCO-CO2}
\includegraphics[width=48mm]{figures/omegaMisc}
\caption{Conditional means of reciprocals of reaction rate ratios, $\Omega_j^{-1}=Q_j(\tilde{\boldsymbol{Y}},\bar{T})/\bar{Q}_j$.}
\label{fig:omegas}
\end{figure}
From a modelling point of view, the ratio of the filtered reaction rate to the reaction rate of
filtered species/temperature was defined as
$\Omega_j=\bar{Q}_j/Q_j(\tilde{\boldsymbol{Y}},\bar{T})$.
Figure~\ref{fig:omegas} plots the reciprocal of $\Omega_j$ for the key reactions plotted in
figure~\ref{fig:reactions}, along with other corresponding reactions in the key steps;
the reciprocal is used as it was found to tend to zero on both sides of the flame.
Once again, the strongest response is observed in the hydrogen abstraction reactions R011,
R052 and R097.
Perhaps surprisingly, there appears to be relative insensitivity to turbulence intensity;
furthermore, the profiles from the filtered laminar flame have also been plotted as dotted
lines, and are reasonably close to the turbulent profiles.
This suggests that a potential turbulence modelling approach is to derive values for $\Omega_j$
based on the laminar flame.
Notwithstanding the temperature dependence of $\Omega_j$, a model constant $\hat{\Omega}_j$ can be
defined for each reaction as
\begin{equation}
\hat{\Omega}_j=\frac{\max_T\left|\bar{Q}_{j}\right|}{\max_T\left|Q_j(\tilde{\boldsymbol{Y}},\bar{T})\right|}.
\end{equation}
This represents a simple scaling of each reaction based on the ratio of filtered reaction rate to
reaction rate of filtered species and temperature; note that the errors introduced through this
approximation will be greatest away from the flame where the reactions go to zero, and so are
expected to be of little importance. The log of $\hat{\Omega}_j$ is presented in
figure~\ref{fig:modelOmega}, which gives equal weighting to reactions that need to be enhanced
(positive values) and those that need to be suppressed (negative values). The values are
predominantly negative, indicative that $Q_j(\tilde{\boldsymbol{Y}},\bar{T})$ is generally
higher than $\bar{Q}_j$. Pronounced negative values are found for the abstraction reactions
considered above, along with those for ethane and propane (e.g.~R027, R205 and R207), and some
other reactions involving CH$_3$O (e.g.~R017 and R153). Pronounced positive values appear to involve
species and reactions that have narrow profiles, and so are smeared by the filter (e.g. R122, R127,
R133, R140, and R175). Again, note that there is relative insensitivity to $\Ka$, and typically,
the highest $\Ka$ corresponds to the smaller values of $\log\hat{\Omega}_j$.
Scaling the turbulent $\hat{\Omega}_{j,T}$ by the corresponding laminar value, $\hat{\Omega}_{j,L}$,
as shown in the right-hand panel of figure~\ref{fig:modelOmega}, further demonstrates a surprising
insensitivity to turbulence conditions.
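A sketch of how such a model constant could be tabulated from filtered laminar profiles is given below; the array layout (one row per reaction, sampled in temperature) is an assumption made for illustration.
\begin{verbatim}
import numpy as np

def model_omega_hat(Q_bar_L, Q_filt_L):
    # Per-reaction scaling: peak |filtered laminar rate| divided by the
    # peak |rate of filtered laminar species/temperature|.
    # Both inputs have shape (n_reactions, n_T).
    return (np.max(np.abs(Q_bar_L), axis=-1)
            / np.max(np.abs(Q_filt_L), axis=-1))

# In an LES, each progress rate would then be rescaled as
#   Q_model[j] = omega_hat[j] * Q_j(Y_tilde, T_bar)
\end{verbatim}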
\begin{figure}
\centering
\includegraphics[width=48mm]{figures/modelOmegaSorted}
\includegraphics[width=48mm]{figures/modelOmegaNormPM}
\caption{Left: model values of $\log\hat{\Omega}_j$. Right: $\hat{\Omega}_{j,T}/\hat{\Omega}_{j,L}$.}
\label{fig:modelOmega}
\end{figure}
This suggests that it may be possible to formulate a model for $\hat{\Omega}_j$ based on filtered
laminar profiles alone. To examine this potential, reaction path diagrams are presented in
figure~\ref{fig:modelPaths}
that use filtered species and temperature from the $\Ka=108$ case with reaction rates modified
using $\hat{\Omega}_{j,L}$ from the laminar flame profiles. Compared with the corresponding plots in
figure~\ref{fig:reacPaths}, the modified reaction paths align much more closely with the
filtered reaction rates than the reaction rates of the filtered species and temperature.
There are some clear
differences; the CH$_4$$\rightarrow$CH$_3$ edge is thinner in figure~\ref{fig:modelPaths}, and
there are some new reaction paths present in the oxygen path in figure~\ref{fig:modelPaths}
(e.g. O$_2$$\rightarrow$CH$_2$O and HO$_2$$\rightarrow$CH$_3$O). These differences suggest that
the balance has not been completely restored by the model $\hat{\Omega}_{j,L}$, and the normalisation
hides the overall reaction rates (which can be easily tuned), but this example provides
proof-of-concept that a simple scaling of reaction rates by $\hat{\Omega}_{j,L}$ based on filtered
laminar profiles is a straightforward approach that may yield feasible flame models for LES with
finite-rate chemistry.
\section{Discussion and Conclusions}
An {\it a priori} analysis of a DNS database of turbulent lean premixed methane flames has been
presented. The leading-order effect was found to be due to the
filter operation, and flame response to turbulence was a secondary effect
(figure~\ref{fig:deltas}), which manifested primarily as an increase in standard deviation;
moreover, with increasing Karlovitz
number, the disparity (as represented by $\delta_4$) was found to decrease.
Species profiles (figure~\ref{fig:species}) were found to align with the classification presented
in \cite{AspdenCNF16}. Importantly, the radicals O, H and OH were found
to be less impacted by the filter than other high-temperature radicals, which were significantly
reduced in magnitude by the filter. It is the non-linear response in the reaction progress rates
that presents the main modelling challenge (figure~\ref{fig:reactions}).
By considering reaction path diagrams, key reactions have been identified that are responsible for
disparities between the desired filtered reaction rates and the reaction rates evaluated using
quantities available in LES calculations (i.e.\ the filtered species and temperature).
Specifically, the hydrogen abstraction reactions that take CH$_4$ to CH$_3$ (by O, H and OH)
were found to have a particularly enhanced reaction rate, and dominate the whole reaction path
diagram. Under the conditions presented, reaction paths were found to be largely
independent of turbulence intensity.
In general, reaction rates were found to align in pairs; i.e.~the turbulent profile $Q_T$
aligned with the laminar profile $Q_L$, the filtered profiles $\bar{Q}_T$ aligned with $\bar{Q}_L$,
and the reaction rate of filtered quantities $Q(\tilde{\boldsymbol{Y}}_T,\bar{T}_T)$ aligned with
$Q(\tilde{\boldsymbol{Y}}_L,\bar{T}_L)$, (see figure~\ref{fig:deltas}).
This alignment and relative insensitivity to $\Ka$ suggests that a model for the reaction rate
scalings (e.g.~$\hat{\Omega}_{j,L}$) can be formulated based on filtered laminar profiles.
To this end, an example
was considered by taking the ratio of the maximum absolute values, from which reaction paths were
formed that presented better agreement with the actual reaction paths than those evaluated using
filtered species and temperature. The example given is not intended to be a model proposal,
and much more work is required to develop this concept into a predictive model.
In particular,
further work is required to consider a broader range of conditions (e.g.\ other fuels, Lewis
number effects, and turbulent conditions), and the effect of the filter width (and form).
Moreover, such {\it a priori} analysis is a long way from a predictive model that will perform
well in practice; it is anticipated that the simple approach proposed here will perform poorly
without further calibration. Fine tuning the model constants for reaction rates could be performed
by some kind of automated approach. Finally, confirmation of
the approach will have to come from successful {\it a posteriori} testing, but the present paper
demonstrates proof-of-concept of a potential approach for formulating a
reaction rate model for LES with finite-rate chemistry, and highlights the possibility of
being able to base the approach on filtered laminar flames.
\begin{figure}
\centering
\includegraphics[trim=140 175 150 125,clip,width=48mm]{figures/ka108cPathModelOmega}
\includegraphics[trim=120 200 120 70,clip,width=48mm]{figures/ka108oPathModelOmega}
\caption{Reaction path diagrams following carbon (left) and oxygen (right) using the model values of
$\hat{\Omega}_{j,L}$ and filtered species and temperature from the $\Ka=108$ case.}
\label{fig:modelPaths}
\end{figure}
\section*{Acknowledgments}
\label{Acknowledgments}
CF and NZ acknowledge the financial support from the Swedish Armed forces and by the
Swedish Energy Agency via the EFFECT2 project. The authors are also grateful to
John Bell and Marc Day for computational support.
\section{Introduction}
Nucleon form factors are key quantities in hadron physics as they give us information about the internal structure of the nucleon. More specifically, the interaction of a nucleon with external currents acquires a momentum-transfer dependence, described by the form factors, because the nucleon is not a point-like particle. In this work, we focus on the axial form factor $G_A(Q^2)$ and the induced pseudoscalar form factor $G_P(Q^2)$, which parameterize the nucleon matrix element of the axial vector current
\begin{equation}
\left< N,\bm{p}^\prime,s^\prime \left| A_\mu(x) \right| N,\bm{p},s \right> = \bar{u}^{s^\prime}(\bm{p}^\prime)\ \widetilde{A}_\mu(q)\ u^{s}(\bm{p}) e^{\mathrm{i} q\cdot x}\ ,
\end{equation}
\begin{equation}
\widetilde{A}_\mu(q) = \gamma_\mu\gamma_5 G_{\text{A}}(Q^2) + \gamma_5 \frac{q_\mu}{2m_N}G_{\text{P}}(Q^2)\ ,
\end{equation}
where $p$ and $p^\prime$ are the four-momenta of the initial and final nucleon, and $Q^2=-q^2=-(p^\prime-p)^2$. These form factors are not only accessible in experiments \cite{Bernard} but can also be computed from first principles in Lattice QCD, which is our chosen approach. Lattice QCD is a powerful tool for form factor calculations as it allows disentangling contributions from different quark flavors. Here, we focus on the contributions of the u, d and s quarks corresponding to quark-disconnected diagrams. The techniques to study the connected contributions of the u and d valence quarks, the only contributions required for the iso-vector form factors of the nucleon, are already well-established in the Mainz Lattice group \cite{Capitani}. Combining the connected and disconnected contributions to the axial vector form factors will enable us to determine the weak neutral current (WNC) axial form factor $G_A^Z(Q^2)$, obtained at leading order from the iso-vector contribution $G_A(Q^2)$ and the strange-quark contribution $G_A^s(Q^2)$ using SU(3) flavor symmetry \cite{GAZ}: $G_A^Z(Q^2) = -G_A(Q^2) + G_A^s(Q^2)$. In addition, we will construct the flavor non-singlet induced pseudoscalar form factor $G_P^8(Q^2)= G_P^{u+d}(Q^2) - 2 G_P^s(Q^2)$. Here, the light-quark contribution $G_P^{u+d}(Q^2)$ contains connected and disconnected quark contributions whereas the strange-quark contribution $G_P^s(Q^2)$ is solely disconnected. These quantities are of importance since it has been shown that the WNC $G_A^Z(Q^2)$ gives a dominant contribution to $\nu p$ and $\bar{\nu}p$ differential cross sections \cite{WNC}, while $G_P^8(Q^2)$ can be used to obtain the $\eta$-nucleon coupling $g_{\eta NN}$, if the $\eta$ decay constant $f_\eta^8$ is known \cite{JerGA}.
\section{Extracting form factors from Lattice QCD}\label{sec:method}
The starting point to extract form factors from Lattice QCD is the nucleon three-point function
\begin{equation}
C_{3,A_\mu}^N(\bm{q},z_0;\bm{p}^\prime,y_0;\Gamma_\nu) = \sum_{\bm{y},\bm{z}} e^{i\bm{q}\bm{z}}e^{-i\bm{p}^\prime\bm{y}}\ (\Gamma_\nu)_{\beta\alpha} \left\langle N_{\alpha}(\bm{y},y_0)A_\mu(\bm{z},z_0)\bar{N}_\beta(0)\right\rangle_G
\end{equation}
with a nucleon interpolator $N_\alpha(x)$, a flavor-diagonal axial vector current $A_\mu(x)$ and a projector $\Gamma_\nu$. For the projector we consider
\begin{equation}
\Gamma_0 = \frac{1}{2}(1+\gamma_0)\ ,\ \Gamma_i = \frac{1}{2}(1+\gamma_0)\ \mathrm{i}\gamma_5\gamma_i\ ,\ i\in\{1,2,3\}\ ,
\end{equation}
where $\Gamma_0$ projects the nucleon to the correct parity, and $\Gamma_i$ additionally polarizes the nucleon spin along the $i$-axis. Applying the spectral decomposition to the nucleon three-point function and only taking the ground-state into account, which means that $z_0,(y_0-z_0)\gg0$, one arrives at
\begin{align}
C_{3,A_\mu}^N(\bm{q},z_0;\bm{p}^\prime,y_0;\Gamma_\nu) &= f(\bm{p}^\prime,\bm{q},y_0,z_0)\ T\left(\widetilde{A}_\mu,\Gamma_\nu,\bm{q},\bm{p}^\prime\right)\ .
\end{align}
The function $f$ contains nucleon overlap factors, time dependencies and kinematic factors. To eliminate the first two, we construct a ratio of nucleon three-point and two-point functions \cite{ffinlqcd}
\begin{equation}
R_{A_\mu}(\bm{q},z_0;\bm{p}^\prime,y_0;\Gamma_\nu) = \frac{C_{3,A_\mu}^N(\bm{q},z_0;\bm{p}^\prime,y_0;\Gamma_\nu)}{C_2^N(\bm{p}^\prime,y_0;\Gamma_0)}\sqrt{\frac{C_2^N(\bm{p}^\prime,y_0;\Gamma_0)\ C_2^N(\bm{p}^\prime,z_0;\Gamma_0)\ C_2^N(\bm{p}^\prime\text{-}\bm{q},y_0\text{-}z_0;\Gamma_0)}{C_2^N(\bm{p}^\prime\text{-}\bm{q},y_0;\Gamma_0)\ C_2^N(\bm{p}^\prime\text{-}\bm{q},z_0;\Gamma_0)\ C_2^N(\bm{p}^\prime,y_0\text{-}z_0;\Gamma_0)}}\ ,
\label{eq:ratio}
\end{equation}
so that the spectral decomposition of the ratio yields for the ground-state
\begin{equation}
R_{A_\mu}(\bm{q};\bm{p}^\prime;\Gamma_\nu) = \frac{1}{4\sqrt{(E_{\bm{p}^\prime-\bm{q}}+m_N)(E_{\bm{p}^\prime}+m_N)E_{\bm{p}^\prime}E_{\bm{p}^\prime-\bm{q}}}}\ T\left(\widetilde{A}_\mu,\Gamma,\bm{q},\bm{p}^\prime\right)\ ,
\label{eq:sdrat}
\end{equation}
\begin{equation}
T\left(\widetilde{A}_\mu,\Gamma_\nu,\bm{q},\bm{p}^\prime\right) = \mathrm{tr}\left[ \Gamma_\nu \left( E_{\bm{p}^\prime}\gamma_0 -i\bm{p}^\prime\bm{\gamma} + m_N \right)\ \widetilde{A}_\mu(\bm{q})\ \left( E_{\bm{p}^\prime-\bm{q}}\gamma_0 -i(\bm{p}^\prime-\bm{q})\bm{\gamma} + m_N \right) \right]\ .
\end{equation}
The function $T$ can be calculated for all combinations of a component of the axial vector current $A_\mu(x)$ and a component of the projector $\Gamma_\nu$. Each combination leads to a kinematic prefactor for the axial and induced pseudoscalar form factor $M^A_{\nu\mu},\,M^P_{\nu\mu}$. \Eq{eq:sdrat} then takes the form
\begin{equation}
R_{A_\mu}(\bm{q};\bm{p}^\prime;\Gamma_\nu) = M_{\nu\mu}^A(\bm{q},\bm{p}^\prime)\ G_A(Q^2) + M_{\nu\mu}^P(\bm{q},\bm{p}^\prime)\ G_P(Q^2)\ .
\end{equation}
The individual prefactors can be grouped to form a matrix $M$ from all combinations of $\bm{q}$ and $\bm{p}^\prime$ that correspond to the same value of $Q^2$. Similarly, we can form a vector $\bm{R}$ from the data for the ratios. Now we can define the (generally overdetermined) system of equations
\begin{equation}
M\ \bm{G} = \bm{R},\ \ M = \left( \begin{array}{c}
M^A_1\\
\vdots\\
M^A_N\\
\end{array}\ \begin{array}{c}
M^P_1\\
\vdots\\
M^P_N\\
\end{array} \right),\ \ \bm{G} =
\left(
\begin{array}{c}
G_A(Q^2)\\
G_P(Q^2)\\
\end{array}
\right),\ \ \bm{R} =
\left(
\begin{array}{c}
R_1\\
\vdots\\
R_N\\
\end{array}
\right),
\label{eq:system}
\end{equation}
which connects our lattice results for the ratios on the right-hand side to the analytical expectation from the spectral decomposition on the left-hand side. It can be solved for the form factors by minimizing the least-squares function \cite{Capitani}
\begin{equation}
\chi^2 = (\bm{R}-M\bm{G})^T\ C^{-1}\ (\bm{R}-M\bm{G})\ ,
\end{equation}
where the covariance matrix $C$ is approximated from the lattice data of the ratios. Before we actually solve the system in \Eq{eq:system}, two steps are done to reduce the system size $N$ and increase the statistical precision. We first drop all non-contributing equations ($M^A = 0\ \&\ M^P = 0$) and then average equivalent contributions\footnote{Example of two equivalent contributions:\\
\begin{equation*}
\left.\begin{array}{c}
\Gamma_1,A_2,\bm{p}_a=\left(1\ 0\ 0\right)^T,\bm{q}_a=\left(0\ 1\ 0\right)^T,\bm{p}_a^\prime=\left(1\ 1\ 0\right)^T\\
\Gamma_3,A_2,\bm{p}_b=\left(0\ 0\ 1\right)^T,\bm{q}_b=\left(0\ 1\ 0\right)^T,\bm{p}_b^\prime=\left(0\ 1\ 1\right)^T
\end{array} \right\} \Rightarrow M_{12}^A(\bm{q}_a,\bm{p}_a^\prime) = M_{32}^A(\bm{q}_b,\bm{p}_b^\prime)\ \&\ M_{12}^P(\bm{q}_a,\bm{p}_a^\prime) = M_{32}^P(\bm{q}_b,\bm{p}_b^\prime)
\end{equation*}}.
For the number of independent equations over our considered range of $Q^2$ values we find: $N\in\left\{ 4, 5, 8, 9, 10, 11, 12, 13, 14, 18, 19, 21, 22, 25, 26, 28, 34 \right\}$. Note that we perform the averaging procedure already for the nucleon three-point functions, with the additional constraint that the momenta for the nucleon states at the source and the sink are related by spatial symmetry \cite{AV3pt}. Furthermore, we average the nucleon two-point functions over equivalent momentum classes. We then calculate the ratios from these averaged correlation functions. As the left-hand side of the system of equations corresponds to the ground-state contribution, we perform fits to the asymptotic behavior or employ the summation method (see e.g. \cite{ffinlqcd,gace}) to isolate the ground-state contribution, also in the lattice data, before solving for the form factors.
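As an illustration of this step, a minimal sketch of the generalized least-squares solve of \Eq{eq:system} is given below (Python/numpy; the variable names are ours, and the snippet is a schematic illustration rather than our analysis code):
\begin{verbatim}
import numpy as np

def solve_form_factors(M, R, C):
    # M: (N, 2) matrix of kinematic prefactors [M^A | M^P]
    # R: (N,)   measured ratios
    # C: (N, N) covariance matrix of R, estimated from the data
    Cinv = np.linalg.inv(C)
    A = M.T @ Cinv @ M              # 2x2 normal matrix
    b = M.T @ Cinv @ R
    cov_G = np.linalg.inv(A)        # covariance of the fit parameters
    G = cov_G @ b                   # minimizes (R-MG)^T C^{-1} (R-MG)
    return G, cov_G                 # G = (G_A, G_P)
\end{verbatim}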
\section{Simulation}\label{sec:simulation}
\subsection{Ensembles}
In this work we use CLS $N_f = 2+1$ O($a$)-improved Wilson fermion ensembles \cite{CLS}. The gauge sector is described by the tree-level improved L\"uscher-Weisz gauge action. These ensembles have open boundary conditions in time to prevent the problem of topological freezing, and approach the physical values of the quark masses along a $\mathrm{tr}\ M = \text{const}$ trajectory, where $M$ is the quark mass matrix. The subset of ensembles and configurations we have processed for this project so far is shown in \Tab{tab:ensembles}. We employ the improved local axial vector current
\begin{equation}
A_\mu^f(\bm{z},z_0)^{\text{Imp.}}=\bar{f}(\bm{z},z_0) \gamma_5\gamma_\mu f(\bm{z},z_0)+ac_A\ \partial_\mu \left(\bar{f}(\bm{z},z_0) \gamma_5 f(\bm{z},z_0)\right)\ ,
\end{equation}
where we distinguish between the light and the strange quarks, $f\in\left\{l,s\right\}$, as the up and down quarks are degenerate on our ensembles. A non-perturbative determination of the improvement coefficient $c_A$ has been done in \cite{alpha}. As motivated in the introduction, we focus on the disconnected contributions. For this we need the flavor-singlet renormalization constant $Z_A^0$, which has not been determined yet, and thus we present unrenormalized (bare) results in \Sec{sec:results}. The three-point function corresponding to the disconnected contribution factorizes into separate traces for the quark loop and the nucleon two-point function
\begin{equation}
C_{3,A_\mu}^{N,l/s}(\bm{q},z_0;\bm{p}^\prime,y_0;\Gamma_\nu) = \left\langle \mathcal{L}_{A_\mu}^{l/s}(\bm{q},z_0)\cdot \mathcal{C}_2^N(\bm{p}^\prime,y_0;\Gamma_\nu) \right\rangle_G\ .
\end{equation}
These are the two main building blocks, described in more detail in the next two subsections.
\renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{table}[h]
\center
\begin{tabular}{l|ccccccc}
 &$\beta$ &$a$ [fm] &$N_s^3\times N_t$ &$m_\pi$[MeV] &$m_K$[MeV] &$N_{\text{cfg}}$ &$N_{\text{meas}}$\\
\hline
H105 &3.40 &0.086 &$32^3\times 96$ &280 &460 &1020 &391680\\
\hline
N203 &3.55 &0.064 &$48^3\times 128$ &340 &440 &772 &345856\\
N200 &3.55 &0.064 &$48^3\times 128$ &280 &460 &856 &383488\\
D200 &3.55 &0.064 &$64^3\times 128$ &200 &480 &278 &124544
\end{tabular}
\caption{The processed gauge ensembles for this work. $N_{\text{cfg}}$ denotes the number of gauge configurations. The last column corresponds to the total number of measurements for the ratio in \Eq{eq:ratio}.}
\label{tab:ensembles}
\end{table}
\end{center}
\renewcommand{\arraystretch}{1}
\subsection{Nucleon two-point function}\label{sec:2pt}
The nucleon two-point function is given by
\begin{equation}
C_2^N(\bm{p}^\prime,y_0;\Gamma_\nu) = \sum_{\bm{y}\in\Lambda} e^{-i\bm{p}^\prime\bm{y}}\ (\Gamma_\nu)_{\beta\alpha} \left\langle N_{\alpha}(y)\bar{N}_\beta(0) \right\rangle\ ,
\end{equation}
\begin{equation}
N_{\alpha}(x) = \epsilon_{abc}\left( u_\beta^a(x)\ \left(C\gamma_5\right)_{\beta\gamma}\ d_\gamma^b(x) \right)\ u_\alpha^c(x)\ .
\end{equation}
All quark propagators have been Wuppertal smeared \cite{wuppertal} at the source and the sink. We employ the truncated solver method \cite{trunc1,trunc2} to increase the statistical precision of the nucleon two-point functions at moderate cost. The idea is to first obtain a biased estimate with a large number of low-precision solves for the quark propagator and then add a bias correction from a much smaller subset of high-precision solves. We placed the sources for the nucleon two-point functions on seven timeslices for each ensemble. The seven timeslices were evenly distributed around the middle of the time extent and separated from each other by seven timeslices on which no sources were placed. The number of high-precision solves on each timeslice was $N_{\text{src}}^{HP}=1$, except for H105, where we used $N_{\text{src}}^{HP} = 4$. For all ensembles, the number of low-precision solves on each timeslice was $N_{\text{src}}^{LP}=32$. Both the forward and the backward-propagating nucleon two-point functions from all source positions were included, except for the first (last) timeslice on H105, where we omitted the backward (forward) propagation. This avoids contamination by boundary effects, since H105 has a smaller temporal lattice extent than the other three ensembles.
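Schematically, the bias-corrected estimator of the truncated solver method can be written as follows (a sketch in Python; the array names are hypothetical):
\begin{verbatim}
import numpy as np

def truncated_solver_average(C2_LP, C2_HP, C2_LP_sub):
    # C2_LP:     low-precision measurements, shape (N_LP, n_t)
    # C2_HP:     high-precision measurements on a small subset, (N_HP, n_t)
    # C2_LP_sub: low-precision measurements on the same subset, (N_HP, n_t)
    biased = C2_LP.mean(axis=0)                    # cheap, biased estimate
    correction = (C2_HP - C2_LP_sub).mean(axis=0)  # bias correction
    return biased + correction
\end{verbatim}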
\subsection{Quark loop}
The calculation of the quark loop requires an all-to-all propagator, which can be stochastically estimated with noise vectors $\eta$
\begin{equation}
\mathcal{L}_{A_\mu}^{l/s}(\bm{q},z_0) = -\sum_{\bm{z}\in\Lambda} e^{i\bm{q}\cdot\bm{z}}\ \left<\mathrm{tr}\left[S^{l/s}(z;z)\ \gamma_5\gamma_\mu\right]\right>_G
= -\sum_{\bm{z}\in\Lambda} e^{i\bm{q}\cdot\bm{z}}\ \left<\eta^{\dagger}(z)\ \gamma_5\gamma_\mu\ s^{l/s}(z)\right>_{G,\eta}\ .
\end{equation}
Here we use hierarchical probing \cite{hp}, which augments the series of noise vectors $\eta_n$ by a set of Hadamard vectors $h_n$: each noise vector is multiplied element-wise with the Hadamard vectors to obtain an improved estimate of the quark loop. We employ four-dimensional noise and Hadamard vectors and use two independent noise vectors with 512 Hadamard vectors each. Thus, we perform a total of 1024 inversions per gauge configuration and flavor for the quark loop calculation.
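The structure of such an estimator is sketched below for a dense toy operator (Python; this simplified version only illustrates Hadamard-augmented noise and does not reproduce the lattice-distance ordering of full hierarchical probing \cite{hp}):
\begin{verbatim}
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def probed_trace(Dinv, Gamma, n_noise=2, n_had=8, seed=0):
    # stochastic estimate of tr[Gamma D^{-1}] with Z2 noise vectors
    # multiplied element-wise by Hadamard vectors
    rng = np.random.default_rng(seed)
    n, H = Dinv.shape[0], hadamard(n_had)
    est = 0.0
    for _ in range(n_noise):
        eta = rng.choice([-1.0, 1.0], size=n)
        for k in range(n_had):
            v = eta * np.resize(H[:, k], n)   # tile the Hadamard pattern
            est += v @ (Gamma @ (Dinv @ v))
    return est / (n_noise * n_had)
\end{verbatim}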
\section{Results}\label{sec:results}
In \Fig{fig:exstate} we show the strange axial form factor for a particular non-vanishing $Q^2$ as a function of the source-sink separation $y_0$ used for the plateau fit and also include a band visualising the summation method result. For both ensembles, excited-state contamination is visible, but we find agreement between the plateau fits and the summation method at large enough $y_0$. In the following, we only consider the summation method results in order to have tidier plots. The $Q^2$-dependence of the disconnected axial vector form factors for the light and the strange quarks on the ensemble with $a=0.086\,\text{fm}$ and $m_\pi=280\,\text{MeV}$ is shown in \Fig{fig:Q2dep}. The results for the induced pseudoscalar form factor have been multiplied by $(Q^2+m_\pi^2)$ in order to remove the pion pole. The curves are z-expansion fits to fifth order with Gaussian priors for all coefficients $a_k$ with $k\geq 2$. Both the axial and the induced pseudoscalar form factor are found to be non-vanishing and negative.
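For reference, the fit function and the prior-augmented $\chi^2$ have the following schematic form (Python sketch; the value of $t_{\text{cut}}$ and the prior width are placeholders, not the values used in our analysis):
\begin{verbatim}
import numpy as np

def z_of_Q2(Q2, tcut):
    # conformal variable of the z-expansion
    a, b = np.sqrt(tcut + Q2), np.sqrt(tcut)
    return (a - b) / (a + b)

def G_fit(Q2, coeffs, tcut):
    # G(Q^2) = sum_k a_k z^k; fifth order -> len(coeffs) == 6
    z = z_of_Q2(Q2, tcut)
    return sum(a_k * z**k for k, a_k in enumerate(coeffs))

def chi2_augmented(coeffs, Q2, G, dG, tcut, prior_width=5.0):
    # data chi^2 plus Gaussian priors on a_k for k >= 2
    chi2 = np.sum(((G - G_fit(Q2, coeffs, tcut)) / dG) ** 2)
    return chi2 + np.sum((np.asarray(coeffs[2:]) / prior_width) ** 2)
\end{verbatim}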
\begin{center}
\begin{figure}[h]
\begin{minipage}{\textwidth}
\includegraphics[scale=0.4225]{GAs_H105_y0.pdf}
\end{minipage}%
\begin{minipage}{\textwidth}
\hspace{-8cm}
\includegraphics[scale=0.4225]{GAs_N200_y0.pdf}
\end{minipage}
\caption{Comparison of plateau fits at different source-sink separations $y_0$ and the summation method for two ensembles at $m_\pi=280\,\text{MeV}$ (left: $a=0.086\,\text{fm}$, right: $a=0.064\,\text{fm}$).}
\label{fig:exstate}
\end{figure}
\end{center}
\vspace{-1.7cm}
\begin{center}
\begin{figure}[h]
\begin{minipage}{\textwidth}
\includegraphics[scale=0.4225]{GA_H105.pdf}
\end{minipage}%
\begin{minipage}{\textwidth}
\hspace{-8cm}
\includegraphics[scale=0.4225]{GP_H105.pdf}
\end{minipage}
\caption{The disconnected contribution of the light and strange quarks to the axial form factor (left) and the induced pseudoscalar form factor (right) for the ensemble with $a=0.086\,\text{fm}$ and $m_\pi=280\,\text{MeV}$.}
\label{fig:Q2dep}
\end{figure}
\end{center}
\vspace{-0.8cm}
Lastly, the pion mass dependence at fixed lattice spacing and the lattice spacing dependence at fixed pion mass of the strange axial vector form factors is illustrated (\Fig{fig:mpi_dep}). At this level of statistics, we find the strange axial vector form factors to depend only mildly on the pion mass and the lattice spacing. In future work, we will include more ensembles into this analysis and attempt a continuum extrapolation. Furthermore, the investigation of disconnected contributions to the electromagnetic form factors is planned.
\begin{center}
\begin{figure}[h]
\begin{minipage}{\textwidth}
\includegraphics[scale=0.4225]{GAs_mpidep.pdf}
\end{minipage}%
\begin{minipage}{\textwidth}
\hspace{-8cm}
\includegraphics[scale=0.4225]{GPs_mpidep.pdf}
\end{minipage}\\
\begin{minipage}{\textwidth}
\includegraphics[scale=0.4225]{GAs_adep.pdf}
\end{minipage}%
\begin{minipage}{\textwidth}
\hspace{-8cm}
\includegraphics[scale=0.4225]{GPs_adep.pdf}
\end{minipage}
\caption{Pion mass dependence at a lattice spacing of $a=0.064\,\text{fm}$ (top) and lattice spacing dependence at a pion mass of $m_\pi=280\,\text{MeV}$ (bottom) of the strange axial vector form factors.}
\label{fig:mpi_dep}
\end{figure}
\end{center}
\vspace{-1.4cm}
\section*{Acknowledgements}
This research is supported by the DFG through the SFB 1044. K.O. is supported in part by DFG grant HI 2048/1-1. Calculations for this project were partly performed on the HPC clusters ``Clover'' and ``HIMster II'' at the Helmholtz-Institut Mainz and ``Mogon II'' at JGU Mainz. Additional computer time has been allocated through projects HMZ21 and HMZ36 on the BlueGene supercomputer system ``JUQUEEN'' at NIC, J\"ulich. Our programmes use the QDP++ library \cite{QDPpp} and the deflated SAP+GCR solver from the openQCD package \cite{openQCD}, while the contractions have been explicitly checked using \cite{QCT}. We are grateful to our colleagues in the CLS initiative for sharing ensembles.
\section{Introduction}
\label{Introduction}
Neutrino oscillation experiments have shown that neutrinos are massive particles with at least two
squared-mass differences:
$
\Delta{m}^{2}_{\text{SOL}}
\simeq 8 \times 10^{-5} \, \text{eV}^{2}
$,
measured in solar and very-long-baseline reactor neutrino experiments,
and
$
\Delta{m}^{2}_{\text{ATM}}
\simeq 2 \times 10^{-3} \, \text{eV}^{2}
$,
measured in atmospheric and long-baseline neutrino experiments
(see Refs.~\cite{hep-ph/9812360,hep-ph/0202058,hep-ph/0310238,hep-ph/0405172,hep-ph/0506083,hep-ph/0606054,Giunti-Kim-2007,GonzalezGarcia:2007ib,0805.2517,0808.2016}).
These two $\Delta{m}^{2}$'s are perfectly accommodated in the framework of
three-neutrino mixing, where there are two independent squared-mass differences.
However, there are experimental anomalies which may indicate the existence of
Short-BaseLine (SBL) or
Very-Short-BaseLine (VSBL) oscillations generated by a third
$\Delta{m}^{2}$ which is much larger than the other two:
$ \Delta{m}^{2}_{\text{SBL}} \gtrsim 10^{-1} \, \text{eV}^2 $
or
$ \Delta{m}^{2}_{\text{VSBL}} \gtrsim 10 \, \text{eV}^2 $.
Among these anomalies,
the most well-known is the LSND signal in favor of SBL $\bar\nu_{\mu}\to\bar\nu_{e}$ oscillations \cite{hep-ex/0104049},
which has not been confirmed by other experiments
and is currently disfavored by the negative results of
KARMEN \cite{hep-ex/0203021}
and
MiniBooNE \cite{AguilarArevalo:2008rc}.
Less well-known are the Gallium radioactive source experiments anomaly \cite{Abdurashitov:2009tn}
and
the MiniBooNE low-energy anomaly \cite{AguilarArevalo:2008rc},
which could be explained by
SBL \cite{Giunti:2006bj,Acero:2007su}
or
VSBL \cite{Giunti:2007xv,Giunti:2009zz} $\nu_e$ disappearance.
The existence of a third $\Delta{m}^{2}$ requires the existence of at least a fourth massive neutrino
which corresponds,
in the flavor basis,
to the existence of a sterile neutrino $\nu_{s}$,
{\it i.e.}, a fermion which is a singlet under the Standard Model symmetries.
Hence it is electrically neutral and does not take part in weak interactions.
If the three active neutrinos
$\nu_{e}$,
$\nu_{\mu}$, and
$\nu_{\tau}$
are mixed with the sterile neutrino,
neutrino oscillation experiments can observe the disappearance of active neutrinos
into $\nu_{s}$.
In light of the above-mentioned anomalies,
it is interesting to investigate the possibility of (V)SBL $\nu_{e}$ disappearance
with future high-precision experiments.
In general,
it is important to investigate the possibility of $\nu_{e}$ disappearance
generated by a $\Delta{m}^{2}$ different from
$\Delta{m}^{2}_{\text{SOL}}$
and
$\Delta{m}^{2}_{\text{ATM}}$
in order to constrain schemes with mixing of four
(see Refs.~\cite{hep-ph/9812360,hep-ph/0405172,hep-ph/0606054,GonzalezGarcia:2007ib})
or more
\cite{hep-ph/0305255,0906.1997}
massive neutrinos.
These schemes have been studied mostly in connection with the LSND anomaly,
but the latest global fits of the experimental data, including the LSND signal,
are not good
\cite{hep-ph/0405172,GonzalezGarcia:2007ib}.
However, the schemes with mixing of more than three neutrinos
may be realized in nature independently of the LSND signal.
Hence, it is important to investigate the phenomenology of
sterile neutrinos with an open mind,
not only through neutrino oscillations
\cite{hep-ph/0609177,hep-ph/0611178,hep-ex/0701004,0704.0388,0705.0107,0706.1462,0707.2481,0710.2985,0907.3145},
but also by studying their effects in
astrophysics
\cite{0706.0399,0709.1937,0710.5180,0712.1816,0805.4014,0806.3029}
and cosmology
\cite{0711.2450,0810.5133,0812.2249}.
If there is (V)SBL electron neutrino disappearance,
it must be mainly into sterile neutrinos,
because the mixing of the three active neutrinos with the fourth massive neutrino
must be small in order to fit the data on $\nu_{e}\to\nu_{\mu,\tau}$ oscillations
generated by
$\Delta{m}^{2}_{\text{SOL}}$
and the data on $\nu_{\mu}\to\nu_{\tau}$ oscillations
generated by
$\Delta{m}^{2}_{\text{ATM}}$.
In the 3+1 four-neutrino schemes
(see Refs.~\cite{hep-ph/9812360,hep-ph/0405172,hep-ph/0606054,GonzalezGarcia:2007ib})
with
$ \Delta{m}^{2}_{\text{(V)SBL}} = |\Delta{m}^2_{41}| \gg \Delta{m}^{2}_{\text{ATM}} = |\Delta{m}^2_{31}| \gg \Delta{m}^{2}_{\text{SOL}} = |\Delta{m}^2_{21}| $,
where
$ \Delta{m}^2_{kj} \equiv m_{k}^2 - m_{j}^2 $,
the mixing matrix $U$
must be such that
$ |U_{e4}|, |U_{\mu4}|, |U_{\tau4}| \ll 1 $
and
$ |U_{s4}| \simeq 1 $.
Therefore,
the amplitudes of the (V)SBL oscillation channels,
$ A_{\alpha\beta} = 4 | U_{\alpha4} |^2 | U_{\beta4} |^2 $
for $\alpha\neq\beta$,
are such that
$ A_{ab} \ll A_{as} $
for
$a,b=e,\mu,\tau$.
In this paper we study the sensitivity of neutrino factory experiments
to (V)SBL $\nu_{e}$ and $\bar\nu_{e}$ disappearance,
which in practice has been investigated so far mainly through
SBL reactor neutrino experiments ($\bar\nu_{e}$ disappearance).
We will first study,
in Section~\ref{disappearance},
(V)SBL $\nu_{e}$ and $\bar\nu_{e}$ disappearance
at a neutrino factory assuming exact CPT symmetry,
which implies
$ P_{ee} = P_{\bar e \bar e} $
(see Ref.~\cite{Giunti-Kim-2007}),
considering the simplest case of effective two-neutrino mixing
with
\begin{equation}
P_{ee} = P_{\bar e \bar e}
=
1 - \sin^2 (2 \theta) \, \sin^2 \left( \frac{\Delta m^2 L}{4 E} \right)
\,,
\label{pee-cpt}
\end{equation}
where,
from now on,
$ \Delta{m}^{2} = \Delta{m}^{2}_{\text{(V)SBL}} $.
This is the case of four-neutrino mixing schemes with
$ \Delta{m}^{2} = |\Delta{m}^2_{41}| \gg \Delta{m}^{2}_{\text{ATM}} = |\Delta{m}^2_{31}| \gg \Delta{m}^{2}_{\text{SOL}} = |\Delta{m}^2_{21}| $.
In the 3+1 schemes,
the amplitude of the oscillations is related to the $U_{e4}$ element of the mixing matrix by
$ \sin^2 (2 \theta) = 4 |U_{e4}|^2 \left( 1 - |U_{e4}|^2 \right) $
(see Refs.~\cite{hep-ph/9812360,hep-ph/0405172,hep-ph/0606054,GonzalezGarcia:2007ib}).
The CPT symmetry is widely believed to be exact,
because it is a fundamental symmetry of local relativistic Quantum Field Theory
(see Ref.~\cite{hep-ph/0309309}).
However,
in recent years studies of extensions of the Standard Model
have shown that it is possible to have violations of the Lorentz
and CPT symmetries (see Refs.~\cite{hep-ph/0201258,hep-ph/0203261,0801.0287})
and
several phenomenological studies of neutrino oscillations with different
masses and mixing for neutrinos and antineutrinos
appeared in the literature
\cite{hep-ph/0010178,hep-ph/0108199,hep-ph/0112226,hep-ph/0201080,hep-ph/0201134,hep-ph/0201211,hep-ph/0307127,hep-ph/0308299,hep-ph/0505133,hep-ph/0306226,0804.2820,0903.4318}.
We will consider this scenario in the simplest case of effective two-neutrino mixing
with
\begin{eqnarray}
P_{ee} & = & 1 - \sin^2 (2 \theta_\nu) \, \sin^2 \left( \frac{\Delta m_\nu^2 L}{4 E} \right)
\,,
\label{equ:pelel}
\\
P_{\bar e \bar e} & = & 1 - \sin^2 (2 \theta_{\bar \nu}) \, \sin^2 \left( \frac{\Delta m_{\bar \nu}^2 L}{4 E} \right)
\,.
\label{equ:paeae}
\end{eqnarray}
This kind of CPT violation in a four-neutrino mixing scheme could reconcile the LSND signal with the other neutrino oscillation data
\cite{hep-ph/0308299}
and/or
could explain
the Gallium radioactive source experiments anomaly
and
the MiniBooNE low-energy anomaly
together with the absence of $\bar\nu_{e}$ disappearance in
reactor neutrino experiments \cite{Giunti:2009zz}.
Let us emphasize that the reconciliation of the LSND anomaly with the results of other neutrino oscillation experiments
is not possible in three-neutrino mixing schemes even if CPT violation is allowed
\cite{hep-ph/0306226,GonzalezGarcia:2007ib}.
Another hint in favor of a possible CPT violation comes from the
recent measurement of $\nu_{\mu}$ and $\bar\nu_{\mu}$ disappearance in the MINOS experiment
\cite{Evans-HEP2009},
which indicate different best-fit values of the oscillation parameters of
$\nu_{\mu}$ and $\bar\nu_{\mu}$:
$ \Delta\overline{m}^2_{\text{MINOS}} \simeq 2 \times 10^{-2} \, \text{eV}^2 $
and
$ \sin^2\overline{\theta}_{\text{MINOS}} \simeq 0.6 $
for $\bar\nu_{\mu}$'s,
whereas
$ \Delta{m}^2_{\text{MINOS}} \simeq 2.4 \times 10^{-3} \, \text{eV}^2 $
and
$ \sin^2\theta_{\text{MINOS}} \simeq 1 $
for $\nu_{\mu}$'s.
The best-fit values and allowed region of the $\nu_{\mu}$ oscillation parameters
are in agreement with atmospheric $\nu_{\mu}\to\nu_{\tau}$ oscillations.
Since the 90\% C.L. allowed region of the $\bar\nu_{\mu}$ oscillation parameters
has a marginal overlap with the much smaller 90\% C.L. of the $\nu_{\mu}$ oscillation parameters
(see the figure in page 11 of Ref.~\cite{Evans-HEP2009}),
the MINOS hint in favor of CPT violation is rather speculative.
Nevertheless,
it is interesting to notice that
a global separate analysis of neutrino and antineutrino data
in the framework of three-neutrino mixing with CPT violation
leads to different best-fit values of the oscillation parameters
of neutrinos and antineutrinos
with
$ \Delta\overline{m}^2_{\text{ATM}} \simeq \Delta\overline{m}^2_{\text{MINOS}} $
and
$ \sin^2\overline{\theta}_{\text{ATM}} \simeq \sin^2\overline{\theta}_{\text{MINOS}} $,
whereas
$ \Delta{m}^2_{\text{ATM}} \simeq \Delta{m}^2_{\text{MINOS}} $
and
$ \sin^2\theta_{\text{ATM}} \simeq \sin^2\theta_{\text{MINOS}} $
\cite{0908.2993}.
However,
in this paper we do not consider the MINOS hint in favor of CPT violation.
We concentrate our study on possible CPT violations in (V)SBL
$\nu_{e}$ and $\bar\nu_{e}$
disappearance due to squared-mass differences larger than about
$ 0.1 \, \text{eV}^2 $.
Besides those in Eqs.~(\ref{equ:pelel}) and (\ref{equ:paeae}),
it is possible to consider other, more complicated, expressions for
$P_{ee}$ and $P_{\bar e \bar e}$,
with additional energy-dependent terms in the oscillation phases which could be generated by
modified dispersion relations that are different for neutrinos and antineutrinos
(see, for example, Refs.~\cite{hep-ph/0309025,hep-ph/0506091,hep-ph/0606154,Hollenberg:2009tr,Esposito:2009ca,0907.1979}).
However, the introduction of more unknown parameters would make the analysis too cumbersome,
without much additional information on the potentiality of a neutrino factory experiment to test CPT invariance.
In fact, it is plausible that the additional energy-dependent terms in the oscillation phases
generate spectral distortions which would make the identification of new physics even easier
than in the simplest case that we consider.
In order to test CPT invariance (or {\em small} deviations from it) explicitly, it is convenient to define the averaged neutrino oscillation parameters
\begin{equation}
\theta \equiv \frac{1}{2} \left( \theta_\nu + \theta_{\bar \nu} \right) \, , \quad \Delta m^2 \equiv \frac{1}{2} \left( \Delta m_\nu^2 + \Delta m_{\bar \nu}^2 \right)
\,,
\end{equation}
together with the CPT asymmetries
\begin{equation}
a_{\mathrm{CPT}} \equiv \frac{\theta_\nu - \theta_{\bar \nu}}{\theta_\nu + \theta_{\bar \nu}} \, , \quad
m_{\mathrm{CPT}} \equiv \frac{\Delta m_\nu^2 - \Delta m_{\bar\nu}^2}{\Delta m_\nu^2 + \Delta m_{\bar\nu}^2} \, ,
\label{asy}
\end{equation}
which are constrained in the range between $-1$ and $1$.
Then we have
\begin{eqnarray}
\theta_\nu & = & (1 + a_{\mathrm{CPT}}) \, \theta \, ,
\label{p1}
\\
\theta_{\bar \nu} & = & (1 - a_{\mathrm{CPT}}) \, \theta \, ,
\label{p2}
\\
\Delta m^2_\nu & = & (1 + m_{\mathrm{CPT}}) \, \Delta m^2 \, ,
\label{p3}
\\
\Delta m^2_{\bar \nu} & = & (1 - m_{\mathrm{CPT}}) \, \Delta m^2 \, .
\label{p4}
\end{eqnarray}
The limit of CPT invariance (Eq.~(\ref{pee-cpt})) corresponds to $a_{\mathrm{CPT}} = m_{\mathrm{CPT}} = 0$.
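For later reference, the parametrization in Eqs.~(\ref{p1})--(\ref{p4}) and the survival probabilities in Eqs.~(\ref{equ:pelel}) and (\ref{equ:paeae}) translate into the following short sketch (Python; an illustration only, not our simulation code):
\begin{verbatim}
import numpy as np

def nu_nubar_params(theta, dm2, a_cpt, m_cpt):
    # Eqs. (p1)-(p4): averaged parameters -> neutrino/antineutrino ones
    return ((1 + a_cpt) * theta, (1 + m_cpt) * dm2,
            (1 - a_cpt) * theta, (1 - m_cpt) * dm2)

def p_surv(E_GeV, L_km, theta, dm2_eV2):
    # two-flavor survival probability; 1.267 converts
    # Dm^2 L / (4E) to units of eV^2, km and GeV
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - np.sin(2.0 * theta) ** 2 * np.sin(phase) ** 2
\end{verbatim}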
In Section~\ref{cpt} we discuss the potentiality of neutrino factory experiments to discover
$ a_{\mathrm{CPT}} \neq 0 $
and/or
$ m_{\mathrm{CPT}} \neq 0 $.
The plan of the paper is as follows:
in Section~\ref{ideal}
we define an ``ideal detector'' for the measurement of (V)SBL $\nu_{e}$ and $\bar\nu_{e}$ disappearance
at a neutrino factory, and we describe our treatment of geometric effects;
in Section~\ref{systematics}
we discuss the requirements for systematics;
in Section~\ref{disappearance} we discuss the sensitivity to (V)SBL $\nu_{e}$ and $\bar\nu_{e}$ disappearance
assuming CPT invariance,
with the survival probability in Eq.~(\ref{pee-cpt});
in Section~\ref{cpt} we discuss the sensitivity to CPT violation
considering the survival probabilities in Eqs.~(\ref{equ:pelel}) and (\ref{equ:paeae});
conclusions are presented in the final Section~\ref{conclusions}.
\section{Ideal detector and geometric effects}
\label{ideal}
\begin{figure}[t]
\begin{center}
\includegraphics[width=13cm]{ring}
\end{center}
\mycaption{\label{fig:ring} Geometry of the decay ring (not to scale). Two possible detector
locations are shown at $d=50 \, \mathrm{m}$ and $d=2 \, 000 \, \mathrm{m}$, where $d$ is the distance to the end of the decay straight. The baseline $L$ is the distance between production point and detector.}
\end{figure}
Our neutrino factory geometry is based on the International Design Study for the Neutrino Factory (IDS-NF) baseline setup~\cite{ids},
with the geometry illustrated in \figu{ring}.
We consider $2.5 \times 10^{20}$ useful muon decays per polarity and year, with muon energy $E_\mu=25 \, \mathrm{GeV}$.
For the total running time, we consider ten years.
In order to test SBL $\nu_e$ disappearance, we add detectors in front of the decay straights as illustrated in \figu{ring}.
Here ``near'' and ``far detectors'' refer to SBL $\nu_e$ disappearance only,
whereas the detectors for standard oscillations are much farther away and not relevant for our problem.
The straight sections are anticipated to be about
$s=600 \, \mathrm{m}$ long. The distance $d$ is the distance between the
end of the decay straight and the near detector. The baseline $L$ is the
distance between the production point and the near detector, {\it i.e.}, $d \le L \le d+s$. Since the $\mu^+$ and $\mu^-$ are assumed to circulate in different directions in the ring, we need pairs of detectors in front of the straights because we want to test CPT invariance.\footnote{Without the CPT invariance test, detectors in front of one straight would be sufficient; the detectors in front of the other straight would then only increase statistics.}
Since there are no specifications for near detectors at a neutrino factory yet (see Ref.~\cite{Abe:2007bi} for a generic discussion), we turn the argument around and formulate the requirements for the detectors for this measurement.
Our detectors are assumed to measure the total charged current rates with a 100\% detection efficiency; a lower efficiency will simply lead to a re-scaling of statistics and can be easily compensated by a larger detector mass.
The energy threshold is chosen to be $500 \, \mathrm{MeV}$,
similar to a Totally Active Scintillator Detector or an iron calorimeter,
and the energy resolution is taken as
\begin{equation}
\Delta E = \varepsilon \, \sqrt{ \frac{ E }{ E_{0} } }
\,,
\label{resolution}
\end{equation}
with $ \varepsilon = 0.55 \, \mathrm{GeV} $ and $ E_{0} = 1 \, \mathrm{GeV} $,
which is a conservative estimate for a magnetized iron calorimeter~\cite{ids}. Similarly, we assume that the neutral current level can be controlled at the level of $10^{-3}$ from all neutrinos in the beam (see, {\it e.g.}, Refs.~\cite{Geer:2007kn,Bross:2007ts} in the context of a low energy neutrino factory).
However, we have tested that the results do not strongly depend on these three quantities.
We require an excellent flavor identification (at the level of $10^{-3}$ for the misidentification, as we will see later).
Charge identification is also desirable in order to reduce the contamination of the $\nu_{e}$ (or $\bar\nu_{e}$) signal by $\bar\nu_{e}$ (or $\nu_{e}$)
generated by possible (V)SBL $\bar\nu_\mu\to\bar\nu_e$ (or $\nu_\mu\to\nu_e$) oscillations.
However, we do not consider the backgrounds from charge misidentification explicitly.\footnote{The level of contamination depends on the oscillation model. Even for large mixing angles driving these oscillations
of the potential background, a charge misidentification level of about $10^{-3}$ would be sufficient.}
For the binning, we use 17~bins between $0.5$ and $25 \, \mathrm{GeV}$ with a bin size of $0.5 \, \mathrm{GeV}$ (1 bin) -- $1 \, \mathrm{GeV}$ (9 bins) -- $2 \, \mathrm{GeV}$ (5 bins) -- $2.5 \, \mathrm{GeV}$ (2 bins).
As the main obstacles for the physics potential, we have identified the extension of the decay straights and the impact of systematics. We discuss the first issue below, and the second issue in the next section. Thereby, we define our ``ideal detectors'' as detectors with the above properties, but no backgrounds and systematics.
Our geometric treatment of the near detectors is based on Ref.~\cite{Tang:2009na},
which discusses the flux at near detectors in detail. Here we start from the differential event rate from a point source $dN_{\mathrm{PS}}/dE$ without oscillations.
Taking into account the extension of the straight and the geometry of the detector,
the averaged differential event rate is given by\footnote{Note that as a peculiarity compared to Ref.~\cite{Tang:2009na}, $dN_{\mathrm{PS}}/dE$ uses the unoscillated event rate, because the oscillation probability has to be integrated over.}
\begin{equation}
\frac{dN_{\mathrm{avg}}}{dE} = \frac{1}{s} \int\limits_{d}^{d+s}\frac{dN}{dE} dL = \frac{1}{s} \int\limits_{d}^{d+s}\frac{dN_{\mathrm{PS}}(L,E)}{dE} \, \varepsilon(L,E) \, P_{ee}(L,E) dL \, .
\end{equation}
Here $\varepsilon(L,E)=A_{\mathrm{eff}}/A_{\mathrm{Det}}$
parameterizes the integration over the detector geometry for a fixed baseline $L$ and given energy $E$
($A_{\mathrm{Det}}$ is the surface area of the detector
and $A_{\mathrm{eff}}$ is the effective surface area which takes into account the
angular dependence of the neutrino flux).
Since $dN_{\mathrm{PS}}/dE \propto 1/L^2$,
we can re-write this as
\begin{equation}
\frac{dN_{\mathrm{avg}}}{dE} = \frac{dN_{\mathrm{PS}}(L_{\mathrm{eff}},E)}{dE} \frac{L_{\mathrm{eff}}^2}{s} \int\limits_{d}^{d+s} \frac{ \varepsilon(L,E)}{L^2} \, P_{ee}(L,E) dL = \frac{dN_{\mathrm{PS}}(L_{\mathrm{eff}},E)}{dE}
\, \hat P(E)
\,,
\label{equ:eavg}
\end{equation}
with the average efficiency ratio times probability\footnote{Note that \equ{eavg}
implies that in GLoBES a point source spectrum at the effective baseline $L_{\mathrm{eff}}$ can be used,
which has to be corrected by \equ{peff}. We perform \equ{peff} directly in the probability engine.}
\begin{equation}
\hat P(E) \equiv \frac{L_{\mathrm{eff}}^2}{s} \int\limits_{d}^{d+s} \frac{ \varepsilon(L,E)}{L^2} \, P_{ee}(L,E) dL
\,,
\label{equ:peff}
\end{equation}
and the effective baseline
\begin{equation}
L_{\mathrm{eff}}=\sqrt{d (d+s) } \, ,
\label{effective-baseline}
\end{equation}
such that $\hat P(E) = 1$ for $\varepsilon(L,E) \equiv P_{ee}(L,E)\equiv 1$.
We assume $\varepsilon(L,E) \equiv 1$ (far distance approximation), which, to a good
approximation, is satisfied for ND4 of Ref.~\cite{Tang:2009na} (see Fig.~4 therein) for $d \gtrsim 50 \, \mathrm{m}$.
This detector is very small (200~kg) but still provides a sufficient event rate. At a neutrino factory, the active volumes of near detectors are probably going to be rather small, because high granularity and good track reconstruction will be more important than the active volume size~\cite{Abe:2007bi}.
Our ``ideal'' test detectors therefore have 200~kg fiducial volume at very short distances. One can, for longer baselines, up-scale the detector mass as
\begin{equation}
m_{\mathrm{Det}} \simeq \frac{d \times (d+600 \, \mathrm{m})}{50 \, \mathrm{m} \times 650 \, \mathrm{m}} \, 0.2 \, \mathrm{t}
\label{equ:upscale}
\end{equation}
without strong geometric effects from the effective area of the detector ({\it i.e.}, one still operates in the far distance limit). However, one may choose a different technology for these larger detectors.
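As a numerical illustration of \equ{peff}, the effective baseline in Eq.~(\ref{effective-baseline}) and the mass scaling in \equ{upscale}, consider the following sketch (Python; the survival probability $P(E,L)$ is assumed to be supplied as a function of energy and baseline, and the snippet is not part of our GLoBES implementation):
\begin{verbatim}
import numpy as np

def p_hat(E, d, s, p_surv, n=2000):
    # average of P_ee over the decay straight for eps(L,E) = 1,
    # normalized with the effective baseline L_eff = sqrt(d(d+s));
    # d and s in meters, p_surv(E, L_km) takes the baseline in km
    L = np.linspace(d, d + s, n)
    L_eff = np.sqrt(d * (d + s))
    f = p_surv(E, L / 1e3) / L**2
    integral = (L[1] - L[0]) * 0.5 * (f[:-1] + f[1:]).sum()
    return L_eff**2 / s * integral

def far_detector_mass(d, s=600.0):
    # up-scaling of the 200 kg reference mass (see text):
    # keeps the event rate roughly constant with distance
    return 0.2 * d * (d + s) / (50.0 * 650.0)

# far_detector_mass(2000.0) -> 32 t, the far-detector mass used here
\end{verbatim}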
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{effectstraight2}
\end{center}
\mycaption{\label{fig:es} Exclusion limit for several near detector distances $d$ and our ideal near detectors (CPT invariance assumed; 90\% CL, 2 d.o.f.; two near detectors in front of straights). The dashed curves illustrate the effect of including the averaging over the decay straight,
whereas the solid curves are without this averaging. The fiducial detector masses are fixed to 200~kg. Note that there is no systematics included in this figure. }
\end{figure}
For our simulation, we use the GLoBES software~\cite{Huber:2004ka,Huber:2007ji}. We define the exclusion limit as a function of $\sin^2 2 \theta$ and $\Delta m^2$ as the excluded region obtained in a $\chi^2$ analysis assuming a vanishing true value of $\theta$ ({\it i.e.}, no oscillations).
In \figu{es}, we show this exclusion limit for several near detector distances including the effects of averaging over the decay straight (dashed curves) and without averaging (solid curves). This figure is based on our ``ideal'' detectors without taking into account systematics yet.
Obviously, the optimal detector locations depend on the region of sensitivity of $\Delta m^2$ which is of interest: the smaller $\Delta m^2$, the longer the baseline.
For instance, for $\Delta m^2 \simeq 1 \, \mathrm{eV}^2$, best sensitivity is obtained for $d \simeq 20 \, \mathrm{km}$, whereas for $\Delta m^2 \simeq 100 \, \mathrm{eV}^2$, a distance of the order $d=100 \, \mathrm{m}$ is optimal. For short distances $d$ up to a few hundred meters, there is clearly an effect of the averaging over the decay straight. However, note that because of the $1/L^2$ weighting in \equ{eavg}, the effect becomes negligible for $d \gtrsim 1 \, \mathrm{km}$.
Compared to a classical beam dump experiment, one cannot get arbitrarily close to the source without losing information.
In the next section, we will discuss the requirements for systematics.
We have also tested a low energy neutrino factory for this measurement, with similar success. However, in the absence of official numbers for the storage ring geometry and systematics, we will not discuss it in greater detail. In addition, note that the absolute performance is not {\em a priori} better than for a higher energy neutrino factory. For instance, assume that the distance $d$ is fixed for geometry reasons. Then the oscillation effect is, to a first approximation, proportional to $1/E^2$ (with $E$ the peak energy of the spectrum), but the statistics roughly increases as $E^3$ ($E^2$ from the beam collimation and $E$ from the cross sections), which means that the net effect is proportional to $E_\mu$. We observed this behavior in our simulation.
\section{Requirements for systematics}
\label{systematics}
As far as systematics is concerned, it is well known from reactor experiments, such as
Double Chooz \cite{hep-ex/0606025} and Daya Bay \cite{hep-ex/0701029}, that electron neutrino disappearance is
most affected by the signal normalization uncertainty (see, {\it e.g.}, Refs.~\cite{Huber:2003pm,Huber:2006vr}).
We expect the same for our measurement. However, compared to
reactor experiments, our signal normalization error does not mainly come from the knowledge on the flux, which
we may know to the level of 0.1\% using various beam monitoring devices~\cite{Abe:2007bi},
but from the knowledge of the cross sections. Because our neutrino energies span the
cross section regimes from quasi-elastic scattering, over resonant pion production,
to deep inelastic scattering, it is not {\em a priori} simple to estimate the
accuracy of the cross sections knowledge at the time of the measurement. For reactor experiments,
on the other hand, the inverse beta decay cross sections are well known. Note that
Ref.~\cite{0907.3145} also uses this well-understood detection reaction for a low-gamma beta beam,
whereas we will use a completely orthogonal strategy.
\begin{figure}[tp]
\begin{center}
\includegraphics[width=\textwidth]{sys2}
\end{center}
\mycaption{\label{fig:sys} The effect of different (hypothetical) systematical errors: a signal normalization error of 2.5\% and an additional spectral tilt error of 2.5\% have been applied to the exclusion limit for two different detector distances $d$ (90\% CL, 2 d.o.f.). The dashed curves refer to our ideal detectors, the solid curves include systematics.
Here the fiducial mass is fixed to 200~kg, the effect of averaging over the decay straight is taken into account. Here CPT invariance is assumed.}
\end{figure}
Let us first of all illustrate what the main requirements for systematics are. As indicated above, we have tested in \figu{sys} the impact of a signal normalization error and an additional tilt error (tilting the shape of the spectrum). Although the errors are assumed to be rather optimistic (2.5\%), there is a significant impact on the sensitivities at all baselines, as we expected.
Off the oscillation maxima, as visible in the right panel at large values of $\Delta m^2$ where $ P_{ee} \simeq 0.5 \, \sin^2 2 \theta $, the signal normalization error $\sigma_{\mathrm{Norm}}$ directly limits the sensitivity to $\sin^2 2 \theta \simeq 2 \, \sqrt{2.3} \, \sigma_{\mathrm{Norm}} \simeq 0.076 $ at $1 \sigma$ (2.3 is the $\Delta\chi^2$ corresponding to $1 \sigma$ for two degrees of freedom).
The tilt error tilts the spectrum linearly, and is a first order approximation for a shape error. It is especially important where the spectral information leads to a good sensitivity, in particular, for the shorter baselines (left panel). However, note that this (linear) tilt error cannot fully take into account the uncertainties in the cross sections, because the actual deviation may be non-linear.
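The normalization-limited sensitivity quoted above follows from simple error propagation in the averaged-out regime, as the following short check shows (Python):
\begin{verbatim}
import numpy as np
# averaged-out regime: P_ee ~ 1 - 0.5 sin^2(2 theta), so a fully
# correlated normalization error sigma_norm = 2.5% limits the
# sensitivity to sin^2(2 theta) ~ 2 sqrt(2.3) sigma_norm
print(2.0 * np.sqrt(2.3) * 0.025)   # ~0.076, as quoted in the text
\end{verbatim}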
We have also tested the impact of backgrounds, energy resolution and energy threshold. The most important of these three systematics is the background, where the sensitivity is basically limited by the product of background level and background uncertainty. Even for large uncertainties of the background, such as 20\%, this product limits the sensitivity to about $0.001 \times 20\% \simeq 10^{-4}$, which is beyond our expectations in the presence of a normalization uncertainty.
In summary, the signal normalization and shape have to be either very well known, or very well measured.
The first requirement means that one needs very refined theoretical models for the cross sections, the second possibility means that one needs to measure the cross sections very well.
We follow the second approach by considering a setup with two sets of detectors
({\it cf.}, \figu{ring}):
\begin{enumerate}
\item
Near detectors at $d=50 \, \mathrm{m}$ with $m_{\mathrm{Det}}=200 \, \mathrm{kg}$.
\item
Far detectors at $d=2 \, 000 \, \mathrm{m}$ with $m_{\mathrm{Det}}=32 \, \mathrm{t}$.
\end{enumerate}
The signal measured with the near detectors fixes the normalization and shape of the unoscillated signal
(for small enough $\Delta m^2$).
The far detectors are up-scaled versions of the near detectors following \equ{upscale}, which means that
geometric effects are almost negligible.
The near detectors have optimal sensitivity at a few hundred $\mathrm{eV}^2$ (VSBL),
whereas the far detectors have optimal sensitivity at a few $\mathrm{eV}^2$ (SBL).
Note that longer baselines may be even better for the far detectors, but then the depth difference between storage ring and detectors may become unrealistically large. On the other hand, for distances much shorter than 2~km, one significantly loses sensitivity for small $\Delta m^2$.
For systematics, we adopt the most conservative point of view, {\it i.e.}, we assume that we hardly know anything about the cross sections, neither the normalization nor the shape, but that the cross sections are fully correlated among all detectors measuring the cross sections. Such an error is often called ``shape error'' and is uncorrelated among the bins.
In summary, we include the following systematical errors similar to the reactor experiments in Ref.~\cite{Huber:2006vr}, and we have tested their impact (we have switched off systematical errors to test their impact):
\begin{description}
\item[Shape errors] uncorrelated among bins and $\nu$-$\bar \nu$, but fully correlated among the detectors. These errors include cross section errors, scintillator or detector material properties, {\it etc.}. In addition, flux errors can be included here (the detectors only measure the product of flux and cross section for the disappearance channel). We estimate this error to be 10\%. However, even a larger error does not matter if both near and far detectors are present, but only errors considerably smaller than $10^{-3}$ improve the result significantly (which is absolutely unrealistic for this type of systematics).
\item[Normalization errors] uncorrelated between the near and far detectors. These relative normalization errors
come from the knowledge on fiducial mass, detector normalization, and analysis cuts (uncorrelated between the detectors). They are typically small if similar detectors are used. For reactor experiments (Double Chooz \cite{hep-ex/0606025}), this error is about 0.6\%, which we use as an estimate. We have tested that there is little dependence on this error unless it can be reduced to the level of $10^{-4}$ (then there is a small improvement), if the other systematics is present.
\item[Energy calibration errors] uncorrelated between the near and far detectors of the order 0.5\% are used (similar to the reactor experiments). As we have tested, they are of secondary importance if all the other systematics is present.
\item[Backgrounds] at the level of $10^{-3}$ from neutral current events {\it etc.}\ are assumed, known to the level of 20\%
(somewhat conservative estimate from a neutrino factory). If all the other errors are present, backgrounds hardly matter.
\end{description}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\textwidth]{events}
\end{center}
\mycaption{\label{fig:events}
Relative effect on the binned (neutrino) event rates for
several values of $ \Delta m^2 $, and $ \sin^2 2 \theta = 0.1 $,
in the near (left) and far (right) detectors.
For each energy bin we plotted
$ (R-R_{0})/R_{0} $
where $R$ and $R_{0}$ are the expected rates with and without oscillations.
}
\end{figure}
The effect of electron neutrino disappearance on the event rates of the individual bins
is illustrated in \figu{events} for the near (left) and far (right) detector for several values of $\Delta m^2$. For relatively small $\Delta m^2 \sim 1 \, \mathrm{eV}^2$ (diamond curves), the near-far combination will perform similar to the reactor experiments with two detectors, where the near detector measures the shape and the far detector the oscillation effect. For $\Delta m^2 \gg 1000 \, \mathrm{eV}^2$ ({\it cf.}, triangle curves for comparison), the oscillations average out in both detectors, and $\sin^2 2 \theta$ can only be constrained to the level of the shape errors (whereas $\Delta m^2$ cannot be measured). For $\Delta m^2 \sim 100 \, \mathrm{eV}^2$ (box curves), the oscillation effect will mainly take place in the near detector, whereas the far detector measures the shape (after averaging). For $\Delta m^2 \sim 10 \, \mathrm{eV}^2$ (star curves), the situation is most complicated: there are oscillation effects in both detectors, which can lead to intricate parameter correlations.
\section{Results for CPT invariance}
\label{disappearance}
All results presented in this section are based on our two-baseline setup without refined systematics treatment,
assuming CPT invariance, {\it i.e.}, the equal electron neutrino and antineutrino survival probabilities in Eq.~(\ref{pee-cpt}).
\begin{figure}[t!]
\begin{center}
\includegraphics[width=11cm]{fnsys2}
\end{center}
\mycaption{\label{fig:fnsys} Exclusion limit in the $\sin^2 2 \theta$-$\Delta m^2$ plane for our default configuration including systematics (thick solid curve, 90\% CL, 2 d.o.f.).
The thick dashed curve refers to our ideal detectors (no systematics), with near (ND) and far (FD) detectors combined. The thin solid curves illustrate the results for the near (50~m) and far (2~km) detectors if operated separately, but with full systematics. The effects of averaging over the decay straights are taken into account. The thin dashed curve corresponds to the default beta beam setup from Ref.~\cite{0907.3145} for comparison.
The thin gray/cyan curve is the current limit from Bugey\protect\cite{Declais:1995su} + Chooz\protect\cite{hep-ex/0301017} (taken from Ref.~\cite{Acero:2007su}). }
\end{figure}
\figu{fnsys} shows the performance of our near-far model (thick curve), where the effect of using only one set of detectors (near or far) is shown separately as thin curves. If only one set of detectors is used, the result will be limited by the 10\% shape errors, {\it i.e.}, it depends on the assumptions used. However, if the two sets of detectors are used, the impact of systematics cancels and the result is very robust with respect to the assumptions. From the above discussion, it should be clear that the results in this case do not depend very much on the actual numbers for the systematical errors.
Nevertheless there is a considerable deviation from the no-systematics case (dashed curve). The improvement towards this hypothetical sensitivity requires a very good understanding of the cross sections at the level of the $\sin^2 2 \theta$ sensitivity. We have also checked that the performance cannot even be significantly improved with considerably larger detectors, because of the systematics limitation (even without the geometric effect of the beam included).
Figure~\ref{fig:fnsys} shows that the sensitivity of a neutrino factory experiment to (V)SBL $\nu_{e}$ disappearance
represents a dramatic improvement with respect to the sensitivity of reactor experiments,
which is at the level of $ \sin^2 2 \theta \sim 10^{-1} $ at large values of $ \Delta m^2 $ ({\it cf.}, thin gray/cyan curve).
Moreover, the neutrino factory measurement with the near-far detector setup discussed in Section~\ref{systematics}
is model-independent,
whereas reactor measurements of $P_{\bar e \bar e}$ depend on the calculated flux of $\bar\nu_{e}$'s produced in a reactor.
Reactor neutrino experiments cannot take advantage of the near-far detector approach to get a model-independent result
for (V)SBL $\nu_{e}$ disappearance,
because for a typical reactor neutrino energy of 1~MeV
the oscillation length corresponding to $ \Delta m^2 \approx 10^2 \, \mathrm{eV}^2 $ is of the order of 1~cm.
It is interesting to note that the near-far detector setup that we have chosen
is sensitive to $\nu_{e}$ disappearance with small mixing
($ \sin^2 2 \theta \gtrsim 2 \times 10^{-3} $)
for values of $ \Delta m^2 $ as large as $ 10^{3} \, \mathrm{eV}^2 $.
The condition for the observation of a spectral distortion
caused by neutrino oscillations is that the uncertainty of the phase of the oscillations
due to the energy resolution in Eq.~(\ref{resolution}) is smaller than about $\pi/2$.
One can easily find that this happens for neutrino energies
\begin{equation}
E \gtrsim
\left[
\frac{ \varepsilon \, \Delta m^2 \, L_{\mathrm{eff}} }{ 2 \pi \, E_{0}^{1/2} }
\right]^{2/3}
\,,
\label{emin}
\end{equation}
where we have considered the effective baseline in Eq.~(\ref{effective-baseline}).
Since for the near detector $ L_{\mathrm{eff}} \simeq 180 \, \mathrm{m} $,
if $ \Delta m^2 = 10^{3} \, \mathrm{eV}^2 $ the condition (\ref{emin})
is satisfied for
$ E \gtrsim 18 \, \mathrm{GeV} $.
Since for the assumed $E_{\mu}=25\,\mathrm{GeV}$ the neutrino energy spectrum extends
up to $25\,\mathrm{GeV}$,
as shown by the curve in Fig.~1 of Ref.~\cite{Tang:2009na}
with off-axis angle $\theta=0^{\circ}$,
the oscillations are not completely averaged out in the highest-energy bins.
This is illustrated in the left panel of Fig.~\ref{fig:events},
in which the line corresponding to $ \Delta m^2 = 10^{3} \, \mathrm{eV}^2 $
has the constant averaged value $ - 0.5 \, \sin^2 2 \theta = - 0.05 $
(for the assumed $ \sin^2 2 \theta = 0.1 $)
only for $ E \lesssim 10 \, \text{GeV} $.
Other curves illustrate the distortion of the event rate spectrum for
smaller values of $ \Delta m^2 $.
One can see that the lower limit of the sensitivity to $ \Delta m^2 $ of the near detector
is about $ 1 \, \mathrm{eV}^2 $,
which instead produces a strong spectral distortion in the far detector
(right panel of Fig.~\ref{fig:events}).
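The estimate $E \gtrsim 18 \, \mathrm{GeV}$ can be reproduced by evaluating Eq.~(\ref{emin}) in natural units, as in the following sketch (Python):
\begin{verbatim}
import numpy as np

hbar_c = 1.9732e-7                  # eV m
eps, E0 = 0.55e9, 1.0e9             # resolution parameters in eV
dm2, L_eff = 1.0e3, 180.0           # eV^2 and m (near detector)

# Eq. (emin): E > [eps dm2 L_eff / (2 pi sqrt(E0))]^(2/3),
# with the baseline converted to natural units via 1/(hbar c)
E_min = (eps * dm2 * (L_eff / hbar_c)
         / (2.0 * np.pi * np.sqrt(E0))) ** (2.0 / 3.0)
print(E_min / 1e9)                  # ~18.6 GeV, matching the text
\end{verbatim}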
We also show in \figu{fnsys} a comparison with the default setup in Ref.~\cite{0907.3145} (thin dashed curve).
This setup uses a low-gamma ($\gamma \simeq 30$) beta beam using inverse beta decay as detection interaction,
which means that it is not surprising that our result is about an order of magnitude better. Compared to Ref.~\cite{0907.3145}, which uses only one detector and therefore runs in the systematics limitation in the larger $\Delta m^2$ range, we also have very good sensitivity for large $\Delta m^2$. While both approaches rely on near detectors receiving neutrinos from a storage ring, they are conceptually very different: Ref.~\cite{0907.3145} uses the fact that the inverse beta decay reaction is well known to control systematics, whereas we control the shape error with two sets of detectors in the fashion of the new generation of reactor experiments.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\textwidth]{fitscpt}
\end{center}
\mycaption{\label{fig:fitscpt} Fits in the $\sin^2 2 \theta$-$\Delta m^2$ plane for three chosen test-points marked by the diamonds ($1\sigma$, $2\sigma$, $3 \sigma$, 2 d.o.f.).
Here CPT invariance is assumed. Near (50~m) and far (2~km) detectors are used with our systematics model, the effects of averaging over the decay straights are taken into account. The unshaded contours show the result without averaging effects over the straights. They are too small to be visible in the middle panel.}
\end{figure}
It is interesting to examine not only the sensitivity
of our experimental setup to (V)SBL $\nu_{e}$ disappearance,
which corresponds to a negative result producing an exclusion curve as that in \figu{fnsys},
but also what could be the results if a signal is observed, {\it i.e.}, $\nu_e$ and $\bar\nu_e$ disappear.
In \figu{fitscpt}, we show three qualitatively different possible results for the test values of the neutrino oscillation parameters marked by the diamonds.
In the left panel, no degenerate solutions are present, and the parameters can be very well measured. There is hardly an effect of the averaging over the decay straights, as one can read off from the differences between the shaded and unshaded contours, because the far detectors dominate the sensitivity and oscillations have not yet developed at the near detectors. In the middle panel, we still have an excellent measurement dominated by the near detectors. In this case, however, the averaging effects over the straights are very important, and the contours without averaging are hardly visible. In particular, a degenerate solution appears at a smaller $\Delta m^2$. In the right panel, we show an even more extreme case, where only at the $2 \sigma$ confidence level $\sin^2 2 \theta=0$ can be excluded.
\section{CPT violation}
\label{cpt}
\begin{figure}[tp]
\begin{center}
\includegraphics[height=0.8\textheight]{fitsn2}
\end{center}
\mycaption{\label{fig:fits} Best-fit regions in the $\Delta m^2$-$\sin^2 2 \theta$ and $a_{\mathrm{CPT}}$-$m_{\mathrm{CPT}}$ planes for the test points defined in the main text ($1\sigma$, $2 \sigma$, $3 \sigma$, 2 d.o.f.).
Note that different baselines were chosen for the test points. The dashed curves represent the results without taking into account the averaging over the decay straights.}
\end{figure}
In this section we discuss the potentiality of the experimental setup described in \figu{ring},
with two pairs of near-far detectors,
to reveal a violation of CPT symmetry,
considering the different electron neutrino and antineutrino survival probabilities in Eqs.~(\ref{equ:pelel}) and (\ref{equ:paeae})
as functions of the CPT asymmetries in Eq.~(\ref{asy}).
Since there are four independent parameters,
given by Eqs.~(\ref{p1})--(\ref{p4}),
for simplicity we consider three test points inspired by Refs.~\cite{Acero:2007su,Giunti:2007xv,Giunti:2009zz}:
\begin{eqnarray}
\mathrm{T1:} & \quad & \sin^2 2 \theta= 0.05 \, , \quad \Delta m^2 = 1.8 \, \mathrm{eV}^2 \, , \\
\mathrm{T2:} & \quad & \sin^2 2 \theta= 0.1 \, , \quad \Delta m^2 = 20 \, \mathrm{eV}^2 \, , \\
\mathrm{T3:} & \quad & \sin^2 2 \theta= 0.1 \, , \quad \Delta m^2 = 330 \, \mathrm{eV}^2 \, ,
\end{eqnarray}
and $a_{\mathrm{CPT}}=m_{\mathrm{CPT}}=0$.
We fit the corresponding simulated data allowing for non-zero values of $a_{\mathrm{CPT}}$ and $m_{\mathrm{CPT}}$
in order to explore the sensitivity to the measurement of these parameters.
The test point T1 is motivated by the best-fit of the data of the Bugey SBL reactor experiment \cite{Declais:1995su},
which is compatible with the data of the Chooz reactor experiment \cite{hep-ex/0301017}
and the neutrino oscillation explanation of the Gallium anomaly
\cite{Acero:2007su}.
The test points T2 and T3 are motivated by a possible explanation of
the MiniBooNE low-energy anomaly through VSBL $\nu_e$ disappearance,
which is compatible with the neutrino oscillation explanation of the Gallium anomaly \cite{Giunti:2007xv,Giunti:2009zz}.
Even though values of $ \Delta m^2 $ larger than about $ 1 \, \text{eV}^2 $
are incompatible with the existing standard cosmological bound on the sum of neutrino masses
\cite{0805.2517,0809.1095},
we think that it is wise to test such a bound in laboratory experiments.
A violation of the bound may lead to a discovery of fundamental new physics related to
non-standard cosmological effects.
The best-fit regions for the three test points are shown in \figu{fits}. The dashed curves represent the results without taking into account the averaging over the decay straights. The test point T1 (upper row), with a relatively small $\Delta m^2$, is dominated by the far detectors, whereas in the near detectors (almost) no oscillations are present. Therefore, the cross sections can be directly reconstructed from the near detectors, and the fits are very clean. The effects of averaging over the straights are small because the signal sits in the far detectors, which see a point source. The oscillation parameters can be measured at the level of 2\% ($1\sigma$), and CPT invariance can be constrained at the same level.
The test point T3 (lower row of \figu{fits}) is dominated by the short baseline, which means that the averaging effects over the straights are very important. The longer baseline measures the product of the cross sections and $1- 0.5 \, \sin^2 2 \theta$, so that, before the averaging over the straights (dashed curves), $\Delta m^2$ can be measured very well compared to the mixing angle, since it remains as a net effect between the two detectors. After including the averaging effects, both oscillation parameters can still be measured at the level of 1\% ($1\sigma$), and CPT invariance can be constrained at a similar level.
The test point T2 (middle row of \figu{fits}) shows a complicated case with an intricate interplay between systematics and oscillation parameter correlations. Since there is an oscillation effect in both baselines, this case does not correspond to a classical near-far detector combination. The a priori excellent precisions for the oscillation parameters are spoilt by some complicated correlations. Nevertheless, percent-level precisions are possible.
\begin{figure}[tp]
\begin{center}
\includegraphics[width=\textwidth]{cpt}
\end{center}
\mycaption{\label{fig:cpt} Discovery reach for CPT violation from $a_{\mathrm{CPT}}$ (left panel) or $m_{\mathrm{CPT}}$ (right panel) as a function of the true $\sin^2 2 \theta$ and true $\Delta m^2$. The different contours indicate for how small (true) values of $a_{\mathrm{CPT}}>0$ (left) or $m_{\mathrm{CPT}}>0$ (right) CPT violation will be discovered at the $3 \sigma$ confidence level, as labeled at the contours. The dashed curves show the result if the averaging over the decay straights is not taken into account.}
\end{figure}
Instead of constraining CPT invariance, we can also discuss the discovery reach for CPT violation. In this case, we assume that nature has implemented a small (positive) $a_{\mathrm{CPT}}$ or $m_{\mathrm{CPT}}$, and we fit
the simulated data with the fixed parameters $a_{\mathrm{CPT}}=m_{\mathrm{CPT}}=0$ (corresponding to CPT invariance),
while we marginalize over the oscillation parameters $\sin^2 2 \theta$ and $\Delta m^2$. The resulting discovery reach for CPT violation from $a_{\mathrm{CPT}}$ (left panel) and $m_{\mathrm{CPT}}$ (right panel) is shown in \figu{cpt} as a function of the true $\sin^2 2 \theta$ and the true $\Delta m^2$.
From \figu{cpt}, CPT violation may be discovered even if it is as small as $10^{-3}$, provided that $\sin^2 2 \theta$ is large enough. However, even for very small $\sin^2 2 \theta$, a CPT violation of order unity is testable with our setup. Note that for larger $\Delta m^2$ and especially for $m_{\mathrm{CPT}}$, the averaging over the decay straights strongly reduces the performance (by about one order of magnitude).
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.5\textwidth]{acpt}
\end{center}
\mycaption{\label{fig:acpt}
Lower limit for $|a_{\mathrm{CPT}}|$
obtained from Eq.~(\ref{acpt1}) with
$ A_{ee}^{\mathrm{CPT}} < - 0.08 $,
which is the 95\% C.L. limit found in Ref.~\cite{Giunti:2009zz}.
}
\end{figure}
In Ref.~\cite{Giunti:2009zz}, a difference of
$ A_{ee}^{\mathrm{CPT}} \equiv P_{ee} - P_{\bar e \bar e}= - 0.17 {}^{+0.09}_{-0.07} $
at 90\% C.L. was identified as the
asymmetry between the electron neutrino and antineutrino
VSBL disappearance probabilities which can explain
the Gallium radioactive source experiments anomaly \cite{Abdurashitov:2009tn}
and
the MiniBooNE low-energy anomaly \cite{AguilarArevalo:2008rc}
without conflicting with the absence of $\bar\nu_{e}$ disappearance in
reactor neutrino experiments
(see Ref.~\cite{hep-ph/0107277}).
It is interesting to investigate whether such a CPT violation can be measured
in a neutrino factory experiment with the near-far pairs of detectors that we have considered so far.
Since in Ref.~\cite{Giunti:2009zz} $\Delta m^2$ was considered to be large,
in the range
\begin{equation}
20 \, \text{eV}^{2}
\lesssim
\Delta{m}^{2}
\lesssim
330 \, \text{eV}^{2}
\,,
\label{dm2}
\end{equation}
the neutrino and antineutrino survival probabilities were assumed to be averaged,
leading to
$ A_{ee}^{\mathrm{CPT}} = 0.5 \left( \sin^2 2\theta_{\bar\nu} - \sin^2 2\theta_{\nu} \right) $.
In this case,
the asymmetry $a_{\mathrm{CPT}}$ is given by
\begin{equation}
a_{\mathrm{CPT}}
=
\frac{ 1 }{ 4 \theta }
\,
\arcsin\left( \frac{ - 2 A_{ee}^{\mathrm{CPT}} }{ \sin 4 \theta } \right)
\,.
\label{acpt1}
\end{equation}
Since
$ |a_{\mathrm{CPT}}| \leq 1 $,
the mixing angle has a lower limit which depends on the value of $A_{ee}^{\mathrm{CPT}}$.
Moreover,
since $ | \sin 4 \theta | \leq 1 $
and $ \theta \leq \pi/2 $,
also
$ |a_{\mathrm{CPT}}| $ has a lower limit,
which is plotted in Fig.~\ref{fig:acpt} for
$ A_{ee}^{\mathrm{CPT}} < - 0.08 $,
which is the 95\% C.L. limit found in Ref.~\cite{Giunti:2009zz}.
One can see that the bound on $A_{ee}^{\mathrm{CPT}}$
implies that
$ \sin^22\theta \gtrsim 4 \times 10^{-2} $
and
$ |a_{\mathrm{CPT}}| \gtrsim 0.10 $.
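These limits can be verified directly from Eq.~(\ref{acpt1}). Restricting for simplicity to the principal branch with $4\theta \leq \pi/2$ (where the minimum turns out to lie), the condition $|a_{\mathrm{CPT}}| \leq 1$ amounts to $\arcsin\left(-2A_{ee}^{\mathrm{CPT}}/\sin 4\theta\right) \leq 4\theta$, i.e.
\begin{equation*}
\sin^2 4\theta \geq -2 A_{ee}^{\mathrm{CPT}} = 0.16 ,
\end{equation*}
so that $\sin 4\theta \geq 0.4$ and, using $\sin 4\theta \leq 2 \sin 2\theta$, one finds $\sin^2 2\theta \geq 4 \times 10^{-2}$. The smallest allowed asymmetry is reached at maximal $\sin 4\theta = 1$, where $|a_{\mathrm{CPT}}| = \arcsin(0.16)/(\pi/2) \simeq 0.10$.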
Confronting these values with the left panel in Fig.~\ref{fig:cpt},
and taking into account that we consider the large values of
$\Delta{m}^{2}$
in the range (\ref{dm2}),
it is clear that the CPT violation required by $ A_{ee}^{\mathrm{CPT}} \lesssim - 0.08 $
will be easily discovered
in a neutrino factory experiment with the near-far pairs of detectors that we have considered.
\section{Summary and conclusions}
\label{conclusions}
In this work we have discussed the potentiality of testing
Short-BaseLine (SBL, with $ 10^{-1} \lesssim \Delta{m}^{2}_{\text{SBL}} \lesssim 10 \, \text{eV}^2 $) and
Very-Short-BaseLine (VSBL, with $ 10 \lesssim \Delta{m}^{2}_{\text{VSBL}} \lesssim 10^3 \, \text{eV}^2 $)
electron neutrino disappearance in a neutrino factory experiment,
based on the current setup of the International Design Study for the Neutrino Factory (IDS-NF)~\cite{ids}.
Since this setup uses both muon and anti-muon decays, a possible difference between the neutrino and antineutrino disappearance can be studied, which could constitute a revolutionary discovery of CPT violation.
We showed that for these purposes the ideal configuration would be
the two pairs of near-far detectors (shown in Fig.~\ref{fig:ring})
in a similar fashion to reactor experiments with near and far
detectors (Double Chooz \cite{hep-ex/0606025}, Daya Bay \cite{hep-ex/0701029}, {\it etc.}) to cancel systematics.
The near detectors are chosen to be at a distance of about 50~m
from the muon storage ring, in order to be sensitive to oscillations
due to a $\Delta m^2$ as large as about $10^3 \, \mathrm{eV}^2$.
For the far detectors an appropriate distance
from the muon storage ring is about 2~km, which gives a good sensitivity
to oscillations
generated by a $\Delta m^2$ as small as about $10^{-1} \, \mathrm{eV}^2$.
In this way,
it is possible to explore (V)SBL
$\nu_{e}$ and $\bar\nu_{e}$ disappearance
with effective oscillation amplitude
$\sin^22\theta$ as small as about $10^{-3}$
for
$\Delta m^2 \gtrsim 1 \, \mathrm{eV}^2$
(see Fig.~\ref{fig:fnsys})
taking advantage of the comparison of the event rates
measured in the near and far detectors,
which reduces dramatically the systematic uncertainties due to
insufficient knowledge of the cross sections
(see the discussion in Section~\ref{systematics}).
We have also shown, in Section~\ref{cpt}, that the chosen detector setup
provides a good sensitivity to the measurement of
a difference of the rates of $\nu_{e}$ and $\bar\nu_{e}$ disappearance
which would be a signal of CPT violation. For instance, our setup is
sensitive to an asymmetry between the neutrino and antineutrino
mass squared differences at the level of up to $10^{-3}$, depending
on the value of the mixing angle.
Let us emphasize that a discovery of CPT violation would represent a revolution in our
knowledge of fundamental physics,
because the CPT symmetry
is a fundamental symmetry of local relativistic Quantum Field Theory.
Therefore,
pursuing this line of investigation is of fundamental importance.
\subsubsection*{Acknowledgments}
We would like to thank Sanjib Agarwalla for providing the beta beam reference
curve in \figu{fnsys}, and Patrick Huber for useful discussions.
This work was supported by the European Union under the European Commission
Framework Programme~07 Design Study EUROnu, Project 212372. W. Winter also would like to acknowledge support
from the Emmy Noether program of Deutsche Forschungsgemeinschaft.
C. Giunti would like to thank the Department of Theoretical Physics of the University of Torino
for hospitality and support.
\input{cpt.bbl}
\end{document}
\section{Introduction}
The effective interaction is one of the most insightful concepts in theoretical many-body physics. Correlations between the particles forming the medium change the magnitude and may even transform the shape of the microscopic two-body interaction potential. Such modifications become especially profound in the quantum regime. A textbook example is a degenerate electron gas in a lattice of positively charged ions. Renormalization of the Coulomb repulsion due to polarization of the medium yields a dipolar-like (pseudo-)potential with a cosine modulation \cite{Fetter}. The experimental manifestation of this effect is known as Friedel oscillations \cite{Friedel}.
In the case of bosons the physics is further enriched by the presence of a condensate at absolute zero temperature. Quantum scattering of the matter waves in the condensate can be promoted onto the macroscopic scale, which gives birth to new collective states of matter. Predicted in the early 1970s, the ``coherent crystals'' \cite{Kirzhnits, Nepomnyashchii} with possible supersolid properties now surface in ultra-cold dipolar gases \cite{Rosenzweig, Rotons, FragmentedSS}. In contrast to the familiar Wigner crystals \cite{Wigner}, crystallization of a Bose gas occurs with increasing density $n$, and the unit cell can accommodate a macroscopically large number of particles. In the mean-field picture the formation of a supersolid can be described in terms of an effective interaction potential which has negative Fourier components in the vicinity of some finite momentum transfer $\bm k_0$ satisfying $k_0 \ll n^{1/d}$, where $d$ is the dimension of the space. In the dilute limit the existence of such a feature for a generic condensate characterized by dipolar repulsion at large interparticle distances has been proven on the basis of the Beliaev diagrammatic approach \cite{RotonSpectrum, 1Dscattering}.
The Beliaev prescription for a scalar Bose-Einstein condensate consists in replacing the actual microscopic potential with the off-shell scattering amplitude for two particles in a vacuum \cite{Beliaev}. The negative momentum-dependent correction to the scattering amplitude of dipoles was shown to come from large distances (on the order of the thermal de Broglie distances), where the scattering is governed entirely by the repulsive dipolar tail \cite{DipolarScattering, 1Dscattering}. In order to make this contribution comparable with the contact part, two pathways have been explored in ultra-cold atomic systems.
First, one can use the so-called pancake geometry to allow alignment of the dipoles head-to-tail at short distances \cite{PancakeRotons, Review}. Initially prepared in a uniform state, the system collapses into a regular pattern of drops after a quench of the Feshbach-resonant part of the scattering length to its background value \cite{Rosenzweig}. There is, however, no mutual coherence between the drops, and their shape is strongly elongated in the transverse direction \cite{Filaments, Pfau}. These two factors make the supersolid scenario unlikely here. The physics of the drops appeared to be interesting in its own right because of the role played by quantum fluctuations in their stabilization \cite{ErbiumDrop, DysprosiumDrop, DipolarDropletsScience} (see also below).
The second idea, put forward in \cite{DiluteSupersolid}, is to use ultra-cold polar molecules in the bilayer geometry with tunneling \cite{PolarMolecules}. The tunneling makes the two-component system effectively behave as a 2D scalar gas with a vanishing contact part of the effective interaction and controllable three-body repulsive forces \cite{Petrov2014}. The latter ensure the stability of a crystalline structure, which in this case indeed can be regarded as a true supersolid state. However, experimental realization of this model is challenging, since the tunneling would open a channel for three-body losses of the molecules \cite{Will}.
A promising contribution to the field has come from semiconductor physics. As has been pointed out recently \cite{FragmentedSS}, the 2002 observation of a regular structure in the photoluminescence pattern of dipolar excitons in quantum wells (QW's) \cite{MOES} may hint at a form of the coherent crystal. The surprising rarity of the phenomenon has been attributed to the specifics of the exciton-exciton interaction potential \cite{1Dscattering}. The interaction of two excitons with opposite spins can admit a shape resonance, which provides an efficient tool to tune the contact part of the scattering amplitude. The dipoles cannot leave the QW plane, and the stability of the system in the supersolid phase is guaranteed by the formation of bosonic dimers (biexcitons) characterized by strong repulsion. A minimal model which allows one to describe the transition to the supersolid of dimers is a two-species dilute Bose gas with a resonant interspecies interaction \cite{ResonantPairing}.
Besides, studies of two-species Bose mixtures are now gaining momentum due to the possibility of revealing beyond-mean-field effects in the ultra-dilute regime. Thus, following the original proposal \cite{Petrov2015}, quantum droplets have been realized in atomic samples \cite{BoseMixture}. The very existence of such objects is due to quantum fluctuations. Experimental studies and numerical modelling of these states are guided by the analytical perturbative expansion of effective low-energy Hamiltonians \cite{Petrov2015, Petrov2016, PetrovNature}.
These recent theoretical ideas and experimental results indicate a need for an extension of the Beliaev approach to a binary mixture of bosons. A challenging question is the interference of different channels in a many-body scattering sequence. It is not obvious \textit{a priori} that the interaction in a mixed condensate can be described in terms of independent two-particle scattering processes.
In this paper we give a generic analytical solution of the problem. We derive a set of coupled Dyson equations and find the Green's functions of the system by using a specific spinor representation. The elementary excitation spectrum consists of two branches, one of which takes the characteristic parabolic form $\omega\propto p^2$ in the limit of spin-independent interactions. To the lowest order in the density parameter
\begin{equation}
\beta=\sqrt{nR_e^d},
\end{equation}
where $R_e$ is the characteristic range of the microscopic interaction, the diagrams for the self-energy parts decouple into a set of independent ladders. This yields three effective potentials expressed via the corresponding scattering amplitudes. In the case of 3D geometry, these potentials can be used to construct an effective Hamiltonian suitable for the perturbative expansion. The quantum interference of the channels manifests itself in the renormalization of the magnon mass and the spin-wave velocity, revealing the Andreev-Bashkin entrainment effect \cite{AB}. This feature escapes the standard hydrodynamic approach, where the Fourier transform of some phenomenological potential is used to describe the normal modes in terms of small-amplitude oscillations \cite{ResonantPairing, Petrov2015, Petrov2016, Goldstein, Berman, Alexandrov, Eckardt}. For a 3D weakly-interacting gas the drag density can be obtained by considering the interaction of magnons with the Bogoliubov phonon modes. We show that this problem is identical to the second-order perturbation theory of a Bose polaron developed in \cite{Christensen}. We exploit this fruitful analogy to speculate on a possible transition to a \textit{magnon crystal} in the strongly-interacting regime. For weak interactions in 2D the drag contributes to the dispersion already in the first order in $\beta$. This reflects the enhanced role of quantum fluctuations in low dimensions. On the basis of our findings, we expect the entrainment to cause an increasing departure of the quantum correction to the energy of the mixture from the predictions of Refs. \cite{Petrov2015, Petrov2016}.
\section{The model}
\begin{figure*}[t]
\centering{
\includegraphics[width=1.4\columnwidth]{Elementars.png}}
\caption{Possible types of the elementary graphs. Solid lines correspond to the bare Green's function $G^{(0)}$. Wavy lines describe emission and absorption of particles by the condensate. Dashed line stands for the interaction. The interaction conserves the spin of the particles, denoted by $\sigma$.}
\label{Elementars}
\end{figure*}
We consider a mixture of two bosonic species ($\sigma=\uparrow,\downarrow$) occupying the volume $V$ and characterized by the densities $n_\sigma=N_\sigma/V$ with $N_\sigma$ being the total number of particles in each component. As usual, we assume the thermodynamic limit $N_\sigma\rightarrow\infty$ and $V\rightarrow\infty$ with $n_\sigma$ being kept fixed. The second-quantized Hamiltonian of the system reads
\begin{widetext}
\begin{equation}
\label{Hamiltonian}
\begin{split}
\hat H&=\int\sum_\sigma\frac{\hbar^2}{2m_\sigma}\nabla\hat\Psi^\dagger_\sigma(\bm x)\nabla\hat\Psi_\sigma(\bm x) d\bm x+\frac{1}{2}\int\sum_{\sigma,\sigma'}\hat\Psi^{\dag}_{\sigma}(\bm x_1)\hat\Psi^{\dag}_{\sigma'}(\bm x_2)V_{\sigma\sigma'}(\bm x_1-\bm x_2)\hat\Psi_{\sigma}(\bm x_1)\hat\Psi_{\sigma'}(\bm x_2)d\bm x_1 d\bm x_2\\
&=\sum_{\textbf{p},\sigma}\frac{\hbar^2p^2}{2m_\sigma}\hat a_{\sigma, \textbf{p}}^{\dag} \hat a_{\sigma, \textbf{p}}+\frac{1}{2V}\sum_{\textbf p_1,\textbf p_2,\textbf{q},\sigma,\sigma^\prime}\hat a_{\sigma, \textbf p_1+\textbf q}^{\dag} \hat a_{\sigma^\prime,\textbf p_2-\textbf q}^{\dag} V_{\sigma\sigma^\prime}(\textbf{q})\hat a_{\sigma, \textbf p_1}\hat a_{\sigma^\prime,\textbf p_2}.
\end{split}
\end{equation}
\end{widetext}
Here $V_{\sigma\sigma'}(\bm x_1-\bm x_2)$ are the two-body interaction potentials with $\bm x$ being a $d$-dimensional coordinate and
\begin{equation}
V_{\sigma\sigma'}(\bm q)=\int e^{-i\bm q\bm x}V_{\sigma\sigma'}(\bm x)d\bm x
\end{equation}
are their Fourier transforms. The field operators $\hat\Psi_\sigma(\bm x)$ are related to the corresponding boson annihilation operators by
\begin{equation}\label{Psi1}
\hat\Psi_\sigma(\bm x)=\frac{1}{\sqrt{V}} \sum_{\textbf{p}} \hat a_{\sigma, \bm p} e^{i \bm p \bm x},
\end{equation}
and $\hat a_{\sigma, \bm p}$ obey
\begin{equation}
[\hat a_{\sigma, \bm p_1},\hat a_{\sigma', \bm p_2}^\dagger]=\delta_{\sigma\sigma'}\delta_{\bm p_1\bm p_2}.
\end{equation}
With equal masses of different species
\begin{equation}
\label{EqMasses}
m_\uparrow=m_\downarrow=m
\end{equation}
the model \eqref{Hamiltonian} has been applied to study resonant pairing of bright (dark) excitons in semiconductor heterostructures \cite{ResonantPairing} and formation of quantum droplets in a mixture of $\ket{m_F=-1}$ and $\ket{m_F=0}$ hyperfine states of the $F=1$ manifold of $^{39}$K \cite{Petrov2015, BoseMixture}. For the excitons one can additionally assume
\begin{equation}
\label{SymmPotentials}
V_{\uparrow\uparrow}(\bm x)=V_{\downarrow\downarrow}(\bm x).
\end{equation}
Below we adopt the simplifying assumptions \eqref{EqMasses} and \eqref{SymmPotentials} in order to make our consideration more transparent. The general case of unequal masses and asymmetric pairwise potentials is discussed in the Appendix.
\section{General solution}
\subsection{Notations and the elementary graphs}
The arguments presented below are entirely based on the hypothesis of the existence of a Bose-Einstein condensate in the ground state of the Hamiltonian \eqref{Hamiltonian}. This is usually justified \textit{a posteriori} for a weakly-interacting dilute system (the corresponding conditions will be presented in Section IV). In general, there is no condensate in 1D ($d=1$) even at absolute zero temperature, and the applicability of our results to this case should be discussed with care. We shall postpone such a discussion to future work.
From a mathematical viewpoint, the presence of a condensate results in non-zero expectation values of the operators $\langle\hat \Psi_\sigma\rangle$. The condensate plays the role of a reservoir, which does not change its state upon increase or decrease of the number of particles $N_\sigma$ by one. As a consequence, the time evolution of the condensate wavefunctions is governed by the chemical potentials $\mu_\sigma$.
Miscibility of the system means that both spin components occupy the same volume. The corresponding condition for a dilute gas is given by Eq. \eqref{miscibility} below. With the assumption \eqref{SymmPotentials} the equilibrium configuration corresponds to $n_\uparrow=n_\downarrow\equiv n$. The mixture thus can be characterized by a unique chemical potential $\mu$.
The formalism of Green's functions in a Bose-Bose mixture can be developed along the lines of the spinless theory \cite{Beliaev}. We write the field operators in the form
\begin{equation}\label{Psi2}
\hat\Psi_\sigma(\bm x)=\hat\Psi_\sigma^\prime(\bm x) + \frac{\hat a_{0, \sigma}}{\sqrt{V}},
\end{equation}
where $\hat\Psi_\sigma^\prime$ stand for the non-condensed part and $\hat a_{0, \sigma}$ act on the macroscopically populated single-particle states with $\bm p=0$. The Green's functions are defined in terms of the non-condensed parts of the operators in the Heisenberg representation
\begin{equation}
\label{G}
G_{\sigma \sigma^\prime}(\mathsf x_1,\mathsf x_2) = - i \langle T \hat\Psi^\prime_\sigma(\mathsf x_1) \hat\Psi^{\prime \dag}_{\sigma^\prime}(\mathsf x_2) \rangle,
\end{equation}
where we have introduced the four-vectors $\mathsf x_i=(t_i, \bm x_i)$. For a uniform system one has
\begin{equation}
G_{\sigma \sigma^\prime}(\mathsf x_1,\mathsf x_2)=G_{\sigma \sigma^\prime}(\mathsf x),
\end{equation}
where $\mathsf x=\mathsf x_1-\mathsf x_2$. To describe absorption and emission of the particles by the condensate we shall also need the following auxiliary quantities
\begin{eqnarray}
\label{F1}
iF_{\sigma \sigma^\prime}(\mathsf x_1,\mathsf x_2) &=& \langle N-2| T \hat\Psi^\prime_\sigma(\mathsf x_1) \hat\Psi^{\prime}_{\sigma^\prime}(\mathsf x_2) | N \rangle, \\ \label{F2}
iF^\dag_{\sigma \sigma^\prime}(\mathsf x_1,\mathsf x_2) &=& \langle N+2| T \hat\Psi^{\prime \dag}_\sigma(\mathsf x_1) \hat\Psi^{\prime \dag}_{\sigma^\prime}(\mathsf x_2) | N \rangle,
\end{eqnarray}
known as anomalous Green's functions \cite{Pitaevskii}. In what follows we shall use the momentum-space representation for the Green's function. The corresponding transformation is given by
\begin{equation}
G(\mathsf p)=\int e^{i \mathsf p \mathsf x}G(\mathsf x)d^4 \mathsf x,
\end{equation}
where $\mathsf p=(\omega, \bm p)$ and $\mathsf p\mathsf x=\omega t-\bm p\bm x$. It will also be convenient to use the modified Hamiltonian
\begin{equation}
\label{Hprime}
\hat H^\prime=\hat H-\mu \hat N
\end{equation}
in setting the time-dependence of the operators. For an ideal gas we obtain
\begin{equation}\label{G0}
G^{(0)}_{\sigma \sigma^\prime} (\mathsf p) = \delta_{\sigma \sigma^\prime} G_{0}(\mathsf p)=\delta_{\sigma \sigma^\prime} \left[ \hbar\omega - \frac{\hbar^2p^2}{2m}+\mu+i0 \right]^{-1},
\end{equation}
where $\mu$ should be regarded as a free parameter.
Each diagram contributing to the expansion of $G_{\sigma\sigma^\prime}$ can be composed of the eight elementary graphs shown in Fig.~\ref{Elementars}. Wavy lines describe the emission and absorption of particles by the condensate. In calculations they are replaced by the factor $\sqrt{n_{\sigma,0}}$, where $n_{\sigma,0}$ is the $\sigma$-component of the condensate density. Dashed lines carry the factors $-iV_{\sigma\sigma'}(\bm q)$. Each vertex has a label $\sigma$ showing the spin of an incoming (outgoing) particle.
\begin{figure*}[t]
\noindent\centering{
\includegraphics[width=1.6\columnwidth] {Diagramms.png}}
\caption{Dyson equations. The Green's functions (bold lines with arrows) couple to each other via the self-energies (circles). For each pair of spin indices $\sigma\sigma^\prime$ there are three types of self-energies characterized by different numbers of incoming (the left index in the lower row) and outgoing (the right index) lines.}
\label{Dyson}
\end{figure*}
\subsection{Dyson equations}
Though the interaction of two particles in a vacuum conserves the particle spin, the latter can be effectively changed after scattering off the condensate. As one can see from Fig.~\ref{Elementars}, already in the first order of perturbation theory there is a finite probability amplitude to find the particle in a state with a different $\sigma$. Formally, this results in the appearance of the matrix elements $G_{\sigma\sigma^\prime}$ with $\sigma\neq\sigma^\prime$ for the Green's function of an interacting system. An accurate consideration of the higher-order terms shows that the resulting picture of many-body scattering processes can be recast in the graphical form shown in Fig.~\ref{Dyson}. The Green's functions (bold lines with arrows) couple to each other via the self-energies (circles) obtained by summation of the possible irreducible parts. There are three types of these parts for each pair of spin indices $\sigma\sigma^\prime$, differing by the number of incoming and outgoing continuous lines. By analogy with Ref. \cite{Beliaev}, we denote the resulting potentials by $\Sigma^{\sigma \sigma^{\prime}}_{11}$, $\Sigma^{\sigma \sigma^{\prime}}_{20}$ and $\Sigma^{\sigma \sigma^{\prime}}_{02}$. The graphical form in Fig.~\ref{Dyson} can then be translated into the following system of Dyson equations
\begin{widetext}
\begin{subequations} \label{Dys}
\begin{align}
G_{\sigma \sigma^\prime}(\mathsf p)=& G^{(0)}_{\sigma \sigma^\prime}(\mathsf p) + \sum_{\sigma^{\prime\prime}} G_{0}(\mathsf p) \Sigma^{\sigma \sigma^{\prime\prime}}_{11}(\mathsf p) G_{\sigma^{\prime\prime} \sigma^\prime}(\mathsf p) + \sum_{\sigma^{\prime\prime}} G_{0}(\mathsf p) \Sigma^{\sigma \sigma^{\prime\prime}}_{20}(\mathsf p) F^\dag_{\sigma^{\prime\prime} \sigma^\prime}(\mathsf p), \\
F^\dag_{\sigma \sigma^\prime}(\mathsf p)=& \sum_{\sigma^{\prime\prime}} G_{0}(-\mathsf p) \Sigma^{\sigma \sigma^{\prime\prime}}_{02}(\mathsf p) G_{\sigma^{\prime\prime} \sigma^\prime}(\mathsf p) + \sum_{\sigma^{\prime\prime}} G_{0}(-\mathsf p) \Sigma^{\sigma \sigma^{\prime\prime}}_{11}(-\mathsf p) F^\dag_{\sigma^{\prime\prime} \sigma^\prime}(\mathsf p).
\end{align}
\end{subequations}
\end{widetext}
By noticing that the equations with different $\sigma^\prime$ decouple from each other, we can write the system \eqref{Dys} in the useful form
\begin{widetext}
\begin{equation}\label{Matr1}
\left[
\begin{array}{cccc}
G^{-1}_0(\mathsf p) - \Sigma^{\uparrow \uparrow}_{11}(\mathsf p) & - \Sigma^{\uparrow \downarrow}_{11}(\mathsf p) & - \Sigma^{\uparrow \uparrow}_{20}(\mathsf p) & - \Sigma^{\uparrow \downarrow}_{20}(\mathsf p) \\
- \Sigma^{\downarrow \uparrow}_{11}(\mathsf p) & G^{-1}_0(\mathsf p) - \Sigma^{\downarrow \downarrow}_{11}(\mathsf p) & - \Sigma^{\downarrow \uparrow}_{20}(\mathsf p) & - \Sigma^{\downarrow \downarrow}_{20}(\mathsf p) \\
- \Sigma^{\uparrow \uparrow}_{02}(\mathsf p) & - \Sigma^{\uparrow \downarrow}_{02}(\mathsf p) & G^{-1}_0(-\mathsf p) - \Sigma^{\uparrow \uparrow}_{11}(-\mathsf p) & - \Sigma^{\uparrow \downarrow}_{11}(-\mathsf p) \\
- \Sigma^{\downarrow \uparrow}_{02}(\mathsf p) & - \Sigma^{\downarrow \downarrow}_{02}(\mathsf p) & - \Sigma^{\downarrow \uparrow}_{11}(-\mathsf p) & G^{-1}_0(-\mathsf p) - \Sigma^{\downarrow \downarrow}_{11}(-\mathsf p) \\
\end{array}
\right] \left[
\begin{array}{c}
G_{\uparrow \uparrow}(\mathsf p) \\
G_{\downarrow \uparrow}(\mathsf p) \\
F^\dag_{\uparrow \uparrow}(\mathsf p) \\
F^\dag_{\downarrow \uparrow}(\mathsf p) \\
\end{array}
\right]
= \left[
\begin{array}{c}
1 \\
0 \\
0 \\
0 \\
\end{array}
\right].
\end{equation}
\end{widetext}
\begin{figure*}[t]
\noindent\centering{
\includegraphics[width=1.2\columnwidth] {IrrParts.png}}
\caption{First-order diagrams in the expansion of the self-energies $\Sigma^{\sigma\sigma^\prime}_{11}(\mathsf p)$ and $\Sigma^{\sigma\sigma^\prime}_{02}(\mathsf p)$, defining the chemical potential and the spectrum of elementary excitations according to Eq. \eqref{Mu1} and Eq. \eqref{Spec1}, respectively.}
\label{Firstorder}
\end{figure*}
Furthermore, by virtue of \eqref{SymmPotentials}, one has $\Sigma^{\uparrow \uparrow}_{11}(\mathsf p)=\Sigma^{\downarrow \downarrow}_{11}(\mathsf p)$. Note also that $\Sigma^{\sigma \sigma^{\prime\prime}}_{02}(\mathsf p)= \Sigma^{\sigma \sigma^{\prime\prime}}_{20}(\mathsf p)$, because the relevant diagrams differ only by the direction of the wavy lines. This allows us to write Eq. \eqref{Matr1} in the spinor form
\begin{widetext}
\begin{equation}\label{Matr2}
\left[
\begin{array}{cc}
G_1^{-1}(\mathsf p) \hat\sigma_0 - \Sigma^{\uparrow \downarrow}_{11}(\mathsf p) \hat\sigma_1 & - \Sigma^{\uparrow \uparrow}_{20}(\mathsf p) \hat\sigma_0 - \Sigma^{\uparrow \downarrow}_{20}(\mathsf p) \hat\sigma_1 \\
- \Sigma^{\uparrow \uparrow}_{20}(\mathsf p) \hat\sigma_0 - \Sigma^{\uparrow \downarrow}_{20}(\mathsf p) \hat\sigma_1 & G_1^{-1}(-\mathsf p) \hat\sigma_0 - \Sigma^{\uparrow \downarrow}_{11}(-\mathsf p) \hat\sigma_1 \\
\end{array}
\right] \left[
\begin{array}{c}
\varphi \\
\chi \\
\end{array}
\right] = \left[
\begin{array}{c}
\alpha \\
0 \\
\end{array}
\right],
\end{equation}
\end{widetext}
where
\begin{equation}\label{G11}
G^{-1}_1(\mathsf p)=G^{-1}_0(\mathsf p) - \Sigma^{\uparrow \uparrow}_{11}(\mathsf p)
\end{equation}
and
\begin{equation}\label{Pauli1}
\hat\sigma_0=\left[
\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}
\right], \quad \hat\sigma_1=\left[
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right],\quad\alpha=\left[
\begin{array}{c}
1\\
0\\
\end{array}
\right].
\end{equation}
The system \eqref{Matr2} then can be solved by using the identity
\begin{equation*}
(a^2-b^2)\,\hat\sigma_0=(a \hat\sigma_0 - b \hat\sigma_1)(a \hat\sigma_0 + b \hat\sigma_1).
\end{equation*}
We first use the second row in \eqref{Matr2} to express $\chi$ via $\varphi$, and then substitute it into the first row. We find that all Green's functions have the denominator $D_1(\mathsf p)D_2(\mathsf p)$, where
\begin{widetext}
\begin{eqnarray}
\label{D1P}
D_1(\mathsf p) &=& (G^{-1}_1(\mathsf p)-\Sigma^{\uparrow \downarrow}_{11}(\mathsf p))(G^{-1}_1(-\mathsf p)-\Sigma^{\uparrow \downarrow}_{11}(-\mathsf p))-(\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)+ \Sigma^{\uparrow \downarrow}_{20}(\mathsf p))^2, \\ \label{D2P}
D_2(\mathsf p) &=& (G^{-1}_1(\mathsf p)+\Sigma^{\uparrow \downarrow}_{11}(\mathsf p))(G^{-1}_1(-\mathsf p)+\Sigma^{\uparrow \downarrow}_{11}(-\mathsf p))-(\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)- \Sigma^{\uparrow \downarrow}_{20}(\mathsf p))^2.
\end{eqnarray}
\end{widetext}
In terms of these quantities the solution of Eq.\eqref{Matr1} can be written as
\begin{widetext}
\begin{subequations}
\label{GF}
\begin{align}
\label{Gud}
G_{\sigma\sigma^\prime}= & \frac{1}{2} \left( \frac{G^{-1}_1(-\mathsf p)-\Sigma^{\uparrow \downarrow}_{11}(-\mathsf p)}{D_1(\mathsf p)} \pm \frac{G^{-1}_1(-\mathsf p)+\Sigma^{\uparrow \downarrow}_{11}(-\mathsf p)}{D_2(\mathsf p)} \right),\\ \label{Fud}
F^\dag_{\sigma\sigma^\prime}=& \frac{1}{2} \left( \frac{\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)+ \Sigma^{\uparrow \downarrow}_{20}(\mathsf p)}{D_1(\mathsf p)} \pm \frac{\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)- \Sigma^{\uparrow \downarrow}_{20}(\mathsf p)}{D_2(\mathsf p)} \right),
\end{align}
\end{subequations}
\end{widetext}
where ``$+$'' should be used for $\sigma=\sigma^\prime$ and ``$-$'' for $\sigma \neq \sigma^\prime$.
With the result \eqref{GF} one can readily express the chemical potential of the system in terms of the self-energies. We notice that in the long-wavelength limit the above-condensate part of the field operator can be written as $\hat\Psi^\prime_\sigma \approx i \sqrt{n_{0\sigma}} \hat\Phi_\sigma$, where the operator $\hat\Phi_\sigma$ is the phase of the condensate. Hence, one has $F^\dag_{\uparrow \uparrow} \approx -G_{\uparrow\uparrow}$. On the other hand, as a consequence of the symmetry breaking we may expect two gapless Goldstone modes in the elementary excitation spectrum, which implies the condition $D_1(0)D_2(0)=0$. By noticing also that $\Sigma^{\uparrow \downarrow}_{20}(0)= \Sigma^{\uparrow \downarrow}_{11}(0)$, we obtain
\begin{equation}\label{Mu1}
\mu=\Sigma^{\uparrow \uparrow}_{11}(0) - \Sigma^{\uparrow \uparrow}_{20}(0).
\end{equation}
The result \eqref{Mu1} provides the dependence of the chemical potential on the density $n_0$ of the condensate components, and together with the well-known formula \cite{AGD}
\begin{equation}
\label{depletion}
n=n_0+\frac{i}{(2\pi)^{d+1}}\lim_{{t}\rightarrow{-0}}\int G_{\uparrow\uparrow}(\mathsf p)e^{-i\omega t}d\mathsf p
\end{equation}
allows one to calculate $\mu$ as a function of the total density $n$, which includes the above-condensate particles.
\subsection{Elementary excitation spectrum}
According to the general theorem \cite{AGD}, the spectrum of elementary excitations of the system can be obtained from the poles of the Green's functions. By solving $D_1(\bm p,\omega)D_2(\bm p,\omega)=0$ with respect to $\omega$ we find two branches for the excitations of the particle type (we omit the hole excitations for brevity):
\begin{widetext}
\begin{equation}
\label{Spec1}
\hbar\omega(\bm p)=\sqrt{(\hbar^2 p^2/2m+\Sigma_s^{\uparrow\uparrow}(\mathsf p)\pm\Sigma_s^{\uparrow\downarrow}(\mathsf p)-\mu)^2-(\Sigma^{\uparrow \uparrow}_{20}(\mathsf p) \pm \Sigma^{\uparrow \downarrow}_{20}(\mathsf p))^2}+\Sigma_a^{\uparrow\uparrow}(\mathsf p)\pm\Sigma_a^{\uparrow\downarrow}(\mathsf p)
\end{equation}
\end{widetext}
where we have introduced
\begin{equation}
\Sigma_{s,a}^{\sigma\sigma^\prime}(\mathsf p)=\frac{\Sigma^{\sigma\sigma^\prime}_{11}(\mathsf p)\pm\Sigma^{\sigma\sigma^\prime}_{11}(-\mathsf p)}{2}.
\end{equation}
Strictly speaking, Eq. \eqref{Spec1} is a transcendental equation for $\omega$. As we shall see, to a good accuracy one can neglect the dependence of the self-energies on $\omega$ in the dilute regime. Thus, in 3D it is common to model the system by a hypothetical weakly-interacting gas characterized by $V_{\sigma\sigma^\prime}(\bm q)=g_{\sigma\sigma^\prime}$ for $q R_e\ll 1$, with $R_e$ being the interaction radius. One can then approximate the self-energies by the first-order diagrams shown in Fig. \ref{Firstorder}. We find $\Sigma^{\uparrow \uparrow}_{11}=n(2g_{\uparrow \uparrow}+g_{\uparrow \downarrow})$, $\Sigma^{\uparrow \downarrow}_{11}=n g_{\uparrow \downarrow}$, $\Sigma^{\uparrow \uparrow}_{20}= n g_{\uparrow \uparrow}$, $\Sigma^{\uparrow \downarrow}_{20}= n g_{\uparrow \downarrow}$, which, upon substitution into \eqref{Spec1}, yields the well-known result
\begin{subequations}\label{SpecB1}
\begin{align}
\hbar\omega(\bm p)&=\sqrt{\left( \frac{\hbar^2 p^2}{2m} \right)^2 + \frac{\hbar^2p^2}{m}n(g_{\uparrow\uparrow}\pm g_{\uparrow\downarrow})}\\
\mu&=n(g_{\uparrow \uparrow}+g_{\uparrow \downarrow})
\end{align}
\end{subequations}
for the spectrum and the chemical potential of the system. Relation of the constants $g_{\uparrow \uparrow}$ and $g_{\uparrow \downarrow}$ to the characteristics of the original model will be discussed below.
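For later reference we note that in the long-wavelength limit Eq.~\eqref{SpecB1} describes two sound modes,
\begin{equation*}
\hbar\omega(\bm p) \approx \hbar c_{d,s} \, p , \qquad
c_{d,s} = \sqrt{\frac{n (g_{\uparrow\uparrow} \pm g_{\uparrow\downarrow})}{m}} ,
\end{equation*}
corresponding to in-phase (density) and out-of-phase (spin) oscillations of the two components. The requirement that the spin-wave velocity be real, $g_{\uparrow\downarrow} < g_{\uparrow\uparrow}$, is the weak-coupling form of the miscibility condition \eqref{miscibility} obtained below.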
To conclude this part, let us point out an important symmetry property of the formula \eqref{Spec1}. In the long-wavelength limit $\mathsf p\rightarrow 0$ we can use Gavoret-Nozieres-type arguments \cite{Gavoret} to obtain the following relations
\begin{widetext}
\begin{subequations}
\label{GNrelations}
\begin{align}
\Sigma_s^{\uparrow\uparrow}(\mathsf p)-\Sigma_s^{\uparrow\downarrow}(\mathsf p)-\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)+\Sigma^{\uparrow \downarrow}_{20}(\mathsf p)-\mu&=\frac{\hbar^2 p^2}{2m}\left(\frac{n^\prime}{n_0}-\frac{\rho_{\uparrow\downarrow}}{mn_0}\right)\\
\Sigma_s^{\uparrow\uparrow}(\mathsf p)-\Sigma_s^{\uparrow\downarrow}(\mathsf p)+\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)-\Sigma^{\uparrow \downarrow}_{20}(\mathsf p)-\mu&=2(\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)-\Sigma^{\uparrow \downarrow}_{20}(\mathsf p))+\frac{\hbar^2 p^2}{2m}\left(\frac{n^\prime}{n_0}-\frac{\rho_{\uparrow\downarrow}}{mn_0}\right),
\end{align}
\end{subequations}
\end{widetext}
where $n^\prime=n-n_0$ is the quantum depletion of the condensate and $\rho_{\uparrow\downarrow}$ is the so-called \textit{superfluid drag} due to Andreev-Bashkin effect \cite{AB, ABnote}. For spin-independent interactions one has $\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)=\Sigma^{\uparrow \downarrow}_{20}(\mathsf p)$ and $\Sigma_a^{\uparrow\uparrow}(\mathsf p)=\Sigma_a^{\uparrow\downarrow}(\mathsf p)$, and, by virtue of \eqref{GNrelations}, the lower branch in Eq. \eqref{Spec1} takes the form
\begin{equation}
\label{magnon}
\hbar\omega_\mathrm{m} (\bm p)=\frac{\hbar^2 p^2}{2m_\ast},
\end{equation}
where
\begin{equation}
\label{magnonmass}
m_\ast=\frac{n_0}{n}\frac{m}{1-\rho_{\uparrow\downarrow}/mn}
\end{equation}
is the effective mass.
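To make the origin of this expression explicit: in the symmetric case the square root in the lower branch of Eq.~\eqref{Spec1} collapses, and the first of the relations \eqref{GNrelations} yields
\begin{equation*}
\hbar\omega_\mathrm{m}(\bm p) = \frac{\hbar^2 p^2}{2m} \left( 1 + \frac{n^\prime}{n_0} - \frac{\rho_{\uparrow\downarrow}}{m n_0} \right)
= \frac{\hbar^2 p^2}{2m} \, \frac{n}{n_0} \left( 1 - \frac{\rho_{\uparrow\downarrow}}{m n} \right) ,
\end{equation*}
which is Eq.~\eqref{magnon} with the mass \eqref{magnonmass}.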
An energy spectrum quadratic in $\bm p$ is what one would expect on general grounds for an arbitrary multicomponent superfluid \cite{Halperin1}. The dispersion of the type \eqref{magnon} describes excitations analogous to the spin waves in a Heisenberg ferromagnet \cite{Halperin2}.
In the asymmetric case $\Sigma^{\uparrow \uparrow}_{20}(\mathsf p)>\Sigma^{\uparrow \downarrow}_{20}(\mathsf p)$ one finds, at small momenta, $\hbar\omega_\mathrm{m} (\bm p)=\hbar c_\mathrm{m} p$ with
\begin{equation}
\label{spinvelocity}
c_\mathrm{m}^2=\frac{(nm-\rho_{\uparrow\downarrow})}{m^2}\frac{[\Sigma^{\uparrow \uparrow}_{20}(0)-\Sigma^{\uparrow \downarrow}_{20}(0)]}{n_0}
\end{equation}
being the spin wave velocity. For the model potential $V_{\sigma\sigma^\prime}(\bm q)=g_{\sigma\sigma^\prime}$ considered above the result \eqref{spinvelocity} matches the hydrodynamic formula of Ref. \cite{Nespolo}. One can see that the entrainment slows the propagation of magnons.
\begin{figure*}[t]
\noindent\centering{
\includegraphics[width=1.2\columnwidth] {IntVert.png}}
\caption{Graphical equation for the effective interaction in the dilute regime.}
\label{Vertex}
\end{figure*}
\section{Dilute regime}
By analogy with the spinless theory \cite{Beliaev, Lozovik}, estimation of the integrals over the internal momenta in the graphs for the $\Sigma$'s shows that to the lowest order in $\beta$ only the ladders should be retained. These obey the diagrammatic rule shown schematically in Fig.~\ref{Vertex}. One can readily recognise the structure typical for the scattering problem of two particles in a vacuum. Indeed, by introducing the relative $\mathsf p_1-\mathsf p_2=2\mathsf k$, $\mathsf p_3-\mathsf p_4=2\mathsf k^\prime$ and total $\mathsf p_1+\mathsf p_2=\mathsf p_3+\mathsf p_4=\mathsf P=(\Omega, \bm{P})$ momenta and taking advantage of the fact that $V_{\sigma\sigma^\prime}(\bm q)$ does not depend on frequency, one can recast Fig.~\ref{Vertex} in the form
\begin{multline}\label{Vert2}
T_{\sigma\sigma^\prime}(\bm k^\prime,\bm k; z) = \frac{1}{(2\pi)^d}V_{\sigma\sigma^\prime}(\textbf{k}^\prime-\bm k)\\
+\frac{1}{(2\pi)^d} \int \frac{V_{\sigma\sigma^\prime}(\textbf{k}^\prime - \textbf{k}^{\prime\prime})}{z-E_{\bm k^{\prime\prime}}}T_{\sigma\sigma^\prime}(\bm k^{\prime\prime},\bm k; z) d\bm k^{\prime\prime},
\end{multline}
where $E_{\bm k^{\prime\prime}}=\hbar^2 k^{\prime\prime 2}/m$ and
\begin{equation}\label{Kappa1}
z=\hbar\Omega - \frac{\hbar^2 P^2}{4m} + 2\mu+i0.
\end{equation}
This allows one to identify the quantity
\begin{equation*}
T_{\sigma\sigma^\prime}(\bm k^\prime,\bm k; z)\equiv\frac{1}{(2\pi)^d}\Gamma(\mathsf p_1,\mathsf p_2;\mathsf p_3,\mathsf p_4)
\end{equation*}
with the matrix elements of the $T_{\sigma\sigma^\prime}$-operator of the quantum scattering theory \cite{Taylor}. Furthermore, the $T_{\sigma\sigma^\prime}$-operator can be expressed in terms of the off-shell scattering amplitude defined by
\begin{equation}
f_{\sigma\sigma^\prime}(\bm k^\prime,\bm k)=-(2\pi)^2\frac{m}{2\hbar^2}T_{\sigma\sigma^\prime}(\bm k^\prime,\bm k; E_{\bm k}+i0).
\end{equation}
The corresponding relation reads
\begin{widetext}
\begin{equation}
\label{Vert3}
T_{\sigma\sigma^\prime}(\bm k^\prime,\bm k; z)=-\frac{1}{(2\pi)^2}\frac{2\hbar^2}{m}\Bigl [f_{\sigma\sigma^\prime}^{\ast}(\bm k,\bm k^\prime)-
\frac{1}{(2\pi)^2}\frac{2\hbar^2}{m}\int f_{\sigma\sigma^\prime}(\bm k^\prime,\bm q)f_{\sigma\sigma^\prime}^{\ast}(\bm k,\bm q)\left (\frac{1}{E_{\bm q}-E_{\bm k^\prime}+i0}+\frac{1}{z-E_{\bm q}}\right)d\bm q\Bigr].
\end{equation}
\end{widetext}
The self-energies are defined by the special matrix elements of the $T$-operator obtained by letting two out of the four particles belong to the condensate:
\begin{equation}
\label{SigmaViaTs}
\begin{split}
\Sigma^{\sigma\sigma^\prime}_{11}(\pm\mathsf p)&=(2\pi)^d n_0 [T_{\sigma\sigma^\prime}(\mp\bm p/2,\pm\bm p/2;z_\pm)\\
&+\delta_{\sigma\sigma^\prime}\sum_{\sigma^{\prime\prime}}T_{\sigma\sigma^{\prime\prime}}(\pm\bm p/2,\pm\bm p/2;z_\pm)]\\
\Sigma^{\sigma\sigma^\prime}_{20}(\mathsf p)&=(2\pi)^d n_0 T_{\sigma\sigma^\prime}(0,\bm p;2\mu+i0),
\end{split}
\end{equation}
where
\begin{equation}
z_\pm=\pm\hbar\omega-
\frac{\hbar^2 p^2}{4m}+2\mu+ i0.
\end{equation}
The chemical potential satisfies the transcendental equation
\begin{equation}
\label{mu}
\mu=(2\pi)^d n_0[T_{\uparrow\uparrow}(0,0; 2\mu+i0)+T_{\uparrow\downarrow}(0,0; 2\mu+i0)].
\end{equation}
Assuming a slow dependence of $T_{\sigma\sigma^\prime}$ on $\mu$ and $n_0\approx n$, one can write
\begin{equation}
E_{\mathrm{mix}}=\int \mu dN=\frac{(2\pi)^dN^2 (T_{\uparrow\uparrow}+T_{\uparrow\downarrow})}{4V},
\end{equation}
where we have used the shortcut $T_{\uparrow\uparrow}\equiv T_{\uparrow\uparrow}(0,0; 2\mu+i0)$. On the other hand, for a phase-separated configuration one has
\begin{equation}
E_{\mathrm{separ}}=\frac{(2\pi)^dN^2 T_{\uparrow\uparrow}}{2V}.
\end{equation}
Comparing the two energies we find $T_{\uparrow\downarrow}<T_{\uparrow\uparrow}$ as the condition of miscibility. More generally,
\begin{equation}
\label{miscibility}
T_{\uparrow\downarrow}^2<T_{\uparrow\uparrow}T_{\downarrow\downarrow},
\end{equation}
which applies also to the spin-imbalanced configurations $n_\uparrow\neq n_\downarrow$ (see Appendix B). Further conclusions depend on the dimensionality of the problem.
\subsection{3D gas}
In the 3D geometry, to the first order in $\beta$, one can neglect the integral term in Eq. \eqref{Vert3}. Taking into account the invariance of the on-shell scattering amplitude with respect to time reversal, we obtain
\begin{equation}
\label{Sigmas3D}
\begin{split}
\Sigma_{a}^{\sigma\sigma^\prime}(\mathsf p)&=0\\
\Sigma_s^{\uparrow\uparrow}(\mathsf p)\pm\Sigma_s^{\uparrow\downarrow}(\mathsf p)&=-\frac{8\pi\hbar^2 n_0}{m}[f_{\uparrow\uparrow}^{+}(\bm p/2,\bm p/2)+f_{\uparrow\downarrow}^{\pm}(\bm p/2,\bm p/2)]\\
\Sigma^{\sigma\sigma^\prime}_{20}(\mathsf p)&=-\frac{4\pi\hbar^2 n_0}{m}f_{\sigma\sigma^\prime}(0,\bm p)\\
\mu&=-\frac{4\pi\hbar^2 n_0}{m}[f_{\uparrow\uparrow}(0,0)+f_{\uparrow\downarrow}(0,0)],
\end{split}
\end{equation}
where we have defined
\begin{equation}
f_{\sigma\sigma^\prime}^{\pm}(\bm k^\prime,\bm k)=\frac{1}{2}[f_{\sigma\sigma^\prime}(\bm k^\prime,\bm k)\pm f_{\sigma\sigma^\prime}(-\bm k^\prime,\bm k)].
\end{equation}
At small momenta the leading contribution comes from the $s$-wave scattering channel, in which the scattering amplitude is known to approach a constant value \cite{Taylor}
\begin{equation}
\label{f3D}
f_{\sigma\sigma'}(\bm k^\prime,\bm k)=-a_{\sigma\sigma'},
\end{equation}
with $a_{\sigma\sigma'}$ known as the $s$-wave scattering length. Substitution of \eqref{f3D} into \eqref{Sigmas3D} yields the elementary excitation spectrum and the chemical potential of the type \eqref{SpecB1} with
\begin{equation}
g_{\sigma\sigma^\prime}=\frac{4\pi\hbar^2 a_{\sigma\sigma^\prime}}{m}.
\end{equation}
The same result can be obtained by solving the linearized equations of motion for the small-amplitude oscillations of the classical fields $\Psi_\sigma$ obtained from the Hamiltonian \eqref{Hamiltonian} with $g_{\sigma\sigma^\prime}$ substituted in lieu of $V_{\sigma\sigma^\prime}(\bm q)$. Such a treatment of the low-energy excitations is quite common \cite{Petrov2015, Goldstein, Berman} and is sometimes extended to momentum-dependent phenomenological potentials $g_{\sigma\sigma^\prime}(\bm q)$ as well \cite{ResonantPairing, Alexandrov, Eckardt}. Below we present a result of our theory which escapes this simplified approach.
Consider again the lower branch of the spectrum \eqref{Spec1} and assume the interaction potential to be independent of the particle spin, so that $f_{\uparrow\uparrow}(\bm k^\prime,\bm k)=f_{\uparrow\downarrow}(\bm k^\prime,\bm k)\equiv f(\bm k^\prime,\bm k)$. By using the relations \eqref{Sigmas3D} we obtain
\begin{equation}
\label{DiluteMagnon}
\hbar\omega_\mathrm{m}(\bm p)=\frac{\hbar^2 p^2}{2m}-\frac{8\pi\hbar^2 n_0}{m}[f(\bm p/2,\bm p/2)-f(0,0)].
\end{equation}
The second term in the above equation for the magnon dispersion does not appear if one uses a standard hydrodynamic approach. Indeed, a mere Fourier expansion of the small-amplitude oscillations of the order parameter would yield an equation having the structure of \eqref{SpecB1}. For identical inter- and intra-species interactions the density-dependent term vanishes and one gets $\hbar\omega_{\mathrm m}(\bm p)\equiv \hbar^2 p^2/2m$. In the weakly-interacting limit $n a^3\ll 1$, the result \eqref{DiluteMagnon} can be reproduced if one instead applies the canonical Bogoliubov transformation to an \textit{ersatz} Hamiltonian (see Appendix A)
\begin{widetext}
\begin{equation}
\label{DiluteHamiltonian}
\hat H_\ast=\sum_{\textbf{p},\sigma}\frac{\hbar^2 p^2}{2m}\hat a_{\sigma, \textbf{p}}^{\dag} \hat a_{\sigma, \textbf{p}}+\frac{1}{2V}\sum_{\textbf k,\textbf p,\textbf q,\sigma,\sigma^\prime}\hat a_{\sigma, \textbf k+\textbf p}^{\dag} \hat a_{\sigma^\prime,\textbf k-\textbf p}^{\dag} g_{\sigma\sigma^\prime}(\bm p, \bm q)\hat a_{\sigma, \bm k+\bm q}\hat a_{\sigma^\prime,\bm k-\bm q},
\end{equation}
\end{widetext}
where
\begin{equation}
g_{\sigma\sigma^\prime}(\bm p, \bm q)\equiv-\frac{4\pi\hbar^2}{m} f_{\sigma\sigma^\prime}(\bm p, \bm q)
\end{equation}
are the properly defined effective potentials.
\begin{figure*}[t]
\noindent\centering{
\includegraphics[width=1.2\columnwidth] {Polaron.png}}
\caption{Second-order graphs for the magnon self-energy due to interaction with phonons. Bold black and empty lines are used for the phonon \eqref{PhononG} and magnon \eqref{MagnonG} Green's functions, respectively. Wavy lines carry the factor $\sqrt{n_0}$. The picture is fully analogous to the second-order perturbative treatment of an impurity in a one-component Bose-Einstein condensate (Bose polaron) done in \cite{Christensen}.}
\label{Polaron}
\end{figure*}
It would be wrong to identify the low-momentum expansion of the tail in Eq. \eqref{DiluteMagnon} with the drag density, as this expansion yields a subleading order with respect to the quantum depletion $n'\sim \sqrt{na^3}$ which enters the formula \eqref{magnonmass}. The leading correction to the magnon mass comes from the second-order approximation in $\beta$. For a weakly-interacting gas the result can be obtained by considering interaction of magnons with the Bogoliubov phonon modes. The bare propagators in this picture take the form
\begin{equation*}
\begin{split}
G_{\sigma\sigma'}(\mathsf p)&=\frac{1}{2}[G_\mathrm{ph}(\mathsf p)\pm G_\mathrm{m}(\mathsf p)]\\
F_{\sigma\sigma'}^\dagger(\mathsf p)&=\frac{1}{2}F_\mathrm{ph}^\dagger(\mathsf p),
\end{split}
\end{equation*}
with
\begin{equation}
\label{PhononG}
\begin{split}
G_\mathrm{ph}(\mathsf p)&=\frac{u_{\bm p}^2}{\hbar\omega-\hbar\omega_\mathrm{ph}(\bm p)+i0}-\frac{\upsilon_{\bm p}^2}{\hbar\omega+\hbar\omega_\mathrm{ph}(\bm p)-i0}\\
F_\mathrm{ph}^\dagger(\mathsf p)&=\frac{2 u_{\bm p} \upsilon_{\bm p}\,\hbar\omega_\mathrm{ph}(\bm p)}{(\hbar\omega-\hbar\omega_\mathrm{ph}(\bm p)+i0)(\hbar\omega+\hbar\omega_\mathrm{ph}(\bm p)-i0)}
\end{split}
\end{equation}
and
\begin{equation}
\label{MagnonG}
G_\mathrm{m}(\mathsf p)=\frac{1}{\hbar\omega-\hbar^2 p^2/2m+i0}
\end{equation}
being the phonon and the magnon Green's functions, respectively. Here
\begin{equation}
\label{BogoliubovCoeff}
\begin{split}
u_{\bm p}&=\sqrt{\frac{\hbar^2 p^2/2m+2ng}{2\hbar\omega_\mathrm{ph}(\bm p)}+\frac{1}{2}}\\
\upsilon_{\bm p}&=-\sqrt{\frac{\hbar^2 p^2/2m+2ng}{2\hbar\omega_\mathrm{ph}(\bm p)}-\frac{1}{2}}
\end{split}
\end{equation}
are the Bogoliubov coefficients for the phonon part. At this level of approximation we neglect the dependence of the effective potentials on the momenta [Eq. \eqref{f3D} for the scattering lengths with $a_{\uparrow\uparrow}=a_{\uparrow\downarrow}\equiv a$ and $g\equiv 4\pi\hbar^2 a/m$] and take $n=n_0$. Retaining the terms cubic and quartic in $\hat a_{\sigma,\bm p}$ with $\bm p\neq 0$ in the Hamiltonian \eqref{DiluteHamiltonian} and substituting
\begin{equation}
\label{ReducedBogoliubov}
\begin{split}
\hat a_{\uparrow,\bm p}&=\frac{1}{\sqrt{2}}(u_{\bm p} \hat b_{\bm p}+\upsilon_{\bm p} \hat b_{-\bm p}^\dagger+\hat c_{\bm p})\\
\hat a_{\downarrow,\bm p}&=\frac{1}{\sqrt{2}}(u_{\bm p} \hat b_{\bm p}+\upsilon_{\bm p} \hat b_{-\bm p}^\dagger-\hat c_{\bm p}),
\end{split}
\end{equation}
we get the magnon Hamiltonian
\begin{equation}
\label{MagnonHamiltonian}
\begin{split}
\hat H_\mathrm{m}&=\sum_{\textbf{p}}\frac{\hbar^2 p^2}{2m}\hat c_{ \textbf{p}}^{\dag} \hat c_{\textbf{p}}+\\
\frac{g}{V}&\sum_{\textbf k,\textbf p,\textbf q}\hat c_{\textbf k+\textbf p}^{\dag} \hat c_{\textbf k-\textbf p}^{\dag} \hat c_{\bm k+\bm q}\hat c_{\bm k-\bm q}+\hat H_{\mathrm{m}-\mathrm{ph}},
\end{split}
\end{equation}
where the last term
\begin{equation}
\label{PhononMagnon}
\hat H_{\mathrm{m}-\mathrm{ph}}=\frac{g}{V}\sqrt{\frac{N}{2}}\sum_{\textbf p,\textbf q}[\hat c_{\textbf p+\textbf q}^{\dag}\hat c_{\bm q}(u_p \hat b_{\bm p}+\upsilon_{\bm p} \hat b_{-\bm p}^\dagger)+h.c.]
\end{equation}
describes the interaction of magnons with phonons.
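Explicitly, second-order perturbation theory in \eqref{PhononMagnon} dresses the magnon with the self-energy
\begin{equation*}
\Sigma_\mathrm{m}(\bm p, \omega) = \frac{g^2 n}{2} \int \frac{d^3 q}{(2\pi)^3} \,
\frac{(u_{\bm q} + \upsilon_{\bm q})^2}{\hbar\omega - \hbar\omega_\mathrm{ph}(\bm q) - \hbar^2 (\bm p - \bm q)^2/2m + i0} ,
\end{equation*}
a standard second-order result, in complete analogy with the Fr\"ohlich-type treatment of the Bose polaron; as usual, the ultraviolet divergence of the integral is removed by the second-order Born renormalization of $g$.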
At zero temperature the second term in \eqref{MagnonHamiltonian} does not yield a renormalization of the magnon mass. We are thus left with the second-order contribution of \eqref{PhononMagnon} shown in Fig. \ref{Polaron}. We notice that the same graphs appear in the perturbation theory of a mobile impurity in a single-component condensate (Bose polaron) \cite{Christensen}. The magnon now drags a cloud of phonons, which increases its effective mass. According to the general formula \eqref{magnonmass}, the change in the mass is directly related to the Andreev-Bashkin entrainment effect. Evaluation of the graphs in Fig. \ref{Polaron} yields
\begin{equation}
\label{DiluteDrag}
\frac{\rho_{\uparrow\downarrow}}{mn}=\sqrt{\frac{2}{\pi}}\frac{64}{45}\sqrt{na^3}.
\end{equation}
The same formula for the superfluid drag density has been obtained in the earlier works \cite{Pastukhov1, Fil} by using hydrodynamic approaches. Hence, our consideration establishes a link between the effect of entrainment and the physics of Bose polarons.
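Numerically, the prefactor in Eq.~\eqref{DiluteDrag} is $\sqrt{2/\pi}\,(64/45) \approx 1.13$, so that for a typical gas parameter $n a^3 = 10^{-4}$ one gets
\begin{equation*}
\frac{\rho_{\uparrow\downarrow}}{mn} \approx 1.13 \, \sqrt{n a^3} \approx 1.1 \times 10^{-2} ,
\end{equation*}
i.e. a percent-level correction to the magnon mass \eqref{magnonmass}.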
Let us now follow Beliaev \cite{Beliaev} in considering the behaviour of the magnon dispersion \eqref{DiluteMagnon} in the high-momentum region $pa\sim 1$. By using the well-known result
\begin{equation}
\label{f3Dk}
f_0(k)=-\frac{\sin (k a)}{k} e^{-i k a}
\end{equation}
for the $s$-wave part of $f(\bm k^\prime,\bm k)$ at the mass shell we get
\begin{equation}
\label{MagnonHighMomenta}
\hbar\omega_\mathrm{m}(\bm p)=\frac{\hbar^2 p^2}{2m}+\frac{8\pi\hbar^2 n_0 a}{m}\left[\frac{\sin (pa)}{pa} -1\right],
\end{equation}
where we have omitted the imaginary part describing the damping of quasiparticles. A very similar expression can be obtained for the phonon (upper) branch of the dispersion. In that latter case Beliaev noticed that, if one formally allows the parameter $na^3$ to approach unity, the spectrum develops a roton minimum. Such a hypothetical state would mimic superfluid Helium, rotonization of the spectrum being a signature of strong correlations and a precursor of an eventual transition to a solid state. An alternative way to probe that kind of physics is to use long-range interactions. Thus, for dipolar interactions the roton structure in the spectrum can be observed in the dilute and weakly-interacting limit \cite{Rotons}, signalling a possible transition to a supersolid \cite{FragmentedSS}.
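Note that at $pa \ll 1$ the expansion $\sin(pa)/(pa) - 1 \approx -(pa)^2/6$ reduces Eq.~\eqref{MagnonHighMomenta} to
\begin{equation*}
\hbar\omega_\mathrm{m}(\bm p) \approx \frac{\hbar^2 p^2}{2m} \left( 1 - \frac{8\pi}{3} \, n_0 a^3 \right) ,
\end{equation*}
a mass correction of order $n a^3$ only, in accordance with the remark above that the leading renormalization \eqref{DiluteDrag} is of order $\sqrt{n a^3}$ and originates from the phonon dressing.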
The magnon dispersion \eqref{MagnonHighMomenta} does not develop a roton minimum upon increasing $na^3$. Rather, it flattens, showing a gradual increase of the quasiparticle mass. In terms of the above analogy with the Bose polaron, one can speak about \textit{magnon self-localization} [for a discussion of the self-localization of polarons see Ref. \cite{Luis} and references therein]. In fact, cooperative self-localization of multiple impurities has been argued to represent the nucleation process for the phase separation transition \cite{Santamore}. On the other hand, one cannot exclude the possibility of finding \textit{magneto-rotons} \cite{SpinRotons} in the more general case of unequal interactions, where the spin-wave dispersion becomes linear at the end-point. The resulting instability in this case could bring the system to a new phase, a \textit{magnon crystal}. Still retaining a uniform density, the mixed condensate would separate into an ordered array of domains characterized by alternating spin polarization \cite{MagnonCondensate}. Investigation of this intriguing possibility is the subject of ongoing work.
\subsection{2D gas}
The $s$-wave scattering amplitude in 2D is given by \cite{Landau}
\begin{equation}
\label{f2D}
f_0(E_{\bm k})=\frac{2\pi}{\ln{(E_{\bm k}/E_a)}},
\end{equation}
where we have defined $E_a=\hbar^2/ma^2$. For a hard-core potential of radius $R_e$ and at small momenta one has $f(\bm k^\prime,\bm k)\approx f_0(k)$ with $a=e^{\gamma} R_e/2$, where $\gamma\approx 0.577$ is the Euler-Mascheroni constant \cite{Schick}. The integral term on the r.h.s. of Eq. \eqref{Vert3} cannot be ignored, and it defines the value of the chemical potential [the formula \eqref{mu}] via the transcendental equation
\begin{equation}
\label{mu2D}
\mu=-\frac{2\hbar^2 n_0}{m}\frac{2\pi}{\ln p_c a},
\end{equation}
where $p_c\equiv\sqrt{2m\mu}/\hbar$ and one assumes $p_c a\ll 1$. Furthermore, by using the first formula in \eqref{SigmaViaTs} and assuming $\hbar\omega\approx\hbar^2 p^2/2m$, we obtain
\begin{equation}
\label{2Dcorrection}
\Sigma_s^{\uparrow\uparrow}(\mathsf p)-\Sigma_s^{\uparrow\downarrow}(\mathsf p)-\mu=-\frac{\pi\hbar^2 n_0}{2m}\frac{1}{\ln^2{p_c a}}\left(\frac{p}{p_c}\right)^2,
\end{equation}
which holds at $p\ll p_c$. Upon substitution into \eqref{Spec1} and comparison with the formula \eqref{magnonmass} [where we must let $n=n_0$] this yields
\begin{equation}
\label{2Ddrag}
\rho_{\uparrow\downarrow}=-\frac{1}{8\ln p_c a}mn,
\end{equation}
for the superfluid drag in 2D. The result agrees with that of Ref. \cite{Pastukhov2}. In contrast to the 3D case, here the drag contributes to the magnon dispersion already in the first order of perturbation theory. This reflects the enhanced role of quantum fluctuations and polaronic effects in low dimensions.
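For orientation, the transcendental equation \eqref{mu2D} is readily solved by fixed-point iteration, after which \eqref{2Ddrag} yields the drag fraction directly. A minimal sketch in units $\hbar=m=1$ (the density, the scattering length and the function name are illustrative choices of ours):
\begin{verbatim}
import numpy as np

# Solve mu = -4*pi*n0/ln(p_c a) with p_c = sqrt(2*mu)  [Eq. (mu2D), hbar=m=1],
# then evaluate rho_{ud}/(m n) = -1/(8 ln p_c a)  [Eq. (2Ddrag)].
def drag_fraction_2d(n0, a, n_iter=100):
    mu = n0                                   # crude initial guess
    for _ in range(n_iter):
        mu = -4.0 * np.pi * n0 / np.log(a * np.sqrt(2.0 * mu))
    return mu, -1.0 / (8.0 * np.log(a * np.sqrt(2.0 * mu)))

mu, drag = drag_fraction_2d(n0=1.0, a=1e-4)   # p_c*a << 1 is required
print(f"mu = {mu:.4f},  rho_ud/(m n) = {drag:.4f}")
\end{verbatim}
For $p_ca\sim 10^{-4}$ the drag fraction is of order $10^{-2}$, illustrating that in 2D the effect is only logarithmically suppressed rather than suppressed by powers of the gas parameter.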
Another important distinction from the 3D geometry is that the tail \eqref{2Dcorrection} cannot be reproduced by doing the Bogoliubov transformation of the Hamiltonian \eqref{DiluteHamiltonian}, in which $g_{\sigma\sigma^\prime}(\bm p, \bm q)$ is expressed via the 2D scattering length \eqref{f2D}. In this sense the concept of an effective interaction does not apply here. Still, one can use the standard relationship \cite{Schick}
\begin{equation}
\label{g2D}
g=-\frac{2\hbar^2}{m}f_0(2\mu)
\end{equation}
to calculate the chemical potential \eqref{mu2D} and the excitation spectrum without the entrainment.
For long-range dipolar interactions the formula \eqref{g2D} should be supplemented with the so-called anomalous term \cite{Landau}, which to the leading order depends linearly on the transferred momentum \cite{DipolarScattering},
\begin{equation}
g(\bm p, \bm q)=g-\frac{2\pi\hbar^2}{m} \lvert \bm p -\bm q \rvert r_\ast,
\end{equation}
where $r_\ast$ is the dipolar length. For $ng\ll \hbar^2/mr_\ast^2$ the phonon branch of the spectrum may develop a roton-maxon structure \cite{RotonSpectrum}. As regards the magnon dispersion, no traces of the dipolar tail remain, since the transferred momentum is identically zero for the forward scattering, which defines the correction to the dispersion in this case [see Eq. \eqref{DiluteMagnon}]. In other words, the contribution of the dipolar tail to the superfluid drag can be neglected in the first approximation \cite{footnote}.
Finally, it is worth pointing out that the condition of weak interactions in 2D is automatically fulfilled in the range of validity of the formula \eqref{f2D}, i.e. $E_{\bm k}\ll E_a$. A different situation takes place in the vicinity of a shape resonance \cite{ResonantPairing}. An extension of our approach to this case will be given in a separate paper.
\section{Conclusions}
Our consideration generalizes the Beliaev diagrammatic theory to the case of a binary mixture of Bose-Einstein condensates. The elementary excitation spectrum consists of two gapless modes, one of which takes the parabolic form \eqref{magnon} in the limit where the inter- and intra-species interactions are the same. We observe a renormalization of the magnon mass due to the superfluid drag effect, which contributes to the expansion of the kinetic energy of the system at small momenta. In the dilute regime the diagrams for the self-energy parts decouple into a set of independent ladders. This yields three effective potentials expressed via the corresponding scattering amplitudes. For weak interactions in 3D these potentials can be used to construct the effective Hamiltonian \eqref{DiluteHamiltonian} suitable for the perturbative expansion. The drag contributes to the magnon dispersion in the second order of the perturbation theory and can be calculated by dressing the magnons with the Bogoliubov phonon modes. The problem shares fruitful analogies with the physics of Bose polarons. Thus, an interesting direction for future work is the search for \textit{magneto-rotons} and self-localized \textit{magnon crystals} in long-range interacting systems with specially designed microscopic potentials. In 2D we find a renormalization of the magnon mass in the first approximation in $\beta$. This reflects the enhancement of quantum fluctuations in low dimensions. We thus expect the drag effect to play an important role in the quantum-mechanical stabilization of a collapsing 2D Bose-Bose mixture already in the limit of weak interactions.
\section{Acknowledgements}
S.V. acknowledges the support by the Government of the Russian Federation (Grant No. 074-U01) through ITMO Postdoctoral Fellowship scheme.
\section*{Appendix A: Bogoliubov transformation}
Consider the Hamiltonian \eqref{DiluteHamiltonian} and assume the effective interaction potential to be spin-independent and weak, i.e. $na^3\ll 1$. Following the standard procedure, we replace the operators $\hat{a}_{0, \sigma}$ and $\hat a^\dag_{0, \sigma}$ with c-numbers: $\hat{a}_{0, \sigma} = \sqrt{N_0}$. The occupation numbers for the states with finite momenta are assumed to be small. Retaining only quadratic terms in $\hat{a}_{\bm p, \sigma}$ and $\hat a^\dag_{\bm p, \sigma}$ with $\bm p \neq 0$, we get
\begin{widetext}
\begin{equation*}
\begin{aligned}
\hat{H}_{\ast}=\sum_{{\bm p},\sigma}\varepsilon_{\bm p}^0 \hat{a}_{{\bm p},\sigma}^{\dag}
\hat{a}_{{\bm p},\sigma}+
\frac{n_0}{2} \sum_{{\bm p}} \left( g(0,{\bm p}) (\hat{a}_{{\bm p},\sigma}\hat{a}_{\bm{-p},\sigma} + \hat{a}_{{\bm p},\sigma}\hat{a}_{\bm{-p},\sigma^\prime} +\hat{a}_{\bm{-p},\sigma}\hat{a}_{{\bm p},\sigma^\prime} + \hat{a}_{{\bm p},\sigma^\prime}\hat{a}_{\bm{-p},\sigma^\prime}) + h.c. \right) + \\
\frac{n_0}{2} \sum_{{\bm p}} g(\tfrac{{\bm p}}2,\tfrac{{\bm p}}2)( 4 \hat{a}_{{\bm p},\sigma}^{\dag} \hat{a}_{{\bm p},\sigma} + 4 \hat{a}_{{\bm p},\sigma^\prime}^{\dag} \hat{a}_{{\bm p},\sigma^\prime})
+\frac{n_0}{2} \sum_{{\bm p}} g(\tfrac{{\bm p}}2,\tfrac{\bm{-p}}2)( 2\hat{a}_{{\bm p},\sigma}^{\dag} \hat{a}_{{\bm p},\sigma^\prime} + 2\hat{a}_{{\bm p},\sigma^\prime}^{\dag} \hat{a}_{{\bm p},\sigma} + 2\hat{a}_{{\bm p},\sigma}^{\dag} \hat{a}_{{\bm p},\sigma} + 2 \hat{a}_{{\bm p},\sigma^\prime}^{\dag} \hat{a}_{{\bm p},\sigma^\prime} ) \\ - g(0,0) \sum_{{\bm p}} ( 4N \hat{a}_{{\bm p},\sigma}^{\dag} \hat{a}_{{\bm p},\sigma} + 4N \hat{a}_{{\bm p},\sigma^\prime}^{\dag} \hat{a}_{{\bm p},\sigma^\prime} ),
\end{aligned}
\end{equation*}
\end{widetext}
Denoting for simplicity $ \hat{a}_{{\bm p},\sigma} = \hat{a}_{{\bm p}},
\; \hat{a}_{{\bm p},\sigma^\prime} = \hat{b}_{{\bm p}}
\; $ and $ \; \varepsilon^\prime_{\bm p} = \varepsilon_{\bm p}^0 + 2 n_0 g(\tfrac{{\bm p}}2,\tfrac{{\bm p}}2) + n_0 g(\tfrac{{\bm p}}2,\tfrac{\bm{-p}}2) - 2 n_0 g(0,0)$ we rewrite the above equation in the form
\begin{widetext}
\begin{equation*}
\hat{H}_{\ast}=\sum_{{\bm p}} \Big[ \varepsilon_{\bm p}^\prime \big( \hat{a}_{{\bm p}}^{\dag}
\hat{a}_{{\bm p}} + \hat{b}_{{\bm p}}^{\dag}\hat{b}_{{\bm p}} \big) +
\frac{n_0}{2} \big( g(0,{\bm p}) (\hat{a}_{{\bm p}}\hat{a}_{\bm{-p}} + \hat{a}_{{\bm p}}\hat{b}_{\bm{-p}} + \hat{a}_{\bm{-p}}\hat{b}_{{\bm p}}+ \hat{b}_{{\bm p}}\hat{b}_{\bm{-p}}) + h.c. \big) + n_0 g(\tfrac{{\bm p}}2,\tfrac{\bm{-p}}2) \big( \hat{a}_{{\bm p}}^{\dag} \hat{b}_{{\bm p}} + \hat{b}_{{\bm p}}^{\dag} \hat{a}_{{\bm p}}\big) \Big].
\end{equation*}
\end{widetext}
Consider a unitary transformation $U$ with real coefficients and assume $\lambda_i(p)$ and $\beta_i(p)$ to be even functions of $p$:
\begin{eqnarray*}
U \hat{a}_{{\bm p}} U^\dag = \lambda_1(p) \hat{a}_{{\bm p}} + \lambda_2(p) \hat{a}_{\bm{-p}}^\dag + \beta_1(p) \hat{b}_{{\bm p}} + \beta_2(p) \hat{b}_{{\bm p}}^\dag \\
U \hat{b}_{{\bm p}} U^\dag = \lambda_3(p) \hat{a}_{{\bm p}} + \lambda_4(p) \hat{a}_{\bm{-p}}^\dag + \beta_3(p) \hat{b}_{{\bm p}} + \beta_4(p) \hat{b}_{{\bm p}}^\dag
\end{eqnarray*}
$U^\dag \hat{a}_{{\bm p}} U$ and $U^\dag \hat{b}_{{\bm p}} U$ can be understood as quasiparticle annihilation operators. We search for $U$ that diagonalises the Hamiltonian, so $U \hat{H}_{\ast} U^\dag = \sum_{{\bm p}}\omega_{\bm p}\big( \hat{a}_{{\bm p}}^{\dag} \hat{a}_{{\bm p}} + \hat{b}_{{\bm p}}^{\dag}\hat{b}_{{\bm p}} \big) + E_0$, where $\omega_{\bm p}$ is the excitation energy.
Using the relations $U[ \hat{H}_{\ast}, \hat{a}_{{\bm p}}^\dag] U^\dag =[ U \hat{H}_{\ast} U^\dag, U \hat{a}_{{\bm p}}^\dag U^\dag]$, $U[ \hat{H}_{\ast}, \hat{b}_{{\bm p}}^\dag] U^\dag =[ U \hat{H}_{\ast} U^\dag, U \hat{b}_{{\bm p}}^\dag U^\dag]$, we get the linear system
\begin{widetext}
\begin{equation*}
\begin{split}
\varepsilon_{\bm p}^\prime \big( \lambda_1 \hat{a}_{{\bm p}}^\dag + \lambda_2 \hat{a}_{\bm{-p}} +\beta_1\hat{b}_{{\bm p}}^\dag + \beta_2 \hat{b}_{\bm{-p}} \big) + \frac{n_0}{2} g(0,{\bm p}) \big( (\lambda_1 + \lambda_3) \hat{a}_{\bm{-p}} + (\lambda_2 +\lambda_4) \hat{a}_{{\bm p}}^\dag + (\beta_1+\beta_3) \hat{b}_{\bm{-p}} + (\beta_2+\beta_4) \hat{b}_{{\bm p}}^\dag \big) + \\ + n_0 g(\tfrac{{\bm p}}2,\tfrac{\bm{-p}}2) \big( \lambda_3 \hat{a}_{{\bm p}}^\dag + \lambda_4 \hat{a}_{\bm{-p}} +\beta_3 \hat{b}_{{\bm p}}^\dag + \beta_4 \hat{b}_{\bm{-p}} \big) = \omega_{\bm p}\big( \lambda_1 \hat{a}_{{\bm p}}^\dag - \lambda_2 \hat{a}_{\bm{-p}} +\beta_1\hat{b}_{{\bm p}}^\dag - \beta_2 \hat{b}_{\bm{-p}} \big)
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\varepsilon_{\bm p}^\prime \big( \lambda_3 \hat{a}_{{\bm p}}^\dag + \lambda_4 \hat{a}_{\bm{-p}} +\beta_3 \hat{b}_{{\bm p}}^\dag + \beta_4 \hat{b}_{\bm{-p}} \big) + \frac{n_0}{2} g(0,{\bm p}) \big( (\lambda_1 + \lambda_3) \hat{a}_{\bm{-p}} + (\lambda_2 +\lambda_4) \hat{a}_{{\bm p}}^\dag + (\beta_1+\beta_3) \hat{b}_{\bm{-p}} + (\beta_2+\beta_4) \hat{b}_{{\bm p}}^\dag \big) + \\ + n_0 g(\tfrac{{\bm p}}2,\tfrac{\bm{-p}}2) \big( \lambda_1 \hat{a}_{{\bm p}}^\dag + \lambda_2 \hat{a}_{\bm{-p}} +\beta_1\hat{b}_{{\bm p}}^\dag + \beta_2 \hat{b}_{\bm{-p}} \big) = \omega_{\bm p}\big( \lambda_3 \hat{a}_{{\bm p}}^\dag - \lambda_4 \hat{a}_{\bm{-p}} +\beta_3 \hat{b}_{{\bm p}}^\dag - \beta_4 \hat{b}_{\bm{-p}} \big)
\end{split}
\end{equation*}
\end{widetext}
The dispersion law can be obtained by equating the determinant to zero. We find
\begin{equation}
\label{gDiluteMagnon}
\omega_{\bm p} = \varepsilon_{\bm p}^0 + 2 n_0 \Big[ g(\tfrac{{\bm p}}2,\tfrac{{\bm p}}2) - g(0,0) \Big]
\end{equation}
in agreement with the formula \eqref{DiluteMagnon} and
\begin{widetext}
\begin{equation}
\label{gDilutePhonon}
\omega_{\bm p} = \sqrt{(\varepsilon_{\bm p}^0)^2+4 n_0 \varepsilon_{\bm p}^0
\big[ g(\tfrac{{\bm p}}2,\tfrac{\bm{-p}}2) + g(\tfrac{{\bm p}}2,\tfrac{{\bm p}}2) - g(0,0) \big] + 4 n_0^2 \Big[\big( g(\tfrac{{\bm p}}2,\tfrac{\bm{-p}}2) + g(\tfrac{{\bm p}}2,\tfrac{{\bm p}}2) - g(0,0) \big)^2 - g(0,{\bm p})^2 \Big]},
\end{equation}
\end{widetext}
which has the typical linear form at $\bm p\rightarrow 0$ and describes the excitation of phonons. The manifestation of the entrainment in this latter branch will be discussed in a separate paper.
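The two branches can also be cross-checked symbolically: in the symmetric/antisymmetric basis $\hat s_{\bm p}=(\hat a_{\bm p}+\hat b_{\bm p})/\sqrt 2$ and $\hat d_{\bm p}=(\hat a_{\bm p}-\hat b_{\bm p})/\sqrt 2$ the quadratic Hamiltonian decouples, the $d$-mode carrying no anomalous pairing and the $s$-mode reducing to a standard single-component Bogoliubov problem with pairing $\Delta=2n_0 g(0,\bm p)$. The following sympy sketch (variable names are ours; \texttt{g\_s}, \texttt{g\_d}, \texttt{g\_an}, \texttt{g0} stand for $g(\tfrac{{\bm p}}2,\tfrac{{\bm p}}2)$, $g(\tfrac{{\bm p}}2,\tfrac{\bm{-p}}2)$, $g(0,{\bm p})$ and $g(0,0)$) verifies that this shortcut reproduces \eqref{gDiluteMagnon} and \eqref{gDilutePhonon}:
\begin{verbatim}
import sympy as sp

e0, n0, g_s, g_d, g_an, g0 = sp.symbols(
    'epsilon0 n0 g_s g_d g_an g_0', positive=True)

eps_prime = e0 + 2*n0*g_s + n0*g_d - 2*n0*g0   # epsilon'_p of Appendix A
E_d = eps_prime - n0*g_d                       # d-mode: no pairing term
E_s, Delta = eps_prime + n0*g_d, 2*n0*g_an     # s-mode Bogoliubov data

magnon  = e0 + 2*n0*(g_s - g0)                              # (gDiluteMagnon)
phonon2 = (e0**2 + 4*n0*e0*(g_d + g_s - g0)
           + 4*n0**2*((g_d + g_s - g0)**2 - g_an**2))       # (gDilutePhonon)^2

assert sp.expand(E_d - magnon) == 0
assert sp.expand(E_s**2 - Delta**2 - phonon2) == 0          # omega_s^2
print("Both dispersion branches reproduced.")
\end{verbatim}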
\section*{Appendix B: General case of unequal masses, densities and interaction potentials}
In this section we consider a general situation: two types of bosons, ``a'' and ``b'', with different masses $m_a$ and $m_b$, respectively. We thus have two different bare Green's functions
\begin{eqnarray}
\label{Ga1}
G^{-1}_a(\omega, \textbf{p}) &=& \omega - \frac{p^2}{2m_a} +\mu_a +i0, \\ \label{Ga2}
G^{-1}_b(\omega, \textbf{p}) &=& \omega - \frac{p^2}{2m_b} +\mu_b +i0.
\end{eqnarray}
Although we now have two different kinds of particles, the basic idea of the theory is the same. It is easy to show that the main contributions to the self-energy parts stem from the ladder diagrams shown in Fig. \ref{Vertex}; other contributions are small in the gas parameter. For simplicity we treat the particle scattering amplitudes as momentum-independent; the corresponding renormalized interaction vertices are $g_{aa}, g_{ab}, g_{bb}$. We denote the condensate densities by $n_a$ and $n_b$. We thus have the following system of Dyson equations:
\begin{widetext}
\begin{equation}\label{Matr3}
\left[
\begin{array}{cccc}
G^{-1}_a(\mathsf p) - \Sigma^{aa}_{11} & -(n_a n_b)^{1/2} g_{ab} & - n_a g_{aa} & -(n_a n_b)^{1/2} g_{ab}\\
-(n_a n_b)^{1/2} g_{ab} & G^{-1}_b(\mathsf p) - \Sigma^{bb}_{11} & -(n_a n_b)^{1/2} g_{ab} & - n_b g_{bb} \\
- n_a g_{aa} & -(n_a n_b)^{1/2} g_{ab} & G^{-1}_a(-\mathsf p) - \Sigma^{aa}_{11} & -(n_a n_b)^{1/2} g_{ab} \\
-(n_a n_b)^{1/2} g_{ab} & - n_b g_{bb} & -(n_a n_b)^{1/2} g_{ab} & G^{-1}_b(-\mathsf p) - \Sigma^{bb}_{11} \\
\end{array}
\right] \left[
\begin{array}{c}
G_{aa}(\mathsf p) \\
G_{ba}(\mathsf p) \\
F^\dag_{aa}(\mathsf p) \\
F^\dag_{ba}(\mathsf p) \\
\end{array}
\right]
= \left[
\begin{array}{c}
1 \\
0 \\
0 \\
0 \\
\end{array}
\right],
\end{equation}
\end{widetext}
where
\begin{eqnarray}
\Sigma^{aa}_{11} &=& 2 n_a g_{aa} + n_b g_{ab}, \\
\Sigma^{bb}_{11} &=& 2 n_b g_{bb} + n_a g_{ab},
\end{eqnarray}
and the same with the interchange of all indices $a \leftrightarrow b$. First, one should find the chemical potentials $\mu_a$ and $\mu_b$. We put $\mathsf p=0$ into \eqref{Matr3} and solve it. After cumbersome calculations we find a solution that provides poles in the Green's functions at $\mathsf p=0$ and satisfies the conditions $F^\dag_{aa} \approx -G_{aa}$ and $F^\dag_{bb} \approx -G_{bb}$:
\begin{eqnarray}
\label{Chema}
\mu_a &=& n_a g_{aa} + n_b g_{ab} ,\\ \label{Chemb}
\mu_b &=& n_b g_{bb} + n_a g_{ab}.
\end{eqnarray}
These equations generalize the corresponding equations for the chemical potential in the main text. Now we can find the Green's functions; all of them have the same denominator
\begin{widetext}
\begin{equation}\label{Den1}
\begin{split}
D(\omega, \textbf{p}) = & \omega^4 - \left[ \varepsilon^2_a(p) + 2 n_a g_{aa} \varepsilon_a(p) + \varepsilon^2_b(p) + 2 n_b g_{bb} \varepsilon_b(p) \right] \omega^2 + \\
+ & \varepsilon_a(p) \varepsilon_b(p) \left[ \varepsilon_a(p) \varepsilon_b(p) + 2 n_b g_{bb} \varepsilon_a(p) + 2 n_a g_{aa} \varepsilon_b(p) - 4 n_a n_b (g^2_{ab} - g_{aa} g_{bb}) \right].
\end{split}
\end{equation}
\end{widetext}
Using this formula we can find the quasiparticle spectra of the system. One has positive roots if $g^2_{ab}<g_{aa} g_{bb}$, which corresponds to the miscibility condition \eqref{miscibility}. After some calculations we get
\begin{widetext}
\begin{equation}\label{Spec2}
\begin{split}
\omega^2(\textbf{p})= & \frac{1}{2} \Biggl[ \varepsilon^2_a(p) + 2 n_a g_{aa} \varepsilon_a(p) + \varepsilon^2_b(p) + 2 n_b g_{bb} \varepsilon_b(p) \pm \\ \pm & \sqrt{\left(\varepsilon^2_a(p) + 2 n_a g_{aa} \varepsilon_a(p) - \varepsilon^2_b(p) - 2 n_b g_{bb} \varepsilon_b(p)\right)^2 + 16 n_a n_b g^2_{ab} \varepsilon_a(p) \varepsilon_b(p) } \Biggr]. \\
\end{split}
\end{equation}
\end{widetext}
One can see that if $g_{ab}=0$ we recover the usual phonon spectra for the ``a'' and ``b'' particles, while in the symmetric case we recover the spectra given in the main text.
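This reduction can be verified symbolically; the following sympy sketch (notation as above) checks that both branches of \eqref{Spec2} are indeed roots of the quartic \eqref{Den1}, viewed as a quadratic in $\omega^2$:
\begin{verbatim}
import sympy as sp

ea, eb, gaa, gbb, gab, na, nb = sp.symbols(
    'epsilon_a epsilon_b g_aa g_bb g_ab n_a n_b', positive=True)
x = sp.symbols('x')  # stands for omega^2

S = ea**2 + 2*na*gaa*ea + eb**2 + 2*nb*gbb*eb
P = ea*eb*(ea*eb + 2*nb*gbb*ea + 2*na*gaa*eb
           - 4*na*nb*(gab**2 - gaa*gbb))
R = sp.sqrt((ea**2 + 2*na*gaa*ea - eb**2 - 2*nb*gbb*eb)**2
            + 16*na*nb*gab**2*ea*eb)

D = x**2 - S*x + P                      # D(omega, p) of Eq. (Den1)
for root in ((S + R)/2, (S - R)/2):     # the two branches of Eq. (Spec2)
    assert sp.expand(D.subs(x, root)) == 0
print("Both branches of (Spec2) solve D(omega, p) = 0.")
\end{verbatim}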
The Green's functions have the following form
\begin{widetext}
\begin{eqnarray}
G_{aa}(\omega, \textbf{p}) &=& \Bigl[ \omega^3 + \left(\varepsilon_a(p) + n_a g_{aa} \right) \omega^2 - \varepsilon_b(p) \left(\varepsilon_b(p) + 2 n_b g_{bb}\right) \omega - \\ \nonumber &-& \varepsilon_b(p) \left(\varepsilon_b(p)\left[\varepsilon_a(p) + n_a g_{aa}\right] + 2 \left[g_{bb}\varepsilon_a(p) - n_a g^2_{ab} + n_a g_{aa} g_{bb}\right]n_b \right) \Bigr]/D(\omega, \textbf{p}), \\
G_{ba}(\omega, \textbf{p}) &=& \frac{g_{ab}\sqrt{n_a n_b}(\omega+\varepsilon_a(p))(\omega+\varepsilon_b(p))}{D(\omega, \textbf{p})}, \\
F^\dag_{aa}(\omega, \textbf{p}) &=& \frac{n_a \varepsilon_b(p) \left[ g_{aa}\varepsilon_b(p) - 2 n_b g^2_{ab} + 2 n_b g_{aa} g_{bb} \right] - n_a g_{aa} \omega^2 }{D(\omega, \textbf{p})}, \\
F^\dag_{ba}(\omega, \textbf{p}) &=& \frac{g_{ab}\sqrt{n_a n_b}(\omega+\varepsilon_a(p))(\varepsilon_b(p)-\omega)}{D(\omega, \textbf{p})},
\end{eqnarray}
\end{widetext}
and the other Green's functions can be obtained by interchanging $a \leftrightarrow b$.
\section*{Appendix C: Experimental detection of magnons}
The spin-wave dispersion can be extracted from the measurements of the dynamic structure factor
\begin{equation}
S_\mathrm{m}(\bm q,\omega)=\frac{1}{n}\int \langle\hat n_\uparrow (\bm r,t)\hat n_\downarrow(0,0)\rangle e^{-i(\bm q\cdot\bm r-\omega t)}\,d\bm r\, dt
\end{equation}
as detailed in Ref. \cite{Carusotto}. Difficulties may arise in the case of the parabolic dependence \eqref{magnon}, since the spectrum takes this form at the miscibility transition point, where the condensates tend to separate. What could be more easily observed in this case is a change in the static structure factor
\begin{equation}
S_\mathrm{m}(\bm q)=\frac{\langle\hat n_{\uparrow,\bm q}\hat n_{\downarrow,-\bm q}\rangle}{N},
\end{equation}
where $\hat n_{\sigma,\bm q}=\int \hat n_\sigma(\bm r) e^{-i\bm q\bm r}d\bm r$. Within the Bogoliubov approach one has $\hat n_{\sigma,\bm q}=\sqrt{N}(\hat a_{\sigma,-\bm q}^\dagger+\hat a_{\sigma,\bm q})$. By substituting the Bogoliubov transformation for the operators $\hat a_{\sigma,\bm q}$ and taking advantage of the fact that at $T=0$ one has $\langle\hat b_{\bm q}\hat b_{\bm q}^\dagger\rangle=1$, we find at $q\rightarrow 0$
\begin{equation}
S_\mathrm{m}(\bm q)=\frac{1}{4}\sqrt{\frac{\hbar^2 q^2}{mn}}\frac{\sqrt{g_a}-\sqrt{g_s}}{\sqrt{g_a g_s}},
\end{equation}
where $g_{s,a}=g_{\uparrow\uparrow}\pm g_{\uparrow\downarrow}$. One can see that the static structure factor diverges like $\sim 1/\sqrt{g_a}$ as $g_{\uparrow\uparrow}\rightarrow g_{\uparrow\downarrow}$.
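The scaling is easily visualized numerically; a short sketch (units $\hbar=m=n=1$, fixed small $q$; all values illustrative):
\begin{verbatim}
import numpy as np

# S_m(q) at fixed small q (units hbar = m = n = 1); g_s fixed, g_a -> 0.
q, g_s = 0.1, 1.0
for g_a in (1e-1, 1e-2, 1e-3, 1e-4):
    S_m = 0.25 * q * (np.sqrt(g_a) - np.sqrt(g_s)) / np.sqrt(g_a * g_s)
    print(f"g_a = {g_a:.0e}  ->  S_m(q) = {S_m:+.3f}")
\end{verbatim}
Each decade decrease of $g_a$ enhances $|S_\mathrm{m}(\bm q)|$ by a factor $\sqrt{10}$, displaying the advertised $1/\sqrt{g_a}$ divergence.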
To study the renormalization of the magnon mass due to the entrainment one can employ the experimental scheme discussed in Ref. \cite{Marti}. In this experiment a standing wave of magnons is imprinted onto the condensate by illuminating the atoms with two equal-frequency circularly polarized light beams and modulating their intensity at the frequency corresponding to a Raman transfer between Zeeman levels. The dispersion relation can then be obtained by analyzing the dynamics of the resulting spin distribution. Interestingly, the authors ascertain a tiny increase of the magnon mass as compared to the bare mass of the atoms.
\section{Introduction}
The fascinating discovery journey of Carrollian physics began purely out of the mathematical curiosity of Lévy-Leblond \cite{Leblond1965}, who first proposed a new non-Lorentzian limit of flat spacetime and derived its resulting contracted isometry group. The novel Carrollian\footnote{It was named after Lewis Carroll, the author of Through the Looking-Glass.} limit (also referred to as the ultra-relativistic limit and the ultra-local limit by different authors), lying at the opposite side to the familiar Galilean (non-relativistic) limit, along with the associated geometries, symmetries, and rich physics unfolded in this limit, has recently gained unprecedented attention from many fields of theoretical physics, especially from the flat space holography community.
Given any relativistic theory, non-Lorentzian variants are regarded
as limits of the original relativistic theory as the \emph{speed of light}, $c$, approaches extreme values. There are two types of non-Lorentzian limit --- the Galilean limit and the Carrollian limit. The former corresponds to the limit $c \to \infty$\footnote{To be more rigorous, one would rather need to consider the dimensionless parameter $\frac{c}{v}$, where $v$ is a characteristic velocity of the problem under consideration. The final results, however, do not differ from naively using $c$ as the varying parameter.} while the latter corresponds to the opposite limit, $c \to 0$. Changing the speed of light affects spacetime structures, a notable example being the structure of light cones. In the well-familiar Galilean case, light cones expand as $c \to \infty$, so that a free particle traverses spacetime without a speed limit, and there exists a notion of absolute time. Light cones, however, collapse in the Carrollian limit $c \to 0$, hence freezing a free particle's motion and thereby completely inhibiting causal interactions between any spacetime events. It is in this sense that the Carrollian limit is sometimes called the ultra-local limit\footnote{A clarification of terminology is in order here. In terms of the dimensionless parameter $\frac{c}{v}$, the ultra-local limit corresponds to the case where $\frac{c}{v} \to 0$, meaning that the characteristic velocity of the problem tends to zero slower than $c$, in turn freezing the dynamics. On the other hand, the ultra-relativistic limit corresponds to the limit $\frac{c}{v} \to 1$, implying that $v$ tends to $c$ in this limit. Unfortunately, these two terminologies have been mixed up and used interchangeably in the literature.}. In addition, the trademark of Carrollian theories, contrary to the Galilean case, is the existence of absolute space.
Spacetime symmetries are also contracted to the Galilei group and the Carroll group in their respective limits and their associated Lie algebras are derived from the Inönü-Wigner contraction \cite{Inonu1953}.
Although Lévy-Leblond deemed practical utilization of the Carrollian limit and the Carroll group problematic, interest in Carrollian physics has recently been rejuvenated and has gained ever-increasing attention due to its wealth of interesting aspects and applications. Developments in this topic include the generalization of Carroll geometries beyond flat spacetime \cite{Duval:2014uoa,Ciambelli:2019lap}. The non-trivial dynamics of systems of Carroll particles, which occurs when turning on interactions and when particles are coupled to non-trivial background fields, has been explored in \cite{Bergshoeff:2014jla, Marsot:2021tvq, Bidussi:2021nmp, Marsot:2022qkx}. The Carrollian limit has also been studied in a wide range of relativistic theories: \cite{Gibbons:2002tv, Bagchi:2015nca, Bagchi:2016yyf, Bagchi:2017cte, Bagchi:2018wsn, Bagchi:2013bga, Bagchi:2021ban,Bergshoeff:2020xhv,Roychowdhury:2019aoi} for strings and branes, \cite{Ravera:2022buz} for supergravity theories, \cite{Basu:2018dub,Bagchi:2019clu, Banerjee:2020qjj} for electrodynamics, and aspects of Carrollian gravity have also been addressed in \cite{Hartong:2015xda, Bergshoeff:2017btm, Duval:2017els, Morand:2018tke,Bergshoeff:2019ctr,Gomis:2019nih, Ballesteros:2019mxi, Gomis:2020wxp, Grumiller:2020elf, Hansen:2021fxi,Concha:2021jnn,Guerrieri:2021cdz,Perez:2021abf,Sengupta:2022jlx,Perez:2022jpr, Campoleoni:2022ebj}. Furthermore, the recent resurgence of Carrollian physics was largely catalyzed by the deep connection between Carroll geometries and null boundaries. At asymptotic null infinities, the connection between the (conformal) Carroll group and the Bondi-van der Burg-Metzner-Sachs (BMS) group \cite{Duval:2014uva,Duval:2014lpa} plays a central role in the understanding of the holography of asymptotically flat spacetime \cite{Bagchi:2022emh,Campoleoni:2022wmf,Bagchi:2022owq, Donnay:2022aba, Donnay:2022sdg} and thereby motivates studies of Carrollian field theories \cite{Bagchi:2019xfx,Gupta:2020dtl,Bagchi:2022eav}. Carrollian physics has also appeared in the context of inflationary cosmology \cite{deBoer:2021jej}.
In this work, we are interested in Carrollian fluids, the other non-Lorentzian counterpart of relativistic fluids alongside non-relativistic Galilean (or Navier-Stokes) fluids. The hydrodynamic equations governing the dynamics of Carrollian fluids are derived from the $c \to 0$ limit of the relativistic conservation laws of general relativistic fluids \cite{Ciambelli:2018xat}. While seemingly irrelevant to real-world fluids, Carrollian hydrodynamics has been shown to have applications in the field of black holes and holography \cite{Penna:2018gfx, Ciambelli:2018ojf, Ciambelli:2018wre, Campoleoni:2018ltl,Donnay:2019jiz,Ciambelli:2020eba,Ciambelli:2020ftk, Bagchi:2021qfe,Bagchi:2021gai,Petkou:2022bmz}. Akin to the Galilean case, the hydrodynamic equations of Carrollian fluids include the evolution equation of the Carrollian energy density and the evolution equations of the Carrollian momentum density (which are the Carrollian analog of the Navier-Stokes equations). One apparent difference between the two non-Lorentzian fluids lies in their respective continuity equations. In the Galilean case, there is a notion of a spin-0 quantity, the fluid mass density, which is conserved. Carrollian fluids instead exhibit a conserved spin-1 quantity, the Carrollian heat current. Carrollian hydrodynamics also has one more constraint equation.
Since there are conservation laws for Carrollian fluids, a natural question, prompted by the Noether theorem, then arises --- \emph{what symmetries are associated with these Carrollian conservation laws?} This question has already been addressed in \cite{Ciambelli:2018ojf, Petkou:2022bmz}, where it has been demonstrated that Carrollian symmetry corresponds to the energy density and momentum density evolutions of Carrollian hydrodynamics. These works thus only managed to derive a part of the Carrollian fluid dynamics; the continuity equation of the Carrollian heat current and the constraint equation had to be supplemented in addition. Our objective is to complete and hence generalize their results and provide a complete derivation of Carrollian hydrodynamics from symmetries. The incompleteness in their derivations that we are trying to fix stems from the following:
\begin{enumerate}[label = \roman*)]
\item Phase space of Carrollian hydrodynamics presented in \cite{Ciambelli:2018ojf, Petkou:2022bmz} lacked two fluid momenta, namely the Carrollian energy density and the sub-leading Carrollian viscous stress tensor, which appears in the constraint equation. We will show that these two momenta are conjugate to the fluid velocity field and the sub-leading sphere metric.
\item Carrollian symmetry is too restrictive. Because there are more equations of Carrollian hydrodynamics than those in the original relativistic hydrodynamics, more symmetries are required. The main result in this work is the enhanced symmetries, called the near-Carrollian symmetries, that yield all equations of Carrollian fluids.
\end{enumerate}
The article is structured as follows. We start in section \ref{carroll-structure} with the introduction of Carroll structures, which serve as the most basic building blocks of Carroll geometries and Carrollian physics. We discuss Carrollian hydrodynamics in section \ref{hydro}, starting from the relativistic conservation laws and then carefully considering the Carrollian limit. This closely follows the idea first explored in \cite{Ciambelli:2018xat}, and we formalize it using the language of Carroll structures. Finally, in section \ref{hydro-sym}, we present a new viewpoint on Carrollian hydrodynamics based on symmetries. We propose a new notion of symmetries, which we call near-Carrollian symmetries, that extends the usual Carroll symmetries. We then demonstrate that these symmetries are associated with the full set of Carrollian hydrodynamic equations and derive the corresponding Noether charges. Lastly, we conclude in section \ref{conclusion} and comment on possible avenues of investigation.
\section{Carroll Structures} \label{carroll-structure}
We dedicate this section to describing the \emph{universal} building blocks of Carroll geometries which underpin the research field of Carrollian physics: the so-called \emph{Carroll structures}. In what follows, we consider a $3$-dimensional space $H$ endowed with a null metric $q$ whose kernel is generated by a nowhere vanishing vector field $\ell$, meaning that $q(\ell, \cdot) =0$. The triplet $(H,\ell, q)$ provides a (weak) definition of Carroll structures\footnote{A strong definition of Carroll structure requires, in addition, an affine connection that parallel transports both the metric $q$ and the vector $\ell$ \cite{Duval:2014uoa,Duval:2014uva,Duval:2014lpa}. This connection, however, is not uniquely determined from the pair $(\ell, q)$ due to the degenerate nature of $q$.} \cite{Duval:2014uoa,Duval:2014uva,Duval:2014lpa, Ciambelli:2019lap}. Carroll structures are universal intrinsic structures of null surfaces both at finite distances \cite{Chandrasekaran:2018aop, Chandrasekaran:2021hxc, Ashtekar:2021wld}\footnote{\label{kappa} In \cite{Chandrasekaran:2018aop}, a complete universal structure of a null surface, viewed as a hypersurface embedded in an ambient spacetime, also includes an in-affinity function $\kappa$ of the null vector $\ell$. $\kappa$ is defined as a time connection which transforms under rescaling $\ell \to e^\alpha \ell$ as $ \kappa \to e^{\alpha}( \kappa + \ell[\alpha])$.} and at asymptotic infinities \cite{Ashtekar:2014zsa, Ashtekar:2018lor}.
Carroll structures are naturally described in the language of fiber bundle \cite{Ciambelli:2019lap}. This specifically means that the space $H$ is a fiber bundle, $p: H \to S$, with a 1-dimensional fiber. The $2$-dimensional base space $S$ can be chosen, for relevant physics at hand, to have a topology of a $2$-sphere (and will be dubbed the sphere in this article). We denote local coordinates on the sphere $S$ by $\{ \sigma^A \}$ and denote by $q_{AB} \bm{\mathrm{d}} \sigma^A \circ \bm{\mathrm{d}} \sigma^B$ a metric on $S$.
Stemming from the fiber bundle structure of the space $H$, one defines the \emph{vertical subspace} of the tangent space $TH$, denoted by $\text{\bf vert}(H)$, to be a 1-dimensional kernel of the differential of the projection map, $\bm{\mathrm{d}} p: TH \to TS$,
\begin{align}
\mathrm{\bf vert}(H) := \text{ker}(\bm{\mathrm{d}} p).
\end{align}
A vertical vector field $\ell \in \mathrm{\bf vert}(H)$ that belongs to the Carroll structure is a preferred representative of the equivalence class $[\ell]_{\sim}$, with the equivalence relation being the rescalings that preserve the direction of $\ell$, that is $\ell \sim \mathrm{e}^{\epsilon}\ell$, where $\epsilon$ is an arbitrary function on the space $H$. In this sense, the Carrollian vector $\ell$ also serves as a basis of the vertical subspace. Another element of the Carroll structure is the null Carrollian metric $q$ whose 1-dimensional kernel coincides with the vertical subspace, implying that $q(\ell, \cdot) = 0$. The null metric can be obtained by pulling back a metric on the sphere $S$ by the projection map, that is
\begin{align}
q = p^* (q_{AB} \bm{\mathrm{d}} \sigma^A \otimes \bm{\mathrm{d}} \sigma^B) = q_{AB} \bm{e}^A \otimes \bm{e}^B,
\end{align}
where we introduced the co-frame field $\bm{e}^A$ which is the pullback of the coordinate form $\bm{\mathrm{d}} \sigma^A$ on the sphere $S$ by the projection map,
\begin{align}
\bm{e}^A:= p^*(\bm{\mathrm{d}} \sigma^A), \qquad \text{such that} \qquad \iota_\ell \bm{e}^A=0.
\end{align}
Note that the co-frame field, by definition, is a closed form on $H$, $\bm{\mathrm{d} e}^A =0$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.25]{Carroll}
\caption{The space $H$ endowed with the Carroll structure. The general coordinates are $x^i = (u,y^A)$ where the surfaces at the cuts $u = \text{constant}$ are identified with the sphere $S$. The vertical vector $\ell$ and the horizontal vector $e_A$ span the tangent space $TH$.} \label{carroll-pic}
\end{figure}
Provided the Carroll structure on $H$, it then becomes possible to have an intrinsic separation of the tangent space $TH = \text{\bf vert}(H) \oplus \text{\bf hor}(H)$ into the aforementioned vertical subspace, $\text{\bf vert}(H)$, and its complement, the \emph{horizontal subspace} denoted by $\text{\bf hor}(H)$. This splitting can be achieved by introducing a connection 1-form, $\bm{k} \in T^*H$, dual to the vertical vector $\ell$,
\begin{align}
\iota_{\ell} \bm{k} =1. \label{orthogonal}
\end{align}
The 1-form $\bm{k}$ is known as the \emph{Ehresmann connection} in the literature \cite{Ciambelli:2019lap,Chandrasekaran:2021hxc,Petkou:2022bmz }. Its kernel, seen as a linear map $\bm{k}: TH \to \mathbb{R}$, thus defines the $2$-dimensional horizontal subspace. This equivalently means that
\begin{equation}
\text{\bf hor}(H) := \{ X \in TH | \iota_X \bm{k} =0 \}.
\end{equation}
In the following, we will denote a basis of the horizontal subspace by $e_A \in \mathrm{\bf hor}(H)$ which, by definition, obeys the condition $\iota_{e_A} \bm{k}=0$. Furthermore, without loss of generality, we can choose these horizontal basis vector fields to be ones that are dual to the co-frame field,
\begin{align}
\iota_{e_A} \bm{e}^B= \delta_A^B.
\end{align}
The frame fields $(\ell, e_A)$ and the dual co-frame fields $(\bm{k}, \bm{e}^A)$ therefore serve as a complete basis for the tangent space $TH$ and the cotangent space $T^*H$, respectively (see Figure \ref{carroll-pic}). In this basis, any vector field $X \in TH$ and any 1-forms $\bm{\omega}\in T^* H$ can therefore be uniquely decomposed as follows:
\begin{align}
X= (\iota_X \bm{k}) \ell + (\iota_X \bm{e}^A) e_A, \qquad \text{and} \qquad \bm{\omega} = (\iota_{\ell} \bm{\omega}) \bm{k} + (\iota_{e_A} \bm{\omega}) \bm{e}^A.
\end{align}
Similarly, a differential of a function $F$ on the space $H$ can be expressed as
\begin{align}
\bm{\mathrm{d}} F = \ell [F] \bm{k} + e_A[F] \bm{e}^A.
\end{align}
Lastly, having the intrinsic splitting of the tangent space $TH = \text{\bf vert}(H) \oplus \text{\bf hor}(H)$, one can naturally define the horizontal projector from the tangent space $TH$ to its horizontal components as
\begin{align}
q_i{}^j := e^A{}_ie_A{}^j = \delta^j_i -k_i \ell^j, \label{C-projector}
\end{align}
and it satisfies the conditions $q_i{}^j k_j =0$ and $\ell^i q_i{}^j =0$.
\subsection{Acceleration, Vorticity, and Expansion}
Next, we introduce two important objects that are naturally inherited from the Carroll structure and they will later appear when discussing Carrollian hydrodynamics \cite{Ciambelli:2018xat, Ciambelli:2018ojf, Petkou:2022bmz}. These objects are the \textit{Carrollian acceleration}, denoted by $\varphi_A$, and the \textit{Carrollian vorticity}, denoted by $w_{AB}$. They are components of the curvature of the Ehresmann connection 1-form,
\begin{equation}
\begin{aligned}
\bm{\mathrm{d} k} &:= -\left(\varphi_A \bm{k} \wedge \bm{e}^A + \frac{1}{2} w_{AB} \bm{e}^A \wedge \bm{e}^B\right). \label{d-k}
\end{aligned}
\end{equation}
Let us also recall that the co-frame $\bm{e}^A$ is closed, i.e., $\bm{\mathrm{d} e}^A =0$. One can then show that the components $(\varphi_A,w_{AB})$ are also determined by the commutators of the basis vector fields. This correspondence can be established by invoking the identity $[\iota_X, {\mathcal{L}}_Y ]\bm{\omega} = \iota_{[X,Y]} \bm{\omega}$ for any vector fields $X, Y \in TH$ and any 1-form $\bm{\omega} \in T^*H$. By making use of the Cartan formula, ${\mathcal{L}}_X = \bm{\mathrm{d}} \iota_X + \iota_X \bm{\mathrm{d}}$, one can show that
\begin{equation}
\iota_X \iota_Y \bm{\mathrm{d} \o} = \iota_{[X,Y]}\bm{\o} + {\mathcal{L}}_Y (\iota_X \bm{\o}) -{\mathcal{L}}_X (\iota_Y \bm{\o}). \label{duality-com}
\end{equation}
Using this result and the property $\bm{\mathrm{d} e}^A =0$, we show that the commutators of the frame fields satisfy the conditions,
\begin{align}
\iota_{[\ell, e_A]} \bm{e}^B = 0, \qquad \text{and} \qquad \iota_{[e_A, e_B]} \bm{e}^C =0,
\end{align}
suggesting that both commutators $[\ell, e_A]$ and $[e_A,e_B]$ lie in the vertical subspace. Similarly, using the definition \eqref{d-k}, it then follows that,
\begin{align}
\varphi_A = \iota_{[\ell, e_A]} \bm{k}, \qquad \text{and} \qquad w_{AB} = \iota_{[e_A, e_B]} \bm{k}.
\end{align}
All these conditions therefore determine the commutation relations of the frame fields\footnote{Our definition of the Carrollian vorticity $w_{AB}$ differs from \cite{Ciambelli:2018xat, Petkou:2022bmz} by a factor of 2.},
\begin{align}
\boxed{
[e_A,e_B] = w_{AB}\ell,
\qquad \text{and} \qquad
[\ell,e_A]= \varphi_A \ell. \label{C-comm}
}
\end{align}
We comment here that the Jacobi identity of the commutators determines the evolution of the Carrollian vorticity,
\begin{align}
\ell[w_{AB}]= e_A[\varphi_B] -e_B[\varphi_A].
\end{align}
It is important to appreciate that, as we have already derived, the commutator between horizontal basis vectors $[e_A,e_B]$ does not lie in the horizontal subspace $\text{\bf hor}(H)$ when the Carrollian vorticity $w_{AB}$ does not vanish. Geometrically speaking, following from the Frobenius theorem, this means that the horizontal subspace $\text{\bf hor}(H)$ is not integrable in general, meaning that it cannot be treated as a tangent space to a $2$--dimensional submanifold of the space $H$.
Given the metric $q_{AB}$ on the sphere $S$, we define the \emph{expansion tensor} $\theta_{AB}$ as the change of the sphere metric along the vertical direction,
\begin{align}
\theta_{AB} := \frac{1}{2} {\mathcal{L}}_\ell q_{AB} = \frac{1}{2}\ell[q_{AB}].
\end{align}
The trace of the expansion tensor, called the \emph{expansion} and denoted by $\theta$, computes the change of the area element of the sphere $S$ along the vector $\ell$,
\begin{align}
\theta := q^{AB} \theta_{AB} = \ell[\ln \sqrt{q}].
\end{align}
\subsection{Horizontal Covariant Derivative}
Another ingredient that is needed in order to write the Carrollian conservation laws is the notion of the horizontal covariant derivative. To this end, we introduce the Christoffel-Carroll symbols \cite{Ciambelli:2018xat} defined in the same manner as the standard Christoffel symbols but using the $2$-sphere metric and the horizontal basis vectors,
\begin{align}
{}^{\scriptscriptstyle (2)}\Gamma^A_{BC} := \frac{1}{2} q^{AD}\left( e_B[ q_{DC}] +e_C[ q_{BD}] - e_D [q_{BC}] \right). \label{Chris-Car}
\end{align}
It is torsion-free, ${}^{\scriptscriptstyle (2)}\Gamma^A_{BC} = {}^{\scriptscriptstyle (2)}\Gamma^A_{CB}$ by definition. We then define the \emph{horizontal covariant derivative} (or sometimes called the Levi-Civita-Carroll covariant derivative \cite{Ciambelli:2018xat}) $\mathscr{D}_A$ which acts on a horizontal tensor $T = T^A{}_B e_A \otimes \bm{e}^B$ as
\begin{align}
\mathscr{D}_A T^B{}_C = e_A [T^B{}_C ]+ {}^{\scriptscriptstyle (2)}\Gamma^B_{DA} T^D{}_C - {}^{\scriptscriptstyle (2)}\Gamma^D_{CA} T^B{}_D,
\end{align}
and it can straightforwardly be generalized to a tensor of any degrees. By construction, the sphere metric $q_{AB}$ is compatible with this connection, that is $\mathscr{D}_C q_{AB} =0$.
One useful formula is that the horizontal divergence of a horizontal vector $X = X^A e_A$ is given by
\begin{align}
\mathscr{D}_A X^A = \frac{1}{\sqrt{q}} e_A \left[ \sqrt{q}X^A \right].
\end{align}
More details on this covariant derivative are provided in Appendix \ref{hor-derivative}.
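As a quick sanity check of this formula, one may verify the identity symbolically for a generic $2$-metric. The following sympy sketch (restricting to $\beta_A=0$ for brevity, so that $e_A$ reduces to $\partial/\partial\sigma^A$; all function names are ours) confirms that the Christoffel-Carroll trace reproduces the $\sqrt{q}$-weighted divergence:
\begin{verbatim}
import sympy as sp

s1, s2 = sp.symbols('sigma1 sigma2')
coords = (s1, s2)
q11, q12, q22 = (sp.Function(n)(s1, s2) for n in ('q11', 'q12', 'q22'))
q = sp.Matrix([[q11, q12], [q12, q22]])
qinv, sq = q.inv(), sp.sqrt(q.det())
X = [sp.Function(n)(s1, s2) for n in ('X1', 'X2')]

def Gamma(A, B, C):      # Christoffel-Carroll symbols, Eq. (Chris-Car)
    return sp.Rational(1, 2) * sum(
        qinv[A, D] * (sp.diff(q[D, C], coords[B])
                      + sp.diff(q[B, D], coords[C])
                      - sp.diff(q[B, C], coords[D])) for D in range(2))

div = (sum(sp.diff(X[A], coords[A]) for A in range(2))
       + sum(Gamma(A, A, B) * X[B] for A in range(2) for B in range(2)))
rhs = sum(sp.diff(sq * X[A], coords[A]) for A in range(2)) / sq
assert sp.simplify(div - rhs) == 0
print("Horizontal divergence identity verified.")
\end{verbatim}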
\subsection{Adapted coordinates for the Carroll structure} \label{sec-coordinate}
Up until this stage, we have kept our presentation of the Carroll structure abstract, and it is thus completely independent of the choice of coordinates on the space $H$. We can pretty much continue this trend for the rest of this article. However, some physical pictures are easily garnered when working explicitly with coordinates and, for practical purposes, some computations are conveniently carried out when expressed in coordinates. We discuss the coordinate choices in this section.
Since the space $H$ is structured as the fiber bundle over the sphere $S$, we can, without loss of generality, choose a general coordinate system $x^i = (u,y^A)$ such that open sets of the cuts at $u=\mathrm{constant}$, which we denote by $S_u$, are identified with open sets of the sphere $S$ through the projection map, $S_u \to S$, which maps the coordinates $y^A$ to the coordinates on the sphere\footnote{More rigorously, $p^A$ is a transition map, $p^A := (\sigma \circ p \circ x^{-1} (u,y))^A$, where $x: H \to \mathbb{R}^{D-1}$ and $\sigma: S \to \mathbb{R}^{D-2}$ provide, respectively, local coordinates on $H$ and $S$. },
\begin{align}
y^A \to \sigma^A= p^A(u,y^B).
\end{align}
In what follows, we will denote the Jacobian of the push-forward by $J: T S_u\to TS$, and it is explicitly given in coordinates by
$J_A{}^B =\partial_A p^B$, where we have used the notation $\partial_A := \frac{\partial}{\partial y^A}$. In this general coordinate system, the Carroll structure is then characterized by a scale factor $\alpha$ and a velocity field $V^A$ such that
\begin{align}
\ell = e^{-\alpha} D_u, \qquad \text{and} \qquad \bm{e}^A = (\bm{\mathrm{d}} y^B - V^B\bm{\mathrm{d}} u) J_B{}^A,
\end{align}
where we defined $D_u := (\partial_u + V^A \partial_A)$. Following from the definition of the co-frame field $\bm{e}^A := p^*(\bm{\mathrm{d}} \sigma^A)$, the velocity field $V^A$ can be expressed in terms of the projection map as
\begin{align}
V^A = - \partial_u p^B (J^{-1})_B{}^A, \qquad \text{such that} \qquad D_u p^A = 0,
\end{align}
where we introduced the matrix $J^{-1}$ to be the inverse of the Jacobian such that $J_A{}^C (J^{-1})_C{}^B = (J^{-1})_A{}^C J_C{}^B = \delta_A^B$. Let us also remark here that a change of the scale factor $\alpha$ preserves the Carroll structure while a variation of the velocity field $V^A$ changes the Carroll structure. It follows from the definition of the Jacobian that
\begin{align}\label{Carrollian-0}
\partial_B J_C{}^A = \partial_C J_B{}^A.
\end{align}
In addition, the property $\bm{\mathrm{d} e}^A =0$ imposes the following constraint on the Carrollian velocity and the Jacobian,
\begin{align}\label{Carrollian}
D_u J_B{}^A = -(\partial_B V^C) J_C{}^A, \qquad \text{and} \qquad D_u (J^{-1})_B{}^A = (J^{-1})_B{}^C\partial_C V^A.
\end{align}
The Ehresmann connection, obeying the condition $\iota_\ell \bm{k}=1$, is characterized by the \emph{Carrollian connection density}, $\beta_A$, and it can be parameterized as
\begin{align}
\bm{k} = \mathrm{e}^\alpha (\bm{\mathrm{d}} u - \beta_A \bm{e}^A).
\end{align}
The choice of the Ehresmann connection also fixes the form of the horizontal basis vectors $e_A$ by the conditions $\iota_{e_A} \bm{k} =0$ and $\iota_{e_A} \bm{e}^B = \delta^B_A$. In our parameterization, the horizontal basis is given by
\begin{align}
e_A = (J^{-1})_A{}^B\partial_B +\beta_A D_u.
\end{align}
In this general coordinate system, we can evaluate the Carrollian commutators and in turn obtain the coordinate expression of the Carrollian acceleration $\varphi_A$ and the Carrollian vorticity $w_{AB}$ (see Appendix \ref{Phi-W-derive}). They are given by
\begin{align}
\varphi_A &= D_u \beta_A +e_A[\alpha], \\
w_{AB} &= \mathrm{e}^\a \left( e_A[\beta_B] - e_B[\beta_A] \right).
\end{align}
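These coordinate expressions can be checked directly. A small sympy sketch, specializing for brevity to $V^A=0$, a trivial Jacobian, and a single sphere direction (all function names are ours), verifies the commutator $[\ell,e_A]=\varphi_A\ell$ together with the expression for $\varphi_A$:
\begin{verbatim}
import sympy as sp

# Check [ell, e] = varphi * ell for ell = exp(-alpha) d_u and
# e = d_sigma + beta d_u (one sphere direction, V^A = 0, J = id).
u, s = sp.symbols('u sigma')
alpha, beta, f = (sp.Function(n)(u, s) for n in ('alpha', 'beta', 'f'))

ell = lambda F: sp.exp(-alpha) * sp.diff(F, u)
e = lambda F: sp.diff(F, s) + beta * sp.diff(F, u)

commutator = ell(e(f)) - e(ell(f))
varphi = sp.diff(beta, u) + e(alpha)     # D_u beta_A + e_A[alpha]
assert sp.simplify(commutator - varphi * ell(f)) == 0
print("[ell, e] = varphi * ell verified.")
\end{verbatim}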
In this article, we will always work with the general coordinates $x^i = (u,y^A)$ on the space $H$ as they are, by construction, independent of the Carroll structure. Let us, however, mention that we can also choose to work with the \emph{adapted coordinates} $(u,\sigma^A)$ on $H$ which are such that the action of the projection is trivial, $p:(u,\sigma) \to \sigma$. With this choice, the coordinate $u$ is regarded as the fiber coordinate. By definition, the velocity field $V^A =0$ vanishes in the adapted coordinates. These coordinates are therefore co-moving coordinates, which are such that
\begin{align}
\ell = \mathrm{e}^{-\a}\partial_u, \qquad \text{and} \qquad \bm{e}^A = \bm{\mathrm{d}} \sigma^A.
\end{align}
To connect with the previous parameterization, one can derive, given the coordinates $y^A(u,\sigma)$, the following relations
\begin{align}
V^A = \frac{\partial y^A}{\partial u}, \qquad \text{and} \qquad (J^{-1})_A{}^B = \frac{\partial y^B}{\partial \sigma^A}.
\end{align}
The Ehresmann connection in the adapted coordinates therefore reads
\begin{align}
\bm{k}= \mathrm{e}^\alpha \left( \bm{\mathrm{d}} u - \beta_A \bm{\mathrm{d}} \sigma^A \right).
\end{align}
The expressions for the Carrollian acceleration and the Carrollian vorticity simplify in the co-moving coordinates to
\begin{align}
\varphi_A &=
\left( \frac{\partial}{\partial \sigma^A} + \beta_A \partial_u \right) \alpha + \partial_u \beta_A, \\
w_{AB} &= \mathrm{e}^\alpha \left( \left( \frac{\partial}{\partial \sigma^A} + \beta_A \partial_u \right) \beta_B - \left( \frac{\partial}{\partial \sigma^B} + \beta_B \partial_u \right) \beta_A \right).
\end{align}
The co-moving coordinates have been widely adopted in the Carrollian literature (see for example \cite{Duval:2014uoa, Ciambelli:2019lap, Ciambelli:2018ojf}) as the apparent absence of the velocity field and the Jacobian factor heavily simplifies all computations. Also, this choice of coordinates works well when considering field variations that leave the Carroll structure unchanged. We will, however, be more general by considering the set of variations that can change the Carroll structure, and will therefore work with the general, field-independent, coordinates $x^i = (u,y^A)$.
Let us also comment that the vorticity is the curvature of the Witt connection
\begin{equation}
w_{AB} = \mathrm{e}^\alpha\left(\frac{\partial}{\partial \sigma^A} \b_B -\frac{\partial}{\partial \sigma^B} \b_A + [\b_A,\b_B]_{\mathrm{W}}\right),
\qquad \text{where} \qquad
[a,b]_{\mathrm{W}} := a\partial_u b-b\partial_ua.
\end{equation}
The bracket $[\ ,\ ]_{\mathrm{W}}$ is the Witt bracket\footnote{This means that $\beta_A \partial_u$ is an element of the Witt algebra. Let us also comment that it is more common to work with the Laurent polynomial basis $L_n := - u^{n+1} \partial_u$ where now $\beta_A(u,\sigma) = \sum\limits_{n \in \mathbb{Z}} \beta_A^{(n)}(\sigma) L_n$. In this basis, the Witt algebra is in the well-familiar form, $[L_n,L_m]_{\text{W}} = (n-m)L_{n+m}$.}.
This means that the corresponding symmetry group is the group $\mathrm{Diff}(\mathbb{R})$ of space-dependent time reparameterizations.
An element of this group is denoted $\hat{U}$ and simply represented by a function $\hat{U}:u\to U(u, \sigma) $.
The demand that the vorticity vanishes, $w_{AB}=0$, means that $\beta_A \partial_u = -\hat{U}^{-1} \circ \partial_A \hat{U}$ is a flat $\mathrm{Diff}(\mathbb{R})$ connection. This implies that the coefficient $\beta_A$ is given\footnote{ This follows from the fact that $[\hat{U}^{-1} \phi](u,\sigma): = \phi(U(u,\sigma),\sigma)$ which gives
\begin{eqnarray}
[\partial_A \hat{U}^{-1}] \phi &=& [\partial_AU] [\partial_u\phi](U,\sigma),\cr
[ \hat\beta_A \circ \hat{U}^{-1} ] \phi(u,\sigma) &=& \beta_A \partial_u \phi( U,\sigma) = [\beta_A \partial_uU] (\partial_u\phi ) (U,\sigma),
\end{eqnarray}
where we denoted $\hat{\beta}_A :=\beta_A \partial_u$.
}, in a comoving coordinate system, by
\begin{equation}
\b_A = -\frac{\partial_A U}{\partial_u U}.
\end{equation}
The slices $U= \text{constant}$ are then the Bondi slices.
In an arbitrary coordinate system $\beta_A$ can be written simply as $\beta_A = e_A[u + U]$.
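This statement is also easy to verify symbolically: with $\b_A=-\partial_AU/\partial_uU$ the combination $e_A[\b_B]-e_B[\b_A]$ vanishes identically. A sympy sketch in co-moving coordinates (two sphere directions; names are ours):
\begin{verbatim}
import sympy as sp

u, s1, s2 = sp.symbols('u sigma1 sigma2')
U = sp.Function('U')(u, s1, s2)
beta = [-sp.diff(U, s) / sp.diff(U, u) for s in (s1, s2)]

def e(A, F):  # horizontal derivative e_A[F] = (d_A + beta_A d_u) F
    return sp.diff(F, (s1, s2)[A]) + beta[A] * sp.diff(F, u)

w12 = sp.simplify(e(0, beta[1]) - e(1, beta[0]))  # w_AB up to e^alpha
assert w12 == 0
print("w_AB = 0 for beta_A = -d_A U / d_u U.")
\end{verbatim}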
\subsection{Carrollian transformations}
We conclude our geometrical setup of Carroll structures by discussing Carrollian diffeomorphisms. In general, there are two types of diffeomorphisms of the space $H$ --- those that preserve the fiber bundle structure and those that change it. Here we will focus on the former case and we will discuss the latter when considering hydrodynamics in the next section.
Transformations that preserve the fiber bundle structure of the space $H$, which has been particularly referred to as \emph{Carrollian transformations} or \emph{Carrollian diffeomorphism} in the literature, are such that
\begin{align}
u \to u'(u,\sigma^A), \qquad \text{and} \qquad \sigma^A \to \sigma'^A(\sigma^B).
\end{align}
In this class of transformations, the co-frame field $\bm{e}^A$ only changes by a diffeomorphism on the sphere $S$, implying that the basis vector $\ell$ can only change by a rescaling, $\delta^{\sss \text{Carr}} \ell \propto \ell$. In other words, the new Carrollian vector still belongs to the equivalence class $[\ell]_{\sim}$. This therefore means that the velocity field is unchanged under Carrollian transformations,
\begin{align}
\delta^{\sss \text{Carr}} V^A = 0.
\end{align}
We now compute how the components $(\alpha, \beta_A, q_{AB})$ of the Carroll structure vary under an infinitesimal Carrollian diffeomorphism generated by a vector field
\begin{align}
\xi = \tau \ell + Y^A e_A, \qquad \text{where} \qquad \ell[Y^A] =0,
\end{align}
and $\tau$ is a generic function on the space $H$. It follows from
\begin{align}
\delta_\xi \ell = {\mathcal{L}}_\xi \ell = [\xi, \ell] = -\left( \ell[\tau] + Y^A \varphi_A \right) \ell,
\end{align}
together with $\delta^{\sss \text{Carr}} \ell = -(\delta^{\sss \text{Carr}} \alpha)\ell$ that the transformation of the scale factor is
\begin{align}
\delta^{\sss \text{Carr}}_{(\tau,Y)} \alpha = \ell[\tau] + Y^A \varphi_A. \label{del-C-a}
\end{align}
For the Carrollian connection $\beta_A$, we use that $\delta^{\sss \text{Carr}}_{(\tau,Y)} \bm{k} = {\mathcal{L}}_\xi \bm{k}$ to read off the transformation of $\beta_A$, which is
\begin{equation}
\begin{aligned}
- \mathrm{e}^\a \bbdelta^{\sss \text{Carr}}_{(\tau,Y)} \beta_A = (e_A - \varphi_A)[\tau] + w_{AB} Y^B,
\label{del-C-b}
\end{aligned}
\end{equation}
where we defined the variation $\bbdelta^{\sss \text{Carr}} \beta_A :=(J^{-1})_A{}^B \delta^{\sss \text{Carr}}(J_B{}^C \beta_C)$. Lastly, we use that $\delta^{\sss \text{Carr}}_{(\tau,Y)} q = {\mathcal{L}}_\xi q$ to show that the sphere metric $q_{AB}$ transforms as
\begin{align}
\bbdelta^{\sss \text{Carr}}_{(\tau,Y)} q_{AB} = 2\left( \tau \theta_{AB} + \mathscr{D}_{(A} Y_{B)} \right),\label{del-C-q}
\end{align}
where we defined $\bbdelta^{\sss \text{Carr}} q_{AB} := (J^{-1})_A{}^C (J^{-1})_B{}^D \delta^{\sss \text{Carr}} (J_C{}^E J_D{}^F q_{EF})$. Let us also note that one can consider Carrollian isometries such that ${\mathcal{L}}_\xi q = 0$ or conformal Carrollian isometries such that ${\mathcal{L}}_\xi q = \Omega q$, for a conformal factor $\Omega$. In such cases, we will have more constraints on the transformation parameters $(\tau, Y)$ (see for instance the discussions in \cite{Ciambelli:2019lap, Ciambelli:2018ojf, Donnay:2019jiz, Duval:2014uva}).
\section{Carrollian Hydrodynamics} \label{hydro}
Having formally established the essential elements of the Carroll structure, we proceed to the discussion of hydrodynamics and its ultra-relativistic cousin, namely \emph{Carrollian hydrodynamics}. It is a well-established fact that Galilean fluids can be derived by taking the non-relativistic limit, $c \to \infty$, of the general relativistic energy-momentum tensor $T^{ij}$, and their corresponding dynamics are therefore controlled by the non-relativistic version of the conservation laws, $\nabla_j T_i{}^j =0$. The equations governing the `Galilean' time evolution of the fluid are the continuity equation, the energy conservation equation, and the Navier-Stokes equations. In a much similar spirit, taking the Carrollian limit, $c \to 0$, leads to a new, and peculiar, kind of fluid and the corresponding hydrodynamic equations that are Carrollian-covariant \cite{Ciambelli:2018xat}. In this section, we will present how the Carrollian hydrodynamic equations can be obtained from the $c \to 0$ contraction of the relativistic conservation laws.
\subsection{Metric on $H$}
Until this stage, the geometry of the space $H$ has been constructed from the Carroll structure, which relies on the concept of a fiber bundle. In order to discuss the conservation equations of the fluid energy-momentum tensor, $\nabla_j T_i{}^j =0$, the space $H$ needs to be equipped with an additional structure: a $3$-dimensional Lorentzian metric $h = h_{ij} \bm{\mathrm{d}} x^i \otimes \bm{\mathrm{d}} x^j$ and the Levi-Civita connection $\nabla$ compatible with it. We will discuss the metric first.
We consider a family of Lorentzian metrics whose elements are labelled by a single real parameter, the \emph{speed of light}\footnote{In practice, it is the square of the speed of light, $c^2$, that will enter the computations.} $c$, and constructed entirely from the data of the Carroll geometry discussed in the previous section. By doing so, we ensure that the chosen metric is covariant under Carrollian diffeomorphisms. We further make the following assumptions on the components of the metric\footnote{The second condition $h(\ell, e_A) =0$, in fact, can be relaxed by choosing $h(\ell, e_A) = c^2 \mathrm{e}^\a B_A$ for an arbitrary function $B_A$. The choice of $B_A$ is pure gauge, as one can always absorb $B_A$ into the definition of the horizontal basis $e_A$, and correspondingly redefine the Ehresmann connection $\bm{k}$ and the sphere metric $q_{AB}$, by shifting the Carrollian connection $\beta_A \to \beta_A + B_A$. This new basis $e'_A = e_A + B_A D_u$ then satisfies the second condition $h(\ell,e'_A) =0$.},
\begin{align}
h(\ell, \ell) = - c^2, \qquad h(\ell, e_A) = 0, \qquad \text{and} \qquad h(e_A, e_B) = q_{AB}.
\end{align}
These conditions also imply that, when taking the limit $c \to 0$, the resulting metric on $H$ coincides with the null Carrollian metric, i.e., $h \stackrel{ c \to 0}{=} q$. Observe that the Carrollian vector field $\ell$ is timelike in general and becomes null in the Carrollian limit, $h(\ell, \ell) \stackrel{ c \to 0}{=} 0$. The metric $h$ and its inverse $h^{-1}$ are given in the Carrollian basis by\footnote{We use $\circ$ to denote the symmetric tensor product of tensors, i.e., $A \circ B = \frac{1}{2} \left(A \otimes B + B \otimes A \right)$.}
\begin{equation}
\begin{aligned}
h = - c^2 \bm{k} \circ \bm{k} + q_{AB}\bm{e}^A \circ \bm{e}^B, \qquad \text{and} \qquad h^{-1} = - c^{-2} \ell \circ \ell + q^{AB} e_A \circ e_B. \label{RPmetric}
\end{aligned}
\end{equation}
The inverse metric is thus singular in the Carrollian limit $c \to 0$. This particular form of the metric is known as the \emph{Randers-Papapetrou metric} and it has been utilized extensively in the Carrollian physics literature \cite{Ciambelli:2018xat, Ciambelli:2018wre,Ciambelli:2018ojf, Campoleoni:2018ltl, Petkou:2022bmz}. Also, having the metric $h$, one can derive the relations between the basis vectors and 1-forms, which are
\begin{align}
\bm{k} = -\frac{1}{c^2} h(\ell, \cdot), \qquad \text{and} \qquad \bm{e}^A = q^{AB} h(e_B, \cdot).
\end{align}
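As a consistency check of these relations, one can verify in coordinates that the two expressions in \eqref{RPmetric} are indeed inverses of one another. A sympy sketch in co-moving coordinates (generic $\alpha$, $\beta_A$ and $q_{AB}$; all names are ours):
\begin{verbatim}
import sympy as sp

u, s1, s2, c = sp.symbols('u sigma1 sigma2 c')
al = sp.Function('alpha')(u, s1, s2)
b1, b2 = (sp.Function(n)(u, s1, s2) for n in ('beta1', 'beta2'))
q11, q12, q22 = (sp.Function(n)(u, s1, s2) for n in ('q11', 'q12', 'q22'))
qm = sp.Matrix([[q11, q12], [q12, q22]])

k = sp.Matrix([sp.exp(al), -sp.exp(al)*b1, -sp.exp(al)*b2])  # k_i
E = sp.Matrix([[0, 1, 0], [0, 0, 1]])                        # e^A_i
ell = sp.Matrix([sp.exp(-al), 0, 0])                         # ell^i
F = sp.Matrix([[b1, 1, 0], [b2, 0, 1]])                      # e_A^i

h = -c**2 * k * k.T + E.T * qm * E                # h_{ij}
hinv = -ell * ell.T / c**2 + F.T * qm.inv() * F   # h^{ij}
assert sp.simplify(h * hinv - sp.eye(3)) == sp.zeros(3, 3)
print("h_{ij} h^{jk} = delta_i^k verified.")
\end{verbatim}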
It is important to appreciate that the metric \eqref{RPmetric} can be viewed as the expansion in the small parameter $c^2$ around the Carrollian point, $c^2 =0$. With this in mind, we will also make another assumption that the sphere metric $q_{AB}$ admits the expansion in the small parameter $c^2$ such that
\begin{align}
q_{AB} = \mathring{q}_{AB} + 2c^2 {\lambda}_{AB} + \O(c^4), \qquad \text{and} \qquad q^{AB} = \mathring{q}^{AB} - 2c^2 {\lambda}^{AB} +\O(c^4), \label{metric-c-expand}
\end{align}
where $\mathring{q}^{AB}$ is the inverse of $\mathring{q}_{AB}$ and we defined ${\lambda}^{AB} := \mathring{q}^{AC}\mathring{q}^{BD} {\lambda}_{CD}$ and ${\lambda} := \mathring{q}^{AB} {\lambda}_{AB}$. Note also that, to properly manipulate the $c^2$-expansion, we will use the leading-order sphere metric $\mathring{q}_{AB}$ and its inverse $\mathring{q}^{AB}$ to lower and raise indices of horizontal tensors. Remarks are in order here:
$i)$ At first glance, performing this $c^2$-expansion of the sphere metric may seem to introduce unnecessary complications to the problem. We will later demonstrate that this expansion is necessary to derive the hydrodynamic conservation equations from symmetries.
$ii)$ In our derivations, it is sufficient to expand the Lorentzian metric $h$ to the order $c^2$. Therefore, we can assume that the components $\alpha$ and $\beta_A$ do not admit this $c^2$-expansion.
Since we now have the $c^2$-expansion of the sphere metric, some objects will inherit a similar expansion. The obvious ones are the expansion tensor and its trace, which exhibit the following expansions
\begin{align}
\theta_{AB} = \mathring{\theta}_{AB} + c^2 \ell[{\lambda}_{AB}] + \O(c^4), \qquad \text{and} \qquad \theta = \mathring{\theta} + c^2 \ell [{\lambda}] + \O(c^4),
\end{align}
where the zeroth-order terms are
\begin{align}
\mathring{\theta}_{AB} = \frac{1}{2} \ell\left[\mathring{q}_{AB}\right], \qquad \text{and}\qquad \mathring{\theta} = \mathring{q}^{AB} \mathring{\theta}_{AB} = \ell\left[\ln \sqrt{\mathring{q}}\right].
\end{align}
Another object that admits the $c^2$-expansion is the Christoffel-Carroll symbols ${}^{\sss (2)}\Gamma^A_{BC}$, and we present its expansion in Appendix \ref{hor-derivative}.
In order to integrate on the space $H$, we need the volume form on $H$, which we define as
\begin{equation}
\bm{\epsilon}_H := \bm{k} \wedge\bm{\epsilon}_S, \qquad \bm{\epsilon}_S = \sqrt{q} \left( \frac{ \varepsilon_{AB} }{2} \bm{\mathrm{d}} \sigma^A \wedge \bm{\mathrm{d}} \sigma^B\right),
\end{equation}
where $\varepsilon_{AB}$ is the standard Levi-Civita symbol (satisfying $\varepsilon_{AC} \varepsilon^{CB} = \delta_A^B$). $\bm{\epsilon}_S$ denotes the canonical volume form on the sphere $S$, which satisfies the relation $
\iota_\ell \bm{\epsilon}_H = p^*(\bm{\epsilon}_S)$.
As before, using that $\sqrt{q} = \sqrt{\mathring{q}}(1 + c^2 {\lambda}) + \O(c^4)$, we thus obtain the $c^2$-expansion of the volume form,
\begin{align}
\bm{\epsilon}_H = \left( 1+c^2{\lambda} \right) \mathring{\bm{\epsilon}}_H + \O(c^4), \qquad \text{and} \qquad \bm{\epsilon}_S = \left( 1+c^2{\lambda} \right) \mathring{\bm{\epsilon}}_S + \O(c^4),
\end{align}
where $\mathring{\bm{\epsilon}}_H$ and $\mathring{\bm{\epsilon}}_S$ denote the zeroth-order volume forms on $H$ and on $S$, respectively.
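The determinant identity $\sqrt{q} = \sqrt{\mathring{q}}(1 + c^2 {\lambda}) + \O(c^4)$ used here follows from $\det(\mathring{q} + 2c^2 \lambda) = \det\mathring{q}\,\big(1 + 2c^2\,\mathrm{tr}(\mathring{q}^{-1}\lambda)\big) + \O(c^4)$; a short symbolic check, as a sketch:
\begin{verbatim}
import sympy as sp

c = sp.symbols('c', positive=True)
def sym2(name):
    a, b, d = sp.symbols(f'{name}11 {name}12 {name}22')
    return sp.Matrix([[a, b], [b, d]])

q0, lam = sym2('q'), sym2('l')
sqrt_q = sp.sqrt((q0 + 2*c**2*lam).det())                      # sqrt(q)
claim = sp.sqrt(q0.det()) * (1 + c**2*(q0.inv()*lam).trace())  # sqrt(ring q)(1 + c^2 lambda)

res = sp.series(sqrt_q - claim, c, 0, 4).removeO()
print(sp.simplify(res))                                        # -> 0 up to O(c^4)
\end{verbatim}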
\subsection{Covariant derivative}
Before considering Carrollian hydrodynamics, let us consider the Levi-Civita connection $\nabla$ compatible with the metric \eqref{RPmetric}, that is $\nabla_i h_{jk} =0$. Let us compute the covariant derivatives of the basis vector fields, namely $\nabla_{\ell} \ell, \nabla_{e_A} \ell, \nabla_{\ell} e_A$, and $\nabla_{e_A} e_B$, as they will become handy tools when evaluating the hydrodynamic conservation equations. We start with the covariant derivative $\nabla_{\ell} \ell$, for which we present the computation in full detail here. Complete derivations of the others, which proceed in a similar vein, are provided for the readers in Appendix \ref{cov-deri}. The term $\nabla_\ell \ell$ can be decomposed as
\begin{align}
\nabla_{\ell} \ell = (k_i \nabla_{\ell} \ell^i) \ell + (q^{AB}e_{Bi} \nabla_{\ell} \ell^i)e_A.
\end{align}
Using the metric $h$ and the Leibniz rule, one can show that the vertical component vanishes\footnote{This corresponds to a choice of vanishing inaffinity
$
\kappa = \ell[\ln c]=0.
$ } as follows:
\begin{equation}
\begin{aligned}
k_i \nabla_{\ell} \ell^i = - \frac{1}{c^2} h \left(\ell, \nabla_{\ell} \ell \right) = - \frac{1}{2c^2} \ell \left[ h\left(\ell, \ell \right) \right] = 0,
\end{aligned}
\end{equation}
as $h(\ell, \ell) = -c^2$ is constant. The horizontal components can be evaluated with the help of the commutation relations \eqref{C-comm} as follows:
\begin{equation}
\begin{aligned}
e_{Bi} \nabla_{\ell} \ell^i = h\left( e_B, \nabla_{\ell} \ell \right) &= -h\left( \ell , \nabla_{\ell}e_B \right) \\
& = -h\left( \ell , [\ell,e_B] \right) - \frac{1}{2} e_B[ h \left(\ell, \ell \right)] \\
& = c^2 \varphi_B.
\end{aligned}
\end{equation}
Therefore, the covariant derivative of the vertical vector field along itself is given by
\begin{align}
\nabla_{\ell} \ell = c^2 \varphi^A e_A + \O(c^4).
\end{align}
Observe that it vanishes in the Carrollian limit $c^2 \to 0$, dictating that, in this limit, the vector $\ell$ is the generator of null geodesics on the space $H$.
The covariant derivative of the vertical vector along the horizontal vectors can be computed using the same technique. One can show (see Appendix \ref{cov-deri}) that it is given by
\begin{align}
\nabla_{e_A} \ell= \left(\mathring{\theta}_A{}^B + c^2 \left(\frac{1}{2}w_A{}^B + \ell[{\lambda}_{A}{}^B] \right)\right)e_B + \O(c^4),
\end{align}
where ${\lambda}_{A}{}^B = \mathring{q}^{BC} \lambda_{AC}$. The covariant derivative of the horizontal basis along the vertical basis, $\nabla_{\ell} e_A$, is already determined from $\nabla_{e_A} \ell$ and the commutator $[\ell, e_A]$. We are left with the remaining covariant derivative, $\nabla_{e_A} e_B$. Its vertical component, $k_i \nabla_{e_A} e_B{}^i$, can be inferred from $\nabla_{e_A} \ell$. For the horizontal components, $e^C{}_i \nabla_{e_A} e_B{}^i$, using that $q_{AB} = h \left(e_A, e_B \right)$ and the definition of the Christoffel-Carroll symbols \eqref{Chris-Car}, we can show that
\begin{equation}
\begin{aligned}
\nabla_{e_A} e_B = \ & \left(\frac{1}{c^2} \mathring{\theta}_{AB} + \left(\frac{1}{2}w_{AB} + \ell[{\lambda}_{AB}]\right) \right) \ell + {}^{\sss (2)}\mathring{\Gamma}^C_{AB} e_C \\
&+c^2 \left( \mathscr{D}_A{\lambda}_B{}^C +\mathscr{D}_B{\lambda}_A{}^C-\mathscr{D}^C{\lambda}_{AB} \right)e_C.
\end{aligned}
\end{equation}
With all these results, one can calculate the spacetime divergence of the basis vectors. Using the decomposition \eqref{C-projector}, we obtain
\begin{equation}
\begin{aligned}
\nabla_i \ell^i = \delta_i{}^j \nabla_j \ell^i &= \left( k_i \ell^j + e^B{}_i e_B{}^j \right) \nabla_j \ell^i = \mathring{\theta}+c^2 \ell[{\lambda}] + \O(c^4),
\end{aligned}
\end{equation}
and in a similar manner,
\begin{equation}
\begin{aligned}
\nabla_i e_A{}^i = \delta_i^j \nabla_j e_A{}^i &= \left( k_i \ell^j + e^B{}_i e_B{}^j \right) \nabla_j e_A{}^i = \varphi_A + {}^{\sss (2)}\mathring{\Gamma}^B_{AB} + c^2 e_A [{\lambda}] + \O(c^4).
\end{aligned}
\end{equation}
It is important to remark that the 3-dimensional metric-compatible connection $\nabla_i$ contains a component that diverges when taking the Carrollian limit $c \to 0$. This is to be expected since the inverse metric \eqref{RPmetric} diverges in this limit. It also means that, in practice, computations have to be carried out at a finite value of $c$, with the Carrollian limit taken at the last step.
\subsection{Carrollian Hydrodynamics}
Armed with all these tools, we are ready to discuss the hydrodynamics of a Carrollian fluid. Let us start from the general form of relativistic energy-momentum tensors,
\begin{align}
T^{ij} = (\mathscr{E}+ \mathscr{P}) \frac{\ell^i \ell^j}{c^2} + \mathscr{P} h^{ij} + \frac{q^i \ell^j}{c^2} + \frac{q^j \ell^i}{c^2} + \tau^{ij}, \label{rela-em}
\end{align}
where we chose the vertical vector $\ell$ to be the fluid velocity. The variables appearing in the fluid energy-momentum tensor are the fluid internal energy density $\mathscr{E}$, the fluid pressure $\mathscr{P}$, the heat current $q^i$, and the viscous stress tensor $\tau^{ij}$, which is symmetric and traceless. The latter two quantities represent dissipative effects of the fluid and, by construction, they obey the orthogonality conditions with the fluid velocity, $q_i \ell^i =0$ and $ \tau_{ij}\ell^j=0$. This means that, in light of the Carrollian geometry we have introduced, these dissipative tensors are horizontal tensors,
\begin{align}
q^i = q^A e_A{}^i, \qquad \text{and} \qquad \tau^{ij} = \tau^{AB} e_A{}^i e_B{}^j.
\end{align}
We are interested in the mixed indices version of the fluid energy-momentum tensor. Using the metric \eqref{RPmetric}, it is given by
\begin{align}
T_i{}^j &= - \left( \mathscr{E} \ell^j + q^A e_A{}^j\right) k_i + \left(\frac{1}{c^2} q_{AB}q^B \ell^j + \left(q_{AC}\tau^{CB} + \mathscr{P} \delta_A^B\right) e_B{}^j \right) e^A{}_i.
\end{align}
Furthermore, we choose the following $c^2$-dependence \cite{Ciambelli:2018wre, Ciambelli:2018xat, Ciambelli:2019kiw} of the dissipative tensors,
\begin{align}
q^A = \mathscr{J}^A + c^2\left( \pi^A - 2{\lambda}^A{}_B \mathscr{J}^B \right), \qquad \tau^{AB} = \frac{\Sigma^{AB}}{c^2} + \mathscr{S}^{AB}. \label{q-tau}
\end{align}
Note also that $q_{AB}q^B =\mathscr{J}_A + c^2 \pi_A + \O(c^4)$. Following from this parameterization, the fluid energy-momentum tensor can be expressed as the expansion in $c^2$ as
\begin{align}
T_i{}^j = \frac{1}{c^2} T^{\scriptscriptstyle(-1)}_i{}^j +T^{\scriptscriptstyle(0)}_i{}^j +\O(c^2), \label{T-fluid}
\end{align}
where each term reads
\begin{subequations}
\begin{align}
T^{\scriptscriptstyle(-1)}_i{}^j &= \left(\mathscr{J}_A \ell^j +\Sigma_A{}^B e_B{}^j \right) e^A{}_i \\
T^{\scriptscriptstyle(0)}_i{}^j &= - \left( \mathscr{E} \ell^j + \mathscr{J}^A e_A{}^j\right) k_i + \left( \pi_A \ell^j + \left( \mathscr{K}_A{}^B + \mathscr{P} \delta_A{}^B \right)e_B{}^j\right)e^A{}_i, \label{e-m-fluid}
\end{align}
\end{subequations}
and we defined for convenience the combination,
\begin{align}
\mathscr{K}_A{}^B := \mathscr{S}_A{}^B + 2{\lambda}_{AC} \Sigma^{CB}.
\end{align}
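As a consistency check of the parameterization \eqref{q-tau}, the relation $q_{AB}q^B = \mathscr{J}_A + c^2 \pi_A + \O(c^4)$ quoted above can be verified symbolically, with all order-$c^2$ index raising and lowering performed with $\mathring{q}$; a minimal sketch:
\begin{verbatim}
import sympy as sp

c = sp.symbols('c', positive=True)
def sym2(name):
    a, b, d = sp.symbols(f'{name}11 {name}12 {name}22')
    return sp.Matrix([[a, b], [b, d]])

q0, lam = sym2('q'), sym2('l')                 # ring q_AB and lambda_AB
J = sp.Matrix(sp.symbols('J1 J2'))             # script J^A
pi = sp.Matrix(sp.symbols('p1 p2'))            # pi^A

qA = J + c**2*(pi - 2*(q0.inv()*lam)*J)        # q^A as in the parameterization
lowered = sp.expand((q0 + 2*c**2*lam) * qA)    # q_AB q^B with the full sphere metric
claim = q0*J + c**2*q0*pi                      # script J_A + c^2 pi_A (ring-q lowering)

res = sp.expand(lowered - claim)
print(sp.simplify(res / c**4))                 # entries are c-free: mismatch is O(c^4)
\end{verbatim}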
The dynamics of the relativistic fluid is governed by the relativistic conservation laws, $\nabla_j T_i{}^j = 0$. Let us first evaluate the vertical component of the conservation equations. With all the tools we derived previously, we show that
\begin{equation}
\begin{aligned}
\ell^i \nabla_j T_i{}^j &= \nabla_j \left(\ell^i T_i{}^j\right) - T_i{}^j \nabla_j \ell^i \\
&= -\nabla_j \left( \mathscr{E} \ell^j +q^A e_A{}^j \right) - \frac{1}{c^2}q^A \left(e_{Ai} \nabla_{\ell} \ell^i \right) - \left(\tau^{AB} + \mathscr{P} q^{AB}\right) \left( e_{Ai} \nabla_{e_B} \ell^i\right) \\
& =- (\ell + \theta )[\mathscr{E}] - \mathscr{P} \theta - (\mathscr{D}_A + 2 \varphi_A) q^A - \tau^{AB} \theta_{AB} \\
& = \frac{1}{c^2}\mathbb{C}+ \mathbb{E} + \O(c^2),
\end{aligned}
\end{equation}
where the coefficients of the $c^2$-expansion are
\begin{align}
\mathbb{E} &= -(\ell+ \mathring{\theta})[\mathscr{E}] - \mathscr{P} \mathring{\theta} - (\mathring{\mathscr{D}}_A + 2 \varphi_A) \mathscr{J}^A -\mathscr{S}^{AB} \mathring{\theta}_{AB} - \Sigma^{AB} \ell[{\lambda}_{AB}], \label{E-eq} \\
\mathbb{C} & = -\Sigma^{AB} \mathring{\theta}_{AB}. \label{C-eq}
\end{align}
Imposing $\ell^i \nabla_j T_i{}^j =0$ as one takes the limit $c \to 0$ demands $\mathbb{E} =0$ and $\mathbb{C} =0$. The first equation is the Carrollian energy evolution equation and the second is the constraint equation. Note that the expression $\mathbb{E}$ for the energy equation differs from the original work \cite{Ciambelli:2018xat} due to the presence of the tensor ${\lambda}_{AB}$ and the fluid velocity $V^A$ contained implicitly in the Carrollian vector $\ell$. As we will discuss in the next section, these two additional variables are part of the phase space of Carrollian fluids and they are necessary when one wants to derive Carrollian conservation laws from symmetries. In this sense, our results generalize those presented in \cite{Ciambelli:2018xat}.
In a similar manner to the vertical component, we compute the horizontal components of the conservation laws and consider the $c^2$-expansion. This is given by
\begin{equation}
\begin{aligned}
e_A{}^i \nabla_j T_i{}^j &= \nabla_j \left(e_A{}^i T_i{}^j\right) - T_i{}^j \nabla_j e_A{}^i \\
&= \nabla_j \left(\frac{1}{c^2} q_{AB}q^B \ell^j + \left(q_{AC}\tau^{CB} + \mathscr{P} \delta_A^B\right) e_B{}^j \right) + \left( \mathscr{E} k_i - \frac{1}{c^2}q^B e_{Bi} \right)\nabla_{\ell}e_A{}^i\\
& \ \ \ \ + \left( q^Bk_i - \left( q_{CD}\tau^{BD} +\mathscr{P} \delta^B_C \right)e^C{}_i \right)\nabla_{e_B}e_A{}^i \\
&= \frac{1}{c^2}(\ell+ \theta)[q_{AB} q^B]+ \mathscr{E} \varphi_A - w_{AB} q^B + (\mathscr{D}_B + \varphi_B)(q_{AC} \tau^{CB}+ \mathscr{P} \delta_A^B) \\
& = \frac{1}{c^2}\mathbb{J}_A+ \mathbb{P}_A + \O(c^2),
\end{aligned}
\end{equation}
where the zeroth-order term is
\begin{equation}
\begin{aligned}
\mathbb{P}_A = \ & (\ell+ \mathring{\theta})[\pi_A]+ \mathscr{E} \varphi_A - w_{AB} \mathscr{J}^B + (\mathring{\mathscr{D}}_B + \varphi_B)(\mathscr{K}_A{}^B + \mathscr{P} \delta_A^B) \\
& + \left( \ell[{\lambda}]\mathscr{J}_A + \Sigma_A{}^B \mathring{\mathscr{D}}_B {\lambda} + \Sigma^{BC} \mathring{\mathscr{D}}_A {\lambda}_{BC} \right), \label{P-eq}
\end{aligned}
\end{equation}
while the other term is
\begin{align}
\mathbb{J}_A &= (\ell + \mathring{\theta})[\mathscr{J}_A ] + (\mathring{\mathscr{D}}_B + \varphi_B) \Sigma^B{}_A \label{J-eq}.
\end{align}
Taking the Carrollian limit $c \to 0$ of the conservation laws, $e_A{}^i \nabla_j T_i{}^j = 0$, imposes the Carrollian momentum evolution, $\mathbb{P}_A =0$, and the conservation of the Carrollian current, $\mathbb{J}_A =0$. Again, our expression for $\mathbb{P}_A$ generalizes that of \cite{Ciambelli:2018xat}.
Let us comment on the case where the sub-leading component of the sphere metric vanishes: setting ${\lambda}_{AB} =0$ simplifies the Carrollian evolution equations to
\begin{subequations}
\begin{align}
\mathbb{E} &= -(\ell+ \mathring{\theta})[\mathscr{E}] - \mathscr{P} \mathring{\theta} - (\mathring{\mathscr{D}}_A + 2 \varphi_A) \mathscr{J}^A -\mathscr{S}^{AB} \mathring{\theta}_{AB}, \\
\mathbb{P}_A &= (\ell+ \mathring{\theta})[\pi_A]+ \mathscr{E} \varphi_A - w_{AB} \mathscr{J}^B + (\mathring{\mathscr{D}}_B + \varphi_B)(\mathscr{S}_A{}^B + \mathscr{P} \delta_A^B), \\
\mathbb{J}_A &= (\ell +\mathring{\theta})[\mathscr{J}_A ] + (\mathring{\mathscr{D}}_B + \varphi_B) \Sigma^B{}_A, \\
\mathbb{C} & = -\Sigma^{AB} \mathring{\theta}_{AB}.
\end{align}
\end{subequations}
These are the Carrollian fluid equations given in the literature \cite{Ciambelli:2018xat, Ciambelli:2018ojf}.
Note that the solutions of these equations are invariant under the shift $( \mathscr{E}, \pi_A , \mathscr{S}_A{}^B ) \to ( \mathscr{E}, \pi_A + a \mathscr{J}_A, \mathscr{S}_A{}^B + a \Sigma_A{}^B)$, where $a$ is an arbitrary parameter and $(\mathscr{P},\mathscr{J}_A, \Sigma_{A}{}^B)$ are unchanged.
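To make this invariance explicit (a short check, using the symmetry of $\Sigma^{AB}$), note that under the shift the equations above change as
\begin{align}
\mathbb{E} \ \to \ \mathbb{E} - a\, \Sigma^{AB}\mathring{\theta}_{AB} = \mathbb{E} + a\,\mathbb{C}, \qquad
\mathbb{P}_A \ \to \ \mathbb{P}_A + a\left( (\ell + \mathring{\theta})[\mathscr{J}_A] + (\mathring{\mathscr{D}}_B + \varphi_B)\Sigma^B{}_A \right) = \mathbb{P}_A + a\,\mathbb{J}_A,
\end{align}
while $\mathbb{J}_A$ and $\mathbb{C}$ themselves are untouched. Hence any solution, where $\mathbb{C} = 0 = \mathbb{J}_A$, is mapped to another solution.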
\section{Hydrodynamics from Symmetries} \label{hydro-sym}
In this section, we tackle Carrollian hydrodynamics from a different, but nonetheless equivalent, perspective. Our objective is to re-derive the equations that govern Carrollian hydrodynamics, \eqref{E-eq}, \eqref{C-eq}, \eqref{P-eq}, and \eqref{J-eq}, from the symmetries of the space $H$.
\subsection{The Action for Carrollian Fluid}
Since the metric $h$ is defined on the space $H$, we can consider the action of the fluid whose variation yields the fluid energy-momentum tensor. We will consider a fluid action that remains finite when taking the Carrollian limit $c \to 0$. The variation of the fluid action we will use takes the form
\begin{tcolorbox}[colback = white]
\vspace{-15pt}
\begin{equation}
\begin{aligned}
\delta S_{\text{fluid}} = -\int_H \left( \mathscr{E} \bbdelta \alpha - \mathrm{e}^\a\mathscr{J}^A \bbdelta \beta_A + \mathrm{e}^{-\a} \tilde{\pi}_A \bbdelta V^A - \frac{1}{2}\left( \tilde{\mathscr{S}}^{AB} +\mathscr{P} \mathring{q}^{AB}\right) \bbdelta \mathring{q}_{AB} - \Sigma^{AB} \bbdelta {\lambda}_{AB}\right)\mathring{\bm{\epsilon}}_H \label{S-fluid}.
\end{aligned}
\end{equation}
\end{tcolorbox}
We defined the momenta conjugate to the velocity field $V^A$ and to the leading-order sphere metric $\mathring{q}_{AB}$ as
\begin{align}
\tilde{\pi}_A &:= \pi_A + {\lambda}\mathscr{J}_A, \\
\tilde{\mathscr{S}}^{AB} &:= \mathscr{S}^{AB} +{\lambda} \Sigma^{AB}.
\end{align}
We also absorbed the Jacobian factors and the velocity field variation into the definition of the variation $\bbdelta$ as follows,
\begin{align}
\bbdelta\a &:= \delta \a + \beta_A \bbdelta V^A, \\
\bbdelta\beta_A & := (J^{-1})_A{}^C\delta \left( J_C{}^B\beta_B \right) - (\beta \cdot \bbdelta V)\beta_A, \\
\bbdelta \mathring{q}_{AB} &:= (J^{-1})_A{}^C (J^{-1})_B{}^D\delta \left(J_C{}^E J_D{}^F \mathring{q}_{EF}\right) - 2\mathring{q}_{C(A} \beta_{B)} \bbdelta V^C, \\
\bbdelta {\lambda}_{AB} &:= (J^{-1})_A{}^C (J^{-1})_B{}^D\delta \left(J_C{}^E J_D{}^F {\lambda}_{EF}\right) - 2{\lambda}_{C(A} \beta_{B)} \bbdelta V^C,
\end{align}
and we defined
\begin{align}
\bbdelta V^A &:= \left(\delta V^B\right)J_B{}^A.
\end{align}
The action \eqref{S-fluid} is simply derived from the fluid energy-momentum tensor $T^{ij}$ and the metric variation $\delta h_{ij}$. To see this, let us consider an action $S[h_{ij}]$ whose metric variation yields the energy-momentum tensor,
\begin{align}
\delta S = \int_H \left( \frac{1}{2} T^{ij} \delta h_{ij} \right)\bm{\epsilon}_H.
\end{align}
Since the fluid energy-momentum tensor \eqref{T-fluid} has a part that diverges when taking the Carrollian limit $c \to 0$, the variation $\delta S$ also diverges in this limit. To obtain the finite action \eqref{S-fluid}, we subtract the divergent part from $\delta S$ then take the Carrollian limit, that is
\begin{align}
\delta S_{\text{fluid}} := \lim_{c \to 0} \left( \delta S - \frac{1}{c^2}\delta S_{\scriptscriptstyle (-1)}\right).
\end{align}
We note that the divergent part is given by
\begin{align}
\delta S_{\scriptscriptstyle (-1)} := \lim_{c \to 0} \left(c^2 \delta S \right) = \int_H \left( \frac{1}{2} T_{\scriptscriptstyle (-1)}^{ij} \delta h_{\scriptscriptstyle(0)}{}_{ij} \right)\mathring{\bm{\epsilon}}_H,
\end{align}
where we used that the metric variation is regular as $c \to 0$ and schematically expands as $\delta h_{ij} = \delta h_{\scriptscriptstyle(0)}{}_{ij} + c^2 \delta h_{\scriptscriptstyle(1)}{}_{ij} + \O(c^4)$. The fluid action \eqref{S-fluid} is thus
\begin{align}
\delta S_{\text{fluid}} = \int_H \frac{1}{2}\left( T^{\scriptscriptstyle (0)}{}^{ij} \delta h_{\scriptscriptstyle(0)}{}_{ij} +T^{\scriptscriptstyle (-1)}{}^{ij} \delta h_{\scriptscriptstyle(1)}{}_{ij} + {\lambda} T^{\scriptscriptstyle (-1)}{}^{ij} \delta h_{\scriptscriptstyle(0)}{}_{ij}\right)\mathring{\bm{\epsilon}}_H. \label{var}
\end{align}
\subsection{Near-Carrollian Diffeomorphism} \label{near-ultra}
To derive the Carrollian hydrodynamic equations from the variation of the action \eqref{S-fluid} under certain symmetries, we first need to specify those symmetries and derive the symmetry transformations for the metric components, $(\alpha, \beta_A, V^A, \mathring{q}_{AB}, {\lambda}_{AB})$. The seemingly obvious choice one could consider is the Carrollian diffeomorphism. However, Carrollian diffeomorphism is not sufficient to derive the complete set of hydrodynamic equations \eqref{E-eq}, \eqref{C-eq}, \eqref{P-eq}, and \eqref{J-eq}, as already shown in \cite{Ciambelli:2018ojf}. The reasons for this limitation are as follows:
$i)$ Carrollian diffeomorphism fixes the variation of the velocity field, $\delta^{\sss \text{Carr}} V^A =0$, hence turning off a phase space degree of freedom conjugated to the velocity, that is the fluid momentum density.
$ii)$ There are only two symmetry parameters $(\tau, Y^A)$ for the Carrollian diffeomorphism, while there are four hydrodynamic equations. The symmetries labelled by the parameters $\tau$ and $Y^A$ correspond, respectively, to the energy equation \eqref{E-eq} and the momentum equation \eqref{P-eq}.
To obtain the remaining two equations, the current conservation \eqref{J-eq} and the constraint \eqref{C-eq}, we need two more symmetry parameters.
\noindent We therefore need to detach our consideration from the Carrollian diffeomorphism and consider a general diffeomorphism on the space $H$. The general diffeomorphism on $H$ is labelled by vector fields of the form,
\begin{align}
\xi = f \ell + X^A e_A,
\end{align}
where $f$ and $X^A$ are arbitrary functions on $H$. Such a general diffeomorphism will, in general, change the Carroll structure. In the same fashion as our prior discussions, let us expand the transformation parameters $(f, X^A)$ in the small parameter $c^2$ as
\begin{align}
f = \tau + c^2 \psi + \mathcal{O}(c^4), \qquad \text{and} \qquad X^A = Y^A + c^2 Z^A+\mathcal{O}(c^4), \label{fluid-diff}
\end{align}
where now the parameters $(\tau,\psi,Y^A,Z^A)$ are functions on $H$. This way, we have already secured the four parameters we need for the four equations of the Carrollian fluid. It is of extreme importance to point out that expanding the diffeomorphism around $c^2 =0$ can be regarded as the analog of the diffeomorphism of spacetime geometry in the close vicinity of a black hole horizon, the near-horizon diffeomorphism, with $c^2$ playing the same role as the distance away from the black hole horizon. We will refer to this diffeomorphism as the \emph{near-Carrollian diffeomorphism}\footnote{Expansion in $c^2$ has been dubbed the pre-ultra-local expansion in \cite{Hansen:2021fxi}.}.
As stated previously, we need to find how the metric components vary under the near-Carrollian diffeomorphism. To carry out this task, we employ the technology of the anomaly operator $\Delta_\xi$, which compares the spacetime transformation of a field to its field-space transformation. The metric $h$ is covariant under the near-Carrollian diffeomorphism, meaning that its anomaly $\Delta_\xi h := \delta_\xi h - {\mathcal{L}}_\xi h$ vanishes. The anomaly of the metric $h$ decomposes as
\begin{equation}
\begin{aligned}
\Delta_\xi h &= -2c^2 (\Delta_\xi \bm{k}) \circ \bm{k} + \Delta_\xi q \\
&= -2 c^2 (\iota_\ell\Delta_\xi \bm{k}) \bm{k} \circ \bm{k} + 2\left( \Delta_\xi q (\ell,e_A) - c^2 (\iota_{e_A}\Delta_\xi \bm{k})\right) \bm{k} \circ \bm{e}^A + \Delta_\xi q (e_A,e_B) \bm{e}^A \circ \bm{e}^B.
\end{aligned}
\end{equation}
Demanding covariance, $\Delta_\xi h =0$, imposes the following conditions,
\begin{align}
\iota_\ell\Delta_\xi \bm{k} = 0, \qquad \Delta_\xi q (\ell,e_A) = c^2 (\iota_{e_A}\Delta_\xi \bm{k}) , \qquad \text{and} \qquad \Delta_\xi q (e_A,e_B) =0.
\end{align}
The problem then boils down to the computation of the anomaly of the Ehresmann connection $\bm{k}$ and the anomaly of the null Carrollian metric $q$ (we defer the derivations to the Appendix \ref{fluid-transf}). Solving the above conditions for different powers of $c^2$ gives us the transformation of the metric components under the near-Carrollian diffeomorphism,
\begin{subequations}
\label{transf-fluid}
\begin{align}
\bbdelta_\xi \alpha &= \delta^{\sss \text{Carr}}_{(\tau,Y)} \alpha \\
\mathrm{e}^\a \bbdelta_\xi \beta_A &= \mathrm{e}^\a \delta^{\sss \text{Carr}}_{(\tau,Y)} \beta_A + \mathring{q}_{AB}\ell[Z^B] \\
\bbdelta_\xi \mathring{q}_{AB} &=\delta^{\sss \text{Carr}}_{(\tau,Y)} \mathring{q}_{AB} \\
\bbdelta_\xi {\lambda}_{AB} &= \frac{1}{2}\delta^{\sss \text{Carr}}_{(\psi,Z)} \mathring{q}_{AB} + \tau \ell[{\lambda}_{AB}] + Y^C \mathring{\mathscr{D}}_C {\lambda}_{AB} + 2 {\lambda}_{C(A}\mathring{\mathscr{D}}_{B)} Y^C,
\end{align}
\end{subequations}
where we recalled the functional form of the Carrollian transformations\footnote{Although now there is no constraint on $Y^A$, unlike the Carrollian transformations where $\ell[Y^A] =0$.} \eqref{del-C-a}, \eqref{del-C-b}, and \eqref{del-C-q}, and the transformation of the velocity field is given by,
\begin{align}
\bbdelta_\xi V^A = - D_u Y^A. \label{transf-V}
\end{align}
\subsection{Hydrodynamics from Near-Carrollian Diffeomorphism}
The Carrollian hydrodynamic equations \eqref{E-eq}, \eqref{C-eq}, \eqref{P-eq}, and \eqref{J-eq} can be recovered by demanding invariance, up to boundary terms, of the fluid action \eqref{S-fluid} under the near-Carrollian transformations, $\delta_\xi S_{\text{fluid}} =0$. Using the near-Carrollian transformations \eqref{transf-fluid} and \eqref{transf-V} and Stokes' theorem \eqref{Stokes}, one can show that
\begin{equation}
\begin{aligned}
\delta_\xi S_{\text{fluid}} = -\int_H \left( \tau \mathbb{E} +\bar{\psi} \mathbb{C} + Y^A \mathbb{P}_A + \bar{Z}^A\mathbb{J}_A \right) \mathring{\bm{\epsilon}}_H + \Delta Q_\xi, \label{diff1}
\end{aligned}
\end{equation}
where we defined the combinations of the transformation parameters, $\bar{\psi} := \psi +{\lambda}\tau$ and $\bar{Z}^A := Z^A + {\lambda} Y^A$. The boundary term $\Delta Q_\xi$ is the difference of Noether charges corresponding to the near-Carrollian diffeomorphism at the two ends of $H$. We clearly see that imposing $\delta_\xi S_{\text{fluid}} =0$ up to the boundary term yields the fluid equations.
The Noether charges of these transformations have three components associated with different sectors of the near-Carrollian symmetries,
\begin{equation}
\begin{aligned}
Q_\xi= Q_\tau + Q_Y + Q_{\bar{Z}},
\end{aligned}
\end{equation}
where each component is given by
\begin{subequations}
\begin{align}
Q_\tau &= -\int_S \tau \left( \mathscr{E} + \mathrm{e}^\a \mathscr{J}^A \beta_A\right) \mathring{\bm{\epsilon}}_S, \\
Q_Y & = \int_S Y^A \left( \pi_A + \mathrm{e}^\a \left( \mathscr{K}_A{}^B+\mathscr{P}\delta_A^B \right)\beta_B\right) \mathring{\bm{\epsilon}}_S, \\
Q_{\bar{Z}} &= \int_S \bar{Z}^A \left( \mathscr{J}_A + \mathrm{e}^\a \Sigma_A{}^B\beta_B\right) \mathring{\bm{\epsilon}}_S,
\end{align}
\end{subequations}
where $S$ is a sphere at constant $u$.\footnote{One can more generally express the charges at the spheres $S_f= \{ u= f(\sigma)\}$ as the same integrals with $\beta_A $ replaced by $\beta_A -e_A[f]$.}
As one would expect, the transformations labelled by $\bar{\psi}$ have vanishing Noether charges, as they generate the non-dynamical constraint \eqref{C-eq}. This means that the $\bar{\psi}$ transformations are pure gauge.
It is important to appreciate that our results generalize those presented in \cite{Ciambelli:2018ojf} (which treated only the case $V^A =0$ and ${\lambda}_{AB} =0$). In our consideration, we allow non-zero $V^A$ and ${\lambda}_{AB}$, and by using the proposed near-Carrollian diffeomorphism \eqref{fluid-diff}, we managed to derive the complete set of Carrollian hydrodynamic equations and identify all the Noether charges.
One can then compute the evolution of the charges. For the component $Q_\tau[u]$, we straightforwardly evaluate its time evolution using the energy equation \eqref{E-eq},
\begin{equation}
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d} u} Q_\tau & = - \int_S \bigg[ \mathrm{e}^\a \left( \tau (\ell + \mathring{\theta})[\mathscr{E}] + \mathscr{E} \ell[\tau] \right) + \frac{1}{\sqrt{\mathring{q}}} D_u ( \sqrt{\mathring{q}}\tau \mathrm{e}^\a \mathscr{J}^A \beta_A) \bigg]\mathring{\bm{\epsilon}}_S \\
&= \int_S (\tau \mathrm{e}^\a \mathbb{E}) \mathring{\bm{\epsilon}}_S + \int_S \mathrm{e}^\a \left( - \mathscr{E} \ell[\tau] -\mathscr{J}^A (e_A - \varphi_A)[\tau] + \tau (\mathscr{S}^{AB} + \mathscr{P} \mathring{q}^{AB})\mathring{\theta}_{AB} + \tau\Sigma^{AB} \ell[{\lambda}_{AB}] \right)\mathring{\bm{\epsilon}}_S \\
&= \int_S (\tau \mathrm{e}^\a \mathbb{E}) \mathring{\bm{\epsilon}}_S + \int_S \mathrm{e}^\a \left( - \mathscr{E} \bbdelta_\tau \alpha + \mathrm{e}^\a \mathscr{J}^A \bbdelta_\tau \beta_A + \frac{1}{2}(\mathscr{S}^{AB} + \mathscr{P} \mathring{q}^{AB})\bbdelta_\tau \mathring{q}_{AB} + \Sigma^{AB} \bbdelta_\tau {\lambda}_{AB} \right)\mathring{\bm{\epsilon}}_S.
\end{aligned}
\end{equation}
More generally, one finds that the charge evolution equations can be written as
\begin{equation}
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d} u} Q_\xi
=\ & \int_S \mathrm{e}^\a \left( \tau \mathbb{E} +\bar{\psi} \mathbb{C} + Y^A \mathbb{P}_A + \bar{Z}^A\mathbb{J}_A \right) \mathring{\bm{\epsilon}}_S \\
& + \int_S \mathrm{e}^\a \left( - \mathscr{E} \bbdelta_\xi \alpha + \mathrm{e}^\a \mathscr{J}^A \bbdelta_\xi \beta_A - \mathrm{e}^{-\a} \tilde{\pi}_A \bbdelta _\xi V^A + \frac{1}{2}(\tilde{\mathscr{S}}^{AB} + \mathscr{P} \mathring{q}^{AB})\bbdelta_\xi \mathring{q}_{AB} + \Sigma^{AB} \bbdelta_\xi {\lambda}_{AB} \right)\mathring{\bm{\epsilon}}_S.
\end{aligned}
\end{equation}
These equations can be derived directly from combining \eqref{diff1} with \eqref{S-fluid}.
\section{Conclusion} \label{conclusion}
In this work we have extended the analysis of the $c\to0$ limit of relativistic fluids towards Carrollian fluids. Starting from Carroll structures, we studied Carrollian fluids and presented two methods to derive the corresponding hydrodynamic conservation laws. In the first and conventional method, we started from the energy-momentum tensors of general relativistic fluids, then properly considered the Carrollian limit ($c \to 0$) of the standard relativistic conservation laws. Our derivations can be viewed as a generalization of \cite{Ciambelli:2018xat}, due to the fact that our construction now includes the fluid velocity $V^A$ and the sub-leading components of the sphere metric ${\lambda}_{AB}$. These two quantities are important parts of the phase space of Carrollian hydrodynamics. The second route, which was the highlight of this article, was to view Carrollian hydrodynamics as a consequence of symmetries. We argued that Carrollian diffeomorphism is not sufficient to derive the full set of Carrollian fluid equations (as has already been studied in \cite{Ciambelli:2018ojf, Petkou:2022bmz}) and that we need to go beyond Carrollian diffeomorphism. To this end, we introduced the notion of near-Carrollian symmetries \eqref{fluid-diff} and finally showed that they lead to the complete set of Carrollian hydrodynamic equations.
Many directions however remain to be explored. Let us list some of them below.
\begin{enumerate}[label=\roman*)]
\item \emph{Realization on stretched horizons and null boundaries}: The membrane paradigm \cite{Damour:1978cg, Thorne:1986iy, Price:1986yy} has established the correspondence between black hole physics and the dynamics of fluids living on timelike surfaces, called stretched horizons or membranes, located near black hole horizons (which are null surfaces). As Carroll structures are universal structures of null boundaries, be they at finite distances \cite{Chandrasekaran:2018aop, Chandrasekaran:2021hxc, Ashtekar:2021wld} or at infinities \cite{Ashtekar:2014zsa, Ashtekar:2018lor}, one would therefore expect the membrane fluids to be Carrollian fluids. This statement has recently been realized in \cite{Donnay:2019jiz} (see also \cite{Penna:2018gfx}), where it has been shown that the Einstein equations on black hole horizons can be displayed as Carrollian hydrodynamic equations and that the near-horizon diffeomorphism \cite{Donnay:2015abr, Donnay:2016ejv} is a Carrollian diffeomorphism. The analog of the Brown-York energy-momentum tensor of null boundaries and its conservation laws have also been studied in \cite{Chandrasekaran:2021hxc}.
We have learned in this work that Carroll structures can be endowed on any surface, regardless of whether it is null or timelike\footnote{Usually in the literature, the Carrollian metric $q$ is treated as an induced metric on hypersurfaces, and the null-ness property of $q$ then dictates the hypersurfaces to be null. This is not necessary, as we can endow, for example, the Lorentzian metric \eqref{RPmetric} on any type of hypersurface while incorporating all elements of the Carroll structure into the geometry of the surface.}, implying the possibility to assign the Carrollian hydrodynamic picture to timelike surfaces, say for example, stretched horizons. In fact, it is to be expected that stretched horizons encode some underlying information about the null boundaries, in the same spirit as the near-Carrollian analysis (where the value of $c^2$ deviates from zero) presented in this work. To make this argument more precise, further investigations are required, and some aspects will be provided in our upcoming work \cite{Jaiakson:2022}.
\item \emph{Sub-subleading and higher-order corrections}: In our analysis, only the sub-leading (order $c^2$) terms were considered. One can indeed extend our construction by including sub-subleading (order $c^4$) and higher-order terms in the metric \eqref{RPmetric}, which will introduce new variables to the phase space of Carrollian fluids and in turn activate new Carrollian fluid momenta conjugate to these higher-order variables. These momenta correspond to the $c^{-4}, c^{-6}, c^{-8}, \ldots$ corrections of the dissipation tensors \eqref{q-tau}, which we have truncated at order $c^{-2}$ here. As a consequence, the near-Carrollian diffeomorphism \eqref{fluid-diff} will be enhanced with the inclusion of higher-order corrections, associated with new equations governing the dynamics of these new momenta and also new Noether charges.
Let us also mention that this picture has already been realized in the context of asymptotic null infinities \cite{Freidel:2021qpz, Freidel:2021dfs, Freidel:2021ytz} which exhibit the infinite tower of charges and their corresponding conservation equations. It would then be of interest to study the higher-order dynamics of the Carrollian hydrodynamics and bridge the findings with the results at infinities.
\item \emph{Thermodynamics of Carrollian fluids}: Having established Carrollian hydrodynamics, one natural question emerges --- \emph{what are the thermodynamic properties of Carrollian fluids?} Admittedly, although this question may not garner much interest in the field of fluid mechanics, due to the sole fact that everyday fluids are Galilean in nature, we believe that answering it will provide useful insights into the realm of black hole physics. One possible direction to explore in the future is the notion of \emph{thermodynamical horizons}, the type of surfaces that obey all laws of thermodynamics, and also the universal notion of equilibrium on any surface.
\item \emph{Galilean hydrodynamics from symmetries}: As the speed of light $c$ now plays the role of a varying parameter when taking non-Lorentzian limits, a similar analysis could be carried out for the Galilean case (the $c \to \infty$ limit), thereby yielding a derivation of Galilean hydrodynamics, e.g., the continuity equation and the Navier-Stokes equations, from symmetries. In this case, the underlying structure is the Newton-Cartan structure \cite{Duval:2014uoa} (see also \cite{Duval:1984cj, Duval:1990hj}) instead of the Carroll structure.
\end{enumerate}
\section*{Acknowledgments}
We would like to thank C\'eline Zwickel for helpful discussions and insights. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. The work of LF is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and also in part by the Alfred P. Sloan Foundation, grant FG-2020-13768. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 841923. PJ's research is supported by the DPST Grant from the government of Thailand, and Perimeter Institute for Theoretical Physics.
\begin{center}
\begin{figure*}
\includegraphics[width=0.65\textwidth]{Porter-Temple22_f1.pdf}
\caption{Galaxy Zoo 4 GAMA-KiDS decision tree. The decision tree can be viewed at \url{https://data.galaxyzoo.org/gz_trees/gz_trees.html} under the ``GZ GAMA-KiDS'' section.}
\label{fig:GZ4}
\end{figure*}
\end{center}
Though they are visually distinctive, the exact properties that affect the formation of arms in spiral galaxies are not yet fully explored.
The potential and observed links between spiral galaxy structure, pitch angle, and arm strength \citep{Seigar98,Yu20,Kendall14} are now being examined with increasing sample sizes and sophistication in analysis \citep{Hart17a,Yu18a,Lingard21}. The motivation for increased interest in spiral morphology in representative samples is to constrain the dominant formation mechanism of spiral arm structure \citep{Masters21}. For example, \cite{Pringle19a} find a constant distribution of pitch angles of arms, consistent with the density wave theory origin of spiral structure. \cite{Hart18} found that 40 per cent of arm formation in massive spirals can be attributed to ``swing amplification''; the number of arms is consistent with the prediction from this mechanism, with the remainder originating from other mechanisms.
\cite{Diaz-Garcia19c} do not find observational evidence that spiral arms are driven by stellar bars \citep[as do][]{Hart17a} or through ``manifolds'', pathways of infalling material, which would show up as a dependence between arm strength and pitch angle \citep{Athanassoula09}. They found that bar and arm strength are correlated, while bar strength and pitch angle are not. In multi-wavelength data, \cite{Yu18b} found younger stars to reside in tighter arms, and \cite{Miller19a} found that these stars then trailed out from the arms.
\cite{Seigar98} found no correlation between pitch angle and Hubble type, which was reiterated by \cite{Kendall14, Yu18}; however, they note that not finding a correlation is unsurprising given the small range of pitch angles they examined. \cite{Yu20} later found a loose correlation between pitch angle and spiral arm strength, with an overall tendency for pitch angle to decrease with weaker arm strength, while \cite{Savchenko20} find no strong difference (except for number of arms) between grand design, multi-armed, and flocculent spirals in pitch angle, arm width or strength. The link between arm strength, pitch angle and formation mechanisms remains complex.
Instead of focusing on the formation of spiral arm structure, one can examine the correlations with global properties of the galaxies such as star formation, stellar mass or specific star formation.
\cite{Hart17} investigated spiral structure using the Sloan Digital Sky Survey (SDSS) main galaxy sample, with morphological data from the public release of Galaxy Zoo 2 \citep{Willett13}, stellar mass from \cite{Chang15}, and star formation from GALEX fluxes \citep{Martin05}. Using these data, they found no significant dependence of spiral arm number on specific star formation rate (sSFR).
In this paper, we explore the connection of spiral arm number with stellar mass, star formation rate (SFR), and specific star formation rate (sSFR), using similar methods to \cite{Hart17}. We make use of the improved star formation and stellar mass estimates of the Galaxy And Mass Assembly \citep[GAMA,][]{Driver09,Liske15} survey, based on self-consistent {\sc magphys} SED fits to the full UV to sub-mm SED \citep{Driver16c,Wright16}, and Galaxy Zoo voting based on deeper and higher-resolution KiDS data \citep[][Kelvin et al. \textit{in prep.}]{Holwerda19b}. With these improved data, we investigate the trends with spiral arm number that the results from \cite{Hart17} suggested. We compare spiral arm number subsamples of stellar mass, SFR, and sSFR to the whole set of galaxies to determine any notable differences. The subsamples defined by spiral arm number (m = 1, 2, 3, 4, 5+) are from the visual classification from the GAMA-KiDS Galaxy Zoo project, detailed in Section \ref{galaxy zoo}. This paper is organized as follows: section \ref{data} describes the data used in the paper and how subsamples are defined, section \ref{results} presents the results for star formation, stellar mass, and specific star formation as a function of the number of spiral arms,
section \ref{discussion} discusses these results and section \ref{conclusions} lists our conclusions.
\begin{table*}
\large
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{llcccccccccccc} \toprule
& & \multicolumn{2}{c}{2-Sample K-S Test} & \multicolumn{2}{c}{k-Sample A-D Test} & \multicolumn{7}{c}{k-Sample A-D Test Critical Values} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-13}
& & Statistic & Significance & Statistic & Significance & 25\% & 10\% & 5\% & 2.5\% & 1\% & 0.5\% & 0.1\%\\\midrule
\multirow{5}{*}{ \rotatebox{90}{Stellar Mass} }
& m=1 & 0.138 & \textbf{0.038} & 3.977 & 0.008 & 0.33 & 1.23 & 1.96 & 2.72 & \textbf{3.75} & 4.59 & 6.55\\
& m=2 & 0.034 & 0.203 & 0.763 & 0.159 & \textbf{0.33} & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & 6.55\\
& m=3 & 0.152 & \textbf{0.001} & 6.642 & 0.001 & 0.33 & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & \textbf{6.55}\\
& m=4 & 0.252 & 0.081 & -0.071 & 0.250 & \textbf{0.33} & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & 6.55\\
& m=5+ & 0.264 & \textbf{0.000} & 10.216 & 0.001 & 0.33 & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & \textbf{6.55}\\ \midrule
\multirow{5}{*}{ \rotatebox{90}{SFR} }
& m=1 & 0.099 & 0.256 & 0.278 & 0.250 & \textbf{0.33} & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & 6.55\\
& m=2 & 0.110 & \textbf{0.000} & 36.514 & 0.001 & 0.33 & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & \textbf{6.55}\\
& m=3 & 0.281 & \textbf{0.000} & 39.523 & 0.001 & 0.33 & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & \textbf{6.55}\\
& m=4 & 0.291 & \textbf{0.028} & 4.342 & 0.006 & 0.33 & 1.23 & 1.96 & 2.72 & \textbf{3.75} & 4.59 & 6.55\\
& m=5+ & 0.217 & \textbf{0.004} & 7.721 & 0.001 & 0.33 & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & \textbf{6.55}\\ \midrule
\multirow{5}{*}{ \rotatebox{90}{sSFR} }
& m=1 & 0.234 & \textbf{0.000} & 9.270 & 0.001 & 0.33 & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & \textbf{6.55}\\
& m=2 & 0.072 & \textbf{0.000} & 19.745 & 0.001 & 0.33 & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & \textbf{6.55}\\
& m=3 & 0.187 & \textbf{0.000} & 10.309 & 0.001 & 0.33 & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & \textbf{6.55}\\
& m=4 & 0.261 & 0.064 & 0.901 & 0.139 & \textbf{0.33} & 1.23 & 1.96 & 2.72 & 3.75 & 4.59 & 6.55\\
& m=5+ & 0.143 & 0.126 & 1.976 & 0.050 & 0.33 & 1.23 & \textbf{1.96} & 2.72 & 3.75 & 4.59 & 6.55\\\bottomrule
\end{tabular}
\caption{\label{tab:statstable} Spiral arm number (m), Kolmogorov-Smirnov test statistic and significance for stellar mass, star formation rate, and specific star formation rate are shown under the header 2-sample K-S Test. Bold values for the K-S test significance are the statistically significant values discussed in section \ref{discussion}. The Anderson-Darling test statistic and estimated significance level for stellar mass, SFR, and sSFR are shown under the header k-sample A-D Test. The critical values for different levels of significance are listed, with the critical value that each subsample meets in bold. The Anderson-Darling test significance estimates are floored at 0.1\% and capped at 25\%.}
\end{table*}
\section{Data} \label{data}
The data used come from the Galaxy and Mass Assembly (GAMA) survey \citep{Driver09, Liske15}. We use the GAMA DR3 \citep{Baldry18} and the Kilo Degree Survey \citep[KiDS,][]{de-Jong13,de-Jong15,de-Jong17,Kuijken19} imaging. Additionally, we use the MAGPHYS table described in the GAMA DR3. MAGPHYS computes the stellar masses and (specific) star formation rates used here and is fully described in \cite{da-Cunha08}.
\subsection{GAMA} \label{gama}
GAMA is a combined spectroscopic and multi-wavelength imaging survey designed to study spatial structure in the nearby ($z < 0.25$) Universe on kpc to Mpc scales \citep[see][for an overview]{Driver09, Driver11}. The survey, after completion of phase 2 \citep{Liske15}, consists of three equatorial regions each spanning 5 deg in Dec and 12 deg in RA, centered in RA at approximately 9h (G09), 12h (G12) and 14.5h (G15) and two Southern fields, at 05h (G05) and 23h (G23). The three equatorial regions, amounting to a total sky area of 180 deg$^2$, were selected for this study. For the purpose of visual classification, 49,851 galaxies were selected from the equatorial fields with redshifts $z<0.15$ (see below). The GAMA survey is $>$98\% redshift complete to r $<$ 19.8 mag in all three equatorial regions. We use the {\sc magphys} SED fits data-products \citep{Driver18} from the third GAMA data-release \citep[DR3,][]{Baldry18}.
\subsection{KiDS} \label{kids}
The Kilo Degree Survey \citep[KiDS,][]{de-Jong13,de-Jong15,de-Jong17,Kuijken19} is an ongoing optical wide-field imaging survey with the OmegaCAM camera at the VLT Survey Telescope. It aims to image 1350 deg$^2$ in four filters (u g r i). The core science driver is mapping the large-scale matter distribution in the Universe, using weak lensing shear and photometric redshift measurements. Further science cases include galaxy evolution, Milky Way structure, detection of high-redshift clusters, and finding rare sources such as strong lenses and quasars.
KiDS image quality is typically 0\farcs6 resolution (for sdss-r), with depths of 23.5, 25, 25.2, and 24.2 magnitudes for i, r, g, and u, respectively. This imaging was the input for the Galaxy Zoo citizen science classifications.
\subsection{Galaxy Zoo} \label{galaxy zoo}
Information on galaxy morphology is based on the GAMA-KiDS Galaxy Zoo classification \citep[][Kelvin et al., \textit{in prep.}]{Lintott08}, which is described in full in Kelvin et al., \textit{in prep}. RGB cutouts were constructed from KiDS g-band and r-band imaging, with the green channel as the mean of the two. The KiDS cutouts were introduced to the classification pool and mixed in with the ongoing classification efforts.
For the Galaxy Zoo classification, 49,851 galaxies were selected from the equatorial fields with redshifts $z < 0.15$. The Galaxy Zoo community provided a monumental effort, with almost 2 million classifications received from over 20,000 unique users over the course of the first 12 months.
This classification has been used by the GAMA team to identify dust lanes in edge-on galaxies \citep{Holwerda19}, searches for strong lensing galaxy pairs \citep{Knabel20}, and the morphology of green valley galaxies (Smith et al {\em in prep}.).
In this paper we use the visual classifications of spiral galaxies from the Galaxy Zoo project; the full decision tree for the GAMA-KiDS Galaxy Zoo project is shown in Figure \ref{fig:GZ4}.
\subsection{MAGPHYS SED} \label{magphys}
In addition to the GAMA-KiDS Galaxy Zoo classifications, we use the {\sc magphys} \citep{da-Cunha08}, spectral energy distribution fits to the GAMA multi-wavelength photometry \citep{Wright17}, presented in \cite{Driver18}. {\sc magphys} computes stellar mass, star formation rate, and specific star formation rates which will serve as comparison data for the Galaxy Zoo arm classifications.
\subsection{Sample Selection} \label{sample selection}
\begin{figure}
\centering
\includegraphics[scale=0.49]{Porter-Temple22_f2.png}
\caption{Stellar Mass vs. Redshift for the GAMA-KiDS Galaxy Zoo project data. The limited sample, which includes only those galaxies with $z < 0.08$ and $M_* > 10^9 M_\odot$, is indicated by the red box. Only galaxies with 30\% or more votes in favor of being a spiral galaxy are included.}
\label{fig:redshiftLIM}
\end{figure}
To be included in the subset of the GAMA-KiDS Galaxy Zoo project used (hereinafter referred to as `the limited sample'), a galaxy must meet three criteria. First, the galaxy must have a stellar mass \(M_* > 10^9 M_\odot\). Any galaxies below that limit are excluded. Second, the galaxy must have received at least 30\% of votes in favor of it being a spiral galaxy. This is represented by question T03 in the Galaxy Zoo decision tree shown in Figure \ref{fig:GZ4}. This avoids galaxies that were misclassified as spiral galaxies due to a low number of votes. Third, included galaxies must have a redshift less than 0.08, meaning any galaxies with $z \geq 0.08$ are not included in the limited sample. Doing this excludes those galaxies whose spiral arms are not correctly represented by Galaxy Zoo votes because of unclear imaging or the lack of distinction between, for example, two-armed and four-armed spirals at $z\geq 0.08$.
The selection of the limited sample from the full GAMA-KiDS Galaxy Zoo project is shown in Figure \ref{fig:redshiftLIM}.
\subsection{Defining Subsamples} \label{subsample}
Each subsample of spiral galaxies is defined by their spiral arm number as voted by Galaxy Zoo participants. This is represented by question T06 in Figure \ref{fig:GZ4}, with answers A0, A1, ..., A4 being classified in this paper as m=1, m=2, ..., m=5+.
In addition to fulfilling all the criteria described in section \ref{sample selection}, to fall into any given subsample m=x, a galaxy must meet two additional criteria. First, it must have received at least 50\% of votes in favor of having x spiral arms; that is, a galaxy is in the m=x subsample if the fraction of votes in favor of x arms is \( > 0.5\). The cutoff at 50\% means that the majority of votes dictates which subsample the galaxy falls into, so no galaxy falls into more than one subsample. Second, the galaxy must have less than 100\% of votes in favor of it having x spiral arms. This eliminates some galaxies that have a very low number of votes. So, a galaxy with a fraction of votes $f_m$ in the range $(0.5 < f_m < 1)$ for answer A0 in Figure \ref{fig:GZ4} would be included in the m=1 subsample, and similarly for m=2, 3, 4, and 5+ spiral arms.
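For concreteness, the full selection can be written as a handful of catalogue cuts. The sketch below is purely illustrative: the file name and column names are hypothetical placeholders, not the actual GAMA/Galaxy Zoo table schema.
\begin{verbatim}
import pandas as pd

# hypothetical merged GAMA-KiDS Galaxy Zoo + MAGPHYS catalogue
df = pd.read_csv('gama_kids_gz_magphys.csv')

# the 'limited sample': z < 0.08, M* > 1e9 Msun, >= 30% spiral votes (T03)
limited = df[(df['Z'] < 0.08) &
             (df['MSTAR'] > 1e9) &
             (df['T03_SPIRAL_FRAC'] >= 0.3)]

# arm-number subsamples (T06): majority vote, excluding unanimous (low-N) cases
subsamples = {}
for m, col in zip(['1', '2', '3', '4', '5+'],
                  ['T06_A0_FRAC', 'T06_A1_FRAC', 'T06_A2_FRAC',
                   'T06_A3_FRAC', 'T06_A4_FRAC']):
    f = limited[col]
    subsamples[m] = limited[(f > 0.5) & (f < 1.0)]
\end{verbatim}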
\section{Results} \label{results}
The limited sample of galaxies is compared with each subsample as determined in section \ref{subsample}, with respect to stellar mass, SFR, and sSFR. The Kolmogorov–Smirnov (K-S) test statistic and p-value for each subsample are shown in Table \ref{tab:statstable}; the number of galaxies N in each subsample is indicated in Figure \ref{fig:stmsummary}.
The K-S test statistic indicates how similar the subsample is to the parent sample, with smaller values being more similar and larger values less similar; a statistic of 0.0 indicates two identical distributions. The p-value associated with each K-S statistic quantifies the significance of the K-S statistic, and we consider a p-value of 0.05 or lower to be significant.
For an additional test of sample similarity, we perform the Anderson-Darling (A-D) test on the above samples, with the resulting statistic and p-values also listed in Table \ref{tab:statstable}. The critical values for each A-D test are returned for different levels of confidence, and we bold the value that is exceeded by the A-D statistic in each case. The benefit of the A-D test over the K-S test is that it identifies confidence levels independently from the reported p-value. The A-D test is much more sensitive to the tails of a distribution, while the K-S test is more dependent on the center of the distribution. As our distributions are all non-Gaussian, this makes the A-D test better suited for the comparison.
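Both tests are available in {\tt scipy.stats}; the sketch below (using randomly generated placeholder data rather than our catalogue values) shows the form of the outputs summarized in Table \ref{tab:statstable}. Note that {\tt anderson\_ksamp} returns critical values at the 25, 10, 5, 2.5, 1, 0.5, and 0.1 per cent levels, and a significance estimate floored at 0.1 per cent and capped at 25 per cent, matching the table caption.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
limited_mass = rng.normal(9.8, 0.5, 5000)   # stand-in for log stellar masses
m3_mass = rng.normal(10.0, 0.4, 300)        # stand-in for an m=3 subsample

ks_stat, ks_p = stats.ks_2samp(m3_mass, limited_mass)
ad = stats.anderson_ksamp([m3_mass, limited_mass])

print(ks_stat, ks_p)
print(ad.statistic, ad.critical_values, ad.significance_level)
\end{verbatim}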
Broadly, the K-S and A-D tests agree on which populations differ, but they disagree on the level of significance. For example, for stellar mass with one arm (m=1) or for SFR with four arms (m=4), the A-D test assigns higher significance to the difference. We note that the K-S test reports a small, low-significance difference for the m=5+ sSFR distribution, but the A-D test identifies a (just) significant result (5\% critical value exceeded, Table \ref{tab:statstable}).
\begin{figure*}
\centering
\includegraphics[width=.32\textwidth]{Porter-Temple22_f3a.png}\quad
\includegraphics[width=.32\textwidth]{Porter-Temple22_f3b.png}
\smallskip
\includegraphics[width=.32\textwidth]{Porter-Temple22_f3c.png}\quad
\includegraphics[width=.32\textwidth]{Porter-Temple22_f3d.png}
\smallskip
\includegraphics[width=.32\textwidth]{Porter-Temple22_f3e.png}
\caption{Stellar mass histograms for each of the subsamples selected from the limited GAMA-KiDS Galaxy Zoo sample. The gray filled histogram shows the distributions of the entire limited data set, while the colored outlines show the distribution for the individual spiral arm number subsamples.}
\label{pics:stellarmassfig}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Porter-Temple22_f4.png}
\caption{Stellar mass distribution densities for each of the spiral arm subsamples. The shaded gray region indicates $\pm$1 standard deviation of the whole sample. The dotted gray line indicates the mean for the whole sample. Each spiral arm distribution shows the high range, mean, and low range, indicated by horizontal dash marks. The number of galaxies in each subsample is shown above each distribution.}
\label{fig:stmsummary}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=.32\textwidth]{Porter-Temple22_f5a.png}\quad
\includegraphics[width=.32\textwidth]{Porter-Temple22_f5b.png}
\smallskip
\includegraphics[width=.32\textwidth]{Porter-Temple22_f5c.png}\quad
\includegraphics[width=.32\textwidth]{Porter-Temple22_f5d.png}
\smallskip
\includegraphics[width=.32\textwidth]{Porter-Temple22_f5e.png}
\caption{SFR histograms for each of the subsamples selected from the limited GAMA-KiDS Galaxy Zoo sample. The gray filled histogram shows the distributions of the entire limited data set, while the colored outlines show the distribution for the individual spiral arm number subsamples.}
\label{pics:sfrfig}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Porter-Temple22_f6.png}
\caption{SFR distribution densities for each of the spiral arm subsamples. The shaded regions are equivalent to the definitions in Figure \ref{fig:stmsummary}.}
\label{fig:sfrsummary}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=.32\textwidth]{Porter-Temple22_f7a.png}\quad
\includegraphics[width=.32\textwidth]{Porter-Temple22_f7b.png}
\smallskip
\includegraphics[width=.32\textwidth]{Porter-Temple22_f7c.png}\quad
\includegraphics[width=.32\textwidth]{Porter-Temple22_f7d.png}
\smallskip
\includegraphics[width=.32\textwidth]{Porter-Temple22_f7e.png}
\caption{sSFR histograms for each of the subsamples selected from the limited GAMA-KiDS Galaxy Zoo sample. The gray filled histogram shows the distributions of the entire limited data set, while the colored outlines show the distribution for the individual spiral arm number subsamples.}
\label{pics:specificfig}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Porter-Temple22_f8.png}
\caption{sSFR distribution densities for each of the spiral arm subsamples. The shaded regions are equivalent to the definitions in Figure \ref{fig:stmsummary}.}
\label{fig:SSFRsummary}
\end{figure}
\subsection{Stellar Mass} \label{results: stellar mass}
Figure \ref{pics:stellarmassfig} shows the histograms resulting from the selection process described in sections \ref{sample selection} and \ref{subsample}, with the corresponding statistics from the K-S test described above.
The m=1, m=3, and m=5 subsamples show visual differences in their stellar mass distributions. The m=5 subsample, though limited by a small number of galaxies, shows a notable shift towards higher stellar masses. Likewise, the m=3 subsample tends towards higher masses as well, with its peak falling just below 10.0 in log stellar mass, versus the peak at 9.5 for the limited sample. The m=1 subsample shows a tendency to lower stellar masses.
These differences are reflected in the K-S statistics in Table \ref{tab:statstable}, with the m=5 subsample having the greatest difference from the limited sample. The m=4 sample has the second highest statistic value, but with the lowest number of galaxies and a p-value of 0.08 on that statistic, we do not consider it significant. The m=1, 3, and 5 values are significant in their A-D tests as well, with slightly higher significance for m=1.
Figure \ref{fig:stmsummary} shows the distributions in the colored violin plots, with the gray band indicating the median and $\pm$1 standard deviation for the limited sample. This also reflects the shift in distribution, with the m=1 subsample having a lower median than the limited sample and a greater proportion of galaxies at lower stellar masses. Likewise, the m=3 subsample is shifted to slightly higher stellar masses, and m=5 sits visibly higher than the median of the limited sample.
\subsection{Star Formation Rate} \label{results: sfr}
As for stellar mass, Figure \ref{pics:sfrfig} shows the histograms resulting from the above process. The m=3, 4, and 5 subsamples show a notable difference in their star formation rate distributions, with each of them having a higher SFR distribution than the limited sample. Visually, the m=3 subsample appears to have the highest distribution for SFR, and the K-S statistic reflects a greater difference from the limited sample than most other subsamples. We find that these three subsamples have a notable difference in their distributions from the limited sample, as reflected in Table \ref{tab:statstable} with their K-S statistics being much higher than those of the m=1 or m=2 samples. This difference is reflected again in their A-D statistics, with high significance for the m=2, 3, 4, and 5+ samples. The agreement between the K-S and A-D statistics is due to the mostly Gaussian shape of the distributions in SFR, with only a weak tail to lower SFR.
Figure \ref{fig:sfrsummary}, as above, shows the summarized distributions for SFR. This reflects a higher average distribution for m=3, 4, and 5 subsamples. Though the m=1 and 2 subsamples appear to have higher SFR distributions than the limited sample in Figures \ref{pics:sfrfig} and \ref{fig:sfrsummary}, they also have relatively low K-S statistics compared to the other subsamples, showing a higher similarity to the limited sample than m=3, 4, or 5.
\subsection{Specific Star Formation Rate} \label{results:SSFR}
As in sections \ref{results: stellar mass} and \ref{results: sfr}, Figure \ref{pics:specificfig} shows the histograms for the sSFR values. The distributions for subsamples m=4 and m=5 are shifted to lower sSFRs. The K-S statistics for m=4 and 5 reflect these distribution shifts, but we consider the p-values of their K-S statistics to be less significant.
Conversely, we see a significant shift in the m=1 population towards higher sSFRs, and the m=3 population's distribution weighted heavily towards $-10$ in log sSFR. Again, this is reflected by the K-S statistics and high-significance p-values in Table \ref{tab:statstable}, where the m=1 subsample shows a greater difference from the limited sample, while m=3 follows the limited sample quite well. The A-D tests confirm the significance of different distributions depending on the number of spiral arms and add a significant difference for the m=5+ distribution (5\% critical value exceeded by the A-D test). This lends high confidence to the conclusion that sSFR and arm number are strongly correlated.
As above, Figure \ref{fig:SSFRsummary} shows the summarized distributions for sSFR. We see that m=1 has a higher than average sSFR compared to the other samples, and that the distribution of m=3 is more concentrated into one peak region.
\section{Discussion} \label{discussion}
In Section \ref{sample selection}, we detail how the spiral arm number subsamples are defined. In categorizing them based on the Galaxy Zoo votes (question T06, shown in Figure \ref{fig:GZ4}), we are treating spiral arm number as an integer. However, this does not take into account whether a spiral galaxy has well-defined arms; flocculent spiral galaxies with poorly-defined or discontinuous arms cannot be well classified with an integer number of arms. The voting pattern does reflect this somewhat: because the classifications came from real people voting, any galaxy with poorly-defined arms is best categorized through the majority vote, so each galaxy is classified as accurately as it can be into an integer number of spiral arms. From question T03 (Figure \ref{fig:GZ4}), we do know that these are all spiral galaxies.
In Figures \ref{pics:stellarmassfig}, \ref{pics:sfrfig}, and \ref{pics:specificfig}, we can see that the low number of galaxies in each subsample leaves the m=4 and m=5+ distributions lacking in statistical weight for the K-S test results compared to, for example, the m=2 subsample. \cite{Hart17} used optical-WISE SED-inferred stellar masses and separately estimated star formation rates from either FUV flux or 22$\mu$m flux. The improvement in our data is the use of a self-consistent SED to determine both, from 21 filters spanning the ultraviolet through the sub-mm \citep{Wright16,Driver16c}. Additionally, the A-D test results lend more statistical significance to the m=5+ distribution in particular, while giving a more significant result (to 0.1\%) in sSFR for m=1, 2, and 3. Because of the small size of the m=4 subsample, the A-D test (which is more sensitive to the tails of the distribution) does not show a higher significance than the m=4 sSFR result.
Overall, we find that spiral galaxies are less efficient at forming stars if they have more spiral arms. The m=1 subsample has a much lower stellar mass on average, but a higher than average distribution of sSFR (see Figures \ref{pics:stellarmassfig} and \ref{pics:specificfig}). This is supported by the findings of \cite{Hart17}, who noted that two-armed spiral galaxies are more gas deficient than other galaxies, and so are more efficient at converting gas to stars.
Galaxies with stronger bars have fewer but stronger arms \citep{Yu20}, and arm strength has been found to correlate well with SFR as a function of stellar mass \citep{Yu21}. Given our results, it is unclear whether the causal chain is that a higher arm number leads to weaker arms, which in turn leads to lower sSFR. Alternatively, it is possible that the perceived change in sSFR is caused by a subtle bias in the MAGPHYS SED results (Section \ref{magphys}), because the arm patterns rearrange the dusty interstellar medium (ISM) in the disc, skewing SED measurements of star formation. Arms are more opaque than the disc \citep{kw00b,Holwerda05b}, and therefore better at hiding directly measured star formation. The many-armed spirals with low sSFR might simply be hiding their directly measurable star formation instead of having lower rates overall. However, \cite{Hart17} found that two-armed spirals have more mid-infrared (MIR) dust emission, indicating that a greater proportion of new stars in two-armed spirals are in heavily obscured regions, and the MAGPHYS SED result is based on balancing the missing ultraviolet light with the observed heated dust emission. Given this, it seems unlikely that the low sSFRs of many-armed spirals are caused by a higher obscuration fraction of new stars.
Higher star formation for a given mass will likely highlight the spiral structure in these discs as the site of recent star formation. \cite{Hart17} note that the mean of their distribution shifts by only 0.05 dex with each additional spiral arm. We point to Figure \ref{fig:SSFRsummary} to show that the mode of the distribution is a better indication of the change with the number of arms. Between the shift in the distribution of sSFR values and the much improved star formation and stellar mass accuracy thanks to a consistent SED treatment, rather than single-flux based estimates, we find the trend of decreasing sSFR with the number of spiral arms convincing.
\section{Conclusions} \label{conclusions}
In this paper, we examined the connection of spiral arm number with stellar mass, star formation rate (SFR), and specific star formation rate (sSFR). Using the data from GAMA DR3 and the morphological classifications from Galaxy Zoo GAMA-KiDS, we compared subsamples consisting of galaxies with 1, 2, 3, 4, or 5+ spiral arms. Overall, we find the following:
\begin{enumerate}
\item Galaxies with more spiral arms tend towards higher stellar masses (Figure \ref{fig:stmsummary}) and higher star formation rates (Figure \ref{fig:sfrsummary}).
\item Galaxies with more spiral arms tend towards lower specific star formation rates (Figures \ref{pics:specificfig} and \ref{fig:SSFRsummary}, Table \ref{tab:statstable}).
\item The single arm (m=1) subsample tends to have lower stellar mass and higher specific star formation than both the full sample and any other subsample.
\end{enumerate}
A different, non-integer classification of the number of spiral arms, allowing the voting tally to assign fractional numbers of spiral arms to galaxies, may reflect the reality of these galaxies better. Additionally, changing the limited sample to include only galaxies with a sufficient number of votes to ensure accuracy in arm classification (as opposed to percentages of votes in favor of a spiral arm pattern) may yield a larger sample size with stronger statistical significance.
The Rubin Observatory and future iterations of the Galaxy Zoo are expected to improve the statistics of spiral arm numbers for galaxies in the nearby Universe. Equally important, however, are good stellar mass and star-formation estimates from SED models for comparisons similar to those in this work and in \cite{Hart17a}.
The Euclid and Roman space telescopes will provide a wealth of morphological data on higher-redshift spiral galaxies. These data will allow for a direct comparison of the evolution of spiral structure.
\section*{Acknowledgements}
The material is supported by NASA Kentucky award No: 80NSSC20M0047 (NASA-REU to L. Haberzettl and R. Porter-Temple).
BWH is supported by an Enhanced Mini-Grant (EMG). The material is based upon work supported by NASA Kentucky under NASA award No: 80NSSC20M0047.
This research made use of Astropy, a community-developed core Python package for Astronomy \citep{Astropy-Collaboration13,Astropy-Collaboration18}.
\section{Data Availability}
The data for this project is available from the GAMA DR3 website \url{http://www.gama-survey.org/dr3/schema/table.php?id=82}.
\section{Introduction}
Research in quantum gravity may be regarded as an attempt to
construct a theoretical scheme in which ideas from General
Relativity and quantum theory are reconciled. However, after many
decades of intense work we are still far from having a complete
quantum theory of gravity. Any theoretical scheme of gravity must
address a variety of conceptual issues, including the problem of time
and the identification of dynamical observables. There are many programs
that attempt to address the above-mentioned problems, including
canonical quantum gravity.
It is well known that some of the issues, such as time and observables
in quantum gravity, have their roots in classical general relativity;
in such cases it seems more reasonable to identify and perhaps
address the problem first in this context. The classical theory of
gravity is invariant under the group Diff(${\cal M}$) of
diffeomorphisms of the space-time manifold ${\cal M}$. To be more
specific, the theory is invariant under time reparametrization and
spatial diffeomorphisms. This goes against the simple Newtonian
picture of a fixed and absolute time parameter. The classical
theory, while itself free from problems relating to the definition
and interpretation of time, contains indications of problems in the
quantum theory, where the absence of a time parameter is hard to
reconcile with our everyday experience. In fact, one can see that in
the Hamiltonian formulation of classical general relativity, time is
suppressed from the theory. There are many proposals for dealing
with this question, which generally involve a re-interpretation of
the usual notion of time (see \cite{Isham} for an overview of these
proposals).
Identification of dynamical observables for the theory is another fundamental issue
that has its roots in the classical formulation of general relativity
and is directly related to the issue of time. The problem of evolving
a dynamical system from initial data is known as the Cauchy
problem or initial value problem \cite{Inverno}, and in General
Relativity it is naturally addressed using the 3+1 ADM representation.
In the Arnowitt--Deser--Misner (ADM) approach, the spatial
hypersurface $\Sigma(t)$ is assumed to be equipped with a space-like
3-metric $\gamma_{ij}$ ($i, j$ run from $1$ to $3$) induced from the
space-time metric $g_{\mu \nu}$ ($\mu, \nu$ run from $0$ to $3$).
Einstein's equations are of course covariant and do not single out a
preferred time with which to parametrise the evolution.
Nevertheless, we can specify initial data on a chosen spatial
hypersurface $\Sigma$ , and if $\Sigma$ is Cauchy, we can evolve
uniquely from it to a hypersurface in the future or past. The issue
of specification of initial or final data on Cauchy hypersurfaces
has been discussed in many papers; for example, see \cite{Hawking}.
An alternative approach to the Cauchy problem is known as the characteristic
initial value problem, in which one may fix the initial data on null
hypersurfaces rather than
spatial hypersurfaces. There are reasons that motivate the use of null
boundaries in formulating general relativity; for a summary one
may look at \cite{Komar} \cite{Dautcourt} \cite{hughhossein}
\cite{Penrose} \cite{Sachs} \cite{Ellis} \cite{Bondi}.
In addition, the approach of setting the final data on a null hypersurface is
essential if we are interested in a theory, such as quantum theory,
in which observations are made by a single localized observer who can collect
observational data only from that subset of space-time which lies in the
causal past \cite{hughhossein}.
Studying cosmological models instead of full General Relativity helps us
to overcome the problems related to the infinite number of degrees
of freedom in the theory, and to pay more attention to the issues
arising from the time reparametrization invariance of the theory,
such as the identification of a dynamical time and the construction
of observables for the theory \cite{palii}.
There are many general homogeneous (but anisotropic) cosmological
models, such as the Kantowski-Sachs models and the Bianchi models.
However, in this paper we consider Friedmann-Robertson-Walker (FRW)
cosmologies for simplicity. The standard FRW universe is of course
one special example. In that case we assume that our universe is filled
with a massless scalar matter field, so that the model simply has two
minisuperspace coordinates, $\{a,\phi\}$: the cosmic scale factor
and the scalar field. The conventional Hamiltonian formulation of
this model, based on the Dirac and ADM procedures of general relativity, is
developed.
The main feature of the Hamiltonian theory of gravity is the
presence of nonphysical variables and constraints due to the
diffeomorphism invariance of the theory. As mentioned, this is in
turn an obstacle to the identification of a time
parameter with which to measure physical quantities such as the cosmological
observables (the Hubble law and redshift) and the Dirac observables
in the Hamiltonian description of classical and quantum
cosmologies. One of the possible solutions of these problems in the
Hamiltonian approach, discussed in this paper, is to reduce the
original theory reparametrization-invariantly, by explicitly
resolving the first class constraints, to get an equivalent
unconstrained system. In this approach one of the variables of the
extended phase space converts into the dynamic evolution parameter
that plays the role of a cosmological time \cite{hughhossein} in the
theory. Thus, instead of the extended phase space and the initial
action invariant under reparametrizations of the coordinate time, we
obtain the reduced phase space, which contains only the matter field
described by the reduced Hamiltonian.
In this paper, in section two, the Hamiltonian formulation of a
simple reparametrization invariant model is presented; a
reduced Hamiltonian and a time variable emerge from this
model. In section three, we apply the Hamiltonian reduction
developed in section two to the FRW model with a massless scalar field
minimally coupled to gravity.
In section four, a discussion of Dirac observables in general
relativity is given; the 'Rovelli's constants of motion'
\cite{Rovelli} are also discussed. In section five, Dirac
observables on the null hypersurface of a single localized observer
are identified for FRW cosmologies. These observables are similar to
Rovelli's constants of motion on the null hypersurfaces
\cite{hughhossein}. The evolution of these observables is with
respect to the time variable obtained from the massless scalar field
coupled to gravity.
\section{A simple parametrized model}
To construct a reduced phase space with a reduced Hamiltonian for a
time reparametrization invariant system, let us begin with a simple
toy model in classical mechanics: the one-dimensional
motion of a particle with the action given by \begin{equation}
S[x,\sigma]=\frac{1}{2}\int_{t_0}^{t_f}
(N^{-2}\dot{x}^2-v(x)-N^{-2}\dot{\sigma}^2)Ndt,
\label{eq:particleaction}
\end{equation}
where $N^{-2}$ is the contravariant metric and $x(t)$, $\sigma(t)$ and
$N(t)$ are the independent configuration variables for the particle.
With $d\tau=Ndt$ the proper time interval and $v(x)$ the
potential, one can rewrite the action as \begin{equation}
S[x,\sigma]=\frac{1}{2}\int_{\tau(t_0)}^{\tau(t_f)}
[(\frac{dx}{d\tau})^2-v(x)-(\frac{d\sigma}{d\tau})^2]d\tau.
\end{equation}
With the gauge fixing $N=1$ the dynamics is unique, which is not of
interest here. Without gauge fixing, i.e. allowing $N(t)$ to vary,
according to the Dirac prescription the generalized Hamiltonian
dynamics for the action (\ref{eq:particleaction}) takes place on the
phase space spanned by the three canonical pairs $(x, p_x)$,
$(\sigma, p_\sigma)$ and $(N, p_N)$.
Since $\sigma$ is a dynamical variable (while $\tau$ is not) and has
a simple dynamics, i.e. $\sigma=\alpha\tau+\beta$ (on shell), one can
use $\sigma$ (rather than $\tau$) to parametrize $x$; it may also
be considered as a clock time with which to make measurements.
The Euler-Lagrange equation for the dynamical variable $x$ with
respect to $\tau$ and $\sigma$ are:
$$\frac{d^2x}{d\tau^2}=-\frac{1}{2}\frac{\partial v}{\partial x}, \ \ \ \
\frac{d^2x}{d\sigma^2}=-\frac{1}{2\alpha^2}\frac{\partial
v}{\partial x},$$ where $\alpha$ is a measurable constant. Thus,
one may consider evolution of $x(\sigma)$ with respect to
(measurable) clock time $\sigma$ instead of $\tau$.
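As a concrete numerical illustration (ours, not part of the model), the following Python sketch integrates the same trajectory once in $\tau$ and once in the clock variable $\sigma=\alpha\tau$ (taking $\beta=0$) for the test potential $v(x)=x^2$, and checks that the two parametrizations agree:
\begin{verbatim}
import numpy as np

def evolve(dvdx, scale, s_max, n=100000):
    # Leapfrog integration of d^2x/ds^2 = -scale*(1/2)*dvdx(x),
    # with initial conditions x(0) = 1 and dx/ds(0) = 0.
    ds = s_max / n
    x, u = 1.0, 0.0
    for _ in range(n):
        u -= 0.5 * ds * scale * 0.5 * dvdx(x)
        x += ds * u
        u -= 0.5 * ds * scale * 0.5 * dvdx(x)
    return x

alpha = 2.0
dvdx = lambda x: 2.0 * x   # v(x) = x**2, so x(tau) = cos(tau)
tau_f = 3.0
x_tau = evolve(dvdx, 1.0, tau_f)                       # evolve in tau
x_sigma = evolve(dvdx, 1.0 / alpha**2, alpha * tau_f)  # evolve in sigma
print(x_tau, x_sigma, np.cos(tau_f))   # all three agree closely
\end{verbatim}
The clock reading $\sigma$ and the measurable constant $\alpha$ suffice to reconstruct the motion; no reference to the unobservable $\tau$ is needed.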
The momenta associated with the dynamical variables are
$$p_x=N^{-1}\dot{x},\ \ \ p_\sigma=-N^{-1}\dot{\sigma}, \ \ \ \
\ \ p_N=0.$$ Since the action does not explicitly depend on the
velocity $\dot{N}$, the vanishing momentum $p_N$ is a primary
constraint, $$p_N\approx 0.$$
The canonical Hamiltonian then is \begin{equation} H_0=p_x\dot{x}+p_\sigma
\dot{\sigma}-L=\frac{1}{2}N[p_x^2-p_\sigma^2+v(x)] \end{equation}
with the
total Hamiltonian
$$H_T=\lambda_Np_N+H_0=\lambda_Np_N+\frac{1}{2}N[p_x^2-p_\sigma^2+v(x)].$$
Following the Dirac procedure, to ensure that the primary constraint
is preserved under time evolution, we also require that
$\dot{p}_N\approx 0$, which gives us the secondary constraint
$$0\approx \frac{1}{2}[p_x^2-p_\sigma^2+v(x)]=\cal{H},$$ and
therefore the total Hamiltonian now reads \begin{equation} H_T=\lambda_N
p_N+N\cal{H}, \end{equation} where the variables $\lambda_N$ and $N$ in
Hamiltonian are Lagrange multipliers. Now, the equations of motion
for our system are
\begin{eqnarray} \dot{x}=Np_x ,\ \ \ \ \ \dot{p}_x=-\frac{1}{2}Nv',\\
\dot{\sigma}=-Np_\sigma ,\ \ \ \ \ \dot{p}_\sigma=0,\\
\dot{N}=\lambda_N, \ \ \ \ \ \dot{p}_N=-{\cal H}, \end{eqnarray}
which are
accompanied by the two first class constraints (FCC):
\begin{equation}
{\cal H}\approx 0, \ \ \ \ \ \ p_N\approx 0. \end{equation}
It is easy to check that, according to the Dirac procedure, $x(t)$,
the value of $x$ for a given value of $t$, is not eligible to be
an observable, since $\{H, x(t)\}\neq 0$: specifying $t$
does not identify a special point on the trajectory, as the
parametrization of the trajectory is not fixed. However, $x(\sigma)$,
the value of $x$ for a given value of $\sigma$, is an
eligible observable, since $\{H, x(\sigma)\}=0$. It gives us the particle
location when the clock says, for example, $3\!:\!20$; once measured and
recorded, it stays fixed for all time (a historical record!).
Among the dynamical variables, only $p_\sigma$ is a first class
variable, since its Poisson brackets with the constraints vanish.
Thus, one can introduce new canonical variables for ($\sigma,
p_\sigma$) as \begin{equation} T_\sigma=\frac{\sigma}{p_\sigma},\ \ \ \ \ \
p_T=\frac{p_\sigma^2}{2},\end{equation} in order to obtain a reduced
Hamiltonian describing the evolution of the particle with respect to
the new dynamical time variable $T_\sigma$.
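One can verify directly that this transformation is canonical; the following short symbolic sketch (our addition, using the \texttt{sympy} package) checks that $\{T_\sigma, p_T\}=1$ with respect to the pair $(\sigma, p_\sigma)$:
\begin{verbatim}
import sympy as sp

sigma, p_sigma = sp.symbols('sigma p_sigma')
T = sigma / p_sigma       # new coordinate T_sigma
P = p_sigma**2 / 2        # new momentum p_T

# Poisson bracket {T, P} with respect to (sigma, p_sigma):
bracket = (sp.diff(T, sigma) * sp.diff(P, p_sigma)
           - sp.diff(T, p_sigma) * sp.diff(P, sigma))
print(sp.simplify(bracket))   # prints 1
\end{verbatim}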
In terms of the new variables the total Hamiltonian is \begin{equation}
H_T=\lambda_N p_N+Np_T, \end{equation} and one can divide the equations of
motion into two parts: 1) for the canonical pairs $(N, p_N)$ and
$(T_\sigma, p_T)$, with a dependence on the Lagrange multiplier
$\lambda_N(\tau)$, \begin{eqnarray} \dot{T}_\sigma=N ,\ \ \ \ \ \dot{p}_T=0,\\
\dot{N}=\lambda_N, \ \ \ \ \ \dot{p}_N=-p_T, \end{eqnarray} which are
constrained by $p_T=0$; 2) for the canonical pair $(x, p_x)$, \begin{equation}
\dot{x}=0 ,\ \ \ \ \ \dot{p}_x=0, \end{equation} which has a unique solution
with no constraint. The reduced Hamiltonian that governs the
particle evolution in time $T_\sigma$ is then
$$H(x)=\frac{1}{2}[p_x^2+v(x)].$$
Note that although the dynamical time $T_\sigma$ does not commute
with the constraints, and so is not a first class variable, its
momentum $p_T$ is a first class variable, and so $T_\sigma$ is
eligible to be considered as a time variable with which to measure
the passage of time.
Alternatively, one can reduce the theory in terms of the coordinate
$x$ by performing the canonical transformation on $x$,
\begin{equation} T_x=\int dx (2\Pi_x+v)^{-1/2},\end{equation} \begin{equation}
\Pi_x=\frac{1}{2}[p_x^2-v],\end{equation} and thus the reduced Hamiltonian that
describes the evolution of the variable $\sigma$ in time $T_x$ is
\begin{equation} H(p_\sigma)=\frac{p_\sigma^2}{2}.\end{equation}
Once again, only those new canonical variables whose associated
momenta are first class variables are eligible to be considered as
dynamical times.
\section{FRW model with scalar field minimal coupling to gravity}
We begin with the line element for the FRW model in spherical
coordinates
\begin{equation} ds^2=-N^2(t)dt^2+a^2(t)h_{ij}dx^idx^j,
\label{eq:FRWmetric}\end{equation} where $N(t)$ is the lapse function, $a(t)$
is the cosmic scale factor that determines the radius of the universe,
and $h_{ij}$ is the time-independent metric of the three-dimensional
maximally symmetric spatial sections \begin{equation}
h_{ij}dx^idx^j=\frac{dr^2}{1-kr^2}+r^2(d\theta^2+\sin^2\theta
d\phi^2) \end{equation}
of constant curvature ${}^{(3)}R(h_{ij})=-6k$,
$k=0,\pm 1.$
Inserting the metric (\ref{eq:FRWmetric}) into the action for the
vacuum FRW model, in natural units ($c = \hbar = 1$), gives
\begin{equation} S[g_{\mu\nu}]=\int \sqrt{-g}\ {}^{(4)} R d^4x=\int
dt \int_{\Sigma(t)} d^3x \sqrt{-g}\ {}^{(4)}R.
\label{eq:FRWmetric1}\end{equation} By assuming the spatial homogeneity of
the FRW metric the action (\ref{eq:FRWmetric1}) can be written as
\begin{equation} S[g_{\mu\nu}]=\int dt \int 3(\frac{a\dot{a}^2}{N}-kNa)d^3x=
V_{(3)}\int 3(\frac{a\dot{a}^2}{N}-kNa)dt,\end{equation} where $V_{(3)}$ is the
volume of the three-dimensional space of constant curvature. The
momenta conjugate to the dynamical variables are
$$\Pi_a=6N^{-1}a\dot{a},$$
$$\Pi_N=0,$$ where $\Pi_N\approx 0$ is a primary constraint.
The canonical Hamiltonian is $$H_0=N\Big(\frac{\Pi_a^2}{12a}+3ka\Big),$$
and thus the total Hamiltonian is \begin{equation}
H_T=\lambda_N\Pi_N+N\Big(\frac{\Pi_a^2}{12a}+3ka\Big). \end{equation} The secondary
constraint is \begin{equation} 0\approx {\cal H}=\frac{\Pi^2_a}{12a}+3ka, \end{equation} and
therefore \begin{equation} H_T=\lambda_N\Pi_N+N\cal{H}.\end{equation} One can check that
our constraints are both FCC. The number of dynamical variables
is four ($a, \Pi_a; N, \Pi_N$). Since there are two FCC
constraints, the vacuum FRW model
has no physical degrees of freedom at the classical level, and
only unphysical degrees of freedom propagate. So in order to have
some non-trivial observables it is necessary to introduce
source matter fields.
The Einstein-Hilbert action for the FRW model for the gravity
minimally coupled to a massless scalar field is given by \begin{equation}
S[g_{\mu\nu},\phi]=\int_{\Sigma(t)}\int \sqrt{-g}( -\frac{1}{2}\
{}^{(4)}R+\frac{1}{2}g^{\mu\nu}\partial_\mu \phi
\partial_\nu \phi ) dtd^3x \label{eq:actionintegral},\end{equation}
which by assuming the spatial homogeneity of the scalar field in
FRW metric yields \begin{equation} S[g_{\mu\nu},\phi]=V_{(3)}\int
[3(\frac{a\dot{a}^2}{N}-kNa)-\frac{a^3}{2N}\dot{\phi}^2]dt
\label{eq:actionintegral1}.\end{equation}
Given the action-integral (\ref{eq:actionintegral}) or
(\ref{eq:actionintegral1}) it is easy to find the canonically
conjugate momentum to the dynamical variable, \begin{eqnarray}
p_a=6N^{-1}a\dot{a},\\ p_\phi=-N^{-1}a^3\dot{\phi}, \end{eqnarray} and \begin{equation}
p_N=0\end{equation} which is a primary constraint.
The Hamiltonian is
$$H_0=N[\frac{p_a^2}{12a}+3ka-\frac {p_\phi^2}{2a^3}]$$ and thus
the total Hamiltonian is \begin{equation}
H_T=\lambda_Np_N+\frac{N}{a^3}[\frac{a^2p_a^2}{12}+3ka^4-\frac{p_\phi^2}{2}].
\end{equation}
By the non-degenerate character of the metric ($a\neq 0$), the
secondary constraint can be redefined (choosing $N_1=N/a^3$) \begin{equation}
0\approx {\cal H}_1=\frac{a^2p_a^2}{12}+3ka^4-p_\phi^2/2 \end{equation} which
shows the separability of the gravitational and the matter source
part in the constraint. Thus, the total Hamiltonian becomes \begin{equation}
H_T=\lambda_Np_N+N_1{\cal H}_1.\label{eq:totalHamiltonian}\end{equation}
One can check that our constraints are both FCC. The number of
dynamical variables is six ($a, p_a; \phi, p_\phi; N, p_N$),
and there are two FCC constraints. Thus, there are only two physical
degrees of freedom at the classical level in the FRW model, and
only these two physical degrees of freedom propagate. According to
the procedure described in the last section, since a unique
solution cannot be found for the equations of motion, one has to
implement the Hamiltonian reduction to separate the equations of
motion into physical and unphysical ones. For this, let us
introduce new canonical variables in order to obtain the
reduced Hamiltonian describing the evolution of the cosmic scale
factor $a$, \begin{equation} T_\phi=\int_{\Sigma}
\frac{\phi}{p_\phi}\sqrt{{}^{(3)}h}d^3x,\label{eq:cosmologicaltime}
\end{equation} \begin{equation}
\Pi_\phi=\frac{p_\phi^2}{2}.\end{equation}
In terms of the new variables the total Hamiltonian is \begin{equation}
H_T=\lambda_N p_N+N_1\Pi_\phi. \end{equation} Similarly, one can
separate the equations of motion into two parts: one for the
canonical pairs $(N, p_N)$ and $(T_\phi, \Pi_\phi)$, with a dependence on
the Lagrange multiplier $\lambda_N(\tau)$, \begin{eqnarray} \dot{T}_\phi=NV_{(3)}
,\ \ \ \ \ \dot{\Pi}_\phi=0,\\ \dot{N}=\lambda_N, \ \ \ \ \
\dot{p}_N=-\Pi_\phi, \end{eqnarray} constrained by $\Pi_\phi=0$, and a second
for the canonical pair $(a, p_a)$, \begin{equation} \dot{a}=0 ,\ \ \ \ \
\dot{p}_a=0, \end{equation} which has a unique solution with initial values
free from any constraints. The reduced Hamiltonian that governs
the scale factor evolution in time $T_\phi$ is
$$H(a)=\frac{a^2p_a^2}{12}+3ka^4.$$
The equation of motion for $T_\phi$ derived from
(\ref{eq:cosmologicaltime}), \begin{equation}
\frac{dT_\phi}{dt}=\int_{\Sigma(t)} \sqrt{-{}^{(4)}g}d^3x,\end{equation}
implies that $T_\phi(t)$ is just the 4-volume preceding $\Sigma_t$,
plus some constant of integration. Integrating with respect to
$t$, this means that the change of the time variable equals the
four-volume enclosed between the initial and final hypersurfaces,
which is necessarily positive. This time variable $T_\phi(t)$ may
be regarded as a cosmological time variable, as it continuously
increases along any future-directed time-like curve
\cite{Hossein}. Assuming that the scalar field is spatially
homogeneous and monotonically increasing along the world lines of
observers normal to the spatial hypersurfaces, one may therefore
consider $T_\phi(t)$ as a monotonically increasing function along
any classical trajectory, and so it can indeed be used to parametrise
this trajectory \cite{hughhossein}. This describes the
evolution of the geometry with respect to the dynamical time
constructed from the scalar field.
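As a simple numerical illustration (our addition, not part of the formal development), consider the flat ($k=0$) case with a massless scalar field, for which $a(t)\propto t^{1/3}$ in cosmic time ($N=1$). The sketch below accumulates $T_\phi(t)=\int N a^3 V_{(3)}\,dt$ and confirms that it increases monotonically, as a cosmological time variable must:
\begin{verbatim}
import numpy as np

V3 = 1.0                         # comoving 3-volume, arbitrary units
t = np.linspace(1e-3, 1.0, 10000)
a = t ** (1.0 / 3.0)             # flat FRW with massless scalar field
dt = t[1] - t[0]
T_phi = V3 * np.cumsum(a**3) * dt   # T_phi(t) = integral of N a^3 V3
print(np.all(np.diff(T_phi) > 0))   # True: monotonically increasing
\end{verbatim}
Indeed, in this case $a^3\propto t$, so $T_\phi$ grows like $t^2/2$: the 4-volume swept out between the initial and final hypersurfaces.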
Alternatively, by reformulating the theory in terms of the scalar
field, one may construct a dynamical time from the geometry. To find
the dynamics of the scalar field we perform the canonical
transformation on the scale factor, \begin{equation} T_a=\int_{\Sigma(t)}
\sqrt{{}^{(3)}h}d^3x\int
\frac{a^2da}{(\frac{\Pi_a}{3}-a^4k)^{1/2}},
\label{eq:cosmologicaltime1}\end{equation} \begin{equation}
\Pi_a=a^2[\frac{p_a^2}{12}+3ka^2].\end{equation}
One can also show that the dynamical time constructed from the scale
factor $a$ is a $4$-volume preceding $\Sigma(t)$.
\section{Dirac observables in General Relativity}
General Relativity, like many other field theories, is invariant
with respect to a group of local symmetry transformations
\cite{Marolf}. The local symmetry group in General Relativity is
the group Diff (${\cal M}$) of diffeomorphisms of the space-time
manifold ${\cal M}$.
In General Relativity, Dirac observables \cite{Dirac} must be
invariant under the group of local symmetry transformations. The
Hamiltonian constraint and momentum constraint in General
Relativity are generators of the symmetry transformations, and so
a function $\Psi$ on the phase space is a Dirac observable, ${\it
iff}$
\begin{equation}
\{\Psi,{\cal H}\}=\{\Psi,{\cal H}_i \}= 0,
\end{equation}
at all points $x \in {\cal M}$, where ${\cal H}$ and ${\cal H}_i$
are the Hamiltonian and momentum constraints of general relativity.
Such observables are necessarily constants of motion. They are
invariant under local Lorentz rotations ${\it SO(3)}$ and ${\it
Diff}\Sigma$ (as well as ${\it SO (1, 3)}$).
The above criteria for observables in relativity appear to rule out
the existence of local observables if locations are specified in
terms of a particular coordinate system. Indeed, it might appear
that one would be left with only observables of the form \begin{equation}
\Psi=\int \psi(x)\sqrt{-g} \ d^4 x,
\end{equation}
where $\psi(x)$ is an invariant scalar such as $R$, $R^2$,
$R^{\mu\nu} R_{\mu\nu}$, etc. While such observables clearly have
vanishing Poisson brackets with all the constraints, they cannot be
evaluated without full knowledge of the future and past of the
universe. While this may be deducible in principle from physical
measurements made at a specific time, it is well beyond the scope of
any real experimenter.
However, in reality, observations are made locally. We therefore
ought to be able to find a satisfactory way to accommodate local
observables within General Relativity. In particular, we would
like to be able to talk about observables measured at a particular
time, so that we can discuss their evolution. Local observables in
classical or quantum gravity must be invariant under coordinate
transformations. The difficulty in defining local observables in
classical gravity is that diffeomorphism invariance makes it
difficult to identify individual points of the space-time manifold
\cite{hughhossein}\cite{Camelia}.
It is fairly easy to construct observables which commute with the
momentum constraints. Such observables can be expressed as
functions of dynamical variables on the spatial hypersurfaces.
However, according to the Dirac prescription, observables must
also commute with Hamiltonian constraint.
In a slightly different formalism, Rovelli addressed the problem
by introducing a Material Reference System (${\it MRS}$)
\cite{Rovelli}. By ${\it MRS}$, Rovelli means an ensemble of
physical bodies, dynamically coupled to General Relativity that
can be used to identify the space-time points.
Rovelli's observables can be interpreted as the values of a
quantity at the point where the particle is and at the moment in
which the clock displays the value $t$. However $t$ itself is not
an observable, even though its conjugate momentum is constant
along each classical trajectory.
Rovelli's observables are constants of motion, since they commute
with the Hamiltonian and momentum constraints, while evolving with
respect to the clock time $t$.
Rovelli's observables are functions defined on spatial
hypersurfaces. He assumes the space-time has the topology $\Sigma
\times R$, where $\Sigma$ is a compact spatial hypersurface and $R$
is the real time axis. In order to have evolution into the future or
past the spatial hypersurface must be a Cauchy hypersurface. This
makes sense if the underlying space-time is assumed to be globally
hyperbolic.
As discussed, one may fix the initial data on null hypersurfaces
rather than spatial hypersurfaces. In General Relativity it is
natural to work with a foliation of space-time by space-like
hypersurfaces, as this reflects the older Newtonian idea of a
3-dimensional universe developing with time. This seems close to
our experiences and is easy to visualize. In particular The
approach of setting the final data on a null hypersurface is
essential if we are interested in a theory such as quantum theory
that observations made by a single localized observer who can
collect observational data only from that subset of space-time
which lies in the causal past.
\section{ Dirac observables in FRW model}
In the ADM formalism, the space-time ${\cal M}$ is assumed to be
foliated by surfaces of constant coordinate time $t$. Now, suppose
that the metric $g$ satisfies the FRW dynamical equations, which are
assumed to include a contribution from the massless scalar field,
that we choose the foliating 3-geometry $\Sigma(t)$ to be the
observer's past null hypersurface, and that the space-time contains
a future-directed time-like geodesic ${\Gamma}$ representing the
world-line of an observer.
Suppose also that the 4-volume time variable $T_\phi(t)$ defined
in (\ref{eq:cosmologicaltime}), instead of the coordinate time $t$,
has been used to label the 3-surfaces and the future-directed
time-like geodesic ${\Gamma}$.
It is then possible to construct a covariantly defined geometric
quantity determined by field values on $\Sigma_{T_\phi}(t)$ \begin{equation}
\Psi(\Sigma_{T_\phi})=\int_{\Sigma_{T_\phi}}
\psi(x)\sqrt{{}^{(3)}h} d^3 x,
\end{equation}
where $\psi(x)$ is any scalar invariant on $\Sigma_{T_\phi}(t)$
expressible in terms of $h_{ij}$, $R^i_{jkl}$, and their
covariant spatial derivatives. These quantities are called world
line ${\Gamma}$-observables \cite{Hossein} for the FRW model.
These so-called ${\Gamma}$-observables then have vanishing Poisson
brackets with any Hamiltonian $H$, equation
(\ref{eq:totalHamiltonian}), which generates time translations of
$\Sigma_{T_\phi}(t)$ along ${\Gamma}$. The observables
$\Psi(\Sigma_{T_\phi})$ do not have vanishing Poisson brackets
with the Hamiltonian constraint ${\cal H}_1$, since the
prespecified foliation is not invariant under local time evolution
\cite{Kuchar}.
If we define new quantities $\Psi_{T_\phi}(\Sigma_{T_\phi})$, the
value of $\Psi(\Sigma_{T_\phi})$ at a certain time $T_\phi$, then
these quantities have vanishing Poisson brackets with the
Hamiltonian constraint, $\{\Psi_{T_\phi}(\Sigma_{T_\phi}), {\cal
H}_1\}= 0$, and can be called 'evolving constants of motion'.
These observables are the same as Rovelli's constants of motion in
the sense that they are genuine Dirac observables. The evolution
of these observables is expressed in terms of the dynamical
variable $T_\phi(t)$, whose conjugate momentum is a first class
constraint. Similarly, the dynamical time $T_\phi(t)$ in the new
labeling of 3-surfaces is not a Dirac observable, although its
conjugate momentum is constant along the world line.
Alternatively, using (\ref{eq:cosmologicaltime1}), it is also
possible to construct a covariantly defined matter quantity
determined by the scalar field values on $\Sigma_{T_a}(t)$, \begin{equation}
\Psi(\Sigma_{T_a})=\int_{\Sigma_{T_a}} \psi(\phi)\sqrt{{}^{(3)}h}
d^3 x,
\end{equation}
where $\psi(\phi)$ is any scalar invariant on $\Sigma_{T_a}(t)$
expressible in terms of $\phi$, and its covariant spatial
derivatives. These quantities are also called world line
${\Gamma}$-observables.
In summary, we have seen that an explicit time variable emerges
in the FRW model from gravity coupled to a massless scalar
field; it may be interpreted as a cosmological time, and can be used by
observers as a clock to measure the passage of time. A set of
'evolving constants of motion' has been constructed by using the
dynamical time variable that emerges from the scalar field or the
scale factor, which sets the condition on the ${\Gamma}$-observables.
\section{ Acknowledgement}
I would like to thank Dr Hugh Luckock for his help in the
completion of this work.
\newpage
\section{Estimating the contribution from heavy hitters}\SectionName{heavy-contrib}
Before giving our algorithm $\mathsf{HighEnd}$ for estimating $\|x_L\|_p^p$,
we first give a few necessary lemmas and theorems.
The following theorem gives an algorithm for finding the $\phi$-heavy
hitters with respect to $F_p$.
This algorithm uses the dyadic interval idea of \cite{CM05} together
with a black-box reduction of the problem of finding $F_p$ heavy
hitters to the problem of estimating $F_p$.
Our proof is in \Section{fp-hh}. We note that our data structure both
improves
and generalizes that of \cite{GSS08}, which gave an algorithm with
slightly worse bounds that only worked in the case $p=1$.
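To convey the dyadic-interval idea concretely, the toy Python sketch below (our illustration) performs the hierarchical search on an explicit vector, descending only into dyadic intervals of large $F_p$ mass; in $\mathsf{F_pHH}$ these interval masses are of course estimated by sketches rather than computed exactly, and we take $p=1$ purely for concreteness.
\begin{verbatim}
def find_heavy(x, phi):
    # Descend the binary hierarchy of dyadic intervals over [0, n),
    # recursing only where the (here: exact) F_1 mass is at least
    # phi * F_1(x). For simplicity, len(x) is a power of 2.
    total = sum(abs(v) for v in x)
    out, stack = [], [(0, len(x))]
    while stack:
        lo, hi = stack.pop()
        mass = sum(abs(v) for v in x[lo:hi])  # sketched in reality
        if mass < phi * total:
            continue
        if hi - lo == 1:
            out.append(lo)
        else:
            mid = (lo + hi) // 2
            stack.append((lo, mid))
            stack.append((mid, hi))
    return sorted(out)

print(find_heavy([1, 9, 0, 0, 3, 0, 0, 1], phi=0.2))  # [1, 4]
\end{verbatim}
Since at most $1/\phi$ intervals per level can carry a $\phi$ fraction of the mass, the number of probes stays bounded, which is what drives the $\phi^{-1}\log (\phi n)$ factors in the bounds below.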
\begin{theorem}\TheoremName{fp-hh}
There is an algorithm $\mathsf{F_pHH}$ satisfying the following properties.
Given $0<\phi<1$ and $0<\delta<1$, with probability
at least $1-\delta$,
$\mathsf{F_pHH}$ produces a list $L$ such that $L$
contains all $\phi$-heavy hitters and does not contain indices which
are not $\phi/2$-heavy hitters. For each $i\in L$, the algorithm also
outputs $\mathrm{sign}(x_i)$, as well as an estimate $\tilde{x}_i$ of $x_i$
satisfying $\tilde{x}_i^p \in [(6/7)|x_i|^p, (9/7)|x_i|^p]$.
Its space usage is $O(\phi^{-1}\log (\phi n)\log(nmM)\log(\log
(\phi n)/(\delta\phi)))$. Its update time is $O(\log
(\phi n)\cdot \log(\log (\phi n)/(\delta\phi)))$. Its
reporting time is
$O(\phi^{-1}(\log (\phi n)\cdot \log(\log (\phi n)/(\delta\phi))))$.
\end{theorem}
The following moment bound can be derived from the Chernoff bound via
integration, and is most likely standard though we do not know the
earliest reference. A proof can be found in
\cite{KN10}.
\begin{lemma}\LemmaName{good-moment}
Let $X_1,\ldots,X_n$ be such that $X_i$ has expectation $\mu_i$ and
variance $\sigma_i^2$, and $X_i \le K$ almost surely. Then if the
$X_i$ are $\ell$-wise independent for some even integer $\ell\ge 2$,
$$ \mathbf{E}\left[\left(\sum_{i=1}^n X_i - \mu\right)^\ell\right] \le
2^{O(\ell)} \cdot \left(\left(\sigma\sqrt{\ell}\right)^\ell +
\left(K\ell\right)^\ell\right) ,$$
where $\mu = \sum_i \mu_i$ and $\sigma^2 = \sum_i \sigma_i^2$. In
particular,
$$\Pr\left[\left|\sum_{i=1}^n X_i - \mu\right| \ge \lambda\right] \le
2^{O(\ell)} \cdot \left(\left(\sigma\sqrt{\ell}/\lambda\right)^\ell +
\left(K\ell/\lambda\right)^\ell\right) ,
$$
by Markov's inequality on the random variable $(\sum_i X_i - \mu)^\ell$.
\end{lemma}
\begin{lemma}[Khintchine inequality
{\cite{Haagerup82}}]\LemmaName{khintchine}
For $x\in \mathbb{R}^n$, $t\ge 2$, and uniformly random
$z\in\{-1,1\}^n$, $\mathbf{E}_z[|\inprod{x,z}|^t] \le \|x\|_2^t
\cdot\sqrt{t}^t$.
\end{lemma}
In the following lemma, and henceforth in this section, $i$ denotes
$\sqrt{-1}$.
\begin{lemma}\LemmaName{unity}
Let $x\in\mathbb{R}^n$ be arbitrary. Let
$z \in \{e^{2 \pi i/r}, e^{2\pi i \cdot 2/r}, e^{2 \pi i \cdot 3/r},
\ldots, e^{2 \pi i \cdot r/r}\}^n$
be a random such vector for $r\ge 2$ an even integer.
Then for $t\ge 2$ an even integer, $\mathbf{E}_z[|\inprod{x,z}|^t] \le
\|x\|_2^t\cdot 2^{t/2}\sqrt{t}^t$.
\end{lemma}
\begin{proof}
Since $x$ is real, $\left | \langle x, z \rangle \right |^2 =
\left (\sum_{j=1}^n {\bf Re}[z_j] \cdot x_j \right )^2
+ \left (\sum_{j = 1}^n {\bf Im}[z_j] \cdot x_j \right )^2.$
Then by Minkowski's inequality,
\begin{align}
\nonumber \mathbf{E}[|\inprod{x,z}|^t] &= \mathbf{E}\left[\left|\left
(\sum_{j=1}^n {\bf Re}[z_j] \cdot x_j \right )^2
+ \left (\sum_{j = 1}^n {\bf Im}[z_j] \cdot x_j \right
)^2\right|^{t/2}\right] \\
\nonumber & \le \left(2\cdot \max\left\{\mathbf{E}\left[\left(\sum_{j=1}^n
{\bf Re}[z_j]
\cdot x_j \right)^t \right]^{2/t}, \mathbf{E}\left[\left(\sum_{j=1}^n
{\bf Im}[z_j]
\cdot x_j \right)^t \right]^{2/t}\right\}\right)^{t/2}\\
& \le 2^{t/2}\cdot \left(\mathbf{E}\left[\left(\sum_{j=1}^n
{\bf Re}[z_j]
\cdot x_j \right)^t \right]
+ \mathbf{E}\left[\left(\sum_{j = 1}^n {\bf Im}[z_j] \cdot x_j
\right)^t\right] \right).\EquationName{khintchine-it}
\end{align}
Since $r$ is even, we may write ${\bf Re}[z_j]$ as $(-1)^{y_j}|{\bf Re}[z_j]|$ and
${\bf Im}[z_j] $ as $(-1)^{y_j'}|{\bf Im}[z_j]|$, where $y, y' \in \{-1,1\}^n$
are random sign vectors
chosen independently of each other. Let us fix the values of
$|{\bf Re}[z_j]|$
and $|{\bf Im}[z_j]|$ for each $j \in [n]$, considering just the
randomness of $y$ and $y'$. Applying
\Lemma{khintchine} to bound each of the expectations in
\Equation{khintchine-it}, we obtain the bound
$2^{t/2}\cdot\sqrt{t}^t\cdot (\|b\|_2^t + \|b'\|_2^t) \le
2^{t/2}\cdot\sqrt{t}^t\cdot (\|b\|_2^2 + \|b'\|_2^2)^{t/2}$ where $b_j
= \mathbf{Re}[z_j]\cdot x_j$ and $b'_j = \mathbf{Im}[z_j]\cdot x_j$. But this is
just $2^{t/2}\cdot\sqrt{t}^t\cdot \|x\|_2^t$ since $|z_j|^2 = 1$.
\end{proof}
\subsection{The $\mathsf{HighEnd}$ data structure}\SectionName{head-contrib}
In this section, we assume we know a subset $L \subseteq [n]$ of indices $j$ so
that
\begin{enumerate}
\item for all $j$ for which $|x_j|^p \geq \alpha \|x\|^p_p$, $j \in L$,
\item if $j \in L$, then $|x_j|^p \geq (\alpha/2) \|x\|^p_p$,
\item for each $j \in L$, we know $\mathrm{sign}(x_j)$.
\end{enumerate}
for some $0<\alpha < 1/2$ which we know. We also are given some $0 <
\varepsilon < 1/2$.
We would like to output a value $\|x_L\|_p^p \pm O(\varepsilon)\|x\|_p^p$
with large constant probability. We assume $1/\alpha = O(1/\varepsilon^2)$.
We first define the $\mathsf{BasicHighEnd}$ data structure.
Put $s = \ceil{4/\alpha}$.
We choose a hash function $h:[n] \rightarrow [s]$ at random from an
$r_h$-wise independent family for $r_h = \Theta(\log(1/\alpha))$.
Also, let $r = \Theta(\log 1/\varepsilon)$ be a sufficiently large even
integer.
For each $j \in [n]$, we associate a random complex root of unity
$e^{2 \pi i g(j) /r}$, where $g:[n]\rightarrow[r]$ is drawn at random
from an $r_g$-wise independent family for $r_g = r$.
We initialize $s$
counters $b_1, \ldots, b_s$ to $0$. Given an update of the form
$(j,v)$, add $e^{2 \pi i g(j) /r} \cdot v$ to $b_{h(j)}$.
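For intuition, here is a toy Python rendering of a single $\mathsf{BasicHighEnd}$ instance (our illustration only: it samples $h$ and $g$ with full randomness, whereas the analysis requires the bounded-independence families just described):
\begin{verbatim}
import cmath
import random

class BasicHighEnd:
    def __init__(self, s, r, seed=0):
        self.s, self.r = s, r
        self.b = [0j] * s          # complex counters b_1, ..., b_s
        self.rng = random.Random(seed)
        self.h, self.g = {}, {}    # hash values, sampled lazily

    def _hash(self, table, n_vals, j):
        if j not in table:
            table[j] = self.rng.randrange(n_vals)
        return table[j]

    def update(self, j, v):
        # add e^{2 pi i g(j)/r} * v to the counter b_{h(j)}
        k = self._hash(self.h, self.s, j)
        g_j = self._hash(self.g, self.r, j)
        self.b[k] += cmath.exp(2j * cmath.pi * g_j / self.r) * v

sk = BasicHighEnd(s=8, r=16)
for j, v in [(3, 1.0), (7, 2.5), (3, -0.5)]:
    sk.update(j, v)
print(sk.b)
\end{verbatim}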
We now define the $\mathsf{HighEnd}$ data structure as follows.
Define $T = \tau\cdot \max\{\log(1/\varepsilon),\log(2/\alpha)\}$ for a
sufficiently large constant $\tau$ to be determined
later. Define
$t = 3T$ and instantiate $t$ independent copies of the
$\mathsf{BasicHighEnd}$ data structure.
Given an update $(j,v)$, perform
the update described above to each of the copies of $\mathsf{BasicHighEnd}$.
We think of this data structure as a $t \times s$ matrix of counters
$D_{j,k}$, $j \in [t]$ and $k \in [s]$. We let $g^j$ be the hash function
$g$ in the $j$th independent instantiation of $\mathsf{BasicHighEnd}$, and
similarly define $h^j$. We sometimes use $g$ to denote the tuple
$(g^1,\ldots,g^t)$, and similarly for $h$.
We now define our estimator, but first we give some notation. For
$w\in L$, let $j(w,1)<j(w,2)<\ldots<j(w,n_w)$ be the set of $n_w$
indices $j\in
[t]$ such that $w$ is {\em isolated} by $h^j$ from other indices in
$L$; that is, indices $j\in [t]$ where no other $w'\in L$ collides
with $w$ under $h^j$.
\vspace{.1in}
\noindent {\bf Event $\mathcal{E}$}. Define $\mathcal{E}$ to be the
event that $n_w \ge T$ for all $w\in L$.
\vspace{.1in}
If $\mathcal{E}$ does not hold, our estimator simply fails. Otherwise,
define
$$ x_w^* = \frac 1T\cdot \sum_{k=1}^T e^{-2 \pi i
g^{j(w,k)}(w) /r} \cdot \textrm{sign}(x_w) \cdot D_{j(w,k),
h^{j(w,k)}(w)} .$$
If $\mathbf{Re} [x_w^*] < 0$ for any $w\in L$, then we output
fail. Otherwise, define
$$ \Psi' = \sum_{w \in L} \left( x_w^*
\right )^p
.$$
Our estimator is then $\Psi = \mathbf{Re}[\Psi']$.
Note $x^*$ is a complex number. By $z^p$ for complex $z$, we mean
$|z|^p \cdot e^{ip\cdot \arg(z)}$, where
$\arg(z)\in (-\pi, \pi]$ is the angle formed by the vector from the
origin to $z$ in the complex plane.
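Concretely, this is the principal branch of the complex power; the following small Python sketch (our illustration) computes $z^p$ exactly as defined here:
\begin{verbatim}
import cmath

def complex_pow(z, p):
    # |z|^p * e^{i p arg(z)}, with arg(z) in (-pi, pi]
    return (abs(z) ** p) * cmath.exp(1j * p * cmath.phase(z))

z, p = 1.2 + 0.3j, 1.5
print(complex_pow(z, p))
print(z ** p)   # Python's built-in power uses the same branch
\end{verbatim}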
\subsection{A useful random variable}
For $w\in L$, we make the definitions
$$y_w \mathbin{\stackrel{\rm def}{=}} \frac{x_w^* - |x_w|}{|x_w|}, \hspace{.5in} \Phi_w \mathbin{\stackrel{\rm def}{=}}
|x_w|^p\cdot
\left(\sum_{k=0}^{r/3}\binom{p}{k}\cdot y_w^k \right) $$
as well as $\Phi \mathbin{\stackrel{\rm def}{=}} \sum_{w\in L} \Phi_w$.
We assume $\mathcal{E}$ occurs so that the $y_w$ and $\Phi_w$
(and hence $\Phi$) are defined. Also, we use the definition
$\binom{p}{k} = (\prod_{j=0}^{k-1} (p - j))/k!$ (note $p$ may not be
an integer).
Our overall goal is to show that $\Psi = \|x_L\|_p^p \pm O(\varepsilon)\cdot
\|x\|_p^p$ with large constant probability. Our proof plan is
to first show that $|\Phi - \|x_L\|_p^p| = O(\varepsilon)\cdot \|x\|_p^p$
with large constant probability, then
to show that $|\Psi' - \Phi| = O(\varepsilon)\cdot \|x\|_p^p$ with large
constant probability, at which point our claim follows by a union
bound and the triangle inequality since $|\Psi - \|x_L\|_p^p| \le
|\Psi' - \|x_L\|_p^p|$ since $\|x_L\|_p^p$ is real.
Before analyzing $\Phi$, we define the following event.
\vspace{.1in}
\noindent {\bf Event $\mathcal{D}$}. Let $\mathcal{D}$ be the event
that for all $w\in L$ we have
$$\frac{1}{T^2} \sum_{k=1}^T \sum_{\substack{v\notin
L\\h^{j(w,k)}(v) = h^{j(w,k)}(w)}} x_v^2 <
\frac{(\alpha \cdot \|x\|_p^p)^{2/p}}{r} .$$
\vspace{.1in}
We also define
$$V =
\frac{1}{T^2}\sum_{w\in L}\sum_{j=1}^t\sum_{\substack{v\notin
L\\h^j(w) = h^j(v)}} |x_w|^{2p-2}\cdot |x_v|^2 .$$
\begin{theorem}\TheoremName{phiworks}
Conditioned on $h$,
$\mathbf{E}_g[\Phi] = \|x_L\|_p^p$ and $\mathbf{Var}_g[\Phi\mid \mathcal{D}] = O(V)$.
\end{theorem}
\begin{proof}
By linearity of expectation,
\begin{eqnarray*}
\mathbf{E}_g[\Phi] = \sum_{w \in L} |x_w|^p \cdot
\left [\sum_{k=0}^{r/3} {p \choose k} \mathbf{E}_g[y_w^k]\right]
= \sum_{w \in L} |x_w|^p +
\sum_{w \in L} |x_w|^p \cdot \sum_{k = 1}^{r/3} {p \choose k}
\mathbf{E}_g\left [y_w^k \right ] ,
\end{eqnarray*}
where we use that ${p \choose 0} = 1$. Then
$\mathbf{E}_g[y_w^k] = 0$ for $k > 0$ by
using linearity of expectation and $r_g$-wise independence,
since each summand involves at most
$k < r$ $r$th roots of unity.
Hence,
\begin{eqnarray*}
\mathbf{E}_g[\Phi] = \sum_{w \in L}|x_w|^p.
\end{eqnarray*}
We now compute the variance. Note that if the $g^j$ were each fully
independent, then we would have $\mathbf{Var}_g[\Phi\mid \mathcal{D}] =
\sum_{w\in
L}\mathbf{Var}_g[\Phi_w\mid \mathcal{D}]$ since different $\Phi_w$ depend on
evaluations of the $g^j$ on
disjoint $v\in [n]$. However, since $r_g > 2r/3$,
$\mathbf{E}_g[|\Phi|^2]$
is identical as in the case of full independence of the $g^j$. We thus
have $\mathbf{Var}_g[\Phi \mid\mathcal{D}] = \sum_{w\in L}
\mathbf{Var}_g[\Phi_w\mid\mathcal{D}]$ and have reduced to
computing $\mathbf{Var}_g[\Phi_w \mid \mathcal{D}]$.
\begin{eqnarray*}
\mathbf{Var}_g[\Phi_w\mid\mathcal{D}] & = & \mathbf{E}_g[|\Phi_w -
\mathbf{E}_g[\Phi_w]|^2 \mid\mathcal{D}]\\
&=& |x_w|^{2p}\cdot \mathbf{E}_g\left[\left|\sum_{k=1}^{r/3} \binom{p}{k}
y_w^k\right|^2\mid \mathcal{D}\right]\\
&=& |x_w|^{2p}\cdot \left(p^2\cdot\mathbf{E}_g[|y_w|^2\mid \mathcal{D}] +
\sum_{k=2}^{r/3}
O(\mathbf{E}_g[|y_w|^{2k} \mid \mathcal{D}])\right)
\end{eqnarray*}
We have
\begin{equation}
\mathbf{E}_g[|y_w|^2 \mid\mathcal{D}]
\mathbin{\stackrel{\rm def}{=}} u_w^2
= \frac {1}{T^2}\sum_{k=1}^T
\sum_{\substack{v\notin
L\\h^{j(w,k)}(v) = h^{j(w,k)}(w)}} \frac{x_v^2}{x_w^2}
,\EquationName{uw}
\end{equation}
so that
$$ \sum_{w\in L} p^2\cdot \mathbf{E}_g[|y_w|^2 \mid \mathcal{D}] \le p^2V .$$
\Equation{uw} follows since, conditioned on $\mathcal{E}$ so that
$y_w$ is well-defined,
$$ \mathbf{E}_g[|y_w|^2] =
\frac{1}{T^2x_w^2}\sum_{k=1}^T\sum_{k'=1}^T\sum_{\substack{v\notin
L\\h^{j(w,k)}(v) = h^{j(w,k)}(w)}}\sum_{\substack{v'\notin
L\\h^{j(w,k')}(v') = h^{j(w,k')}(w)}}\mathbf{E}[e^{-2\pi i(g^{j(w,k)}(v) -
g^{j(w,k')}(v'))/r}]x_vx_{v'} .$$
When $j(w,k)\neq j(w,k')$ the above expectation is $0$ since the
$g^j$ are independent across different $j$. When $j(w,k) = j(w,k')$
the above expectation is only non-zero for $v= v'$ since
$r_g \ge 2$.
We also have for $k\ge 2$ that
$$ \mathbf{E}_g[|y_w|^{2k} \mid \mathcal{D}] \le 2^{O(k)}\cdot u_w^{2k}\cdot
(2k)^k$$
by \Lemma{unity}, so that
$$ \sum_{k=2}^{r/3} \mathbf{E}_g[|y_w|^{2k}\mid \mathcal{D}] = O(u_w^2) $$
since $\mathcal{D}$ holds and so the sum is dominated by its first
term. Thus, $\mathbf{Var}_g[\Phi \mid \mathcal{D}] =
O(V)$.
\end{proof}
\begin{lemma}\LemmaName{Visgood} $\mathbf{E}_h[V] \le 3\alpha\cdot
\|x\|_p^{2p}/(4T)$.
\end{lemma}
\begin{proof}
For any $w\in L$, $v\notin L$, and $j\in [t]$, we have
$\Pr_h[h^j(w) = h^j(v)] = 1/s \le \alpha/4$ since $r_h\ge 2$. Thus,
\begin{align}
\nonumber \mathbf{E}_h[V] & \le \frac{\alpha}{4T^2}\sum_{\substack{w\in
L\\v\not\in L\\j\in [t]}} |x_w|^{2p-2}|x_v|^2\\
\nonumber {} & = \frac{3\alpha}{4T} \left(\sum_{w\in L} |x_w|^p
|x_w|^{p-2}\right)\left(\sum_{v\notin L} |x_v|^2\right)\\
{} & \le \frac{3\alpha}{4T} \left(\sum_{w\in
L}\|x\|_p^p(\alpha \cdot\|x\|_p^p)^{(p-2)/p}\right)\left(\frac
1{\alpha}(\alpha
\cdot\|x\|_p^p)^{2/p}\right)\EquationName{max-l2}\\
\nonumber {} & = \frac 34\cdot \alpha \cdot \|x\|_p^{2p}/T .
\end{align}
where \Equation{max-l2} used that $\|x_{[n]\backslash L}\|_2^2$ is
maximized when $[n]\backslash L$ contains exactly $1/\alpha$
coordinates $v$ each with $|x_v|^p = \alpha\|x\|_p^p$, and that
$|x_w|^{p-2} \le (\alpha\cdot \|x\|_p^p)^{(p-2)/p}$ since $p\le 2$.
\end{proof}
\begin{lemma}\LemmaName{isolated}
$\Pr_h[\mathcal{E}] \ge 1 - \varepsilon$.
\end{lemma}
\begin{proof}
For any $j\in [t]$, the probability that $w$ is isolated by $h^j$ is
at least $1/2$, since the expected number of collisions with $w$ is at
most $1/2$ by pairwise independence of the $h^j$ and the fact that
$|L| \le 2/\alpha$ so that $s\ge 2|L|$. If $X$ is the expected number
of buckets where $w$ is isolated, the Chernoff bound gives $\Pr_h[X <
(1-\epsilon)\mathbf{E}_h[X]] < \exp(-\epsilon^2\mathbf{E}_h[X]/2)$ for $0<\epsilon<1$. The
claim follows for $\tau \ge 24$ by setting $\epsilon = 1/3$ then
applying a union bound
over $w\in L$.
\end{proof}
\begin{lemma}\LemmaName{condition-D}
$\Pr_h[\mathcal{D}] \ge 63/64$.
\end{lemma}
\begin{proof}
We apply the bound of \Lemma{good-moment} for a single $w\in L$. Define
$X_{j,v} =
(x_v^2/T^2)\cdot \mathbf{1}_{h^j(v) = h^j(w)}$ and $X = \sum_{j=1}^t
\sum_{v\notin L} X_{j,v}$. Note that $X$ is an upper bound for the
left hand side of the inequality defining $\mathcal{D}$, and thus it
suffices to show a tail bound for $X$. In the notation of
\Lemma{good-moment}, we have $\sigma^2 \le
(3/(sT^3))\cdot \|x_{[n]\backslash L}\|_4^4$, $K = (\alpha\cdot
\|x\|_p^p)^{2/p}/T^2$, and $\mu =
(3/(sT))\cdot \|x_{[n]\backslash L}\|_2^2$.
Since $\|x_{[n]\backslash L}\|_2^2$ and $\|x_{[n]\backslash L}\|_4^4$
are each maximized when there are exactly $1/\alpha$ coordinates
$v\notin L$ with $|x_v|^p = \alpha\cdot \|x\|_p^p$,
$$\sigma^2
\le \frac{3}{4T^3}\cdot (\alpha\cdot\|x\|_p^p)^{4/p}, \hspace{.5in}\mu
\le \frac{3}{4T}\cdot (\alpha\cdot \|x\|_p^p)^{2/p} .$$
Setting $\lambda =
(\alpha\cdot \|x\|_p^p)^{2/p}/(2r)$, noting that $\mu <
\lambda$ for $\tau$ sufficiently large, and assuming $\ell\le r_h$ is
even, we apply \Lemma{good-moment} to obtain
$$ \Pr[X \ge 2\lambda] \le 2^{O(\ell)} \cdot
\left(\left(\frac{\sqrt{3}r\cdot
\sqrt{\ell}}{T^{3/2}}\right)^\ell +
\left(\frac{2r\cdot \ell}{T^2}\right)^\ell\right)
.$$
By setting $\tau$ sufficiently large and $\ell = \log(2/\alpha) + 6$,
the above probability is at most $(1/64)\cdot (\alpha/2)$. The
lemma follows by a union bound over all $w\in L$, since $|L| \le
2/\alpha$.
\end{proof}
We now define another event.
\vspace{.1in}
\noindent {\bf Event $\mathcal{F}$}. Let $\mathcal{F}$ be the event
that for all $w\in L$ we have $|y_w| < 1/2$.
\vspace{.1in}
\begin{lemma}\LemmaName{last-statement}
$\Pr_g[\mathcal{F} \mid \mathcal{D}] \ge 63/64$.
\end{lemma}
\begin{proof}
$\mathcal{D}$ occurring implies that $u_w \le \sqrt{1/r} \le
\sqrt{1/(64(\log(2/\alpha)+6))}$ (recall
we assume $1/\alpha = O(1/\varepsilon^2)$ and pick $r = \Theta(\log(1/\varepsilon))$
sufficiently large, and $u_w$ is as is defined in \Equation{uw}), and
we also have $\mathbf{E}_g[|y_w|^\ell \mid \mathcal{D}]
< u_w^\ell\sqrt{\ell}^\ell 2^\ell$ by \Lemma{unity}.
Applying Markov's bound on the random variable $|y_w|^\ell$ for even
$\ell \le r_g$, we have $|y_w|^\ell$ is determined by $r_g$-wise
independence of the $g^j$, and thus
$$ \Pr_g[|y_w| \ge 1/2\mid\mathcal{D}] <
\left(\sqrt{\frac{16\ell}{64(\log(2/\alpha)+6)}}\right)^\ell ,$$
which equals $(1/64)\cdot (\alpha/2)$ for $\ell = \log(2/\alpha) +
6$. We then apply a union bound over all $w\in L$.
\end{proof}
\begin{lemma}\LemmaName{good-estimator}
Given $\mathcal{F}$, $|\Psi' - \Phi| < \varepsilon \|x_L\|_p^p$.
\end{lemma}
\begin{proof}
Observe
$$ \Psi' = \sum_{w\in L}|x_w|^p\cdot (1 + y_w)^p .$$
We have that $\ln(1+z)$, as a function of $z$, is holomorphic on the
open disk of radius $1$
about $0$ in the complex plane, and thus $f(z) = (1+z)^p$ is
holomorphic in this region
since it is the composition $\exp(p\cdot \ln(1 + z))$ of holomorphic
functions. Therefore, $f(z)$ equals its Taylor expansion about $0$
for all $z\in\mathbb{C}$ with $|z| < 1$ (see for example \cite[Theorem
11.2]{Wong08}).
Then since $\mathcal{F}$ occurs, we can Taylor-expand
$f$ about $0$ for $z = y_w$ and
apply Taylor's theorem to obtain
\begin{align*}
\Psi' & = \sum_{w\in
L}|x_w|^p\left(\sum_{k=0}^{r/3}\binom{p}{k}y_w^k \pm
O\left(\binom{p}{r/3 + 1}\cdot |y_w|^{r/3+1}\right)\right) \\
{} & = \Phi + O\left(\|x_L\|_p^p\cdot \left(\binom{p}{r/3 + 1}\cdot
|y_w|^{r/3+1}\right)\right)
\end{align*}
The lemma follows since $\binom{p}{r/3+1} < 1$ and $|y_w|^{r/3+1}
< \varepsilon$ for $|y_w| < 1/2$.
\end{proof}
\begin{theorem}\TheoremName{psiworks}
The space used by $\mathsf{HighEnd}$ is
$O(\alpha^{-1}\log(1/\varepsilon)\log(mM/\varepsilon) + \log^2(1/\varepsilon)\log
n)$. The update time is
$O(\log^2(1/\varepsilon))$. The reporting time is
$O(\alpha^{-1}\log(1/\varepsilon)\log(1/\alpha))$. Also, $\Pr_{h,g}[|\Psi -
\|x_L\|_p^p| < O(\varepsilon)\cdot \|x\|_p^p] > 7/8$.
\end{theorem}
\begin{proof}
We first argue correctness.
By a union bound, $\mathcal{E}$ and $\mathcal{D}$ hold
simultaneously with probability $31/32$.
By Markov's inequality and \Lemma{Visgood},
$V = O(\alpha\cdot \|x\|_p^{2p}/T)$ with probability $63/64$. We then
have by Chebyshev's inequality and \Theorem{phiworks} that $|\Phi -
\|x_L\|_p^p| = O(\varepsilon)\cdot \|x\|_p^p$ with probability $15/16$.
\Lemma{good-estimator} then implies
$|\Psi' - \|x_L\|_p^p| = O(\varepsilon)\cdot \|x\|_p^p$
with probability $15/16 - \Pr[\neg \mathcal{F}]>
7/8$ by \Lemma{last-statement}. In this case, the same must hold true
for $\Psi$ since $\Psi =
\mathbf{Re} [\Psi']$ and $\|x_L\|_p^p$ is real.
Next we discuss space complexity. We start with analyzing the
precision required to store the counters $D_{j,k}$. Since our
correctness analysis conditions on
$\mathcal{F}$, we can assume $\mathcal{F}$ holds.
We store the real and imaginary parts of each
counter $D_{j,k}$ separately. If we store each such part to within
precision $\gamma/(2mT)$ for
some $0<\gamma<1$ to be determined later, then each of the real and
imaginary parts, which are
the sums
of at most $m$ summands from the $m$ updates in the stream, is stored
to within additive error $\gamma/(2T)$ at the end of the stream.
Let $\tilde{x}_w^*$ be our calculation of $x_w^*$ with such
limited precision.
Then, each of the real and
imaginary parts of $\tilde{x}_w^*$ is within additive error
$\gamma/2$ of those for $x_w^*$. Since $\mathcal{F}$ occurs, $|x_w^*|
> 1/2$, and thus $\gamma/2 <
\gamma |x_w^*|$, implying $|\tilde{x}_w^*| = (1\pm
O(\gamma))|x_w^*|$. Now we argue $\arg(\tilde{x}_w^*) =
\arg(x_w^*) \pm O(\sqrt{\gamma})$. Write $x_w^* = a + ib$ and
$\tilde{x}_w^* = \tilde{a} + i\tilde{b}$ with $\tilde{a} = a \pm
\gamma/2$ and $\tilde{b} = b \pm \gamma/2$.
We have $\cos(\arg(x_w^*)) = a/\sqrt{a^2 + b^2}$.
Also,
$\cos(\arg(\tilde{x}_w^*)) = (a\pm \gamma/2) / ((1\pm
O(\gamma))\sqrt{a^2 + b^2}) = (1\pm O(\gamma))\cos(\arg(x_w^*)) \pm
O(\gamma) = \cos(\arg(x_w^*)) \pm O(\gamma)$, implying
$\arg(\tilde{x}_w^*) = \arg(x_w^*) \pm O(\sqrt{\gamma})$.
Our final output is
$\sum_{w\in L} |\tilde{x}^*_w|^p\cdot \cos(p\cdot
\arg(\tilde{x}^*_w))$. Since $\cos$ never
has derivative larger than $1$ in magnitude, this is $\sum_{w\in L}
[(1\pm O(\gamma))|x_w^*|^p\cos(p\cdot \arg(x_w^*)) \pm
O(\sqrt{\gamma})\cdot(1\pm O(\gamma))|x_w^*|^p]$. Since $\mathcal{F}$
occurs, $|x_w^*|^p < (3/2)^p\cdot |x_w|^p$, and thus our overall error
introduced from limited precision is $O(\sqrt{\gamma}\cdot
\|x_L\|_p^p)$, and it thus suffices to set $\gamma = O(\varepsilon^2)$,
implying each $D_{j,k}$ requires $O(\log(mM/\varepsilon))$ bits of
precision.
For the remaining part of the space analysis, we discuss storing the
hash functions.
The hash functions $h^j,g^j$ each require $O(\log(1/\varepsilon)\log n)$
bits of seed, and thus in total consume $O(\log^2(1/\varepsilon)\log n)$ bits.
Finally we discuss time complexity. To
perform an
update, for each $j\in [t]$ we must evaluate $g^j$ and $h^j$ then
update a counter. Each of $g^j,h^j$ require $O(\log(1/\varepsilon))$ time to
evaluate. For the reporting time, we can mark all counters with the
unique $w\in L$ which hashes to it under the corresponding $h^j$ (if a
unique such $w$
exists) in $|L| \cdot t\cdot r_h =
O(\alpha^{-1}\log(1/\varepsilon)\log(1/\alpha))$ time. Then, we sum up the
appropriate counters for each $w\in L$, using the Taylor expansion of
$\cos(p\cdot \arg(z))$ up to the $\Theta(\log(1/\varepsilon))$th degree to
achieve additive error $\varepsilon$.
Note that conditioned on $\mathcal{F}$, $\arg(x_w^*) \in (-\pi/4,
\pi/4)$, so that $|p\cdot \arg(x_w^*)|$ is bounded away from $\pi/2$
for $p$ bounded away from $2$; in fact, one can even show via some
calculus that $\arg(x_w^*)\in (-\pi/6,\pi/6)$ when $\mathcal{F}$
occurs by showing that $\cos(\arg(x_w^*)) = \cos(\arg(1 - y_w))$ is
minimized for $|y_w| \le 1/2$ when $y_w = 1/4 +
i\sqrt{3}/4$. Regardless, additive error $\varepsilon$ is relative error
$O(\varepsilon)$, since if $|p\cdot \arg(z)|$ is bounded away from $\pi/2$,
then $|\cos(p\cdot \arg(z))| = \Omega(1)$.
\end{proof}
\section{The final algorithm: putting it all
together}\SectionName{final-alg}
To obtain our final algorithm, one option is to run $\mathsf{HighEnd}$ and
$\mathsf{LightEstimator}$ in parallel after finding $L$, then output the sum of
their estimates. Note that by the variance bound in
\Theorem{modified}, the output of a single instantiation of
$\mathsf{LightEstimator}$ is $\|x_{[n]\backslash L}\|_p^p \pm O(\varepsilon)\|x\|_p^p$ with
large constant probability. The downside to this option is that
\Theorem{fp-hh} uses space that would make our
overall $F_p$-estimation algorithm suboptimal by ${\mathrm{polylog}}(n/\varepsilon)$
factors, and $\mathsf{HighEnd}$ by an $O(\log(1/\varepsilon))$ factor for $\alpha =
\varepsilon^2$ (\Theorem{psiworks}).
We can overcome this by a combination of balancing and universe
reduction. Specifically, for balancing,
notice that if instead of having $L$ be a list of $\varepsilon^2$-heavy
hitters, we defined it as a list of $\epsilon^2$-heavy hitters for
some $\epsilon > \varepsilon$, we could improve the space of both
\Theorem{fp-hh} and \Theorem{psiworks}. To
then make the variance in $\mathsf{LightEstimator}$ sufficiently small,
i.e. $O(\varepsilon^2\|x\|_p^2)$, we could run $O((\epsilon / \varepsilon)^2)$
instantiations of $\mathsf{LightEstimator}$ in parallel and output the average
estimate, keeping the space optimal but increasing the update time
to $\Omega((\epsilon/\varepsilon)^2)$. This balancing gives a smooth
tradeoff between space and update time;
in fact note that for $\epsilon = 1$,
our overall algorithm simply becomes a derandomized variant of Li's
geometric mean estimator. We would like though to have $\epsilon \ll
1$ to have small update time.
Doing this balancing does not
resolve all our issues though, since \Theorem{fp-hh} is
suboptimal by a $\log n$ factor. That is, even if we picked $\epsilon
= 1$, \Theorem{fp-hh} would cause our overall space to be
$\Omega(\log(n)\log(mM))$, which is suboptimal. To overcome this issue we
use universe reduction. Specifically, we set $N = 1/\varepsilon^{18}$ and
pick hash functions $h_1:[n]\rightarrow[N]$ and
$\sigma:[n]\rightarrow\{-1,1\}$. We define a new $N$-dimensional
vector $y$ by $y_i = \sum_{h_1(j) = i} \sigma(j) x_j$. Henceforth in
this section, $y$, $h_1$, and $\sigma$ are as discussed here. Rather
than computing a list $L$ of heavy hitters of
$x$,
we instead compute a list $L'$ of heavy hitters of $y$.
Then, since $y$
has length only $\mathop{{\rm poly}}(1/\varepsilon)$, \Theorem{fp-hh} is only
suboptimal by ${\mathrm{polylog}}(1/\varepsilon)$ factors and our balancing trick
applies. The list $L'$ is also used in place of $L$ for both
$\mathsf{HighEnd}$ and $\mathsf{LightEstimator}$.
Though, since we never learn $L$, we
must modify the algorithm $\mathsf{LightEstimator}$ described in \Remark{modified}.
Namely, the hash
function $h:[n]\rightarrow[R]$ in \Remark{modified} should be
implemented as the composition of $h_1$, and a hash function
$h_2:[N]\rightarrow[R]$ chosen as in \Theorem{pagh} (again with $z = R$
and $c = 2$). Then, we
let $I = [R]\backslash h_2(L')$. The remaining parts of the algorithm
remain the same.
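The universe reduction step itself is simple; the following Python
sketch (ours, and illustrative only: it draws $h_1$ and $\sigma$ fully
at random, whereas the algorithm uses the limited-independence families
discussed below) shows the map from $x$ to $y$.

\begin{verbatim}
import random

def universe_reduce(x, N, seed=0):
    # y_i = sum_{h1(j) = i} sigma(j) * x_j, mapping the n-dimensional
    # vector x down to an N-dimensional vector y, N = poly(1/eps).
    rng = random.Random(seed)
    h1 = [rng.randrange(N) for _ in range(len(x))]
    sigma = [rng.choice((-1, 1)) for _ in range(len(x))]
    y = [0.0] * N
    for j, xj in enumerate(x):
        y[h1[j]] += sigma[j] * xj
    return y, h1, sigma
\end{verbatim}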
There are several issues we must address to show that our universe
reduction step still maintains correctness. Informally, we need that
(a) any $i$ which is a heavy hitter
for $y$ should have exactly one $j\in[n]$ with $h_1(j) = i$ such that
$j$ was a heavy hitter for $x$,
(b) if
$i$ is a heavy hitter for $x$, then $h_1(i)$ is a heavy hitter for
$y$, and $|y_{h_1(i)}|^p = (1\pm O(\varepsilon))|x_i|^p$ so that $x_i$'s
contribution to $\|x\|_p^p$ is properly approximated by $\mathsf{HighEnd}$,
(c) $\|y\|_p^p = O(\|x\|_p^p)$ with large probability, since the error
term in $\mathsf{HighEnd}$ is $O(\varepsilon\cdot \|y\|_p^p)$,
and (d) the amount of $F_p$ mass not
output by $\mathsf{LightEstimator}$ because it collided with a heavy hitter
for $x$ under $h_1$ is negligible. Also, the composition $h =
h_1\circ h_2$ for $\mathsf{LightEstimator}$ does not satisfy the conditions of
\Theorem{pagh} even
though $h_1$ and $h_2$ might do so individually. To see why, as a
simple analogy,
consider that the composition of two purely random functions is no longer
random. For example, as the number of compositions
increases, the probability of two items colliding increases as
well. Nevertheless, the analysis of $\mathsf{LightEstimator}$ carries over
essentially unchanged in this setting, since whenever considering
the distribution of where two items land under $h$, we can first
condition on them not colliding under $h_1$. Not colliding under $h_1$
happens with $1 -
O(\varepsilon^{18})$ probability, and thus the probability that two items
land in two particular buckets $j,j'\in [R]$ under $h$ is still $(1\pm
o(\varepsilon))/R^2$.
We now give our full description and analysis. We pick $h_1$
as in \Theorem{pagh} with $z=R$ and $c = c_h$ a sufficiently large
constant. We also
pick $\sigma$ from an $\Omega(\log N)$-wise independent family.
We run an instantiation of $\mathsf{F_pHH}$ for the vector $y$ with $\phi =
\varepsilon^2/(34C)$ for a sufficiently large constant $C>0$.
We also obtain a value $\tilde{F}_p \in [F_p/2, 3F_p/2]$ using the
algorithm of \cite{KNW10b}. We define $L'$ to be the sublist of those
$w$
output by our $\mathsf{F_pHH}$ instantiation such that $|\tilde{y}_w|^p \ge
(2\varepsilon^2/7)\tilde{F}_p$.
For ease of presentation in what follows, define $L_\phi$ to
be the list of
$\phi$-heavy hitters of $x$ with respect to $F_p$ (``$L$'', without a
subscript, always
denotes the $\varepsilon^2$-heavy hitters with respect to $x$), and
define $z_i
= \sum_{w\in h_1^{-1}(i)\backslash L_{\varepsilon^8}}
\sigma(w)x_w$, i.e. $z_i$ is the contribution to $y_i$ from the
significantly light elements of $x$.
\begin{lemma}\LemmaName{khintchine-tail}
For $x\in \mathbb{R}^n$, $\lambda > 0$ with $\lambda^2$ a multiple of $8$, and
random
$z\in\{-1,1\}^n$ drawn from a $(\lambda^2/4)$-wise independent
family, $\Pr[|\inprod{x,z}| > \lambda\|x\|_2] <
2^{-\lambda^2/4}$.
\end{lemma}
\begin{proof}
By Markov's inequality applied to the (even-powered) random variable
$\inprod{x,z}^{\lambda^2/4}$, $\Pr[|\inprod{x,z}| > \lambda\|x\|_2] <
(\lambda\|x\|_2)^{-{\lambda^2/4}}\cdot \mathbf{E}[\inprod{x,z}^{\lambda^2/4}]$. The claim
follows by applying \Lemma{khintchine}.
\end{proof}
\begin{lemma}\LemmaName{ybounded-lp}
For any $C>0$, there exists $\varepsilon_0$ such that for
$0<\varepsilon < \varepsilon_0$,
$\Pr[\|y\|_p^p > 17C\|x\|_p^p] < 2/C$.
\end{lemma}
\begin{proof}
Condition on $h_1$.
Define $Y(i)$ to be the vector $x_{h_1^{-1}(i)}$. For any vector $v$
we have $\|v\|_2 \le \|v\|_p$ since $p < 2$. Letting
$\mathcal{E}$ be the event that no $i\in[N]$
has $|y_i| > 4\sqrt{\log N}\|Y(i)\|_p$, we have
$\Pr[\mathcal{E}] \ge 1 - 1/N^3$
by \Lemma{khintchine-tail} and a union bound over $i\in[N]$. Also, again by
\Lemma{khintchine-tail}, any fixed
$i\in [N]$ has $|y_i| \le 2t \cdot
\|Y(i)\|_2\le 2t \cdot
\|Y(i)\|_p$ with probability at
least $1 - \max\{1/(2N),2^{-t^2}\}$. Then for fixed $i\in [N]$,
\begin{align*}
\mathbf{E}[|y_i|^p\mid \mathcal{E}] & \le
2^p\|Y(i)\|_p^p +
\sum_{t=0}^\infty \Pr\left[(2\cdot 2^t)^p\|Y(i)\|_p^p <
|y_i|^p \le
(2\cdot 2^{t+1})^p\|Y(i)\|_p^p\mid\mathcal{E}\right] \cdot
(2\cdot
2^{t+1})^p\|Y(i)\|_p^p\\
&\le 2^p\|Y(i)\|_p^p + (1/\Pr[\mathcal{E}])\cdot
\sum_{t=0}^{\log(2\sqrt{\log N})} 2^{-2^{2t}} \cdot
(2\cdot 2^{t+1})^p\|Y(i)\|_p^p\\
&< 4\|Y(i)\|_p^p + (1/\Pr[\mathcal{E}])\cdot
\sum_{t=0}^{\log(2\sqrt{\log N})} 2^{-2^{2t}} \cdot
(2\cdot 2^{t+1})^2\|Y(i)\|_p^p\\
&< 17\|Y(i)\|_p^p
\end{align*}
since
$\Pr[\mathcal{E}] \ge 1 - 1/N^3$ and $\varepsilon_0$ is sufficiently small.
Thus by linearity of
expectation, $\mathbf{E}[\|y\|_p^p\mid\mathcal{E}] \le 17\|x\|_p^p$, which
implies
$\|y\|_p^p\le 17C\|x\|_p^p$ with probability $1 - 1/C$, conditioned on
$\mathcal{E}$
holding. We conclude by again using $\Pr[\mathcal{E}] \ge 1
- 1/N^3$.
\end{proof}
\begin{lemma}\LemmaName{bounded-lp}
With probability at least $1 -
\mathop{{\rm poly}}(\varepsilon)$ over $\sigma$, simultaneously
for all $i\in[N]$ we have that $|z_i| = O(\sqrt{\log(1/\varepsilon)}\cdot
\varepsilon^{6/p}\|x\|_p)$.
\end{lemma}
\begin{proof}
By \Lemma{khintchine-tail}, any individual $i\in [N]$ has $|z_i| \le
4\sqrt{\log N} \cdot
(\sum_{w\in h_1^{-1}(i)\backslash L_{\varepsilon^8}} |x_w|^2)^{1/2}$ with probability at
least $1 - 1/N^4$; note $\sqrt{\log N} = O(\sqrt{\log(1/\varepsilon)})$ since
$N = 1/\varepsilon^{18}$. We then apply a union bound over $i\in[N]$ and use the
fact that $\|v\|_2 \le \|v\|_p$ for $p < 2$, so that $|z_i| =
O(\sqrt{\log(1/\varepsilon)}) \cdot
(\sum_{w\in h_1^{-1}(i)\backslash L_{\varepsilon^8}} |x_w|^p)^{1/p}$ simultaneously
for all $i\in[N]$ (call this
event $\mathcal{E}$) with
probability $1 - \mathop{{\rm poly}}(\varepsilon)$.
We now prove our lemma, i.e.\ we show that with
probability $1 - \mathop{{\rm poly}}(\varepsilon)$,
$|z_i|^p = O(\log^{p/2}(1/\varepsilon)\cdot\varepsilon^6\|x\|_p^p)$
simultaneously for all $i\in[N]$. We apply
\Lemma{good-moment}. Specifically, fix an $i\in [N]$. For all $j$ with
$|x_j|^p \le \varepsilon^8\|x\|_p^p$, let $X_j
= |x_j|^p\cdot \mathbf{1}_{h_1(j) = i}$. Then, in
the notation of \Lemma{good-moment},
$\mu_j = |x_j|^p/N$, and $\sigma_j^2 \le |x_j|^{2p}/N$, and thus $\mu
= \|x\|_p^p/N$ and $\sigma^2 \le \|x\|_{2p}^{2p}/N \le
\varepsilon^8\|x\|_p^{2p}/N$. Also, $K = \varepsilon^8\|x\|_p^p$.
Then if $h_1$ were $\ell$-wise independent for $\ell = 10$,
\Lemma{good-moment} would give
$$ \Pr\left[\left|\sum_j X_j - \|x\|_p^p/N\right| >
\varepsilon^6\|x\|_p^p\right] < 2^{O(\ell)}\cdot (\varepsilon^{7\ell} +
\varepsilon^{2\ell}) = O(\varepsilon/N) .$$
A union bound would then give that with probability $1-\varepsilon$, the $F_p$
mass in any bucket from items $j$ with
$|x_j|^p\le \varepsilon^8\|x\|_p^p$ is at most
$\varepsilon^6\|x\|_p^p$.
Thus by a union bound with event $\mathcal{E}$,
$|z_i|^p = O(\log^{p/2}(1/\varepsilon)\cdot\varepsilon^6\|x\|_p^p)$ for all $i\in[N]$
with probability $1 - \mathop{{\rm poly}}(\varepsilon)$.
Though, $h_1$ is not $10$-wise independent. Instead, it is selected
as in \Theorem{pagh}. However, for any constant
$\ell$, by increasing the constant $c_h$ in our definition of $h_1$ we
can ensure that our $\ell$th moment bound for $(\sum_j X_j -
\mu)$ is
preserved to within a constant factor, which is sufficient to apply
\Lemma{good-moment}.
\end{proof}
\begin{lemma}\LemmaName{still-heavy}
With probability $1 -
\mathop{{\rm poly}}(\varepsilon)$, for all
$w\in L$ we have
$|y_{h_1(w)}|^p = (1\pm O(\varepsilon))|x_w|^p$, and thus with probability $1
- \mathop{{\rm poly}}(\varepsilon)$ when conditioned on $\|y\|_p^p \le 17C\|x\|_p^p$, we
have that if $w$ is
an $\alpha$-heavy hitter for $x$, then
$h_1(w)$ is an $\alpha/(34C)$-heavy hitter for $y$.
\end{lemma}
\begin{proof}
Let $w$ be in $L$.
We know from \Lemma{bounded-lp} that $|z_{h_1(w)}| =
O(\sqrt{\log(1/\varepsilon)}\cdot\varepsilon^{6/p}\|x\|_p)$ with probability $1 -
\mathop{{\rm poly}}(\varepsilon)$, and the elements of $L$ are perfectly hashed
under $h_1$ with probability $1- \mathop{{\rm poly}}(\varepsilon)$ (a birthday bound, since
$|L| \le 2/\varepsilon^2$ and $N = 1/\varepsilon^{18}$). Conditioned on this
perfect hashing, we have that $|y_{h_1(w)}| \ge |x_w| -
O(\varepsilon^{6/p}\sqrt{\log(1/\varepsilon)})\cdot\|x\|_p$. Since for $w\in L$ we have
$|x_w| \ge
\varepsilon^{2/p}\|x\|_p$, and since $p< 2$, we have $|y_{h_1(w)}| \ge (1 -
O(\varepsilon))|x_w|$.
For the second part of the lemma,
$(1 - O(\varepsilon))|x_w| > |x_w|/2^{1/p}$ for $\varepsilon_0$ sufficiently
small, so that $|y_{h_1(w)}|^p \ge |x_w|^p/2$. Thus if $w$ is an
$\alpha$-heavy hitter for $x$, then $|y_{h_1(w)}|^p \ge
(\alpha/2)\|x\|_p^p \ge (\alpha/(34C))\|y\|_p^p$ when $\|y\|_p^p \le
17C\|x\|_p^p$, i.e.\ $h_1(w)$ is an $\alpha/(34C)$-heavy hitter for $y$.
\end{proof}
Finally, the following lemma follows from a Markov bound followed by a
union bound.
\begin{lemma}\LemmaName{small-noise}
For $w\in [n]$ consider the quantity $s_w = \sum_{\substack{v\neq
w\\h(v) = h(w)}} |x_v|^p$. Then, with
probability at least $1-O(\varepsilon)$, $s_w \le \varepsilon^{15}\|x\|_p^p$
simultaneously for all $w\in L$.
\end{lemma}
We now put everything together.
We set $\epsilon =
\varepsilon\log(1/\varepsilon)$. As stated earlier, we define $L'$ to be the sublist
of those $w$
output by our $\mathsf{F_pHH}$ instantiation, now run with $\phi = \epsilon^2/(34C)$,
such that $|\tilde{y}_w|^p \ge
(2\epsilon^2/7)\tilde{F}_p$. We
interpret updates to $x$ as updates to $y$ to then be fed into
$\mathsf{HighEnd}$, with $\alpha = \epsilon^2/(34C)$.
Thus both $\mathsf{HighEnd}$ and
$\mathsf{F_pHH}$ require $O(\varepsilon^{-2}\log(nmM/\varepsilon))$ space.
We now define some events.
\vspace{.1in}
\noindent {\bf Event $\mathcal{A}$}. $L_{\varepsilon^8}$
is perfectly hashed
under $h_1$, and $\forall i\in [N], |z_i|^p = O(\log(1/\varepsilon)^{p/2}\cdot
\varepsilon^6\|x\|_p^p)$.
\vspace{.1in}
\noindent {\bf Event $\mathcal{B}$}. $\forall w\in L_{\epsilon^2}$,
$h_1(w)$ is
output as an $\epsilon^2/(34C)$-heavy hitter by $\mathsf{F_pHH}$.
\vspace{.1in}
\noindent {\bf Event $\mathcal{C}$}. $\forall w\in
L_{\epsilon^2/18}$,
$|y_{h_1(w)}|
= (1\pm O(\varepsilon))|x_w|$.
\vspace{.1in}
\noindent {\bf Event $\mathcal{D}$}. $\tilde{F}_p \in [(1/2)\cdot
\|x\|_p^p, (3/2)\cdot\|x\|_p^p]$, and $\mathsf{HighEnd}$, $\mathsf{LightEstimator}$, and
$\mathsf{F_pHH}$ succeed.
\vspace{.1in}
Now, suppose $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$, and
$\mathcal{D}$ all occur. Then for $w\in L_{\epsilon^2}$,
$w$ is output by $\mathsf{F_pHH}$, and furthermore $|y_{h_1(w)}|^p \ge
(1-O(\varepsilon))|x_w|^p \ge |x_w|^p/2 \ge \epsilon^2\|x\|_p^p/2$. Also,
$\tilde{y}_{h_1(w)}^p \ge
(6/7)\cdot |y_{h_1(w)}|^p$. Since $\tilde{F}_p \le 3\|x\|_p^p/2$, we
have that $h_1(w)\in L'$. Furthermore, we also know that for $i$
output
by $\mathsf{F_pHH}$, $\tilde{y}_i^p \le (9/7)\cdot |y_i|^p$, and thus $i\in
L'$ implies $|y_i|^p \ge (\epsilon^2/9)\cdot \|x\|_p^p$. Notice that
by event $\mathcal{A}$, each $y_i$ is $z_i$, plus potentially $x_{w(i)}$
for some $x_{w(i)}\in L_{\varepsilon^8}$. If $|y_i|^p \ge (\epsilon^2/9)\cdot
\|x\|_p^p$, then there must exist such a $w(i)$, and furthermore it
must be that $|x_{w(i)}|^p \ge (\epsilon^2/18)\cdot \|x\|_p^p$. Thus,
overall, $L'$ contains $h_1(w)$ for all $w\in L_{\epsilon^2}$, and
furthermore if $i\in L'$ then $w(i)\in L_{\epsilon^2/18}$.
Since $L'$ contains $h_1(L_{\epsilon^2})$,
$\mathsf{LightEstimator}$ outputs $\|x_{[n]\backslash h^{-1}(L')}\|_p^p \pm
O(\varepsilon\|x\|_p^p)$. Also, $\mathsf{HighEnd}$ outputs $\|y_{L'}\|_p^p \pm
O(\varepsilon)\cdot
\|y\|_p^p$. Now we analyze correctness. We have
$\Pr[\mathcal{A}] = 1 - \mathop{{\rm poly}}(\varepsilon)$, $\Pr[\mathcal{B}\ |\ \|y\|_p^p
\le 17C\|x\|_p^p] = 1 - \mathop{{\rm poly}}(\varepsilon)$, $\Pr[\mathcal{C}] = 1 -
\mathop{{\rm poly}}(\varepsilon)$, and $\Pr[\mathcal{D}] \ge 5/8$. We also have
$\Pr[\|y\|_p^p \le 17C\|x\|_p^p] \ge 1 - 2/C$. Thus by a union bound
and setting $C$ sufficiently large, we have $\Pr[\mathcal{A}\wedge
\mathcal{B}\wedge \mathcal{C}\wedge \mathcal{D}\wedge (\|y\|_p^p \le
17C\|x\|_p^p)] \ge 9/16$.
Define $L_{\mathrm{inv}}$ to
be the set $\{w(i)\}_{i\in L'}$, i.e. the
heavy hitters of $x$ corresponding to the heavy hitters in $L'$ for
$y$.
Now, if all these events occur, then
$\|x_{[n]\backslash h^{-1}(L')}\|_p^p = \|x_{[n]\backslash
L_{\mathrm{inv}}}\|_p^p \pm O(\varepsilon^{15})\|x\|_p^p$ with probability
$1 - O(\varepsilon)$ by \Lemma{small-noise}. We also have, since
$\mathcal{C}$ occurs and conditioned on $\|y\|_p^p = O(\|x\|_p^p)$,
that $\|y_{L'}\|_p^p \pm O(\varepsilon)\cdot \|y\|_p^p =
\|x_{L_{\mathrm{inv}}}\|_p^p \pm O(\varepsilon)\cdot \|x\|_p^p$. Thus,
overall, our algorithm outputs $\|x\|_p^p \pm O(\varepsilon)\cdot
\|x\|_p^p$ with probability $17/32 >
1/2$ as desired. Notice this probability can be amplified to
$1-\delta$ by
outputting the median of $O(\log(1/\delta))$ independent
instantiations.
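To summarize the structure of the argument, the following toy Python
sketch of ours mirrors the final algorithm with every randomized sketch
replaced by exact computation (and a much smaller $N$): it
universe-reduces $x$ to $y$, takes $L'$ to be the heavy buckets of $y$,
and splits $F_p$ into the part handled by $\mathsf{HighEnd}$ and the part
handled by $\mathsf{LightEstimator}$. In the actual algorithm both parts are
only estimated, with errors as analyzed above.

\begin{verbatim}
import random

def decompose_fp(x, p, eps, seed=1):
    N = int(1.0 / eps) ** 4          # stand-in for N = 1/eps^18
    rng = random.Random(seed)
    h1 = [rng.randrange(N) for _ in range(len(x))]
    sigma = [rng.choice((-1, 1)) for _ in range(len(x))]
    y = [0.0] * N
    for j, xj in enumerate(x):
        y[h1[j]] += sigma[j] * xj
    fp = sum(abs(v) ** p for v in x)
    # L': heavy buckets of y (the real algorithm finds these via FpHH).
    Lp = {i for i, yi in enumerate(y) if abs(yi) ** p >= eps**2 * fp}
    heavy = sum(abs(y[i]) ** p for i in Lp)       # HighEnd's share
    light = sum(abs(xj) ** p for j, xj in enumerate(x)
                if h1[j] not in Lp)               # LightEstimator's share
    return heavy + light                          # ~ (1 +- O(eps)) fp
\end{verbatim}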
We further note that for a single instantiation of $\mathsf{LightEstimator}$, we
have $\mathbf{E}_h[\mathbf{Var}_s[\Phi']] = O(\epsilon^2\|x\|_p^{2p})$. Once $h$ is
fixed, the variance of $\Phi'$ is simply the sum of variances across
the $D_j$ for $j\notin h_1(L')$. Thus, it suffices for the $D_j$ to
use pairwise independent randomness. Furthermore, in repeating
$O((\epsilon/\varepsilon)^2)$ parallel repetitions of $\mathsf{LightEstimator}$, it
suffices that all the $D_j$ across all parallel repetitions use
pairwise independent randomness, and the hash function $h$ can remain
the same. Thus, as discussed in
\Remark{modified}, the coefficients of the degree-$O(1/\varepsilon^p)$
polynomials used in all $D_j$ combined can be generated by just two
coefficient vectors, and thus the update time of $\mathsf{LightEstimator}$ with
$O((\epsilon/\varepsilon)^2)$ parallel repetitions is just
$O((\epsilon/\varepsilon)^2 + \log^2(1/\varepsilon)\log\log(1/\varepsilon)) =
O(\log^2(1/\varepsilon)\log\log(1/\varepsilon))$.
Thus overall, we have the
following theorem.
\begin{theorem}
There exists an algorithm such that given $0<p<2$ and $0<\varepsilon<1/2$,
the algorithm outputs $(1\pm \varepsilon)\|x\|_p^p$ with probability $2/3$
using $O(\varepsilon^{-2}\log(nmM/\varepsilon))$ space. The update time is
$O(\log^2(1/\varepsilon)\log\log(1/\varepsilon))$. The reporting time is
$O(\varepsilon^{-2}\log^2(1/\varepsilon)\log\log(1/\varepsilon))$.
\end{theorem}
The space bound above can be taken to be $O(\varepsilon^{-2}\log(mM) +
\log\log n)$ by the comments in \Section{notation}.
\subsection{Proof of \Theorem{gme-good}}\SectionName{gme}
In this section we prove \Theorem{gme-good}. The data structure and
estimator we give is a
slightly modified version of the geometric mean estimator of Li
\cite{Li08b}. Our modification allows us to show that only bounded
independence is required amongst the
$p$-stable random variables in our data structure.
Before giving our $D_p$ and $\mathsf{Est}_p$, we first define the
{\em $p$-stable distribution}.
\begin{definition}[Zolotarev
\cite{Zolotarev86}]\DefinitionName{pstable}
For $0<p<2$, there exists a probability distribution $\mathcal{D}_p$
called the {\em $p$-stable distribution} satisfying the following
property. For any positive integer $n$ and vector $x\in\mathbb{R}^n$,
if
$Z_1,\ldots,Z_n \sim \mathcal{D}_p$ are independent, then
$\sum_{j=1}^n Z_j x_j \sim \|x\|_pZ$ for $Z\sim\mathcal{D}_p$.
\end{definition}
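For intuition, $p$-stable random variables can be generated by the
standard method of Chambers, Mallows, and Stuck; the following Python
sketch (ours, with fully independent samples) also demonstrates the
stability property just defined.

\begin{verbatim}
import math, random

def p_stable(p, rng):
    # Chambers-Mallows-Stuck sampler for the symmetric p-stable
    # distribution, 0 < p < 2.
    theta = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(p * theta) / math.cos(theta) ** (1.0 / p)) * \
           (math.cos((1.0 - p) * theta) / w) ** ((1.0 - p) / p)

rng = random.Random(0)
p, x = 0.5, [0.3, -1.2, 2.0]
# By stability, each sample below is distributed as ||x||_p * Z
# for Z ~ D_p.
samples = [sum(p_stable(p, rng) * xj for xj in x) for _ in range(10)]
\end{verbatim}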
Li's {\em geometric mean estimator} is as follows.
For some positive integer $t > 2$, select a matrix $A\in\mathbb{R}^{t\times
n}$ with independent $p$-stable entries, and maintain $y = Ax$ in
the stream. Given $y$, the estimate of $\|x\|_p^p$ is then
$C_{t,p}\cdot (\prod_{j=1}^t |y_j|^{p/t})$ for some constant
$C_{t,p}$. For \Theorem{gme-good}, we make the
following adjustments. First, we require $t > 4$. Next,
for any fixed row of $A$ we only require that the entries be
$\Omega(1/\varepsilon^p)$-wise independent, though the rows
themselves we keep independent. Furthermore, in parallel we run the
algorithm of \cite{KNW10b} with constant error parameter to obtain a value
$\tilde{F}_p$ in $[\|x\|_p^p/2, 3\|x\|_p^p/2]$.
The $D_p$ data structure of
\Theorem{gme-good} is then simply $y$, together with the state
maintained by the algorithm of \cite{KNW10b}.
The estimator $\mathsf{Est}_p$ is $\min\{C_{t,p}\cdot (\prod_{j=1}^t
|y_j|^{p/t}), \tilde{F}_p/\varepsilon\}$. To state the value $C_{t,p}$, we
use the following theorem.
\begin{theorem}[{\cite[Theorem 2.6.3]{Zolotarev86}}]\TheoremName{zolotarev}
For $Q\sim\mathcal{D}_p$ and $-1<\lambda < p$,
$$\mathbf{E}[|Q|^\lambda] =
\frac 2\pi\Gamma\left(1 -
\frac
{\lambda}{p}\right)\Gamma(\lambda)\sin\left(\frac{\pi}{2}\lambda\right)
.$$
\end{theorem}
\Theorem{zolotarev} implies that we should set
$$C_{t,p} = \left[\frac 2{\pi}\cdot \Gamma\left(1 - \frac
1t\right)\cdot \Gamma\left(\frac pt\right)\cdot
\sin\left(\frac{\pi p}{2t}\right)\right]^{-t} .$$
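Concretely, the estimator can be sketched in a few lines of Python
(fully independent entries, no truncation by $\tilde{F}_p/\varepsilon$, and no
derandomization; those refinements are exactly what the remainder of
this section supplies):

\begin{verbatim}
import math

def C_tp(t, p):
    # Normalizing constant from Theorem zolotarev with lambda = p/t.
    return ((2.0 / math.pi) * math.gamma(1.0 - 1.0 / t)
            * math.gamma(p / t)
            * math.sin(math.pi * p / (2.0 * t))) ** (-t)

def geometric_mean_estimate(y, p):
    # y = Ax for a t x n matrix A with p-stable entries; the estimate
    # of ||x||_p^p is C_{t,p} * prod_j |y_j|^{p/t}.
    t = len(y)
    return C_tp(t, p) * math.prod(abs(yj) ** (p / t) for yj in y)
\end{verbatim}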
To carry out our analysis, we will need the following theorem,
which gives a way of producing a smooth approximation of
the indicator function of an interval while maintaining good bounds on
high order derivatives.
\begin{theorem}[{\cite{DKN10}}]\LemmaName{ftmol}
For any interval $[a,b]\subseteq \mathbb{R}$ and integer $c>0$, there exists
a nonnegative function $\tilde{I}^c_{[a,b]}:\mathbb{R} \rightarrow\mathbb{R}$
satisfying the
following properties:
\begin{enumerate}
\item[i.] $\|(\tilde{I}^c_{[a,b]})^{(\ell)}\|_\infty \le (2c)^\ell$ for all
$\ell\ge 0$.
\item[ii.] For any $x\in\mathbb{R}$, $|\tilde{I}^c_{[a,b]}(x) - I_{[a,b]}(x)| \le
\min\{1, 5/(2c^2\cdot d(x,\{a,b\})^2)\}$.
\end{enumerate}
\end{theorem}
We also need the following lemma of \cite{KNW10b}, which argues that
smooth, bounded functions have their expectations approximately
preserved when
their input is a linear form evaluated at boundedly independent
$p$-stable random variables, as opposed to completely independent
$p$-stable random variables.
\begin{lemma}[{\cite[Lemma 2.2]{KNW10b}}]\LemmaName{hairy}
There exists an $\varepsilon_0>0$ such that the following holds. Let
$0<\varepsilon<\varepsilon_0$ and $0<p<2$ be given. Let $f:\mathbb{R}\rightarrow\mathbb{R}$ satisfy
$\|f^{(\ell)}\|_\infty = O(\alpha^\ell)$ for all $\ell\ge 0$, for some
$\alpha$ satisfying $\alpha^p \ge \log(1/\varepsilon)$. Let $k =
\alpha^p$. Let $x\in\mathbb{R}^n$ satisfy $\|x\|_p = O(1)$. Let
$R_1,\ldots,R_n$ be drawn from a $3Ck$-wise independent family of
$p$-stable random variables for $C>0$ a sufficiently large constant,
and let $Q$ be the product of $\|x\|_p$
and a $p$-stable random variable. Then $|\mathbf{E}[f(R)] - \mathbf{E}[f(Q)]| =
O(\varepsilon)$.
\end{lemma}
We now prove a tail bound for linear forms over $k$-wise independent
$p$-stable random variables. Note that for a random variable
$X$ whose moments are bounded, one has $\Pr[X-\mathbf{E}[X] > t]
\le \mathbf{E}[(X-\mathbf{E}[X])^k] / t^k$ by applying Markov's inequality to the
random variable $(X-\mathbf{E}[X])^k$ for some even integer $k\ge 2$.
Unfortunately, for $0<p<2$, it is known that even the second moment of
$\mathcal{D}_p$ is already infinite, so this method cannot be applied.
We instead prove our tail bound via FT-mollification of
$I_{[t,\infty)}$, since $\Pr[X \ge t] = \mathbf{E}[I_{[t,\infty)}(X)]$.
We will need to refer to the following lemma.
\begin{lemma}[Nolan {\cite[Theorem 1.12]{Nolan09}}]\LemmaName{nolan}
For fixed $0<p<2$, the probability density function $\varphi_p$ of the
$p$-stable
distribution satisfies $\varphi_p(x) = O(1/(1 + |x|^{p+1}))$ and is an even
function. The cumulative
distribution function
satisfies $\Phi_p(x) = O(|x|^{-p})$.
\end{lemma}
We now prove our tail bound.
\begin{lemma}\LemmaName{kwise-tailbound}
Suppose $x\in\mathbb{R}^n$, $\|x\|_p = 1$, $0<\varepsilon<1$ is
given, and
$R_1,\ldots,R_n$ are $k$-wise
independent $p$-stable random variables for $k\ge 2$.
Let $Q\sim\mathcal{D}_p$.
Then for all
$t\ge 0$, $R =
\sum_{i=1}^n R_ix_i$ satisfies
$$ |\Pr[|Q|\ge t] - \Pr[|R| \ge t]| = O(k^{-1/p}/(1+t^{p+1})
+ k^{-2/p}/(1+t^2) + 2^{-\Omega(k)}) .$$
\end{lemma}
\begin{proof}
We have $\Pr[|Z| \ge t] = \mathbf{E}[I_{[t,\infty)}(Z)] + \mathbf{E}[I_{(-\infty,
-t]}(Z)]$ for any random variable $Z$, and thus we will argue
$|\mathbf{E}[I_{[t,\infty)}(Q)] - \mathbf{E}[I_{[t,\infty)}(R)]| = O(k^{-1/p}/(1+t^{p+1})
+ k^{-2/p}/(1+t^2) + 2^{-\Omega(k)})$; a similar argument shows the same bound
for $|\mathbf{E}[I_{(-\infty,-t]}(Q)] - \mathbf{E}[I_{(-\infty,-t]}(R)]|$.
We argue the following chain of inequalities for $c=k^{1/p}/(3C)$, for
$C$ the constant in \Lemma{hairy}, and we define
$\gamma = k^{-1/p}/(1+t^{p+1}) + k^{-2/p}/(1+t^2)$:
$$ \mathbf{E}[I_{[t,\infty)}(Q)] \approx_\gamma
\mathbf{E}[\tilde{I}^c_{[t,\infty)}(Q)]
\approx_{2^{-c^p}} \mathbf{E}[\tilde{I}^c_{[t,\infty)}(R)]
\approx_{\gamma + 2^{-c^p}}
\mathbf{E}[I_{[t,\infty)}(R)] .$$
\noindent $\mathbf{\mathbf{E}[I_{[t,\infty)}(Q)] \approx_\gamma
\mathbf{E}[\tilde{I}^c_{[t,\infty)}(Q)]}$:
Assume $t \ge 1$. We have
\begin{align}
\nonumber |\mathbf{E}[I_{[t,\infty)}(Q)] - \mathbf{E}[\tilde{I}^c_{[t,\infty)}(Q)]| & \le
\mathbf{E}[|I_{[t,\infty)}(Q) - \tilde{I}^c_{[t,\infty)}(Q)|]\\
{} & \le \Pr[|Q-t| \le 1/c] + \left(\sum_{s=1}^{\log(ct) - 1} \Pr[|Q-t| \le
2^s/c]\cdot O(2^{-2s})\right)\EquationName{replace-this}\\
\nonumber{} & \hspace{.2in} + \Pr[|Q-t| > t/2]\cdot O(c^{-2}t^{-2}) \\
\nonumber {} & = O(1/(c\cdot t^{p+1})) + O(c^{-2}t^{-2})
\end{align}
since $\Pr[|Q - t| \le 2^s/c]$ is $O(2^s/(c\cdot t^{p+1}))$
as long as $2^s/c \le t/2$.
In the case $0 < t < 1$, we repeat the same argument as above but
replace \Equation{replace-this} with a summation from $s=1$ to
$\infty$, and also remove the additive $\Pr[|Q-t|>t/2]\cdot
O(c^{-2}t^{-2})$ term. Doing so gives an overall upper bound of
$O(1/c)$ in this case.
\vspace{.1in}
\noindent $\mathbf{\mathbf{E}[\tilde{I}^c_{[t,\infty)}(Q)] \approx_{2^{-c^p}}
\mathbf{E}[\tilde{I}^c_{[t,\infty)}(R)]}$: This follows from \Lemma{hairy}
with $\varepsilon = 2^{-c^p}$ and $\alpha = c$.
\vspace{.1in}
\noindent $\mathbf{\mathbf{E}[\tilde{I}^c_{[t,\infty)}(R)] \approx_{\gamma + 2^{-c^p}}
\mathbf{E}[I_{[t,\infty)}(R)]}$:
We would like to apply the same argument as when showing
$\mathbf{E}[\tilde{I}^c_{[t,\infty)}(Q)] \approx_\gamma
\mathbf{E}[I_{[t,\infty)}(Q)]$ above. The trouble is, we must bound
$\Pr[|R-t| > t/2]$ and $\Pr[|R-t| \le 2^s/c]$ given that the $R_i$ are
only $k$-wise independent. For the first probability, we above only
used that
$\Pr[|Q-t| > t/2] \le 1$, which still holds with $Q$ replaced by
$R$.
For the second probability, observe $\Pr[|R-t| \le 2^s/c] =
\mathbf{E}[I_{[t - 2^s/c, t + 2^s/c]}(R)]$. Define $\delta = 2^s/c + b/c$ for
a sufficiently large constant $b>0$ to be determined later. Then,
arguing as above, we have $\mathbf{E}[\tilde{I}^c_{[t-\delta, t+\delta]}(R)]
\approx_{2^{-c^p}} \mathbf{E}[\tilde{I}^c_{[t-\delta,t+\delta]}(Q)]
\approx_\gamma \mathbf{E}[I_{[t-\delta,t+\delta]}(Q)]$, and we also know
$\mathbf{E}[I_{[t-\delta,t+\delta]}(Q)] = O(\mathbf{E}[I_{[t - 2^s/c, t +
2^s/c]}(Q)]) = O(\Pr[|Q - t| \le 2^s/c]) = O(2^s/(c\cdot
t^{p+1}))$. Now,
for $x\notin [t-2^s/c, t+2^s/c]$, $I_{[t-2^s/c,t+2^s/c]}(x) = 0$ while
$I_{[t-\delta,t+\delta]}(x) =1$. For $x\in [t-2^s/c,t+2^s/c]$, the
distance from $x$ to $\{t-\delta,t+\delta\}$ is at least $b/c$,
implying $\tilde{I}^c_{[t-\delta,t+\delta]}(x) \ge 1/2$ for $b$
sufficiently large by item (ii) of \Lemma{ftmol}. Thus, $2\cdot
\tilde{I}^c_{[t-\delta,t+\delta]} \ge I_{[t-2^s/c,t+2^s/c]}$ on $\mathbb{R}$,
and thus in particular, $\mathbf{E}[I_{[t-2^s/c,t+2^s/c]}(R)] \le 2\cdot
\mathbf{E}[\tilde{I}^c_{[t-\delta,t+\delta]}(R)]$. Thus, in summary,
$\mathbf{E}[I_{[t-2^s/c,t+2^s/c]}(R)] = O(2^s/(c\cdot t^{p+1}) + \gamma +
2^{-c^p})$.
\end{proof}
We now prove the main lemma of this section, which implies
\Theorem{gme-good}.
\begin{lemma}
Let $x\in\mathbb{R}^n$ be such that $\|x\|_p = 1$, and suppose
$0<\varepsilon<1/2$. Let $0<p<2$, and let $t$ be any constant
greater than $4/p$.
Let $R_1,\ldots,R_n$ be
$k$-wise independent $p$-stable random variables for $k =
\Omega(1/\varepsilon^p)$, let $R = \sum_{i=1}^n R_ix_i$, and let $Q$ be a
$p$-stable random variable. Define $f(x) = \min\{|x|^{1/t}, T\}$,
for
$T = 1/\varepsilon$.
Then, $|\mathbf{E}[f(R)] - \mathbf{E}[|Q|^{1/t}]| = O(\varepsilon)$ and $\mathbf{E}[f^2(R)] =
O(\mathbf{E}[|Q|^{2/t}])$.
\end{lemma}
\begin{proof}
We first argue $|\mathbf{E}[f(R)] - \mathbf{E}[|Q|^{1/t}]| = O(\varepsilon)$.
We argue through the chain of inequalities
$$ \mathbf{E}[|Q|^{1/t}] \approx_{\varepsilon} \mathbf{E}[f(Q)] \approx_\varepsilon
\mathbf{E}[f(R)] .$$
\vspace{.1in}
\noindent $\mathbf{\mathbf{E}[|Q|^{1/t}] \approx_{\varepsilon}
\mathbf{E}[f(Q)]}$: We have
\begin{align*}
|\mathbf{E}[|Q|^{1/t}] - \mathbf{E}[f(Q)]| & = 2\int_{T^t}^\infty (x^{1/t} - T)\cdot
\varphi_p(x)dx \\
{} & = \int_{T^t}^\infty (x^{1/t} - T)\cdot O(1/x^{p+1})dx\\
{} & = O\left(T^{1 - tp}\cdot \left(\frac{t}{pt-1}-
\frac{1}{p}\right)\right)\\
{} & = O(1/(Tp))\\
{} & = O(\varepsilon)
\end{align*}
\noindent $\mathbf{\mathbf{E}[f(Q)] \approx_{\varepsilon}
\mathbf{E}[f(R)]}$:
Let $\varphi_p^+$ be the probability density function corresponding to the
distribution of $|Q|$, and let $\Phi_p^+$ be its cumulative distribution
function.
Then, by integration by parts and \Lemma{kwise-tailbound},
\begin{align}
\nonumber \mathbf{E}[f(Q)] & = \int_{0}^{T^t} x^{1/t}\varphi_p^+(x)dx + T\cdot\int_{T^t}^\infty
\varphi_p^+(x)dx\\
\nonumber {} & = -[x^{1/t} \cdot (1 - \Phi_p^+(x))]_0^{T^t} - T\cdot [(1 -
\Phi_p^+(x))]_{T^t}^\infty + \frac 1t\int_0^{T^t}\frac{1}{x^{1-1/t}}(1 -
\Phi_p^+(x))dx \\
\nonumber {} & = \frac 1t\int_0^{T^t}\frac{1}{x^{1-1/t}}\cdot\Pr[|Q|\ge x] dx\\
\nonumber {} & = \frac 1t\int_0^{T^t}\frac{1}{x^{1-1/t}}\cdot (\Pr[|R|\ge x] +
O(k^{-1/p}/(1+x^{p+1}) + k^{-2/p}/(1+x^2) + 2^{-\Omega(k)}))dx\\
\nonumber {} & = \mathbf{E}[f(R)] + \int_0^1x^{1/t - 1}\cdot O(k^{-1/p} +
k^{-2/p} + 2^{-\Omega(k)})dx\\
{} &\hspace{.2in} +
\int_1^{T^t}x^{1/t - 1}\cdot O(k^{-1/p}/x^{p+1} + k^{-2/p}/x^2
+ 2^{-\Omega(k)})dx \EquationName{geometric-error}\\
\nonumber {} & = \mathbf{E}[f(R)] + O(\varepsilon) + O\left(\frac{1}{k^{1/p}}\cdot
\left(\frac{1}{T^{tp + t-1}} - 1\right)\cdot \frac{1}{\frac 1t - p
-1}\right)\\
\nonumber {} & \hspace{.2in} + O\left(\frac{1}{k^{2/p}}\cdot
\left(\frac{1}{T^{2t-1}} - 1\right)\cdot \frac{1}{\frac 1t - 2}\right)
+ O(2^{-\Omega(k)}\cdot (T - 1))\\
\nonumber {} & = \mathbf{E}[f(R)] + O(\varepsilon)
\end{align}
We show $\mathbf{E}[f^2(R)] = O(\mathbf{E}[|Q|^{2/t}])$ similarly. Namely, we argue
through the chain of inequalities
$$ \mathbf{E}[|Q|^{2/t}] \approx_{\varepsilon} \mathbf{E}[f^2(Q)] \approx_\varepsilon
\mathbf{E}[f^2(R)] ,$$
which proves our claim since $\mathbf{E}[|Q|^{2/t}] = \Omega(1)$ by
\Theorem{zolotarev}.
\vspace{.1in}
\noindent $\mathbf{\mathbf{E}[|Q|^{2/t}] \approx_{\varepsilon}
\mathbf{E}[f^2(Q)]}$: We have
\begin{align*}
|\mathbf{E}[|Q|^{2/t}] - \mathbf{E}[f^2(Q)]| & = 2\int_{T^t}^\infty (x^{2/t} - T^2)\cdot
\varphi_p(x)dx \\
{} & = \int_{T^t}^\infty (x^{2/t} - T^2)\cdot O(1/x^{p+1})dx\\
{} & = O\left(T^{2 - tp}\cdot \left(\frac{t}{pt-2} -
\frac{1}{p}\right)\right)\\
{} & = O(1/(Tp))\\
{} & = O(\varepsilon)
\end{align*}
\vspace{.1in}
\noindent $\mathbf{\mathbf{E}[f^2(Q)] \approx_{\varepsilon}
\mathbf{E}[f^2(R)]}$: This is argued nearly identically as in our proof that
$\mathbf{E}[f(Q)] \approx_\varepsilon \mathbf{E}[f(R)]$ above. The difference is that our
error term now corresponding to \Equation{geometric-error} is
\begin{align*}
\int_0^1x^{2/t - 1} & \cdot O(k^{-1/p} +
k^{-2/p} + 2^{-\Omega(k)})dx +
\int_1^{T^t}x^{2/t - 1}\cdot O(k^{-1/p}/x^{p+1} + k^{-2/p}/x^2
+ 2^{-\Omega(k)})dx\\
{} & = O(\varepsilon) + O\left(\frac{1}{k^{1/p}}\cdot
\left(\frac{1}{T^{tp + t-2}} - 1\right)\cdot \frac{1}{\frac 2t - p
-1}\right)\\
{} & \hspace{.2in} + O\left(\frac{1}{k^{2/p}}\cdot
\left(\frac{1}{T^{2t-2}} - 1\right)\cdot \frac{1}{\frac 2t - 2}\right)
+ O(2^{-\Omega(k)}\cdot (T^2 - 1))\\
{} & = O(\varepsilon)
\end{align*}
\end{proof}
\section{Introduction}\SectionName{intro}
The problem of estimating frequency moments of a vector being updated in
a data stream was first studied by
Alon, Matias, and Szegedy \cite{AMS99} and has since received much
attention
\cite{BJKS04,bgks06,Indyk06,IW05,KNW10b,Li08b,NW10,W04,WoodruffThesis}.
Estimation of the second moment has applications to estimating join
and self-join sizes \cite{AlonGMS02} and to network anomaly detection
\cite{KSZC03,ThorupZhang04}. First moment estimation is useful in
mining network traffic data \cite{CMR05}, comparing empirical
probability distributions, and several other applications (see
\cite{NW10} and the references therein). Estimating fractional moments
between the $0$th and $2$nd has applications to entropy estimation for
the purpose of network anomaly detection \cite{HNO08, ZhaoLOSWX07},
mining tabular data \cite{CormodeIKM02}, image decomposition
\cite{GeigerLD99}, and weighted sampling in
turnstile streams \cite{MW10}. It was also observed
experimentally that the use of fractional moments in
$(0,1)$ can improve the effectiveness of standard clustering
algorithms \cite{ahk01}.
Formally in this problem, we are given up front a real number $p>0$.
There is also an underlying $n$-dimensional
vector $x$ which starts as $\vec{0}$. What follows is a sequence of
$m$ updates of the form $(i_1,v_1),\ldots,(i_m,v_m)\in [n]\times
\{-M,\ldots,M\}$ for some $M>0$. An update $(i,v)$ causes the change
$x_i\leftarrow x_i + v$. We would like to compute $F_p \mathbin{\stackrel{\rm def}{=}}
\|x\|_p^p \mathbin{\stackrel{\rm def}{=}} \sum_{i=1}^n |x_i|^p$, also called the {\em $p$th
frequency moment} of $x$. In many applications, it is
required that the algorithm only use very limited space while processing the
stream, e.g., in networking applications where $x$ may be indexed by
source-destination IP pairs and thus a router cannot afford to store
the entire vector in memory, or in database applications where one
wants a succinct ``sketch'' of some dataset, which can be
compared with short sketches of other datasets for fast computation of
various (dis)similarity measures.
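As a point of reference, the quantity being approximated is trivial to
compute offline; the Python sketch below (ours, purely illustrative) is
a correct but memory-hungry baseline, and the whole difficulty is
achieving the same with far less memory than storing $x$.

\begin{verbatim}
def exact_fp(n, updates, p):
    # Maintain x explicitly and return F_p = sum_i |x_i|^p.
    x = [0] * n
    for i, v in updates:
        x[i] += v
    return sum(abs(xi) ** p for xi in x)

# Example: x = (4, 0, -5, 0), so F_1.5 = 4^1.5 + 5^1.5.
print(exact_fp(4, [(0, 3), (2, -5), (0, 1)], 1.5))
\end{verbatim}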
Unfortunately, it is known
that linear space ($\Omega(\min\{n,m\})$ bits) is required unless one
allows for
(a) {\em approximation}, so that we are only guaranteed to output a
value in $[(1-\varepsilon)F_p, (1+\varepsilon)F_p]$ for some $0<\varepsilon<1/2$, and (b)
{\em randomization}, so that the output is only guaranteed to be
correct
with some probability bounded away from $1$, over the randomness used
by the algorithm \cite{AMS99}. Furthermore, it is known that
polynomial space is
required for $p>2$ \cite{BJKS04,cks03,Gronemeier09,Jayram09,SaksS02},
while it is known that the
space complexity for $0<p\le 2$ is only $\Theta(\varepsilon^{-2}\log(mM) +
\log\log(n))$ bits to achieve success probability
$2/3$ \cite{AMS99, KNW10b}, which can be amplified by outputting the
median estimate of independent
repetitions. In this work,
we focus on this ``feasible'' regime for $p$, $0 < p \le 2$, where
logarithmic space is achievable.
While
there has been much previous work on minimizing the space consumption
in streaming algorithms, only recently have researchers begun to work
toward minimizing {\em update time} \cite[Question 1]{IITK}, i.e.,
the
time taken to process a new update in the stream. We argue however
that update time itself is an important parameter to optimize, and in
some scenarios it may even be desirable to sacrifice space for speed.
For example, in
network traffic monitoring applications each packet is an update, and
thus it is important that a streaming algorithm processing the packet
stream be able to operate at network speeds (see for example the
applications in \cite{KSZC03,ThorupZhang04}).
Note that if an algorithm has update time say,
$\Omega(1/\varepsilon^2)$,
then achieving a small error parameter such as $\varepsilon=.01$ could be
intractable since this time is multiplied by the length of the stream. This
is true even if the space required of the algorithm is small enough to fit in
memory.
For
$p=2$, it is known that optimal space and $O(1)$ update time are
simultaneously achievable \cite{CCF02, ThorupZhang04}, improving
upon the original $F_2$ algorithm of Alon, Matias, and Szegedy
\cite{AMS99}. For $p=1$
it is known that near-optimal, but not quite optimal, space and
$O(\log(n/\varepsilon))$ update time
are achievable \cite{NW10}. Meanwhile, optimal (or even near-optimal)
space for other
$p\in (0,2]$ is only known to be achievable with $\mathop{{\rm poly}}(1/\varepsilon)$
update time \cite{KNW10b}.
\begin{center}
\begin{figure}
\begin{center}
\begin{small}
\begin{tabular}{|lllll|}
\hline
Paper & Space & Update Time & Model & Which $p$
\\ \hline \cite{AMS99} & $O(\varepsilon^{-2}\log(mM))$ &
$O(\varepsilon^{-2})$ & unrestricted updates & $p = 2$
\\ \hline \cite{CCF02,ThorupZhang04} & $O(\varepsilon^{-2}\log(mM))$ &
$O(1)$ & unrestricted updates & $p = 2$
\\ \hline \cite{FKSV02} & $O(\varepsilon^{-2}\log(mM))$ & $O(\varepsilon^{-2})$ & $\le 2$ updates per coordinate & $p = 1$
\\ \hline \cite{Indyk06,Li08b} & $O(\varepsilon^{-2} \log (n) \log (mM))$ & $O(\varepsilon^{-2})$ & unrestricted updates & $p \in (0,2)$
\\ \hline \cite{KNW10b} & $O(\varepsilon^{-2} \log (mM))$ & $\tilde{O}(\varepsilon^{-2})$& unrestricted updates & $p \in (0,2)$
\\ \hline \cite{nw09} & $O(\varepsilon^{-2}\log(mM)\log(1/\varepsilon))$ & $O(\log^2(mM))$ & $\le 2$ updates per coordinate & $p = 1$
\\ \hline \cite{NW10} & $O(\varepsilon^{-2}\log(mM)\log(n))$ & $O(\log(n/\varepsilon))$ & unrestricted updates & $p = 1$
\\ \hline this work & $O(\varepsilon^{-2}\log(mM))$ & $\tilde{O}(\log^2(1/\varepsilon))$ &
unrestricted updates & $p \in (0,2)$\\
\hline
\end{tabular}
\end{small}
\caption{Comparison of our contribution to previous works
on $F_p$-estimation in data streams. All space bounds hide an
additive $O(\log\log n)$ term.}\FigureName{prev-work}
\end{center}
\end{figure}
\end{center}
\noindent \textbf{Our Contribution: }
For all $0< p < 2$ and $0<\varepsilon<1/2$ we give an algorithm for
$(1\pm\varepsilon)$-approximating $F_p$ with success probability at least
$2/3$ which uses an optimal
$O(\varepsilon^{-2}\log(mM) + \log\log n)$
bits of space
with $O(\log^2(1/\varepsilon)\log\log(1/\varepsilon))$ update time.\footnote{Throughout this
document we
say $g = \tilde{O}(f)$ if $g = O(f\cdot
\mathrm{polylog}(f))$. Similarly, $g = \tilde{\Omega}(f)$ if $g =
\Omega(f / \mathrm{polylog}(f))$.} This is a nearly
exponential
improvement in the time complexity of the previous
space-optimal algorithm for every such $p$.
\vspace{.1in}
\subsection{Previous Work}\SectionName{related}
The complexity of streaming algorithms for moment estimation has a long
history; see \Figure{prev-work} for a comparison of our result
to that of previous work.
Alon, Matias, and Szegedy were the first to study moment estimation in
data streams \cite{AMS99} and gave a space-optimal algorithm for
$p=2$. The update
time was later brought down to an optimal $O(1)$ implicitly in \cite{CCF02} and
explicitly in \cite{ThorupZhang04}. The work of \cite{FKSV02} gave a
space-optimal algorithm for $p=1$, but under the restriction that each
coordinate is updated at most twice, once positively and once
negatively. Indyk \cite{Indyk06} later removed this restriction, and
also gave an algorithm handling all $0<p<2$, but
at the expense of increasing the space by a $\log n$ factor. Li later
\cite{Li08b} provided alternative estimators for all $0 < p < 2$, based on
Indyk's sketches. The
extra $\log n$ factor in the space of these algorithms was later removed
in \cite{KNW10b}, yielding optimal space. The
algorithms of \cite{FKSV02,Indyk06, KNW10b, Li08b} all required $\mathop{{\rm poly}}(1/\varepsilon)$
update time.
Nelson and Woodruff \cite{nw09} gave an algorithm for $p=1$ in
the restricted setting where each coordinate is updated at most twice,
as in \cite{FKSV02}, with space suboptimal by a $\log(1/\varepsilon)$ factor,
and with update time $\log^2(mM)$.
They also later gave an algorithm for
$p=1$ with unrestricted updates which was suboptimal by a $\log n$
factor, but had update time
only $O(\log (n/\varepsilon))$ \cite{NW10}.
On the lower bound front, a lower bound of
$\Omega(\min\{n,m,\varepsilon^{-2}\log(\varepsilon^2mM)\} +
\log\log (nmM))$ was shown in \cite{KNW10b}, together with an upper
bound of $O(\varepsilon^{-2}\log(mM) + \log\log n)$ bits. For nearly the full
range
of parameters these are tight, since if $\varepsilon \le 1/\sqrt{m}$ we can
store the entire stream in memory in $O(m\log(nM)) =
O(\varepsilon^{-2}\log(nM))$ bits of space (and we can ensure $n = O(m^2)$
via FKS hashing \cite{FredmanKS84} with just an additive $O(\log\log
n)$ bits increase in space), and if $\varepsilon \le 1/\sqrt{n}$ we
can store the entire vector in memory in $O(n\log(mM)) =
O(\varepsilon^{-2}\log(mM))$ bits. Thus, a gap exists only when $\varepsilon$ is
very near $1/\sqrt{\min\{n,m\}}$.
This lower bound followed many previous lower bounds for this
problem,
given in \cite{AMS99,BarYossefThesis,JayramKS08,W04,WoodruffThesis}.
For the case $p>2$ it was shown that $\Omega(n^{1-2/p})$ space is
required \cite{BJKS04,cks03,Gronemeier09,Jayram09,SaksS02}, and this was
shown to be tight up to $\mathop{{\rm poly}}(\log(nmM)/\varepsilon)$ factors
\cite{bgks06,IW05}.
\subsection{Overview of our approach}
At the top level, our algorithm follows the general approach set
forth by \cite{NW10} for
$F_1$-estimation. In that work, the coordinates $i \in
\{1,\ldots,n\}$ were split up into {\em heavy
hitters}, and the remaining {\em light} coordinates.
A {\em $\phi$-heavy
hitter} with respect to $F_p$ is a coordinate $i$ such that $|x_i|^p
\ge \phi\|x\|_p^p$. A list $L$ of $\varepsilon^2$-heavy hitters with respect
to $F_1$ were found by running the $\mathsf{CountMin}$ sketch of \cite{CM05}.
To estimate the contribution of the light elements to $F_1$,
\cite{NW10} used $R = \Theta(1/\varepsilon^2)$
independent Cauchy sketches $D_1,\ldots,D_R$ (actually,
$D_j$ was a tuple of $3$ independent Cauchy sketches).
A {\em Cauchy sketch} of a vector $x$, introduced by Indyk
\cite{Indyk06}, is the dot product of $x$ with a random vector $z$
with
independent entries distributed according to the Cauchy distribution.
This distribution has the property that $\inprod{z,x}$ is itself a
Cauchy random variable, scaled by $\|x\|_1$.
Upon receiving an update to $x_i$
in the stream, the update was fed to $D_{h(i)}$ for some hash function
$h:[n]\rightarrow [R]$. At the end of the stream, the estimate of the
contribution to $F_1$ from light elements was $(R / (R - |h(L)|))\cdot
\sum_{j\notin h(L)} \mathsf{EstLi}_1(D_j)$, where $\mathsf{EstLi}_p$
is Li's geometric mean estimator for $F_p$ \cite{Li08b}.
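A toy Python rendition of this estimator for $p=1$ follows (ours; it
uses fully random Cauchy entries and the median of the three sketches in
each bucket as a crude stand-in for $\mathsf{EstLi}_1$, and is meant only
to show the bucketing structure):

\begin{verbatim}
import math, random

def light_estimate_f1(x, L, R, seed=0):
    rng = random.Random(seed)
    h = [rng.randrange(R) for _ in range(len(x))]
    sketches = [[0.0] * 3 for _ in range(R)]  # 3 Cauchy counters/bucket
    for i, xi in enumerate(x):
        for c in range(3):
            # tan(pi(U - 1/2)) is a standard Cauchy random variable.
            sketches[h[i]][c] += xi * math.tan(math.pi *
                                               (rng.random() - 0.5))
    hL = {h[i] for i in L}            # buckets hit by heavy hitters
    I = [j for j in range(R) if j not in hL]
    est = sum(sorted(abs(v) for v in sketches[j])[1] for j in I)
    return (R / max(1, len(I))) * est  # rescale for dropped buckets
\end{verbatim}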
The analysis of \cite{NW10} only used that Li's
geometric mean estimator is unbiased and has a good variance bound.
Our algorithm $\mathsf{LightEstimator}$
for estimating the contribution to $F_p$ from light coordinates
for $p\neq 1$ follows the same approach. Our main contribution here is
to show that a variant of Li's geometric mean estimator has bounded
variance and is approximately unbiased (to within relative error
$\varepsilon$) even when the associated
$p$-stable random variables are only $k$-wise independent for $k =
\Omega(1/\varepsilon^p)$. This variant allows us to avoid Nisan's pseudorandom
generator \cite{Nisan92} and thus achieve optimal space. While
the work of \cite{KNW10b} also provided an estimator avoiding
Nisan's pseudorandom generator, their estimator is not known to be
approximately unbiased, which makes it less useful in applications
involving the average of many such estimators. We evaluate
the necessary $k$-wise independent hash function quickly by a combination of
buffering and fast multipoint evaluation of a collection of
pairwise independent polynomials. Our proof that bounded independence
suffices uses the FT-mollification approach introduced in
\cite{KNW10b} and refined in \cite{DKN10}, which is a method for
showing that the expectation of some function is approximately
preserved by bounded independence, via a smoothing operation
(FT-mollification) and
Taylor's theorem. One
novelty is that while \cite{DKN10,KNW10b} only ever dealt with
FT-mollifying indicator functions of regions in Euclidean space, here
we must FT-mollify functions of the form $f(x) =
|x|^{1/t}$. To
achieve our results, we express $\mathbf{E}[f(x)] = \int_0^\infty
f(x)\varphi_p(x)dx$ as $\int_{0}^{\infty} f'(x)(1 - \Phi_p(x))dx$
via integration by parts, where $\varphi_p$ is the density function of
the absolute value of the $p$-stable distribution, and $\Phi_p$ is the
corresponding cumulative distribution function. We then note $1 -
\Phi_p(x)
= \Pr[|X| \ge x] = \mathbf{E}[I_{[x,\infty)\cup (-\infty,-x]}(X)]$ for $X$
$p$-stable, where $I_S$ is the indicator function of the set $S$.
We then FT-mollify
$I_{[x,\infty)\cup (-\infty,-x]}$, which {\em is} the indicator
function of some set, to
write $\mathbf{E}[f(x)]$ as a
weighted integral of indicator functions, from which point we can apply
the methods of \cite{DKN10,KNW10b}.
In order to estimate the contribution to $F_p$ from coordinates in
$L$, we
develop a novel data structure we refer to as $\mathsf{HighEnd}$.
Suppose $L$ contains all the $\alpha$-heavy hitters, and every index
in $L$ is an $(\alpha/2)$-heavy hitter. We would like to compute
$\|x_L\|_p^p \pm O(\varepsilon)\cdot \|x\|_p^p$, where $\alpha =
\Omega(\varepsilon^2)$.
We maintain
a matrix of counters $D_{j,k}$ for $(j,k)\in [t] \times [s]$ for $t =
O(\log(1/\varepsilon))$ and $s = O(1/\alpha)$. For each $j\in[t]$ we have
hash functions $h^j:[n]\rightarrow [s]$ and $g^j:[n]\rightarrow
[r]$ for $r = O(\log(1/\varepsilon))$. The counter $D_{j,k}$ then stores
$\sum_{h^j(v) = k} e^{2\pi i g^j(v)/r} x_v$ for $i =
\sqrt{-1}$. That is, our data structure is similar to the
$\mathsf{CountSketch}$ data structure of Charikar, Chen, and Farach-Colton
\cite{CCF02}, but rather than taking the dot product with a random
sign vector in each counter, we take the dot product with a vector
whose entries are random
complex roots of unity. At the end of the stream, our estimate of
the $F_p$-contribution from heavy hitters is
$$
\mathbf{Re} \left [\sum_{w\in L}\left(\frac 3t\sum_{k=1}^{t/3}
e^{-2 \pi i
g^{j(w,k)}(w) /r}
\cdot \mathrm{sign}(x_w) \cdot D_{j(w,k), h^{j(w,k)}(w)} \right )^p \right ]
.$$
The choice to use complex roots of unity is to ensure that our estimator
is approximately unbiased, stemming from the fact that the real
part of large powers of roots of unity is still $0$ in expectation.
Here ${\bf Re}[z]$ denotes the real part of $z$, and $j(w,k)$ denotes
the $k$th smallest value $b\in [t]$ such that $h^b$ isolates $w$ from
the other $w'\in L$ (if fewer than $t/3$ such $b$ exist, we fail). The
subroutine $\mathsf{Filter}$ for estimating the heavy hitter contribution for
$p=1$ in
\cite{NW10} did not use complex random variables, but rather just used
the dot product with a random sign vector as in
$\mathsf{CountSketch}$. Furthermore, it required a $O(\log(1/\varepsilon))$ factor
more space even for $p=1$, since it did not average estimates across
$\Omega(t)$ levels to reduce variance.
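In code, the update and per-coordinate estimate look as follows (an
informal Python sketch of ours; the hash functions are abstracted as
callables, and complex exponentiation uses the principal branch of the
argument, matching the use of $\arg$ above):

\begin{verbatim}
import cmath, math

def highend_update(D, hs, gs, r, i, v):
    # Counter D[j][h^j(i)] accumulates e^{2 pi i g^j(i)/r} * v per row j.
    for j in range(len(D)):
        D[j][hs[j](i)] += cmath.exp(2j * math.pi * gs[j](i) / r) * v

def highend_point(D, hs, gs, r, w, sign_w, rows, p):
    # rows = the t/3 rows j(w, k) in which w is isolated from L;
    # average the de-rotated counters, then take Re[z^p].
    z = sum(cmath.exp(-2j * math.pi * gs[j](w) / r) * sign_w
            * D[j][hs[j](w)] for j in rows) / len(rows)
    return (z ** p).real
\end{verbatim}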
For related problems, e.g., estimating $F_p$ for $p > 2$, using
complex roots of unity leads
to sub-optimal bounds \cite{g04}. Moreover, it seems that ``similar''
algorithms using sign variables in place of roots of unity
do not work, as they have a constant factor bias in their expectation
which it is unclear how to remove.
Our initial intuition was that an algorithm
using $p$-stable random variables would be necessary to estimate the
contribution to $F_p$ from the heavy hitters. However, such approaches
we explored suffered from too large a variance.
In parallel we must run an algorithm we develop to {\em find} the
heavy hitters. Unfortunately, this algorithm, as well as $\mathsf{HighEnd}$,
use suboptimal space.
To overcome this, we actually use
a list of $\epsilon^2$-heavy hitters for $\epsilon =
\varepsilon\cdot \log(1/\varepsilon)$. This then improves the space, at the
expense of increasing the variance of $\mathsf{LightEstimator}$.
We then run $O((\epsilon/\varepsilon)^2)$ pairwise independent instantiations of
$\mathsf{LightEstimator}$ in
parallel and take the average estimate, to bring the variance down.
This increases some part of the update time of $\mathsf{LightEstimator}$ by a
$\log^2(1/\varepsilon)$ factor, but this term turns out to anyway be
dominated by the time to evaluate various hash functions.
Though, even in the extreme case of balancing with $\epsilon =
1$, our algorithm for finding the heavy hitters requires
$\Omega(\log(n)\log(mM))$ space,
which is suboptimal. We remedy this by performing a dimensionality
reduction down to dimension $\mathop{{\rm poly}}(1/\varepsilon)$ via hashing and dot
products with random
sign vectors. We then apply $\mathsf{HighEnd}$ to estimate the contribution
from heavy hitters in this new vector, and we show that
with high probability the correctness of our overall algorithm is
still maintained.
\subsection{Notation}\SectionName{notation}
For a positive integer $r$, we use $[r]$ to denote the set
$\{1,\ldots,r\}$. All logarithms are base-$2$ unless otherwise
noted. For a complex number $z$, $\mathbf{Re}[z]$ is the real
part of $z$, $\mathbf{Im}[z]$ is the imaginary part of $z$, $\bar{z}$ is the
complex conjugate of $z$, and $|z| \mathbin{\stackrel{\rm def}{=}} \sqrt{\bar{z}z}$.
At times we consider random variables $X$ taking on {\em complex}
values. For such random variables, we use $\mathbf{Var}[X]$ to denote
$\mathbf{E}[|X - \mathbf{E}[X]|^2]$. Note that the usual statement of Chebyshev's
inequality
still holds under this definition.
For $x\in
\mathbb{R}^n$ and $S\subseteq [n]$, $x_S$ denotes the $n$-dimensional vector
whose $i$th coordinate is $x_i$ for $i\in S$ and $0$
otherwise. For a probabilistic
event $\mathcal{E}$, we use $\mathbf{1}_{\mathcal{E}}$ to denote the
indicator random variable for $\mathcal{E}$. We sometimes refer to a
constant as {\em universal} if it does not depend on other parameters,
such as $n,m,\varepsilon$, etc.
All space bounds are measured in bits. When measuring time
complexity, we assume a word RAM with machine word size
$\Omega(\log(nmM))$ so that standard arithmetic and bitwise operations
can be performed on words in constant time.
We use {\em reporting time} to refer to the time taken for a
streaming algorithm to answer some query (e.g., ``output an estimate
of $F_p$'').
Also, we can assume $n =
O(m^2)$ by FKS hashing \cite{FredmanKS84} with an additive
$O(\log\log n)$ term in our final space bound; see
Section A.1.1 of \cite{KNW10b} for details. Thus, henceforth any
terms involving $n$ appearing in space and time bounds may be
assumed to be at most $m^2$.
We also often assume that $n$, $m$, $M$, $\varepsilon$, and $\delta$ are
powers of $2$ (or sometimes $4$), and that $1/\sqrt{n} < \varepsilon <
\varepsilon_0$ for some
universal constant $\varepsilon_0>0$. These assumptions are
without loss of generality. We can assume $\varepsilon > 1/\sqrt{n}$ since
otherwise we could store $x$ explicitly in memory using
$O(n\log(mM)) = O(\varepsilon^{-2}\log(mM))$ bits with constant update and
reporting times. Finally, we assume
$\|x\|_p^p \ge 1$. This is because, since $x$ has integer entries,
either $\|x\|_p^p \ge 1$, or it is $0$. The case that it is $0$ only
occurs when $x$ is the $0$ vector, which can be detected in
$O(\log(nmM))$ space by the AMS sketch \cite{AMS99}.
\subsection{Organization}\SectionName{organization}
An ``$F_p$ $\phi$-heavy hitter'' is an index $j$ such that $|x_j|^p \ge
\phi\|x\|_p^p$. Sometimes we drop the ``$F_p$'' if $p$ is understood
from context.
In \Section{heavy-contrib}, we give an efficient subroutine $\mathsf{HighEnd}$
for estimating $\|x_L\|_p^p$ to within additive error $\varepsilon\|x\|_p^p$,
where $L$ is a list containing all $\alpha$-heavy hitters for some
$\alpha>0$, with the promise that every
$i\in L$ is an $(\alpha/2)$-heavy hitter.
In \Section{light-contrib} we give a subroutine $\mathsf{LightEstimator}$ for
estimating $\|x_{[n]\backslash L}\|_p^p$. Finally,
in \Section{final-alg}, we put everything together in a way that
achieves optimal space and fast update time. We discuss how to compute
$L$ in \Section{fp-hh}.
\section{Estimating the contribution from light elements}\SectionName{light-contrib}
In this section, we show how to estimate the contribution to $F_p$
from coordinates of $x$ which are not heavy hitters.
More precisely, given a list $L\subseteq[n]$ such that $|L| \le
2/\varepsilon^2$ and $|x_i|^p \le \varepsilon^2 \|x\|_p^p$ for all $i\notin L$, we
describe a subroutine $\mathsf{LightEstimator}$ that outputs a value that is
$\|x_{[n]\backslash L}\|_p^p \pm O(\varepsilon)\cdot \|x\|_p^p$ with
probability at least $7/8$. This estimator is essentially the same as
that given for $p=1$ in \cite{NW10}, though in this work we show that
(some variant of) the geometric mean estimator of \cite{Li08b}
requires only bounded independence, in order that we may obtain
optimal space.
Our description follows. We first need the following theorem, which
comes from a derandomized variant of the geometric mean estimator.
Our proof is in \Section{gme}.
\begin{theorem}\TheoremName{gme-good}
For any $0 < p < 2$, there is a randomized data structure $D_p$, and a
deterministic
algorithm $\mathsf{Est}_p$ mapping the state space of the data structure to
reals, such that
\begin{enumerate}
\item $\mathbf{E}[\mathsf{Est}_p(D_p(x))] = (1\pm \varepsilon)\|x\|_p^p$
\item $\mathbf{E}[\mathsf{Est}_p(D_p(x))^2] \le C_p\cdot \|x\|_p^{2p}$
\end{enumerate}
for some constant $C_p>0$ depending only on $p$, and where
the expectation is taken over the randomness used by $D_p$. Aside
from storing a length-$O(\varepsilon^{-p}\log(nmM))$
random string, the space complexity is $O(\log(nmM))$. The update time
is the time to evaluate a $\Theta(1/\varepsilon^p)$-wise independent hash
function over a field of size $\mathop{{\rm poly}}(nmM)$, and the reporting time is
$O(1)$.
\end{theorem}
We also need the following algorithm for fast multipoint evaluation of
polynomials.
\begin{theorem}[{\cite[Ch. 10]{GG99}}]\TheoremName{fastmult}
Let $\mathbf{R}$ be a ring, and let $q\in \mathbf{R}[x]$ be a
degree-$d$ polynomial. Then, given distinct
$x_1,\ldots,x_d\in\mathbf{R}$, all the values $q(x_1),\ldots,q(x_d)$
can be computed using $O(d\log^2d\log\log d)$ operations
over $\mathbf{R}$.
\end{theorem}
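For concreteness, the classical subproduct-tree algorithm behind
\Theorem{fastmult} can be sketched as follows (a Python sketch of ours,
with naive quadratic polynomial arithmetic for readability; the stated
bound requires FFT-based multiplication and division):

\begin{verbatim}
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_mod(a, m):
    # Remainder of a modulo m (coefficients low-degree first).
    a = a[:]
    while len(a) >= len(m):
        c = a[-1] / m[-1]
        for i in range(len(m)):
            a[len(a) - len(m) + i] -= c * m[i]
        a.pop()
    return a

def multipoint_eval(q, xs):
    # Evaluate q at all points xs: split xs in half, reduce q modulo
    # the subproducts prod (x - x_i), and recurse.
    if len(xs) == 1:
        return [sum(c * xs[0] ** k for k, c in enumerate(q))]
    mid = len(xs) // 2
    ml, mr = [1], [1]
    for xx in xs[:mid]: ml = poly_mul(ml, [-xx, 1])
    for xx in xs[mid:]: mr = poly_mul(mr, [-xx, 1])
    return (multipoint_eval(poly_mod(q, ml), xs[:mid]) +
            multipoint_eval(poly_mod(q, mr), xs[mid:]))
\end{verbatim}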
The guarantees of the final $\mathsf{LightEstimator}$ are then given in
\Theorem{modified}, which is a
modified form of an algorithm designed in \cite{NW10} for the case
$p=1$. A description of the modifications of the algorithm in
\cite{NW10} needed to work for $p\neq 2$ is given in \Remark{modified},
which in part uses the following uniform hash family of Pagh and Pagh
\cite{PP08}.
\begin{theorem}[Pagh and Pagh {\cite[Theorem 1.1]{PP08}}]\TheoremName{pagh}
Let $S \subseteq U = [u]$ be a set of $z>1$ elements, and let $V =
[v]$, with $1<v\le u$. Suppose the machine word size is
$\Omega(\log(u))$.
For any constant $c>0$ there is a word RAM
algorithm that, using
time $\log(z)\log^{O(1)}(v)$ and $O(\log(z) + \log\log(u))$ bits of
space, selects a family $\mathcal{H}$ of functions from $U$ to $V$
(independent of $S$) such that:
\begin{enumerate}
\item With probability $1 - O(1/z^c)$, $\mathcal{H}$ is $z$-wise independent
when restricted to $S$.
\item Any $h\in \mathcal{H}$ can be represented by a RAM data structure
using $O(z\log(v))$ bits of space, and $h$ can be evaluated in
constant time after an initialization step taking $O(z)$ time.
\end{enumerate}
\end{theorem}
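A simple stand-in with the same independence guarantee (though with
$O(z)$ rather than constant evaluation time) is a random polynomial of
degree $z-1$ over a prime field, e.g.\ the following Python sketch of
ours:

\begin{verbatim}
import random

def make_kwise_hash(z, v, prime=2**61 - 1, seed=0):
    # A random degree-(z - 1) polynomial over F_prime is z-wise
    # independent on its whole domain; reducing mod v introduces a
    # slight non-uniformity when v does not divide prime.
    rng = random.Random(seed)
    coeffs = [rng.randrange(prime) for _ in range(z)]
    def h(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule, O(z) time
            acc = (acc * x + c) % prime
        return acc % v
    return h
\end{verbatim}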
\begin{theorem}[{\cite{NW10}}]\TheoremName{modified}
Suppose we are given $0<\varepsilon<1$, and given a list $L\subseteq[n]$
at the end of the data stream such that $|L| \le
2/\varepsilon^2$ and $|x_i|^p < \varepsilon^2 \|x\|_p^p$ for all $i\notin L$.
Then, given access to a randomized data structure satisfying
properties (1) and (2) of \Theorem{gme-good},
there is an algorithm $\mathsf{LightEstimator}$ satisfying the
following.
The randomness used by $\mathsf{LightEstimator}$ can be broken up into a certain
random hash function $h$, and another random string $s$.
$\mathsf{LightEstimator}$ outputs a value $\Phi'$ satisfying
$\mathbf{E}_{h,s}[\Phi'] = (1\pm O(\varepsilon))\|x_{[n]\backslash
L}\|_p^p$, and $\mathbf{E}_h[\mathbf{Var}_s[\Phi']] = O(\varepsilon^2 \|x\|_p^{2p})$.
The space usage is $O(\varepsilon^{-2}\log(nmM))$,
the update time is $O(\log^2(1/\varepsilon)\log\log(1/\varepsilon))$, and the
reporting time is
$O(1/\varepsilon^2)$.
\end{theorem}
\begin{remark}\RemarkName{modified}
\textup{
The claim of \Theorem{modified} is not stated in the same form in
\cite{NW10}, and thus we provide some explanation.
The work of
\cite{NW10} only focused on the case $p=1$.
There, in Section 3.2, $\mathsf{LightEstimator}$ was defined\footnote{The estimator
given there was never actually named, so we name it $\mathsf{LightEstimator}$
here.} by
creating $R = 4/\varepsilon^2$ independent instantiations of $D_1$,
which we label $D_1^1,\ldots,D_1^R$ ($R$ chosen so that $R \ge 2|L|$),
and
picking a hash function $h:[n]\rightarrow[R]$ from a random hash
family constructed as in \Theorem{pagh} with $z = R$ and $c \ge
2$. Upon receiving an update to
$x_i$ in the stream, the update was fed to $D_1^{h(i)}$. The final
estimate was defined as follows. Let $I = [R]\backslash h(L)$. Then,
the estimate was $\Phi' = (R/|I|)\cdot \sum_{j\in I} \mathsf{Est}_1(D_1^j)$.
In place of a
generic $D_1$, the presentation in \cite{NW10} used Li's geometric
mean estimator
\cite{Li08b}, though the analysis (Lemmas 7 and 8 of \cite{NW10}) only
made use of the generic
properties of $D_1$ and $\mathsf{Est}_1$ given in \Theorem{gme-good}.
Let $s = (s_1,\ldots,s_R)$ be the tuple of random strings used by the
$D_1^j$, where the entries of $s$ are pairwise independent.
The analysis then showed that (a) $\mathbf{E}_{h,s}[\Phi'] =
(1\pm O(\varepsilon))\|x_{[n]\backslash L}\|_1$, and (b) $\mathbf{E}_h[\mathbf{Var}_s[\Phi']]
= O(\varepsilon^2\|x\|_1^2)$. For (a), the same analysis applies for $p\neq
1$ when using $\mathsf{Est}_p$ and $D_p$ instead. For (b), it was shown that
$\mathbf{E}_h[\mathbf{Var}_s[\Phi']]
= O(\|x_{[n]\backslash L}\|_2^2 + \varepsilon^2\|x_{[n]\backslash
L}\|_1^2)$. The same analysis shows that $\mathbf{E}_h[\mathbf{Var}_s[\Phi']]
= O(\|x_{[n]\backslash L}\|_{2p}^{2p} + \varepsilon^2\|x_{[n]\backslash
L}\|_p^{2p})$ for $p\neq 1$. Since $L$ contains all the $\varepsilon^2$-heavy hitters,
$\|x_{[n]\backslash L}\|_{2p}^{2p}$ is maximized when there are
$1/\varepsilon^2$ coordinates $i\in [n]\backslash L$ each with $|x_i|^p =
\varepsilon^2\|x\|_p^p$, in which case $\|x_{[n]\backslash L}\|_{2p}^{2p} =
\varepsilon^2\|x\|_p^{2p}$.
}
\textup{
To achieve the desired update time, we buffer $d = 1/\varepsilon^p$
updates at a time, then perform the fast multipoint evaluation of
\Theorem{fastmult} on each batch (note this does not affect our space
bound since $p<2$). That is, although the
hash function $h$ can be evaluated in constant time, updating any
$D_p^j$ requires evaluating a degree-$\Omega(1/\varepsilon^p)$ polynomial,
which na\"{i}vely requires $\Omega(1/\varepsilon^p)$ time. Note that one
issue is that the different data structures
$D_p^j$ use different polynomials, and thus we may need to evaluate
$1/\varepsilon^p$ different polynomials on the $1/\varepsilon^p$ points, defeating
the purpose of batching. To remedy this, note that these polynomials
are themselves pairwise independent. That is, we can assume there are
two coefficient vectors $a,b$ of length $d+1$, and the polynomial
corresponding to $D_p^j$ is given by the coefficient vector $j\cdot a
+ b$. Thus, we only need to perform fast multipoint evaluation on the
two
polynomials defined by $a$ and $b$.
To achieve worst-case update time, this computation can
be spread over the next $d$ updates. If a query comes before $d$
updates are batched, we need to perform $O(d\log d\log\log d)$ work at
once, but this is already dominated by our $O(1/\varepsilon^2)$ reporting
time since $p<2$.
}
\end{remark}
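To make the bucketing scheme above concrete, the following is a minimal
Python sketch of the estimate $\Phi'$, with the per-bucket $D_p$ structures
and $\mathsf{Est}_p$ treated as black boxes. The class \texttt{GenericDp}
and the simple modular hash are illustrative stand-ins only, not the actual
constructions used in \cite{NW10}, and the batched polynomial evaluation
described above is omitted.
\begin{verbatim}
import random

class GenericDp:
    # Stand-in for a generic D_p sketch; a real D_p uses sublinear space.
    def __init__(self):
        self.coords = {}
    def update(self, i, v):
        self.coords[i] = self.coords.get(i, 0) + v
    def est(self, p):
        return sum(abs(x) ** p for x in self.coords.values())

def light_estimator(stream, L, eps, p):
    R = int(4 / eps ** 2)                  # R >= 2|L| buckets
    P = 2 ** 31 - 1
    a, b = random.randrange(1, P), random.randrange(P)
    h = lambda i: ((a * i + b) % P) % R    # pairwise independent (sketch)
    D = [GenericDp() for _ in range(R)]
    for i, v in stream:                    # route each update to bucket h(i)
        D[h(i)].update(i, v)
    I = [j for j in range(R) if j not in {h(i) for i in L}]
    return (R / len(I)) * sum(D[j].est(p) for j in I)
\end{verbatim}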
\section{Appendix}
\subsection{A heavy hitter algorithm for $F_p$}\SectionName{fp-hh}
Note that $\mathsf{F_pReport}$, $\mathsf{F_pUpdate}$,
and $\mathsf{F_pSpace}$ below can be as in the statement in \Section{heavy-contrib}
by using the algorithm of \cite{KNW10b}.
\vspace{.1in}
\noindent \BoldTheorem{fp-hh}
{\it
There is an algorithm $\mathsf{F_pHH}$ satisfying the following properties.
Given $0<\phi,\delta<1/2$ and black-box access to an
$F_p$-estimation algorithm $\mathsf{F_pEst}(\varepsilon',\delta')$ with $\varepsilon' = 1/7$
and $\delta' =
\phi\delta/(12(\log(\phi n) + 1))$,
$\mathsf{F_pHH}$ produces a list $L$ such that $L$
contains all $\phi$-heavy hitters and does not contain indices which
are not $\phi/2$-heavy hitters with probability
at least $1-\delta$. For each $i\in L$, the algorithm also
outputs $\mathrm{sign}(x_i)$, as well as an estimate $\tilde{x}_i$ of $x_i$
satisfying $\tilde{x}_i^p \in [(6/7)|x_i|^p, (9/7)|x_i|^p]$.
Its space usage is $O(\phi^{-1}\log (\phi n)\cdot
\mathsf{F_pSpace}(\varepsilon',\delta') + \phi^{-1}\log(1/(\delta
\phi))\log(nmM))$. Its update
time is $O(\log
(\phi n)\cdot \mathsf{F_pUpdate}(\varepsilon', \delta') + \log(1/(\delta\phi)))$. Its
reporting time is
$O(\phi^{-1}(\log (\phi n)\cdot \mathsf{F_pReport}(\varepsilon', \delta') +
\log(1/(\delta\phi))))$.
Here, $\mathsf{F_pReport}(\varepsilon',\delta')$, $\mathsf{F_pUpdate}(\varepsilon',\delta')$, and
$\mathsf{F_pSpace}(\varepsilon',\delta')$ are the reporting time, update time, and space
consumption of $\mathsf{F_pEst}$ when a $(1\pm\varepsilon')$-approximation to $F_p$ is
desired with probability at least $1-\delta'$.
}
\begin{proof}
First we argue with $\delta' = \phi\delta/(12(\log n + 1))$.
We assume without loss of generality that $n$ is a power of $2$.
Consider the following data structure $\mathsf{BasicF_pHH}(\phi', \delta,
\varepsilon', k)$, where $k\in\{0,\ldots,\log n\}$. We set $R =
\ceil{1/\phi'}$ and pick a
function $h:\{0,\ldots,2^k-1\}\rightarrow[R]$ at random from a
pairwise independent
hash family. We also create instantiations $D_1,\ldots,D_R$ of
$\mathsf{F_pEst}(\varepsilon',1/5)$. This entire structure is then repeated
independently in parallel $T =
\Theta(\log(1/\delta))$ times, so that we have hash functions
$h_1,\ldots,h_T$, and instantiations $D_i^j$ of $\mathsf{F_pEst}$ for $(i,j)\in
[R]\times [T]$. For an integer $x$ in $[n]$, let $\mathsf{prefix}(x, k)$
denote
the length-$k$ prefix of $x-1$ when written in binary, treated as an
integer in $\{0,\ldots,2^k-1\}$.
Upon receiving an update $(i,v)$ in the stream, we feed this update to
$D_{h_j(\mathsf{prefix}(i,k))}^j$ for each $j\in[T]$.
For $t\in \{0,\ldots,2^k-1\}$,
let $F_p(t)$ denote the $F_p$ value of the vector $x$ restricted to
indices $i\in[n]$ with $\mathsf{prefix}(i,k) = t$.
Consider the procedure $\mathsf{Query}(t)$
which outputs the median of $F_p$-estimates given
by $D_{h_j(t)}^j$ over all $j\in[T]$.
We now argue that the output of
$\mathsf{Query}(t)$ is in the interval $[(1-\varepsilon')\cdot F_p(t),
(1+\varepsilon')\cdot(F_p(t)
+ 5\phi'\|x\|_p^p)]$, i.e. $\mathsf{Query}(t)$ ``succeeds'', with probability
at least $1-\delta$.
For any $j\in[T]$, consider the
actual $F_p$ value $F_p(t)^j$ of the vector $x$ restricted to
coordinates $i$ such that $h_j(\mathsf{prefix}(i,k)) = h_j(t)$. Then $F_p(t)^j
= F_p(t) + R(t)^j$, where $R(t)^j$ is the
$F_p$ contribution of the $i$ with $\mathsf{prefix}(i,k)\neq t$, yet
$h_j(\mathsf{prefix}(i,k)) = h(t)$.
We have
$R(t)^j \ge 0$ always, and furthermore
$\mathbf{E}[R(t)^j] \le \|x\|_p^p/R$ by pairwise independence of $h_j$. Thus
by Markov's inequality, $\Pr[R(t)^j > 5\phi'\|x\|_p^p] <
1/5$. Note for any fixed $j\in[T]$, the $F_p$-estimate output by
$D_{h_j(t)}^j$ is in $[(1-\varepsilon')\cdot F_p(t),
(1+\varepsilon')\cdot(F_p(t)
+ 5\phi'\|x\|_p^p)]$ as long as both the events ``$D_{h_j(t)}^j$
successfully gives a $(1\pm\varepsilon')$-approximation'' and ``$R(t)^j \le
5\phi'\|x\|_p^p$'' occur. This happens with probability at least
$3/5$. Thus, by a Chernoff bound, the output of $\mathsf{Query}(t)$ is in
the desired interval
with probability at least $1-\delta$.
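As an illustration, $\mathsf{BasicF_pHH}$ and $\mathsf{Query}$ can be
sketched as follows, with $\mathsf{F_pEst}$ as a black box (an object
exposing \texttt{update} and \texttt{est} methods). The hash family shown
is a simplified stand-in for a genuinely pairwise independent one, and all
names are illustrative rather than part of any actual implementation.
\begin{verbatim}
import math, random, statistics

class BasicFpHH:
    def __init__(self, phi_p, delta, k, logn, make_fp_est):
        self.R = math.ceil(1.0 / phi_p)
        self.T = math.ceil(8 * math.log(1.0 / delta))  # Theta(log(1/delta))
        self.k, self.logn, self.P = k, logn, 2 ** 31 - 1
        self.hs = [(random.randrange(1, self.P), random.randrange(self.P))
                   for _ in range(self.T)]
        self.D = [[make_fp_est() for _ in range(self.R)]
                  for _ in range(self.T)]

    def _h(self, j, t):                      # pairwise independent (sketch)
        a, b = self.hs[j]
        return ((a * t + b) % self.P) % self.R

    def update(self, i, v):
        t = (i - 1) >> (self.logn - self.k)  # prefix(i, k)
        for j in range(self.T):
            self.D[j][self._h(j, t)].update(i, v)

    def query(self, t):                      # median over the T repetitions
        return statistics.median(self.D[j][self._h(j, t)].est()
                                 for j in range(self.T))
\end{verbatim}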
We now define the final $\mathsf{F_pHH}$ data structure.
We maintain one global instantiation $D$ of $\mathsf{F_pEst}(1/7,\delta/2)$.
We also use the dyadic
interval idea for $L_1$-heavy hitters given in
\cite{CM05}. Specifically, we imagine building a binary tree
$\mathcal{T}$ over the
universe $[n]$ (without loss of generality assume $n$ is a power of
$2$). The number of levels in the tree is $\ell = 1 + \log n$, where
the root
is at level $0$ and the leaves are at level $\log n$. For each level
$j\in\{0,\ldots,\log n\}$, we maintain an instantiation
$B_j$ of $\mathsf{BasicF_pHH}(\phi/80, \delta', 1/7, j)$ for $\delta'$ as in
the theorem statement. When we receive an
update $(i,v)$ in the stream, we feed the update to $D$ and also to
each $B_j$.
We now describe how to answer a query to output the desired list
$L$.
We first query $D$ to obtain $\tilde{F}_p$, an approximation to
$F_p$. We next initiate an iterative procedure on our binary tree,
beginning at the root, which proceeds level by level. The procedure
is as follows. Initially, we
set $L = \{0\}$, $L' = \emptyset$, and $j = 0$. For each $i\in L$, we
perform $\mathsf{Query}(i)$ on $B_j$ then add $2i$ and $2i+1$ to $L'$ if the
output of $\mathsf{Query}(i)$ is at least $3\phi\tilde{F}_p/4$. After
processing every $i\in L$, we then set $L\leftarrow L'$ then
$L'\leftarrow \emptyset$, and we increment $j$. This continues until
$j=1+\log n$, at which point we halt and return $L$.
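The level-by-level search just described corresponds to the following
sketch, where \texttt{B[j].query} and \texttt{Fp\_tilde} refer to the
structures defined above (names illustrative).
\begin{verbatim}
def report(B, Fp_tilde, phi, logn):
    # Walk the dyadic tree; keep the children of every node whose
    # estimate clears the threshold. Survivors at the bottom identify
    # the heavy coordinates.
    L = [0]
    for j in range(logn + 1):              # levels 0 .. log n
        Lp = []
        for i in L:
            if B[j].query(i) >= 0.75 * phi * Fp_tilde:
                Lp += [2 * i, 2 * i + 1]
        L = Lp
    return L
\end{verbatim}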
We now show why
the list $L$ output by this procedure satisfies the claim in the
theorem statement. We condition on the event $\mathcal{E}$
that $\tilde{F}_p = (1\pm 1/7)F_p$, and also on the event
$\mathcal{E}'$ that every query made throughout the recursive
procedure is successful. Let $i$ be such that $|x_i|^p \ge \phi F_p$.
Then, since $F_p(\mathsf{prefix}(i, j)) \ge |x_i|^p$
for any $j$, we always have that $\mathsf{prefix}(i,j)\in L$ at the end of the
$j$th round of our iterative procedure, since $(6/7)|x_i|^p \ge
(3/4)\phi\tilde{F}_p$ given $\mathcal{E}$. Now, consider an $i$ such
that $|x_i|^p < (\phi/2)F_p$. Then, $(8/7)\cdot (|x_i|^p + 5\cdot
(\phi/80)F_p) < 3\phi\tilde{F}_p/4$, implying $i$ is not included in the
final output list. Also, note that since the query at the leaf
corresponding to $i\in L$ is successful, then by definition of a
successful query, we are given an estimate $\tilde{x}_i^p$ of
$|x_i|^p$ by the corresponding $\mathsf{BasicF_pHH}$ structure satisfying
$\tilde{x}_i^p \in [(6/7)|x_i|^p, (8/7)(|x_i|^p + (\phi/16)F_p)]$, which
is $[(6/7)|x_i|^p, (9/7)|x_i|^p]$ since $|x_i|^p \ge (\phi/2)F_p$.
We now only need to argue that $\mathcal{E}$ and $\mathcal{E}'$ occur
simultaneously with large probability. We have $\Pr[\mathcal{E}] \ge 1 -
\delta/2$. For $\mathcal{E}'$, note there are at most $2/\phi$
$\phi/2$-heavy hitters at any level of the tree, where at level $j$
we are referring to heavy hitters of the $2^j$-dimensional vector
$y_j$ satisfying $(y_j)_i^p = \sum_{\mathsf{prefix}(t, j) = i} |x_t|^p$. As
long as the
$\mathsf{Query}(\cdot)$ calls made for all $\phi/2$-heavy hitters and their
two children throughout the tree succeed (including at the root),
$\mathcal{E}'$ holds. Thus, $\Pr[\mathcal{E}'] \ge 1 - \delta'\cdot
6(\log n + 1)\phi^{-1} = 1 - \delta/2$. Therefore, by a union bound
$\Pr[\mathcal{E} \wedge \mathcal{E}'] \ge 1 - \delta$.
Finally, notice that the number of levels in $\mathsf{F_pHH}$ can be reduced
from $\log n$ to $\log n - \log \ceil{1/\phi} = O(\log (\phi n))$ by
simply ignoring the top $\log \ceil{1/\phi}$ levels of the tree. Then,
in the topmost level of the tree which we maintain, the universe size
is $O(1/\phi)$, so we can begin our reporting procedure by querying
all these universe items to determine which subtrees to recurse upon.
To recover $\mathrm{sign}(x_w)$ for each $w\in L$, we use the $\mathsf{CountSketch}$
data structure of \cite{CCF02} with $T = (21\cdot 2^p)/\phi$ columns and
$C = \Theta(\log(1/(\delta\phi)))$ rows; the space is
$O(\phi^{-1}\log(1/(\delta\phi))\log(nmM))$, and the update time is
$O(\log(1/(\delta\phi)))$. $\mathsf{CountSketch}$ operates by, for each row $i$,
having a pairwise independent hash function $h_i:[n]\rightarrow [T]$ and
a $4$-wise independent hash function $\sigma_i:[n]\rightarrow
\{-1,1\}$. There are $C\cdot T$ counters $A_{i,j}$ for
$(i,j)\in[C]\times [T]$. Counter $A_{i,j}$ maintains $\sum_{h_i(v) =
j} \sigma_i(v)\cdot x_v$. For $(i,j)\in [C]\times [T]$, let
$x^{i}$ be the vector $x$ restricted to coordinates $v$ with $h_i(v)
= h_i(w)$, other than $w$ itself. Then for fixed $i$, the expected
contribution to
$\|x^i\|_p^p$ is at most
$\|x\|_p^p/T$, and thus is at most $10\|x\|_p^p/T$ with
probability
$9/10$ by Markov's inequality. Conditioned on this event, $|x_w| >
\|x^i\|_p/2 \ge \|x^i\|_2/2$. The analysis of $\mathsf{CountSketch}$ also
guarantees $|A_{i,h_i(w)} - \sigma_i(w)x_w| \le 2\|x^i\|_2$ with
probability at least $2/3$, and thus by a union bound, $|x_w| >
|A_{i,h_i(w)} - \sigma_i(w)x_w|$ with probability at least $11/20$, in
which case $\sigma_i(w)\cdot \mathrm{sign}(A_{i,h_i(w)}) = \mathrm{sign}(x_w)$. Thus,
by a Chernoff
bound over all rows, together with a union bound over all $w\in L$, we
can recover $\mathrm{sign}(x_w)$ for all $w\in L$ with probability $1 -
\delta$.
\end{proof}
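For concreteness, the majority-vote step of the sign recovery can be
sketched as follows; the counter array \texttt{A} and hash functions
\texttt{h}, \texttt{sigma} are assumed to be maintained as described above,
and the names are illustrative only (see \cite{CCF02} for the actual
$\mathsf{CountSketch}$ construction).
\begin{verbatim}
def recover_sign(A, h, sigma, w, C):
    # Majority vote of sigma_i(w) * sign(A[i][h_i(w)]) over the C rows.
    votes = sum(sigma[i](w) * (1 if A[i][h[i](w)] >= 0 else -1)
                for i in range(C))
    return 1 if votes >= 0 else -1
\end{verbatim}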
\section{Introduction}
\label{sec:intro}
%
\par Observations of the magnetically closed solar corona from the \textit{Hinode} \citep{kosugi_hinode_2007} and Solar Dynamics Observatory (SDO) \citep{pesnell_solar_2012} spacecraft have led, for the first time, to quantitative studies of the distribution of coronal plasma as a function of temperature, and preliminary deductions about the heating process \citep[see papers in][]{de_moortel_recent_2015}. The key to this has been the ability to make measurements of the corona over a wide range of temperatures from the EUV Imaging Spectrometer (EIS) \citep{culhane_euv_2007} and X-Ray Telescope (XRT) \citep{golub_x-ray_2007} instruments on \textit{Hinode}, and the Atmospheric Imaging Assembly (AIA) \citep{lemen_atmospheric_2012} on SDO. Underpinning this work is the concept of nanoflare heating of the corona. Nanoflares \citep[e.g.][]{parker_nanoflares_1988} are small bursts of energy release, which, despite the implication in their name, have unknown magnitude and duration. While commonly attributed to small-scale magnetic reconnection, nanoflares can occur in other heating scenarios \citep[e.g.][]{ofman_self-consistent_1998}.
\par One example of this approach has been studies of active region (AR) core loops \citep{warren_constraints_2011,warren_systematic_2012,winebarger_using_2011,tripathi_emission_2011,schmelz_cold_2012,bradshaw_diagnosing_2012,reep_diagnosing_2013,del_zanna_evolution_2015}. These are the brightest structures in ARs, spanning the magnetic polarity line, and are observed over a wide range of temperatures. An important result has been the determination of the emission measure distribution as a function of temperature ($\mathrm{EM}(T)\sim n^2dh$) along a line of sight. These workers showed that the emission measure peaked at $T = T_m = 10^{6.5}$ – $10^{6.6}$ K with $\mathrm{EM}(T_m)$ of order $10^{27}$ – $10^{28}$ cm$^{-5}$. Below $T_m$ a relation of the form $\mathrm{EM} \propto T^a$ was found, with $2 < a < 5$. This distribution can be understood by a combination of radiative cooling of the corona to space and an enthalpy flux to the transition region (TR) \citep[e.g.][]{bradshaw_cooling_2010,bradshaw_new_2010} and has significant implications for nanoflare heating. Defining low and high frequency (LF and HF) nanoflares by the ratio of the average time between nanoflares on a magnetic strand or sub-loop ($\langle t_N \rangle$) to the plasma cooling time from the peak emission measure ($\tau_{cool}$), LF (HF) nanoflares have $\langle t_N \rangle > (<) \tau_{cool}$ respectively. LF nanoflares have $a \sim$ 2 - 3 and thus do not account for many of the observations. In fact, \citet{cargill_active_2014} argued that these results implied a heating mechanism with $\langle t_N \rangle$ of order 1000 - 2000 s between nanoflares, with the value of $t_N$ associated with each nanoflare being proportional to its energy. Such intermediate frequency (IF) nanoflares have different energy build-up requirements from the commonly assumed LF scenario \citep{cargill_active_2014}.
\par A second outcome of AR studies is the detection of a ``hot'' non-flaring coronal component characterised by plasma with $T > T_m$, a long-predicted consequence of nanoflare heating \citep{cargill_implications_1994,cargill_diagnostics_1995}. This has been identified from \textit{Hinode} and SDO data \citep{reale_evidence_2009,schmelz_hinode_2009,testa_hinode/eis_2012}, and retrospectively from data obtained by the X-Ray Polychromator (XRP) instrument flown on the Solar Maximum Mission \citep{del_zanna_elemental_2014}. While characterising this emission is difficult \citep[e.g.][]{testa_temperature_2011,winebarger_defining_2012}, a similar scaling, $\mathrm{EM} \propto T^{-b}$, has been claimed \citep[e.g.][]{warren_systematic_2012}, with $b$ of order 7 – 10, though \citeauthor{del_zanna_elemental_2014} find larger values. \citeauthor{warren_systematic_2012} quote typical errors of $\pm$ 2.5 - 3 on these values due to the very limited data available above $T_m$, and \citeauthor{winebarger_defining_2012} have noted that the paucity of data from \textit{Hinode} at these temperatures could be missing significant quantities of plasma with $T > T_m$.
%
\par In an effort to diminish uncertainty in this high temperature ``blind spot'' in $\mathrm{EM}(T)$, \citet{petralia_thermal_2014} analyzed an AR core by supplementing EIS spectral observations with broadband AIA and XRT measurements. By using concurrent observations from the 94 \AA~ channel of AIA and the Ti\_poly filter of XRT, the authors showed that the $\mathrm{EM}(T)$ peaked near $T_m = 10^{6.6}$ K and had a weak, hot component. Additionally, \citet{miceli_x-ray_2012}, using the SphinX instrument \citep{sylwester_sphinx:_2008,gburek_sphinx_2011}, analyzed full-disk X-ray spectra integrated over 17 days, during which time two prominent ARs were present. These authors found that a two-temperature model was needed to fit the resulting spectrum, a strong 3 MK component and a much weaker 7 MK component.
\par More recent data has come from rocket flights. The Focusing Optics X-ray Solar Imager (FOXSI) \citep{krucker_focusing_2013} first flew in November 2012 and observed an AR. A joint study with EIS and XRT by \citet{ishikawa_constraining_2014} suggested that while hot plasma existed up to 10 MK, the \textit{Hinode} instruments over-estimated the amount of plasma there. A rocket flight reported by \citet{brosius_pervasive_2014} identified emission in an Fe XIX line with peak formation temperature of $10^{6.95}$ K and reported an emission measure that was 0.59 times the emission formed at $10^{6.2}$ K. More recently, a pair of rocket flights gave observations from the Amptek X123-SDD soft X-ray spectrometer \citep{caspi_new_2015}. This provided comprehensive coverage of the 3 - 60 \AA~ wavelength range. \citeauthor{caspi_new_2015} demonstrated that the emission in this range could be fit by an emission measure with a power-law distribution slope of roughly $b = 6$. While all of these observations are very suggestive of nanoflare heating, it should also be noted that pixel-averaging, long time averages and/or inadequate instrument spatial resolution may lead to contamination of the $\mathrm{DEM}$ by multiple structures along the line of sight. It is desirable to obtain future measurements of plasma emission at $T>T_m$ from a single structure, such as a core active region loop, along the line of sight.
\par Several other workers have combined model results with observations in an effort to better elucidate nanoflare signatures. Using a hydrodynamic loop model, \citet{reale_solar_2011} showed that emission from impulsively heated subarcsecond strands is finely structured and that this predicted structure can also be found in AR core emission as observed by the 94 \AA~ channel of AIA. Most recently, \citet{tajfirouze_time-resolved_2016}, using a 0D hydrodynamic model, explored a large parameter space in event energy distribution, pulse duration, and number of loops. Using a probabilistic neural network, the authors compared their many forward-modeled light curves to 94 \AA~ AIA observations of a ``hot'' AR core. They found that the observed light curves were most consistent with a pulse duration of 50 s and a shallow event energy distribution, suggestive of nanoflare heating.
\par While the distributions of temperature and density above $T_m$ are likely to be determined by nanoflare heating and conductive cooling, there are several complications arising from the low density and high temperature present there. These are (i) the breakdown of the usual Spitzer description of thermal conduction which leads to slower conductive cooling, (ii) recognition that in cases of heating in a weakly collisional or collisionless plasma, electrons and ions need not have the same temperature since when one is heated preferentially the time for the temperature to equilibrate is longer than the electron conductive cooling time, and (iii) a lack of ionization equilibrium that can underestimate the quantity of the plasma with a given electron temperature.
\par Thus the aim of the present and following paper, \citet[in preparation]{barnes_inference_2016-1} \citepalias[hereafter]{barnes_inference_2016-1}, is to investigate this high temperature regime from a modeling viewpoint with the aim of obtaining information that can be of use in the interpretation of present and future observations. In this paper we focus on single-nanoflare simulations and build up an understanding of the role of the different pieces of physics. \citetalias{barnes_inference_2016-1} addresses the properties of nanoflare trains. Given the limitations of present observations, the results of both papers are in part predictive for a future generation of instruments. \autoref{sec:phys_sum} addresses our methodology, including simple outlines of the physics expected from conductive cooling, the preferred heating of different species, and ionization non-equilibrium. \autoref{sec:results} shows results for our single- and two-fluid models, and \autoref{sec:discussion} provides discussion of the main points of our results.
%
\section{Summary of Relevant Physics}
\label{sec:phys_sum}
%
\par We begin by considering the situation when a coronal loop (or sub-loop) cools in response to a nanoflare by the evolution of a single-fluid plasma $(T_e = T_i)$ along a magnetic field line. We deal with the case of electron-ion non-equilibrium in \autoref{subsec:two_fluid_theory}. The energy equation is,
\begin{equation}
\label{eq:energy_1d}
\frac{\partial E}{\partial t} = -\frac{\partial}{\partial s}[v(E+P)] - \frac{\partial F_c}{\partial s} + Q - n^2\Lambda(T),
\end{equation}
where $v$ is the velocity, $E=p/(\gamma -1) + \rho v^2/2$, $F_c=-\kappa_0 T^{5/2}\partial T/\partial s$ is the heat flux, $Q$ is a heating function that includes both steady and time-dependent components, $\Lambda(T)=\chi T^{\alpha}$ is the radiative loss function in an optically thin plasma \citep[e.g.][]{klimchuk_highly_2008} and $s$ is a spatial coordinate along the magnetic field. In addition the equations of mass and momentum conservation are solved. These equations are closed by $p=2nk_BT$, the equation of state. For a given initial state and $Q$, the plasma evolution can then be followed.
\par In this paper, two approaches are used to solve \autoref{eq:energy_1d}. One uses the HYDRAD code \citep{bradshaw_influence_2013} which solves the full field-aligned hydrodynamic two-fluid equations. The second develops further the zero-dimensional Enthalpy Based Thermal Evolution of Loops (EBTEL) approach which solves for average coronal plasma quantities \citep{klimchuk_highly_2008,cargill_enthalpy-based_2012,cargill_enthalpy-based_2012-1,cargill_modelling_2015}. In this paper we compare the HYDRAD and EBTEL results and outline some restrictions that apply to the use of EBTEL when modeling the hot coronal component. However, the value of the EBTEL approach lies in its simplicity and computational speed, and the consequent ability to model the corona as a multiplicity of thin loops for long times, as we do in \citetalias{barnes_inference_2016-1}. Such calculations remain challenging for field-aligned hydrodynamic models.
\par The derivation of the single-fluid EBTEL equations can be found in \citet{klimchuk_highly_2008,cargill_enthalpy-based_2012}. We assume subsonic flows, and \autoref{eq:energy_1d} and the equation of mass conservation are solved for nanoflare energy input. EBTEL treats the corona and TR as separate regions, matched at the top of the TR by continuity of conductive and enthalpy fluxes. It produces spatially-averaged, time-dependent quantities (e.g. $\bar{T}(t),\bar{n}(t)$) in the corona and can also compute quantities at the loop apex and the corona/TR boundary. The single-fluid EBTEL equations are,
%
\begin{align}
\frac{1}{\gamma - 1}\frac{d\bar{p}}{dt} =& \,\bar{Q} - \frac{1}{L}(\mathcal{R}_C + \mathcal{R}_{TR}), \label{eq:energy_0d} \\
\frac{\gamma}{\gamma - 1}(pv&)_0 + F_{c,0} + \mathcal{R}_{TR} = 0, \label{eq:tr_energy_0d} \\
\frac{d\bar{n}}{dt} =& -\frac{c_2(\gamma - 1)}{2c_3\gamma Lk_B\bar{T}}(F_{c,0} + \mathcal{R}_{TR}).\label{eq:mass_0d}
\end{align}
%
Here an overbar denotes a coronal average, $F_{c,0} = -(2/7)\kappa_0 T_a^{7/2}/L$ is the heat flux at the top of the TR (see also \autoref{subsec:hf_theory}), $\mathcal{R}_C=\bar{n}^2\Lambda(\bar{T})L$, is the integrated coronal radiation, $\mathcal{R}_{TR}$ is the integrated TR radiation, and $L$ is the loop half-length. The subscript ``0'' denotes a quantity at the top of the TR and ``$a$'' denotes a quantity at the loop apex. Solving this set of equations requires the specification of three (semi-)constants that are defined by $c_1=\mathcal{R}_{TR}/\mathcal{R}_C$, $c_2=\bar{T}/T_a$ and $c_3=T_0/T_a$. $c_2$ and $c_3$ can be taken as constant, with values of 0.9 and 0.6 respectively. \citet{cargill_enthalpy-based_2012} discuss the full implementation of $c_1 = c_1(T_a,L)$. \autoref{appendix_c1_corrections} provides a detailed discussion of the additional corrections we have applied to $c_1$ in order to ensure better agreement with HYDRAD for impulsive heating scenarios. \autoref{eq:energy_0d} is a statement of energy conservation in the combined corona and TR. \autoref{eq:tr_energy_0d} is the TR energy equation: if the heat flux into the TR is greater (smaller) than its ability to radiate then there is an enthalpy flux into (from) the corona. \autoref{eq:mass_0d} combines \autoref{eq:tr_energy_0d} with that of mass conservation.
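\par To make the compactness of this system explicit, the following Python sketch advances \autoref{eq:energy_0d} and \autoref{eq:mass_0d} by a single explicit Euler step. It is a minimal illustration only: the radiative loss branch, the value of $c_1$, and the heating input are simplified placeholders, and a full implementation must use the $c_1(T_a,L)$ prescription with the corrections of \autoref{appendix_c1_corrections}, the piecewise loss function, and adaptive time-stepping.
\begin{verbatim}
KB, KAPPA0, GAMMA = 1.38e-16, 1.0e-6, 5.0 / 3.0    # cgs units

def ebtel_euler_step(p, n, L, Q, dt, c1=4.0, c2=0.9, c3=0.6):
    # One explicit Euler step of the single-fluid EBTEL equations (sketch).
    Tbar = p / (2.0 * KB * n)                  # from p = 2 n kB T
    Ta = Tbar / c2                             # apex temperature
    Fc0 = -(2.0 / 7.0) * KAPPA0 * Ta**3.5 / L  # heat flux at top of TR
    Lam = 1.0e-19 * Tbar**-0.5                 # placeholder loss branch
    Rc = n**2 * Lam * L                        # integrated coronal radiation
    Rtr = c1 * Rc                              # TR radiation via c1
    dpdt = (GAMMA - 1.0) * (Q - (Rc + Rtr) / L)
    dndt = -(c2 * (GAMMA - 1.0) / (2.0 * c3 * GAMMA * L * KB * Tbar)) \
           * (Fc0 + Rtr)
    return p + dpdt * dt, n + dndt * dt
\end{verbatim}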
%
\subsection{Heat Flux Limiters}
\label{subsec:hf_theory}
%
\par It is well known that thermal conduction deviates from the familiar Spitzer-H{\"a}rm formula \citep{spitzer_transport_1953} at high temperatures \citep[e.g.][]{ljepojevic_heat_1989}. There is a firm upper limit on the heat flux: the free-streaming limit, $F_s=(1/2)fnk_BTV_e$, where $V_e$ is the electron thermal speed and $f$, a dimensionless constant, is determined from a combination of lab experiments, theory, and numerical models. The free-streaming flux is included in EBTEL and HYDRAD by a simple modification \citep{klimchuk_highly_2008},
\begin{equation}
F_{c,0} = \frac{F_cF_s}{\sqrt{F_c^2 + F_s^2}},
\end{equation}
where $F_c$ is the Spitzer-H{\"a}rm heat flux. Smaller values of $f$ limit the heat flux to a greater degree. There is some disagreement on the optimal value of $f$. \citet{luciani_nonlocal_1983} use $f=0.1$ while \citet{karpen_nonlocal_1987} use $f=0.53$, and \citet{patsourakos_coronal_2005} choose $f=1/6$. Unless explicitly stated otherwise, we use $f = 1$ in order to compare EBTEL results with those of HYDRAD \citep[see appendix of][]{bradshaw_influence_2013}. The main aspect of inclusion of a free-streaming limit is to slow down conductive cooling. We do not consider here other conduction models \citep[e.g. the non-local model discussed in the coronal context by][]{karpen_nonlocal_1987,ciaravella_non-local_1991,west_lifetime_2008} since they lead to similar generic results.
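\par In code, this smoothing is a one-line bridge between the two regimes. A minimal sketch (cgs constants; the $V_e$ shown is one common convention for the electron thermal speed, which is an assumption here):
\begin{verbatim}
import math

def limited_heat_flux(Fc, n, T, f=1.0):
    # Smoothly interpolate between the Spitzer flux Fc and the
    # free-streaming limit Fs = (1/2) f n kB T Ve (sketch).
    KB, ME = 1.38e-16, 9.11e-28
    Ve = math.sqrt(2.0 * KB * T / ME)
    Fs = 0.5 * f * n * KB * T * Ve
    return Fc * Fs / math.sqrt(Fc**2 + Fs**2)
\end{verbatim}
Smaller values of $f$ reduce $F_s$, and the returned flux approaches whichever of $F_c$ and $F_s$ is smaller in magnitude.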
%
\subsection{Two-fluid Modeling}
\label{subsec:two_fluid_theory}
%
\par In some parameter regimes nanoflare heating can also induce electron-ion non-equilibrium if the heating timescale is shorter than the electron-ion equilibration timescale. Interactions between electrons and ions in a fully-ionized hydrogen plasma like the solar corona are governed by binary Coulomb collisions. Thus, the equilibration timescale is $\tau_{ei}=1/\nu_{ei}$, where $\nu_{ei}$ is the collision frequency and is given by,
\begin{equation}
\label{eq:col_freq}
\nu_{ei} = \frac{16\sqrt{\pi}}{3}\frac{e^4}{m_em_i}\left(\frac{2k_BT_e}{m_e}\right)^{-3/2}n\ln{\Lambda},
\end{equation}
where $T_e$ is the electron temperature, $m_e,m_i$ are the electron and ion masses respectively and $\ln{\Lambda}$ is the Coulomb logarithm \citep[see both Eq. 2.5e and Section 3 of][]{braginskii_transport_1965}. For $n\sim10^9$ cm$^{-3}$ and $T_e\sim10^{7}$ K, parameters typical of nanoflare heating, $\tau_{ei}\approx800$ s. Thus, any heating that occurs on a timescale less than 800 s, such as a nanoflare with a duration of $\tau\le100$ s, will result in electron-ion non-equilibrium. While chromospheric evaporation during and after the nanoflare will increase $n$ and thus decrease $\tau_{ei}$, we argue that during the early heating phase, $\tau_{ei}\gg\tau$, with 800 s being an upper bound on $\tau_{ei}$.
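\par As a quick numerical check of this estimate, \autoref{eq:col_freq} can be evaluated directly (a sketch in cgs units; the value $\ln{\Lambda}\approx20$ is a typical coronal assumption not specified above):
\begin{verbatim}
import math

E, ME, MI, KB = 4.803e-10, 9.109e-28, 1.673e-24, 1.381e-16  # cgs
n, Te, lnLam = 1.0e9, 1.0e7, 20.0

nu_ei = (16.0 * math.sqrt(math.pi) / 3.0) * E**4 / (ME * MI) \
        * (2.0 * KB * Te / ME)**-1.5 * n * lnLam
print(1.0 / nu_ei)   # ~8e2 s, consistent with tau_ei ~ 800 s
\end{verbatim}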
%
\par While it is often assumed that the electrons are the recipients of the prescribed coronal heating function, ion heating in the solar corona should not be discounted since the exact mechanism behind coronal heating is still unknown. For example, ions may be heated via ion-cyclotron wave resonances \citep{markovskii_intermittent_2004} or magnetic reconnection \citep{ono_ion_1996,drake_onset_2014}. To address this possibility and include effects due to electron-ion non-equilibrium, we have applied the EBTEL analysis outlined in \citet{klimchuk_highly_2008} to the two-fluid hydrodynamic equations in the form given in the appendix of \citet{bradshaw_influence_2013}. Such an approach allows us to efficiently model a two-component impulsively-heated coronal plasma, and will be used extensively in \citetalias{barnes_inference_2016-1}.
%
\par The two-fluid EBTEL equations are derived fully in \autoref{appendix_two_fluid} and are,
\begin{align}
\frac{d}{dt}\bar{p}_e &=\,\frac{\gamma - 1}{L}[\psi_{TR} - (\mathcal{R}_{TR} + \mathcal{R}_C)] + \nonumber \\ & k_B\bar{n}\nu_{ei}(\bar{T}_i-\bar{T}_e) + (\gamma-1)\bar{Q}_{e},\label{eq:press_e_0d_2fl} \\[0.5em]
%
\frac{d}{dt}\bar{p}_i &=\,-\frac{\gamma - 1}{L}\psi_{TR} + k_B\bar{n}\nu_{ei}(\bar{T}_e-\bar{T}_i) + \nonumber \\ &(\gamma-1)\bar{Q}_{i},\label{eq:press_i_0d_2fl} \\[0.5em]
%
\frac{d}{dt}\bar{n} &=\,\frac{c_2(\gamma-1)}{c_3\gamma Lk_B\bar{T}_e}(\psi_{TR} - F_{ce,0}-\mathcal{R}_{TR}). \label{eq:mass_0d_2fl}
\end{align}
This set of equations is closed by the equations of state $p_e=k_BnT_e$ and $p_i=k_BnT_i$. While the notation above is largely self-evident, we draw attention to the additional term $\psi_{TR}$ which originates in the need to maintain charge and current neutrality and is defined by \autoref{eq:psi_TR}. Additionally, in both the single- and two-fluid versions of EBTEL used here, we have implemented an adaptive time-stepping routine to ensure that we are correctly resolving the thermal conduction timescale.
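\par For reference, the right-hand sides of \autoref{eq:press_e_0d_2fl}--\autoref{eq:mass_0d_2fl} translate directly into code. In the following sketch, $\psi_{TR}$, the radiative losses, the electron heat flux, and $\nu_{ei}$ are supplied as precomputed inputs rather than evaluated self-consistently:
\begin{verbatim}
def two_fluid_rhs(pe, pi, n, L, Qe, Qi, psi_tr, Rtr, Rc, Fce0, nu_ei,
                  kB=1.38e-16, gamma=5.0/3.0, c2=0.9, c3=0.6):
    # d(pe)/dt, d(pi)/dt, dn/dt for the two-fluid EBTEL equations.
    Te, Ti = pe / (kB * n), pi / (kB * n)   # pe = kB n Te, pi = kB n Ti
    dpe = (gamma - 1.0) / L * (psi_tr - (Rtr + Rc)) \
          + kB * n * nu_ei * (Ti - Te) + (gamma - 1.0) * Qe
    dpi = -(gamma - 1.0) / L * psi_tr \
          + kB * n * nu_ei * (Te - Ti) + (gamma - 1.0) * Qi
    dn = c2 * (gamma - 1.0) / (c3 * gamma * L * kB * Te) \
         * (psi_tr - Fce0 - Rtr)
    return dpe, dpi, dn
\end{verbatim}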
\subsection{Ionization Non-equilibrium}
\label{subsec:nei_theory}
%
\par Ionization non-equilibrium has long been known to be an issue in the interpretation of data from the impulsive phase of flares, and more recently it has been discussed in the context of nanoflares \citep{bradshaw_explosive_2006,reale_nonequilibrium_2008}. The main issue is that when a tenuous plasma is heated rapidly, it takes a certain time to reach ionization equilibrium so that the ionization states present do not reflect the actual (electron) temperature, assuming that the heating occurs mainly to electrons (see \autoref{subsec:two_fluid_theory} and \autoref{subsec:two_fluid_res}) rather than the heavier ions such as Fe that contribute to the observed radiation. If the heating is sustained, then eventually ionization equilibrium will be reached, and this may occur in moderate to large flares. However, for nanoflares that may last for anywhere between a few seconds and a few minutes, a different scenario arises in which, on termination of heating, rapid conductive cooling sets in, so that the high ionization states may never be attained.
%
\par \citet{bradshaw_explosive_2006}, \citet{reale_nonequilibrium_2008} and \citet{bradshaw_numerical_2009} have all addressed this point using slightly different approaches, but with similar conclusions, namely that short nanoflares in a low-density plasma are unlikely to be detectable. We now develop this work further to assess how the results in the first parts of \autoref{sec:results} are altered. We follow these authors and calculate an ``effective temperature'' ($T_{eff}$) as a proxy for the deviation from ionization equilibrium. This involves taking a time-series of $T$ and $n$ (e.g. from EBTEL) and using the numerical code\footnote{The numerical code used here has been made freely available by the author and is available at \url{https://github.com/rice-solar-physics/IonPopSolver}.} described in \citet{bradshaw_numerical_2009} to calculate the fractional ionization of as many states of various elements as needed, and in turn this calculates $T_{eff}$, a temperature that would be measured based on the actual ionization states. We primarily consider Fe between Fe IX and Fe XXVII, though Ca has also been calculated as a check on these results.
%
\par The feature that will prove of great relevance in our results is that despite the different nanoflare durations, $T_{eff}$ does not exceed 10 MK. There is also an ``overshoot'' of $T_{eff}$ when it reaches its maximum value: this is saying that collisions are still not strong enough for the adjustment of the ionization state to be instantaneous.
%
\section{Results}
\label{sec:results}
%
\par We now show a series of simulations of a single nanoflare with our zero-dimensional single- and two-fluid hydrodynamic EBTEL models, and the HYDRAD code. \citetalias{barnes_inference_2016-1} discusses long trains of multiple nanoflares of varying frequency in multiple loops. All results were processed using the IPython ecosystem \citep{perez_ipython:_2007} and the NumPy scientific computing package \citep{van_der_walt_numpy_2011}. All plots were produced using the matplotlib graphics environment \citep{hunter_matplotlib:_2007}.
%
\par An important output of all these models is the coronal emission measure. In EBTEL the emission measure for the entire coronal part of the loop is calculated using the familiar expression $\mathrm{EM}=n^2(2L)$, where $L$ is the loop half-length. We consider a temperature range of $4.0\le\log{T}\le8.5$ with bin sizes of $\Delta\log{T}=0.01$. At each time $t_i$, the coronal temperature range $[T_0,T_a]$ is calculated from $\bar{T}$ ($\bar{T}_e$ for the two-fluid model). For each bin that falls within $[T_0,T_a]$, $\bar{n}_i^2(2L)$ is added to that bin, where $\bar{n}_i$ is the spatially-averaged number density at $t_i$. The emission measure in each bin is then averaged over the entire simulation period. When measured observationally, $\mathrm{EM}(T)$ is a line-of-sight quantity. Assuming an aspect ratio (i.e. ratio of loop length to loop width) of 10, we apply a correction factor 1/10 to all calculated $\mathrm{EM}$ curves. The emission measure from HYDRAD is calculated using quantities averaged over the upper 80\% of the loop which corresponds to the coronal portion of the loop.
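\par This binning procedure amounts to only a few lines of code. A sketch (array names illustrative; \texttt{T0}, \texttt{Ta}, and \texttt{nbar} are the time series described above):
\begin{verbatim}
import numpy as np

def emission_measure(T0, Ta, nbar, L, aspect=10.0):
    # Time-averaged EM(T): add n^2 (2L) to every bin inside [T0, Ta].
    logT = np.arange(4.0, 8.5, 0.01)
    em = np.zeros_like(logT)
    for t0, ta, n in zip(T0, Ta, nbar):        # loop over time steps
        inside = (10.0**logT >= t0) & (10.0**logT <= ta)
        em[inside] += n**2 * (2.0 * L)
    return logT, em / len(nbar) / aspect       # time average, aspect factor
\end{verbatim}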
%
\subsection{Single-fluid Parameter Variations}
\label{subsec:sf_par_var}
%
\begin{figure*}
\centering
%
\begin{minipage}{0.49\textwidth}
\subfigure{%
\includegraphics[width=\columnwidth]{results/{f1a}.eps}
\label{fig:sf_T_panel1}}
\end{minipage}
%
\begin{minipage}{0.49\textwidth}
\subfigure{%
\includegraphics[width=\columnwidth]{results/{f1b}.eps}
\label{fig:sf_em_panel3}}
\end{minipage}
%
\caption{\textbf{Left:} Temperature (upper panel) and density (lower panel) profiles for a loop with $2L=80$ Mm. Each heating profile is triangular in shape with a steady background heating of $H_{bg}=3.5\times10^{-5}$ ergs cm$^{-3}$ s$^{-1}$. The duration of the heating pulse is varied according to $\tau=20,40,200,500$ s, with each value of $\tau$ indicated by a different color, as shown in the right panel. The total energy injected into the loop is fixed at $10$ ergs cm$^{-3}$. Note that time is shown on a log scale to emphasize the behavior of the heating phase. \textbf{Right:} Corresponding $\mathrm{EM}(T)$ for each pulse duration $\tau$. The relevant parameters and associated colors are shown in the legend. $\mathrm{EM}(T)$ is calculated according to the procedure outlined in the beginning of \autoref{sec:results}. In all panels, the solid (dotted) lines show the corresponding EBTEL (HYDRAD) results (see \autoref{subsubsec:hydrad_comparison_sf}).}
\label{fig:sf_tnem}
\end{figure*}
\subsubsection{Varying Pulse Duration}
\label{subsubsec:pulse_res}
%
\par In the first set of results we assume the plasma behaves as a single fluid, use a flux limiter of $f=1$, and ignore ionization non-equilibrium. The solid curves in \autoref{fig:sf_tnem} show average temperature (upper left panel) and density (lower left panel) as a function of time for a single nanoflare in a loop with $2L = 80$ Mm where the EBTEL approach is used. The heating function takes the form of a triangular pulse for four different pulse durations, $\tau=$ 20, 40, 200, and 500 s, as indicated by the legend in the right panel. The peak heating rate is varied such that the total energy input is 10 ergs cm$^{-3}$ for all cases. These parameters correspond roughly to bright AR core loops \citep{warren_systematic_2012}. In order to ensure that the temperature and density do not become negative, a small background heating of magnitude $H_{bg}=3.5\times10^{-5}$ ergs cm$^{-3}$ s$^{-1}$ is enforced at all times. It can be seen that shorter pulses give higher temperatures, as expected. Furthermore, in this early heating phase, one would expect the maximum temperature to scale roughly as $H_0^{2/7}$ (where $H_0$ is the peak heating rate); this is approximately what is found. On the other hand, the different pulse durations give approximately the same maximum density, with the shortest pulse reaching its peak value roughly 200 s before the longest.
%
\par The solid lines in the right panel of \autoref{fig:sf_tnem} show the corresponding EBTEL emission measure distributions, $\mathrm{EM}(T)$. The temperature of maximum emission ($T_m$) and the peak emission measure ($\mathrm{EM}(T_m)$) are the same in all cases and are consistent with those found in the studies of AR core loops \citep[e.g.][]{warren_systematic_2012}. While shorter pulses lead to higher initial temperatures, the shape of the emission measure below $T_m$ is independent of the properties of the heating pulse, indicating that this part of the emission measure distribution cannot provide information about the actual nanoflare duration or intensity. All cases show evidence of the heating phase, namely the bump on $\mathrm{EM}(T)$ at $\log{(T)} =$ 6.85, 7, 7.2 and 7.3. Below these bumps to just above $T = T_m$, $\mathrm{EM}(T)$ scales as $T^{-5}-T^{-5.5}$ for all cases, again indicating that information about the heating process is lost at these temperatures. However, detection of emission above $T_m$ in a single structure would still be evidence for nanoflare heating, though of undetermined duration.
\par For integration over the lifetime of unresolved structures lying transverse to the line of sight, one can write down an expression $\mathrm{EM}(T) \sim n^2\tau_{cool}(n, T)$ which simply states that what matters for determining $\mathrm{EM}(T)$ is how long the plasma spends at any given temperature \citep[e.g.][]{cargill_implications_1994,cargill_nanoflare_2004}. For an analytic solution for the cooling, one can formally define $\tau_{cool}(n, T) = (T/(dT/dt))$. In the absence of a formal solution, order of magnitude scalings can be used: the difference with analytic solutions being a numerical factor. To obtain an expression $\mathrm{EM}(T)\propto T^{-b}$, one needs to provide a relation between $T$ and $n$. For conductive cooling of the corona, one can write $\tau_{cool} \sim nL^2T^{-5/2}$, giving $\mathrm{EM} \sim n^3L^2T^{-5/2}$. In determining the relationship between $T$ and $n$, two limits are those of constant density and constant pressure. The former gives static conductive cooling \citep[e.g.][]{antiochos_influence_1976} and the latter evaporative cooling with constant thermal energy \citep[e.g.][]{antiochos_evaporative_1978}, which then lead to $b = 5/2$ and $11/2$ respectively. Fitting the EBTEL $\mathrm{EM}(T)$ results for $\tau\le200$ s (see right panel of \autoref{fig:sf_tnem}) to $T^{-b}$ on $10^{6.8}<T<10^{7}$ K yields $b\sim4.5-5$ which are more consistent with the latter.
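\par In practice, the slopes quoted here and below are obtained from a straight-line fit in $\log$--$\log$ space, e.g. (a sketch operating on the arrays returned by the listing above):
\begin{verbatim}
import numpy as np

def em_slope(logT, em, lo=6.8, hi=7.0):
    # Fit EM(T) ~ T^-b over 10^lo < T < 10^hi and return b.
    m = (logT > lo) & (logT < hi) & (em > 0)
    return -np.polyfit(logT[m], np.log10(em[m]), 1)[0]
\end{verbatim}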
%
\subsubsection{HYDRAD Comparison}
\label{subsubsec:hydrad_comparison_sf}
%
\par We now compare EBTEL and HYDRAD results for the different values of $\tau$. The dotted lines in all three panels of \autoref{fig:sf_tnem} show the corresponding HYDRAD results, where averaging is over the upper 80\% of the loop. The background heating in the two codes has been adjusted to ensure that EBTEL and HYDRAD start with the same initial density since the initial temperature rise will depend on the assumed background density.
%
\par There is good agreement between the HYDRAD and EBTEL results for $\tau\ge200$ s with the well-documented result that EBTEL gives somewhat higher density maxima than HYDRAD \citep[see][]{cargill_enthalpy-based_2012}. For $\tau=20,40$ s, while the peak temperatures are at a level of agreement consistent with previous work \citep{cargill_enthalpy-based_2012}, there are notable differences in the initial temperature decay from the maximum in the upper left panel of \autoref{fig:sf_tnem} due to the difference in the initial density response.
%
\par It can be seen that the EBTEL density begins to rise almost immediately following the onset of heating, while there is a lag in the HYDRAD density. This is due to a delay in the upflow of material from the TR because a finite time is required to get material moving up the loop, an effect absent from 0D models. The slower density rise seen with HYDRAD leads to the faster conductive cooling. Another feature of the short pulses is the very spiky density profile as a function of time. This is a well-known effect, particularly in flare simulations, and is due to pairs of oppositely-directed flows colliding at the loop top, and subsequently bouncing back and forth.
%
\par As a result of this discrepancy in the density behavior, while the emission measure calculated from the EBTEL model ``sees'' temperatures well in excess of 10 MK for short pulses, in the HYDRAD model this will not be the case. This is evident from the short pulses in the right panel of \autoref{fig:sf_tnem}: the emission above 10 MK predicted by EBTEL is not present in the HYDRAD runs, the emission cutting off just above $10^7$ K. For the longer pulses, EBTEL still shows emission at higher temperatures, but the difference with HYDRAD is evident now over a smaller temperature range. Also, the characteristic bumps on the emission measure seen with EBTEL are largely eliminated in the HYDRAD runs.
%
\par This regime of short heating pulses was not considered in our earlier work using EBTEL, and the associated comparisons with field-aligned hydrodynamic codes \citep{klimchuk_highly_2008,cargill_enthalpy-based_2012}, where pulses of order 200 s or greater were considered. Other workers have used short pulses with EBTEL, albeit much less intense \citep{tajfirouze_euv_2016,tajfirouze_time-resolved_2016}. Clearly the more gentle the heating profile used, the slower the rise in the EBTEL density, leading to results closer to those found using HYDRAD. Thus it appears that caution is warranted in the use of approximate models for short, intense heating pulses. This restriction only applies to the high temperature regime: as can be seen from \autoref{fig:sf_tnem}, the emission measure profiles below $10^{6.8}$ are not affected. Nonetheless, the absence of emission near 10 MK for short pulses constitutes one of many obstacles to quantifying any hot plasma component due to nanoflares.
%
\subsubsection{Heat Flux Limiter}
\label{subsubsec:hf_res}
%
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{results/{f2}.eps}
\caption{$\mathrm{EM}(T)$ calculated from the single-fluid EBTEL model when only pure Spitzer conduction is used (turquoise, dashed) and when a flux limiter is imposed according to \autoref{subsec:hf_theory}. In the free-streaming limit, \textbf{five different values of $f$ are considered (see legend)}. The pulse duration is $\tau=200$ s. All other parameters are the same as those discussed in \autoref{subsubsec:pulse_res}. Note that here we only show $\mathrm{EM}(T)$ for $T>T_m$ \textbf{as the cool side of $\mathrm{EM}(T)$ is unaffected by our choice of $f$.}}
\label{fig:sf_hf_em}
\end{figure}
%
\par \autoref{fig:sf_hf_em} shows the effect of using a flux limiter versus Spitzer conduction on the emission measure distribution. Five different values of $f$ are shown: 1 \citep[blue,][consistent with HYDRAD]{bradshaw_influence_2013}, 0.53 \citep[green,][]{karpen_nonlocal_1987}, $1/6$ \citep[red,][]{patsourakos_coronal_2005}, 0.1 \citep[purple,][]{luciani_nonlocal_1983}, and $1/30$ (yellow). The pulse duration is 200 s and only the EBTEL results are shown. Note that for this pulse length, the HYDRAD results are expected to be similar.
\par As expected, inclusion of a limiter extends $\mathrm{EM}(T)$ to higher temperatures, though this is only notable above 10 MK. As the temperature falls to this value, evaporative upflows have increased the coronal density so that the Spitzer description is recovered. Above 10 MK flux limiting gradually becomes important, albeit with a small emission measure. The values $f=0.53,1$ yield $\mathrm{EM}(T)$ curves that are not discernibly different from that produced by pure Spitzer conduction, while $f=1/6,0.1$ extend $\mathrm{EM}(T)$ to significantly hotter temperatures. $f=1/30$, the most extreme flux limiter, yields emission well above $10^{7.5}$ K. Note that for all cases, $\mathrm{EM}(T)$ converges to the same value for $T\le10$ MK.
\par For flux-limited thermal conduction, $\tau_{cool} \sim LT^{-1/2}$ so that the parameter $b$ lies between 1/2 and 5/2, depending on the assumption about $n$. For $f = 1/30$, $b = 5/2$ is found in \autoref{fig:sf_hf_em} by fitting $\mathrm{EM}(T)$ to $T^{-b}$ on $10^7\le T\le10^{7.5}$ K. Since the free-streaming limit slows conduction cooling relative to that given by Spitzer, the plasma will spend more time at any given temperature, leading to smaller values of $b$. Similar conclusions hold for other conduction models \citep[e.g. the non-local model discussed in the coronal context by][]{karpen_nonlocal_1987,west_lifetime_2008} since they all inhibit conduction. While limiting of conduction is often regarded as an important process in coronal cooling, these results suggest that for nanoflare heating it may not be that important unless extreme values of the limiting parameter are used.
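\par The flux-limited scaling quoted above follows directly from dividing the coronal thermal energy content by the free-streaming flux of \autoref{subsec:hf_theory},
\begin{equation*}
\tau_{cool} \sim \frac{3nk_BTL}{F_s} \sim \frac{3nk_BTL}{(1/2)fnk_BTV_e} \propto \frac{L}{V_e} \propto LT^{-1/2},
\end{equation*}
since $V_e\propto T^{1/2}$.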
%
\subsection{Two-fluid Effects}
\label{subsec:two_fluid_res}
\subsubsection{Electron Heating}
\label{subsubsec:electron_heating}
%
\begin{figure*}
\centering
\begin{minipage}{0.49\textwidth}
\subfigure{%
\includegraphics[width=\columnwidth]{results/{f3a}.eps}
\label{fig:tfe_T_panel1}}
\end{minipage}
%
\begin{minipage}{0.49\textwidth}
\subfigure{%
\includegraphics[width=\columnwidth]{results/{f3b}.eps}
\label{fig:tfe_em_panel3}}
\end{minipage}
\caption{Two-fluid EBTEL simulations for $\tau=20,40,200,500$ s in which only the electrons are heated. \textbf{Left}: Electron temperature (upper panel), ion temperature (middle panel), and density (lower panel). \textbf{Right:} Corresponding $\mathrm{EM}(T)$ calculated according to \autoref{sec:results}. The pulse durations and associated colors for all panels are shown on the right. All parameters are the same as those discussed in \autoref{subsubsec:pulse_res}. In all panels, the solid (dotted) lines show the corresponding EBTEL (HYDRAD) results.}
\label{fig:tfe_tnem}
\end{figure*}
\par We now use our two-fluid model to consider the role of separate electron or ion heating, focusing on cases when only the electrons or ions are heated in order to highlight the essential difference between the two scenarios. Intermediate cases of energy distribution will be considered in subsequent papers. The solid lines in the left panels of \autoref{fig:tfe_tnem} show the electron temperature (upper panel), ion temperature (middle panel) and density (lower panel) as a function of time from the two-fluid EBTEL model for $\tau=20,40,200,500$ s for electron heating. The dotted lines show the corresponding HYDRAD results and are discussed in \autoref{subsubsec:hydrad_comparison_tf}. The electrons now cool by a combination of thermal conduction and temperature equilibration, the latter becoming significant at 150 (450) s for short (long) pulses. The ions thus heat rather slowly, reaching a peak temperature of 5 MK, which overshoots the electron temperature at that time. The ions then cool via ion thermal conduction and equilibration, with $T_e \approx T_i$ after typically a few hundred seconds.
%
\par The solid lines in the right panel of \autoref{fig:tfe_tnem} show the resulting $\mathrm{EM}(T)$. In the case of electron heating and $\tau<500$ s, the emission measure slope over the temperature interval $\log{T_m}<\log{T}<6.8$ is considerably steeper compared to the single-fluid case. Recall that in the single-fluid case we assume that conduction is the only relevant cooling mechanism prior to the onset of radiative cooling such that under the assumption of constant pressure, $\mathrm{EM}\propto T^{-11/2}$ (see \autoref{subsec:sf_par_var}). When we allow for electron-ion non-equilibrium and heat only the electrons, both of these assumptions break down. Following the onset of conductive cooling, $T_e\gg T_i$, but the loop has now begun to fill. The equilibration term plays the part of a cooling term so long as $T_e>T_i$ and is the dominant cooling mechanism for several hundred seconds in between the peak electron temperature and the peak density (see \autoref{fig:psi_tr_compare}). Thus, our expression for $\tau_{cool}$ should include some contribution from the equilibration term in this temperature regime.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{results/{f4}.eps}
\caption{Pressure (left axis, blue lines) and density (right axis, red lines) as a function of temperature for the $\tau=200$ s case. All parameters are the same as those discussed in \autoref{subsubsec:pulse_res}. The single-fluid pressure $p$ and density $n$ are denoted by the solid blue and red lines, respectively. The two-fluid total pressure, $p_e+p_i$, electron pressure, $p_e$, and ion pressure, $p_i$, are denoted by the dotted, dashed, and dot-dashed blue lines respectively. The two-fluid density is represented by the dashed red line. Pressure, density, and temperature are all shown on a log scale.}
\label{fig:pnt_state_space}
\end{figure}
\par\autoref{fig:pnt_state_space} shows pressure (blue lines) and density (red lines) as a function of temperature for the $\tau=200$ s case; both the single-fluid case and the case where only the electrons are heated are shown. While the total pressure $p_e+p_i$ (blue dotted line), like the single-fluid pressure $p$ (blue solid line), is constant over the interval $10^{6.65}<T<10^{6.8}$, the electron pressure $p_e$ (blue dashed line) is not, meaning $n\propto T_e^{-1}$ is not a valid scaling law in the two-fluid, electron-heating case. Comparing the two-fluid density (dashed red line) and the single-fluid density (solid red line) easily confirms this. To derive an emission measure slope for the case in which only the electrons are heated, these effects must be accounted for in the $\mathrm{EM}(T)\sim n^2\tau_{cool}(n,T)$ scaling. Thus, while a power-law $b$ may be calculated by fitting the hot part of the $\mathrm{EM}(T)$ to $T^{-b}$, it is difficult to gain any physical insight from such a fit using the scaling discussed in \autoref{subsubsec:pulse_res}.
%
\subsubsection{Ion Heating}
\label{subsubsec:ion_heating}
%
\begin{figure*}
\centering
\begin{minipage}{0.49\textwidth}
\subfigure{%
\includegraphics[width=\columnwidth]{results/{f5a}.eps}
\label{fig:tfi_T_panel1}}
\end{minipage}
%
\begin{minipage}{0.49\textwidth}
\subfigure{%
\includegraphics[width=\columnwidth]{results/{f5b}.eps}
\label{fig:tfi_em_panel3}}
\end{minipage}
\caption{Two-fluid EBTEL simulations for $\tau=20,40,200,500$ s in which only the ions are heated. \textbf{Left}: Electron temperature (upper panel), ion temperature (middle panel), and density (lower panel). \textbf{Right:} Corresponding $\mathrm{EM}(T)$ calculated according to \autoref{sec:results}. The pulse durations and associated colors for all panels are shown on the right. All parameters are the same as those discussed in \autoref{subsubsec:pulse_res}. In all panels, the solid (dotted) lines show the corresponding EBTEL (HYDRAD) results.}
\label{fig:tfi_tnem}
\end{figure*}
%
\par \autoref{fig:tfi_tnem} shows the electron temperature (upper left panel), ion temperature (middle left panel), density (lower left panel) and the corresponding emission measure (right panel) for $\tau=20,40,200,500$ s when only the ions are heated. The solid lines show the two-fluid EBTEL results while the dotted lines show the corresponding HYDRAD results (see \autoref{subsubsec:hydrad_comparison_tf}). Ion heating leads to significantly higher temperatures due to the relative weakness of ion thermal conduction, consistent with the expected enhancement of $(\kappa_{0,e}/\kappa_{0,i})^{2/7}$. The hot ions cool by a combination of weak ion thermal conduction and temperature equilibration. However, because the Coulomb coupling timescale during the early heating phase (when $T_i\gg T_e$ and the density is low) is much larger than the ion thermal conduction timescale, by the time the electrons can ``see'' the ions, they have cooled far below their peak temperature. The peak electron temperature in all cases lies below 10 MK. Because $\mathrm{EM}(T)$ is constructed from the electron temperature, the emission measure never sees $T\ge10^7$ K, with $\mathrm{EM}(T)$ being truncated sharply near $10^{6.9}$ K for all values of $\tau$.
%
\par The reason for slower equilibration for ion heating can be seen by comparing the density plots in the lower left panels of \autoref{fig:tfe_tnem} and \autoref{fig:tfi_tnem}. These show that while the peak values of the density are similar for both heating mechanisms, the temporal behavior differs for ion heating with short pulses: for these cases, the density takes considerably longer to reach the maximum value. This can be attributed to the relative weakness of ion thermal conduction. Examination of \autoref{eq:0d_mass_sub} and \autoref{eq:psi_TR} shows that an upward enthalpy flux can only be effective for ion heating once temperature equilibration has become significant and an electron heat flux is established. In turn, once the upflow begins, the coronal density increases, making equilibration more effective. Thus, once temperature equilibration starts to be effective, these processes combine to give a rapid increase in density, as shown.
%
\par In the case where the heating pulse duration is long, $\tau=500$ s, the difference between the two-fluid and single-fluid emission measure distributions is diminished. Because the electrons are heated slowly, they do not have much time to evolve out of equilibrium with the ions. This in turn heavily dampens the Coulomb exchange term, allowing the two populations to evolve together as a single fluid.
%
\subsubsection{HYDRAD Comparison}
\label{subsubsec:hydrad_comparison_tf}
%
\par The dotted lines in all panels of \autoref{fig:tfe_tnem} and \autoref{fig:tfi_tnem} show the corresponding HYDRAD results for both electron and ion heating, respectively. As in \autoref{subsubsec:hydrad_comparison_sf}, the averaging is done over the upper 80\% of the loop and the background heating has been adjusted appropriately. For $\tau\ge200$ s, we find acceptable agreement in $n$, $T_e$, and $\mathrm{EM}(T)$.
%
\par For $\tau=20,40$ s, the upper and lower panels of \autoref{fig:tfe_tnem} show discrepancies in $T_e$ and $n$ similar to those discussed in \autoref{subsubsec:hydrad_comparison_sf}. The initial decay from the peak electron temperature is noticeably different in the EBTEL runs compared to the corresponding HYDRAD runs, again due to the difference in the initial density response. The discrepancies in the density are exacerbated in the electron heating case (compared to the single-fluid case) since all of the energy is partitioned to the electrons, resulting in a stronger electron heat flux and a subsequently stronger upflow. The right panel of \autoref{fig:tfe_tnem} shows the effect of this premature rise in the density on $\mathrm{EM}(T)$ for these short pulses: while EBTEL predicts significant emission above 10 MK, the emission in the HYDRAD runs cuts off just below $10^{6.9}$ K.
%
\par In the ion heating case, we find acceptable agreement in $T_e$ and $\mathrm{EM}(T)$ despite similar discrepancies in $n$ for the shortest heating pulses, $\tau=20,40$ s. Because no heat is supplied to the electrons directly, the electron heating timescale is set by the Coulomb collision frequency (see \autoref{eq:col_freq}), meaning energy is deposited to the electrons over a timescale much longer than 20 or 40 s. The resulting slow evolution of $T_e$ leads to subsequently weaker upflows. Because of the much more gentle rise in density, the electrons are not able to ``see'' the ions until they have cooled well below 10 MK (see \autoref{subsubsec:ion_heating}).
%
\par In the middle panels on the left-hand side of \autoref{fig:tfe_tnem} and \autoref{fig:tfi_tnem}, the ion temperature in HYDRAD is greater than that of EBTEL by a factor of $\sim3-4$ in the late heating/early conductive cooling phase. These spikes in $T_i$ are due to steep velocity gradients that heat the ions through compressive heating and viscosity, two pieces of physics that are not included in EBTEL. Because ion thermal conduction is comparatively very weak, these sharp features in $T_i$ are not as efficiently smoothed out. While these differences in $T_i$ are more prominent when $\tau=20,40$ s, they still persist for $\tau\ge200$ s.
%
\subsection{Ionization Non-equilibrium}
\label{subsec:nei_res}
%
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{results/{f6}.eps}
\caption{$T_{eff}$ (red) for pulse durations of 20 s (left panel) and 500 s (right panel) for the single-fluid case (solid) as well as the cases where only the electrons (dashed) or only the ions (dot-dashed) are heated. $T(t)$ profiles (i.e. assuming ionization equilibrium) for $\tau=20$ s (blue lines) and $\tau=500$ s (purple lines) for all three heating scenarios are repeated here for comparison purposes.}
\label{fig:stf_Teff_compare}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{results/{f7}.eps}
\caption{$\mathrm{EM}(T_{eff})$ (red) for pulse durations of 20 s (left panel) and 500 s (right panel) for the single-fluid (solid), electron heating (dashed), and ion heating (dot-dashed) cases. $\mathrm{EM}(T)$ (i.e. assuming ionization equilibrium) for $\tau=20$ s (blue lines) and $\tau=500$ s (purple lines) for all three heating scenarios are repeated here for comparison purposes. Note that in both panels we only show $\mathrm{EM}(T)$ for $\log{T}>\log{T_M}$.}
\label{fig:stf_emeff_compare}
\end{figure*}
\par The final set of results includes our approximate treatment of non-equilibrium ionization, again using the EBTEL approach. The red curves in the left (right) panel of \autoref{fig:stf_Teff_compare} show $T_{eff}$ for $\tau=20\,(500)$ s for the single-fluid, electron heating, and ion heating cases. For comparison, equivalent results for $T$ (single-fluid) and $T_e$ (two-fluid) that assume ionization equilibrium are shown. For all cases, $T_{eff}$ never rises above 10 MK for the short pulse and 8 MK for the long pulse. Thus, for the short pulse, because a sufficiently long time is required to ionize the plasma, the hottest electron temperatures are never likely to be detectable. For the longer pulse, the slow heating gives the ionization states the opportunity to ``catch up''; thus $T_{eff}$ is a reasonable reflection of the actual plasma state.
%
\par The red curves in \autoref{fig:stf_emeff_compare} show the corresponding $\mathrm{EM}(T_{eff})$. The effect of ionization non-equilibrium is to truncate $\mathrm{EM}$ around or below 10 MK. The bump on the distribution characteristic of the heating phase is also relocated to lower temperatures. This confirms the earlier comment that, at least for short pulses, the hot electron plasma above 10 MK is undetectable. While the heating signature is shifted to smaller values of $T_{eff}$, one has no way of knowing the duration of the pulse that generates it. Thus it seems as if the temperature range $T_M<T<10$ MK is the optimal one for searching for this hot component as well as direct signatures of the heating. However, it is difficult to ``map'' what would be seen in such a state of ionization non-equilibrium back to the real system.
%
\section{Discussion}
\label{sec:discussion}
%
\par This paper has begun to address signatures of the so-called ``hot'' plasma component in the non-flaring corona, especially ARs, that is perceived as providing essential evidence for the existence of nanoflares. In this first paper in a series, we have used zero-dimensional and field-aligned single- and two-fluid modeling to examine the possible signatures of a single nanoflare occurring in a low-density plasma. This corresponds to the simplest case of so-called ``low frequency'' (LF) nanoflares, where a coronal loop is heated by many events with the same energy and with a time between events longer than the characteristic cooling time, such that the plasma is allowed to cool significantly before being re-energized.
%
\par When an approximate single-fluid model assuming ionization equilibrium is used, the expected signatures of conductive cooling appear in the distribution of plasma as a function of temperature, as described by the emission measure. In particular, short nanoflares with duration under 100 s should have a significant plasma component well above 10 MK, and longer duration events should have significant plasma between the temperature of the maximum emission measure and 10 MK. However, inclusion of several pieces of additional physics modifies this result considerably, in each case making it much less likely that any plasma above 10 MK can be detected.
%
\par For short nanoflares, the time taken for conductively-heated chromospheric plasma to move into the coronal part of a loop is sufficiently long that the initial hot coronal plasma cools rapidly, contributing little to the emission measure, such that, once the coronal density has increased, its temperature is below 10 MK. This effect is less important for long duration nanoflares. Consideration of separate electron and ion heating shows that, while electron heating leads to results similar to the single-fluid case, ion heating results in no emission measure at 10 MK, because the principal electron heating mechanism is a relatively slow collisional process. Finally, relaxing the assumption of ionization equilibrium leads to a truncation of the emission measure below 10 MK, since the time needed to create highly ionized states such as Fe XXI is longer than any relevant cooling time. In all cases the hot plasma, while still in the corona, is effectively ``dark''. In addition, characteristic structures in the emission measure profile that are a signature of the heating itself in simple models are all but eliminated.
%
\par These results suggest that while showing that such a ``hot'' plasma should exist in principle may not be difficult, characterizing the heating process from its observed properties may be a lot harder. Of course, we have limited ourselves to LF nanoflares here, and we showed \citep{cargill_active_2014} that the intermediate frequency nanoflare regime does have significant differences, in large part due to the range of densities at which the nanoflares occur. This will be addressed fully, along with other parameter variations, in \citetalias{barnes_inference_2016-1}, though it is difficult to see how a component hotter than 10 MK can be resurrected. Note though that the results of \citet{caspi_new_2015} pose a challenge for our scenario unless an undetected microflare or small flare occurred during the observations.
%
\par The observational aspects of this work will be addressed more fully in \citetalias{barnes_inference_2016-1}. However, one can conclude (i) that present-day observations do not seem capable of making quantitative statements about the ``hot'' component, though they are highly suggestive of its existence, and (ii) that future measurements should be concentrated in the temperature regime $10^{6.6}$--$10^{7}$ K rather than at higher temperatures. The MaGIXS instrument, due to fly in 2017, is well positioned to do this.
\acknowledgments
WTB was provided travel support to the Coronal Loops Workshop
VII held in Cambridge, UK, July 21-23, 2015, at which a preliminary version of this work was presented, by NSF award number 1536094. We thank the anonymous referee whose comments helped to improve the final draft of this paper.
\section{Introduction and Background}
Since the initial discovery of dark energy (DE) by \cite{Riess1998, Perlmutter1999}, observations of Type Ia supernovae (SNe Ia) have been integral in establishing the canonical $\Lambda$CDM cosmological model.
In the $\Lambda$CDM model, the present energy density of our flat universe is dominated by cosmologically constant DE ($\Lambda$) and non-relativistic, collisionless (`cold') dark matter (CDM).
Though some tensions between predictions and observations exist \citep{Weinberg2015, Verde2019}, this simple model has successfully predicted many cosmological and astrophysical signals \citep{Peter2012, Mortonson2013}.
However, despite the success of the $\Lambda$CDM model, the physical identities of $\Lambda$ and CDM remain unsettled.
Researchers continue to search for deviations from the predictions of $\Lambda$CDM in many data sets, including an ever-increasing archive of SNe Ia.
In particular, some non-canonical cosmological models, such as those discussed by \cite{Barenboim2005, Xia2005, Feng2006, Lazkoz2010, Wang2017}, predict that the true expansion of the universe might oscillate around the predictions of $\Lambda$CDM.
Respectively using the Gold \citep{Reiss2007}, Union \citep{Kowalski2008}, Constitution \citep{Hicken2009} and Pantheon \citep{Scolnic2018} data sets of SNe Ia, \cite{Jain2007}, \cite{Liu2009}, \cite{Lazkoz2010}, and \cite{Brownsberger2019} search for evidence of such oscillations in cosmic expansion. Though they utilize a diversity of data sets and statistical methods, those analyses universally report no evidence of oscillations in the rate of cosmic expansion.
Contrary to those previous findings, \cite{Ringermacher2015} and \cite{Ringermacher2020} (R15 and R20 henceforth) claim to identify damped oscillations in the universe's recent expansion history.
Combining data of radio galaxies and SNe Ia \citep{Conley2011, Daly2004, Reiss2004} into a `CDR' data set, R15 claim to detect cosmic oscillations in the universe's scale factor.
Using the Pantheon data set of type Ia supernovae, R20 build on R15 to claim a detection of an oscillating scale factor with a total statistical significance of at least $2\sigma$.
Such a detection of oscillatory cosmic expansion would mark an enormous paradigm shift in our understanding of the physics of the universe, changing the canonical model that has held since the first identification of DE.
The work of R20 is entirely worthwhile. Their results should be seriously considered and appropriately scrutinized.
Replicating the analysis method of R20 and applying it to simulated data, we find that there is an $11\%$ chance that the Pantheon data observed in a $\Lambda$CDM universe would produce a stronger oscillatory signal than that which R20 detect.
Our measurement does not include a statistical `trials factor' penalization for the various tunable parameters in the R20 analysis, and the significance of the detected oscillations is therefore less than our reported metric.
The oscillations noted by R20 are likely data analysis artifacts: the signature of a throughput function shaped by the uneven spacing of the Pantheon SNe in redshift and by the sequencing of the filtering and differentiation analysis steps.
In Section \ref{sec:replicate} below, we describe our replication of the R20 analysis. In Section \ref{sec:random}, we describe our generation of the artificial data and the assessment of the consistency of the real data with $\Lambda$CDM. We detail our conclusions in Section \ref{sec:conclusion}.
\section{Replicating the R20 Results} \label{sec:replicate}
In this Section, we describe our replication of the R20 analysis and the R20 results.
\subsection{Inferring Cosmic Time and Residual Scale Factor Derivative for SNe Ia} \label{sec:processing}
R20 search for oscillations by transforming the standard Hubble diagram (brightness vs. redshift) into plots of scale factor vs time. They claim that such a plot enables a model-independent study of the universe's expansion history.
The Pantheon data set of Type Ia supernovae consists of measured redshifts, $z_i$, distance moduli, $\mu_i$, and distance modulus uncertainties, $\sigma_{\mu, i}$. The subscripts identify each SN in order of increasing $z_i$. These are the directly measured quantities from which cosmologists infer cosmic expansion.
From these measured quantities, R20 note oscillations in non-standard inferred quantities: the normalized cosmic time since the end of inflation and the residual in the time derivative of the scale factor.
They infer measurements of cosmic time, which we denote $t_i$, by approximating an integral in luminosity distances with a discrete sum.
They approximate residual time derivatives in scale factor by integrating in luminosity distance to determine $t_i$, subtracting the predictions of $\Lambda$CDM to measure residuals, binning these residuals, taking a discrete derivative of these binned values, smoothing these binned derivatives with distinct smoothing kernels, and subtracting the two smoothings.
We make this series of operations explicit in our notation of the inferred residual time derivative scale factor, which we denote $\Delta G(d \overline{\Delta a_i}/dt)$. Here, the inner $\Delta$ indicates a residual, $\overline{(\cdot)}$ indicates binning, $d(\cdot)/dt$ indicates discrete time differentiation, and $\Delta G$ indicates the differences between two Gaussian smoothings.
Throughout the rest of this Section, we describe how we inferred $t_i$ and $\Delta G(d \overline{\Delta a_i}/dt)$ from $z_i$ and $\mu_i$.
We measured the cosmological scale factors, $a_i$, and luminosity distances, $d_{L, i}$, using the standard relations:
\begin{equation}
a_i = \frac{1}{1+ z_i} ,
\end{equation}
and
\begin{equation}
d_{L, i}= 10 ^ {(\mu_i - 25) / 5} \textrm{ Mpc}.
\end{equation}
The scaled luminosity distances, $Y_i$, were defined by
\begin{equation}
Y_i = a_i \frac{d_{L, i}}{D_H} ,
\end{equation}
where $D_H = c / H_0$, $c$ is the speed of light, and $H_0$ is the Hubble constant.
We determined the scaled $Y$ separations between measured SNe, $\Delta Y_i$, via the relation:
\begin{equation}
\Delta Y_i = Y_i - Y_{i-1} .
\end{equation}
We calculated the normalized cosmological times, $t_{i, raw}$, by approximating an integral over cosmic time via a discrete sum:
\begin{equation}
t_{i, raw} = 1 - \int_0^{z_i} a(t) dY \simeq 1 - \sum_{j=1}^{i} a_j \Delta Y_j .
\end{equation}
We corrected the raw cosmological times using the relation:
\begin{equation}
t_{i} = \alpha_{\textrm{Pan to CDR}} (t_{i, raw}- t_{corr}) ,
\end{equation}
where $t_i$ are the corrected cosmological times.
According to R20, $t_{corr}$ corrects for the fact that the first measurement of $t$ is at the first SN where $a$ is not equal to its present day value, and $\alpha_{\textrm{Pan to CDR}}$ is a scaling to match the Pantheon $t$ range to the $t$ range of the CDR data set studied in R15. Following R20, we used $t_{corr} = 0.009579$ and $\alpha_{\textrm{Pan to CDR}}=1.041$.
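These steps map directly onto a short numerical routine. The following sketch (our own illustration in Python/NumPy, not code released by R20; the function name, the assumed value of $H_0$, and the convention $Y_0 = 0$ at the observer are our choices) computes the corrected cosmic times from the measured redshifts and distance moduli:
\begin{lstlisting}[language=Python]
import numpy as np

def cosmic_times(z, mu, H0=70.0, t_corr=0.009579, alpha=1.041):
    """Infer normalized cosmic times t_i from redshifts z_i and
    distance moduli mu_i (sorted by increasing z), approximating
    the integral for t_{i,raw} by a discrete sum."""
    D_H = 2.998e5 / H0                 # Hubble distance [Mpc]
    a = 1.0 / (1.0 + z)                # scale factors a_i
    d_L = 10.0 ** ((mu - 25.0) / 5.0)  # luminosity distances [Mpc]
    Y = a * d_L / D_H                  # scaled luminosity distances
    dY = np.diff(Y, prepend=0.0)       # Delta Y_i, taking Y_0 = 0
    t_raw = 1.0 - np.cumsum(a * dY)    # discrete-sum cosmic times
    return alpha * (t_raw - t_corr)    # corrected times t_i
\end{lstlisting}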
We calculated the residual scale factors, $\Delta a_i$, by subtracting from the measured $a_i$ values the canonical values of $a_i$ determined from $t_{i}$:
\begin{equation}
\Delta a_i = a_i - a_{\Lambda CDM} (t_{i}) .
\end{equation}
We defined the canonical scale factor, $a_{\Lambda CDM}$, for a given cosmic time, $t$, by the integral relation
\begin{equation} \label{eq:aCanon}
t = 1 - \int_0^{1 / a_{\Lambda CDM} - 1} dz' \frac{1}{(1+z' )\sqrt{\Omega_M (1+ z') ^ 3 + \Omega_{\Lambda} }} .
\end{equation}
Copying R20, we set $\Omega_M = 0.27$ and $\Omega_{\Lambda} = 0.73$. We calculated $a_{\Lambda CDM} (t_{i})$ for each $t_{i}$ by interpolating over an array of $t$ values calculated at $N_{a, interp} = 1001$ $a_{\Lambda CDM}$ values evenly distributed over the physically relevant range of $a_{\Lambda CDM} \in [0, 1]$. With $N_{a, interp} = 1001$, our interpolated values of $a_{\Lambda CDM} (t_{i})$ converged to within $0.01\%$ of their true values.
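A numerical sketch of this tabulation and inversion (again our own; it assumes SciPy's \lstinline{quad} and \lstinline{interp1d} as stand-ins for the numerical integration and interpolation, and it imposes a small lower cutoff on $a_{\Lambda CDM}$ to avoid the singular integrand at $a=0$):
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import interp1d

def a_lcdm_of_t(Omega_M=0.27, Omega_L=0.73, N_interp=1001):
    """Tabulate t at N_interp values of a_LCDM via the integral
    relation above, then return the interpolated inverse a_LCDM(t)."""
    a_grid = np.linspace(1e-4, 1.0, N_interp)
    integrand = lambda zp: 1.0 / ((1.0 + zp) *
        np.sqrt(Omega_M * (1.0 + zp) ** 3 + Omega_L))
    t_grid = np.array([1.0 - quad(integrand, 0.0, 1.0 / a - 1.0)[0]
                       for a in a_grid])
    return interp1d(t_grid, a_grid)

# Residual scale factors relative to the canonical expansion:
# delta_a = a - a_lcdm_of_t()(t)
\end{lstlisting}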
To bin the inferred scale factor residuals, we divided the $t$ space ($0$ to $1$) into $N_{bin} = 128$ bins of equal size and calculated the mean $\Delta a_i$ value in each $t$ bin, $\overline{\Delta a_i}$ . We computed the wide baseline derivative of $\overline{\Delta a_i}$, $d\overline{\Delta a_i}/dt$, following Equation (1) of R20:
\begin{equation}
\frac{d{\overline{\Delta a_i}}}{dt} = \frac{\overline{\Delta a_{i+n/2}} - \overline{\Delta a_{i-n/2}}}{n \Delta t} .
\end{equation}
As in R20, $n = 8$ and $\Delta t = 1/128$. For the first [last] $n/2$ bins, the lower [upper] $\overline{\Delta a_{i}}$ value was taken from the first [last] bin, and the $n$ in the denominator was set equal to the number of bins over which the derivative was measured.
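A sketch of the binning and wide-baseline differentiation (our own implementation; leaving empty bins as NaN is our choice, since R20 do not specify how such bins are handled):
\begin{lstlisting}[language=Python]
import numpy as np

def binned_derivative(t, delta_a, N_bin=128, n=8):
    """Bin delta_a_i into N_bin equal t bins, then apply the
    wide-baseline derivative above, shortening the baseline at
    the edges as described in the text."""
    edges = np.linspace(0.0, 1.0, N_bin + 1)
    idx = np.clip(np.digitize(t, edges) - 1, 0, N_bin - 1)
    binned = np.array([delta_a[idx == b].mean() if np.any(idx == b)
                       else np.nan for b in range(N_bin)])
    dt = 1.0 / N_bin
    deriv = np.empty(N_bin)
    for i in range(N_bin):
        lo, hi = max(i - n // 2, 0), min(i + n // 2, N_bin - 1)
        deriv[i] = (binned[hi] - binned[lo]) / ((hi - lo) * dt)
    return deriv
\end{lstlisting}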
We smoothed the $d\overline{\Delta a_i}/dt$ values using a Gaussian kernel. We denote these smoothed derivatives as $G_k(d \overline{\Delta a_i}/dt)$ where the $k$ index denotes the width of the Gaussian kernel in $t$:
\begin{equation} \label{eq:smoothDef}
G_k(x) =\frac{ \sum_{j=0}^{N_{bin}} x\ G( \frac{t_i - t_j}{k } ) }{ \sum_{j=0}^{N_{bin}} G( \frac{t_i - t_j}{k } )} ,
\end{equation}
where
\begin{equation} \label{eq:GDef}
G (a) = \frac{1}{\sqrt{2 \pi} \times 0.37} e^{(- a^2 / (2 \times 0.37 ^ 2))} \ .
\end{equation}
We based Equations \ref{eq:smoothDef} and \ref{eq:GDef} on the definition of the \lstinline{ksmooth} function of the \lstinline{Mathcad} software, as that is the smoothing function used by R20. We believe the value of 0.37 is an approximation of one e-folding, $1/e$.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{RingermacherPlot.pdf}
\caption{The results of applying the analysis of R20 to the Pantheon data set of Type Ia supernovae \citep{Scolnic2018}. Starting at the top with the inferred scale factor, $\Delta a$ vs inferred cosmic time, $t_i$, each plot shows the data after one additional step of the analysis discussed in Section \ref{sec:processing}. The bottom plot, showing $\Delta G(d \overline{\Delta a_i}/dt)$ vs $t_i$, resembles the results in Figure 2 of R20 and displays oscillations.} \label{fig:trueDataPlot}
\end{figure}
Our final result, $\Delta G(d \overline{\Delta a_i}/dt)$, is the difference between $G_k(d \overline{\Delta a_i}/dt)$ with two kernels:
\begin{equation}
\Delta G(\frac{d \overline{\Delta a_i}}{dt}) = G_{k1}(\frac{d \overline{\Delta a_i}}{dt}) - G_{k2}(\frac{d \overline{\Delta a_i}}{dt}) \ .
\end{equation}
Here, as in R20, we set $k1 = 0.05$ and $k2 = 0.13$.
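The smoothing and difference-of-smoothings steps can be written compactly as follows (our own sketch of Eqs. \ref{eq:smoothDef} and \ref{eq:GDef}; the constant prefactor of Eq. \ref{eq:GDef} cancels in the normalized ratio and is omitted):
\begin{lstlisting}[language=Python]
import numpy as np

def ksmooth(x, t, k):
    """Normalized Gaussian smoothing mimicking Mathcad's ksmooth,
    with the 0.37 (~1/e) width convention used in the text."""
    w = np.exp(-((t[:, None] - t[None, :]) / k) ** 2
               / (2.0 * 0.37 ** 2))
    return (w * x[None, :]).sum(axis=1) / w.sum(axis=1)

def delta_G(x, t, k1=0.05, k2=0.13):
    """Difference of the two Gaussian smoothings."""
    return ksmooth(x, t, k1) - ksmooth(x, t, k2)
\end{lstlisting}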
We emphasize that $\Delta G(d \overline{\Delta a_i}/dt)$ is not a true measurement of the residual of the time derivative of the scale factor, $\Delta \dot{a}$. Rather, $\Delta G(d \overline{\Delta a_i}/dt)$ represents an attempt to infer $\Delta \dot{a}$ through a series of data processing steps. The relation between $\Delta G(d \overline{\Delta a_i}/dt)$ and $t_i$ describes the cosmic relation between $\Delta \dot{a}$ and $t$ viewed through a structured windowing function.
\subsection{Results from the True Pantheon Data}
In Figure \ref{fig:trueDataPlot}, we show the intermediate results of the data processing steps described in Section \ref{sec:processing}. To best replicate Figure 2 of R20, we included only those bins with $t \geq 0.46 $ and $t \leq 1 $. The bottom panel, showing our calculation of $\Delta G(d \overline{\Delta a_i}/dt)$ vs $t_{i}$, represents our best replication of Figure 2 in R20.
We find a damped oscillatory relation between $\Delta G(d \overline{\Delta a_i}/dt)$ and $t_{i}$.
The oscillation amplitude decreases in $t$, with the largest excursion of $|\Delta G(d \overline{\Delta a_i}/dt)| \simeq 0.12$ occurring around $t \simeq 0.5$.
We found that such oscillations in $\Delta \dot{a}$ with $t$ would correspond to similar oscillations in distance modulus residual, $\Delta \mu$, with redshift, $z$. We estimate that, for oscillations of the magnitude shown in Figure \ref{fig:trueDataPlot}, there would be an oscillatory signal in $\Delta \mu$ vs $z$ with a peak amplitude of about 10 millimags. Comparing this prediction to the constraints shown in Figure 6 of \cite{Brownsberger2019}, the Pantheon data are unable to rule out such a small oscillatory signal in $\Delta \mu$ vs $z$.
Having demonstrated the consistency of our analysis with that of R20, we next show how similar oscillatory signals can arise from the above analysis applied to Pantheon-like artificial data obtained in a canonical $\Lambda$CDM universe.
\section{Demonstrating how the Claimed Oscillatory Signal is Generic, and not an Indication of Cosmic Oscillations} \label{sec:random}
The final data shown in the lower plot of Figure \ref{fig:trueDataPlot} are \emph{not} direct measurements of the residual time derivative of the scale factor $\Delta \dot{a}$.
Rather, they represent an inference of $\Delta \dot{a}$ acquired through a series of operations.
Those operations conspire with the sampling of observed SNe in redshift to produce a complicated windowing function which itself carries structure.
In this section, we describe how we produced simulated Pantheon-like data sets and demonstrate that random data realizations distributed around the $\Lambda$CDM cosmology can generate the same sorts of oscillations identified in the bottom plot of Figure \ref{fig:trueDataPlot}.
\subsection{Generating Randomized Pantheon-Like Data}
We applied the same procedure discussed in Section \ref{sec:replicate} to randomized versions of the Pantheon data set.
For the $i^{th}$ Pantheon SN, we determined a randomized $\mu_i$ by drawing from a normal distribution with mean equal to the background $\Lambda$CDM predicted $\mu_i$ and standard deviation equal to the reported $\sigma_{\mu,i}$. The $z_i$ of each SN is well determined, and was left unchanged in the randomization. This preserves the window function in redshift. Each randomization produced a new set of 1048 $\mu_i$ values at the same $z_i$ positions.
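A sketch of this randomization (our own; \lstinline{mu_lcdm} denotes the $\Lambda$CDM-predicted distance moduli at the unchanged observed redshifts, computed separately):
\begin{lstlisting}[language=Python]
import numpy as np

def randomize_pantheon(mu_lcdm, sigma_mu, N_R=10000, seed=0):
    """Draw N_R artificial Pantheon-like data sets: each mu_i is
    normal, centered on the LCDM prediction at the unchanged z_i,
    with the reported per-SN uncertainty sigma_mu_i."""
    rng = np.random.default_rng(seed)
    return (mu_lcdm[None, :] + sigma_mu[None, :]
            * rng.standard_normal((N_R, mu_lcdm.size)))
\end{lstlisting}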
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth, angle = 0]{SampleRandomPlots_True_at_4_3.pdf}
\caption{Thirty-four plots of artificial Pantheon-like data sets and the plot of the real Pantheon data all processed using the analysis in Section \ref{sec:processing}. We identify which panel displays the true data at the end of this caption.
Because we randomized the distance moduli of these Pantheon-like data sets around $\Lambda$CDM, they are definitionally bereft of non-canonical cosmic structure. The oscillations exhibited in all but one of the above plots are nothing more than artifacts of the data processing. Because the oscillations of the true Pantheon data are not clearly distinct from the oscillations in the artificial Pantheon-like data, we argue that the oscillations identified by R20 could reasonably result from the Pantheon data observed in a canonical $\Lambda$CDM cosmology.
In this figure, the plot of the true Pantheon data is displayed in the fifth row of the fourth column.} \label{fig:randCanonPlot}
\end{figure}
Any fundamental cosmic oscillation buried in the true Pantheon data does not exist in these artificial Pantheon-like data sets.
By pushing each randomized data set through the same processing steps described in Section \ref{sec:replicate}, we generated a plot of $\Delta G(d \overline{\Delta a_i}/dt)$ vs $t_{i}$ for Pantheon-like data from which any non-$\Lambda$CDM cosmic structure (oscillatory or otherwise) has been removed. We repeated this randomization $N_{R} = 10^4$ times.
In Figure \ref{fig:randCanonPlot}, we show a representative subset of the $\Delta G(d \overline{\Delta a_i}/dt)$ vs $t_{i}$ plots of the randomized Pantheon-like data sets. Many such plots (those shown and not shown in Figure \ref{fig:randCanonPlot}) display oscillations similar in size and wavelength to the oscillations observed in Figure \ref{fig:trueDataPlot}. To underscore this point, we include the $\Delta G(d \overline{\Delta a_i}/dt)$ vs $t_{i}$ plot of the true Pantheon data amongst the plots of randomized Pantheon data in Figure \ref{fig:randCanonPlot}.
As an informal test of the real data's consistency with random fluctuations around $\Lambda$CDM, one can identify which of the plots in Figure \ref{fig:randCanonPlot} appears to show the strongest oscillatory signal and check if that plot depicts real or artificial data. We identify the real Pantheon data in the Figure's caption.
\subsection{The Frequency Spectra of Real and Artificial Pantheon Data}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth, angle = 0]{RingermacherFourierPlot.pdf}
\caption{The power spectrum of the true Pantheon data (black line) and the distribution of power spectra of the randomized Pantheon-like data. The $N_{R}=10^4$ randomized Pantheon-like data sets produce, at every frequency, a distribution of $N_{R}$ measurements of the power spectra that could result from random deviations around the $\Lambda$CDM cosmology. At each frequency, the noted percentage of randomizations lie below the labeled contour. For example, at each frequency, $50\%$ of randomizations have power below the green contour and $90\%$ of randomizations have power below the blue contour.} \label{fig:fourier}
\end{figure}
Examining the $\Delta G(d \overline{\Delta a_i}/dt)$ vs $t_{i}$ plots in Figure \ref{fig:randCanonPlot}, we cannot confidently distinguish the plot of the true Pantheon data from those of randomized data plots. The oscillations in the real data appear consistent with apparent oscillations that could result from random fluctuations around the canonical $\Lambda$CDM cosmology.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth, angle = 0]{RingermacherHistogramAtChosenFrequency.png}
\caption{The distribution of Fourier power at $f = 7.5 H$Hz measured in the artificial Pantheon-like data randomly distributed around $\Lambda$CDM and analyzed according to the analysis of Section \ref{sec:replicate}.
The power of the real Pantheon data at this frequency, subject to the same analysis, is shown with the vertical black line.
About $11\%$ of random data realizations have more Fourier power at this chosen frequency than the real Pantheon data.
The Pantheon data, analyzed according to the methodology described by R20, shows no statistically significant evidence of oscillations in the rate of cosmic expansion.} \label{fig:hist}
\end{figure}
To make this qualitative observation quantitative, we computed power spectra of $\Delta G(d \overline{\Delta a_i}/dt)$ in $t_{i}$. The data were binned in $t_{i}$ bins of equal size, so the series to be Fourier transformed was evenly spaced in time.
We measured the Power Spectrum, $P_f$, of $\Delta G(d \overline{\Delta a_i}/dt)$ in $t_{i}$ via a standard discrete Fourier transform:
\begin{equation} \label{eq:fourier}
P_f = \frac{1}{N_{bin}} \Big | \sum_{j=0}^{N_{bin}-1} \Delta G\Big(\frac{d \overline{\Delta a_j}}{dt}\Big) e^{-i 2 \pi f j / N_{bin}} \Big |^2 \ .
\end{equation}
To avoid aliased modes, we measured the Fourier power in frequencies, $f$, from $0 H$Hz to $N_{bin} /4 = 32 H$Hz. Replicating R20, 1 $H$Hz (`one Hubble Hertz') $ = 0.1023 h_{100} Gyr^{-1}$.
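A sketch of this transform (our own; the input is the evenly binned $\Delta G(d\overline{\Delta a_i}/dt)$ series, and the frequencies are integers in Hubble Hertz):
\begin{lstlisting}[language=Python]
import numpy as np

def power_spectrum(x):
    """Discrete Fourier power P_f of the evenly spaced series x,
    evaluated at integer frequencies f = 0 ... N_bin/4 to avoid
    aliased modes."""
    N = x.size
    f = np.arange(N // 4 + 1)
    phase = np.exp(-2.0j * np.pi * f[:, None]
                   * np.arange(N)[None, :] / N)
    return np.abs(phase @ x) ** 2 / N
\end{lstlisting}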
Using Equation \ref{eq:fourier}, we computed the Power Spectrum of $\Delta G(d \overline{\Delta a_i}/dt)$ in $t_{i}$ for the true Pantheon data set and for the $N_{R}$ artificial Pantheon-like data sets. We show the true Pantheon power spectrum and the distribution of artificial Pantheon-like power spectra in Figure \ref{fig:fourier}. At its peak, the power spectrum of the true Pantheon data (black line) lies below the $90\%$ contour of the artificial power spectra (blue shading).
R20 focus primarily on the frequency peak at $f = 7.5 H$Hz. In Figure \ref{fig:hist}, we display a histogram showing the Fourier power at $f = 7.5H$Hz of the $N_{R}$ Pantheon-like artificial data sets and show where the power of the real Pantheon data lies in this histogram (black line).
About $11\%$ of randomized Pantheon-like $\Lambda$CDM data sets have more power at the chosen frequency than the real Pantheon data.
\section{Conclusions}\label{sec:conclusion}
We replicated the analysis of the Pantheon data set of SNe Ia described by R20 and we found a similar result: the inferred $\dot{a}$ residuals oscillate in the inferred cosmic time. We show these results in Figure \ref{fig:trueDataPlot}.
We repeated this analysis on artificial Pantheon-like data sets with unchanged redshifts and with distance moduli randomly drawn from normal distributions centered at the canonical $\Lambda$CDM cosmology. By definition, this randomization erased any cosmic oscillation signature that exists in the true Pantheon data. Many plots of the R20 analysis applied to these randomized distributions (see Figure \ref{fig:randCanonPlot} for a representative subsample) display oscillations similar in amplitude and frequency to those identified in the real Pantheon data.
To make this qualitative observation quantitative, we measured the power spectra of the real Pantheon data and of the artificial Pantheon-like data sets. We showed these power spectra in Figure \ref{fig:fourier}. R20 focus on the power spectrum peak at $f = 7.5 H$Hz. In Figure \ref{fig:hist}, we showed the distribution of the artificial data sets' powers at this chosen frequency and where the true Pantheon data lies in this histogram.
About $11\%$ of the Pantheon-like $\Lambda$CDM data sets have more power at this chosen frequency than the real Pantheon data when analyzed according to the prescription of R20.
Our analysis used the same choice of tuned analysis parameters that R20 report, including the widths of the smoothed Gaussian kernels, the chosen frequency, and the number of time bins. A robust measurement of the statistical significance of this $\simeq 11\%$ effect would also include a statistical penalization for these adjustable analysis parameters.
There are potential sources of systematic error that neither we nor R20 consider. Particularly, the Pantheon data set is a combination of distinct supernova survey projects, each of which carries its own imperfectly characterized systematic errors. These inter-survey systematics inherit each individual survey's uneven distributions in redshift and on the sky.
If the oscillations noted by R20 appeared to be more than data analysis artifacts, we would analyze the signal's robustness against these inter-survey systematics.
There is at least a one-in-ten chance that statistical fluctuations around the canonical $\Lambda$CDM cosmology would conspire with the windowing function of the R20 data analysis to produce a larger oscillatory signal than that which R20 report. The apparent oscillatory signal is consistent with data processing artifacts that masquerade as an oscillating signal in a truly $\Lambda$CDM cosmology.
\section{Acknowledgments} We found the work of \cite{VanderPlas2018} particularly helpful in understanding the importance of being cautious about the potential impact of processing artifacts. SB and CS are supported by Harvard University and the US Department of Energy under grant DE-SC0007881. DS is supported by DOE grant DE-SC0010007 and the David and Lucile Packard Foundation. DS is supported in part by NASA under Contract No. NNG17PX03C issued through the WFIRST Science Investigation Teams Programme.
\newpage
\section{Steady-state properties}
The deterministic dynamics of the biochemical network in Figure \ref{Fig1} is captured by simple rate equations for the mean number of activated proteins. We can augment these equations to account for stochastic fluctuations within the linear-noise approximation by adding appropriate Langevin noise terms. In what follows, we assume that proteins are abundant and ignore saturation effects. The dynamics of the cellular circuit is described by a pair of Langevin equations for the probabilities $p_{\textrm{on}}$ and $p_{\textrm{off}}$ of the receptor to be in the on and off state, respectively, and the number of activated proteins $n$,
\bea
\frac{dp_{\textrm{on}}}{dt} &=& k_4^{\textrm{off}}(1-p_{\textrm{on}})-k_4^{\textrm{on}}p_{\textrm{on}} + \eta_{\textrm{r}}(t) \\
\frac{d n}{dt} &=& k_2^{\textrm{on}}p_{\textrm{on}} +k_2^{off}(1-p_{\textrm{on}})-k_1n+ \eta_{n}(t).
\label{LangevinEq}
\eea
The variance of the Langevin terms is given by the Poisson noise in each of the reactions
\bea
\<\eta_n(t)\eta_n(t^\prime)\> &=&( k_2^{\textrm{on}}\bar{p}_{\textrm{on}} +k_2^{off}(1-\bar{p}_{\textrm{on}})+k_1\bar{n})\delta(t-t^\prime)
\nonumber\\
\<\eta_r(t)\eta_r(t^\prime)\> &=&(k_4^{\textrm{off}}(1-\bar{p}_{\textrm{on}})+k_4^{\textrm{on}}\bar{p}_{\textrm{on}})\delta(t-t^\prime),
\eea
with $\delta(t-t^\prime)$ denoting the Dirac-delta function and barred quantities denoting the mean steady-state values \cite{detwiler2000engineering, mehta2008quantitative}.
At steady-state, we can calculate the mean probability and number of proteins by setting the time derivative in Eq. \ref{LangevinEq} equal to zero and ignoring noise terms, yielding
\be
\bar{p}_{\textrm{on}}=1-\bar{p}_{\textrm{off}}=\frac{K_4^{\textrm{off}}}{K_4^{\textrm{off}}+K_4^{\textrm{on}}}
\label{pon}
\ee
and
\be
\bar{n}= (K_2^{\textrm{on}}-K_2^{\textrm{off}}) \bar{p}_{\textrm{on}} +K_2^{\textrm{off}},
\label{nbar}
\ee
where we have defined the dimensionless parameters $K_j^s=k_j^s/k_1$ with $j=2,4$ and $s=\textrm{on,off}$. For the biologically realistic case $K_2^{\textrm{off}} \ll K_2^{\textrm{on}}p_{\textrm{on}}$, as expected, the mean number of proteins is proportional to the kinase activity in the on state times the probability of being in the on state, $\bar{n} \approx K_2^{\textrm{on}}p_{\textrm{on}}$. One can also calculate the variance in protein numbers (see Appendix)
\be
\< (\delta n)^2\> = \bar{n}+ (\Delta K_2^{on})^2\frac{\bar{p}_{on} \bar{p}_{off} }{1+K_4^{on}+K_4^{off}}.
\label{varN}
\ee
The first term on the right hand side of the equation arises from Poisson noise in the synthesis and degradation of activated protein, whereas the second term is due to stochastic fluctuations in the state of the receptors.
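Equation \ref{varN} can be checked against a direct stochastic simulation of the network. The following minimal Gillespie sketch (our own illustration, not part of the analytic development; the function name and defaults are our choices) returns the time-averaged mean and variance of $n$. Note that, in the convention of the rate equations above, $k_4^{\textrm{off}}$ is the off-to-on (binding) rate and $k_4^{\textrm{on}}$ the reverse:
\begin{verbatim}
import numpy as np

def gillespie_moments(k1, k2_on, k2_off, k4_on, k4_off,
                      T_max=1e4, seed=0):
    # Exact stochastic simulation of the two-state receptor plus
    # activated-protein circuit; accumulates time-weighted moments.
    rng = np.random.default_rng(seed)
    t, on, n = 0.0, True, 0
    s0 = s1 = s2 = 0.0        # total time, sum n*dt, sum n^2*dt
    while t < T_max:
        rates = np.array([k2_on if on else k2_off,  # activation
                          k1 * n,                   # deactivation
                          k4_on if on else k4_off]) # receptor flip
        dwell = rng.exponential(1.0 / rates.sum())
        s0 += dwell; s1 += n * dwell; s2 += n * n * dwell
        r = rng.choice(3, p=rates / rates.sum())
        if r == 0:   n += 1
        elif r == 1: n -= 1
        else:        on = not on
        t += dwell
    mean = s1 / s0
    return mean, s2 / s0 - mean * mean
\end{verbatim}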
In addition to the mean and variance, we will need the full, steady-state probability distributions for $n$ to calculate power consumption. The steady-state distribution can be calculated from the master equation for the probability, $p_s(n)$, of having $n$ active proteins with the receptor in a state $s$,
\begin{multline}
\frac{dp_s(n)}{dt} = k_1(n+1)p_s(n+1) + k_2^sp_s(n-1)\\
+ k_4^{\bar{s}}p_{\bar{s}}(n)- (k_1n+k_2^s+k_4^s)p_s(n)
\label{ME}
\end{multline}
with $\bar{s}= $ off (on) when $s=$ on (off). At steady-state, the left-hand side of Eq. \ref{ME} is zero and
\begin{multline}
K_4^{\bar{s}}p_{\bar{s}}(n)=- (n+1)p_{s}(n+1)-K_2^s p_s(n-1)\\
+(n+K_2^s+K_4^s)p_s(n).
\label{SSeq}
\end{multline}
This equation is similar to those found in \cite{iyer2009stochasticity, visco2009statistical} and can be solved via a generating function approach. Define a pair of generating functions,
\be
G_s(z)= \sum_{n=0}^\infty p_s(n) z^n,
\label{defG}
\ee
with $s=\textrm{on, off}$. We can rewrite (\ref{SSeq}) in terms of the generating functions as
\be
\left[ (z-1)\partial_z-K_2^s(z-1)+K_4^s\right]G_s(z)=K_4^{\bar{s}}G_{\bar{s}}(z).
\label{Geq1}
\ee
This equation must be supplemented by initial conditions for the $G_s(z)$. These follow from the observation that $G_{\textrm{on}}(1)=\bar{p}_{\textrm{on}}$ and $G_\textrm{off}(1)=\bar{p}_{\textrm{off}}=1-\bar{p}_{\textrm{on}}$ with $\bar{p}_{\textrm{on}}$ given by Eq. \ref{pon}. As shown in the Appendix, this equation can be solved exactly, yielding
\be
G_s(z)=\frac{K_4^{\bar{s}}e^{K_2^s(z-1)}}{K_4^s+K_4^{\bar{s}}} \, _1F_1(K_4^s;1+ K_4^s+K_4^{\bar{s}}; \Delta K_2^s(z-1))
\label{Gsz}
\ee
where $\Delta K_2^s=K_2^{\bar{s}}- K_2^s$ and $_1F_1(a;b; z)$ is the confluent hypergeometric function of the first kind. As a check on this expression, we can compute the variance of $n$ directly from Eq. \ref{Gsz}, and it is in agreement with (\ref{varN}) (see Appendix).
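Equation \ref{Gsz} can also be verified numerically by truncating the master equation, Eq. \ref{ME}, at a large protein number $N$ and solving for the null vector of the resulting rate matrix. A minimal sketch (our own; it assumes $N$ is chosen large enough that $p_s(N)$ is negligible):
\begin{verbatim}
import numpy as np

def steady_state(k1, k2_on, k2_off, k4_on, k4_off, N=400):
    # Truncated master equation: states indexed i = 2n + s with
    # s = 0 (on), 1 (off); M[j, i] is the rate from state i to j.
    M = np.zeros((2 * (N + 1), 2 * (N + 1)))
    for n in range(N + 1):
        for s, (k2, k4) in enumerate([(k2_on, k4_on),
                                      (k2_off, k4_off)]):
            i = 2 * n + s
            if n < N:                    # synthesis n -> n+1
                M[i + 2, i] += k2; M[i, i] -= k2
            if n > 0:                    # degradation n -> n-1
                M[i - 2, i] += k1 * n; M[i, i] -= k1 * n
            M[2 * n + (1 - s), i] += k4  # receptor flips out of s
            M[i, i] -= k4
    # Steady state spans the null space of M (smallest singular
    # value); normalize to a probability distribution.
    p = np.abs(np.linalg.svd(M)[2][-1])
    p /= p.sum()
    return p[0::2], p[1::2]              # p_on(n), p_off(n)
\end{verbatim}
Comparing the moments of the returned distributions against Eqs. \ref{nbar} and \ref{varN} provides a direct consistency check.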
Depending on the parameters, the steady-state distributions have two qualitatively distinct behaviors (see Fig. \ref{Figure2}). In the slow switching regime with $k_2^{\textrm{off}} \ll k_2^{\textrm{on}}$ and $k_4^{\textrm{on}}, k_4^{\textrm{off}} \ll k_1 $, receptors switch at rates much slower than the protein deactivation rate $k_1$. This gives rise to a bimodal distribution of activated proteins that can be roughly thought of as a superposition of the probability distributions of activated proteins when the receptor is in the on and off states. As $k_2^{\textrm{on}}$ approaches $k_2^{\textrm{off}}$, the distributions in the two states merge and the overall probability distribution becomes unimodal. On the other hand, in the fast switching regime, where $k_4^{\textrm{on}}, k_4^{\textrm{off}} \gg k_1$, the distribution of activated proteins is always unimodal. In this regime, the measurement time $T\propto k_1^{-1}$ is much longer than the average time a receptor is in the on or off state, and the biochemical network `time-averages' out the stochastic fluctuations in receptor states. In what follows, we restrict our considerations to this latter regime.
\begin{figure}
\includegraphics[scale=0.4]{Figure2-alt.eps}
\caption{ {\bf Top.} Slow switching regime, $k_1 \gg k_4^{\textrm{on}}, k_4^{\textrm{off}}$. Total probability (black solid line), probability when receptor is in the on state (blue dash-dot line), probability when receptor is in the off state (red dashed line). {\bf Middle.} Fast switching regime, $k_1 \ll k_4^{\textrm{on}}, k_4^{\textrm{off}}$. Total probability (black solid line), probability when receptor is in the on state (blue dash-dot line), probability when receptor is in the off state (red dashed line). {\bf Bottom.} The uncertainty in ligand concentration, $\left(\delta c_{rms}/\bar{c}\right)^2$, as a function of $k_1$ with mean number of active proteins $\bar{n}=25$ (dashed red line) and $\bar{n}=100$. This can be compared to the Berg-Purcell result (solid black line). Parameters: $k_2^{\textrm{off}}=0.01\,k_2^{\textrm{on}}$, $k_4^{\textrm{on}}= k_4^{\textrm{off}}=1$.}
\label{Figure2}
\end{figure}
\section{Quantifying learning}
The biochemical circuit in Figure \ref{Fig1} ``computes'' the external concentration of a chemical ligand by implementing a noisy version of Maximum Likelihood Estimation. As emphasized by Berg and Purcell in their seminal paper \cite{berg1977physics}, the chief obstacle in determining concentration is the stochastic fluctuations in the state of the ligand binding receptors. Berg and Purcell argued that a good measure of how much cells learn is the uncertainty in external concentration as measured by the variance of the estimated concentration $(\delta c)^2$. Berg and Purcell assumed that the cell computed the average receptor occupancy by time-averaging over a measurement time $T$, and showed that \cite{berg1977physics}
\be
\frac{(\delta c_{\textrm{BP}})^2}{c^2} =\frac{2}{k_4^{\textrm{on}}T\bar{p}_{\textrm{on}}}=2/N_{b},
\label{deltacBP}
\ee
with $k_4^{\textrm{off}}=k_+c$, $k_4^{\textrm{on}}=k_-$ independent of $c$, and $N_b$ the number of binding events during the time $T$.
It was later shown that cells could compute concentration more accurately by implementing Maximum Likelihood Estimation (MLE) with \cite{endres2009maximum, mora2010limits}
\be
\frac{(\delta c_{\textrm{ML}})^2}{c^2} =\frac{1}{2} \times \frac{(\delta c_{\textrm{BP}})^2}{c^2} .
\label{deltacMLE}
\ee
The decreased uncertainty results from the fact that MLE ignores noise due to unbinding of ligands from the cell.
To quantify learning in our biochemical circuit, we follow Berg and Purcell and estimate the fluctuations in $(\delta c)^2$ as
\be
\frac{(\delta c)^2}{c^2} =\left(c \frac{\partial \bar{n}}{\partial c}\right)^{-2} (\delta n)^2,
\label{crms}
\ee
with $(\delta n)^2 = \<n^2\>-\bar{n}^2$. Substituting $k_4^{off}=k_+ c$ and $k_4^{on}=k_-$ and computing the derivative using Eq. \ref{nbar} gives
\be
\left(c \frac{\partial \bar{n}}{\partial c}\right)^2= (\bar{p}_{on}\bar{p}_{off}\Delta K_2)^2.
\label{dercn}
\ee
Substituting Eq. \ref{nbar} and \ref{varN} into Eq. \ref{crms} yields
\be
\frac{(\delta c)^2}{c^2} = \frac{\bar{n}}{(\bar{p}_{on}\bar{p}_{off}\Delta K_2)^2}+ \frac{1}{(\bar{p}_{on} \bar{p}_{off}) (1+K_4^{on}+K_4^{off})}.
\label{crmsfinal}
\ee
Similar to the linear noise calculation, the first term on the right-hand side arises from the Poisson fluctuations in activated protein number whereas the second term comes from the stochastic fluctuations in the state of receptors. Figure \ref{Figure2} shows the uncertainty, ${(\delta c)^2}/{c^2}$, as a function of the degradation rate of activated protein, $k_1$ when $\bar{n}=25$ and $\bar{n}=100$ and $k_2^{\textrm{on}} \gg k_2^{\textrm{off}}$.
By identifying the degradation rate with the inverse measurement time, $k_1=2T^{-1}$, we can also compare the results with those of Berg and Purcell. The factor of two is due to the slight difference in how the variance of the average receptor occupancy is calculated for a biochemical network when compared to the original Berg-Purcell calculation \cite{mora2010limits}. As shown in Fig. \ref{Figure2}, when $\bar{n}$ is increased, the Poisson noise in protein production is suppressed and the performance of the cellular network approaches the Berg-Purcell limit. To make the connection with Berg-Purcell more explicit, it is helpful to rewrite Eq. \ref{crmsfinal} in terms of the average number of binding events, $N_b$, during $T$
\be
\frac{(\delta c)^2}{c^2} =\frac{ \bar{n}}{(\bar{n}-K_2^{off})^2p_{off}^2}+ \frac{2}{N_{b}}\left( 1-\frac{k_1}{k_4^{on}+k_4^{off}+k_1}\right).
\ee
When the measurement time is much longer than the timescale for fluctuations in the receptor state, $k_4^{\textrm{on,off}} \gg k_1$ (equivalently $K_4^{\textrm{on}}, K_4^{\textrm{off}} \gg 1$), and the average number of activated proteins is large, $\bar{n} \gg K_2^{off} \gg 1$, the expression above reduces to ${(\delta c)^2}/{c^2} \approx 2/N_{b}$, in agreement with Eq. \ref{deltacBP}.
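For concreteness, Eq. \ref{crmsfinal} can be evaluated and compared with the Berg-Purcell value as follows (our own sketch; we identify the measurement time as $T=2k_1^{-1}$, so that $N_b = T k_4^{\textrm{on}}\bar{p}_{\textrm{on}}$):
\begin{verbatim}
import numpy as np

def delta_c_squared(k1, k2_on, k2_off, k4_on, k4_off):
    # Fractional uncertainty (delta c / c)^2 of the circuit and
    # the Berg-Purcell value 2/N_b for the same parameters.
    K2on, K2off = k2_on / k1, k2_off / k1
    K4on, K4off = k4_on / k1, k4_off / k1
    p_on = K4off / (K4off + K4on)
    p_off = 1.0 - p_on
    nbar = (K2on - K2off) * p_on + K2off
    circuit = (nbar / (p_on * p_off * (K2on - K2off)) ** 2
               + 1.0 / (p_on * p_off * (1.0 + K4on + K4off)))
    N_b = (2.0 / k1) * k4_on * p_on  # binding events in T = 2/k1
    return circuit, 2.0 / N_b
\end{verbatim}
Scanning this function over $k_1$ reproduces the behavior shown in the bottom panel of Fig. \ref{Figure2}.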
\section{Power consumption and entropy production}
We now compute the energy consumed by the circuit in Figure \ref{Fig1} as a function of the kinetic parameters. To do so, we exploit the fact that dynamics of the circuit can be thought of as a nonequilbrium Markov process (see Fig. \ref{Figure3}). A nonequilibrium steady-state (NESS) necessarily implies the breaking of detailed balance in the underlying Markovian dynamics and therefore a non-zero entropy production rate. The entropy production rate is precisely the amount of power consumed by the biochemical circuit to maintain the nonequilibrium steady-state. Thus, by calculating the entropy production rate as function of kinetic parameters, we can calculate the power consumed by the biochemical network implementing the computation.
Consider a general Markov process with states labeled by $\sigma$ and transition rate from $\sigma$ to $\sigma^\prime$ given by $k(\sigma, \sigma^\prime)$. Defining the steady-state probability of being in state $\sigma$ by $P(\sigma)$, the entropy production rate for a NESS is given by \cite{lebowitz1999gallavotti},
\be
\frac{dS}{dt}= \sum_{\sigma, \sigma^\prime}P(\sigma) k(\sigma, \sigma^\prime) \log{\frac{k(\sigma, \sigma^\prime)}{k(\sigma^\prime, \sigma)}}.
\label{defA}
\ee
For the biochemical network described by Eq. \ref{ME}, this becomes,
\begin{widetext}
\be
\frac{dS}{dt} = \sum_{s=\textrm{on,off}, n} p_s(n) \left[ k_2^s\log{\frac{k_2^s}{k_1(n+1)}}
+ k_1n\log{\frac{k_1n}{k_2^s}}+k_4^s\log{\frac{k_4^s}{k_4^{\bar{s}} }} \right]
\label{EP1}
\ee
\end{widetext}
Since the receptors are in thermodynamic equilibrium, from detailed balance we know that
\be
\sum_{s,n} p_s(n) k_4^s \log{\frac{k_4^s}{k_4^{\bar{s}}}}= 0,
\ee
so that
\be
\frac{dS}{dt} = k_1 \sum_{s=on,off}\sum_n p_s(n) \left( K_2^s\log{\frac{K_2^s}{n+1}}- n \log{\frac{K_2^s}{n}} \right),
\label{AvgA}
\ee
where $K_2^s= k_2^s/k_1$. The steady-state distributions $p_s(n)$ follow from Eq. \ref{Gsz}. The physical content of this expression is summarized in Fig. \ref{Figure3}. The expression states that any non-zero cyclic flux must necessarily produce entropy. If it didn't, one would have a chemical version of a perpetual motion machine. Figures \ref{Figure3} and \ref{Figure4} show the power consumption as a function of $\Delta K_2=K_2^{\textrm{on}}-K_2^{\textrm{off}}$ and $k_1$. Notice that the power consumption tends to zero as both these parameters go to zero. We cannot, however, set $k_1=0$ identically, because a steady-state distribution then no longer exists.
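Given steady-state distributions $p_s(n)$ (for instance from the truncated master-equation solve sketched earlier), Eq. \ref{AvgA} reduces to a short sum. A minimal sketch (our own; the returned value is in units of $k_1$):
\begin{verbatim}
import numpy as np

def entropy_rate(k1, k2_on, k2_off, p_on, p_off):
    # Entropy production rate of the NESS, in units of k1, from
    # the steady-state distributions p_on(n) and p_off(n).
    n = np.arange(p_on.size, dtype=float)
    dS = 0.0
    for K2, p in [(k2_on / k1, p_on), (k2_off / k1, p_off)]:
        term = (K2 * np.log(K2 / (n + 1.0))
                - n * np.log(K2 / np.maximum(n, 1.0)))
        dS += np.sum(p * term)
    return dS
\end{verbatim}
Note that the $n=0$ term of the second logarithm vanishes identically, so the \verb|np.maximum| guard only avoids a spurious division by zero.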
\begin{figure}
\includegraphics[scale=0.4]{Figure3complete.eps}
\caption{ {\bf Top.} The probabilistic Markov process underlying the circuit in Fig. \ref{Fig1}. Any non-zero cyclic flux (depicted in red) results in entropy production and power consumption. {\bf Bottom.} Power consumption (solid black line) and uncertainty (dashed purple line) as a function of $\Delta k_2= k_2^\textrm{on}-k_2^\textrm{off}$ when $\bar{n}=25$, $k_4^{\textrm{on}}= k_4^{\textrm{off}}=1$. }
\label{Figure3}
\end{figure}
\section{Energetics, Information, and Landauer's Principle}
\begin{figure}[h]
\includegraphics[scale=0.4]{Fig4final.eps}
\caption{Total energy per independent measurement (Power $\times k_1^{-1}$) as a function of $k_1$ when $\bar{n}=25$. (Inset) Power as a function of $k_1$.}
\label{Figure4}
\end{figure}
We now highlight the fundamental connection between the energy consumed by the network and the information the network learns about the environment, and briefly discuss its relation to Landauer's principle. First, note that learning information about the environment requires energy consumption by the network. This can be seen in Fig. \ref{Figure3} which shows that as $\Delta k_2 \rightarrow 0$, the uncertainty about the concentration tends to infinity. This can be made concrete by noting that the entropy production (Eq. \ref{AvgA}) is zero if and only if $\Delta k_2=0$ (see Appendix). In conjunction with Eq. \ref{crmsfinal}, which diverges as $\Delta k_2 \rightarrow 0$, this implies that learning requires consuming energy. Physically, in the limit where $\Delta k_2=0$, the dynamics of the Markov process in Fig. \ref{Figure3} become ``one-dimensional" and the dynamics obey detailed balance. At the same time, in this limit, the number of downstream proteins becomes insensitive to external ligand concentrations, since all information about concentration is contained in the relative probabilities of being in the on or off state.
Second, as shown in Fig. \ref{Figure4}, the power consumption of the circuit tends to zero as $k_1 \rightarrow 0$. This is consistent with Landauer's principle: entropy production stems from erasing memory in a computing device. The number of activated proteins serves as a memory of ligand concentration which is erased at a rate $k_1$. Thus, as the erasure rate of the memory tends to zero, the device consumes less energy, as expected. Yet despite the fact that the power consumption tends to zero as $k_1$ decreases, the total energy consumed per measurement, namely the power times the measurement time, $T\simeq 2k_1^{-1}$, still increases (see Fig. \ref{Figure4}). Thus, learning more requires consuming more total energy despite the fact that power consumption is decreasing. In effect, one is approaching the reversible computing limit where memory is erased very infrequently. Note, however, that when erasure is performed infinitely slowly, $k_1=0$, the system no longer has a NESS and our formalism does not apply.
Finally, we note that one of the important open problems in our understanding are the constraints placed on the measurement time $T$. In principle, cells can always learn more by measuring the environment for longer periods of time. However, in practice, these measurement times tend to be quite short. There are a number of constraints that can set this measurement time including rotational diffusion \cite{berg1977physics} and the restrictions placed on motility. Here, we highlight another restriction that may be important in resource-starved environments: sensing external concentration necessarily requires cells to consume energy.
\section{Discussion and Conclusion}
Cells often perform computations using elaborate biochemical networks that respond to environmental cues. One of the most common simple networks found in bacteria are two-component networks where a receptor phosphorylates a downstream response regulator \cite{laub2007specificity}. In this work, we have shown that these simple two-component networks can implement a noisy version of the Berg-Purcell strategy to compute the concentration of external ligands. Furthermore, by mapping the dynamics of the biochemical network to Nonequilibrium Steady-States in Markov processes, we explicitly derived expressions for the power consumed by the network and showed that learning requires energy consumption. Taken together, these calculations suggest that, much like man-made and neural computing \cite{laughlin2001energy, laughlin1998metabolic, balasubramanian2001metabolically, bennett1982thermodynamics}, energetic considerations may place important constraints on the design of biochemical networks that implement cellular computations. They also suggest a fundamental relationship between the efficiency of cellular computing and the energy consumption.
Bacterial cells such as {\it Bacillus subtilis} can sporulate during times of environmental stress and remain metabolically dormant for many years. While sporulation is relatively well understood, the reverse process of germination is much more difficult to study. One current model for how a spore knows when to germinate in response to external cues involves integrating the signal and triggering commitment when an accumulation threshold is reached \cite{yi2011synergism, indest2009workshop}. This corresponds to the limit of vanishingly small $k_1$ in our model, so that power consumption is minimized at the expense of retaining the entire integrated signal. Our results indicate that this behavior may be due to the extreme energetic constraints imposed on a metabolically dormant spore, rather than an evolutionarily optimized strategy.
An important insight of this work is that even a simple Berg-Purcell strategy for sensing external concentrations requires the consumption of energy. It is likely that more complicated strategies that increase how much cells learn, such as Maximum Likelihood, require additional energetic inputs. For example, it was argued in \cite{mora2010limits} that MLE can be implemented by a network similar to the perfect adaptation network where bursts are produced in response to binding events. These bursts break detailed balance and therefore require energy consumption. It will be interesting to investigate further how the trade-off between learning and energy consumption manifests itself in the design
of computational strategies employed by cells.
In this work, we restricted ourselves to the simple case where cells calculate the steady-state concentration of an external signal. In the future, it will be useful to generalize this to other computations such as responding to temporal ramps \cite{mora2010limits} and spatial gradients \cite{endres2008accuracy, hu2010physical}. It will also be interesting to understand how to generalize the considerations here to arbitrary biochemical networks. An important restriction on our work is that we reduced our considerations to nonequilibrium steady-states. It will be interesting to ask how to generalize the work here to biochemical networks with a strong temporal component.
\section{Acknowledgements}
PM and DJS would like to thank the Aspen Center for Physics, where this work was initiated. We are especially grateful to Thierry Mora for clarifying the relationship between the rate $k_1$ and the average integration time $T$. This work was partially supported by NIH Grants K25GM086909 (to P.M.). DS was partially supported by DARPA grant HR0011-05-1-0057 and NSF grant PHY-0957573.